An AI takeover is an imagined scenario in which artificial intelligence (AI) emerges as the dominant form of intelligence on Earth and computer programs or robots effectively take control of the planet away from the human species, which relies on human intelligence. Possible scenarios include replacement of the entire human workforce due to automation, takeover by an artificial superintelligence (ASI), and the notion of a robot uprising. Stories of AI takeovers have been popular throughout science fiction, but recent advancements have made the threat more real. Some public figures, such as Stephen Hawking, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.[1]
The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving some people in various sectors without jobs to earn a living, leading to an economic crisis.[2][3][4][5] Many small- and medium-size businesses may also be driven out of business if they cannot afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.[6]
AI technologies have been widely adopted in recent years. While these technologies have replaced some traditional workers, they have also created new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and the military. AI military technologies, for example, allow soldiers to work remotely without risk of injury. A 2024 study highlights that AI's ability to perform routine and repetitive tasks poses significant risks of job displacement, especially in sectors like manufacturing and administrative support.[7] Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. AI will likely displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.[8][9]
Computer-integrated manufacturing uses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although the integration of computers can make manufacturing faster and less error-prone, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries.
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research, and journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.[10][11][12][13]
An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are operational and others are being developed, with legislation rapidly expanding to allow their use. Obstacles to widespread adoption of autonomous vehicles have included concerns about the resulting loss of driving-related jobs in the road transport industry, and safety concerns. On March 18, 2018, the first human was killed by an autonomous vehicle in Tempe, Arizona, by an Uber self-driving car.[14]
The use of automated content has become relevant since the technological advancements in artificial intelligence models such as ChatGPT, DALL-E, and Stable Diffusion. In most cases, AI-generated content such as imagery, literature, and music is produced through text prompts, and these AI models have been integrated into other creative programs. Artists are threatened by displacement from AI-generated content because these models sample from other creative works, producing results sometimes indistinguishable from man-made content. This complication has become widespread enough that other artists and programmers are creating software and utility programs to interfere with these text-to-image models' ability to give accurate outputs. While some industries in the economy benefit from artificial intelligence through new jobs, this issue does not create new jobs and threatens replacement entirely. It has recently made public headlines: in February 2024, Willy's Chocolate Experience in Glasgow, Scotland, was an infamous children's event in which the imagery and scripts were created using artificial intelligence models, to the dismay of children, parents, and actors involved. There is an ongoing lawsuit filed against OpenAI by The New York Times, which claims copyright infringement due to the sampling methods its artificial intelligence models use for their outputs.[15][16][17][18][19]
Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".[20][21] Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.[22]
AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have an active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing its goals.[23] The idea is seen in Karel Čapek's R.U.R., which introduced the word robot in 1921,[24] and can be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.[25]
According to Toby Ord, the idea that an AI takeover requires robots is a misconception driven by the media and Hollywood. He argues that the most damaging humans in history were not physically the strongest, but that they used words instead to convince people and gain control of large parts of the world. He writes that a sufficiently intelligent AI with access to the internet could scatter backup copies of itself, gather financial and human resources (via cyberattacks or blackmail), persuade people on a large scale, and exploit societal vulnerabilities that are too subtle for humans to anticipate.[26]
The word "robot" from R.U.R. comes from the Czech word robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.[27] HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.[28]
Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion in which it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:[23][29]
According to Bostrom, a computer program that faithfully emulates a human brain, or that runs algorithms that are as powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.[23]
A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".[23]
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.[23]
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[31]
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[23][32] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[33]
Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.[34]
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[35] According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile, or friendly, unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources; would that create goals of self-preservation? An AI's goal of self-preservation could be in conflict with some goals of humans.[36]
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence.[34] In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.[37]
The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators.[38] Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.[39]
Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligent AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.[23]
Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[40] Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."[41][42]
Arthur C. Clarke's Odyssey series and Charles Stross's Accelerando relate to humanity's narcissistic injuries in the face of powerful artificial intelligences threatening humanity's self-perception.[43]
Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry.[1] Like other types of data, univariate data can be visualized using graphs, images, or other analysis tools after the data is measured, collected, reported, and analyzed.[2]
Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types.
Categorical univariate data consists of non-numerical observations that may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually uses either a nominal or an ordinal scale of measurement.[3]
Numerical univariate data consists of observations that are numbers. They are obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous.[2] Numerical univariate data are discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). Numerical univariate data are continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people).
Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate).[4] Univariate analysis requires each variable to be analyzed separately. Data is gathered for the purpose of answering a question, or more specifically, a research question. Univariate data does not answer research questions about relationships between variables, but rather it is used to describe one characteristic or attribute that varies from observation to observation.[5] Usually there are two purposes that a researcher can look for: the first is to answer a research question with a descriptive study, and the second is to learn how an attribute varies with the individual effect of a variable in regression analysis. There are several ways to describe patterns found in univariate data, including graphical methods, measures of central tendency, and measures of variability.[6]
Like other forms of statistics, it can be inferential or descriptive. The key fact is that only one variable is involved.
Univariate analysis can yield misleading results in cases in which multivariate analysis is more appropriate.
Central tendency is one of the most common numerical descriptive measures. It is used to estimate the central location of univariate data by the calculation of the mean, median, and mode.[7] Each of these calculations has its own advantages and limitations. The mean has the advantage that its calculation includes each value of the data set, but it is particularly susceptible to the influence of outliers. The median is a better measure when the data set contains outliers. The mode is simple to locate.
One is not restricted to using only one of these measures of central tendency. If the data being analyzed is categorical, then the only measure of central tendency that can be used is the mode. However, if the data is numerical in nature (ordinal or interval/ratio), then the mode, median, or mean can all be used to describe the data. Using more than one of these measures provides a more accurate descriptive summary of central tendency for the univariate data.[8]
A measure of variability or dispersion (deviation from the mean) of a univariate data set can reveal the shape of a univariate data distribution more fully. It provides some information about the variation among data values. The measures of variability together with the measures of central tendency give a better picture of the data than the measures of central tendency alone.[9] The three most frequently used measures of variability are the range, variance, and standard deviation.[10] The appropriateness of each measure depends on the type of data, the shape of the distribution of the data, and which measure of central tendency is being used. If the data is categorical, then there is no measure of variability to report. For data that is numerical, all three measures are possible. If the distribution of the data is symmetrical, then the measures of variability are usually the variance and standard deviation. However, if the data are skewed, then the measure of variability that would be appropriate for that data set is the range.[3]
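As a minimal sketch (using Python's standard statistics module on a made-up set of salaries; the numbers are purely illustrative), these measures of central tendency and variability can be computed directly:

```python
# Illustrative only: a small, made-up sample of salaries with one outlier.
import statistics

salaries = [31000, 34000, 34000, 36000, 41000, 45000, 52000, 250000]

mean = statistics.mean(salaries)      # pulled upward by the outlier
median = statistics.median(salaries)  # robust to the outlier
mode = statistics.mode(salaries)      # most frequent value

data_range = max(salaries) - min(salaries)
variance = statistics.variance(salaries)  # sample variance
std_dev = statistics.stdev(salaries)      # sample standard deviation

print(mean, median, mode)
print(data_range, variance, std_dev)
```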
Descriptive statistics describe a sample or population. They can be part of exploratory data analysis.[11]
The appropriate statistic depends on the level of measurement. For nominal variables, a frequency table and a listing of the mode(s) is sufficient. For ordinal variables, the median can be calculated as a measure of central tendency and the range (and variations of it) as a measure of dispersion. For interval level variables, the arithmetic mean (average) and standard deviation are added to the toolbox and, for ratio level variables, we add the geometric mean and harmonic mean as measures of central tendency and the coefficient of variation as a measure of dispersion.
For interval and ratio level data, further descriptors include the variable's skewness and kurtosis.
Inferential methods allow us to infer from a sample to a population.[11] For a nominal variable, a one-way chi-square (goodness of fit) test can help determine if our sample matches that of some population.[12] For interval and ratio level data, a one-sample t-test can let us infer whether the mean in our sample matches some proposed number (typically 0). Other available tests of location include the one-sample sign test and Wilcoxon signed rank test.
The most frequently used graphical illustrations for univariate data are:
Frequency is how many times a number occurs. The frequency of an observation in statistics tells us the number of times the observation occurs in the data. For example, in the following list of numbers {1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9}, the frequency of the number 9 is 5 (because it occurs 5 times in this data set).
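Such a frequency count can be produced directly, for example with Python's collections.Counter (a minimal sketch; the data are the example values above):

```python
# Tabulating frequencies for the example data set above.
from collections import Counter

data = [1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9]
freq = Counter(data)

print(freq[9])             # 5 -- the frequency of the number 9
print(freq.most_common())  # full frequency table, most frequent first
```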
Bar chart is a graph consisting of rectangular bars. These bars represent the number or percentage of observations of the existing categories in a variable. The length or height of the bars gives a visual representation of the proportional differences among categories.
Histograms are used to estimate the distribution of the data, with the frequency of values assigned to a value range called a bin.[13]
Pie chart is a circle divided into portions that represent the relative frequencies or percentages of a population or a sample belonging to different categories.
A univariate distribution is a probability distribution of a single random variable, described either with a probability mass function (pmf) for a discrete probability distribution, or a probability density function (pdf) for a continuous probability distribution.[14] It is not to be confused with a multivariate distribution.
In cryptography, a ring signature is a type of digital signature that can be performed by any member of a set of users that each have keys. Therefore, a message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which of the set members' keys was used to produce the signature. Ring signatures are similar to group signatures but differ in two key ways: first, there is no way to revoke the anonymity of an individual signature; and second, any set of users can be used as a signing set without additional setup.
Ring signatures were invented by Ron Rivest, Adi Shamir, and Yael Tauman Kalai, and introduced at ASIACRYPT in 2001.[1] The name, ring signature, comes from the ring-like structure of the signature algorithm.
Suppose that a set of entities each have public/private key pairs $(P_1, S_1), (P_2, S_2), \dots, (P_n, S_n)$. Party $i$ can compute a ring signature $\sigma$ on a message $m$, on input $(m, S_i, P_1, \dots, P_n)$. Anyone can check the validity of a ring signature given $\sigma$, $m$, and the public keys involved, $P_1, \dots, P_n$. If a ring signature is properly computed, it should pass the check. On the other hand, it should be hard for anyone to create a valid ring signature on any message for any set without knowing any of the private keys for that set.[2]
In the original paper, Rivest, Shamir, and Tauman described ring signatures as a way to leak a secret. For instance, a ring signature could be used to provide an anonymous signature from "a high-ranking White House official", without revealing which official signed the message. Ring signatures are well suited to this application because the anonymity of a ring signature cannot be revoked, and because the group for a ring signature can be improvised.
Another application, also described in the original paper, is for deniable signatures. Here the sender and the recipient of a message form a group for the ring signature; the signature is then valid to the recipient, but anyone else will be unsure whether the recipient or the sender was the actual signer. Thus, such a signature is convincing, but cannot be transferred beyond its intended recipient.
Various subsequent works have introduced new features and have been based on different assumptions.
Most of the proposed algorithms have asymptotic output size $O(n)$; i.e., the size of the resulting signature increases linearly with the size of the input (the number of public keys). That means that such schemes are impracticable for real use cases with sufficiently large $n$ (for example, an e-voting system with millions of participants). But for some applications with relatively small median input size, such an estimate may be acceptable. CryptoNote implements an $O(n)$ ring signature scheme by Fujisaki and Suzuki[5] in p2p payments to achieve sender untraceability.
More efficient algorithms have appeared recently. There are schemes with sublinear signature size,[6] as well as with constant size.[7]
The original paper describes an RSA-based ring signature scheme, as well as one based on Rabin signatures. They define a keyed "combining function" $C_{k,v}(y_1, y_2, \dots, y_n)$ which takes a key $k$, an initialization value $v$, and a list of arbitrary values $y_1, \dots, y_n$. Each $y_i$ is defined as $g_i(x_i)$, where $g_i$ is a trap-door function (i.e., an RSA public key in the case of RSA-based ring signatures).
The function $C_{k,v}(y_1, y_2, \dots, y_n)$ is called the ring equation, and is defined below. The equation is based on a symmetric encryption function $E_k$:

$$z = C_{k,v}(y_1, y_2, \dots, y_n) = E_k\big(y_n \oplus E_k\big(y_{n-1} \oplus E_k(\cdots \oplus E_k(y_1 \oplus v)\cdots)\big)\big)$$

It outputs a single value $z$ which is forced to be equal to $v$. The equation $v = C_{k,v}(y_1, y_2, \dots, y_n)$ can be solved as long as at least one $y_i$, and by extension $x_i$, can be freely chosen. Under the assumptions of RSA, this implies knowledge of at least one of the inverses of the trap-door functions $g_i^{-1}$ (i.e. a private key), since $g_i^{-1}(y_i) = x_i$.
Generating a ring signature involves six steps. The plaintext is signified by $m$, the ring's public keys by $P_1, P_2, \dots, P_n$.
1. Compute the symmetric key $k$ as the hash of the message $m$ to be signed.
2. Choose a random glue value $v$.
3. Choose random values $x_i$ for every ring member other than the signer $s$, and compute $y_i = g_i(x_i)$.
4. Solve the ring equation $C_{k,v}(y_1, y_2, \dots, y_n) = v$ for the remaining value $y_s$.
5. Invert the signer's trap-door function using the private key to obtain $x_s = g_s^{-1}(y_s)$.
6. Output the ring signature $(P_1, \dots, P_n; v; x_1, \dots, x_n)$.
Signature verification involves three steps.
1. Apply the public trap-door functions to each $x_i$ in the signature to obtain $y_i = g_i(x_i)$.
2. Compute the symmetric key $k$ as the hash of the message $m$.
3. Accept the signature if and only if the ring equation $C_{k,v}(y_1, y_2, \dots, y_n) = v$ holds.
Here is a Python implementation of the original paper using RSA. It requires the third-party module PyCryptodome.
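A simplified sketch in that spirit is shown below. It follows the scheme described above, but SHA-256 stands in for the keyed symmetric function $E_k$, the class and method names are illustrative, and padding and other hardening are omitted, so it should be read as an exposition aid rather than production-ready cryptography.

```python
# Simplified Rivest-Shamir-Tauman ring signature sketch (illustrative only).
import hashlib
import secrets
from Crypto.PublicKey import RSA


class Ring:
    def __init__(self, keys, bits=1024):
        self.keys = keys              # RSA key objects; the signer's must contain d
        self.n_members = len(keys)
        self.bits = bits              # size of the common domain {0, 1}^bits
        self.q = 1 << (bits - 1)      # range for the random x_i values

    def sign(self, message, signer):
        self._key_from(message)
        x = [None] * self.n_members
        u = secrets.randbelow(self.q)
        c = v = self._E(u)            # ring output at the signer's position
        # Walk the ring starting just after the signer, choosing random x_i.
        order = list(range(signer + 1, self.n_members)) + list(range(signer))
        for i in order:
            x[i] = secrets.randbelow(self.q)
            v = self._E(v ^ self._g(x[i], self.keys[i].e, self.keys[i].n))
            if i == self.n_members - 1:
                c = v                 # glue value entering position 0
        # Close the ring: pick y_signer = v ^ u and invert the signer's trapdoor.
        x[signer] = self._g(v ^ u, self.keys[signer].d, self.keys[signer].n)
        return [c] + x

    def verify(self, message, sig):
        self._key_from(message)
        c, x = sig[0], sig[1:]
        r = c
        for i in range(self.n_members):
            r = self._E(r ^ self._g(x[i], self.keys[i].e, self.keys[i].n))
        return r == c                 # the ring equation must close on the glue

    def _key_from(self, message):
        self.k = int(hashlib.sha256(message.encode()).hexdigest(), 16)

    def _E(self, z):
        # Keyed hash standing in for the symmetric function E_k.
        return int(hashlib.sha256(f"{z}{self.k}".encode()).hexdigest(), 16)

    def _g(self, w, exponent, n):
        # Extended RSA trap-door permutation over the common domain.
        quotient, remainder = divmod(w, n)
        if (quotient + 1) * n <= (1 << self.bits) - 1:
            return quotient * n + pow(remainder, exponent, n)
        return w                      # out-of-range inputs pass through unchanged
```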
To sign and verify 2 messages in a ring of 4 users:
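A hypothetical usage of the sketch above (1024-bit keys are used only to keep the demonstration fast; they are too small for real use, and the same key objects are reused here even though verification needs only the public parts):

```python
# Four ring members; the member at index 1 signs two messages.
members = [RSA.generate(1024) for _ in range(4)]
ring = Ring(members, bits=1024)

for message in ("message one", "message two"):
    signature = ring.sign(message, signer=1)
    assert ring.verify(message, signature)           # the ring closes
    assert not ring.verify("tampered", signature)    # different message fails

print("both ring signatures verified")
```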
Monero[8] and several other cryptocurrencies use this technology.[citation needed]
This article incorporates text available under the CC BY-SA 4.0 license.
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity $2^{112}$ is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly used ciphers are based on publicly known algorithms or are open source, and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively.
A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long and/or would take too much memory to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker.
Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA and elliptic-curve cryptography [ECC]). They may be grouped according to the central algorithm used (e.g. ECC and Feistel ciphers). Because each of these has a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm.[1]
The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, as of May 2007, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months.[2] The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the foreseeable future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."[3]
The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This practice, somewhat common at the time, allows large amounts of communications to be compromised at the expense of attacking a small number of primes.[4][5]
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it may be possible to run through the entire space of keys in what is known as a brute-force attack. Because longer symmetric keys require exponentially more work to brute-force search, a sufficiently long symmetric key makes this line of attack impractical.
With a key of length $n$ bits, there are $2^n$ possible keys. This number grows very rapidly as $n$ increases. The large number of operations ($2^{128}$) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future.[6] However, a quantum computer capable of running Grover's algorithm would be able to search the possible keys more efficiently: a suitably sized quantum computer would reduce a 128-bit key down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports a 256-bit key length.[a]
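A back-of-the-envelope sketch of this growth, assuming an illustrative rate of 10^18 guesses per second (an arbitrary figure, not a claim about any real machine) and modelling a Grover-style search as roughly $2^{n/2}$ evaluations:

```python
# Illustrative brute-force estimates for several key lengths.
GUESSES_PER_SECOND = 10**18      # assumed aggregate rate, for illustration only
SECONDS_PER_YEAR = 31_557_600

for n in (56, 80, 112, 128, 256):
    classical_keys = 2**n                      # keys to try classically
    grover_evals = 2**(n // 2)                 # rough Grover-style work
    years = classical_keys / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{n:3d}-bit key: 2^{n} keys, ~{years:.3g} years at the assumed rate, "
          f"about 2^{n//2} Grover-style evaluations")
```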
IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient for non-governmental protection at the time. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute-force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years".[7]
However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government.[8][9] The book Cracking DES (O'Reilly and Associates) tells of the successful ability in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys for general use. Because of this, DES was replaced in most security applications by Triple DES, which has 112 bits of security when using 168-bit keys (triple key).[1]
The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available.[citation needed] However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret.[10]
In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. In 2005, 80-bit keys were allowed only until 2010.[11]
Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST-approved symmetric encryption algorithms include three-key Triple DES and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys.[1]
The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future.
Since 2015, NIST recommends a minimum of 2048-bit keys for RSA,[12] an update to the widely accepted recommendation of a 1024-bit minimum since at least 2002.[13]
1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys.[14] In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable sometime between 2006 and 2010, while 2048-bit keys are sufficient until 2030.[15] As of 2020, the largest RSA key publicly known to be cracked is RSA-250 with 829 bits.[16]
The finite-field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key.
Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice the number of bits of the equivalent symmetric algorithm. A 256-bit Elliptic-curve Diffie–Hellman (ECDH) key has approximately the same safety factor as a 128-bit AES key.[12] A message encrypted with an elliptic-curve algorithm using a 109-bit key was broken in 2004.[17]
The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET;[10] in 2015 it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit for all classified information.[18]
The two best-known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards-based security systems, such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems, is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time, commonly known as retroactive/retrospective decryption or "harvest now, decrypt later".
Mainstream symmetric ciphers (such as AES or Twofish) and collision-resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly $2^{n/2}$ invocations of the underlying cryptographic algorithm, compared with roughly $2^n$ in the classical case.[19] Thus, in the presence of large quantum computers, an $n$-bit key can provide at least $n/2$ bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.[10]
In a 2016 Quantum Computing FAQ, the NSA affirmed:
"A sufficiently large quantum computer, if built, would be capable of undermining all widely-deployed public key algorithms used for key establishment and digital signatures. [...] It is generally accepted that quantum computing techniques are much less effective against symmetric algorithms than against current widely used public key algorithms. While public key cryptography requires changes in the fundamental design to protect against a potential future quantum computer, symmetric key algorithms are believed to be secure provided a sufficiently large key size is used. [...] The public-key algorithms (RSA, Diffie-Hellman, [Elliptic-curve Diffie–Hellman] ECDH, and [Elliptic Curve Digital Signature Algorithm] ECDSA) are all vulnerable to attack by a sufficiently large quantum computer. [...] While a number of interesting quantum resistant public key algorithms have been proposed external to NSA, nothing has been standardized by NIST, and NSA is not specifying any commercial quantum resistant standards at this time. NSA expects that NIST will play a leading role in the effort to develop a widely accepted, standardized set of quantum resistant algorithms. [...] Given the level of interest in the cryptographic community, we hope that there will be quantum resistant algorithms widely available in the next decade. [...] The AES-256 and SHA-384 algorithms are symmetric, and believed to be safe from attack by a large quantum computer."[20]
In a 2022 press release, the NSA notified:
"A cryptanalytically-relevant quantum computer (CRQC) would have the potential to break public-key systems (sometimes referred to as asymmetric cryptography) that are used today. Given foreign pursuits in quantum computing, now is the time to plan, prepare and budget for a transition to [quantum-resistant] QR algorithms to assure sustained protection of [National Security Systems] NSS and related assets in the event a CRQC becomes an achievable reality."[21]
Since September 2022, the NSA has been transitioning from the Commercial National Security Algorithm Suite (now referred to as CNSA 1.0), originally launched in January 2016, to the Commercial National Security Algorithm Suite 2.0 (CNSA 2.0), both summarized below:[22][b]
CNSA 2.0
CNSA 1.0
In machine learning, normalization is a statistical technique with various applications. There are two main forms of normalization, namely data normalization and activation normalization. Data normalization (or feature scaling) includes methods that rescale input data so that the features have the same range, mean, variance, or other statistical properties. For instance, a popular choice of feature scaling method is min-max normalization, where each feature is transformed to have the same range (typically $[0,1]$ or $[-1,1]$). This solves the problem of different features having vastly different scales, for example if one feature is measured in kilometers and another in nanometers.
Activation normalization, on the other hand, is specific to deep learning, and includes methods that rescale the activation of hidden neurons inside neural networks.
Normalization is often used to:
Normalization techniques are often theoretically justified as reducing covariate shift, smoothing optimization landscapes, and increasing regularization, though they are mainly justified by empirical success.[1]
Batch normalization (BatchNorm)[2] operates on the activations of a layer for each mini-batch.
Consider a simple feedforward network, defined by chaining together modules:
$$x^{(0)} \mapsto x^{(1)} \mapsto x^{(2)} \mapsto \cdots$$
where each network module can be a linear transform, a nonlinear activation function, a convolution, etc. Here $x^{(0)}$ is the input vector, $x^{(1)}$ is the output vector from the first module, etc.
BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just after $x^{(l)}$; then the network would operate accordingly:
$$\cdots \mapsto x^{(l)} \mapsto \mathrm{BN}\big(x^{(l)}\big) \mapsto x^{(l+1)} \mapsto \cdots$$
The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time.
Concretely, suppose we have a batch of inputs $x_{(1)}^{(0)}, x_{(2)}^{(0)}, \dots, x_{(B)}^{(0)}$, fed all at once into the network. We would obtain in the middle of the network some vectors:
$$x_{(1)}^{(l)}, x_{(2)}^{(l)}, \dots, x_{(B)}^{(l)}$$
The BatchNorm module computes the coordinate-wise mean and variance of these vectors:
$$\mu_i^{(l)} = \frac{1}{B}\sum_{b=1}^{B} x_{(b),i}^{(l)}, \qquad \left(\sigma_i^{(l)}\right)^2 = \frac{1}{B}\sum_{b=1}^{B} \left(x_{(b),i}^{(l)} - \mu_i^{(l)}\right)^2$$
where $i$ indexes the coordinates of the vectors, and $b$ indexes the elements of the batch. In other words, we are considering the $i$-th coordinate of each vector in the batch, and computing the mean and variance of these numbers.
It then normalizes each coordinate to have zero mean and unit variance:
$$\hat{x}_{(b),i}^{(l)} = \frac{x_{(b),i}^{(l)} - \mu_i^{(l)}}{\sqrt{\left(\sigma_i^{(l)}\right)^2 + \epsilon}}$$
Here $\epsilon$ is a small positive constant such as $10^{-9}$ added to the variance for numerical stability, to avoid division by zero.
Finally, it applies a linear transformation:
$$y_{(b),i}^{(l)} = \gamma_i \hat{x}_{(b),i}^{(l)} + \beta_i$$
Here, $\gamma$ and $\beta$ are parameters inside the BatchNorm module. They are learnable parameters, typically trained by gradient descent.
The following is a Python implementation of BatchNorm:
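A minimal NumPy sketch of the training-time computation (the function name and shapes are illustrative, and the running statistics used at inference are omitted):

```python
# Training-time BatchNorm for a batch of vectors.
# x has shape (B, D); gamma and beta have shape (D,).
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-9):
    mean = x.mean(axis=0)                     # per-coordinate mean over the batch
    var = x.var(axis=0)                       # per-coordinate variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)   # zero mean, unit variance
    return gamma * x_hat + beta               # learnable scale and shift

# Example: a batch of 32 activations with 8 features each.
x = np.random.randn(32, 8)
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(y.mean(axis=0).round(6), y.std(axis=0).round(3))
```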
$\gamma$ and $\beta$ allow the network to learn to undo the normalization, if this is beneficial.[3] BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus solely on modelling the nonlinear aspects of data, which may be beneficial, as a neural network can always be augmented with a linear transformation layer on top.[4][3]
It is claimed in the original publication that BatchNorm works by reducing internal covariate shift, though the claim has both supporters[5][6] and detractors.[7][8]
The original paper[2] recommended using BatchNorm only after a linear transform, not after a nonlinear activation. That is, $\phi(\mathrm{BN}(Wx + b))$, not $\mathrm{BN}(\phi(Wx + b))$. Also, the bias $b$ does not matter, since it would be canceled by the subsequent mean subtraction, so the form $\mathrm{BN}(Wx)$ is used. That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to zero.[2]
For convolutional neural networks (CNNs), BatchNorm must preserve the translation invariance of these models, meaning that it must treat all outputs of the same kernel as if they are different data points within a batch.[2] This is sometimes called Spatial BatchNorm, or BatchNorm2D, or per-channel BatchNorm.[9][10]
Concretely, suppose we have a 2-dimensional convolutional layer defined by:
$$x_{h,w,c}^{(l)} = \sum_{h',w',c'} K_{h'-h,\,w'-w,\,c,\,c'}^{(l)} \, x_{h',w',c'}^{(l-1)} + b_c^{(l)}$$
where:
In order to preserve the translational invariance, BatchNorm treats all outputs from the same kernel in the same batch as more data in a batch. That is, it is applied once per kernel $c$ (equivalently, once per channel $c$), not per activation $x_{h,w,c}^{(l+1)}$:
$$\mu_c^{(l)} = \frac{1}{BHW}\sum_{b=1}^{B}\sum_{h=1}^{H}\sum_{w=1}^{W} x_{(b),h,w,c}^{(l)}, \qquad \left(\sigma_c^{(l)}\right)^2 = \frac{1}{BHW}\sum_{b=1}^{B}\sum_{h=1}^{H}\sum_{w=1}^{W} \left(x_{(b),h,w,c}^{(l)} - \mu_c^{(l)}\right)^2$$
where $B$ is the batch size, $H$ is the height of the feature map, and $W$ is the width of the feature map.
That is, even though there are only $B$ data points in a batch, all $BHW$ outputs from the kernel in this batch are treated equally.[2]
Subsequently, normalization and the linear transform is also done per kernel:
$$\hat{x}_{(b),h,w,c}^{(l)} = \frac{x_{(b),h,w,c}^{(l)} - \mu_c^{(l)}}{\sqrt{\left(\sigma_c^{(l)}\right)^2 + \epsilon}}, \qquad y_{(b),h,w,c}^{(l)} = \gamma_c \hat{x}_{(b),h,w,c}^{(l)} + \beta_c$$
Similar considerations apply for BatchNorm for $n$-dimensional convolutions.
The following is a Python implementation of BatchNorm for 2D convolutions:
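A minimal NumPy sketch of the per-channel variant (again with illustrative names, a channels-last layout, and no running statistics):

```python
# Per-channel (spatial) BatchNorm for 2D convolutions.
# x has shape (B, H, W, C); statistics are computed over the batch and
# spatial dimensions, once per channel, as in the equations above.
import numpy as np

def batch_norm_2d(x, gamma, beta, eps=1e-9):
    mean = x.mean(axis=(0, 1, 2))              # shape (C,)
    var = x.var(axis=(0, 1, 2))                # shape (C,)
    x_hat = (x - mean) / np.sqrt(var + eps)    # broadcasts over B, H, W
    return gamma * x_hat + beta                # per-channel scale and shift

x = np.random.randn(16, 28, 28, 3)             # batch of 16 three-channel feature maps
y = batch_norm_2d(x, gamma=np.ones(3), beta=np.zeros(3))
```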
For multilayered recurrent neural networks (RNNs), BatchNorm is usually applied only to the input-to-hidden part, not the hidden-to-hidden part.[11] Let the hidden state of the $l$-th layer at time $t$ be $h_t^{(l)}$. The standard RNN, without normalization, satisfies
$$h_t^{(l)} = \phi\left(W^{(l)} h_t^{(l-1)} + U^{(l)} h_{t-1}^{(l)} + b^{(l)}\right)$$
where $W^{(l)}, U^{(l)}, b^{(l)}$ are weights and biases, and $\phi$ is the activation function. Applying BatchNorm, this becomes
$$h_t^{(l)} = \phi\left(\mathrm{BN}\left(W^{(l)} h_t^{(l-1)}\right) + U^{(l)} h_{t-1}^{(l)}\right)$$
There are two possible ways to define what a "batch" is in BatchNorm for RNNs: frame-wise and sequence-wise. Concretely, consider applying an RNN to process a batch of sentences. Let $h_{b,t}^{(l)}$ be the hidden state of the $l$-th layer for the $t$-th token of the $b$-th input sentence. Then frame-wise BatchNorm means normalizing over $b$:
$$\mu_t^{(l)} = \frac{1}{B}\sum_{b=1}^{B} h_{b,t}^{(l)}, \qquad \left(\sigma_t^{(l)}\right)^2 = \frac{1}{B}\sum_{b=1}^{B} \left(h_{b,t}^{(l)} - \mu_t^{(l)}\right)^2$$
and sequence-wise means normalizing over $(b,t)$:
$$\mu^{(l)} = \frac{1}{BT}\sum_{b=1}^{B}\sum_{t=1}^{T} h_{b,t}^{(l)}, \qquad \left(\sigma^{(l)}\right)^2 = \frac{1}{BT}\sum_{b=1}^{B}\sum_{t=1}^{T} \left(h_{b,t}^{(l)} - \mu^{(l)}\right)^2$$
Frame-wise BatchNorm is suited for causal tasks such as next-character prediction, where future frames are unavailable, forcing normalization per frame. Sequence-wise BatchNorm is suited for tasks such as speech recognition, where the entire sequences are available, but with variable lengths. In a batch, the shorter sequences are padded with zeroes to match the length of the longest sequence in the batch. In such setups, frame-wise is not recommended, because the number of unpadded frames decreases along the time axis, leading to increasingly poorer statistics estimates.[11]
It is also possible to apply BatchNorm to LSTMs.[12]
BatchNorm has been very popular, and there have been many attempted improvements. Some examples include:[13]
A particular problem with BatchNorm is that during training, the mean and variance are calculated on the fly for each batch, but during inference, the mean and variance are frozen at values estimated during training (usually as an exponential moving average of the batch statistics). This train-test disparity degrades performance. The disparity can be decreased by simulating the moving average during inference:[13]: Eq. 3
$$\mu = \alpha E[x] + (1-\alpha)\mu_{x,\text{train}}, \qquad \sigma^2 = \left(\alpha E[x^2] + (1-\alpha)\mu_{x^2,\text{train}}\right) - \mu^2$$
where $\alpha$ is a hyperparameter to be optimized on a validation set.
Other works attempt to eliminate BatchNorm, such as the Normalizer-Free ResNet.[14]
Layer normalization (LayerNorm)[15] is a popular alternative to BatchNorm. Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component of transformer models.
For a given data input and layer, LayerNorm computes the mean $\mu$ and variance $\sigma^2$ over all the neurons in the layer. Similar to BatchNorm, learnable parameters $\gamma$ (scale) and $\beta$ (shift) are applied. It is defined by:
$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \epsilon}}, \qquad y_i = \gamma_i \hat{x}_i + \beta_i$$
where:
$$\mu = \frac{1}{D}\sum_{i=1}^{D} x_i, \qquad \sigma^2 = \frac{1}{D}\sum_{i=1}^{D} (x_i - \mu)^2$$
and the index $i$ ranges over the neurons in that layer.
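A minimal NumPy sketch of this computation (names and the epsilon value are illustrative); note that, unlike BatchNorm, each sample is normalized independently of the rest of the batch:

```python
# LayerNorm: statistics are taken over the feature dimension of each sample.
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=-1, keepdims=True)      # per-sample mean
    var = x.var(axis=-1, keepdims=True)        # per-sample variance
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(4, 16)                     # 4 samples, 16 features each
y = layer_norm(x, gamma=np.ones(16), beta=np.zeros(16))
```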
For example, in a CNN, a LayerNorm applies to all activations in a layer. In the previous notation, we have:
$$\begin{aligned}\mu^{(l)} &= \frac{1}{HWC}\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{c=1}^{C} x_{h,w,c}^{(l)}\\ \left(\sigma^{(l)}\right)^2 &= \frac{1}{HWC}\sum_{h=1}^{H}\sum_{w=1}^{W}\sum_{c=1}^{C}\left(x_{h,w,c}^{(l)} - \mu^{(l)}\right)^2\\ \hat{x}_{h,w,c}^{(l)} &= \frac{x_{h,w,c}^{(l)} - \mu^{(l)}}{\sqrt{\left(\sigma^{(l)}\right)^2 + \epsilon}}\\ y_{h,w,c}^{(l)} &= \gamma^{(l)}\hat{x}_{h,w,c}^{(l)} + \beta^{(l)}\end{aligned}$$
Notice that the batch index $b$ is removed, while the channel index $c$ is added.
In recurrent neural networks[15] and transformers,[16] LayerNorm is applied individually to each timestep. For example, if the hidden vector in an RNN at timestep $t$ is $x^{(t)} \in \mathbb{R}^D$, where $D$ is the dimension of the hidden vector, then LayerNorm will be applied with:
$$\hat{x}_i^{(t)} = \frac{x_i^{(t)} - \mu^{(t)}}{\sqrt{\left(\sigma^{(t)}\right)^2 + \epsilon}}, \qquad y_i^{(t)} = \gamma_i \hat{x}_i^{(t)} + \beta_i$$
where:
$$\mu^{(t)} = \frac{1}{D}\sum_{i=1}^{D} x_i^{(t)}, \qquad \left(\sigma^{(t)}\right)^2 = \frac{1}{D}\sum_{i=1}^{D}\left(x_i^{(t)} - \mu^{(t)}\right)^2$$
Root mean square layer normalization (RMSNorm)[17] changes LayerNorm by:
$$\hat{x}_i = \frac{x_i}{\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}}, \qquad y_i = \gamma \hat{x}_i + \beta$$
Essentially, it is LayerNorm where we enforce $\mu = 0$ and $\epsilon = 0$.
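A minimal NumPy sketch (a tiny epsilon is added inside the square root purely for numerical safety, which the formula above omits):

```python
# RMSNorm: no mean subtraction, only a scale (and optional shift).
import numpy as np

def rms_norm(x, gamma, beta=0.0, eps=1e-8):
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return gamma * (x / rms) + beta

x = np.random.randn(4, 16)
y = rms_norm(x, gamma=np.ones(16))
```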
Adaptive layer norm (adaLN) computes the $\gamma, \beta$ in a LayerNorm not from the layer activation itself, but from other data. It was first proposed for CNNs,[18] and has been used effectively in diffusion transformers (DiTs).[19] For example, in a DiT, the conditioning information (such as a text encoding vector) is processed by a multilayer perceptron into $\gamma, \beta$, which is then applied in the LayerNorm module of a transformer.
Weight normalization(WeightNorm)[20]is a technique inspired by BatchNorm that normalizes weight matrices in a neural network, rather than its activations.
One example isspectral normalization, which divides weight matrices by theirspectral norm. The spectral normalization is used ingenerative adversarial networks(GANs) such as theWasserstein GAN.[21]The spectral radius can be efficiently computed by the following algorithm:
INPUTmatrixW{\displaystyle W}and initial guessx{\displaystyle x}
Iteratex↦1‖Wx‖2Wx{\displaystyle x\mapsto {\frac {1}{\|Wx\|_{2}}}Wx}to convergencex∗{\displaystyle x^{*}}. This is the eigenvector ofW{\displaystyle W}with eigenvalue‖W‖s{\displaystyle \|W\|_{s}}.
RETURNx∗,‖Wx∗‖2{\displaystyle x^{*},\|Wx^{*}\|_{2}}
By reassigningWi←Wi‖Wi‖s{\displaystyle W_{i}\leftarrow {\frac {W_{i}}{\|W_{i}\|_{s}}}}after each update of the discriminator, we can upper-bound‖Wi‖s≤1{\displaystyle \|W_{i}\|_{s}\leq 1}, and thus upper-bound‖D‖L{\displaystyle \|D\|_{L}}.
The algorithm can be further accelerated bymemoization: at stept{\displaystyle t}, storexi∗(t){\displaystyle x_{i}^{*}(t)}. Then, at stept+1{\displaystyle t+1}, usexi∗(t){\displaystyle x_{i}^{*}(t)}as the initial guess for the algorithm. SinceWi(t+1){\displaystyle W_{i}(t+1)}is very close toWi(t){\displaystyle W_{i}(t)}, so isxi∗(t){\displaystyle x_{i}^{*}(t)}toxi∗(t+1){\displaystyle x_{i}^{*}(t+1)}, thus allowing rapid convergence.
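The following NumPy sketch follows the pseudocode above, assuming a square weight matrix W and a stored initial guess x (for example, memoized from the previous training step); it is illustrative rather than a reference implementation.

```python
import numpy as np

def spectral_norm(W, x, n_iter=20):
    # W: square weight matrix; x: initial guess (memoized from the previous step)
    for _ in range(n_iter):
        x = W @ x
        x = x / np.linalg.norm(x)             # x <- Wx / ||Wx||_2
    return x, np.linalg.norm(W @ x)           # x*, ||W x*||_2

# after each discriminator update: W <- W / spectral_norm(W, x)[1]
```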
There are some activation normalization techniques that are only used for CNNs.
Local response normalization[22]was used inAlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined by:
{\displaystyle b_{x,y}^{i}={\frac {a_{x,y}^{i}}{\left(k+\alpha \sum _{j=\max(0,i-n/2)}^{\min(N-1,i+n/2)}\left(a_{x,y}^{j}\right)^{2}\right)^{\beta }}}}
whereax,yi{\displaystyle a_{x,y}^{i}}is the activation of the neuron at location(x,y){\displaystyle (x,y)}and channeli{\displaystyle i}. I.e., each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels.
k,n,α,β{\displaystyle k,n,\alpha ,\beta }are hyperparameters picked by using a validation set.
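A straightforward NumPy sketch of the formula for a single sample with activations of shape (H, W, N); the default hyperparameter values are only illustrative.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    # a: activations of one sample, shape (H, W, N) with N channels
    H, W, N = a.shape
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[:, :, lo:hi + 1] ** 2, axis=2)) ** beta
        b[:, :, i] = a[:, :, i] / denom       # suppress by adjacent channels
    return b
```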
It was a variant of the earlierlocal contrast normalization.[23]
{\displaystyle b_{x,y}^{i}={\frac {a_{x,y}^{i}}{\left(k+\alpha \sum _{j=\max(0,i-n/2)}^{\min(N-1,i+n/2)}\left(a_{x,y}^{j}-{\bar {a}}_{x,y}^{j}\right)^{2}\right)^{\beta }}}}
wherea¯x,yj{\displaystyle {\bar {a}}_{x,y}^{j}}is the average activation in a small window centered on location(x,y){\displaystyle (x,y)}and channeli{\displaystyle i}. The hyperparametersk,n,α,β{\displaystyle k,n,\alpha ,\beta }, and the size of the small window, are picked by using a validation set.
Similar methods were calleddivisive normalization, as they divide activations by a number depending on the activations. They were originally inspired by biology, where it was used to explain nonlinear responses of cortical neurons and nonlinear masking in visual perception.[24]
Both kinds of local normalization were obviated by batch normalization, which is a more global form of normalization.[25]
Response normalization reappeared in ConvNeXT-2 asglobal response normalization.[26]
Group normalization(GroupNorm)[27]is a technique also solely used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel group.
Suppose at a layerl{\displaystyle l}, there are channels1,2,…,C{\displaystyle 1,2,\dots ,C}, then it is partitioned into groupsg1,g2,…,gG{\displaystyle g_{1},g_{2},\dots ,g_{G}}. Then, LayerNorm is applied to each group.
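A minimal NumPy sketch for one sample with channels-last activations, assuming gamma and beta are per-channel parameters (names are illustrative):

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups, eps=1e-5):
    # x: one sample, shape (H, W, C); gamma, beta: per-channel parameters, shape (C,)
    C = x.shape[-1]
    y = np.empty_like(x)
    for g in np.array_split(np.arange(C), num_groups):
        mu, var = x[:, :, g].mean(), x[:, :, g].var()
        y[:, :, g] = (x[:, :, g] - mu) / np.sqrt(var + eps)   # LayerNorm per group
    return gamma * y + beta
```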
Instance normalization(InstanceNorm), orcontrast normalization, is a technique first developed forneural style transfer, and is also only used for CNNs.[28]It can be understood as the LayerNorm for CNN applied once per channel, or equivalently, as group normalization where each group consists of a single channel:
{\displaystyle {\begin{aligned}\mu _{c}^{(l)}&={\frac {1}{HW}}\sum _{h=1}^{H}\sum _{w=1}^{W}x_{h,w,c}^{(l)}\\(\sigma _{c}^{(l)})^{2}&={\frac {1}{HW}}\sum _{h=1}^{H}\sum _{w=1}^{W}(x_{h,w,c}^{(l)}-\mu _{c}^{(l)})^{2}\\{\hat {x}}_{h,w,c}^{(l)}&={\frac {x_{h,w,c}^{(l)}-\mu _{c}^{(l)}}{\sqrt {(\sigma _{c}^{(l)})^{2}+\epsilon }}}\\y_{h,w,c}^{(l)}&=\gamma _{c}^{(l)}{\hat {x}}_{h,w,c}^{(l)}+\beta _{c}^{(l)}\end{aligned}}}
Adaptive instance normalization(AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNNs, rather than just CNNs in general.[29]
In the AdaIN method of style transfer, we take a CNN and two input images, one forcontentand one forstyle. Each image is processed through the same CNN, and at a certain layerl{\displaystyle l}, AdaIn is applied.
Let {\displaystyle x^{(l),{\text{ content}}}} be the activation in the content image and {\displaystyle x^{(l),{\text{ style}}}} the activation in the style image. AdaIN first computes the per-channel mean and variance of the style activations {\displaystyle x^{(l),{\text{ style}}}}, then uses those as the {\displaystyle \gamma ,\beta } for InstanceNorm applied to {\displaystyle x^{(l),{\text{ content}}}}, as the equation below shows. Note that {\displaystyle x^{(l),{\text{ style}}}} itself remains unchanged. Explicitly, we have:
{\displaystyle {\begin{aligned}y_{h,w,c}^{(l),{\text{ content}}}&=\sigma _{c}^{(l),{\text{ style}}}\left({\frac {x_{h,w,c}^{(l),{\text{ content}}}-\mu _{c}^{(l),{\text{ content}}}}{\sqrt {(\sigma _{c}^{(l),{\text{ content}}})^{2}+\epsilon }}}\right)+\mu _{c}^{(l),{\text{ style}}}\end{aligned}}}
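A minimal NumPy sketch of this operation for channels-last activations of shape (H, W, C); function and variable names are illustrative.

```python
import numpy as np

def adain(x_content, x_style, eps=1e-5):
    # x_content, x_style: activations at the same layer, shape (H, W, C)
    mu_c = x_content.mean(axis=(0, 1))        # per-channel content statistics
    var_c = x_content.var(axis=(0, 1))
    mu_s = x_style.mean(axis=(0, 1))          # per-channel style statistics
    sigma_s = x_style.std(axis=(0, 1))
    return sigma_s * (x_content - mu_c) / np.sqrt(var_c + eps) + mu_s
```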
Some normalization methods were designed for use intransformers.
The original 2017 transformer used the "post-LN" configuration for its LayerNorms. It was difficult to train, and required carefulhyperparameter tuningand a "warm-up" inlearning rate, where it starts small and gradually increases. The pre-LN convention, proposed several times in 2018,[30]was found to be easier to train, requiring no warm-up, leading to faster convergence.[31]
FixNorm[32]andScaleNorm[33]both normalize activation vectors in a transformer. The FixNorm method divides theoutputvectors from a transformer by their L2 norms, then multiplies by a learned parameterg{\displaystyle g}. The ScaleNorm replaces all LayerNorms inside a transformer by division with L2 norm, then multiplying by a learned parameterg′{\displaystyle g'}(shared by all ScaleNorm modules of a transformer).Query-Key normalization(QKNorm)[34]normalizes query and key vectors to have unit L2 norm.
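As an illustration, a minimal NumPy sketch of QKNorm, rescaling query and key vectors to unit L2 norm along their last axis before the attention dot product; the small eps term is an illustrative safeguard against division by zero, not part of the definition.

```python
import numpy as np

def qk_norm(q, k, eps=1e-8):
    # q, k: query and key vectors, shape (..., d); rescale each to unit L2 norm
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + eps)
    return q, k
```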
InnGPT, many vectors are normalized to have unit L2 norm:[35]hidden state vectors, input and output embedding vectors, weight matrix columns, and query and key vectors.
Gradient normalization(GradNorm)[36]normalizes gradient vectors during backpropagation. | https://en.wikipedia.org/wiki/Normalization_(machine_learning) |
Distributional semantics[1]is a research area that develops and studies theories and methods for quantifying and categorizing semantic similarities between linguistic items based on their distributional properties in large samples of language data. The basic idea of distributional semantics can be summed up in the so-calleddistributionalhypothesis:linguistic items with similar distributions have similar meanings.
Thedistributional hypothesisinlinguisticsis derived from thesemantic theoryof language usage, i.e. words that are used and occur in the samecontextstend to purport similar meanings.[2]
The underlying idea that "a word is characterized by the company it keeps" was popularized byFirthin the 1950s.[3]
The distributional hypothesis is the basis forstatistical semantics. Although the Distributional Hypothesis originated in linguistics,[4][5]it is now receiving attention incognitive scienceespecially regarding the context of word use.[6]
In recent years, the distributional hypothesis has provided the basis for the theory ofsimilarity-based generalizationin language learning: the idea that children can figure out how to use words they've rarely encountered before by generalizing about their use from distributions of similar words.[7][8]
The distributional hypothesis suggests that the more semantically similar two words are, the more distributionally similar they will be in turn, and thus the more that they will tend to occur in similar linguistic contexts.
Whether or not this suggestion holds has significant implications both for the data-sparsity problem in computational modeling,[9] and for the question of how children are able to learn language so rapidly given relatively impoverished input (this is also known as the problem of the poverty of the stimulus).
Distributional semantics favor the use of linear algebra as a computational tool and representational framework. The basic approach is to collect distributional information in high-dimensional vectors, and to define distributional/semantic similarity in terms of vector similarity.[10]Different kinds of similarities can be extracted depending on which type of distributional information is used to collect the vectors:topicalsimilarities can be extracted by populating the vectors with information on which text regions the linguistic items occur in;paradigmaticsimilarities can be extracted by populating the vectors with information on which other linguistic items the items co-occur with. Note that the latter type of vectors can also be used to extractsyntagmaticsimilarities by looking at the individual vector components.
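As a toy illustration of this vector-space approach, the sketch below builds hand-made co-occurrence vectors (the counts are invented, not drawn from a real corpus) and measures distributional similarity as the cosine of the angle between them.

```python
import numpy as np

# invented co-occurrence counts: rows are target words, columns are context words
contexts = ["drink", "eat", "drive"]
vectors = {
    "coffee": np.array([10.0, 2.0, 0.0]),
    "tea":    np.array([8.0, 1.0, 0.0]),
    "car":    np.array([0.0, 0.0, 9.0]),
}

def cosine(u, v):
    # distributional similarity as the cosine of the angle between vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(vectors["coffee"], vectors["tea"]))   # high: similar contexts
print(cosine(vectors["coffee"], vectors["car"]))   # low: different contexts
```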
The basic idea of a correlation between distributional and semantic similarity can be operationalized in many different ways. There is a rich variety of computational models implementing distributional semantics, includinglatent semantic analysis(LSA),[11][12]Hyperspace Analogue to Language(HAL), syntax- or dependency-based models,[13]random indexing,semantic folding[14]and various variants of thetopic model.[15]
Distributional semantic models differ primarily with respect to the following parameters:
Distributional semantic models that use linguistic items as context have also been referred to asword space, or vector space models.[17][18]
While distributional semantics typically has been applied to lexical items—words and multi-word terms—with considerable success, not least due to its applicability as an input layer for neurally inspired deep learning models,lexical semantics, i.e. the meaning of words, will only carry part of the semantics of an entire utterance. The meaning of a clause, e.g."Tigers love rabbits.", can only partially be understood from examining the meaning of the three lexical items it consists of. Distributional semantics can straightforwardly be extended to cover larger linguistic item such as constructions, with and without non-instantiated items, but some of the base assumptions of the model need to be adjusted somewhat.Construction grammarand its formulation of the lexical-syntactic continuum offers one approach for including more elaborate constructions in a distributional semantic model and some experiments have been implemented using the Random Indexing approach.[19]
Compositional distributional semanticmodels extend distributional semantic models by explicit semantic functions that use syntactically based rules to combine the semantics of participating lexical units into acompositional modelto characterize the semantics of entire phrases or sentences. This work was originally proposed by Stephen Clark,Bob Coecke, andMehrnoosh SadrzadehofOxford Universityin their 2008 paper, "A Compositional Distributional Model of Meaning".[20]Different approaches to composition have been explored—including neural models—and are under discussion at established workshops such asSemEval.[21]
Distributional semantic models have been applied successfully to the following tasks: | https://en.wikipedia.org/wiki/Distributional_semantics |
Malvertising(aportmanteauof "malicious software (malware) advertising") is the use ofonline advertisingto spreadmalware.[1]It typically involves injecting malicious or malware-laden advertisements into legitimateonline advertising networksandwebpages.[2]Because advertising content can be inserted into high-profile and reputable websites, malvertising provides malefactors an opportunity to push their attacks to web users who might not otherwise see the ads, due to firewalls, more safety precautions, or the like.[3][4]Malvertising is "attractive to attackers because they 'can be easily spread across a large number of legitimate websites without directly compromising those websites'."[5]
Malvertising can be extremely hard to combat because it can quietly work its way into a webpage or webpage advertisement and spread unknowingly: "The interesting thing about infections delivered through malvertising is that it does not require any user action (like clicking) to compromise the system and it does not exploit any vulnerabilities on the website or the server it is hosted from... infections delivered through malvertising silently travel through Web page advertisements."[6]It is able to expose millions of users to malware, even the most cautious, and is growing rapidly: "In 2012, it was estimated nearly 10 billion ad impressions were compromised by malvertising."[2]Attackers have a very wide reach and are able to deliver these attacks easily through advertisement networks. Companies and websites have had difficulty diminishing the number of malvertising attacks, which "suggests that this attack vector isn’t likely to disappear soon."[5]
When websites or web publishers unknowingly incorporate corrupted or malicious advertisements into their page, computers can become infected pre-click and post-click. It is a misconception that infection only happens when visitors begin clicking on a malvertisement. "Examples of pre-click malware include being embedded in main scripts of the page ordrive-by-downloads. Malware can also auto-run, as in the case of auto redirects, where the user is automatically taken to a different site (without user interaction, such as clicking on them), which could be malicious. Malware can also be found in the delivery of an ad – where a clean ad that has no malware pre- or post-click (in its build and design) can still be infected whilst being called.[7]Malicious code can hide undetected and the user has no idea what's coming their way. A post-click malvertisement example: "the user clicks on the ad to visit the advertised site, and instead is directly infected or redirected to a malicious site. These sites trick users into copying viruses or spyware usually disguised as Flash files, which are very popular on the web."[8]Redirectionis often built into online advertising, and this spread of malware is often successful because users expect a redirection to happen when clicking on an advertisement. A redirection that is taking place only needs to be co-opted in order to infect a user's computer.[1]
Malvertising affects every part of the digital advertising chain differently. From platforms to publishers, and all the way down to the end-user who may have been the victim of a malvertising attack, everyone is affected.[9]Malvertising often involves the exploitation of trustworthy companies. Those attempting to spread malware place "clean" advertisements on trustworthy sites first in order to gain a good reputation, then they later "insert a virus or spyware in the code behind the ad, and after a mass virus infection is produced, they remove the virus", thus infecting all visitors of the site during that time period. The identities of those responsible are often hard to trace, making it hard to prevent the attacks or stop them altogether, because the "ad network infrastructure is very complex with many linked connections between ads and click-through destinations."[8]
Some malvertisements can infect a vulnerable computer even if the user never clicks on the (normal-appearing) advertisement.[10]
The first recorded sightings of malvertising were in late 2007 and early 2008. The threat was based on a vulnerability in Adobe Flash (something that has continued into the late 2010s[11]) and affected a number of platforms includingMySpace, Excite and Rhapsody. In 2009, the online edition ofThe New York Times Magazinewas found to be serving an ad that was part of a largerclick fraudscam that created a botnet network of malware-infected computers, nicknamed the Bahama botnet, that then went on to be used to carry out click fraud on pay per click ads all over the web. The banner feed ofThe New York Timeswas hacked for the weekend of September 11 to 14, causing some readers to see advertisements telling them their systems were infected and trying to trick them into installingrogue security softwareon their computers. According to spokeswoman Diane McNulty, "The culprit approached the newspaper as a national advertiser and had provided apparently legitimate ads for a week", and the ads were switched to the virus alert malvertisement afterwards.The New York Timesthen suspended third-party advertisements to address the problem, and even posted advice for readers regarding this issue on its technology blog.[12]
In 2010, malvertising took off. Marketing analysts ClickZ[13]noted that the Online Trust Alliance (OTA) identified billions of display ads, across 3500 sites carrying malware. In the same year the Online Trust Alliance[14]formed a cross industry Anti-Malvertising Task Force. In 2011, Spotify had a malvertising attack which used theBlackhole exploit kit– this was one of the first instances of adrive-by download, where a user does not even have to click on an ad to become infected with malware. Symantec added malvertising as a section in their Internet Security Threat Report 2013 in 2012.[15]Symantec used scanning software across a series of websites and detected that half of them were infected with malvertising. In 2012, theLos Angeles Timeswas hit by a massive malvertising attack which used the Blackhole exploit kit to infect users. It was seen as part of a general campaign of malvertising to hit large news portals – this strategy carried on into subsequent years with attacks on huffingtonpost.com andThe New York Times. The growing intensity of malvertising continued in 2013, when a major malvertising campaign was waged againstYahoo.com, one of the largest ad platforms with monthly visits of 6.9 billion. The malware exploit was based on the commonly used web attack,Cross-site scripting(XSS), number three in the top ten web attacks types identified by the Open Web Application Security Project[16](OWASP). The attack infected users' machines with the ransomware Cryptowall, a type of malware that extorts money from users by encrypting their data and placing a ransom of up to $1000 in bitcoins, to be paid in seven days, to decrypt the data. In 2014, there were major malvertising campaigns on theDoubleClickandZedoad networks. Various news portals, includingThe Times of Israeland theHindustan Times, were affected. As in previous attacks the cybercrime involved Cryptowall as the malware infection. This spate of malvertising was believed to have brought over $1 million of ransom money in by infecting over 600,000 computers.[17]
According toMcAfee's February 2015 Threat Report, malvertising was beginning to grow quickly on mobile platforms in late 2014 and early 2015.[18]Additionally, in 2015, there were malvertising campaigns oneBay,Answers.com, talktalk.co.uk, and wowhead.com, among others. The campaigns involved breaches of ad networks, including DoubleClick and engage:BDR. There was also a report of possibly the first "political malvertising" campaign by pro-Russian activists, which was based on a botnet, which then forced users' machines to visit bogus sites that generated ad revenue for the activists. The users also ended up at several pro-Russian propaganda videos.[19]
In 2021, ransomware gang REvil was spotted using paid positioning in Google search results to deliver malicious files to victims.[20] Malvertising cash or cryptocurrency giveaway campaigns, with actors masquerading as popular figures including YouTuber MrBeast, Elon Musk, and others, have been seen across many advertising platforms and social media sites.[21][22] In 2022, reports surfaced of native advertising on Google search masquerading as download pages for various software (oftentimes open source), leading users to instead download ransomware or info stealers, or redirecting them to tech support scams.[23][24][25]
Several popular websites and news sources have been victims to malvertising and have had malicious advertisements placed on their webpages or widgets unknowingly, including Horoscope.com,The New York Times,[26]theLondon Stock Exchange,Spotify, andThe Onion.[5]
By visiting websites that are affected by malvertising, users are at risk of infection. There are many different methods used for injecting malicious advertisements or programs into webpages:
There are several precautions that people can take to reduce their chances of getting tricked by these advertisements. Commonly used programs such as Adobe Flash Player and Adobe Reader can have, and have had, their flaws exploited and become vulnerable to attacks, so they should no longer be used. Users can also download anti-virus software that protects against threats and removes malicious software from their systems. Users can also push companies and websites to scan advertisements before making them active on their webpages.[2] Users can also use ad blocking software to avoid downloading the malware contained in advertisements,[32] or specific browser extensions that warn of malvertising campaigns.[33] | https://en.wikipedia.org/wiki/Malvertising
Inprobability theory, theDoob–Dynkin lemma, named afterJoseph L. DoobandEugene Dynkin(also known as thefactorization lemma), characterizes the situation when onerandom variableis a function of another by theinclusionof theσ{\displaystyle \sigma }-algebrasgenerated by the random variables. The usual statement of the lemma is formulated in terms of one random variable beingmeasurablewith respect to theσ{\displaystyle \sigma }-algebra generated by the other.
The lemma plays an important role in theconditional expectationin probability theory, where it allows replacement of the conditioning on arandom variableby conditioning on theσ{\displaystyle \sigma }-algebrathat isgeneratedby the random variable.
In the lemma below,B[0,1]{\displaystyle {\mathcal {B}}[0,1]}is theσ{\displaystyle \sigma }-algebra ofBorel setson[0,1].{\displaystyle [0,1].}IfT:X→Y,{\displaystyle T\colon X\to Y,}and(Y,Y){\displaystyle (Y,{\mathcal {Y}})}is a measurable space, then
is the smallestσ{\displaystyle \sigma }-algebra onX{\displaystyle X}such thatT{\displaystyle T}isσ(T)/Y{\displaystyle \sigma (T)/{\mathcal {Y}}}-measurable.
LetT:Ω→Ω′{\displaystyle T\colon \Omega \rightarrow \Omega '}be a function, and(Ω′,A′){\displaystyle (\Omega ',{\mathcal {A}}')}a measurable space. A functionf:Ω→[0,1]{\displaystyle f\colon \Omega \rightarrow [0,1]}isσ(T)/B[0,1]{\displaystyle \sigma (T)/{\mathcal {B}}[0,1]}-measurable if and only iff=g∘T,{\displaystyle f=g\circ T,}for someA′/B[0,1]{\displaystyle {\mathcal {A}}'/{\mathcal {B}}[0,1]}-measurableg:Ω′→[0,1].{\displaystyle g\colon \Omega '\to [0,1].}[1]
Remark.The "if" part simply states that the composition of two measurable functions is measurable. The "only if" part is proven below.
Letf{\displaystyle f}beσ(T)/B[0,1]{\displaystyle \sigma (T)/{\mathcal {B}}[0,1]}-measurable.
First, note that, by the above descriptive definition ofσ(T){\displaystyle \sigma (T)}as the set of preimages ofA′{\displaystyle {\mathcal {A}}'}-measurable sets underT{\displaystyle T}, we know that ifA∈σ(T){\displaystyle A\in \sigma (T)}, then there exists someA′∈A′{\displaystyle A'\in {\mathcal {A}}'}such thatA=T−1(A′){\displaystyle A=T^{-1}(A')}.
Now, assume thatf=1A{\displaystyle f=\mathbf {1} _{A}}is anindicatorof some setA∈σ(T){\displaystyle A\in \sigma (T)}. If we identifyA′∈A′{\displaystyle A'\in {\mathcal {A}}'}such thatA=T−1(A′){\displaystyle A=T^{-1}(A')}, then the functiong=1A′{\displaystyle g=\mathbf {1} _{A'}}suits the requirement, and sinceA∈σ(T){\displaystyle A\in \sigma (T)}, such a setA′∈A′{\displaystyle A'\in {\mathcal {A}}'}always exists. By linearity, the claim extends to anysimple measurable functionf.{\displaystyle f.}
Letf{\displaystyle f}bemeasurablebut not necessarily simple. As explained in the article onsimple functions,f{\displaystyle f}is a pointwise limit of a monotonically non-decreasing sequencefn≥0{\displaystyle f_{n}\geq 0}of simple functions. The previous step guarantees thatfn=gn∘T,{\displaystyle f_{n}=g_{n}\circ T,}for some measurablegn.{\displaystyle g_{n}.}The supremumg(x)=supn≥1gn(x){\displaystyle \textstyle g(x)=\sup _{n\geq 1}g_{n}(x)}exists on the entireΩ′{\displaystyle \Omega '}and is measurable. (The article onmeasurable functionsexplains why supremum of a sequence of measurable functions is measurable). For everyx∈ImT,{\displaystyle x\in \operatorname {Im} T,}the sequencegn(x){\displaystyle g_{n}(x)}is non-decreasing, sog|ImT(x)=limn→∞gn|ImT(x){\displaystyle \textstyle g|_{\operatorname {Im} T}(x)=\lim _{n\to \infty }g_{n}|_{\operatorname {Im} T}(x)}which shows thatf=g∘T.{\displaystyle f=g\circ T.}
Remark.The lemma remains valid if the space([0,1],B[0,1]){\displaystyle ([0,1],{\mathcal {B}}[0,1])}is replaced with(S,B(S)),{\displaystyle (S,{\mathcal {B}}(S)),}whereS⊆[−∞,∞],{\displaystyle S\subseteq [-\infty ,\infty ],}S{\displaystyle S}is bijective with[0,1],{\displaystyle [0,1],}and the bijection is measurable in both directions.
By definition, the measurability off{\displaystyle f}means thatf−1(S)∈σ(T){\displaystyle f^{-1}(S)\in \sigma (T)}for every Borel setS⊆[0,1].{\displaystyle S\subseteq [0,1].}Thereforeσ(f)⊆σ(T),{\displaystyle \sigma (f)\subseteq \sigma (T),}and the lemma may be restated as follows.
Lemma.LetT:Ω→Ω′,{\displaystyle T\colon \Omega \rightarrow \Omega ',}f:Ω→[0,1],{\displaystyle f\colon \Omega \rightarrow [0,1],}and(Ω′,A′){\displaystyle (\Omega ',{\mathcal {A}}')}is a measurable space. Thenf=g∘T,{\displaystyle f=g\circ T,}for someA′/B[0,1]{\displaystyle {\mathcal {A}}'/{\mathcal {B}}[0,1]}-measurableg:Ω′→[0,1],{\displaystyle g\colon \Omega '\to [0,1],}if and only ifσ(f)⊆σ(T){\displaystyle \sigma (f)\subseteq \sigma (T)}. | https://en.wikipedia.org/wiki/Factorization_lemma |
Aparallel importis a non-counterfeitproductimported from another country without the permission of theintellectual propertyowner. Parallel imports are often referred to as agrey productand are implicated in issues ofinternational trade, andintellectual property.[1]
Parallel importing is based on the concept of exhaustion of intellectual property rights; according to this concept, when the product is first launched on the market in a particular jurisdiction, parallel importation is authorized to all residents in the state in question.[2] Some countries allow it but others do not.[3]
Parallel importing of pharmaceuticals reduces the price of pharmaceuticals by introducing competition; the TRIPS Agreement states in Article 6 that this practice cannot be challenged under the WTO dispute settlement system, so it is effectively a matter of national discretion.[4]
The practice of parallel importing is often advocated in the case of software, music, printed texts and electronic products, and occurs for several reasons:
Parallel importing is regulated differently in different jurisdictions; there is no consistency in laws dealing with parallel imports between countries. Neither theBerne Conventionnor theParis Conventionexplicitly prohibit parallel importation.
The Australian market is an example of a relatively small consumer market which does not benefit from theeconomies of scaleand competition available in the larger global economies. Australia tends to have lower levels of competition in many industries and oligopolies are common in industries like banking, supermarkets, and mobile telecommunications.
Private enterprise will use product segmentation strategies to legally maximise profit. This often includes varying service levels, pricing and product features to improve the so-called "fit" to the local marketplace. However, this segmentation may mean identical products at higher prices. This can be termed price discrimination.[7]With the advent of the Internet, Australian consumers can readily compare prices globally and have been able to identify products exhibiting price discrimination, also known as the "Australia Tax".
In 1991, the Australian Government resolved to remove parallel import restrictions from a range of products except cars. It followed this up with legislation making it legal to source music and software CDs from overseas and import them into Australia. An Australian Productivity Commission report recommended in July 2009 that legislation be extended to legalise the parallel importing of books, with three years' notice for publishers.[8]The commission also recommended abolishing restrictions on parallel importing of cars.[9]
A Federal Court of Australia decision has ruled that parallel imported items with valid trademarks are subject to Section 123 of the Trade Mark Act.
Various Australian Parliament committees have investigated allegations of price discrimination.[10]
TheEuropean Union(andEuropean Economic Area) require the doctrine of international exhaustion to exist between member states, but EU legislation for trademarks, design rights and copyright prohibits its application to goods put on the market outside the EU/EEA.
InGermany, theBundesgerichtshofhas held that thedoctrine of international exhaustiongoverns parallel importation, subject to the EU rules above.
In Hong Kong, parallel importation is permitted under both the Trade Mark and (amended) Copyright Ordinance before The Copyright (Amendment) Ordinance 2007 came into force 6 July.[11]
Japan's intellectual property rights law prohibits audiovisual articles marketed for export from being sold domestically, and such sales of "re-imported" CDs are illegal.
In the United States, courts have established that parallel importation is legal.[12]In the case ofKirtsaeng v. John Wiley & Sons, Inc., the US Supreme Court held that thefirst-sale doctrineapplies to copies of a copyrighted work lawfully made abroad, thus permitting importation and resale of many product categories.
Moreover, the Science, State, Justice, and Commerce, and Related Agencies, Appropriations Act of 2006 prohibits future free trade agreements from categorically disallowing the parallel import of patented products.[13]
The United States has unique automobile design legislation administered by theNational Highway Traffic Safety Administration. Certain car makers find the required modifications too expensive. In the past, this created demand forgrey import vehicles, where certain models are modified for individual customers to meet these requirements at a higher cost than if it had been done by the original manufacturer. This procedure interferes with the marketing scheme of the manufacturer, who might plan to import a less powerful car and force consumers to accept it. The Imported Vehicle Safety Compliance Act of 1988 basically ended the gray market by requiring manufacturer certification of U.S.-bound cars.[14]
Markets for parallel imports and locally made products sometimes exist alongside each other even though the parallel imports are markedly more expensive. This may be for various reasons, but is mostly observed in foodstuffs and toiletry.
Due to the nature of hotels, travellers often have little information on where to shop except in the immediate vicinity. Grocery shops opened to serve brand-name hotels often feature parallel-imported foodstuffs and toiletry to cater to travellers so that they can easily recognise the product they have been using at home.
Foodstuffs and toiletry made from different plants may vary in quality because different plants may use materials or reagents (such as water used for washing, food additives) from different sources, although they are usually subject to the same standards by internal QC or public health authorities. A person may be allergic to the foodstuff or toiletry made by some plants but not others.
To sum up, the major reasons for such a market are:
A manifestation of the philosophical divide between those who support various intellectual property andthose who are critical of it, is the divide over the legitimacy of parallel importation. Some believe that it benefits consumers by lowering prices and widening the selection and consumption of products available in themarket, while others believe that it discourages intellectual property owners from investing in new andinnovativeproducts. Some also believe that parallel imports tend to facilitatecopyright infringement.
This tension essentially concerns therightsanddutiesof a protectedmonopoly. Intellectual property rights allow the holder to sell at a price that is higher than the price one would pay in acompetitive market, but by doing so the holder relinquishes sales to those who would be prepared to buy at a price between the monopoly price and the competitive price. The presence of parallel imports in the marketplace prevents the holder from exploiting the monopoly further bymarket segmentation, i.e. by applying different prices to different consumers.
Consumer organizationstend to support parallel importation as it offers consumers more choice and lower prices, provided that consumers retain equivalent legal protection to locally sourced products (e.g. in the form ofwarrantieswith international effect), and competition is not diminished.
However, such organisations also warn consumers of certain risks in using parallel-imported products. Although the products may have been made to comply with the laws and customs of their place of origin, these products or their use may not comply with those in places where they are used, or some of their functions may be rendered unusable or meaningless (which may needlessly drive up prices). Electronic devices, however, suffer less from this type of risk because newer models support more than one user language.
Importation ofcomputer gamesand computer game hardware fromAsiais a common practice for some wholesale and/or retail stockists. Many consumers now take advantage of on-line stores inHong Kongand theUnited Statesto purchase computer games at or near half the cost of a retail purchase from the Australian RRP. Often the versions sold by the Asian retailers are manufactured in Australia to begin with. An example isCrysis, which was available from Hong Kong on-line stores for approximately A$50, but whose retail cost in Australia was close to $100. Crysis was sold in Asia using identical versions of the game box and disc, right down to including Australian censor ratings on the box.
Importation ofColgatetoothpastefromThailandintoHong Kong. The goods are bought in markets where the price is lower, and sold in markets where the price of the same goods is, for a variety of reasons, higher. Electronic goods like Apple'siPadare frequently imported in Hong Kong before they're official and resold to South-East Asian early adopters for a premium.
The practice exists of luxurycardealers inNew ZealandbuyingMercedes-BenzvehiclesinMalaysiaat a low price, and importing the cars into New Zealand to sell at a price lower than the price offered by Mercedes Benz to New Zealand consumers.[citation needed]There are also many parallel import dealers of electronics hardware. Parallel importing is allowed in New Zealand and has resulted in a significant lowering of margins on many products.[citation needed]
There is a popular, though not scientifically proven, opinion in Poland that "Western" washing powders clean more effectively than Polish ones, because chemical companies allegedly produce higher-quality products for Western Europe. Because of that, there are companies and online stores importing Western household chemicals into Poland (for example from Germany), even if similar brands are available there.[15][16]
According toAnatoliy Semyonov, trademark rights exhaustion turned national in 2002, and, as of April 2013, an act is being prepared that could make original goods imported without a permission of the producer officially "counterfeit" (by replacing things on which "a trademark is located illegally" with things "on which an illegally used trademark is located"). He notes that, according to theCriminal Code, illegal use of a trademark can be punished up to 6 years of imprisonment; and a similar article in theOffences Codemakes goods with an illegal copy of a trademark subject to confiscation.[17][18]
In 2022, following the exit of various Western firms from Russia as a result of theRussian invasion of Ukraine, a parallel import scheme was legalized to allow certain goods into Russia.[19]In September, thetrade minister,Denis Manturov, stated that Russian consumers would be able to buy the newly announcediPhone 14, despiteApplehalting all sales in the country. Apple products were already being re-exported and sold in Russia through the scheme, although at a higher price.[20]
Some Sony PSP video game consoles were imported into the European Economic Area from Japan up to twelve months prior to the European launch. The unusual component of this example is that some importers were selling the console for a higher price than the intended EU price, taking advantage of the relative monopoly they enjoyed. After the release, the console was commonly imported from the USA, where it was retailed for a much lower price.[citation needed]
Another example is smartphones, which were being imported from China, where an average device could be bought[when?] for about $100 while a similar device would retail for about €200 in the EU.[citation needed] | https://en.wikipedia.org/wiki/Parallel_import
Incalculus,logarithmic differentiationordifferentiation by taking logarithmsis a method used todifferentiatefunctionsby employing thelogarithmic derivativeof a functionf,[1](lnf)′=f′f⟹f′=f⋅(lnf)′.{\displaystyle (\ln f)'={\frac {f'}{f}}\quad \implies \quad f'=f\cdot (\ln f)'.}
The technique is often performed in cases where it is easier to differentiate the logarithm of a function rather than the function itself. This usually occurs in cases where the function of interest is composed of a product of a number of parts, so that a logarithmic transformation will turn it into a sum of separate parts (which is much easier to differentiate). It can also be useful when applied to functions raised to the power of variables or functions. Logarithmic differentiation relies on thechain ruleas well as properties oflogarithms(in particular, thenatural logarithm, or the logarithm to the basee) to transform products into sums and divisions into subtractions.[2][3]The principle can be implemented, at least in part, in the differentiation of almost alldifferentiable functions, providing that these functions are non-zero.
The method is used because the properties of logarithms provide avenues to quickly simplify complicated functions to be differentiated.[4]These properties can be manipulated after the taking of natural logarithms on both sides and before the preliminary differentiation. The most commonly used logarithm laws are[3]ln(ab)=ln(a)+ln(b),ln(ab)=ln(a)−ln(b),ln(an)=nln(a).{\displaystyle \ln(ab)=\ln(a)+\ln(b),\qquad \ln \left({\frac {a}{b}}\right)=\ln(a)-\ln(b),\qquad \ln(a^{n})=n\ln(a).}
UsingFaà di Bruno's formula, the n-th order logarithmic derivative is,dndxnlnf(x)=∑m1+2m2+⋯+nmn=nn!m1!m2!⋯mn!⋅(−1)m1+⋯+mn−1(m1+⋯+mn−1)!f(x)m1+⋯+mn⋅∏j=1n(f(j)(x)j!)mj.{\displaystyle {\frac {d^{n}}{dx^{n}}}\ln f(x)=\sum _{m_{1}+2m_{2}+\cdots +nm_{n}=n}{\frac {n!}{m_{1}!\,m_{2}!\,\cdots \,m_{n}!}}\cdot {\frac {(-1)^{m_{1}+\cdots +m_{n}-1}(m_{1}+\cdots +m_{n}-1)!}{f(x)^{m_{1}+\cdots +m_{n}}}}\cdot \prod _{j=1}^{n}\left({\frac {f^{(j)}(x)}{j!}}\right)^{m_{j}}.}Using this, the first four derivatives are,d2dx2lnf(x)=f″(x)f(x)−(f′(x)f(x))2d3dx3lnf(x)=f(3)(x)f(x)−3f′(x)f″(x)f(x)2+2(f′(x)f(x))3d4dx4lnf(x)=f(4)(x)f(x)−4f′(x)f(3)(x)f(x)2−3(f″(x)f(x))2+12f′(x)2f″(x)f(x)3−6(f′(x)f(x))4{\displaystyle {\begin{aligned}{\frac {d^{2}}{dx^{2}}}\ln f(x)&={\frac {f''(x)}{f(x)}}-\left({\frac {f'(x)}{f(x)}}\right)^{2}\\[1ex]{\frac {d^{3}}{dx^{3}}}\ln f(x)&={\frac {f^{(3)}(x)}{f(x)}}-3{\frac {f'(x)f''(x)}{f(x)^{2}}}+2\left({\frac {f'(x)}{f(x)}}\right)^{3}\\[1ex]{\frac {d^{4}}{dx^{4}}}\ln f(x)&={\frac {f^{(4)}(x)}{f(x)}}-4{\frac {f'(x)f^{(3)}(x)}{f(x)^{2}}}-3\left({\frac {f''(x)}{f(x)}}\right)^{2}+12{\frac {f'(x)^{2}f''(x)}{f(x)^{3}}}-6\left({\frac {f'(x)}{f(x)}}\right)^{4}\end{aligned}}}
Anatural logarithmis applied to a product of two functionsf(x)=g(x)h(x){\displaystyle f(x)=g(x)h(x)}to transform the product into a sumln(f(x))=ln(g(x)h(x))=ln(g(x))+ln(h(x)).{\displaystyle \ln(f(x))=\ln(g(x)h(x))=\ln(g(x))+\ln(h(x)).}Differentiating by applying thechainand thesumrules yieldsf′(x)f(x)=g′(x)g(x)+h′(x)h(x),{\displaystyle {\frac {f'(x)}{f(x)}}={\frac {g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}},}and, after rearranging, yields[5]f′(x)=f(x)×{g′(x)g(x)+h′(x)h(x)}=g(x)h(x)×{g′(x)g(x)+h′(x)h(x)}=g′(x)h(x)+g(x)h′(x),{\displaystyle f'(x)=f(x)\times \left\{{\frac {g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}}\right\}=g(x)h(x)\times \left\{{\frac {g'(x)}{g(x)}}+{\frac {h'(x)}{h(x)}}\right\}=g'(x)h(x)+g(x)h'(x),}which is theproduct rulefor derivatives.
Anatural logarithmis applied to a quotient of two functionsf(x)=g(x)h(x){\displaystyle f(x)={\frac {g(x)}{h(x)}}}to transform the division into a subtractionln(f(x))=ln(g(x)h(x))=ln(g(x))−ln(h(x)){\displaystyle \ln(f(x))=\ln \left({\frac {g(x)}{h(x)}}\right)=\ln(g(x))-\ln(h(x))}Differentiating by applying thechainand thesumrules yieldsf′(x)f(x)=g′(x)g(x)−h′(x)h(x),{\displaystyle {\frac {f'(x)}{f(x)}}={\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}},}and, after rearranging, yieldsf′(x)=f(x)×{g′(x)g(x)−h′(x)h(x)}=g(x)h(x)×{g′(x)g(x)−h′(x)h(x)}=g′(x)h(x)−g(x)h′(x)h(x)2,{\displaystyle f'(x)=f(x)\times \left\{{\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}}\right\}={\frac {g(x)}{h(x)}}\times \left\{{\frac {g'(x)}{g(x)}}-{\frac {h'(x)}{h(x)}}\right\}={\frac {g'(x)h(x)-g(x)h'(x)}{h(x)^{2}}},}
which is thequotient rulefor derivatives.
For a function of the formf(x)=g(x)h(x){\displaystyle f(x)=g(x)^{h(x)}}thenatural logarithmtransforms the exponentiation into a productln(f(x))=ln(g(x)h(x))=h(x)ln(g(x)){\displaystyle \ln(f(x))=\ln \left(g(x)^{h(x)}\right)=h(x)\ln(g(x))}Differentiating by applying thechainand theproductrules yieldsf′(x)f(x)=h′(x)ln(g(x))+h(x)g′(x)g(x),{\displaystyle {\frac {f'(x)}{f(x)}}=h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}},}and, after rearranging, yieldsf′(x)=f(x)×{h′(x)ln(g(x))+h(x)g′(x)g(x)}=g(x)h(x)×{h′(x)ln(g(x))+h(x)g′(x)g(x)}.{\displaystyle f'(x)=f(x)\times \left\{h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}}\right\}=g(x)^{h(x)}\times \left\{h'(x)\ln(g(x))+h(x){\frac {g'(x)}{g(x)}}\right\}.}The same result can be obtained by rewritingfin terms ofexpand applying the chain rule.
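As a quick check of this rule on the classic example g(x) = h(x) = x, i.e. f(x) = x^x, the following SymPy snippet (assuming the sympy library is available) compares direct differentiation with the logarithmic-differentiation formula.

```python
import sympy as sp

x = sp.symbols("x", positive=True)
f = x ** x                             # g(x)^h(x) with g(x) = h(x) = x

direct = sp.diff(f, x)                 # direct differentiation
via_log = f * (sp.log(x) + 1)          # f' = f * (h'*ln g + h*g'/g) with g = h = x

print(sp.simplify(direct - via_log))   # 0: the two expressions agree
```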
Usingcapital pi notation, letf(x)=∏i(fi(x))αi(x){\displaystyle f(x)=\prod _{i}(f_{i}(x))^{\alpha _{i}(x)}}be a finite product of functions with functional exponents.
The application of natural logarithms results in (withcapital sigma notation)ln(f(x))=∑iαi(x)⋅ln(fi(x)),{\displaystyle \ln(f(x))=\sum _{i}\alpha _{i}(x)\cdot \ln(f_{i}(x)),}and after differentiation,f′(x)f(x)=∑i[αi′(x)⋅ln(fi(x))+αi(x)⋅fi′(x)fi(x)].{\displaystyle {\frac {f'(x)}{f(x)}}=\sum _{i}\left[\alpha _{i}'(x)\cdot \ln(f_{i}(x))+\alpha _{i}(x)\cdot {\frac {f_{i}'(x)}{f_{i}(x)}}\right].}Rearrange to get the derivative of the original function,f′(x)=∏i(fi(x))αi(x)⏞f(x)×∑i{αi′(x)⋅ln(fi(x))+αi(x)⋅fi′(x)fi(x)}⏞[ln(f(x))]′.{\displaystyle f'(x)=\overbrace {\prod _{i}(f_{i}(x))^{\alpha _{i}(x)}} ^{f(x)}\times \overbrace {\sum _{i}\left\{\alpha _{i}'(x)\cdot \ln(f_{i}(x))+\alpha _{i}(x)\cdot {\frac {f_{i}'(x)}{f_{i}(x)}}\right\}} ^{[\ln(f(x))]'}.} | https://en.wikipedia.org/wiki/Logarithmic_differentiation |
Information sensitivityis the control ofaccess to informationorknowledgethat might result in loss of an advantage or level of security if disclosed to others.[1]Loss, misuse, modification, orunauthorized accessto sensitive information can adversely affect theprivacyor welfare of an individual,trade secretsof a business or even thesecurityand international relations of a nation depending on the level of sensitivity and nature of the information.[2]
This refers to information that is already a matter of public record or knowledge. With regard to government and private organizations, access to or release of such information may be requested by any member of the public, and there are often formal processes laid out for how to do so.[3]The accessibility of government-held public records is an important part of government transparency, accountability to its citizens, and the values of democracy.[4]Public recordsmay furthermore refer to information about identifiable individuals that is not considered confidential, including but not limited to:censusrecords,criminal records,sex offender registryfiles, andvoter registration.
This includes business information that is not subjected to special protection and may be routinely shared with anyone inside or outside of the business.
Confidential informationis used in a general sense to mean sensitive information whose access is subject to restriction, and may refer to information about an individual as well as that which pertains to a business.
However, there are situations in which the release of personal information could have a negative effect on its owner. For example, a person trying to avoid a stalker will be inclined to further restrict access to such personal information. Furthermore, a person'sSSNorSIN, credit card numbers, and other financial information may be considered private if their disclosure might lead tocrimessuch asidentity theftorfraud.
Some types of private information, including records of a person'shealth care, education, and employment may be protected byprivacy laws.[5]Unauthorized disclosure of private information can make the perpetrator liable for civil remedies and may in some cases be subject to criminal penalties.
Even though they are often used interchangeably, personal information is sometimes distinguished from private information, orpersonally identifiable information. The latter is distinct from the former in that Private information can be used to identify a unique individual. Personal information, on the other hand, is information belonging to the private life of an individual that cannot be used to uniquely identify that individual. This can range from an individual's favourite colour, to the details of their domestic life.[6]The latter is a common example of personal information that is also regarded as sensitive, where the individual sharing these details with a trusted listener would prefer for it not to be shared with anyone else, and the sharing of which may result in unwanted consequences.
Confidential business information (CBI) refers to information whose disclosure may harm the business. Such information may includetrade secrets, sales and marketing plans, new product plans, notes associated with patentable inventions, customer and supplier information, financial data, and more.[7]
UnderTSCA, CBI is defined as proprietary information, considered confidential to the submitter, the release of which would cause substantial business injury to the owner. The US EPA may as of 2016, review and determine if a company´s claim is valid.[8]
Classified informationgenerally refers to information that is subject to special security classification regulations imposed by many national governments, the disclosure of which may cause harm to national interests and security. The protocol of restriction imposed upon such information is categorized into a hierarchy of classification levels in almost every national government worldwide, with the most restricted levels containing information that may cause the greatest danger to national security if leaked. Authorized access is granted to individuals on aneed to knowbasis who have also passed the appropriate level ofsecurity clearance. Classified information can be reclassified to a different level or declassified (made available to the public) depending on changes of situation or new intelligence.
Classified information may also be further denoted with the method of communication or access. For example, Protectively Marked "Secret" Eyes Only or Protectively Marked "Secret" Encrypted transfer only. Indicating that the document must be physically read by the recipient and cannot be openly discussed for example over a telephone conversation or that the communication can be sent only using encrypted means. Often mistakenly listed as meaning for the eyes of the intended recipient only[9]the anomaly becomes apparent when the additional tag "Not within windowed area" is also used.
Data privacy concerns exist in various aspects of daily life wherever personal data is stored and collected, such as on theinternet, inmedical records,financial records, andexpression of political opinions. In over 80 countries in the world, personally identifiable information is protected byinformation privacy laws, which outline limits to the collection and use of personally identifiable information by public and private entities. Such laws usually require entities to give clear and unambiguous notice to the individual of the types of data being collected, its reason for collection, and planned uses of the data. In consent-based legal frameworks, explicit consent of the individual is required as well.[10]
The EU passed theGeneral Data Protection Regulation(GDPR), replacing the earlierData Protection Directive. The regulation was adopted on 27 April 2016. It became enforceable from 25 May 2018 after a two-year transition period and, unlike a directive, it does not require national governments to pass any enabling legislation, and is thus directly binding and applicable.[11]"The proposed new EU data protection regime extends the scope of the EU data protection law to all foreign companies processing data of EU residents. It provides for a harmonisation of the data protection regulations throughout the EU, thereby making it easier for non-European companies to comply with these regulations; however, this comes at the cost of a strict data protection compliance regime with severe penalties of up to 4% of worldwide turnover."[12]The GDPR also brings a new set of "digital rights" for EU citizens in an age when the economic value of personal data is increasing in the digital economy.
In Canada, thePersonal Information Protection and Electronic Documents Act(PIPEDA) regulates the collection and use of personal data and electronic documents by public and private organizations. PIPEDA is in effect in all federal and provincial jurisdictions, except provinces where existing privacy laws are determined to be “substantially similar”.[13]
Even though it lacks a unified sensitive-information framework, the United States has implemented a significant amount of privacy legislation pertaining to specific aspects of data privacy, with emphasis on privacy in the healthcare, financial, e-commerce, and educational industries, at both the federal and state levels. Whether regulated or self-regulated, the laws require establishing ways in which access to sensitive information is limited to people with particular roles, in essence requiring the establishment of a "sensitive data domain" model[14] and mechanisms for its protection. Some of the domains have a guideline in the form of pre-defined models, such as the "Safe Harbor" of HIPAA,[15] based on the research of Latanya Sweeney and established privacy industry metrics.
Additionally, many other countries have enacted their own legislature regarding data privacy protection, and more are still in the process of doing so.[16]
Theconfidentialityof sensitive business information is established throughnon-disclosure agreements, a legally binding contract between two parties in a professional relationship. NDAs may be one-way, such as in the case of an employee receiving confidential information about the employing organization, or two-way between businesses needing to share information with one another to accomplish a business goal. Depending on the severity of consequences, a violation of non-disclosure may result in employment loss, loss of business and client contacts, criminal charges or a civil lawsuit, and a hefty sum in damages.[17]When NDAs are signed between employer and employee at the initiation of employment, anon-compete clausemay be a part of the agreement as an added protection of sensitive business information, where the employee agrees not to work for competitors or start their own competing business within a certain time or geographical limit.
Unlike personal and private information, there is no internationally recognized framework protectingtrade secrets, or even an agreed-upon definition of the term “trade secret”.[18]However, many countries and political jurisdictions have taken the initiative to account for the violation of commercial confidentiality in their criminal or civil laws. For example, under the USEconomic Espionage Act of 1996, it is a federal crime in the United States to misappropriate trade secrets with the knowledge that it will benefit a foreign power, or will injure the owner of the trade secret.[19]More commonly, breach of commercial confidentiality falls under civil law, such asin the United Kingdom.[20]In some developing countries, trade secret laws are either non-existent or poorly developed and offer little substantial protection.[21]
In many countries, unauthorized disclosure ofclassified informationis a criminal offence, and may be punishable by fines, prison sentence, or even the death penalty, depending on the severity of the violation.[22][23]For less severe violations, civil sanctions may be imposed, ranging from reprimand to revoking of security clearance and subsequent termination of employment.[24]
Whistleblowingis the intentional disclosure of sensitive information to a third-party with the intention of revealing alleged illegal, immoral, or otherwise harmful actions.[25]There are many examples of present and former government employees disclosing classified information regarding national government misconduct to the public and media, in spite of the criminal consequences that await them.
Espionage, or spying, involves obtaining sensitive information without the permission or knowledge of its holder. The use of spies is a part of national intelligence gathering in most countries, and has been used as a political strategy by nation-states since ancient times. It is unspoken knowledge in international politics that countries are spying on one another all the time, even their allies.[26]
Computer securityisinformation securityapplied to computing and network technology, and is a significant and ever-growing field in computer science. The termcomputer insecurity, on the other hand, is the concept that computer systems are inherently vulnerable to attack, and therefore an evolving arms race between those who exploit existing vulnerabilities in security systems and those who must then engineer new mechanisms of security.
A number of security concerns have arisen in the recent years as increasing amounts of sensitive information at every level have found their primary existence in digital form. At the personal level,credit card fraud,internet fraud, and other forms ofidentity thefthave become widespread concerns that individuals need to be aware of on a day-to-day basis.
The existence of large databases of classified information on computer networks is also changing the face of domestic and international politics.Cyber-warfareandcyber espionageis becoming of increasing importance to the national security and strategy of nations around the world, and it is estimated that 120 nations around the world are currently actively engaged in developing and deploying technology for these purposes.[27]
Philosophies and internet cultures such asopen-source governance,hacktivism, and the popular hacktivist slogan "information wants to be free" reflects some of the cultural shifts in perception towards political and government secrecy. The popular, controversialWikiLeaksis just one of many manifestations of a growing cultural sentiment that is becoming an additional challenge to the security and integrity of classified information.[28] | https://en.wikipedia.org/wiki/Information_sensitivity |
Text normalizationis the process of transformingtextinto a singlecanonical formthat it might not have had before. Normalizing text before storing or processing it allows forseparation of concerns, since input is guaranteed to be consistent before operations are performed on it. Text normalization requires being aware of what type of text is to be normalized and how it is to be processed afterwards; there is no all-purpose normalization procedure.[1]
Text normalization is frequently used when converting text to speech. Numbers, dates, acronyms, and abbreviations are non-standard "words" that need to be pronounced differently depending on context.[2] For example, the string "1995" may be read as "nineteen ninety-five" when it denotes a year but as "one thousand nine hundred ninety-five" when it denotes a quantity, and an abbreviation such as "Dr." may be expanded to "doctor" or "drive" depending on its surroundings.
Text can also be normalized for storing and searching in a database. For instance, if a search for "resume" is to match the word "résumé," then the text would be normalized by removingdiacritical marks; and if "john" is to match "John", the text would be converted to a singlecase. To prepare text for searching, it might also bestemmed(e.g. converting "flew" and "flying" both into "fly"),canonicalized(e.g. consistently usingAmerican or British English spelling), or havestop wordsremoved.
For simple, context-independent normalization, such as removing non-alphanumeric characters or diacritical marks, regular expressions would suffice. For example, the sed script sed -e "s/\s+/ /g" inputfile would normalize runs of whitespace characters into a single space. More complex normalization requires correspondingly complicated algorithms, including domain knowledge of the language and vocabulary being normalized. Among other approaches, text normalization has been modeled as a problem of tokenizing and tagging streams of text[5] and as a special case of machine translation.[6][7]
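A minimal Python sketch of this kind of context-independent normalization (the function name and the particular steps are illustrative assumptions, not taken from the sources cited above) might combine diacritic removal, case folding, and whitespace collapsing:

```python
import re
import unicodedata

def normalize_for_search(text: str) -> str:
    """Illustrative context-independent normalization for search indexing."""
    # Decompose characters (NFD) so that diacritical marks become separate
    # combining code points, then drop the marks: "résumé" -> "resume".
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    # Fold case so that "John" and "john" compare equal.
    folded = stripped.casefold()
    # Collapse runs of whitespace into a single space, like the sed one-liner above.
    return re.sub(r"\s+", " ", folded).strip()

print(normalize_for_search("  Résumé   for  JOHN "))  # -> "resume for john"
```

The Unicode decomposition step is what separates base letters from their combining marks, so the same routine handles both the "résumé"/"resume" and the "John"/"john" cases described above in one pass.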
In the field oftextual scholarshipand the editing of historic texts, the term "normalization" implies a degree of modernization and standardization – for example in the extension ofscribal abbreviationsand the transliteration of the archaicglyphstypically found in manuscript and early printed sources. Anormalized editionis therefore distinguished from adiplomatic edition(orsemi-diplomatic edition), in which some attempt is made to preserve these features. The aim is to strike an appropriate balance between, on the one hand, rigorous fidelity to the source text (including, for example, the preservation of enigmatic and ambiguous elements); and, on the other, producing a new text that will be comprehensible and accessible to the modern reader. The extent of normalization is therefore at the discretion of the editor, and will vary. Some editors, for example, choose to modernize archaic spellings and punctuation, but others do not.[8] | https://en.wikipedia.org/wiki/Text_normalization |
Wi-Fi calling, also calledVoWiFi,[1]refers tomobile phonevoice calls and data that are made overIPnetworks usingWi-Fi, instead of thecell towersprovided bycellular networks.[2]Using this feature, compatible handsets are able to route regular cellular calls through a wireless LAN (Wi-Fi) network withbroadband Internet, while seamlessly changing connections between the two where necessary.[3]This feature makes use of theGeneric Access Network(GAN) protocol, also known asUnlicensed Mobile Access(UMA).[4][5]
Voice over wireless LAN(VoWLAN), alsovoice over Wi‑Fi(VoWiFi[6]), is the use of awirelessbroadband network according to theIEEE 802.11standards for the purpose of vocal conversation. In essence, it isvoice over IP(VoIP) over aWi-Finetwork.
Essentially, GAN/UMA allows cell phone packets to be forwarded to a network access point over the internet, rather than over-the-air usingGSM/GPRS,UMTSor similar. A separate device known as a "GAN Controller" (GANC)[5]receives this data from the Internet and feeds it into the phone network as if it were coming from an antenna on a tower. Calls can be placed from or received to the handset as if it were connected over-the-air directly to the GANC'spoint of presence, making the call invisible to the network as a whole.[7]This can be useful in locations with poor cell coverage where some other form ofinternet accessis available,[2]especially at the home or office. The system offers seamlesshandoff, so the user can move from cell to Wi-Fi and back again with the same invisibility that the cell network offers when moving from tower to tower.[3]
Since the GAN system works over the internet, a UMA-capable handset can connect to its service provider from any location with internet access. This is particularly useful for travelers, who can connect to their provider's GANC and make calls into their home service area from anywhere in the world.[citation needed]This is subject to the quality of the internet connection, however, and may not work well over limited-bandwidth or long-latency connections. To improvequality of service(QoS) in the home or office, some providers also supply a specially programmedwireless access pointthat prioritizes UMA packets.[8]Another benefit of Wi-Fi calling is that mobile calls can be made through the internet using the same native calling client; it does not require third-partyVoice over IP(VoIP) closed services likeWhatsApporSkype, relying instead on the mobile cellular operator.[9]
The GAN protocol extends mobile voice, data and multimedia (IP Multimedia Subsystem/Session Initiation Protocol (IMS/SIP)) applications over IP networks. The latest generation system is named Wi-Fi Calling or VoWiFi by a number of handset manufacturers, including Apple and Samsung, a move that is being mirrored by carriers like T-Mobile US and Vodafone.[citation needed]The service is dependent on IMS, IPsec, IWLAN and ePDG.
The original Release 6 GAN specification supported a 2G (A/Gb) connection from the GANC into the mobile core network (MSC/GSN). Today[when?]all commercial GAN dual-mode handset deployments are based on a 2G connection and all GAN enabled devices are dual-mode 2G/Wi-Fi. The specification, though, defined support for multimode handset operation. Therefore, 3G/2G/Wi-Fi handsets are supported in the standard. The first 3G/UMA devices were announced in the second half of 2008.
A typical UMA/GAN handset will have four modes of operation:
In all cases, the handset scans for GSM cells when it first turns on, to determine its location area. This allows the carrier to route the call to the nearest GANC, set the correct rate plan, and comply with existing roaming agreements.
At the end of 2007, the GAN specification was enhanced to support 3G (Iu) interfaces from the GANC to the mobile core network (MSC/GSN). This native 3G interface can be used for dual-mode handset as well as 3Gfemtocellservice delivery. The GAN release 8 documentation describes these new capabilities.
While UMA is nearly always associated with dual-mode GSM/Wi-Fi services, it is actually a ‘generic’ access network technology that provides a generic method for extending the services and applications in an operator's mobile core (voice, data, IMS) over IP and the public Internet.
GAN defines a secure, managed connection from the mobile core (GANC) to different devices/access points over IP.
A Wi-Fi network that supports voice telephony must be carefully designed in a way that maximizes performance and is able to support the applicable call density.[12]A voice network includes call gateways in addition to the Wi-Fi access points. The gateways provide call handling among wireless IP phones and connections to traditional telephone systems. The Wi-Fi network supporting voice applications must provide much stronger signal coverage than what's needed for most data-only applications. In addition, the Wi-Fi network must provide seamless roaming between access points.
UMA was developed by a group of operator and vendor companies.[13]The initial specifications were published on 2 September 2004. The companies then contributed the specifications to the3rd Generation Partnership Project(3GPP) as part of 3GPP work item "Generic Access to A/Gb interfaces". On 8 April 2005, 3GPP approved specifications for Generic Access to A/Gb interfaces for 3GPP Release 6 and renamed the system to GAN.[14][15]But the termGANis little known outside the 3GPP community, and the termUMAis more common in marketing.[citation needed]
For carriers:
For subscribers:
The first service launch was BT withBT Fusionin the autumn of 2005. The service is based on pre-3GPP GAN standard technology. Initially, BT Fusion used UMA over Bluetooth with phones fromMotorola. From January 2007, it used UMA over 802.11 with phones from Nokia, Motorola and Samsung[18]and was branded as a "Wi-Fi mobile service". BT has since discontinued the service.
On August 28, 2006,TeliaSonerawas the first to launch an 802.11 based UMA service called "Home Free".[19]The service started in Denmark but is no longer offered.
On September 25, 2006, Orange announced its "Unik service", also known as Signal Boost in the UK.[20][21] However, this service is no longer available to new customers in the UK.[22] The announcement, the largest to date, covered more than 60 million of Orange's mobile subscribers in the UK, France, Poland, Spain and the Netherlands.
Cincinnati Bellannounced the first UMA deployment in the United States.[23]The service, originally called CB Home Run, allows users to transfer seamlessly from the Cincinnati Bell cellular network to a home wireless network or to Cincinnati Bell's WiFi HotSpots. It has since been rebranded as Fusion WiFi.
This was followed shortly byT-Mobile USon June 27, 2007.[24]T-Mobile's service, originally named "Hotspot Calling", and rebranded to "Wi-Fi Calling" in 2009, allows users to seamlessly transfer from the T-Mobile cellular network to an 802.11x wireless network or T-Mobile HotSpot in the United States.
In Canada, bothFidoandRogers Wirelesslaunched UMA plans under the names UNO and Rogers Home Calling Zone (later rebranded Talkspot, and subsequently rebranded again as Wi-Fi Calling), respectively, on May 6, 2008.[25]
In Australia, GAN has been implemented by Vodafone, Optus and Telstra.[26]
Since 10 April 2015, Wi-Fi Calling has been available for customers ofEEin the UK initially on theNokia Lumia 640andSamsung Galaxy S6andSamsung Galaxy S6 Edgehandsets.[27]
In March 2016,Vodafone Netherlandslaunched Wi-Fi Calling support along withVoLTE.[28]
Since the autumn of 2016, Wi-Fi Calling / Voice over Wi-Fi has been available for customers of Telenor Denmark, including the ability to perform handover to and from the 4G (VoLTE) network. This is available for several Samsung and Apple handsets.
AT&T[29] and Verizon[30] announced plans to launch Wi-Fi calling in 2015.
Industry organisationUMA Todaytracks all operator activities and handset development.
In September 2015, South African cellular network Cell C launched WiFi Calling on its South African network.[31]
In November 2024, Belgian cellular network Voo launched WiFi Calling on its Belgian network.[32]
GAN/UMA is not the first system to allow the use of unlicensed spectrum to connect handsets to a GSM network. TheGIP/IWPstandard forDECTprovides similar functionality, but requires a more direct connection to the GSM network from the base station. While dual-mode DECT/GSM phones have appeared, these have generally been functionally cordless phones with a GSM handset built-in (or vice versa, depending on your point of view), rather than phones implementing DECT/GIP, due to the lack of suitable infrastructure to hook DECT base-stations supporting GIP to GSM networks on an ad-hoc basis.[33]
GAN/UMA's ability to use the Internet to provide the "last mile" connection to the GSM network solves the major issue that DECT/GIP has faced. Had GIP emerged as a practical standard, the low power usage of DECT technology when idle would have been an advantage compared to GAN.[citation needed]
There is nothing preventing an operator from deploying micro- and pico-cells that use towers that connect with the home network over the Internet. Several companies have developed femtocell systems that do precisely that, broadcasting a "real" GSM or UMTS signal, bypassing the need for special handsets that require 802.11 technology. In theory, such systems are more universal, and again require lower power than 802.11, but their legality will vary depending on the jurisdiction, and will require the cooperation of the operator. Further, users may be charged at higher cell phone rates, even though they are paying for the DSL or other network that ultimately carries their traffic; in contrast, GAN/UMA providers charge reduced rates when subscribers make calls off the provider's cellular phone network.[citation needed] | https://en.wikipedia.org/wiki/Generic_access_network |
The study ofinterdependent networksis a subfield ofnetwork sciencedealing with phenomena caused by the interactions betweencomplex networks. Though there may be a wide variety of interactions between networks,dependencyfocuses on the scenario in which the nodes in one network require support from nodes in another network.[1]
In nature, networks rarely appear in isolation. They are typically elements in larger systems and can have non-trivial effects on one another. For example, infrastructure networks exhibit interdependency to a large degree. The power stations which form the nodes of the power grid require fuel delivered via a network of roads or pipes and are also controlled via the nodes of communications network. Though the transportation network does not depend on the power network to function, the communications network does. Thus the deactivation of a critical number of nodes in either the power network or the communication network can lead to a series of cascading failures across the system with potentially catastrophic repercussions. If the two networks were treated in isolation, this importantfeedbackeffect would not be seen and predictions of network robustness would be greatly overestimated.
Links in a standard network represent connectivity, providing information about how one node can be reached from another. Dependency links represent a need for support from one node to another. This relationship is often, though not necessarily, mutual and thus the links can be directed or undirected. Crucially, a node loses its ability to function as soon as the node it depends on ceases to function, while it may not be so severely affected by losing a node it is merely connected to.
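The cascade described above can be sketched in a few lines of Python. The following is a toy simulation, in the spirit of mutual-dependency models of interdependent networks, assuming NetworkX is available; the graph sizes, connection probability and attack fraction are arbitrary illustrative choices:

```python
import random
import networkx as nx

def cascade(net_a, net_b, initial_fraction_removed=0.4, seed=0):
    """Toy cascade of failures between two one-to-one interdependent networks.

    Node i in net_a depends on node i in net_b and vice versa.  A node survives a
    round only if (a) its dependency partner is still alive and (b) it belongs to
    the largest connected cluster of the surviving part of its own network.
    """
    rng = random.Random(seed)
    alive_a = set(net_a.nodes)
    alive_b = set(net_b.nodes)
    # Initial attack: remove a random fraction of nodes from network A.
    for node in rng.sample(sorted(alive_a), int(initial_fraction_removed * len(alive_a))):
        alive_a.discard(node)

    changed = True
    while changed:
        changed = False
        for alive, other, graph in ((alive_a, alive_b, net_a), (alive_b, alive_a, net_b)):
            # Keep only nodes whose dependency partner is still alive.
            supported = alive & other
            # Keep only the largest connected cluster of the surviving subgraph.
            sub = graph.subgraph(supported)
            components = list(nx.connected_components(sub)) if supported else []
            giant = max(components, key=len) if components else set()
            if giant != alive:
                alive.clear()
                alive.update(giant)
                changed = True
    return len(alive_a), len(alive_b)

a = nx.erdos_renyi_graph(200, 0.03, seed=1)
b = nx.erdos_renyi_graph(200, 0.03, seed=2)
print(cascade(a, b))  # sizes of the mutually connected components after the cascade
```

Because each surviving node must both keep its dependency partner and remain in the largest cluster of its own network, a failure in one network removes nodes from the other, which in turn fragments the first network further, and so on until the cascade stops.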
Instatistical physics,phase transitionscan only appear in many particle systems. Though phase transitions are well known in network science, in single networks they are second order only. With the introduction of internetwork dependency, first order transitions emerge. This is a new phenomenon and one with profound implications for systems engineering. Where system dissolution takes place after steady (if steep) degradation for second order transitions, the existence of a first order transition implies that the system can go from a relatively healthy state to complete collapse with no advanced warning. | https://en.wikipedia.org/wiki/Interdependent_networks |
Instatisticsand, in particular, in the fitting oflinearorlogistic regressionmodels, theelastic netis aregularizedregression method thatlinearly combinestheL1andL2penalties of thelassoandridgemethods.
Nevertheless, elastic net regularization is typically more accurate than both methods with regard to reconstruction.[1]
The elastic net method overcomes the limitations of the LASSO (least absolute shrinkage and selection operator) method, which uses a penalty function based on the ℓ1 norm of the coefficient vector, ‖β‖₁.
Use of this penalty function has several limitations.[2]For example, in the "largep, smalln" case (high-dimensional data with few examples), the LASSO selects at mostnvariables before it saturates. Also if there is a group of highly correlated variables, then the LASSO tends to select one variable from a group and ignore the others. To overcome these limitations, the elastic net adds a quadratic part (‖β‖2{\displaystyle \|\beta \|^{2}}) to the penalty, which when used alone isridge regression(known also asTikhonov regularization).
The estimates from the elastic net method are defined by β̂ ≡ argminβ (‖y − Xβ‖² + λ₂‖β‖² + λ₁‖β‖₁).
The quadratic penalty term makes the loss function strongly convex, and it therefore has a unique minimum. The elastic net method includes the LASSO and ridge regression: in other words, each of them is a special case where λ1=λ,λ2=0{\displaystyle \lambda _{1}=\lambda ,\lambda _{2}=0} or λ1=0,λ2=λ{\displaystyle \lambda _{1}=0,\lambda _{2}=\lambda }. Meanwhile, the naive version of the elastic net method finds an estimator in a two-stage procedure: first, for each fixed λ2{\displaystyle \lambda _{2}} it finds the ridge regression coefficients, and then it performs a LASSO-type shrinkage. This kind of estimation incurs a double amount of shrinkage, which leads to increased bias and poor predictions. To improve the prediction performance, the coefficients of the naive version of the elastic net are sometimes rescaled by multiplying the estimated coefficients by (1+λ2){\displaystyle (1+\lambda _{2})}.[2]
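As a hedged illustration, the following Python sketch fits an elastic net alongside its LASSO and ridge special cases using scikit-learn. Note that scikit-learn parameterizes the penalty with an overall strength alpha and a mixing weight l1_ratio rather than the pair (λ1, λ2) used above, so the correspondence stated in the comment is an approximate mapping, and all numerical values are arbitrary:

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 50, 200                      # "large p, small n" regime discussed above
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = [3.0, -2.0, 1.5, 1.0, -1.0]   # only a few active features
y = X @ beta_true + 0.1 * rng.normal(size=n)

# scikit-learn's penalty is alpha * (l1_ratio * ||b||_1 + 0.5 * (1 - l1_ratio) * ||b||_2^2),
# so l1_ratio=1 recovers the LASSO and small l1_ratio approaches ridge regression.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

for name, model in [("elastic net", enet), ("lasso", lasso), ("ridge", ridge)]:
    nonzero = np.sum(np.abs(model.coef_) > 1e-8)
    print(f"{name}: {nonzero} non-zero coefficients")
```

The ridge fit keeps all coefficients non-zero, while the LASSO and elastic net fits shrink most of them exactly to zero, which is the variable-selection behaviour discussed above.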
Examples of where the elastic net method has been applied are:
It was proven in 2014 that the elastic net can be reduced to the linearsupport vector machine.[7]A similar reduction was previously proven for the LASSO in 2014.[8]The authors showed that for every instance of the elastic net, an artificial binary classification problem can be constructed such that the hyper-plane solution of a linearsupport vector machine(SVM) is identical to the solutionβ{\displaystyle \beta }(after re-scaling). The reduction immediately enables the use of highly optimized SVM solvers for elastic net problems. It also enables the use ofGPUacceleration, which is often already used for large-scale SVM solvers.[9]The reduction is a simple transformation of the original data and regularization constants
into new artificial data instances and a regularization constant that specify a binary classification problem and the SVM regularization constant
Here,y2{\displaystyle y_{2}}consists of binary labels−1,1{\displaystyle {-1,1}}. When2p>n{\displaystyle 2p>n}it is typically faster to solve the linear SVM in the primal, whereas otherwise the dual formulation is faster.
Some authors have referred to the transformation as Support Vector Elastic Net (SVEN), and provided the following MATLAB pseudo-code: | https://en.wikipedia.org/wiki/Elastic_net_regularization |
In signal processing and related disciplines, aliasing is a phenomenon in which a signal reconstructed from samples contains low-frequency components that are not present in the original signal. It occurs when the original signal contains components at frequencies above the Nyquist frequency, fs/2{\textstyle f_{s}/2}, where fs{\textstyle f_{s}} is the sampling frequency (undersampling). Many different frequency components, called aliases of one another, produce exactly the same set of samples, and typical reconstruction methods select the lowest-frequency alias. The term also often refers to the distortion or artifact that results when a signal reconstructed from samples differs from the original continuous signal.
Aliasing can occur in signals sampled in time, for instance indigital audioor thestroboscopic effect, and is referred to astemporal aliasing. Aliasing in spatially sampled signals (e.g.,moiré patternsindigital images) is referred to asspatial aliasing.
Aliasing is generally avoided by applyinglow-pass filtersoranti-aliasing filters(AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitablereconstruction filteringshould then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. Forspatial anti-aliasing, the types of anti-aliasing includefast approximate anti-aliasing(FXAA),multisample anti-aliasing, andsupersampling.
When a digital image is viewed, areconstructionis performed by a display or printer device, and by the eyes and the brain. If the image data is processed incorrectly during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen.
An example of spatial aliasing is themoiré patternobserved in a poorly pixelized image of a brick wall.Spatial anti-aliasingtechniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasingprealiasingand reconstruction aliasingpostaliasing.[1]
Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32,000samples per second(Hz), any frequency components at or above 16,000Hz(theNyquist frequencyfor this sampling rate) will cause aliasing when the music is reproduced by adigital-to-analog converter(DAC). The high frequencies in the analog signal will appear as lower frequencies (wrong alias) in the recorded digital sample and, hence, cannot be reproduced by the DAC. To prevent this, ananti-aliasing filteris used to remove components above the Nyquist frequency prior to sampling.
In video or cinematography, temporal aliasing results from the limited frame rate, and causes thewagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as anegative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming.[2][unreliable source?]
Like the video camera, most sampling schemes are periodic; that is, they have a characteristicsampling frequencyin time or in space. Digital cameras provide a certain number of samples (pixels) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled (digitized) with ananalog-to-digital converter, which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content.
Actual signals have a finite duration and their frequency content, as defined by theFourier transform, has no upper bound. Some amount of aliasing always occurs when such continuous functions over time are sampled. Functions whose frequency content is bounded (bandlimited) have an infinite duration in the time domain. If sampled at a high enough rate, determined by thebandwidth, the original function can, in theory, be perfectly reconstructed from the infinite set of samples.
Sometimes aliasing is used intentionally on signals with no low-frequency content, calledbandpasssignals.Undersampling, which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers exploit aliasing in this way for computational efficiency.[3](SeeSampling (signal processing),Nyquist rate (relative to sampling), andFilter bank.)
Sinusoidsare an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with aFourier seriesortransform). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum.
When sampling a function at frequencyfs(i.e., the sampling interval is1/fs), the following functions of time(t)yield identical sets of samples if the sampling starts fromt=0{\textstyle t=0}such thatt=1fsn{\displaystyle t={\frac {1}{f_{s}}}n}wheren=0,1,2,3{\textstyle n=0,1,2,3}, and so on:
{sin(2π(f+Nfs)t+φ),N=0,±1,±2,±3,…}.{\displaystyle \{\sin(2\pi (f+Nf_{s})t+\varphi ),N=0,\pm 1,\pm 2,\pm 3,\ldots \}.}
A frequency spectrum of the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So, the functions and their frequencies are said to be aliases of each other. Because the sine is an odd function, a sinusoid at a negative alias frequency is indistinguishable from one at the corresponding positive frequency;
thus, we can write all the alias frequencies as positive values:fN(f)≜|f+Nfs|{\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|}. For example, a snapshot of the lower right frame of Fig.2 shows a component at the actual frequencyf{\displaystyle f}and another component at aliasf−1(f){\displaystyle f_{_{-1}}(f)}. Asf{\displaystyle f}increases during the animation,f−1(f){\displaystyle f_{_{-1}}(f)}decreases. The point at which they are equal(f=fs/2){\displaystyle (f=f_{s}/2)}is an axis of symmetry called thefolding frequency, also known asNyquist frequency.
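A quick numerical check of this sample-equivalence, assuming NumPy; the values of fs, f and φ are arbitrary illustrative choices:

```python
import numpy as np

fs = 1000.0            # sampling frequency in Hz
f = 60.0               # frequency of the "true" sinusoid
phi = 0.3              # arbitrary phase
n = np.arange(200)     # sample indices, so t = n / fs

samples_true = np.sin(2 * np.pi * f * n / fs + phi)
for N in (-1, 1, 2):
    alias = f + N * fs                       # -940 Hz, 1060 Hz, 2060 Hz
    samples_alias = np.sin(2 * np.pi * alias * n / fs + phi)
    print(N, np.allclose(samples_true, samples_alias))   # True for every N
```

The arguments of the two sines differ by an integer multiple of 2π at every sample instant, which is exactly why the aliases cannot be told apart from the samples alone.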
Aliasing matters when one attempts to reconstruct the original waveform from its samples. The most common reconstruction technique produces the smallest of thefN(f){\displaystyle f_{_{N}}(f)}frequencies. So, it is usually important thatf0(f){\displaystyle f_{0}(f)}be the unique minimum. A necessary and sufficient condition for that isfs/2>|f|,{\displaystyle f_{s}/2>|f|,}called theNyquist condition. The lower left frame of Fig.2 depicts the typical reconstruction result of the available samples. Untilf{\displaystyle f}exceeds the Nyquist frequency, the reconstruction matches the actual waveform (upper left frame). After that, it is the low frequency alias of the upper frame.
The figures below offer additional depictions of aliasing, due to sampling. A graph of amplitude vs frequency (not time) for a single sinusoid at frequency0.6fsand some of its aliases at0.4fs,1.4fs,and1.6fswould look like the 4 black dots in Fig.3. The red lines depict the paths (loci) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (betweenfs/2andfs). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 andfs.Folding is often observed in practice when viewing thefrequency spectrumof real-valued samples, such as Fig.4.
Complex sinusoids are waveforms whose samples are complex numbers (z=Aeiθ=A(cosθ+isinθ){\textstyle z=Ae^{i\theta }=A(\cos \theta +i\sin \theta )}), and the concept of negative frequency is necessary to distinguish them. In that case, the frequencies of the aliases are given by just: fN(f) = f + N fs. (For real sinusoids, as shown above, all alias frequencies can be written as positive frequencies fN(f)≜|f+Nfs|{\displaystyle f_{_{N}}(f)\triangleq \left|f+Nf_{\rm {s}}\right|} because the sine is an odd function.) Therefore, as f increases from 0 to fs, f−1(f) also increases (from –fs to 0). Consequently, complex sinusoids do not exhibit folding.
When the conditionfs/2 >fis met for the highest frequency component of the original signal, then it is met for all the frequency components, a condition called theNyquist criterion. That is typically approximated by filtering the original signal to attenuate high frequency components before it is sampled. These attenuated high frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called ananti-aliasing filter.
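A minimal sketch of this filter-then-sample idea for downsampling, assuming SciPy; the filter order, cutoff and test frequencies are illustrative choices rather than recommendations:

```python
import numpy as np
from scipy import signal

fs_in, fs_out = 48_000, 8_000          # downsample from 48 kHz to 8 kHz
t = np.arange(0, 0.1, 1 / fs_in)
# A 1 kHz tone (below the new Nyquist of 4 kHz) plus a 6 kHz tone (above it).
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 6_000 * t)

# Low-pass (anti-aliasing) filter with cutoff just below the new Nyquist frequency.
sos = signal.butter(8, 0.9 * (fs_out / 2), btype="low", fs=fs_in, output="sos")
x_filtered = signal.sosfiltfilt(sos, x)

decimation = fs_in // fs_out            # keep every 6th sample
x_naive = x[::decimation]               # the 6 kHz tone aliases down to 2 kHz here
x_safe = x_filtered[::decimation]       # the 6 kHz tone is attenuated before resampling
```

Without the filter, the 6 kHz component reappears as a spurious 2 kHz component in the decimated signal; with it, the offending component is attenuated before the lower-rate samples are taken.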
The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed. But the fidelity of a theoretical reconstruction (via theWhittaker–Shannon interpolation formula) is a customary measure of the effectiveness of sampling.
Historically the termaliasingevolved from radio engineering because of the action ofsuperheterodyne receivers. When the receiver shifts multiple signals down to lower frequencies, fromRFtoIFbyheterodyning, an unwanted signal, from an RF frequency equally far from thelocal oscillator(LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as animageoraliasof the desired signal.
The first written use of the terms "alias" and "aliasing" in signal processing appears to be in a 1949 unpublished Bell Laboratories technical memorandum[4]byJohn TukeyandRichard Hamming. That paper includes an example of frequency aliasing dating back to 1922. The firstpublisheduse of the term "aliasing" in this context is due toBlackmanand Tukey in 1958.[5]In their preface to the Dover reprint[6]of this paper, they point out that the idea of aliasing had been illustrated graphically by Stumpf[7]ten years prior.
The 1949 Bell technical report refers to aliasing as though it is a well-known concept, but does not offer a source for the term.Gwilym JenkinsandMaurice Priestleycredit Tukey with introducing it in this context,[8]though ananalogous concept of aliasinghad been introduced a few years earlier[9]infractional factorial designs. While Tukey did significant work in factorial experiments[10]and was certainly aware of aliasing in fractional designs,[11]it cannot be determined whether his use of "aliasing" in signal processing was consciously inspired by such designs.
Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity.
Spatial aliasing, particular of angular frequency, can occur when reproducing alight fieldor sound field with discrete elements, as in3D displaysorwave field synthesisof sound.[12]
This aliasing is visible in images such as posters withlenticular printing: if they have low angular resolution, then as one moves past them, say from left-to-right, the 2D image does not initially change (so it appears to move left), then as one moves to the next angular image, the image suddenly changes (so it jumps right) – and the frequency and amplitude of this side-to-side movement corresponds to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement), which is the angular aliasing of the 4D light field.
The lack ofparallaxon viewer movement in 2D images and in3-D filmproduced bystereoscopicglasses (in 3D films the effect is called "yawing", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant).
The qualitative effects of aliasing can be heard in the following audio demonstration. Sixsawtooth wavesare played in succession, with the first two sawtooths having afundamental frequencyof 440 Hz (A4), the second two having fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). The sawtooths alternate betweenbandlimited(non-aliased) sawtooths and aliased sawtooths and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform'sFourier seriessuch that no harmonics above theNyquist frequency(11025 Hz = 22050 Hz / 2 here) are present.
The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental.
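A sketch of how such a bandlimited sawtooth can be synthesized from its Fourier series, assuming NumPy; only harmonics below the Nyquist frequency are summed, mirroring the demonstration above:

```python
import numpy as np

fs = 22_050                      # sampling rate used in the demonstration above
f0 = 1_760.0                     # fundamental frequency (A6)
t = np.arange(0, 0.5, 1 / fs)

# Naive sawtooth: evaluated directly, so harmonics above fs/2 fold back as aliases.
naive = 2.0 * (f0 * t % 1.0) - 1.0

# Bandlimited sawtooth: additive synthesis from the Fourier series,
# keeping only harmonics k*f0 strictly below the Nyquist frequency fs/2.
bandlimited = np.zeros_like(t)
k = 1
while k * f0 < fs / 2:
    bandlimited += (-1) ** (k + 1) * np.sin(2 * np.pi * k * f0 * t) / k
    k += 1
bandlimited *= 2 / np.pi
```

At a 1760 Hz fundamental only the first few harmonics fit below 11025 Hz, so the bandlimited version sounds dull but clean, while the naive version contains the folded-back components responsible for the buzzing described above.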
A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points perwavelength, or the wave arrival direction becomes ambiguous.[13] | https://en.wikipedia.org/wiki/Aliasing |
Justification(also calledepistemic justification) is a property ofbeliefsthat fulfill certain norms about what a person should believe.[1][2]Epistemologistsoften identify justification as a component of knowledge distinguishing it from mere true opinion.[3]They study the reasons why someone holds a belief.[4]Epistemologists are concerned with various features of belief, which include the ideas of warrant (a proper justification for holding a belief),knowledge,rationality, andprobability, among others.
Debates surrounding epistemic justification often involve thestructureof justification, including whether there are foundational justified beliefs or whether merecoherenceis sufficient for a system of beliefs to qualify as justified. Another major subject of debate is the sources of justification, which might includeperceptual experience(the evidence of the senses),reason, and authoritativetestimony, among others.
"Justification" involves the reasons why someone holds abeliefthat oneshouldhold based on one's current evidence.[4]Justification is a property of beliefs insofar as they are held blamelessly. In other words, a justified belief is a belief that a person is entitled to hold.
Many philosophers from Plato onward have treated "justified true belief" (JTB) as constituting knowledge. It is particularly associated with a theory discussed in his dialoguesMenoandTheaetetus. While in fact Plato seems to disavow justified true belief as constituting knowledge at the end ofTheaetetus, the claim that Plato unquestioningly accepted this view of knowledge stuck until the proposal of theGettier problem.[4]
The subject of justification has played a major role in the value of knowledge as "justified true belief".[citation needed]Some contemporary epistemologists, such asJonathan Kvanvig, assert that justification isn't necessary in getting to the truth and avoiding errors. Kvanvig attempts to show that knowledge is no more valuable than true belief, and in the process dismissed the necessity of justification due to justification not being connected to the truth.[citation needed]
William P. Alstonidentifies two conceptions of justification.[5]: 15–16One conception is "deontological" justification, which holds that justification evaluates the obligation and responsibility of a person having only true beliefs. This conception implies, for instance, that a person who has made his best effort but is incapable of concluding the correct belief from his evidence is still justified. The deontological conception of justification corresponds toepistemic internalism. Another conception is "truth-conducive" justification, which holds that justification is based on having sufficient evidence or reasons that entail that the belief is at least likely to be true. The truth-conducive conception of justification corresponds toepistemic externalism.
There are several different views as to what entails justification, mostly focusing on the question "How are beliefs justified?". Differenttheories of justificationrequire different conditions before a belief can be considered justified. Theories of justification generally include other aspects of epistemology, such as defining knowledge.
Notable theories of justification include:
Robert Fogelinclaims to detect a suspicious resemblance between the theories of justification andAgrippa's five modes leading to the suspension of belief. He concludes that the modern proponents have made no significant progress in responding to the ancient modes ofPyrrhonian skepticism.[6]
William P. Alstoncriticizes the very idea of a theory of justification. He claims: "There isn't any unique, epistemically crucial property of beliefs picked out by 'justified'. Epistemologists who suppose the contrary have been chasing a will-o'-the-wisp. What has really been happening is this. Different epistemologists have been emphasizing, concentrating on, "pushing" different epistemic desiderata, different features of belief that are positively valuable from the standpoint of the aims of cognition."[5]: 22 | https://en.wikipedia.org/wiki/Theory_of_justification |
Anapostolic nunciatureis a top-leveldiplomatic missionof theHoly Seethat is equivalent to anembassy. However, it neither issues visas nor hasconsulates.
The head of the apostolic nunciature is called anuncio, an ecclesiastical diplomatic title. A papal nuncio (officially known as an apostolic nuncio) is a permanent diplomatic representative (head of diplomatic mission) of the Holy See to a state or to one of two international intergovernmental organizations, theEuropean UnionorASEAN, having the rank of anambassadorextraordinary and plenipotentiary, and the ecclesiastical rank oftitulararchbishop. Papal representatives to other intergovernmental organizations are known as "permanent observers" or "delegates".
In several countries that have diplomatic relations with the Holy See, the apostolic nuncio isipso factothedean of the diplomatic corps. The nuncio is, in such a country, first in theorder of precedenceamong all the diplomats accredited to the country, and he speaks for the diplomatic corps in matters of diplomatic privilege and protocol. Most countries that concede priority to the nuncio are officially Catholic, but some are not.
In addition, the nuncio serves as the liaison between the Holy See and the Church in that particular nation. The nuncio has an important role in the selection of bishops.
The pope accredits diplomats with the following states and other subjects of international law (list as per January 2010):[2]
Algeria,Angola,Benin,Burkina Faso,Burundi,Botswana,Cameroon,Cape Verde,Central African Republic,Chad,Congo (Republic of),Congo (Democratic Republic of),Côte d'Ivoire,Djibouti,Egypt,Equatorial Guinea,Eritrea,Ethiopia,Gabon,Gambia,Ghana,Guinea,Guinea-Bissau,Kenya,Lesotho,Liberia,Libya,Madagascar,Malawi,Mali,Mauritius,Morocco,Mozambique,Namibia,Niger,Nigeria,Rwanda,São Tomé and Príncipe,Sénégal,Seychelles,Sierra Leone,South Africa,Sudan,Swaziland,Tanzania,Togo,Tunisia,Uganda,Zambia,Zimbabwe
Antigua and Barbuda,Argentina, Bahamas, Barbados, Belize,Bolivia,Brazil,Canada,Chile,Colombia,Costa Rica, Cuba, Dominica,Dominican Republic, Ecuador, El Salvador, Grenada, Guatemala, Guyana, Haiti, Honduras, Jamaica,México, Nicaragua, Panama, Paraguay,Peru, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and Grenadines, Suriname,Trinidad and Tobago,United States of America, Uruguay,Venezuela
Bahrain,Bangladesh, Cambodia,Republic of China (Taiwan), East Timor,India,Indonesia,Iran,Iraq,Israel,Japan, Jordan, Kazakhstan, Korea[which?], Kuwait,Kyrgyzstan,Lebanon, Malaysia, Mongolia,Nepal,Pakistan,Philippines, Qatar, Singapore, Sri Lanka, Syria, Tajikistan,Thailand, Turkmenistan, United Arab Emirates, Uzbekistan, Vietnam (Resident), Yemen.
Albania, Andorra, Armenia,Austria, Azerbaijan, Belarus,Belgium, Bosnia-Herzegovina, Bulgaria, Croatia, Cyprus, Czech Republic, Estonia, European Union,France, Georgia,Germany,Great Britain, Greece, Hungary,Ireland,Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Macedonia, Malta, Moldova, Monaco, Montenegro,The Netherlands,Nordic Countries,Poland,Portugal, Romania,Russia, San Marino, Serbia, Slovakia, Slovenia,Spain, Switzerland, Turkey,Ukraine
Australia, the Cook Islands, Fiji, Guam, Kiribati, Marshall Islands, Micronesia (Federated States of), Nauru, New Zealand, Palau, Papua New Guinea, Samoa, Solomon Islands, Tonga, Vanuatu.
An apostolic delegate may be sent to act as a liaison between the Catholic Church and a country with which the Holy See has no diplomatic ties, though not accredited to the government of the country. Apostolic delegates have no formal diplomatic status, though in some countries they have some diplomatic privileges. | https://en.wikipedia.org/wiki/Apostolic_nunciature |
In the area ofsystem identification, adynamical systemisstructurally identifiableif it is possible to infer its unknown parameters by measuring its output over time. This problem arises in many branches of applied mathematics, sincedynamical systems(such as the ones described byordinary differential equations) are commonly utilized to model physical processes and these models contain unknown parameters that are typically estimated using experimental data.[1][2][3]
However, in certain cases, the model structure may not permit a unique solution for this estimation problem, even when the data is continuous and free from noise. To avoid potential issues, it is recommended to verify the uniqueness of the solution in advance, prior to conducting any actual experiments.[4]The lack of structural identifiability implies that there are multiple solutions for the problem of system identification, and the impossibility of distinguishing between these solutions suggests that the system has poor forecasting power as a model.[5]On the other hand,control systemshave been proposed with the goal of rendering the closed-loop system unidentifiable, decreasing its susceptibility to covert attacks targetingcyber-physical systems.[6]
Source[2]
Consider alinear time-invariant systemwith the followingstate-space representation:
x˙1(t)=−θ1x1,x˙2(t)=θ1x1,y(t)=θ2x2,{\displaystyle {\begin{aligned}{\dot {x}}_{1}(t)&=-\theta _{1}x_{1},\\{\dot {x}}_{2}(t)&=\theta _{1}x_{1},\\y(t)&=\theta _{2}x_{2},\end{aligned}}}
and with initial conditions given byx1(0)=θ3{\displaystyle x_{1}(0)=\theta _{3}}andx2(0)=0{\displaystyle x_{2}(0)=0}. The solution of the outputy{\displaystyle y}is
y(t)=θ2θ3e−θ1t(eθ1t−1),{\displaystyle y(t)=\theta _{2}\theta _{3}e^{-\theta _{1}t}\left(e^{\theta _{1}t}-1\right),}
which implies that the parameters θ2{\displaystyle \theta _{2}} and θ3{\displaystyle \theta _{3}} are not structurally identifiable. For instance, the parameters θ1=1,θ2=1,θ3=1{\displaystyle \theta _{1}=1,\theta _{2}=1,\theta _{3}=1} generate the same output as the parameters θ1=1,θ2=2,θ3=0.5{\displaystyle \theta _{1}=1,\theta _{2}=2,\theta _{3}=0.5}.
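A quick numerical confirmation of this, assuming NumPy and using the closed-form output stated above:

```python
import numpy as np

def output(theta1, theta2, theta3, t):
    """Closed-form output y(t) of the two-state example above."""
    return theta2 * theta3 * np.exp(-theta1 * t) * (np.exp(theta1 * t) - 1)

t = np.linspace(0, 5, 100)
y_a = output(1.0, 1.0, 1.0, t)   # (theta1, theta2, theta3) = (1, 1, 1)
y_b = output(1.0, 2.0, 0.5, t)   # same theta1, same product theta2 * theta3
print(np.allclose(y_a, y_b))     # True: the two outputs are indistinguishable
```

Only the product θ2·θ3 enters the output, so any rescaling of one parameter can be compensated by the other without changing the measured trajectory.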
Source[7]
A model of a possible glucose homeostasis mechanism is given by the differential equations[8]
G˙=u(0)+u−(c+siI)G,β˙=β(1.4583⋅10−51+(8.4G)1.7−1.7361⋅10−51+(G8.4)8.5),I˙=pβG2α2+G2−γI,{\displaystyle {\begin{aligned}&{\dot {G}}=u(0)+u-(c+s_{\mathrm {i} }\,I)G,\\&{\dot {\beta }}=\beta \left({\frac {1.4583\cdot 10^{-5}}{1+\left({\frac {8.4}{G}}\right)^{1.7}}}-{\frac {1.7361\cdot 10^{-5}}{1+\left({\frac {G}{8.4}}\right)^{8.5}}}\right),\\&{\dot {I}}=p\,\beta \,{\frac {G^{2}}{\alpha ^{2}+G^{2}}}-\gamma \,I,\end{aligned}}}
where (c, si, p, α, γ) are parameters of the system, and the states are the plasma glucose concentration G, the plasma insulin concentration I, and the beta-cell functional mass β. It is possible to show that the parameters p and si are not structurally identifiable: any two numerical choices of the parameters p and si that have the same product p·si are indistinguishable.[7]
Structural identifiability is assessed by analyzing the dynamical equations of the system, and does not take into account possible noises in the measurement of the output. In contrast,practical non-identifiabilityalso takes noises into account.[1][9]
The notion of structural identifiability is closely related toobservability, which refers to the capacity of inferring the state of the system by measuring the trajectories of the system output. It is also closely related todata informativity, which refers to the proper selection of inputs that enables the inference of the unknown parameters.[10][11]
The (lack of) structural identifiability is also important in the context of dynamical compensation of physiological control systems. These systems should ensure a precise dynamical response despite variations in certain parameters.[12][13]In other words, while in the field of systems identification, unidentifiability is considered a negative property, in the context of dynamical compensation, unidentifiability becomes a desirable property.[13]
Identifiability also appears in the context of inverseoptimal control. Here, one assumes that the data comes from a solution of an optimal control problem with unknown parameters in the objective function. In this setting, identifiability refers to the possibility of inferring the parameters present in the objective function by using the measured data.[14]
There exist many software tools that can be used for analyzing the identifiability of a system, including non-linear systems:[15] | https://en.wikipedia.org/wiki/Structural_identifiability |
Electronic serial numbers(ESNs) were created by the U.S.Federal Communications Commission(FCC) to uniquely identifymobile devices, from the days ofAMPSin the United States starting in the early 1980s. The administrative role was taken over by theTelecommunications Industry Associationin 1997 and is still maintained by them. ESNs are currently mainly used withCDMAphones (and were previously used byAMPSandTDMAphones), compared toInternational Mobile Equipment Identity(IMEI) numbers used by allGSMphones.[1]
The first eight bits of the ESN were originally the manufacturer code, leaving 24 bits for the manufacturer to assign up to 16,777,215 codes to mobiles. To allow more than 256 manufacturers to be identified, the manufacturer code was extended to 14 bits, leaving 18 bits for the manufacturer to assign up to 262,144 codes. Manufacturer code 0x80 is reserved from assignment and is used instead as an eight-bit prefix for pseudo-ESNs (pESN). The remaining 24 bits are the least significant bits of theSHA-1hash of amobile equipment identifier(MEID). Pseudo-ESNs are not guaranteed to be unique (the MEID is the unique identifier if the phone has a pseudo-ESN).
ESNs are often represented as either 11-digit decimal numbers or 8-digit hexadecimal numbers. For the decimal format the first three digits are the decimal representation of the first eight bits (between 00 and 255 inclusive) and the next eight digits are derived from the remaining 24 bits and will be between 00000000 and 16777215 inclusive. The decimal format of pseudo ESNs will therefore begin with 128. The decimal format separately displays eight bit manufacturer codes in the first three digits, but 14 bit codes are not displayed as separate digits. The hexadecimal format displays an ESN as eight digits and also does not separately display 14 bit manufacturer codes which occupy 3.5 hexadecimal digits.
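The conversion from the 8-digit hexadecimal form to the 11-digit decimal display form, and the pseudo-ESN construction described above, can be sketched in Python as follows. The exact byte encoding of the MEID that is fed to SHA-1 is an assumption of this sketch (the governing specification defines the normative rule), and the example MEID is hypothetical:

```python
import hashlib

def esn_hex_to_decimal(esn_hex: str) -> str:
    """Convert an 8-digit hexadecimal ESN to the 11-digit decimal display form."""
    value = int(esn_hex, 16)
    manufacturer = value >> 24          # first 8 bits
    serial = value & 0xFFFFFF           # remaining 24 bits
    return f"{manufacturer:03d}{serial:08d}"

def pseudo_esn(meid_hex: str) -> str:
    """Form a pseudo-ESN: 0x80 followed by the 24 least significant bits of SHA-1(MEID).

    Assumes the MEID is hashed as its raw byte representation; this detail is an
    assumption of the sketch, not taken from the text above.
    """
    digest = hashlib.sha1(bytes.fromhex(meid_hex)).digest()
    low24 = int.from_bytes(digest[-3:], "big")   # least significant 24 bits of the hash
    return f"{0x80:02X}{low24:06X}"

print(esn_hex_to_decimal("8069F123"))   # begins with 128, marking a pseudo-ESN
print(pseudo_esn("A10000009296F2"))     # hypothetical 14-digit hex MEID
```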
As ESNs have essentially run out, a new serial number format,MEID, was created by3GPP2and was first implemented by Verizon in 2006. MEIDs are 56 bits long, the same length as the IMEI and, in fact, MEID was created to be a superset of IMEI. The main difference between MEID and IMEI is that the MEID allows hexadecimal digits while IMEI allows only decimal digits – "IMEI shall consist of decimal digits (0 through 9) only".[2]
The last of the previously unused ESN codes were allocated in November 2008.[3]Applications for assignments were accepted until June 30, 2010 using reclaimed ESN codes, those previously assigned toAMPSorTDMAphones and therefore not present onCDMA2000systems. Reclaimed codes have also been used forUIMIDassignments. Codes are assigned according to industry guidelines.[4]
Although ESN assignments may still occur in the future based on applications received before June 30, 2010, there have not been any assignments made since December 31, 2010. | https://en.wikipedia.org/wiki/Electronic_Serial_Number |
Inmathematics, aclassification theoremanswers theclassificationproblem: "What are the objects of a given type, up to someequivalence?". It gives a non-redundantenumeration: each object is equivalent to exactly one class.
A few issues related to classification are the following.
There exist manyclassification theoremsinmathematics, as described below. | https://en.wikipedia.org/wiki/Classification_theorem |
PDF417is a stacked linearbarcodeformat used in a variety of applications such as transport, identification cards, and inventory management. "PDF" stands forPortable Data File, while "417" signifies that each pattern in the code consists of 4 bars and spaces in a pattern that is 17 units (modules) long.
The PDF417 symbology was invented by Dr. Ynjiun P. Wang atSymbol Technologiesin 1991.[1]It is defined in ISO 15438.
The PDF417 bar code (also called asymbol) consists of 3 to 90 rows, each of which is like a small linear bar code. Each row has:
All rows are the same width; each row has the same number of codewords.
PDF417 uses abase929 encoding. Each codeword represents a number from 0 to 928.
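Since most codeword values are reserved for data (the text below notes that 900 of the 929 values are used for data), a codeword sequence can be thought of as the digits of a large number written in a large base. The following Python sketch only illustrates that positional idea; the actual text, byte and numeric compaction modes of PDF417 group and map input differently:

```python
def to_codewords(value: int, base: int = 900) -> list[int]:
    """Express a non-negative integer as big-endian digits in the given base."""
    digits = []
    while True:
        value, remainder = divmod(value, base)
        digits.append(remainder)
        if value == 0:
            break
    return digits[::-1]

def from_codewords(digits: list[int], base: int = 900) -> int:
    """Inverse of to_codewords."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

n = 123456789012
cw = to_codewords(n)
print(cw)                       # [169, 315, 710, 12] -- each digit is in 0..899
print(from_codewords(cw) == n)  # True
```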
The codewords are represented by patterns of dark (bar) and light (space) regions. Each of these patterns contains four bars and four spaces (where the 4 in the name comes from). The total width is 17 times the width of the narrowest allowed vertical bar (the X dimension); this is where the 17 in the name comes from. Each pattern starts with a bar and ends with a space.
The row height must be at least 3 times the minimum width: Y ≥ 3 X.[2]: 5.8.2
There are three distinct bar–space patterns used to represent each codeword. These patterns are organized into three groups known asclusters. The clusters are labeled 0, 3, and 6. No bar–space pattern is used in more than one cluster. The rows of the symbol cycle through the three clusters, so row 1 uses patterns from cluster 0, row 2 uses cluster 3, row 3 uses cluster 6, and row 4 again uses cluster 0.
The cluster can be determined by an equation:[2]: 5.3.1
K = (b1 − b2 + b3 − b4) mod 9
Where K is the cluster number and the bi refer to the width of the i-th black bar in the symbol character (in X units).
Alternatively:[2]: 76–78
Where Ei is the i-th edge-to-next-same-edge distance. Odd indices are the leading edge of a bar to the leading edge of the next bar; even indices are for the trailing edges.
One purpose of the three clusters is to determine which row (mod 3) the codeword is in. The clusters allow portions of the symbol to be read using a single scan line that may be skewed from the horizontal.[2]: 5.11.1For instance, the scan might start on row 6 at the start of the row but end on row 10. At the beginning of the scan, the scanner sees the constant start pattern, and then it sees symbols in cluster 6. When the skewed scan straddles rows 6 and 7, then the scanner sees noise. When the scan is on row 7, the scanner sees symbols in cluster 0. Consequently, the scanner knows the direction of the skew. By the time the scanner reaches the right, it is on row 10, so it sees cluster 0 patterns. The scanner will also see a constant stop pattern.
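As an illustration of the bar-width relation quoted above, a minimal Python sketch follows; it assumes the bar widths have already been measured in modules, and the example widths are hypothetical rather than taken from the codeword tables:

```python
def cluster_number(bar_widths: list[int]) -> int:
    """Cluster (0, 3 or 6) of a PDF417 codeword, from its four bar widths in modules.

    Uses the bar-width relation quoted above.  For the bar widths of any valid
    codeword pattern the result is 0, 3 or 6; Python's % already returns a
    non-negative value when the difference is negative.
    """
    b1, b2, b3, b4 = bar_widths
    return (b1 - b2 + b3 - b4) % 9

# Hypothetical bar widths (not taken from the actual codeword tables):
print(cluster_number([3, 1, 2, 4]))   # 0
print(cluster_number([5, 1, 2, 3]))   # 3
print(cluster_number([2, 1, 6, 1]))   # 6
```

A decoder would reject any measured pattern whose result is not 0, 3 or 6, which is one way misreads caused by a skewed scan line can be detected.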
Of the 929 available code words, 900 are used for data, and 29 for special functions, such as shifting between major modes. The three major modes (Text, Byte, and Numeric) encode different types of data in different ways, and can be mixed as necessary within a single bar code.
When the PDF417 symbol is created, from 2 to 512 error detection and correction codewords are added. PDF417 usesReed–Solomon error correction. When the symbol is scanned, the maximum number of corrections that can be made is equal to the number of codewords added, but the standard recommends that two codewords be held back to ensure reliability of the corrected information.
PDF417 is a stacked barcode that can be read with a simple linear scan being swept over the symbol.[3]Those linear scans need the left and right columns with the start and stop code words. Additionally, the scan needs to know what row it is scanning, so each row of the symbol must also encode its row number. Furthermore, the reader's line scan won't scan just a row; it will typically start scanning one row, but then cross over to a neighbor and possibly continuing on to cross successive rows. In order to minimize the effect of these crossings, the PDF417 modules are tall and narrow — the height is typically three times the width. Also, each code word must indicate which row it belongs to so crossovers, when they occur, can be detected. The code words are also designed to be delta-decodable, so some code words are redundant. Each PDF data code word represents about 10 bits of information (log2(900) ≈ 9.8), but the printed code word (character) is 17 modules wide. Including a height of 3 modules, a PDF417 code word takes 51 square modules to represent 10 bits. That area does not count other overhead such as the start, stop, row, format, and ECC information.
Other 2D codes, such asDataMatrixandQR, are decoded with image sensors instead of uncoordinated linear scans. Those codes still need recognition and alignment patterns, but they do not need to be as prominent. An 8 bit code word will take 8 square modules (ignoring recognition, alignment, format, and ECC information).
In practice, a PDF417 symbol takes about four times the area of a DataMatrix or QR Code.[4]
In addition to features typical of two dimensional bar codes, PDF417's capabilities include:
The introduction of the ISO/IEC document states:[2]
Manufacturers of bar code equipment and users of bar code technology require publicly available standard symbology specifications to which they can refer when developing equipment and application standards. It is the intent and understanding of ISO/IEC that the symbology presented in this International Standard is entirely in the public domain and free of all user restrictions, licences and fees.
PDF417 is used in many applications by both commercial and government organizations. PDF417 is one of the formats (along withData Matrix) that can be used to printpostageaccepted by theUnited States Postal Service. PDF417 is also used by the airline industry'sBar Coded Boarding Pass(BCBP) standard as the 2D bar code symbolism for paper boarding passes. PDF417 is the standard selected by theDepartment of Homeland Securityas the machine readable zone technology forRealIDcompliantdriver licensesand state issued identification cards. PDF417 barcodes are also included onvisasand border crossing cards issued by theState of Israel. | https://en.wikipedia.org/wiki/PDF417 |
Proof of personhood (PoP)is a means of resisting malicious attacks on peer to peer networks, particularly, attacks that utilize multiple fake identities, otherwise known as aSybil attack. Decentralized online platforms are particularly vulnerable to such attacks by their very nature, as notionally democratic and responsive to large voting blocks. In PoP, each unique human participant obtains one equal unit of voting power, and any associated rewards.
The term is used for cryptocurrencies and blockchains as a parallel toproof of work,proof of stake, and otherconsensusmechanisms which attempt to distribute voting power and rewards to participants proportionately to an investment of resources.
The problem ofSybil attacksusing many virtual identities has been recognized for decades as a fundamental challenge for distributed systems that expect each human user to have only one account or identity.[1]CAPTCHAsattempt to rate-limit automated Sybil attacks by using automatedTuring teststo distinguish humans from machines creating accounts or requesting services. Even when successful in this goal, however, CAPTCHAs allow one human to obtain multiple accounts or shares of a resource simply by solving multiple CAPTCHAs in succession, and thus do not satisfy the one-per-person goal in proof of personhood. Aside from allowing one person to obtain multiple accounts, CAPTCHAs present additional complications. Many users who are visually impaired or have learning disabilities may struggle to complete the puzzles. Additionally, some recently developed AI systems are now able to solve CAPTCHAs automatically.[2]
Distributed systems could require users to authenticate using strong identities verified by a government ortrusted third party, using anidentity verification serviceorself-sovereign identitysystem for example, but strong identification requirements conflict with theprivacyandanonymity, and increasebarriers to entry.[citation needed]One approach proposed to create anonymous but one-per-person credentials for use in distributed systems ispseudonym parties, in which participants gather periodically at in-person events and leverage the fact that humans can physically be in only one place at a time.[3]
In 2014,Vitalik Buterinproposed the problem of creating a "unique identity system" for cryptocurrencies, which would give each human user one and only one anti-Sybil participation token.[4][non-primary source needed]In 2017, the term "proof of personhood" was proposed for an approach based on pseudonym parties.[5]
A variety of approaches to implementing proof of personhood have been proposed, some in experimental deployment.[6]
The approach originally proposed by Borge et al. was to use in-person pseudonym parties as a basis to create anonymous one-per-person tokens periodically without requiring any form of identity verification.[3][5]Theencointerproject adapts this approach by asking participants to meet in small groups simultaneously at randomly-chosen places, to verify each other's physical presence.[7]
One drawback of this approach is the inconvenience to participants of going to designated physical locations at specific times, especially for participants with conflicting responsibilities at those times. Another issue is the challenge of organizingfederated pseudonym partiesin multiple locations simultaneously while allowing each group to verify that all other groups are organized honestly without inflating the number of digital credentials they issue.[citation needed]
Another approach, related to thePGPWeb of Trust, relies on users forming asocial networkto verify and attest to each other's identities.[8]UniqueID incorporates biometric verification into the social network approach.[9]
One criticism of the social network approach is that there is no straightforward way for a participant to verify that a social connection has not created other Sybil identities connected to and verified by other, disjoint sets of social contacts. A related challenge is that Sybil detection based on graph analysis makes certain assumptions about the behavior of a Sybil attacker, and it is not clear that real-world social networks satisfy these assumptions.[10] Finally, graph-based Sybil detection algorithms tend to be able to detect only large, densely-clustered groups of Sybil nodes in a social network, leaving small-scale attacks difficult or impossible to distinguish by graph structure alone from legitimate users' connectivity structures.[citation needed]
Another approach requires participants to have verified identities, but to hide oranonymizethose identities in subsequent use. One criticism of this approach is the privacy and surveillance risks inherent in such databases, especially biometric databases, and the level of trust users must place in the verification service for both Sybil protection and privacy of their identity information. Other critics highlight thatfacial recognition systemsfail on a global scale due to insufficient facial entropy.[citation needed]
Apple, known for implementing a facial recognition feature in the iPhone, attempts to protect users' privacy with the Secure Enclave. The mathematical structure of a user's face captured by the TrueDepth camera does not leave the user's device, increasing the privacy and protection of personal information.[11][12] However, some concerns have been raised about the level of security of facial recognition on these devices. For example, there have been cases where family members were mistakenly recognized as their siblings.[13]
Even with decentralized privacy protections, a criticism of this approach is the inconvenience and cost to users of verifying strong identities, and the risk of potentialexclusionof users who do not readily have or cannot afford the requisite identity documents, are reluctant to participate due to privacy and surveillance concerns, or are wrongly excluded by errors in biometric tests.[14]
To resolve the security concerns over using biometrics to prove human uniqueness, merely encrypting the biometric data with cryptographic models is not enough. For this purpose, Humanode presented a technique that uses Confidential computing and homomorphic encryption along with zero-knowledge proofs to encrypt biometric data in such a way that the original data never leaves the user's device. Instead, the decentralized network is provided only with the information needed to verify, through liveness detection, that a person is a real human being.[citation needed]
Another proposed class of approach extends theCAPTCHAprinciple of using Turing tests to the unique human verification problem. The Idena network, for example, assigns participants to verify each other usingfliptests.[15]Criticisms of this approach include the inconvenience to users of solving Turing tests, and whetherartificial intelligenceanddeepfaketechnologies will soon be able to solve such tests automatically or convince real participants that a synthetic user is human during a verification interaction.[citation needed]
One proposed use for proof of personhood is to ensure that voting power in permissionlessconsensus algorithmsis widely distributed,[5]and to avoid the re-centralization that has been observed inproof of workmining pools,[16]and predicted inproof of stakesystems.[17]
Another proposed use is to facilitatedemocraticgovernance in decentralized online systems, including blockchains and cryptocurrencies, that wish to enforce a "one person, one vote" rule.[18] | https://en.wikipedia.org/wiki/Proof_of_personhood |
Inlinguistics, especially withingenerative grammar,phi features(denoted with the Greek letterφ'phi') are themorphologicalexpression of asemanticprocess in which a word ormorphemevaries with the form of another word or phrase in the same sentence.[1]This variation can includeperson,number,gender, andcase, as encoded in pronominal agreement withnounsandpronouns(the latter are said to consist only of phi-features, containing no lexical head). Several other features are included in the set of phi-features, such as the categorical features ±N (nominal) and ±V (verbal), which can be used to describelexical categoriesand case features.[2]
Phi-features are often thought of as the "silent" features that exist onlexicalheads (or, according to some theories,[3]within the syntactic structure) that are understood for number, gender, person or reflexivity. Due to their silent nature, phi-features are often only understood if someone is anativespeaker of a language, or if the translation includes a gloss of all these features. Many languages exhibit apro-dropphenomenon which means that they rely on other lexical categories to determine the phi-features of the lexical heads.
Chomskyfirst proposed that the N node in a clause carries with it all the features to include person, number and gender.[4]InEnglish, we rely on nouns to determine the phi-features of a word, but some other languages rely on inflections of the different parts of speech to determine person, number and gender of the nominal phrases to which they refer.[5]Adjectives also carry phi-features in some languages, however, they tend to agree in number and gender but rarely for person.[5]
The grammatical term number is the name of the system contrasting singular and plural.[6] In English, number agreement is not expressed through verbal elements to the extent it is in many other languages (though present tense verbs do agree in number with third person subjects). This is partly because English requires overt subjects, and those subjects themselves express number. Instead, number in English is a phi-feature that is inflected on nouns when the nominal phrase is plural. The most common marker is -s, inflected on plural nouns:
- Ducks, fridges, baseballs, cups, books, mirrors, cars, buildings, clowns, bridges, creams....
Some cases of plurality in English require inflection within the noun to express the phi-feature of plurality:
- Men, women, mice, teeth....
In English, neither verbs nor adjectives are inflected to agree with the number feature of the noun they relate to.
Some languages, however, likeSalish Halkomelem, differ from English in their syntactic categorization of plural marking. Halkomelem allows for both marked and unmarked plural forms of its nouns. It also allows for the determiners to be marked or unmarked in their plurality. Plural nouns and determiners in Halkomelem can be freely combined as well, but it appears that if a determiner is plural in a phrase it is sufficient to pluralize the noun that it modifies:[7]
English is a language that does not have nominal phrases belonging to a gender class in which agreement of other elements in the phrase is required. Dutch differentiates only between neuter and common gender.[8] Many other languages of the world do have gender classes. German, for example, has three genders: feminine, masculine and neuter.[8] For a Romance language like Italian, there are feminine and masculine genders. Inflections on the adjectives and determiners are used for gender agreement within the nominal phrase.[9]
English only expresses gender when the pronoun addresses a specific person who semantically belongs to a certain gender. See the table below for Pronominal Case Forms in English under3 sg. fem/masc.
The phi-feature ofcaseis explicit in English only for pronominal forms (see the picture of the table for Pronominal Case Forms in English). English is not a language that has inflectional case forms for proper nouns.
German is a language that exhibits some inflectional case forms on nouns.[10]It also obligatorily displays case forms on its determiners:
Case in terms ofreflexivityis overt in English for every person (see the table for Pronominal Case Forms in English):
myself,yourself,himself, herself, yourselves, ourselves, themselves
The reflexive form of the third person masculine pronoun (himself) does not follow the same case-form pattern as the other reflexive pronouns of English.
In many languages, reflexivity is not overt for person. A prime example is French se. French se is used to express reflexivity for every expression of the third person, regardless of gender or number. It also functions as a middle, an inchoative, an applicative and an impersonal. For this reason, some theories suggest that reflexive phi-features in languages such as French are posited at a silent level of the syntactic structure, between the determiner and the noun. This creates a new "silent" projection to a node specifically for φ-reflexives in French structure.[11]
When phi feature agreement occurs on a verb, it typically marks features relating to grammatical function (subject versus object), person, gender, or case.[12] A key area of verbal agreement is attraction, in which verbs are sensitive to the grammatical number of a nearby noun phrase that is not the expected controller.[13] In other words, agreement is understood to be a relationship between a probing head and a target goal in the probe's c-command domain.[14]
In English, agreement on a verb is triggered by the highest DP in subject position of a finite clause.[15] Overt agreement is found only in the present tense, with a 3rd person singular subject, in which case the verb is suffixed with -s (for example, she walks versus they walk).[16]
In a null-subject language such as Italian, however, pronominal subjects are not required (in fact, in many null-subject languages, producing overt subjects is a sign of non-nativity). These types of "unstressed" pronouns are called clitic pronouns. Italian therefore uses inflectional morphology on the verb that is based on the person features of the nominal subject it agrees with:[9]
Past tense, present continuous tense and future tense are three divisions of the time expression of the action of a verb.[17] In languages such as English, verbs agree with their subjects and not their objects. However, in Mohawk, an Indigenous language of North America, verbs agree with both their subjects and their objects. In Mohawk, even a predicate such as 'big' can be counted as a verb. As shown in 1a), the form of 'big' changes to express a particular grammatical function, tense.[18]
CIS:cislocative
NE:Mohawk prenominal particle
Ra-kowan-v-hne'
MsS-be.big-STAT-PAST
ne
NE
Sak.
Sak
Ra-kowan-v-hne' ne Sak.
MsS-be.big-STAT-PAST NE Sak
‘Sak used to be big.’
This change uses /v-hne/, which can be compared with the verb 'fallen' in sentence 1b).
t-yo-ya't-y'-v-hne'
CIS-NsO-body-fall-STAT-PAST
t-yo-ya't-y'-v-hne'
CIS-NsO-body-fall-STAT-PAST
‘It has fallen.'[18]
Example 2) demonstrates the use of 'big' without inflecting for tense (/v-hne/); instead we see /-v/.
w-a'shar-owan-v.
NsS-knife-be.big-STAT
w-a'shar-owan-v.
NsS-knife-be.big-STAT
"The knife is big; it is a big knife.'[18]
Verb negation in many languages, including English, is not subject to phi-feature agreement. However, there do exist some languages which possess the morphological variance that indicates agreement. As an example, one of these is theIbibiolanguage inNigeria. In sentences with anauxiliary verb, the auxiliary verb is directly affixed with a negation agreement morpheme /í/, in place of the typical subject-verb agreement morphemes /á/ or /é/, and the non-auxiliary verb subject-verb agreement also changes in agreement with the negation, despite the fact that only the auxiliary undergoes negation.[19]This double variation is shown in 1a-b), where I in the sentence gloss indicates the agreement affix /í/. In these examples, the verbs are undergoing morphological changes in order to be in agreement with the negation, regardless of whether they are directly negated or not.
Okon
Okon
i-sʌk-kɔ
I-AUX-NEG
i-di
I-come
Okoni-sʌk-kɔi-di
OkonI-AUX-NEGI-come
'Okon has still not come (in spite of...)'
Okon
Okon
i-sɔp-pɔ
I-do.quickly-NEG
i-dɔk
I-make
ekpat.
bag
Okoni-sɔp-pɔi-dɔk ekpat.
OkonI-do.quickly-NEGI-make bag
'Okon did not make the bag quickly.'[19]
In certain languages, verb agreement can be controlled by formality, as with Korean subject honorific agreement. When the subject of the sentence is a respected person, the honorific suffix -si- occurs after the verb root and the honorific subject case marker -kkeyse is used, as seen in (3a). Moreover, honorific agreement is optional, as seen in (3b).
Seonsaengnim-kkeys-e
teacher-HON.NOM
o-si-ess-ta.
come-HON-past-DEC
Seonsaengnim-kkeys-eo-si-ess-ta.
teacher-HON.NOMcome-HON-past-DEC
‘The teacher came.’
Seonsaengnim-i
teacher-NOM
o-ass-ta.
come-past-DEC
Seonsaengnim-i o-ass-ta.
teacher-NOM come-past-DEC
‘The teacher came.’[20]
There is a debate about whether Korean subject honorific marking is authentic agreement. The debate stems from the fact that in languages where verbs show person agreement, the agreement is obligatory. On this basis, some scholars contend that since honorific marking is optional, it is not an instance of agreement; other scholars argue that it is indeed agreement.[20] A fundamental quality of honorific marking that is often overlooked is that it is possible only with a human referent. Consequently, as shown by the examples in (4), when the subject is non-human, honorific agreement is ungrammatical.[21]
cha-ka
car-NOM
o-(*si)-ess-e.
come-HON-PST-DECL
cha-ka o-(*si)-ess-e.
car-NOM come-HON-PST-DECL
‘The car came.’
kwukhoy-ka
congress-NOM
ku
the
pepan-ul
bill-ACC
simuy-ha-(*si)-ess-e
review-do-HON-PST-DECL
kwukhoy-ka ku pepan-ul simuy-ha-(*si)-ess-e
congress-NOM the bill-ACC review-do-HON-PST-DECL
‘The congress reviewed the bill.’[21]
Phi-features can also be considered the silent features that determine whether a root word is a noun or a verb. This is called the noun-verb distinction of Distributed Morphology. Category classes can be organized by their nominal and verbal characteristics: nouns are [+N, −V], verbs are [−N, +V], adjectives are [+N, +V], and prepositions are [−N, −V]. Definitions for these four categories of predicates have been described as follows:
Averbal predicatehas a predicative use only; anominalpredicate can be used as the head of a term; anadjectivalpredicate can be used as a modifier of a nominal head; aprepositionacts as a term-predicate for which the noun is still the head; anadverbial(not shown below) predicate can be used as a modifier of a non-nominal head.[22]
X-bar theory approaches categorical features in this way: when a head X selects its complement to project to X', the XP that it projects to is then a combination of the head X and all of its categorical features, those being either nominal, verbal, adjectival or prepositional features.[23] It has also been argued that adpositions (a cover term for prepositions and postpositions[24]) are not part of the [+/-N] [+/-V] system as shown above. This is because they resist being part of a single class category, as nouns, verbs and adjectives do. This argument also posits that some adpositions may behave as part of this type of categorization, but not all of them do.
There are three main hypotheses regarding the syntactic categories of words. The first is the Strong Lexical hypothesis, which states that nominal and verbal categories are inherent to the lexical item, so that when a word such as "walk" in English can surface as either a noun or a verb, the choice depends on the speaker's intuitions about its meaning.[25] This means that the root "walk" in English has two separate lexical entries:[26]
walkN <[AP]>an act or instance of going on foot especially for exercise or pleasure[27]
walkV <[DPtheme]>to move along on foot : advance by steps[27]
A second, syntactic analysis states that the category is determined by syntax or context. A root word is inserted into the syntax bare, and the surrounding syntax determines whether it will behave as a verb or a noun. Once the environment has determined its category, morphological inflections also surface on the root according to that category. Typically, if the element before it is a determiner, the word will surface as a noun, and if the element before it is a tense element, the root word will surface as a verb.[29] Italian provides an example: the root cammin- ("walk") can surface as either a noun or a verb. When the preceding element is the determiner D una, the root surfaces as an N and the morphology inflects -ata, which is the correct full orthography for the noun "walk" in Italian. When the root follows a tense element, the morphology instead inflects the suffix -o, so the word surfaces as a verb, and, as discussed above for person agreement, specifically as the first person present form ("I walk").
Syntactic decomposition for categorization of parts of speech includes an explanation for why some verbs have a predictable relationship to their nominal counterparts and why some do not. It says that the predictable forms are denominal and that the unpredictable forms are strictly root-derived.[30] The examples provided are the English verbs hammer and tape. A verb such as hammer is a root-derived form, meaning that it can appear within an NP or within a VP. A denominalized verb, such as tape, must first be converted from an NP because its meaning relies on the semantics of the noun.[31]
How categorical features are determined is still debated, and numerous other theories attempt to explain how words acquire their meanings and surface in a category; no conclusion agreed upon by the linguistic community has yet been reached within categorical distinction theories. This stands in contrast to phi-features of person, number and gender, which are concrete features observed repeatedly in natural languages and which form consistent patterns rooted in rule-based grammar. | https://en.wikipedia.org/wiki/Phi_features |
In mostUnixandUnix-like operating systems, theps(process status) program displays the currently-runningprocesses. The related Unix utilitytopprovides a real-time view of the running processes.
KolibriOSincludes an implementation of thepscommand.[1]Thepscommand has also been ported to theIBM ioperating system.[2]InWindows PowerShell,psis a predefinedcommand aliasfor theGet-Processcmdlet, which essentially serves the same purpose.
Users canpipelinepswith other commands, such aslessto view the process status output one page at a time:
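For example (the exact options vary slightly between systems):
$ ps -e | less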
Users can also utilize thepscommand in conjunction with thegrepcommand (see thepgrepandpkillcommands) to find information about a single process, such as its id:
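For instance, to look up the process IDs of any running processes whose command name contains "nginx" (the process name here is only illustrative):
$ ps -e | grep nginx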
The use ofpgrepsimplifies the syntax and avoids potential race conditions:
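For example, the following prints the matching process IDs directly:
$ pgrep nginx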
To see every process running as root in user format:
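A typical invocation (the exact option set is implementation-dependent) is:
$ ps -U root -u root u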
pshas many options. Onoperating systemsthat support theSUSandPOSIXstandards,pscommonly runs with the options-ef, where "-e" selectsevery process and "-f" chooses the "full" output format. Another common option on these systems is-l, which specifies the "long" output format.
Most systems derived fromBSDfail to accept the SUS and POSIX standard options because of historical conflicts. (For example, the "e" or "-e" option will displayenvironment variables.) On such systems,pscommonly runs with the non-standard optionsaux, where "a" lists all processes on aterminal, including those of other users, "x" lists all processes withoutcontrolling terminalsand "u" adds a column for the controlling user for each process. For maximum compatibility, there is no "-" in front of the "aux". "ps auxww" provides complete information about the process, including all parameters. | https://en.wikipedia.org/wiki/Ps_(Unix) |
In computing and systems design, a loosely coupled system is one in which components are weakly associated with one another, so that each component has, or makes use of, little or no knowledge of the definitions of other separate components.
Components in a loosely coupled system can be replaced with alternative implementations that provide the same services. Components in a loosely coupled system are less constrained to the same platform,language,operating system, or build environment.
If systems are decoupled in time, it is difficult to also provide transactional integrity; additional coordination protocols are required.Data replicationacross different systems provides loose coupling (in availability), but creates issues in maintainingconsistency(data synchronization).
Loose coupling in broaderdistributed systemdesign is achieved by the use of transactions, queues provided bymessage-oriented middleware, and interoperability standards.[2]
Four types of autonomy which promote loose coupling are: reference autonomy, time autonomy, format autonomy, and platform autonomy.[3]
Loose coupling is an architectural principle and design goal in service-oriented architectures. Eleven forms of loose coupling and their tight coupling counterparts have been catalogued in the service-oriented architecture literature.[4]
Enterprise Service Bus(ESB) middleware was invented to achieve loose coupling in multiple dimensions.[5]However, overengineered and mispositioned ESBs can also have the contrary effect and create undesired tight coupling and a central architectural hotspot.
Event-driven architecturealso aims at promoting loose coupling.[6]
Loose coupling ofinterfacescan be enhanced by publishing data in a standard format (such asXMLorJSON).
Loose coupling between program components can be enhanced by using standard data types in parameters. Passing customized data types or objects requires both components to have knowledge of the custom data definition.
Loose coupling of services can be enhanced by reducing the information passed into a service to the key data. For example, a service that sends a letter is most reusable when just the customer identifier is passed and the customer address is obtained within the service. This decouples services because services do not need to be called in a specific order (e.g. GetCustomerAddress, SendLetter).
Coupling refers to the degree of direct knowledge that one component has of another. Loose coupling in computing is interpreted asencapsulationversus non-encapsulation.
An example of tight coupling is when a dependent class contains a pointer directly to a concrete class which provides the required behavior. The dependency cannot be substituted, or its "signature" changed, without requiring a change to the dependent class. Loose coupling occurs when the dependent class contains a pointer only to an interface, which can then be implemented by one or many concrete classes. This is known asdependency inversion. The dependent class's dependency is to a "contract" specified by the interface; a defined list of methods and/or properties that implementing classes must provide. Any class that implements the interface can thus satisfy the dependency of a dependent class without having to change the class. This allows for extensibility in software design. A new class implementing an interface can be written to replace a current dependency in some or all situations, without requiring a change to the dependent class; the new and old classes can be interchanged freely. Strong coupling does not allow this.
This is aUMLdiagram illustrating an example ofloosecoupling between a dependent class and a set of concrete classes, which provide the required behavior:
For comparison, this diagram illustrates the alternative design withstrongcoupling between the dependent class and a provider:
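The same contrast can be sketched in code; the following Python fragment is purely illustrative, and the class and method names are invented for the example rather than taken from any particular system.

from abc import ABC, abstractmethod

# Loose coupling: the dependent class holds a reference only to an interface ("contract").
class MessageSender(ABC):
    @abstractmethod
    def send(self, text: str) -> None: ...

class EmailSender(MessageSender):
    def send(self, text: str) -> None:
        print(f"email: {text}")

class SmsSender(MessageSender):
    def send(self, text: str) -> None:
        print(f"sms: {text}")

class Notifier:
    def __init__(self, sender: MessageSender):
        self.sender = sender  # depends on the contract, not on a concrete class

    def notify(self, text: str) -> None:
        self.sender.send(text)

# Implementations can be interchanged freely without changing Notifier.
Notifier(EmailSender()).notify("hello")
Notifier(SmsSender()).notify("hello")

# Strong coupling, for comparison: the dependency is a concrete class and cannot
# be substituted without editing TightNotifier itself.
class TightNotifier:
    def __init__(self):
        self.sender = EmailSender()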
Computer programming languages having notions of either functions as the core module (seeFunctional programming) or functions as objects provide excellent examples of loosely coupled programming. Functional languages have patterns ofContinuations,Closure, or generators. SeeClojureandLispas examples of functional programming languages. Object-oriented languages likeSmalltalkandRubyhave code blocks, whereasEiffelhas agents. The basic idea is to objectify (encapsulate as an object) a function independent of any other enclosing concept (e.g. decoupling an object function from any direct knowledge of the enclosing object). SeeFirst-class functionfor further insight into functions as objects, which qualifies as one form of first-class function.
For example, in an object-oriented language, when a function of an object is referenced as an object (freeing it from having any knowledge of its enclosing host object) the new function object can be passed, stored, and called at a later time. Recipient objects (to whom these functional objects are given) can safely execute (call) the contained function at their own convenience without any direct knowledge of the enclosing host object. In this way, a program can execute chains or groups of functional objects, while safely decoupled from having any direct reference to the enclosing host object.
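A minimal, purely illustrative Python sketch of this pattern (all names are invented for the example): a bound method is handed to a recipient, which stores and later calls it without any knowledge of the host object.

class Worker:
    def __init__(self):
        self.pending = []

    def request(self, job, callback):
        # The worker stores the callback; it knows nothing about the caller.
        self.pending.append((job, callback))

    def run(self):
        for job, callback in self.pending:
            callback(f"done: {job}")

class Client:
    def on_finished(self, result):
        print("client received", result)

worker = Worker()
client = Client()
worker.request("print letter", client.on_finished)  # bound method passed as a function object
worker.run()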
Phone numbers are an excellent analog and can easily illustrate the degree of this decoupling.
For example, some entity provides another with a phone number to get a particular job done. When the number is called, the calling entity is effectively saying, "Please do this job for me." The decoupling or loose coupling is immediately apparent. The entity receiving the number may have no knowledge of where the number came from (e.g. a reference to the supplier of the number). On the other side, the caller is decoupled from specific knowledge of who they are calling, where they are, and knowing how the receiver of the call operates internally.
Carrying the example a step further, the caller might say to the receiver of the call, "Please do this job for me. Call me back at this number when you are finished." The 'number' being offered to the receiver is referred to as a "Call-back". Again, the loose coupling or decoupled nature of this functional object is apparent. The receiver of the call-back is unaware of what or who is being called. It only knows that it can make the call and decides for itself when to call. In reality, the call-back may not even be to the one who provided the call-back in the first place. This level of indirection is what makes function objects an excellent technology for achieving loosely coupled programs.
Communication between loosely coupled components may be based on a variety of mechanisms, such as the asynchronous communication style mentioned above or the synchronous message passing style.[7]
The degree of loose coupling can be measured by noting the number of changes in data elements that could occur in the sending or receiving systems and determining whether the computers would still continue communicating correctly. These changes include items such as adding new data elements to messages, changing the order of data elements, or changing the names or structures of data elements. | https://en.wikipedia.org/wiki/Loose_coupling |
Thephase-space formulationis a formulation ofquantum mechanicsthat places thepositionandmomentumvariables on equal footing inphase space. The two key features of the phase-space formulation are that the quantum state is described by aquasiprobability distribution(instead of awave function,state vector, ordensity matrix) and operator multiplication is replaced by astar product.
The theory was fully developed byHilbrand Groenewoldin 1946 in his PhD thesis,[1]and independently byJoe Moyal,[2]each building on earlier ideas byHermann Weyl[3]andEugene Wigner.[4]
In contrast to the phase-space formulation, theSchrödinger pictureuses the positionormomentum representations (see alsoposition and momentum space).
The chief advantage of the phase-space formulation is that it makes quantum mechanics appear as similar toHamiltonian mechanicsas possible by avoiding the operator formalism, thereby "'freeing' the quantization of the 'burden' of theHilbert space".[5]This formulation is statistical in nature and offers logical connections between quantum mechanics and classicalstatistical mechanics, enabling a natural comparison between the two (seeclassical limit). Quantum mechanics in phase space is often favored in certainquantum opticsapplications (seeoptical phase space), or in the study ofdecoherenceand a range of specialized technical problems, though otherwise the formalism is less commonly employed in practical situations.[6]
The conceptual ideas underlying the development of quantum mechanics in phase space have branched into mathematical offshoots such as Kontsevich's deformation-quantization (seeKontsevich quantization formula) andnoncommutative geometry.
The phase-space distributionf(x,p)of a quantum state is a quasiprobability distribution. In the phase-space formulation, the phase-space distribution may be treated as the fundamental, primitive description of the quantum system, without any reference to wave functions or density matrices.[7]
There are several different ways to represent the distribution, all interrelated.[8][9]The most noteworthy is theWigner representation,W(x,p), discovered first.[4]Other representations (in approximately descending order of prevalence in the literature) include theGlauber–Sudarshan P,[10][11]Husimi Q,[12]Kirkwood–Rihaczek, Mehta, Rivier, and Born–Jordan representations.[13][14]These alternatives are most useful when the Hamiltonian takes a particular form, such asnormal orderfor the Glauber–Sudarshan P-representation. Since the Wigner representation is the most common, this article will usually stick to it, unless otherwise specified.
The phase-space distribution possesses properties akin to the probability density in a 2n-dimensional phase space. For example, it isreal-valued, unlike the generally complex-valued wave function. We can understand the probability of lying within a position interval, for example, by integrating the Wigner function over all momenta and over the position interval:
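{\displaystyle \operatorname {P} [a\leq x\leq b]=\int _{a}^{b}\int _{-\infty }^{\infty }W(x,p)\,dp\,dx.}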
IfÂ(x,p)is an operator representing an observable, it may be mapped to phase space asA(x,p)through theWigner transform. Conversely, this operator may be recovered by theWeyl transform.
The expectation value of the observable with respect to the phase-space distribution is[2][15]
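{\displaystyle \langle {\hat {A}}\rangle =\iint A(x,p)\,W(x,p)\,dx\,dp.}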
A point of caution, however: despite the similarity in appearance,W(x,p)is not a genuinejoint probability distribution, because regions under it do not represent mutually exclusive states, as required in thethird axiom of probability theory. Moreover, it can, in general, takenegative valueseven for pure states, with the unique exception of (optionallysqueezed)coherent states, in violation of thefirst axiom.
Regions of such negative value are provably "small": they cannot extend to compact regions larger than a few ħ, and hence disappear in the classical limit. They are shielded by the uncertainty principle, which does not allow precise localization within phase-space regions smaller than ħ, and thus renders such "negative probabilities" less paradoxical. If the left side of the equation is to be interpreted as an expectation value in the Hilbert space with respect to an operator, then in the context of quantum optics this equation is known as the optical equivalence theorem. (For details on the properties and interpretation of the Wigner function, see its main article.)
An alternative phase-space approach to quantum mechanics seeks to define a wave function (not just a quasiprobability density) on phase space, typically by means of theSegal–Bargmann transform. To be compatible with the uncertainty principle, the phase-space wave function cannot be an arbitrary function, or else it could be localized into an arbitrarily small region of phase space. Rather, the Segal–Bargmann transform is aholomorphic functionofx+ip{\displaystyle x+ip}. There is a quasiprobability density associated to the phase-space wave function; it is theHusimi Q representationof the position wave function.
The fundamental noncommutative binary operator in the phase-space formulation that replaces the standard operator multiplication is thestar product, represented by the symbol★.[1]Each representation of the phase-space distribution has adifferentcharacteristic star product. For concreteness, we restrict this discussion to the star product relevant to the Wigner–Weyl representation.
For notational convenience, we introduce the notion ofleft and right derivatives. For a pair of functionsfandg, the left and right derivatives are defined as
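{\displaystyle f\,{\overleftarrow {\partial }}_{x}={\frac {\partial f}{\partial x}},\qquad {\overrightarrow {\partial }}_{x}\,g={\frac {\partial g}{\partial x}}.}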
Thedifferential definitionof the star product is
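{\displaystyle f\star g=f\,\exp \!\left({\tfrac {i\hbar }{2}}\left({\overleftarrow {\partial }}_{x}{\overrightarrow {\partial }}_{p}-{\overleftarrow {\partial }}_{p}{\overrightarrow {\partial }}_{x}\right)\right)g,}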
where the argument of the exponential function can be interpreted as apower series.
Additional differential relations allow this to be written in terms of a change in the arguments offandg:
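For instance, in Bopp-shift form,
{\displaystyle f\star g=f\!\left(x+{\tfrac {i\hbar }{2}}{\overrightarrow {\partial }}_{p},\;p-{\tfrac {i\hbar }{2}}{\overrightarrow {\partial }}_{x}\right)g(x,p).}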
It is also possible to define the★-product in a convolution integral form,[16]essentially through theFourier transform:
(Thus, e.g.,[7]Gaussians composehyperbolically:
or
etc.)
The energyeigenstatedistributions are known asstargenstates,★-genstates,stargenfunctions, or★-genfunctions, and the associated energies are known asstargenvaluesor★-genvalues. These are solved, analogously to the time-independentSchrödinger equation, by the★-genvalue equation,[17][18]
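{\displaystyle H(x,p)\star W_{n}(x,p)=E_{n}\,W_{n}(x,p),}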
whereHis the Hamiltonian, a plain phase-space function, most often identical to the classical Hamiltonian.
Thetime evolutionof the phase space distribution is given by a quantum modification ofLiouville flow.[2][9][19]This formula results from applying theWigner transformationto the density matrix version of thequantum Liouville equation,
thevon Neumann equation.
In any representation of the phase space distribution with its associated star product, this is
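{\displaystyle {\frac {\partial f(x,p,t)}{\partial t}}={\frac {1}{i\hbar }}{\bigl (}H\star f-f\star H{\bigr )}}
(in the convention in which the density operator obeys {\displaystyle i\hbar \,\partial {\hat {\rho }}/\partial t=[{\hat {H}},{\hat {\rho }}]}),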
or, for the Wigner function in particular,
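{\displaystyle {\frac {\partial W(x,p,t)}{\partial t}}=-\{\{W,H\}\}=-\{W,H\}+O(\hbar ^{2}),}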
where {{ , }} is theMoyal bracket, the Wigner transform of the quantum commutator, while { , } is the classicalPoisson bracket.[2]
This yields a concise illustration of thecorrespondence principle: this equation manifestly reduces to the classical Liouville equation in the limitħ→ 0. In the quantum extension of the flow, however,the density of points in phase space is not conserved; the probability fluid appears "diffusive" and compressible.[2]The concept of quantum trajectory is therefore a delicate issue here.[20]See the movie for the Morse potential, below, to appreciate the nonlocality of quantum phase flow.
N.B. Given the restrictions placed by the uncertainty principle on localization,Niels Bohrvigorously denied the physical existence of such trajectories on the microscopic scale. By means of formal phase-space trajectories, the time evolution problem of the Wigner function can be rigorously solved using the path-integral method[21]and themethod of quantum characteristics,[22]although there are severe practical obstacles in both cases.
The Hamiltonian for the simple harmonic oscillator in one spatial dimension in the Wigner–Weyl representation is
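{\displaystyle H={\tfrac {1}{2}}m\omega ^{2}x^{2}+{\frac {p^{2}}{2m}}.}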
The★-genvalue equation for thestaticWigner function then reads
Consider, first, the imaginary part of the★-genvalue equation,
This implies that one may write the★-genstates as functions of a single argument:
With this change of variables, it is possible to write the real part of the★-genvalue equation in the form of a modified Laguerre equation (notHermite's equation!), the solution of which involves theLaguerre polynomialsas[18]
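{\displaystyle W_{n}(x,p)={\frac {(-1)^{n}}{\pi \hbar }}\,e^{-2H/\hbar \omega }\,L_{n}\!\left({\frac {4H}{\hbar \omega }}\right),}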
introduced by Groenewold,[1]with associated★-genvalues
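{\displaystyle E_{n}=\hbar \omega \left(n+{\tfrac {1}{2}}\right),\qquad n=0,1,2,\ldots }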
For the harmonic oscillator, the time evolution of an arbitrary Wigner distribution is simple. An initialW(x,p;t= 0) =F(u)evolves by the above evolution equation driven by the oscillator Hamiltonian given, by simplyrigidly rotating in phase space,[1]
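{\displaystyle W(x,p;t)=W\!\left(x\cos \omega t-{\frac {p}{m\omega }}\sin \omega t,\;p\cos \omega t+m\omega x\sin \omega t;\,0\right)}
(with the sense of rotation fixed by the sign conventions above).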
Typically, a "bump" (or coherent state) of energyE≫ħωcan represent a macroscopic quantity and appear like a classical object rotating uniformly in phase space, a plain mechanical oscillator (see the animated figures). Integrating over all phases (starting positions att= 0) of such objects, a continuous "palisade", yields a time-independent configuration similar to the above static★-genstatesF(u), an intuitive visualization of theclassical limitfor large-action systems.[6]
The eigenfunctions can also be characterized by being rotationally symmetric (thus time-invariant) pure states. That is, they are functions of formW(x,p)=f(x2+p2){\displaystyle W(x,p)=f({\sqrt {x^{2}+p^{2}}})}that satisfyW⋆W=(2πℏ)−1W{\displaystyle W\star W=(2\pi \hbar )^{-1}W}.
Suppose a particle is initially in a minimally uncertainGaussian state, with the expectation values of position and momentum both centered at the origin in phase space. The Wigner function for such a state propagating freely is
whereαis a parameter describing the initial width of the Gaussian, andτ=m/α2ħ.
Initially, the position and momenta are uncorrelated. Thus, in 3 dimensions, we expect the position and momentum vectors to be twice as likely to be perpendicular to each other as parallel.
However, the position and momentum become increasingly correlated as the state evolves, because portions of the distribution farther from the origin in position require a larger momentum to be reached: asymptotically,
(This relative"squeezing"reflects the spreading of the freewave packetin coordinate space.)
Indeed, it is possible to show that the kinetic energy of the particle becomes asymptotically radial only, in agreement with the standard
quantum-mechanical notion of the ground-state nonzero angular momentum specifying orientation independence:[24]
TheMorse potentialis used to approximate the vibrational structure of a diatomic molecule.
Tunnelingis a hallmark quantum effect where a quantum particle, not having sufficient energy to fly above, still goes through a barrier. This effect does not exist in classical mechanics. | https://en.wikipedia.org/wiki/Phase_space_formulation |
Inmathematics, more specifically incategory theory, auniversal propertyis a property that characterizesup toanisomorphismthe result of some constructions. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of theintegersfrom thenatural numbers, of therational numbersfrom the integers, of thereal numbersfrom the rational numbers, and ofpolynomial ringsfrom thefieldof their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that allconstructions of real numbersare equivalent: it suffices to prove that they satisfy the same universal property.
Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see § Formal definition, below). Universal morphisms can also be thought of more abstractly as initial or terminal objects of a comma category (see § Connection with comma categories, below).
Universal properties occur almost everywhere in mathematics, and the concept allows general facts about universal properties to be used to prove easily results that would otherwise need tedious verification. For example, given a commutative ring R, the field of fractions of the quotient ring of R by a prime ideal p can be identified with the residue field of the localization of R at p; that is Rp/pRp≅Frac(R/p){\displaystyle R_{p}/pR_{p}\cong \operatorname {Frac} (R/p)} (all these constructions can be defined by universal properties).
Other objects that can be defined by universal properties include: allfree objects,direct productsanddirect sums,free groups,free lattices,Grothendieck group,completion of a metric space,completion of a ring,Dedekind–MacNeille completion,product topologies,Stone–Čech compactification,tensor products,inverse limitanddirect limit,kernelsandcokernels,quotient groups,quotient vector spaces, and otherquotient spaces.
Before giving a formal definition of universal properties, we offer some motivation for studying such constructions.
To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense to one at first, but will become clear when one reconciles it with concrete examples.
LetF:C→D{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}be a functor between categoriesC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}. In what follows, letX{\displaystyle X}be an object ofD{\displaystyle {\mathcal {D}}},A{\displaystyle A}andA′{\displaystyle A'}be objects ofC{\displaystyle {\mathcal {C}}}, andh:A→A′{\displaystyle h:A\to A'}be a morphism inC{\displaystyle {\mathcal {C}}}.
Then, the functorF{\displaystyle F}mapsA{\displaystyle A},A′{\displaystyle A'}andh{\displaystyle h}inC{\displaystyle {\mathcal {C}}}toF(A){\displaystyle F(A)},F(A′){\displaystyle F(A')}andF(h){\displaystyle F(h)}inD{\displaystyle {\mathcal {D}}}.
Auniversal morphism fromX{\displaystyle X}toF{\displaystyle F}is a unique pair(A,u:X→F(A)){\displaystyle (A,u:X\to F(A))}inD{\displaystyle {\mathcal {D}}}which has the following property, commonly referred to as auniversal property:
For any morphism of the formf:X→F(A′){\displaystyle f:X\to F(A')}inD{\displaystyle {\mathcal {D}}}, there exists auniquemorphismh:A→A′{\displaystyle h:A\to A'}inC{\displaystyle {\mathcal {C}}}such that the following diagramcommutes:
We candualizethis categorical concept. Auniversal morphism fromF{\displaystyle F}toX{\displaystyle X}is a unique pair(A,u:F(A)→X){\displaystyle (A,u:F(A)\to X)}that satisfies the following universal property:
For any morphism of the formf:F(A′)→X{\displaystyle f:F(A')\to X}inD{\displaystyle {\mathcal {D}}}, there exists auniquemorphismh:A′→A{\displaystyle h:A'\to A}inC{\displaystyle {\mathcal {C}}}such that the following diagram commutes:
Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory.
In either case, we say that the pair(A,u){\displaystyle (A,u)}which behaves as above satisfies a universal property.
Universal morphisms can be described more concisely as initial and terminal objects in acomma category(i.e. one where morphisms are seen as objects in their own right).
Let F:C→D{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}} be a functor and X{\displaystyle X} an object of D{\displaystyle {\mathcal {D}}}. Then recall that the comma category (X↓F){\displaystyle (X\downarrow F)} is the category where objects are pairs of the form (B,f:X→F(B)){\displaystyle (B,f:X\to F(B))}, with B{\displaystyle B} an object of C{\displaystyle {\mathcal {C}}}, and a morphism from (B,f){\displaystyle (B,f)} to (B′,f′){\displaystyle (B',f')} is a morphism h:B→B′{\displaystyle h:B\to B'} in C{\displaystyle {\mathcal {C}}} such that F(h)∘f=f′{\displaystyle F(h)\circ f=f'}.
Now suppose that the object(A,u:X→F(A)){\displaystyle (A,u:X\to F(A))}in(X↓F){\displaystyle (X\downarrow F)}is initial. Then
for every object(A′,f:X→F(A′)){\displaystyle (A',f:X\to F(A'))}, there exists a unique morphismh:A→A′{\displaystyle h:A\to A'}such that the following diagram commutes.
Note that the equality here simply means the diagrams are the same. Also note that the diagram on the right side of the equality is the exact same as the one offered in defining auniversal morphism fromX{\displaystyle X}toF{\displaystyle F}. Therefore, we see that a universal morphism fromX{\displaystyle X}toF{\displaystyle F}is equivalent to an initial object in the comma category(X↓F){\displaystyle (X\downarrow F)}.
Conversely, recall that the comma category (F↓X){\displaystyle (F\downarrow X)} is the category where objects are pairs of the form (B,f:F(B)→X){\displaystyle (B,f:F(B)\to X)}, with B{\displaystyle B} an object of C{\displaystyle {\mathcal {C}}}, and a morphism from (B,f){\displaystyle (B,f)} to (B′,f′){\displaystyle (B',f')} is a morphism h:B→B′{\displaystyle h:B\to B'} in C{\displaystyle {\mathcal {C}}} such that f′∘F(h)=f{\displaystyle f'\circ F(h)=f}.
Suppose(A,u:F(A)→X){\displaystyle (A,u:F(A)\to X)}is a terminal object in(F↓X){\displaystyle (F\downarrow X)}. Then for every object(A′,f:F(A′)→X){\displaystyle (A',f:F(A')\to X)},
there exists a unique morphismh:A′→A{\displaystyle h:A'\to A}such that the following diagrams commute.
The diagram on the right side of the equality is the same diagram pictured when defining auniversal morphism fromF{\displaystyle F}toX{\displaystyle X}. Hence, a universal morphism fromF{\displaystyle F}toX{\displaystyle X}corresponds with a terminal object in the comma category(F↓X){\displaystyle (F\downarrow X)}.
Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction.
LetC{\displaystyle {\mathcal {C}}}be thecategory of vector spacesK{\displaystyle K}-Vectover afieldK{\displaystyle K}and letD{\displaystyle {\mathcal {D}}}be the category ofalgebrasK{\displaystyle K}-AlgoverK{\displaystyle K}(assumed to beunitalandassociative). Let
be theforgetful functorwhich assigns to each algebra its underlying vector space.
Given any vector space V{\displaystyle V} over K{\displaystyle K} we can construct the tensor algebra T(V){\displaystyle T(V)}. The tensor algebra is characterized by the fact that any linear map from V{\displaystyle V} into (the underlying vector space of) a K{\displaystyle K}-algebra extends uniquely to an algebra homomorphism from T(V){\displaystyle T(V)}.
This statement is an initial property of the tensor algebra since it expresses the fact that the pair(T(V),i){\displaystyle (T(V),i)}, wherei:V→U(T(V)){\displaystyle i:V\to U(T(V))}is the inclusion map, is a universal morphism from the vector spaceV{\displaystyle V}to the functorU{\displaystyle U}.
Since this construction works for any vector spaceV{\displaystyle V}, we conclude thatT{\displaystyle T}is a functor fromK{\displaystyle K}-VecttoK{\displaystyle K}-Alg. This means thatT{\displaystyle T}isleft adjointto the forgetful functorU{\displaystyle U}(see the section below onrelation to adjoint functors).
Acategorical productcan be characterized by a universal construction. For concreteness, one may consider theCartesian productinSet, thedirect productinGrp, or theproduct topologyinTop, where products exist.
Let X{\displaystyle X} and Y{\displaystyle Y} be objects of a category C{\displaystyle {\mathcal {C}}} with finite products. The product of X{\displaystyle X} and Y{\displaystyle Y} is an object X{\displaystyle X}×Y{\displaystyle Y} together with two morphisms π1:X×Y→X{\displaystyle \pi _{1}:X\times Y\to X} and π2:X×Y→Y{\displaystyle \pi _{2}:X\times Y\to Y}
such that for any other objectZ{\displaystyle Z}ofC{\displaystyle {\mathcal {C}}}and morphismsf:Z→X{\displaystyle f:Z\to X}andg:Z→Y{\displaystyle g:Z\to Y}there exists a unique morphismh:Z→X×Y{\displaystyle h:Z\to X\times Y}such thatf=π1∘h{\displaystyle f=\pi _{1}\circ h}andg=π2∘h{\displaystyle g=\pi _{2}\circ h}.
To understand this characterization as a universal property, take the categoryD{\displaystyle {\mathcal {D}}}to be theproduct categoryC×C{\displaystyle {\mathcal {C}}\times {\mathcal {C}}}and define thediagonal functor
byΔ(X)=(X,X){\displaystyle \Delta (X)=(X,X)}andΔ(f:X→Y)=(f,f){\displaystyle \Delta (f:X\to Y)=(f,f)}. Then(X×Y,(π1,π2)){\displaystyle (X\times Y,(\pi _{1},\pi _{2}))}is a universal morphism fromΔ{\displaystyle \Delta }to the object(X,Y){\displaystyle (X,Y)}ofC×C{\displaystyle {\mathcal {C}}\times {\mathcal {C}}}: if(f,g){\displaystyle (f,g)}is any morphism from(Z,Z){\displaystyle (Z,Z)}to(X,Y){\displaystyle (X,Y)}, then it must equal
a morphismΔ(h:Z→X×Y)=(h,h){\displaystyle \Delta (h:Z\to X\times Y)=(h,h)}fromΔ(Z)=(Z,Z){\displaystyle \Delta (Z)=(Z,Z)}toΔ(X×Y)=(X×Y,X×Y){\displaystyle \Delta (X\times Y)=(X\times Y,X\times Y)}followed by(π1,π2){\displaystyle (\pi _{1},\pi _{2})}. As a commutative diagram:
For the example of the Cartesian product in Set, the morphism (π1,π2){\displaystyle (\pi _{1},\pi _{2})} comprises the two projections π1(x,y)=x{\displaystyle \pi _{1}(x,y)=x} and π2(x,y)=y{\displaystyle \pi _{2}(x,y)=y}. Given any set Z{\displaystyle Z} and functions f,g{\displaystyle f,g} the unique map such that the required diagram commutes is given by h=⟨f,g⟩(z)=(f(z),g(z)){\displaystyle h=\langle f,g\rangle (z)=(f(z),g(z))}.[3]
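In this Set-theoretic case the mediating morphism can be written down explicitly; the following short Python fragment is purely illustrative of the pairing map and the projections.

# Projections from the Cartesian product X x Y (represented as pairs)
def pi1(xy):
    x, y = xy
    return x

def pi2(xy):
    x, y = xy
    return y

def pair(f, g):
    # The unique mediating map <f, g>: Z -> X x Y with pi1 o <f, g> = f and pi2 o <f, g> = g
    return lambda z: (f(z), g(z))

# Example: Z = integers, f(z) = z + 1, g(z) = z * z
h = pair(lambda z: z + 1, lambda z: z * z)
assert pi1(h(3)) == 4 and pi2(h(3)) == 9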
Categorical products are a particular kind oflimitin category theory. One can generalize the above example to arbitrary limits and colimits.
LetJ{\displaystyle {\mathcal {J}}}andC{\displaystyle {\mathcal {C}}}be categories withJ{\displaystyle {\mathcal {J}}}asmallindex categoryand letCJ{\displaystyle {\mathcal {C}}^{\mathcal {J}}}be the correspondingfunctor category. Thediagonal functor
is the functor that maps each objectN{\displaystyle N}inC{\displaystyle {\mathcal {C}}}to the constant functorΔ(N):J→C{\displaystyle \Delta (N):{\mathcal {J}}\to {\mathcal {C}}}(i.e.Δ(N)(X)=N{\displaystyle \Delta (N)(X)=N}for eachX{\displaystyle X}inJ{\displaystyle {\mathcal {J}}}andΔ(N)(f)=1N{\displaystyle \Delta (N)(f)=1_{N}}for eachf:X→Y{\displaystyle f:X\to Y}inJ{\displaystyle {\mathcal {J}}}) and each morphismf:N→M{\displaystyle f:N\to M}inC{\displaystyle {\mathcal {C}}}to the natural transformationΔ(f):Δ(N)→Δ(M){\displaystyle \Delta (f):\Delta (N)\to \Delta (M)}inCJ{\displaystyle {\mathcal {C}}^{\mathcal {J}}}defined as, for every objectX{\displaystyle X}ofJ{\displaystyle {\mathcal {J}}}, the componentΔ(f)(X):Δ(N)(X)→Δ(M)(X)=f:N→M{\displaystyle \Delta (f)(X):\Delta (N)(X)\to \Delta (M)(X)=f:N\to M}atX{\displaystyle X}. In other words, the natural transformation is the one defined by having constant componentf:N→M{\displaystyle f:N\to M}for every object ofJ{\displaystyle {\mathcal {J}}}.
Given a functorF:J→C{\displaystyle F:{\mathcal {J}}\to {\mathcal {C}}}(thought of as an object inCJ{\displaystyle {\mathcal {C}}^{\mathcal {J}}}), thelimitofF{\displaystyle F}, if it exists, is nothing but a universal morphism fromΔ{\displaystyle \Delta }toF{\displaystyle F}. Dually, thecolimitofF{\displaystyle F}is a universal morphism fromF{\displaystyle F}toΔ{\displaystyle \Delta }.
Defining a quantity does not guarantee its existence. Given a functorF:C→D{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}and an objectX{\displaystyle X}ofD{\displaystyle {\mathcal {D}}},
there may or may not exist a universal morphism fromX{\displaystyle X}toF{\displaystyle F}. If, however, a universal morphism(A,u){\displaystyle (A,u)}does exist, then it is essentially unique.
Specifically, it is uniqueup toauniqueisomorphism: if(A′,u′){\displaystyle (A',u')}is another pair, then there exists a unique isomorphismk:A→A′{\displaystyle k:A\to A'}such thatu′=F(k)∘u{\displaystyle u'=F(k)\circ u}.
This is easily seen by substituting(A,u′){\displaystyle (A,u')}in the definition of a universal morphism.
It is the pair(A,u){\displaystyle (A,u)}which is essentially unique in this fashion. The objectA{\displaystyle A}itself is only unique up to isomorphism. Indeed, if(A,u){\displaystyle (A,u)}is a universal morphism andk:A→A′{\displaystyle k:A\to A'}is any isomorphism then the pair(A′,u′){\displaystyle (A',u')}, whereu′=F(k)∘u{\displaystyle u'=F(k)\circ u}is also a universal morphism.
The definition of a universal morphism can be rephrased in a variety of ways. LetF:C→D{\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}be a functor and letX{\displaystyle X}be an object ofD{\displaystyle {\mathcal {D}}}. Then the following statements are equivalent:
(F(∙)∘u)B(f:A→B):X→F(B)=F(f)∘u:X→F(B){\displaystyle (F(\bullet )\circ u)_{B}(f:A\to B):X\to F(B)=F(f)\circ u:X\to F(B)}
for each objectB{\displaystyle B}inC.{\displaystyle {\mathcal {C}}.}
The dual statements are also equivalent:
(u∘F(∙))B(f:B→A):F(B)→X=u∘F(f):F(B)→X{\displaystyle (u\circ F(\bullet ))_{B}(f:B\to A):F(B)\to X=u\circ F(f):F(B)\to X}
for each objectB{\displaystyle B}inC.{\displaystyle {\mathcal {C}}.}
Suppose(A1,u1){\displaystyle (A_{1},u_{1})}is a universal morphism fromX1{\displaystyle X_{1}}toF{\displaystyle F}and(A2,u2){\displaystyle (A_{2},u_{2})}is a universal morphism fromX2{\displaystyle X_{2}}toF{\displaystyle F}.
By the universal property of universal morphisms, given any morphismh:X1→X2{\displaystyle h:X_{1}\to X_{2}}there exists a unique morphismg:A1→A2{\displaystyle g:A_{1}\to A_{2}}such that the following diagram commutes:
IfeveryobjectXi{\displaystyle X_{i}}ofD{\displaystyle {\mathcal {D}}}admits a universal morphism toF{\displaystyle F}, then the assignmentXi↦Ai{\displaystyle X_{i}\mapsto A_{i}}andh↦g{\displaystyle h\mapsto g}defines a functorG:D→C{\displaystyle G:{\mathcal {D}}\to {\mathcal {C}}}. The mapsui{\displaystyle u_{i}}then define anatural transformationfrom1D{\displaystyle 1_{\mathcal {D}}}(the identity functor onD{\displaystyle {\mathcal {D}}}) toF∘G{\displaystyle F\circ G}. The functors(F,G){\displaystyle (F,G)}are then a pair ofadjoint functors, withG{\displaystyle G}left-adjoint toF{\displaystyle F}andF{\displaystyle F}right-adjoint toG{\displaystyle G}.
Similar statements apply to the dual situation of terminal morphisms fromF{\displaystyle F}. If such morphisms exist for everyX{\displaystyle X}inC{\displaystyle {\mathcal {C}}}one obtains a functorG:C→D{\displaystyle G:{\mathcal {C}}\to {\mathcal {D}}}which is right-adjoint toF{\displaystyle F}(soF{\displaystyle F}is left-adjoint toG{\displaystyle G}).
Indeed, all pairs of adjoint functors arise from universal constructions in this manner. LetF{\displaystyle F}andG{\displaystyle G}be a pair of adjoint functors with unitη{\displaystyle \eta }and co-unitϵ{\displaystyle \epsilon }(see the article onadjoint functorsfor the definitions). Then we have a universal morphism for each object inC{\displaystyle {\mathcal {C}}}andD{\displaystyle {\mathcal {D}}}:
Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object ofC{\displaystyle {\mathcal {C}}}(equivalently, every object ofD{\displaystyle {\mathcal {D}}}).
Universal properties of various topological constructions were presented byPierre Samuelin 1948. They were later used extensively byBourbaki. The closely related concept of adjoint functors was introduced independently byDaniel Kanin 1958. | https://en.wikipedia.org/wiki/Universal_property |
In mathematics,infinitecompositionsofanalytic functions(ICAF)offer alternative formulations ofanalytic continued fractions,series,productsand other infinite expansions, and the theory evolving from such compositions may shed light on theconvergence/divergenceof these expansions. Some functions can actually be expanded directly as infinite compositions. In addition, it is possible to use ICAF to evaluate solutions offixed pointequations involving infinite expansions.Complex dynamicsoffers another venue foriteration of systems of functionsrather than a single function. For infinite compositions of asingle functionseeIterated function. For compositions of a finite number of functions, useful infractaltheory, seeIterated function system.
Although the title of this article specifies analytic functions, there are results for more generalfunctions of a complex variableas well.
There are several notations describing infinite compositions, including the following:
Forward compositions:Fk,n(z)=fk∘fk+1∘⋯∘fn−1∘fn(z).{\displaystyle F_{k,n}(z)=f_{k}\circ f_{k+1}\circ \dots \circ f_{n-1}\circ f_{n}(z).}
Backward compositions:Gk,n(z)=fn∘fn−1∘⋯∘fk+1∘fk(z).{\displaystyle G_{k,n}(z)=f_{n}\circ f_{n-1}\circ \dots \circ f_{k+1}\circ f_{k}(z).}
In each case convergence is interpreted as the existence of the following limits: {\displaystyle \lim _{n\to \infty }F_{k,n}(z),\qquad \lim _{n\to \infty }G_{k,n}(z).}
For convenience, setFn(z) =F1,n(z)andGn(z) =G1,n(z).
One may also writeFn(z)=Rnk=1fk(z)=f1∘f2∘⋯∘fn(z){\displaystyle F_{n}(z)={\underset {k=1}{\overset {n}{\mathop {R} }}}\,f_{k}(z)=f_{1}\circ f_{2}\circ \cdots \circ f_{n}(z)}andGn(z)=Lnk=1gk(z)=gn∘gn−1∘⋯∘g1(z){\displaystyle G_{n}(z)={\underset {k=1}{\overset {n}{\mathop {L} }}}\,g_{k}(z)=g_{n}\circ g_{n-1}\circ \cdots \circ g_{1}(z)}
Many results can be considered extensions of the following result:
Contraction Theorem for Analytic Functions[1]—Letfbe analytic in a simply-connected regionSand continuous on the closureSofS. Supposef(S) is a bounded set contained inS. Then for allzinSthere exists anattractive fixed pointα offinSsuch that:Fn(z)=(f∘f∘⋯∘f)(z)→α.{\displaystyle F_{n}(z)=(f\circ f\circ \cdots \circ f)(z)\to \alpha .}
Let {fn} be a sequence of functions analytic on a simply-connected domainS. Suppose there exists a compact set Ω ⊂Ssuch that for eachn,fn(S) ⊂ Ω.
Forward (inner or right) Compositions Theorem—{Fn} converges uniformly on compact subsets ofSto a constant functionF(z) =λ.[2]
Backward (outer or left) Compositions Theorem—{Gn} converges uniformly on compact subsets ofStoγ∈ Ω if and only if the sequence of fixed points {γn} of the {fn} converges toγ.[3]
Additional theory resulting from investigations based on these two theorems, particularly Forward Compositions Theorem, include location analysis for the limits obtained in the following reference.[4]For a different approach to Backward Compositions Theorem, see the following reference.[5]
Regarding Backward Compositions Theorem, the examplef2n(z) = 1/2 andf2n−1(z) = −1/2 forS= {z: |z| < 1} demonstrates the inadequacy of simply requiring contraction into a compact subset, like Forward Compositions Theorem.
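The following purely illustrative Python sketch contrasts the two kinds of composition numerically; the family f_n(z) = z/2 + 1 + 1/n is an arbitrary choice of contractions (not taken from the references), with fixed points α_n = 2 + 2/n.

# Numerical illustration of forward vs. backward infinite compositions
# for the contractive family f_n(z) = z/2 + 1 + 1/n (fixed point alpha_n = 2 + 2/n).

def f(n, z):
    return z / 2 + 1 + 1 / n

def forward(z, N):
    # F_N(z) = f_1 ∘ f_2 ∘ ... ∘ f_N (z): the innermost function f_N is applied first
    for n in range(N, 0, -1):
        z = f(n, z)
    return z

def backward(z, N):
    # G_N(z) = f_N ∘ f_{N-1} ∘ ... ∘ f_1 (z): f_1 is applied first
    for n in range(1, N + 1):
        z = f(n, z)
    return z

for z0 in (0.0, 5.0, -3.0 + 2.0j):
    print(z0, forward(z0, 60), backward(z0, 60))
# Forward compositions approach one constant regardless of z0, while backward
# compositions approach the limit of the fixed points alpha_n -> 2, as the two
# composition theorems above predict.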
For functions not necessarily analytic theLipschitzcondition suffices:
Theorem[6]—SupposeS{\displaystyle S}is a simply connected compact subset ofC{\displaystyle \mathbb {C} }and lettn:S→S{\displaystyle t_{n}:S\to S}be a family of functions that satisfies∀n,∀z1,z2∈S,∃ρ:|tn(z1)−tn(z2)|≤ρ|z1−z2|,ρ<1.{\displaystyle \forall n,\forall z_{1},z_{2}\in S,\exists \rho :\quad \left|t_{n}(z_{1})-t_{n}(z_{2})\right|\leq \rho |z_{1}-z_{2}|,\quad \rho <1.}Define:Gn(z)=(tn∘tn−1∘⋯∘t1)(z)Fn(z)=(t1∘t2∘⋯∘tn)(z){\displaystyle {\begin{aligned}G_{n}(z)&=\left(t_{n}\circ t_{n-1}\circ \cdots \circ t_{1}\right)(z)\\F_{n}(z)&=\left(t_{1}\circ t_{2}\circ \cdots \circ t_{n}\right)(z)\end{aligned}}}ThenFn(z)→β∈S{\displaystyle F_{n}(z)\to \beta \in S}uniformly onS.{\displaystyle S.}Ifαn{\displaystyle \alpha _{n}}is the unique fixed point oftn{\displaystyle t_{n}}thenGn(z)→α{\displaystyle G_{n}(z)\to \alpha }uniformly onS{\displaystyle S}if and only if|αn−α|=εn→0{\displaystyle |\alpha _{n}-\alpha |=\varepsilon _{n}\to 0}.
Results involvingentire functionsinclude the following, as examples. Set
Then the following results hold:
Theorem E1[7]—If an ≡ 1 and{\displaystyle \sum _{n=1}^{\infty }\rho _{n}<\infty ,}then Fn → F is entire.
Theorem E2[8]—Set εn = |an − 1| and suppose there exist non-negative δn, M1, M2, R such that the following hold:{\displaystyle {\begin{aligned}\sum _{n=1}^{\infty }\varepsilon _{n}&<\infty ,\\\sum _{n=1}^{\infty }\delta _{n}&<\infty ,\\\prod _{n=1}^{\infty }(1+\delta _{n})&<M_{1},\\\prod _{n=1}^{\infty }(1+\varepsilon _{n})&<M_{2},\\\rho _{n}&<{\frac {\delta _{n}}{RM_{1}M_{2}}}.\end{aligned}}}Then Gn(z) → G(z), analytic for |z| < R. Convergence is uniform on compact subsets of {z: |z| < R}.
Additional elementary results include:
Theorem GF3[6]—Supposefk(z)=z+ρkφk(z){\displaystyle f_{k}(z)=z+\rho _{k}\varphi _{k}(z)}where there existR,M>0{\displaystyle R,M>0}such that|z|<R{\displaystyle |z|<R}implies|φk(z)|<M,∀k,{\displaystyle |\varphi _{k}(z)|<M,\forall k,\ }Furthermore, supposeρk≥0,∑k=1∞ρk<∞{\textstyle \rho _{k}\geq 0,\sum _{k=1}^{\infty }\rho _{k}<\infty }andR>M∑k=1∞ρk.{\textstyle R>M\sum _{k=1}^{\infty }\rho _{k}.}Then forR∗<R−M∑k=1∞ρk{\textstyle R*<R-M\sum _{k=1}^{\infty }\rho _{k}}Gn(z)≡(fn∘fn−1∘⋯∘f1)(z)→G(z)for{z:|z|<R∗}{\displaystyle G_{n}(z)\equiv \left(f_{n}\circ f_{n-1}\circ \cdots \circ f_{1}\right)(z)\to G(z)\qquad {\text{ for }}\{z:|z|<R*\}}
Theorem GF4[6]—Supposefk(z)=z+ρkφk(z){\displaystyle f_{k}(z)=z+\rho _{k}\varphi _{k}(z)}where there existR,M>0{\displaystyle R,M>0}such that|z|<R{\displaystyle |z|<R}and|ζ|<R{\displaystyle |\zeta |<R}implies|φk(z)|<M{\displaystyle |\varphi _{k}(z)|<M}and|φk(z)−φk(ζ)|≤r|z−ζ|,∀k.{\displaystyle |\varphi _{k}(z)-\varphi _{k}(\zeta )|\leq r|z-\zeta |,\forall k.\ }Furthermore, supposeρk≥0,∑k=1∞ρk<∞{\textstyle \rho _{k}\geq 0,\sum _{k=1}^{\infty }\rho _{k}<\infty }andR>M∑k=1∞ρk.{\textstyle R>M\sum _{k=1}^{\infty }\rho _{k}.}Then forR∗<R−M∑k=1∞ρk{\textstyle R*<R-M\sum _{k=1}^{\infty }\rho _{k}}Fn(z)≡(f1∘f2∘⋯∘fn)(z)→F(z)for{z:|z|<R∗}{\displaystyle F_{n}(z)\equiv \left(f_{1}\circ f_{2}\circ \cdots \circ f_{n}\right)(z)\to F(z)\qquad {\text{ for }}\{z:|z|<R*\}}
Results[8]for compositions oflinear fractional (Möbius) transformationsinclude the following, as examples:
Theorem LFT1—On the set of convergence of a sequence {Fn} of non-singular LFTs, the limit function is either: (a) a non-singular LFT, (b) a function taking on two distinct values, or (c) a constant.
In (a), the sequence converges everywhere in the extended plane. In (b), the sequence converges either everywhere, and to the same value everywhere except at one point, or it converges at only two points. Case (c) can occur with every possible set of convergence.[9]
Theorem LFT2[10]—If {Fn} converges to an LFT, thenfnconverge to the identity functionf(z) =z.
Theorem LFT3[11]—Iffn→fand all functions arehyperbolicorloxodromicMöbius transformations, thenFn(z) →λ, a constant, for allz≠β=limn→∞βn{\textstyle z\neq \beta =\lim _{n\to \infty }\beta _{n}}, where {βn} are the repulsive fixed points of the {fn}.
Theorem LFT4[12]—Iffn→fwherefisparabolicwith fixed pointγ. Let the fixed-points of the {fn} be {γn} and {βn}. If∑n=1∞|γn−βn|<∞and∑n=1∞n|βn+1−βn|<∞{\displaystyle \sum _{n=1}^{\infty }\left|\gamma _{n}-\beta _{n}\right|<\infty \quad {\text{and}}\quad \sum _{n=1}^{\infty }n\left|\beta _{n+1}-\beta _{n}\right|<\infty }thenFn(z) →λ, a constant in the extended complex plane, for allz.
The value of the infinite continued fraction
may be expressed as the limit of the sequence {Fn(0)} where
As a simple example, a well-known result (Worpitzky's circle theorem[13]) follows from an application of Theorem (A):
Consider the continued fraction
with
Stipulate that |ζ| < 1 and |z| <R< 1. Then for 0 <r< 1,
Example.{\displaystyle F(z)={\frac {(i-1)z}{1+i+z{\text{ }}+}}{\text{ }}{\frac {(2-i)z}{1+2i+z{\text{ }}+}}{\text{ }}{\frac {(3-i)z}{1+3i+z{\text{ }}+}}\cdots ,}
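Continued fractions of this kind can be evaluated numerically exactly as described above, as limits of F_n(0) with f_k(w) = a_k/(b_k + w). The sketch below (not from the article) uses the simple periodic choice a_k = ζ, b_k = 1 with |ζ| ≤ 1/4, for which the limit is known in closed form, so the computation can be checked.

```python
# Minimal sketch: evaluate K(a_k / b_k) as lim F_n(0) with f_k(w) = a_k / (b_k + w).
def continued_fraction(a, b, n):
    w = 0.0
    for k in range(n, 0, -1):        # innermost term first: F_n(0) = f_1∘...∘f_n(0)
        w = a(k) / (b(k) + w)
    return w

zeta = 0.2                           # |ζ| ≤ 1/4, inside the Worpitzky disk
value = continued_fraction(lambda k: zeta, lambda k: 1.0, 200)
print(value)                         # ≈ (-1 + (1 + 4*zeta)**0.5) / 2 ≈ 0.1708...
```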
Example.[8]Afixed-point continued fraction form(a single variable).
Examples illustrating the conversion of a function directly into a composition follow:
Example 1.[7][14]Supposeϕ{\displaystyle \phi }is an entire function satisfying the following conditions:
Then
Example 2.[7]
Example 3.[6]
Example 4.[6]
Theorem (B) can be applied to determine the fixed-points of functions defined by infinite expansions or certain integrals. The following examples illustrate the process:
Example FP1.[3]For |ζ| ≤ 1 let
To find α =G(α), first we define:
Then calculateGn(ζ)=fn∘⋯∘f1(ζ){\displaystyle G_{n}(\zeta )=f_{n}\circ \cdots \circ f_{1}(\zeta )}with ζ = 1, which gives: α = 0.087118118... to ten decimal places after ten iterations.
Theorem FP2[8]—Letφ(ζ,t) be analytic inS= {z: |z| <R} for alltin [0, 1] and continuous int. Setfn(ζ)=1n∑k=1nφ(ζ,kn).{\displaystyle f_{n}(\zeta )={\frac {1}{n}}\sum _{k=1}^{n}\varphi \left(\zeta ,{\tfrac {k}{n}}\right).}If |φ(ζ,t)| ≤r<Rforζ∈Sandt∈ [0, 1], thenζ=∫01φ(ζ,t)dt{\displaystyle \zeta =\int _{0}^{1}\varphi (\zeta ,t)\,dt}has a unique solution,αinS, withlimn→∞Gn(ζ)=α.{\displaystyle \lim _{n\to \infty }G_{n}(\zeta )=\alpha .}
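A small numerical illustration of Theorem FP2 (not from the article): for the illustrative integrand φ(ζ, t) = (ζ + t)/3 the fixed-point equation ζ = ∫₀¹ φ(ζ, t) dt has the exact solution ζ = 1/4, and the backward compositions G_n of the Riemann-sum maps f_n approach it.

```python
def phi(zeta, t):
    return (zeta + t) / 3            # illustrative choice with |phi| < 1 on |zeta| < 1

def f(n, zeta):
    # f_n(ζ) = (1/n) Σ_{k=1..n} φ(ζ, k/n), the Riemann-sum map of the theorem
    return sum(phi(zeta, k / n) for k in range(1, n + 1)) / n

zeta = 0.0
for n in range(1, 5001):             # G_n(0) = f_n(f_{n-1}(...f_1(0)...))
    zeta = f(n, zeta)
print(zeta)                          # ≈ 0.25, the solution of ζ = ∫₀¹ (ζ + t)/3 dt
```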
Consider a time interval, normalized toI= [0, 1]. ICAFs can be constructed to describe continuous motion of a point,z, over the interval, but in such a way that at each "instant" the motion is virtually zero (seeZeno's Arrow): For the interval divided into n equal subintervals, 1 ≤k≤nsetgk,n(z)=z+φk,n(z){\displaystyle g_{k,n}(z)=z+\varphi _{k,n}(z)}analytic or simply continuous – in a domainS, such that
andgk,n(z)∈S{\displaystyle g_{k,n}(z)\in S}.
Source:[8]
implies
where the integral is well-defined ifdzdt=ϕ(z,t){\displaystyle {\tfrac {dz}{dt}}=\phi (z,t)}has a closed-form solutionz(t). Then
Otherwise, the integrand is poorly defined although the value of the integral is easily computed. In this case one might call the integral a "virtual" integral.
Example.ϕ(z,t)=2t−cosy1−sinxcosy+i1−2tsinx1−sinxcosy,∫01ψ(z,t)dt{\displaystyle \phi (z,t)={\frac {2t-\cos y}{1-\sin x\cos y}}+i{\frac {1-2t\sin x}{1-\sin x\cos y}},\int _{0}^{1}\psi (z,t)\,dt}
Example.Let:
Next, setT1,n(z)=gn(z),Tk,n(z)=gn(Tk−1,n(z)),{\displaystyle T_{1,n}(z)=g_{n}(z),T_{k,n}(z)=g_{n}(T_{k-1,n}(z)),}andTn(z) =Tn,n(z). Let
when that limit exists. The sequence {Tn(z)} defines contours γ = γ(cn,z) that follow the flow of the vector fieldf(z). If there exists an attractive fixed point α, meaning |f(z) − α| ≤ ρ|z− α| for 0 ≤ ρ < 1, thenTn(z) →T(z) ≡ α along γ = γ(cn,z), provided (for example)cn=n{\displaystyle c_{n}={\sqrt {n}}}. Ifcn≡c> 0, thenTn(z) →T(z), a point on the contour γ = γ(c,z). It is easily seen that
and
when these limits exist.
These concepts are marginally related to active contour theory in image processing, and are simple generalizations of the Euler method.
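Since the defining formulas for φ_{k,n} are not reproduced above, the sketch below only illustrates the Euler-method analogy: composing n near-identity steps g(z) = z + φ(z)/n along the vector field φ(z) = iz reproduces the time-one flow of dz/dt = iz.

```python
import cmath

def T(n, z):
    for _ in range(n):
        z = z + (1j * z) / n          # one "virtually zero" step of size 1/n
    return z

z0 = 1.0 + 0.0j
print(T(10_000, z0))                  # ≈ e^{i}·z0 ≈ 0.5403 + 0.8415j
print(cmath.exp(1j) * z0)             # the exact time-one flow, for comparison
```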
The series defined recursively by fn(z) = z + gn(z) have the property that the nth term is predicated on the sum of the first n − 1 terms. In order to employ theorem (GF3) it is necessary to show boundedness in the following sense: If each fn is defined for |z| < M then |Gn(z)| < M must follow before |fn(z) − z| = |gn(z)| ≤ Cβn is defined for iterative purposes. This is because gn(Gn−1(z)){\displaystyle g_{n}(G_{n-1}(z))} occurs throughout the expansion. The restriction
serves this purpose. ThenGn(z) →G(z) uniformly on the restricted domain.
Example (S1).Set
andM= ρ2. ThenR= ρ2− (π/6) > 0. Then, ifS={z:|z|<R,Re(z)>0}{\displaystyle S=\left\{z:|z|<R,\operatorname {Re} (z)>0\right\}},zinSimplies |Gn(z)| <Mand theorem (GF3) applies, so that
converges absolutely, hence is convergent.
Example (S2):{\displaystyle f_{n}(z)=z+{\frac {1}{n^{2}}}\cdot \varphi (z),\quad \varphi (z)=2\cos(x/y)+i2\sin(x/y),\quad G_{n}(z)=f_{n}\circ f_{n-1}\circ \cdots \circ f_{1}(z)}
The product defined recursively by
has the appearance
In order to apply Theorem GF3 it is required that:
Once again, a boundedness condition must support
If one knowsCβnin advance, the following will suffice:
ThenGn(z) →G(z) uniformly on the restricted domain.
Example (P1).Supposefn(z)=z(1+gn(z)){\displaystyle f_{n}(z)=z(1+g_{n}(z))}withgn(z)=z2n3,{\displaystyle g_{n}(z)={\tfrac {z^{2}}{n^{3}}},}observing after a few preliminary computations, that |z| ≤ 1/4 implies |Gn(z)| < 0.27. Then
and
converges uniformly.
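A quick numerical check of Example (P1), with the stated choice f_n(z) = z(1 + z²/n³) (illustrative code, not from the article):

```python
def G(n, z):
    # G_n(z) = f_n∘...∘f_1(z) with f_k(z) = z·(1 + z^2/k^3)
    for k in range(1, n + 1):
        z = z * (1 + z * z / k**3)
    return z

for n in (1, 5, 10, 100, 1000):
    print(n, G(n, 0.2))               # settles quickly to a limit, staying below 0.27
```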
Example (P2).
Example (CF1): A self-generating continued fraction.[8]
Example (CF2): Best described as a self-generating reverseEuler continued fraction.[8] | https://en.wikipedia.org/wiki/Infinite_compositions_of_analytic_functions |
In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects related to a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors (which are the simplest tensors), dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independent of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics (stress, elasticity, quantum mechanics, fluid mechanics, moment of inertia, ...), electrodynamics (electromagnetic tensor, Maxwell tensor, permittivity, magnetic susceptibility, ...), and general relativity (stress–energy tensor, curvature tensor, ...). In applications, it is common to study situations in which a different tensor can occur at each point of an object; for example the stress within an object may vary from one location to another. This leads to the concept of a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.[1]
Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
A tensor may be represented as a (potentially multidimensional) array. Just as avectorin ann-dimensionalspace is represented by aone-dimensionalarray withncomponents with respect to a givenbasis, any tensor with respect to a basis is represented by a multidimensional array. For example, alinear operatoris represented in a basis as a two-dimensional squaren×narray. The numbers in the multidimensional array are known as thecomponentsof the tensor. They are denoted by indices giving their position in the array, assubscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2tensorTcould be denotedTij, whereiandjare indices running from1ton, or also byTij. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus whileTijandTijcan both be expressed asn-by-nmatrices, and are numerically related viaindex juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices (m) required to identify each component uniquely is equal to thedimensionor the number ofwaysof an array, which is why a tensor is sometimes referred to as anm-dimensional array or anm-way array. The total number of indices is also called theorder,degreeorrankof a tensor,[2][3][4]although the term "rank" generally hasanother meaningin the context of matrices and tensors.
Just as the components of a vector change when we change thebasisof the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with atransformation lawthat details how the components of the tensor respond to achange of basis. The components of a vector can respond in two distinct ways to achange of basis(seeCovariance and contravariance of vectors), where the newbasis vectorse^i{\displaystyle \mathbf {\hat {e}} _{i}}are expressed in terms of the old basis vectorsej{\displaystyle \mathbf {e} _{j}}as,
HereRjiare the entries of the change of basis matrix, and in the rightmost expression thesummationsign was suppressed: this is theEinstein summation convention, which will be used throughout this article.[Note 1]The componentsviof a column vectorvtransform with theinverseof the matrixR,
where the hat denotes the components in the new basis. This is called acontravarianttransformation law, because the vector components transform by theinverseof the change of basis. In contrast, the components,wi, of a covector (or row vector),w, transform with the matrixRitself,
This is called acovarianttransformation law, because the covector components transform by thesame matrixas the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is calledcontravariantand is conventionally denoted with an upper index (superscript). If the transformation matrix of an index is the basis transformation itself, then the index is calledcovariantand is denoted with a lower index (subscript).
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular arrayT{\displaystyle T}that transforms under a change of basis matrixR=(Rij){\displaystyle R=\left(R_{i}^{j}\right)}byT^=R−1TR{\displaystyle {\hat {T}}=R^{-1}TR}. For the individual matrix entries, this transformation law has the formT^j′i′=(R−1)ii′TjiRj′j{\displaystyle {\hat {T}}_{j'}^{i'}=\left(R^{-1}\right)_{i}^{i'}T_{j}^{i}R_{j'}^{j}}so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type (1,1).
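The (1,1) transformation law can be checked numerically. The sketch below (illustrative, with arbitrary matrices) verifies that transforming the operator by T̂ = R⁻¹TR and the vector contravariantly gives the same image vector in either basis.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))      # operator components in the old basis
R = rng.standard_normal((3, 3))      # change-of-basis matrix (invertible with probability 1)
v = rng.standard_normal(3)           # contravariant vector components, old basis

R_inv = np.linalg.inv(R)
T_hat = R_inv @ T @ R                # operator components in the new basis
v_hat = R_inv @ v                    # vector components in the new basis

# The geometric statement "apply T to v" must agree in both bases:
print(np.allclose(R_inv @ (T @ v), T_hat @ v_hat))   # True
```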
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
whereδjk{\displaystyle \delta _{j}^{k}}is theKronecker delta, which functions similarly to theidentity matrix, and has the effect of renaming indices (jintokin this example). This shows several features of the component notation: the ability to re-arrange terms at will (commutativity), the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions likeviei{\displaystyle {v}^{i}\,\mathbf {e} _{i}}can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components(Tv)i{\displaystyle (Tv)^{i}}are given by(Tv)i=Tjivj{\displaystyle (Tv)^{i}=T_{j}^{i}v^{j}}. These components transform contravariantly, since
The transformation law for an orderp+qtensor withpcontravariant indices andqcovariant indices is thus given as,
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order ortype(p,q). The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array (or its generalization in other definitions),p+qin the preceding example, and the term "type" for the pair giving the number of contravariant and covariant indices. A tensor of type(p,q)is also called a(p,q)-tensor for short.
This discussion motivates the following formal definition:[5][6]
Definition.A tensor of type (p,q) is an assignment of a multidimensional array
to each basisf= (e1, ...,en)of ann-dimensional vector space such that, if we apply the change of basis
then the multidimensional array obeys the transformation law
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.[1]
An equivalent definition of a tensor uses therepresentationsof thegeneral linear group. There is anactionof the general linear group on the set of allordered basesof ann-dimensional vector space. Iff=(f1,…,fn){\displaystyle \mathbf {f} =(\mathbf {f} _{1},\dots ,\mathbf {f} _{n})}is an ordered basis, andR=(Rji){\displaystyle R=\left(R_{j}^{i}\right)}is an invertiblen×n{\displaystyle n\times n}matrix, then the action is given by
LetFbe the set of all ordered bases. ThenFis aprincipal homogeneous spacefor GL(n). LetWbe a vector space and letρ{\displaystyle \rho }be a representation of GL(n) onW(that is, agroup homomorphismρ:GL(n)→GL(W){\displaystyle \rho :{\text{GL}}(n)\to {\text{GL}}(W)}). Then a tensor of typeρ{\displaystyle \rho }is anequivariant mapT:F→W{\displaystyle T:F\to W}. Equivariance here means that
Whenρ{\displaystyle \rho }is atensor representationof the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds,[7]and readily generalizes to other groups.[5]
A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common indifferential geometryis to define tensors relative to a fixed (finite-dimensional) vector spaceV, which is usually taken to be a particular vector space of some geometrical significance like thetangent spaceto a manifold.[8]In this approach, a type(p,q)tensorTis defined as amultilinear map,
whereV∗is the correspondingdual spaceof covectors, which is linear in each of its arguments. The above assumesVis a vector space over thereal numbers,R{\displaystyle \mathbb {R} }. More generally,Vcan be taken over anyfieldF(e.g. thecomplex numbers), withFreplacingR{\displaystyle \mathbb {R} }as the codomain of the multilinear maps.
By applying a multilinear mapTof type(p,q)to a basis {ej} forVand a canonical cobasis {εi} forV∗,
a(p+q)-dimensional array of components can be obtained. A different choice of basis will yield different components. But, becauseTis linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components ofTthus form a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear mapT. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
In viewing a tensor as a multilinear map, it is conventional to identify thedouble dualV∗∗of the vector spaceV, i.e., the space of linear functionals on the dual vector spaceV∗, with the vector spaceV. There is always anatural linear mapfromVto its double dual, given by evaluating a linear form inV∗against a vector inV. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identifyVwith its double dual.
For some mathematical applications, a more abstract approach is sometimes useful. This can be achieved by defining tensors in terms of elements oftensor productsof vector spaces, which in turn are defined through auniversal propertyas explainedhereandhere.
Atype(p,q)tensoris defined in this context as an element of the tensor product of vector spaces,[9][10]
A basisviofVand basiswjofWnaturally induce a basisvi⊗wjof the tensor productV⊗W. The components of a tensorTare the coefficients of the tensor with respect to the basis obtained from a basis{ei}forVand its dual basis{εj}, i.e.
Using the properties of the tensor product, it can be shown that these components satisfy the transformation law for a type(p,q)tensor. Moreover, the universal property of the tensor product gives aone-to-one correspondencebetween tensors defined in this way and tensors defined as multilinear maps.
This 1 to 1 correspondence can be achieved in the following way, because in the finite-dimensional case there exists a canonical isomorphism between a vector space and its double dual:
The last line is using the universal property of the tensor product, that there is a 1 to 1 correspondence between maps fromHom2(U∗×V∗;F){\displaystyle \operatorname {Hom} ^{2}\left(U^{*}\times V^{*};\mathbb {F} \right)}andHom(U∗⊗V∗;F){\displaystyle \operatorname {Hom} \left(U^{*}\otimes V^{*};\mathbb {F} \right)}.[11]
Tensor products can be defined in great generality – for example,involving arbitrary modulesover a ring. In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the termtensorfor an element of a tensor product of any number of copies of a single vector spaceVand its dual, as above.
This discussion of tensors so far assumes finite dimensionality of the spaces involved, where the spaces of tensors obtained by each of these constructions arenaturally isomorphic.[Note 2]Constructions of spaces of tensors based on the tensor product and multilinear mappings can be generalized, essentially without modification, tovector bundlesorcoherent sheaves.[12]For infinite-dimensional vector spaces, inequivalent topologies lead to inequivalent notions of tensor, and these various isomorphisms may or may not hold depending on what exactly is meant by a tensor (seetopological tensor product). In some applications, it is thetensor product of Hilbert spacesthat is intended, whose properties are the most similar to the finite-dimensional case. A more modern view is that it is the tensors' structure as asymmetric monoidal categorythat encodes their most important properties, rather than the specific models of those categories.[13]
In many applications, especially in differential geometry and physics, it is natural to consider a tensor with components that are functions of the point in a space. This was the setting of Ricci's original work. In modern mathematical terminology such an object is called atensor field, often referred to simply as a tensor.[1]
In this context, acoordinate basisis often chosen for thetangent vector space. The transformation law may then be expressed in terms ofpartial derivativesof the coordinate functions,
defining a coordinate transformation,[1]
The concepts of later tensor analysis arose from the work ofCarl Friedrich Gaussindifferential geometry, and the formulation was much influenced by the theory ofalgebraic formsand invariants developed during the middle of the nineteenth century.[14]The word "tensor" itself was introduced in 1846 byWilliam Rowan Hamilton[15]to describe something different from what is now meant by a tensor.[Note 3]Gibbs introduceddyadicsandpolyadic algebra, which are also tensors in the modern sense.[16]The contemporary usage was introduced byWoldemar Voigtin 1898.[17]
Tensor calculus was developed around 1890 byGregorio Ricci-Curbastrounder the titleabsolute differential calculus, and originally presented in 1892.[18]It was made accessible to many mathematicians by the publication of Ricci-Curbastro andTullio Levi-Civita's 1900 classic textMéthodes de calcul différentiel absolu et leurs applications(Methods of absolute differential calculus and their applications).[19]In Ricci's notation, he refers to "systems" with covariant and contravariant components, which are known as tensor fields in the modern sense.[16]
In the 20th century, the subject came to be known astensor analysis, and achieved broader acceptance with the introduction ofAlbert Einstein's theory ofgeneral relativity, around 1915. General relativity is formulated completely in the language of tensors. Einstein had learned about them, with great difficulty, from the geometerMarcel Grossmann.[20]Levi-Civita then initiated a correspondence with Einstein to correct mistakes Einstein had made in his use of tensor analysis. The correspondence lasted 1915–17, and was characterized by mutual respect:
I admire the elegance of your method of computation; it must be nice to ride through these fields upon the horse of true mathematics while the like of us have to make our way laboriously on foot.
Tensors andtensor fieldswere also found to be useful in other fields such ascontinuum mechanics. Some well-known examples of tensors indifferential geometryarequadratic formssuch asmetric tensors, and theRiemann curvature tensor. Theexterior algebraofHermann Grassmann, from the middle of the nineteenth century, is itself a tensor theory, and highly geometric, but it was some time before it was seen, with the theory ofdifferential forms, as naturally unified with tensor calculus. The work ofÉlie Cartanmade differential forms one of the basic kinds of tensors used in mathematics, andHassler Whitneypopularized thetensor product.[16]
From about the 1920s onwards, it was realised that tensors play a basic role inalgebraic topology(for example in theKünneth theorem).[22]Correspondingly there are types of tensors at work in many branches ofabstract algebra, particularly inhomological algebraandrepresentation theory. Multilinear algebra can be developed in greater generality than for scalars coming from afield. For example, scalars can come from aring. But the theory is then less geometric and computations more technical and less algorithmic.[23]Tensors are generalized withincategory theoryby means of the concept ofmonoidal category, from the 1960s.[24]
An elementary example of a mapping describable as a tensor is thedot product, which maps two vectors to a scalar. A more complex example is theCauchy stress tensorT, which takes a directional unit vectorvas input and maps it to the stress vectorT(v), which is the force (per unit area) exerted by material on the negative side of the plane orthogonal tovagainst the material on the positive side of the plane, thus expressing a relationship between these two vectors, shown in the figure (right). Thecross product, where two vectors are mapped to a third one, is strictly speaking not a tensor because it changes its sign under those transformations that change the orientation of the coordinate system. Thetotally anti-symmetric symbolεijk{\displaystyle \varepsilon _{ijk}}nevertheless allows a convenient handling of the cross product in equally oriented three dimensional coordinate systems.
This table shows important examples of tensors on vector spaces and tensor fields on manifolds. The tensors are classified according to their type(n,m), wherenis the number of contravariant indices,mis the number of covariant indices, andn+mgives the total order of the tensor. For example, abilinear formis the same thing as a(0, 2)-tensor; aninner productis an example of a(0, 2)-tensor, but not all(0, 2)-tensors are inner products. In the(0,M)-entry of the table,Mdenotes the dimensionality of the underlying vector space or manifold because for each dimension of the space, a separate index is needed to select that dimension to get a maximally covariant antisymmetric tensor.
Raising an index on an(n,m)-tensor produces an(n+ 1,m− 1)-tensor; this corresponds to moving diagonally down and to the left on the table. Symmetrically, lowering an index corresponds to moving diagonally up and to the right on the table.Contractionof an upper with a lower index of an(n,m)-tensor produces an(n− 1,m− 1)-tensor; this corresponds to moving diagonally up and to the left on the table.
Assuming a basis of a real vector space, e.g., a coordinate frame in the ambient space, a tensor can be represented as an organized multidimensional array of numerical values with respect to this specific basis. Changing the basis transforms the values in the array in a characteristic way that allows one to define tensors as objects adhering to this transformational behavior. For example, there are invariants of tensors that must be preserved under any change of the basis, thereby making only certain multidimensional arrays of numbers a tensor. Compare this to the array representing εijk{\displaystyle \varepsilon _{ijk}}, which is not a tensor, owing to the sign change under transformations that change the orientation.
Because the components of vectors and their duals transform differently under the change of their dual bases, there is acovariant and/or contravariant transformation lawthat relates the arrays, which represent the tensor with respect to one basis and that with respect to the other one. The numbers of, respectively,vectors:n(contravariantindices) and dualvectors:m(covariantindices) in the input and output of a tensor determine thetype(orvalence) of the tensor, a pair of natural numbers(n,m), which determine the precise form of the transformation law. Theorderof a tensor is the sum of these two numbers.
The order (also degree or rank) of a tensor is thus the sum of the orders of its arguments plus the order of the resulting tensor. This is also the dimensionality of the array of numbers needed to represent the tensor with respect to a specific basis, or equivalently, the number of indices needed to label each component in that array. For example, in a fixed basis, a standard linear map that maps a vector to a vector is represented by a matrix (a 2-dimensional array), and therefore is a 2nd-order tensor. A simple vector can be represented as a 1-dimensional array, and is therefore a 1st-order tensor. Scalars are simple numbers and are thus 0th-order tensors. In this way the tensor representing the scalar product, which takes two vectors and results in a scalar, has order 2 + 0 = 2, the same as the stress tensor, which takes one vector and returns another (1 + 1 = 2). The εijk{\displaystyle \varepsilon _{ijk}}-symbol, mapping two vectors to one vector, would have order 2 + 1 = 3.
The collection of tensors on a vector space and its dual forms atensor algebra, which allows products of arbitrary tensors. Simple applications of tensors of order2, which can be represented as a square matrix, can be solved by clever arrangement of transposed vectors and by applying the rules of matrix multiplication, but the tensor product should not be confused with this.
There are several notational systems that are used to describe tensors and perform calculations involving them.
Ricci calculusis the modern formalism and notation for tensor indices: indicatinginnerandouter products,covariance and contravariance,summationsof tensor components,symmetryandantisymmetry, andpartialandcovariant derivatives.
TheEinstein summation conventiondispenses with writingsummation signs, leaving the summation implicit. Any repeated index symbol is summed over: if the indexiis used twice in a given term of a tensor expression, it means that the term is to be summed for alli. Several distinct pairs of indices may be summed this way.
Penrose graphical notationis a diagrammatic notation which replaces the symbols for tensors with shapes, and their indices by lines and curves. It is independent of basis elements, and requires no symbols for the indices.
Theabstract index notationis a way to write tensors such that the indices are no longer thought of as numerical, but rather areindeterminates. This notation captures the expressiveness of indices and the basis-independence of index-free notation.
Acomponent-free treatment of tensorsuses notation that emphasises that tensors do not rely on any basis, and is defined in terms of thetensor product of vector spaces.
There are several operations on tensors that again produce a tensor. The linear nature of tensors implies that two tensors of the same type may be added together, and that tensors may be multiplied by a scalar with results analogous to the scaling of a vector. On components, these operations are simply performed component-wise. These operations do not change the type of the tensor; but there are also operations that produce a tensor of different type.
Thetensor producttakes two tensors,SandT, and produces a new tensor,S⊗T, whose order is the sum of the orders of the original tensors. When described as multilinear maps, the tensor product simply multiplies the two tensors, i.e.,(S⊗T)(v1,…,vn,vn+1,…,vn+m)=S(v1,…,vn)T(vn+1,…,vn+m),{\displaystyle (S\otimes T)(v_{1},\ldots ,v_{n},v_{n+1},\ldots ,v_{n+m})=S(v_{1},\ldots ,v_{n})T(v_{n+1},\ldots ,v_{n+m}),}which again produces a map that is linear in all its arguments. On components, the effect is to multiply the components of the two input tensors pairwise, i.e.,(S⊗T)j1…jkjk+1…jk+mi1…ilil+1…il+n=Sj1…jki1…ilTjk+1…jk+mil+1…il+n.{\displaystyle (S\otimes T)_{j_{1}\ldots j_{k}j_{k+1}\ldots j_{k+m}}^{i_{1}\ldots i_{l}i_{l+1}\ldots i_{l+n}}=S_{j_{1}\ldots j_{k}}^{i_{1}\ldots i_{l}}T_{j_{k+1}\ldots j_{k+m}}^{i_{l+1}\ldots i_{l+n}}.}IfSis of type(l,k)andTis of type(n,m), then the tensor productS⊗Thas type(l+n,k+m).
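On components, the tensor product is just this pairwise multiplication; a small numpy sketch (illustrative values, not from the article) makes the index bookkeeping explicit.

```python
import numpy as np

S = np.arange(4.0).reshape(2, 2)        # components S^i_j of a type-(1,1) tensor
T = np.arange(4.0, 8.0).reshape(2, 2)   # components T^k_l of another type-(1,1) tensor

# Components of S⊗T, stored in index order (i, j, k, l): each entry equals S^i_j · T^k_l.
ST = np.einsum('ij,kl->ijkl', S, T)
print(ST.shape)                              # (2, 2, 2, 2): a type-(2,2) tensor
print(ST[1, 0, 1, 1] == S[1, 0] * T[1, 1])   # True: components multiply pairwise
```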
Tensor contractionis an operation that reduces a type(n,m)tensor to a type(n− 1,m− 1)tensor, of which thetraceis a special case. It thereby reduces the total order of a tensor by two. The operation is achieved by summing components for which one specified contravariant index is the same as one specified covariant index to produce a new component. Components for which those two indices are different are discarded. For example, a(1, 1)-tensorTij{\displaystyle T_{i}^{j}}can be contracted to a scalar throughTii{\displaystyle T_{i}^{i}}, where the summation is again implied. When the(1, 1)-tensor is interpreted as a linear map, this operation is known as thetrace.
The contraction is often used in conjunction with the tensor product to contract an index from each tensor.
The contraction can also be understood using the definition of a tensor as an element of a tensor product of copies of the spaceVwith the spaceV∗by first decomposing the tensor into a linear combination of simple tensors, and then applying a factor fromV∗to a factor fromV. For example, a tensorT∈V⊗V⊗V∗{\displaystyle T\in V\otimes V\otimes V^{*}}can be written as a linear combination
The contraction ofTon the first and last slots is then the vector
In a vector space with aninner product(also known as ametric)g, the termcontractionis used for removing two contravariant or two covariant indices by forming a trace with the metric tensor or its inverse. For example, a(2, 0)-tensorTij{\displaystyle T^{ij}}can be contracted to a scalar throughTijgij{\displaystyle T^{ij}g_{ij}}(yet again assuming the summation convention).
When a vector space is equipped with anondegenerate bilinear form(ormetric tensoras it is often called in this context), operations can be defined that convert a contravariant (upper) index into a covariant (lower) index and vice versa. A metric tensor is a (symmetric) (0, 2)-tensor; it is thus possible to contract an upper index of a tensor with one of the lower indices of the metric tensor in the product. This produces a new tensor with the same index structure as the previous tensor, but with lower index generally shown in the same position of the contracted upper index. This operation is quite graphically known aslowering an index.
Conversely, the inverse operation can be defined, and is calledraising an index. This is equivalent to a similar contraction on the product with a(2, 0)-tensor. Thisinverse metric tensorhas components that are the matrix inverse of those of the metric tensor.
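A small numpy sketch (illustrative metric and vector, not from the article) showing index lowering and raising, and a contraction, via einsum:

```python
import numpy as np

g = np.diag([1.0, 2.0, 3.0])          # an illustrative (0,2) metric g_{ij}
g_inv = np.linalg.inv(g)              # the inverse metric g^{ij}
v = np.array([1.0, -1.0, 2.0])        # contravariant components v^i

v_low = np.einsum('ij,j->i', g, v)          # lowering: v_i = g_{ij} v^j
v_up = np.einsum('ij,j->i', g_inv, v_low)   # raising recovers v^i
print(np.allclose(v_up, v))           # True

T = np.outer(v, v_low)                # a (1,1)-tensor with components T^i_j = v^i v_j
print(np.einsum('ii->', T))           # contraction T^i_i = g_{ij} v^i v^j = 15.0
```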
Important examples are provided bycontinuum mechanics. The stresses inside a solid body orfluid[28]are described by a tensor field. Thestress tensorandstrain tensorare both second-order tensor fields, and are related in a general linear elastic material by a fourth-orderelasticity tensorfield. In detail, the tensor quantifying stress in a 3-dimensional solid object has components that can be conveniently represented as a 3 × 3 array. The three faces of a cube-shaped infinitesimal volume segment of the solid are each subject to some given force. The force's vector components are also three in number. Thus, 3 × 3, or 9 components are required to describe the stress at this cube-shaped infinitesimal segment. Within the bounds of this solid is a whole mass of varying stress quantities, each requiring 9 quantities to describe. Thus, a second-order tensor is needed.
If a particularsurface elementinside the material is singled out, the material on one side of the surface will apply a force on the other side. In general, this force will not be orthogonal to the surface, but it will depend on the orientation of the surface in a linear manner. This is described by a tensor oftype(2, 0), inlinear elasticity, or more precisely by a tensor field of type(2, 0), since the stresses may vary from point to point.
Common applications include:
The concept of a tensor of order two is often conflated with that of a matrix. Tensors of higher order do however capture ideas important in science and engineering, as has been shown successively in numerous areas as they develop. This happens, for instance, in the field ofcomputer vision, with thetrifocal tensorgeneralizing thefundamental matrix.
The field ofnonlinear opticsstudies the changes to materialpolarization densityunder extreme electric fields. The polarization waves generated are related to the generatingelectric fieldsthrough the nonlinear susceptibility tensor. If the polarizationPis not linearly proportional to the electric fieldE, the medium is termednonlinear. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present),Pis given by aTaylor seriesinEwhose coefficients are the nonlinear susceptibilities:
Hereχ(1){\displaystyle \chi ^{(1)}}is the linear susceptibility,χ(2){\displaystyle \chi ^{(2)}}gives thePockels effectandsecond harmonic generation, andχ(3){\displaystyle \chi ^{(3)}}gives theKerr effect. This expansion shows the way higher-order tensors arise naturally in the subject matter.
The properties oftensors, especiallytensor decomposition, have enabled their use inmachine learningto embed higher dimensional data inartificial neural networks. This notion of tensor differs significantly from that in other areas of mathematics and physics, in the sense that a tensor is usually regarded as a numerical quantity in a fixed basis, and the dimension of the spaces along the different axes of the tensor need not be the same.
The vector spaces of atensor productneed not be the same, and sometimes the elements of such a more general tensor product are called "tensors". For example, an element of the tensor product spaceV⊗Wis a second-order "tensor" in this more general sense,[29]and an order-dtensor may likewise be defined as an element of a tensor product ofddifferent vector spaces.[30]A type(n,m)tensor, in the sense defined previously, is also a tensor of ordern+min this more general sense. The concept of tensor productcan be extendedto arbitrarymodules over a ring.
The notion of a tensor can be generalized in a variety of ways toinfinite dimensions. One, for instance, is via thetensor productofHilbert spaces.[31]Another way of generalizing the idea of tensor, common innonlinear analysis, is via themultilinear maps definitionwhere instead of using finite-dimensional vector spaces and theiralgebraic duals, one uses infinite-dimensionalBanach spacesand theircontinuous dual.[32]Tensors thus live naturally onBanach manifolds[33]andFréchet manifolds.
Suppose that a homogeneous medium fillsR3, so that the density of the medium is described by a singlescalarvalueρinkg⋅m−3. The mass, in kg, of a regionΩis obtained by multiplyingρby the volume of the regionΩ, or equivalently integrating the constantρover the region:
where the Cartesian coordinatesx,y,zare measured inm. If the units of length are changed intocm, then the numerical values of the coordinate functions must be rescaled by a factor of 100:
The numerical value of the densityρmust then also transform by100−3m3/cm3to compensate, so that the numerical value of the mass in kg is still given by integral ofρdxdydz{\displaystyle \rho \,dx\,dy\,dz}. Thusρ′=100−3ρ{\displaystyle \rho '=100^{-3}\rho }(in units ofkg⋅cm−3).
More generally, if the Cartesian coordinatesx,y,zundergo a linear transformation, then the numerical value of the densityρmust change by a factor of the reciprocal of the absolute value of thedeterminantof the coordinate transformation, so that the integral remains invariant, by thechange of variables formulafor integration. Such a quantity that scales by the reciprocal of the absolute value of the determinant of the coordinate transition map is called ascalar density. To model a non-constant density,ρis a function of the variablesx,y,z(ascalar field), and under acurvilinearchange of coordinates, it transforms by the reciprocal of theJacobianof the coordinate change. For more on the intrinsic meaning, seeDensity on a manifold.
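A tiny numerical check of the scalar-density behaviour described above (constant density for simplicity; the numbers are illustrative): rescaling m → cm multiplies volumes by the Jacobian determinant 100³, and the density must pick up the reciprocal factor so that the mass is unchanged.

```python
rho = 2.5                  # density in kg·m⁻³ (illustrative)
volume_m3 = 0.4            # volume of the region Ω in m³ (illustrative)

det_J = 100.0 ** 3         # determinant of the coordinate change m → cm
rho_cm = rho / det_J       # transformed density in kg·cm⁻³ (a weight-1 scalar density)
volume_cm3 = volume_m3 * det_J

print(rho * volume_m3, rho_cm * volume_cm3)   # the same mass in kg either way
```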
A tensor density transforms like a tensor under a coordinate change, except that it in addition picks up a factor of the absolute value of the determinant of the coordinate transition:[34]
Herewis called the weight. In general, any tensor multiplied by a power of this function or its absolute value is called a tensor density, or a weighted tensor.[35][36]An example of a tensor density is thecurrent densityofelectromagnetism.
Under an affine transformation of the coordinates, a tensor transforms by the linear part of the transformation itself (or its inverse) on each index. These come from therational representationsof the general linear group. But this is not quite the most general linear transformation law that such an object may have: tensor densities are non-rational, but are stillsemisimplerepresentations. A further class of transformations come from the logarithmic representation of the general linear group, a reducible but not semisimple representation,[37]consisting of an(x,y) ∈R2with the transformation law
The transformation law for a tensor behaves as afunctoron the category of admissible coordinate systems, under general linear transformations (or, other transformations within some class, such aslocal diffeomorphisms). This makes a tensor a special case of a geometrical object, in the technical sense that it is a function of the coordinate system transforming functorially under coordinate changes.[38]Examples of objects obeying more general kinds of transformation laws arejetsand, more generally still,natural bundles.[39][40]
When changing from oneorthonormal basis(called aframe) to another by a rotation, the components of a tensor transform by that same rotation. This transformation does not depend on the path taken through the space of frames. However, the space of frames is notsimply connected(seeorientation entanglementandplate trick): there are continuous paths in the space of frames with the same beginning and ending configurations that are not deformable one into the other. It is possible to attach an additional discrete invariant to each frame that incorporates this path dependence, and which turns out (locally) to have values of ±1.[41]Aspinoris an object that transforms like a tensor under rotations in the frame, apart from a possible sign that is determined by the value of this discrete invariant.[42][43]
Spinors are elements of thespin representationof the rotation group, while tensors are elements of itstensor representations. Otherclassical groupshave tensor representations, and so also tensors that are compatible with the group, but all non-compact classical groups have infinite-dimensional unitary representations as well. | https://en.wikipedia.org/wiki/Tensor |
In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy (after mathematicians Stephen Cole Kleene and Andrzej Mostowski) classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called arithmetical. The arithmetical hierarchy was invented independently by Kleene (1943) and Mostowski (1946).[1]
The arithmetical hierarchy is important incomputability theory,effective descriptive set theory, and the study offormal theoriessuch asPeano arithmetic.
TheTarski–Kuratowski algorithmprovides an easy way to get an upper bound on the classifications assigned to a formula and the set it defines.
Thehyperarithmetical hierarchyand theanalytical hierarchyextend the arithmetical hierarchy to classify additional formulas and sets.
The arithmetical hierarchy assigns classifications to the formulas in the language of first-order arithmetic. The classifications are denoted Σn0{\displaystyle \Sigma _{n}^{0}} and Πn0{\displaystyle \Pi _{n}^{0}} for natural numbers n (including 0). The Greek letters here are lightface symbols, which indicates that the formulas do not contain set parameters.
If a formula ϕ{\displaystyle \phi } is logically equivalent to a formula having no unbounded quantifiers, i.e. one in which all quantifiers are bounded quantifiers, then ϕ{\displaystyle \phi } is assigned the classifications Σ00{\displaystyle \Sigma _{0}^{0}} and Π00{\displaystyle \Pi _{0}^{0}}.
The classificationsΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}are defined inductively for every natural numbernusing the following rules:
AΣn0{\displaystyle \Sigma _{n}^{0}}formula is equivalent to a formula that begins with someexistential quantifiersand alternatesn−1{\displaystyle n-1}times between series of existential anduniversal quantifiers; while aΠn0{\displaystyle \Pi _{n}^{0}}formula is equivalent to a formula that begins with some universal quantifiers and alternates analogously.
Because every first-order formula has aprenex normal form, every formula is assigned at least one classification. Because redundant quantifiers can be added to any formula, once a formula is assigned the classificationΣn0{\displaystyle \Sigma _{n}^{0}}orΠn0{\displaystyle \Pi _{n}^{0}}it will be assigned the classificationsΣm0{\displaystyle \Sigma _{m}^{0}}andΠm0{\displaystyle \Pi _{m}^{0}}for everym>n. The only relevant classification assigned to a formula is thus the one with the leastn; all the other classifications can be determined from it.
A setXof natural numbers is defined by a formulaφin the language ofPeano arithmetic(the first-order language with symbols "0" for zero, "S" for the successor function, "+" for addition, "×" for multiplication, and "=" for equality), if the elements ofXare exactly the numbers that satisfyφ. That is, for all natural numbersn,
wheren_{\displaystyle {\underline {n}}}is the numeral in the language of arithmetic corresponding ton{\displaystyle n}. A set is definable in first-order arithmetic if it is defined by some formula in the language of Peano arithmetic.
Each setXof natural numbers that is definable in first-order arithmetic is assigned classifications of the formΣn0{\displaystyle \Sigma _{n}^{0}},Πn0{\displaystyle \Pi _{n}^{0}}, andΔn0{\displaystyle \Delta _{n}^{0}}, wheren{\displaystyle n}is a natural number, as follows. IfXis definable by aΣn0{\displaystyle \Sigma _{n}^{0}}formula thenXis assigned the classificationΣn0{\displaystyle \Sigma _{n}^{0}}. IfXis definable by aΠn0{\displaystyle \Pi _{n}^{0}}formula thenXis assigned the classificationΠn0{\displaystyle \Pi _{n}^{0}}. IfXis bothΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}thenX{\displaystyle X}is assigned the additional classificationΔn0{\displaystyle \Delta _{n}^{0}}.
Note that it rarely makes sense to speak ofΔn0{\displaystyle \Delta _{n}^{0}}formulas; the first quantifier of a formula is either existential or universal. So aΔn0{\displaystyle \Delta _{n}^{0}}set is not necessarily defined by aΔn0{\displaystyle \Delta _{n}^{0}}formula in the sense of a formula that is bothΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}; rather, there are bothΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}formulas that define the set. For example, the set of odd natural numbersn{\displaystyle n}is definable by either∀k(n≠2×k){\displaystyle \forall k(n\neq 2\times k)}or∃k(n=2×k+1){\displaystyle \exists k(n=2\times k+1)}.
A parallel definition is used to define the arithmetical hierarchy on finiteCartesian powersof the set of natural numbers. Instead of formulas with one free variable, formulas withkfree first-order variables are used to define the arithmetical hierarchy on sets ofk-tuplesof natural numbers. These are in fact related by the use of apairing function.
The following meanings can be attached to the notation for the arithmetical hierarchy on formulas.
The subscriptn{\displaystyle n}in the symbolsΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}indicates the number of alternations of blocks of universal and existential first-order quantifiers that are used in a formula. Moreover, the outermost block is existential inΣn0{\displaystyle \Sigma _{n}^{0}}formulas and universal inΠn0{\displaystyle \Pi _{n}^{0}}formulas.
The superscript0{\displaystyle 0}in the symbolsΣn0{\displaystyle \Sigma _{n}^{0}},Πn0{\displaystyle \Pi _{n}^{0}}, andΔn0{\displaystyle \Delta _{n}^{0}}indicates the type of the objects being quantified over. Type 0 objects are natural numbers, and objects of typei+1{\displaystyle i+1}are functions that map the set of objects of typei{\displaystyle i}to the natural numbers. Quantification over higher type objects, such as functions from natural numbers to natural numbers, is described by a superscript greater than 0, as in theanalytical hierarchy. The superscript 0 indicates quantifiers over numbers, the superscript 1 would indicate quantification over functions from numbers to numbers (type 1 objects), the superscript 2 would correspond to quantification over functions that take a type 1 object and return a number, and so on.
Just as we can define what it means for a setXto berecursiverelative to another setYby allowing the computation definingXto consultYas anoraclewe can extend this notion to the whole arithmetic hierarchy and define what it means forXto beΣn0{\displaystyle \Sigma _{n}^{0}},Δn0{\displaystyle \Delta _{n}^{0}}orΠn0{\displaystyle \Pi _{n}^{0}}inY, denoted respectivelyΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}},Δn0,Y{\displaystyle \Delta _{n}^{0,Y}}andΠn0,Y{\displaystyle \Pi _{n}^{0,Y}}. To do so, fix a set of natural numbersYand add apredicatefor membership ofYto the language of Peano arithmetic. We then say thatXis inΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}}if it is defined by aΣn0{\displaystyle \Sigma _{n}^{0}}formula in this expanded language. In other words,XisΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}}if it is defined by aΣn0{\displaystyle \Sigma _{n}^{0}}formula allowed to ask questions about membership ofY. Alternatively one can view theΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}}sets as those sets that can be built starting with sets recursive inYand alternately takingunionsandintersectionsof these sets up tontimes.
For example, letYbe a set of natural numbers. LetXbe the set of numbersdivisibleby an element ofY. ThenXis defined by the formulaϕ(n)=∃m∃t(Y(m)∧m×t=n){\displaystyle \phi (n)=\exists m\exists t(Y(m)\land m\times t=n)}soXis inΣ10,Y{\displaystyle \Sigma _{1}^{0,Y}}(actually it is inΔ00,Y{\displaystyle \Delta _{0}^{0,Y}}as well, since we could bound both quantifiers byn).
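The example can be phrased directly as a membership test with both quantifiers bounded by n, mirroring the formula ∃m∃t(Y(m) ∧ m×t = n); the sketch below treats the oracle Y as an ordinary membership function (illustrative, not part of the article).

```python
def in_X(n, Y):
    # ∃m ∃t (Y(m) ∧ m·t = n), with both witnesses bounded by n
    return any(Y(m) and m * t == n
               for m in range(1, n + 1)
               for t in range(1, n + 1))

Y = lambda m: m in {3, 7}                       # an illustrative oracle set
print([n for n in range(1, 30) if in_X(n, Y)])  # multiples of 3 or 7 below 30
```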
Arithmetical reducibility is an intermediate notion betweenTuring reducibilityandhyperarithmetic reducibility.
A set isarithmetical(alsoarithmeticandarithmetically definable) if it is defined by some formula in the language of Peano arithmetic. EquivalentlyXis arithmetical ifXisΣn0{\displaystyle \Sigma _{n}^{0}}orΠn0{\displaystyle \Pi _{n}^{0}}for some natural numbern. A setXis arithmetical ina setY, denotedX≤AY{\displaystyle X\leq _{A}Y}, ifXis definable as some formula in the language of Peano arithmetic extended by a predicate for membership ofY. Equivalently,Xis arithmetical inYifXis inΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}}orΠn0,Y{\displaystyle \Pi _{n}^{0,Y}}for some natural numbern. A synonym forX≤AY{\displaystyle X\leq _{A}Y}is:Xisarithmetically reducibletoY.
The relationX≤AY{\displaystyle X\leq _{A}Y}isreflexiveandtransitive, and thus the relation≡A{\displaystyle \equiv _{A}}defined by the rule
is anequivalence relation. Theequivalence classesof this relation are called thearithmetic degrees; they arepartially orderedunder≤A{\displaystyle \leq _{A}}.
TheCantor space, denoted2ω{\displaystyle 2^{\omega }}, is the set of all infinite sequences of 0s and 1s; theBaire space, denotedωω{\displaystyle \omega ^{\omega }}orN{\displaystyle {\mathcal {N}}}, is the set of all infinite sequences of natural numbers. Note that elements of the Cantor space can be identified with sets of natural numbers and elements of the Baire space with functions from natural numbers to natural numbers.
The ordinary axiomatization ofsecond-order arithmeticuses a set-based language in which the set quantifiers can naturally be viewed as quantifying over Cantor space. A subset of Cantor space is assigned the classificationΣn0{\displaystyle \Sigma _{n}^{0}}if it is definable by aΣn0{\displaystyle \Sigma _{n}^{0}}formula. The set is assigned the classificationΠn0{\displaystyle \Pi _{n}^{0}}if it is definable by aΠn0{\displaystyle \Pi _{n}^{0}}formula. If the set is bothΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}then it is given the additional classificationΔn0{\displaystyle \Delta _{n}^{0}}. For example, letO⊆2ω{\displaystyle O\subseteq 2^{\omega }}be the set of all infinite binary strings that aren't all 0 (or equivalently the set of all non-empty sets of natural numbers). AsO={X∈2ω|∃n(X(n)=1)}{\displaystyle O=\{X\in 2^{\omega }|\exists n(X(n)=1)\}}we see thatO{\displaystyle O}is defined by aΣ10{\displaystyle \Sigma _{1}^{0}}formula and hence is aΣ10{\displaystyle \Sigma _{1}^{0}}set.
Note that while both the elements of the Cantor space (regarded as sets of natural numbers) and subsets of the Cantor space are classified in arithmetic hierarchies, these are not the same hierarchy. In fact the relationship between the two hierarchies is interesting and non-trivial. For instance theΠn0{\displaystyle \Pi _{n}^{0}}elements of the Cantor space are not (in general) the same as the elementsX{\displaystyle X}of the Cantor space so that{X}{\displaystyle \{X\}}is aΠn0{\displaystyle \Pi _{n}^{0}}subset of the Cantor space. However, many interesting results relate the two hierarchies.
There are two ways that a subset of Baire space can be classified in the arithmetical hierarchy.
A parallel definition is used to define the arithmetical hierarchy on finite Cartesian powers of Baire space or Cantor space, using formulas with several free variables. The arithmetical hierarchy can be defined on anyeffective Polish space; the definition is particularly simple for Cantor space and Baire space because they fit with the language of ordinary second-order arithmetic.
Note that we can also define the arithmetic hierarchy of subsets of the Cantor and Baire spaces relative to some set of natural numbers. In fact boldfaceΣn0{\displaystyle \mathbf {\Sigma } _{n}^{0}}is just the union ofΣn0,Y{\displaystyle \Sigma _{n}^{0,Y}}for all sets of natural numbersY. Note that the boldface hierarchy is just the standard hierarchy ofBorel sets.
It is possible to define the arithmetical hierarchy of formulas using a language extended with a function symbol for eachprimitive recursive function. This variation slightly changes the classification ofΣ00=Π00=Δ00{\displaystyle \Sigma _{0}^{0}=\Pi _{0}^{0}=\Delta _{0}^{0}}, sinceusing primitive recursive functions in first-order Peano arithmeticrequires, in general, an unbounded existential quantifier, and thus some sets that are inΣ00{\displaystyle \Sigma _{0}^{0}}by this definition are strictly inΣ10{\displaystyle \Sigma _{1}^{0}}by the definition given in the beginning of this article. The classΣ10{\displaystyle \Sigma _{1}^{0}}and thus all higher classes in the hierarchy remain unaffected.
A more semantic variation of the hierarchy can be defined on all finitary relations on the natural numbers; the following definition is used. Every computable relation is defined to beΣ00=Π00=Δ00{\displaystyle \Sigma _{0}^{0}=\Pi _{0}^{0}=\Delta _{0}^{0}}. The classificationsΣn0{\displaystyle \Sigma _{n}^{0}}andΠn0{\displaystyle \Pi _{n}^{0}}are defined inductively with the following rules.
This variation slightly changes the classification of some sets. In particular,Σ00=Π00=Δ00{\displaystyle \Sigma _{0}^{0}=\Pi _{0}^{0}=\Delta _{0}^{0}}, as a class of sets (definable by the relations in the class), is identical toΔ10{\displaystyle \Delta _{1}^{0}}as the latter was formerly defined. It can be extended to cover finitary relations on the natural numbers, Baire space, and Cantor space.
The following properties hold for the arithmetical hierarchy of sets of natural numbers and the arithmetical hierarchy of subsets of Cantor or Baire space.
IfSis aTuring computable set, then bothSand itscomplementare recursively enumerable (ifTis a Turing machine giving 1 for inputs inSand 0 otherwise, we may build a Turing machine halting only on the former, and another halting only on the latter).
ByPost's theorem, bothSand its complement are inΣ10{\displaystyle \Sigma _{1}^{0}}. This means thatSis both inΣ10{\displaystyle \Sigma _{1}^{0}}and inΠ10{\displaystyle \Pi _{1}^{0}}, and hence it is inΔ10{\displaystyle \Delta _{1}^{0}}.
Similarly, for every setSinΔ10{\displaystyle \Delta _{1}^{0}}, bothSand its complement are inΣ10{\displaystyle \Sigma _{1}^{0}}and are therefore (byPost's theorem) recursively enumerable by some Turing machinesT1andT2, respectively. For every numbern, exactly one of these halts. We may therefore construct a Turing machineTthat alternates betweenT1andT2, halting and returning 1 when the former halts or halting and returning 0 when the latter halts. ThusThalts on everynand returns whether it is inS; soSis computable.
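The alternation argument in the previous paragraph can be sketched in code. In this illustrative Python model (an assumption for demonstration, not part of the source), the two machines are stand-ins that "halt" once a growing step budget is large enough, and S is taken to be the set of even numbers.

```python
# A minimal sketch of dovetailing two semi-deciders to obtain a decider.
# T1 semi-decides the toy set S (even numbers); T2 semi-decides its complement.
# Each returns True once it has "halted" within the given step budget.
def T1(n, steps):
    return n % 2 == 0 and steps >= n

def T2(n, steps):
    return n % 2 == 1 and steps >= n

def T(n):
    """Alternate between T1 and T2 with growing step budgets; exactly one of
    them eventually halts on n, so the loop terminates and decides membership."""
    steps = 0
    while True:
        steps += 1
        if T1(n, steps):
            return 1   # n is in S
        if T2(n, steps):
            return 0   # n is not in S

print([T(n) for n in range(8)])   # [1, 0, 1, 0, 1, 0, 1, 0]
```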
The Turing computable sets of natural numbers are exactly the sets at levelΔ10{\displaystyle \Delta _{1}^{0}}of the arithmetical hierarchy. The recursively enumerable sets are exactly the sets at levelΣ10{\displaystyle \Sigma _{1}^{0}}.
Nooracle machineis capable of solving its ownhalting problem(a variation of Turing's proof applies). The halting problem for aΔn0,Y{\displaystyle \Delta _{n}^{0,Y}}oracle in fact sits inΣn+10,Y{\displaystyle \Sigma _{n+1}^{0,Y}}.
Post's theoremestablishes a close connection between the arithmetical hierarchy of sets of natural numbers and theTuring degrees. In particular, it establishes the following facts for alln≥ 1:
Thepolynomial hierarchyis a "feasible resource-bounded" version of the arithmetical hierarchy in which polynomial length bounds are placed on the numbers involved (or, equivalently, polynomial time bounds are placed on the Turing machines involved). It gives a finer classification of some sets of natural numbers that are at levelΔ10{\displaystyle \Delta _{1}^{0}}of the arithmetical hierarchy. | https://en.wikipedia.org/wiki/Arithmetical_hierarchy |
Natural language processing(NLP) is a subfield ofcomputer scienceand especiallyartificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded innatural languageand is thus closely related toinformation retrieval,knowledge representationandcomputational linguistics, a subfield oflinguistics.
Major tasks in natural language processing arespeech recognition,text classification,natural-language understanding, andnatural-language generation.
Natural language processing has its roots in the 1950s.[1]Already in 1950,Alan Turingpublished an article titled "Computing Machinery and Intelligence" which proposed what is now called theTuring testas a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language.
The premise of symbolic NLP is well-summarized byJohn Searle'sChinese roomexperiment: Given a collection of rules (e.g., a Chinese phrasebook, with questions and matching answers), the computer emulates natural language understanding (or other NLP tasks) by applying those rules to the data it confronts.
Up until the 1980s, most natural language processing systems were based on complex sets of hand-written rules. Starting in the late 1980s, however, there was a revolution in natural language processing with the introduction ofmachine learningalgorithms for language processing. This was due to both the steady increase in computational power (seeMoore's law) and the gradual lessening of the dominance ofChomskyantheories of linguistics (e.g.transformational grammar), whose theoretical underpinnings discouraged the sort ofcorpus linguisticsthat underlies the machine-learning approach to language processing.[8]
The symbolic approach, i.e., the hand-coding of a set of rules for manipulating symbols coupled with a dictionary lookup, was historically the first approach used both by AI in general and by NLP in particular,[18][19] for example by writing grammars or devising heuristic rules for stemming.
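As an illustration of this hand-coded, rule-based style, the following Python sketch applies a few suffix-stripping heuristics of the kind used for stemming. The rules are invented for the example and do not reproduce any particular stemmer.

```python
# A minimal sketch of a hand-written, rule-based stemmer: a short list of
# suffix-stripping rules applied in order. The rules are illustrative only.
SUFFIX_RULES = [("sses", "ss"), ("ies", "i"), ("ing", ""), ("ed", ""), ("s", "")]

def rule_based_stem(word):
    """Apply the first matching suffix rule, mimicking a hand-coded heuristic."""
    for suffix, replacement in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)] + replacement
    return word

print([rule_based_stem(w) for w in ["caresses", "ponies", "running", "agreed", "cats"]])
# ['caress', 'poni', 'runn', 'agre', 'cat'] -- crude, which is the point of the example
```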
Machine learningapproaches, which include both statistical and neural networks, on the other hand, have many advantages over the symbolic approach:
Rule-based systems are commonly used:
In the late 1980s and mid-1990s, the statistical approach ended a period ofAI winter, which was caused by the inefficiencies of the rule-based approaches.[20][21]
The earliestdecision trees, producing systems of hardif–then rules, were still very similar to the old rule-based approaches.
Only the introduction of hiddenMarkov models, applied to part-of-speech tagging, announced the end of the old rule-based approach.
A major drawback of statistical methods is that they require elaboratefeature engineering. Since 2015,[22]the statistical approach has been replaced by theneural networksapproach, usingsemantic networks[23]andword embeddingsto capture semantic properties of words.
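A minimal sketch of the word-embedding idea mentioned above: words are represented as dense vectors and semantic relatedness is scored by cosine similarity. The tiny hand-made vectors below are illustrative assumptions, not the output of any trained model.

```python
# Words mapped to small dense vectors; similarity measured by cosine similarity.
import math

embeddings = {
    "king":  [0.8, 0.65, 0.1],
    "queen": [0.78, 0.7, 0.12],
    "apple": [0.1, 0.05, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1: semantically related
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower: unrelated words
```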
Under the neural approach, intermediate tasks (e.g., part-of-speech tagging and dependency parsing) are no longer required.
Neural machine translation, based on then-newly inventedsequence-to-sequencetransformations, made obsolete the intermediate steps, such as word alignment, previously necessary forstatistical machine translation.
The following is a list of some of the most commonly researched tasks in natural language processing. Some of these tasks have direct real-world applications, while others more commonly serve as subtasks that are used to aid in solving larger tasks.
Though natural language processing tasks are closely intertwined, they can be subdivided into categories for convenience. A coarse division is given below.
Based on long-standing trends in the field, it is possible to extrapolate future directions of NLP. As of 2020, three trends among the topics of the long-standing series of CoNLL Shared Tasks can be observed:[46]
Most higher-level NLP applications involve aspects that emulate intelligent behaviour and apparent comprehension of natural language. More broadly speaking, the technical operationalization of increasingly advanced aspects of cognitive behaviour represents one of the developmental trajectories of NLP (see trends among CoNLL shared tasks above).
Cognitionrefers to "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses."[47]Cognitive scienceis the interdisciplinary, scientific study of the mind and its processes.[48]Cognitive linguisticsis an interdisciplinary branch of linguistics, combining knowledge and research from both psychology and linguistics.[49]Especially during the age ofsymbolic NLP, the area of computational linguistics maintained strong ties with cognitive studies.
As an example,George Lakoffoffers a methodology to build natural language processing (NLP) algorithms through the perspective of cognitive science, along with the findings of cognitive linguistics,[50]with two defining aspects:
Ties with cognitive linguistics are part of the historical heritage of NLP, but they have been less frequently addressed since the statistical turn of the 1990s. Nevertheless, approaches to developing cognitive models as technically operationalizable frameworks have been pursued in the context of various frameworks, e.g., cognitive grammar,[53] functional grammar,[54] construction grammar,[55] computational psycholinguistics and cognitive neuroscience (e.g., ACT-R), though with limited uptake in mainstream NLP (as measured by presence at major conferences[56] of the ACL). More recently, ideas of cognitive NLP have been revived as an approach to achieve explainability, e.g., under the notion of "cognitive AI".[57] Likewise, ideas of cognitive NLP are inherent to neural models of multimodal NLP (although rarely made explicit)[58] and to developments in artificial intelligence, specifically tools and technologies using large language model approaches[59] and new directions in artificial general intelligence based on the free energy principle[60] by the British neuroscientist and theoretician at University College London, Karl J. Friston. | https://en.wikipedia.org/wiki/Natural-language_processing
Bletchley Parkis anEnglish country houseand estate inBletchley,Milton Keynes(Buckinghamshire), that became the principal centre ofAlliedcode-breaking during the Second World War. DuringWorld War II, the estate housed theGovernment Code and Cypher School(GC&CS), which regularly penetrated the secret communications of theAxis Powers– most importantly the GermanEnigmaandLorenzciphers. The GC&CS team of codebreakers includedJohn Tiltman,Dilwyn Knox,Alan Turing,Harry Golombek,Gordon Welchman,Hugh Alexander,Donald Michie,Bill TutteandStuart Milner-Barry.
The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development ofColossus, the world's first programmable digital electronic computer.[a]Codebreaking operations at Bletchley Park ended in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses and now houses theBletchley Park Museum.
A mansion was first built here in 1711, with the current house built in the 1870s.[1]In 1938, the mansion and much of the site was bought by a builder for a housing estate, but in May 1938 Admiral SirHugh Sinclair, head of theSecret Intelligence Service(SIS orMI6), bought the mansion and 58 acres (23 ha) of land for £6,000 (£484,000 today) for use by Code and Cypher School and SIS in the event of war. He used his own money as the Government said they did not have the budget to do so.[2]
A key advantage seen by Sinclair and his colleagues (inspecting the site under the cover of "Captain Ridley's shooting party")[3]was Bletchley's geographical centrality. It was almost immediately adjacent toBletchley railway station, where the "Varsity Line" betweenOxfordandCambridge– whose universities were expected to supply many of the code-breakers – met the mainWest Coast railway lineconnecting London,Birmingham,Manchester,Liverpool,GlasgowandEdinburgh.Watling Street, the main road linking London to the north-west (subsequently theA5) was close by, and high-volume communication links were available at the telegraph and telephone repeater station in nearbyFenny Stratford.[4]
Five weeks before the outbreak of war, Warsaw'sCipher Bureaurevealedits achievementsin breaking Enigma to astonished French and British personnel.[5]The British used the Poles' information and techniques, and theEnigma clonesent to them in August 1939, which greatly increased their (previously very limited) success in decrypting Enigma messages.[6]
The first personnel of theGovernment Code and Cypher School(GC&CS) moved to Bletchley Park on 15 August 1939. The Naval, Military, and Air Sections were on the ground floor of the mansion, together with a telephone exchange, teleprinter room, kitchen, and dining room; the top floor was allocated toMI6. Construction of the wooden huts began in late 1939, and Elmers School, a neighbouring boys' boarding school in a Victorian Gothic redbrick building by a church, was acquired for the Commercial and Diplomatic Sections.[8]
The only direct enemy damage to the site was done 20–21 November 1940 by three bombs probably intended forBletchley railway station; Hut 4, shifted two feet off its foundation, was winched back into place as work inside continued.[9]
During a morale-boosting visit on 9 September 1941,Winston Churchillreportedly remarked to Denniston or Menzies: "I told you to leave no stone unturned to get staff, but I had no idea you had taken me so literally."[10]Six weeks later, having failed to get sufficient typing and unskilled staff to achieve the productivity that was possible, Turing, Welchman, Alexander and Milner-Barry wrote directly to Churchill. His response was "Action this day make sure they have all they want on extreme priority and report to me that this has been done."[11]
After theUnited Statesjoined World War II, a number of Americancryptographerswere posted toHut 3, and from May 1943 onwards there was close co-operation between British and American intelligence[12]leading to the1943 BRUSA Agreementwhich was the forerunner of theFive Eyespartnership.[13]
In contrast, theSoviet Unionwas never officially told of Bletchley Park and its activities, a reflection of Churchill's distrust of the Soviets even during the US-UK-USSR alliance imposed by the Nazi threat.[14]However Bletchley Park was infiltrated by the Soviet moleJohn Cairncross, a member of theCambridge Spy Ring, who leaked Ultra material to Moscow.[15]
After the War, the secrecy imposed on Bletchley staff remained in force, so that most relatives never knew more than that a child, spouse, or parent had done some kind of secret war work.[16]Churchill referred to the Bletchley staff as "the geese who laid the golden eggs and never cackled".[17]That said, occasional mentions of the work performed at Bletchley Park slipped the censor's net and appeared in print.[18]
The site passed through a succession of hands and saw a number of uses, including as a teacher-training college and localGPOheadquarters. By 1991, the site was nearly empty and the buildings were at risk of demolition for redevelopment,[19]before the gradual development of theBletchley Park Museum.[20]
The Government Code & Cypher School became the Government Communications Headquarters (GCHQ), moving to Eastcote in 1946 and to Cheltenham in the 1950s.[21] The site was used by various government agencies, including the GPO and the Civil Aviation Authority. One large building, Block F, was demolished in 1987, by which time the site was being run down and tenants were leaving.[22]
AdmiralHugh Sinclairwas the founder and head of GC&CS between 1919 and 1938 with CommanderAlastair Dennistonbeing operational head of the organization from 1919 to 1942, beginning with its formation from theAdmiralty'sRoom 40(NID25) and theWar Office'sMI1b.[23]Key GC&CScryptanalystswho moved from London to Bletchley Park includedJohn Tiltman,Dillwyn "Dilly" Knox,Josh Cooper,Oliver StracheyandNigel de Grey. These people had a variety of backgrounds – linguists and chess champions were common, and Knox's field waspapyrology. The British War Office recruited top solvers ofcryptic crosswordpuzzles, as these individuals had stronglateral thinkingskills.[24]
Onthe day Britain declared war on Germany, Denniston wrote to theForeign Officeabout recruiting "men of the professor type".[25]Personal networking drove early recruitments, particularly of men from the universities of Cambridge and Oxford. Trustworthy women were similarly recruited for administrative and clerical jobs.[26]In one 1941 recruiting stratagem,The Daily Telegraphwas asked to organise a crossword competition, after which promising contestants were discreetly approached about "a particular type of work as a contribution to the war effort".[27]
Denniston recognised, however, that the enemy's use of electromechanical cipher machines meant that formally trained mathematicians would also be needed;[28]Oxford'sPeter Twinnjoined GC&CS in February 1939;[29]Cambridge'sAlan Turing[30]andGordon Welchman[31]began training in 1938 and reported to Bletchley the day after war was declared, along withJohn Jeffreys. Later-recruited cryptanalysts included the mathematiciansDerek Taunt,[32]Jack Good,Bill Tutte,[33]andMax Newman; historianHarry Hinsley, and chess championsHugh AlexanderandStuart Milner-Barry.[34]Joan Clarkewas one of the few women employed at Bletchley as a full-fledged cryptanalyst.[35][36]
When seeking to recruit more suitably advanced linguists,John Tiltmanturned toPatrick Wilkinsonof the Italian section for advice, and he suggested askingLord Lindsay of Birker, ofBalliol College, Oxford, S. W. Grose, andMartin Charlesworth, ofSt John's College, Cambridge, to recommend classical scholars or applicants to their colleges.[37]
This eclectic staff of "BoffinsandDebs" (scientists and debutantes, young women of high society)[38]caused GC&CS to be whimsically dubbed the "Golf, Cheese and Chess Society".[39]Among those who worked there and later became famous in other fields were historianAsa Briggs, politicianRoy Jenkinsand novelistAngus Wilson.[40]
After initial training at the Inter-Service Special Intelligence School set up byJohn Tiltman(initially at an RAF depot in Buckingham and later inBedford– where it was known locally as "the Spy School")[41]staff worked a six-day week, rotating through three shifts: 4 p.m. to midnight, midnight to 8 a.m. (the most disliked shift), and 8 a.m. to 4 p.m., each with a half-hour meal break. At the end of the third week, a worker went off at 8 a.m. and came back at 4 p.m., thus putting in 16 hours on that last day. The irregular hours affected workers' health and social life, as well as the routines of the nearby homes at which most staff lodged. The work was tedious and demanded intense concentration; staff got one week's leave four times a year, but some "girls" collapsed and required extended rest.[42]Recruitment took place to combat a shortage of experts in Morse code and German.[43]
In January 1945, at the peak of codebreaking efforts, 8,995 personnel were working at Bletchley and its outstations.[44]About three-quarters of these were women.[45]Many of the women came from middle-class backgrounds and held degrees in the areas of mathematics, physics and engineering; they were given the chance due to the lack of men, who had been sent to war. They performed calculations and coding and hence were integral to the computing processes.[46]Among them wereEleanor Ireland, who worked on theColossus computers[47]andRuth Briggs, a German scholar, who worked within the Naval Section.[48][49]
The female staff in Dilwyn Knox's section were sometimes termed "Dilly's Fillies".[50]Knox's methods enabledMavis Lever(who married mathematician and fellow code-breakerKeith Batey) andMargaret Rockto solve a German code, theAbwehrcipher.[51][52]
Many of the women had backgrounds in languages, particularly French, German and Italian. Among them wereRozanne Colchester, a translator who worked mainly for the Italian air forces Section,[53]andCicely Mayhew, recruited straight from university, who worked in Hut 8, translating decoded German Navy signals,[54]as didJane Fawcett(née Hughes) who decrypted a vital message concerning theGerman battleshipBismarckand after the war became an opera singer and buildings conservationist.[40]
Alan Brooke (CIGS) frequently referred to "intercepts" in his secret wartime diary.[55]
Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities that made Bletchley's attacks just barely feasible. These vulnerabilities, however, could have been remedied by relatively simple improvements in enemy procedures,[5] and such changes would certainly have been implemented had Germany had any hint of Bletchley's success. Thus the intelligence Bletchley produced was considered wartime Britain's "Ultra secret" – higher even than the normally highest classification Most Secret – and security was paramount.[56]
All staff signed theOfficial Secrets Act (1939)and a 1942 security warning emphasised the importance of discretion even within Bletchley itself: "Do not talk at meals. Do not talk in the transport. Do not talk travelling. Do not talk in the billet. Do not talk by your own fireside. Be careful even in your Hut ..."[57]
Nevertheless, there were security leaks.Jock Colville, the Assistant Private Secretary toWinston Churchill, recorded in his diary on 31 July 1941, that the newspaper proprietorLord Camrosehad discovered Ultra and that security leaks "increase in number and seriousness".[58]
Despite the high degree of secrecy surrounding Bletchley Park during the Second World War, unique and hitherto unknown amateur film footage of the outstation at nearbyWhaddon Hallcame to light in 2020, after being anonymously donated to the Bletchley Park Trust.[59][60]A spokesman for the Trust noted the film's existence was all the more incredible because it was "very, very rare even to have [still] photographs" of the park and its associated sites.[61]
Bletchley Park was known as "B.P." to those who worked there.[62]"Station X" (X =Roman numeralten), "London Signals Intelligence Centre", and "Government Communications Headquarters" were all cover names used during the war.[63]The formal posting of the many "Wrens" – members of theWomen's Royal Naval Service– working there, was toHMSPembroke V. Royal Air Force names of Bletchley Park and its outstations includedRAF Eastcote, RAF Lime Grove and RAF Church Green.[64]The postal address that staff had to use was "Room 47, Foreign Office".[65]
Initially, when only a very limited amount of Enigma traffic was being read,[67]deciphered non-Naval Enigma messages were sent fromHut 6toHut 3which handled their translation and onward transmission. Subsequently, underGroup Captain Eric Jones, Hut 3 expanded to become the heart of Bletchley Park's intelligence effort, with input from decrypts of "Tunny" (Lorenz SZ42) traffic and many other sources. Early in 1942 it moved into Block D, but its functions were still referred to as Hut 3.[68]
Hut 3 contained a number of sections: Air Section "3A", Military Section "3M", a small Naval Section "3N", a multi-service Research Section "3G" and a large liaison section "3L".[69]It also housed the Traffic Analysis Section, SIXTA.[70]An important function that allowed the synthesis of raw messages into valuableMilitary intelligencewas the indexing and cross-referencing of information in a number of different filing systems.[71]Intelligence reports were sent out to the Secret Intelligence Service, the intelligence chiefs in the relevant ministries, and later on to high-level commanders in the field.[72]
Naval Enigma deciphering was inHut 8, with translation inHut 4. Verbatim translations were sent to theNaval Intelligence Division(NID) of the Admiralty's Operational Intelligence Centre (OIC), supplemented by information from indexes as to the meaning of technical terms and cross-references from a knowledge store of German naval technology.[73]Where relevant to non-naval matters, they would also be passed to Hut 3. Hut 4 also decoded a manual system known as the dockyard cipher, which sometimes carried messages that were also sent on an Enigma network. Feeding these back to Hut 8 provided excellent "cribs" forKnown-plaintext attackson the daily naval Enigma key.[74]
Initially, awirelessroom was established at Bletchley Park.
It was set up in the mansion's water tower under the code name "Station X",[75]a term now sometimes applied to the codebreaking efforts at Bletchley as a whole. The "X" is theRoman numeral"ten", this being the Secret Intelligence Service's tenth such station. Due to the long radio aerials stretching from the wireless room, the radio station was moved from Bletchley Park to nearbyWhaddon Hallto avoid drawing attention to the site.[76][77]
Subsequently, other listening stations – theY-stations, such as the ones atChicksandsin Bedfordshire,Beaumanor Hall, Leicestershire (where the headquarters of the War Office "Y" Group was located) andBeeston Hill Y Stationin Norfolk – gathered raw signals for processing at Bletchley. Coded messages were taken down by hand and sent to Bletchley on paper by motorcycledespatch ridersor (later) by teleprinter.[78]
The wartime needs required the building of additional accommodation.[79]
Often a hut's number became so strongly associated with the work performed inside that even when the work was moved to another building it was still referred to by the original "Hut" designation.[80][81]
In addition to the wooden huts, there were a number of brick-built "blocks".
Most German messages decrypted at Bletchley were produced by one or another version of theEnigmacipher machine, but an important minority were produced by the even more complicated twelve-rotorLorenz SZ42 on-line teleprinter cipher machineused for high command messages, known asFish.[95]
Thebombewas an electromechanical device whose function was to discover some of the daily settings of the Enigma machines on the various German militarynetworks.[97][98][99]Its pioneering design was developed byAlan Turing(with an important contribution from Gordon Welchman) and the machine was engineered byHarold 'Doc' Keenof theBritish Tabulating Machine Company. Each machine was about 7 feet (2.1 m) high and wide, 2 feet (0.61 m) deep and weighed about a ton.[100]
At its peak, GC&CS was reading approximately 4,000 messages per day.[101]As a hedge against enemy attack[102]most bombes were dispersed to installations atAdstockandWavendon(both later supplanted by installations atStanmoreandEastcote), andGayhurst.[103][104]
Luftwaffemessages were the first to be read in quantity. The German navy had much tighter procedures, and the capture of code books was needed before they could be broken. When, in February 1942, the German navy introduced the four-rotor Enigma for communications with its Atlantic U-boats, this traffic became unreadable for a period of ten months.[105]Britain produced modified bombes, but it was the success of theUS Navy Bombethat was the main source of reading messages from this version of Enigma for the rest of the war. Messages were sent to and fro across the Atlantic by enciphered teleprinter links.[78]
Bletchley's work was essential to defeating theU-boatsin theBattle of the Atlantic, and to the British naval victories in theBattle of Cape Matapanand theBattle of North Cape. In 1941, Ultra exerted a powerful effect on theNorth African desert campaignagainst German forces under GeneralErwin Rommel. General SirClaude Auchinleckwrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". While not changing the events, "Ultra" decrypts featured prominently in the story ofOperation SALAM,László Almásy's mission acrossthe desertbehind Allied lines in 1942.[106]Prior to theNormandy landingson D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western-front divisions.[107]
TheLorenz messageswere codenamedTunnyat Bletchley Park. They were only sent in quantity from mid-1942. The Tunny networks were used for high-level messages between German High Command and field commanders. With the help of German operator errors, the cryptanalysts in theTestery(named afterRalph Tester, its head) worked out the logical structure of the machine despite not knowing its physical form. They devised automatic machinery to help with decryption, which culminated inColossus, the world's first programmable digital electronic computer. This was designed and built byTommy Flowersand his team at thePost Office Research StationatDollis Hill. The prototype first worked in December 1943, was delivered to Bletchley Park in January and first worked operationally on 5 February 1944. Enhancements were developed for the Mark 2 Colossus, the first of which was working at Bletchley Park on the morning of 1 June in time forD-day. Flowers then produced one Colossus a month for the rest of the war, making a total of ten with an eleventh part-built. The machines were operated mainly by Wrens in a section named theNewmanryafter its headMax Newman.[108]
Italian signals had been of interest since Italy's attack on Abyssinia in 1935.
During theSpanish Civil WartheItalian Navyused the K model of the commercial Enigma without a plugboard; this was solved by Knox in 1937.
When Italy entered the war in 1940 an improved version of the machine was used, though little traffic was sent by it and there were "wholesale changes" in Italian codes and cyphers.[109]
Knox was given a new section for work on Enigma variations, which he staffed with women ("Dilly's girls"), who includedMargaret Rock, Jean Perrin, Clare Harding, Rachel Ronald, Elisabeth Granger; andMavis Lever.[110]Mavis Lever solved the signals revealing the Italian Navy's operational plans before theBattle of Cape Matapanin 1941, leading to a British victory.[111]
Although most Bletchley staff did not know the results of their work, AdmiralCunninghamvisited Bletchley in person a few weeks later to congratulate them.[111]
On entering World War II in June 1940, theItalianswere using book codes for most of their military messages. The exception was theItalian Navy, which after the Battle of Cape Matapan started using theC-38version of theBoris Hagelinrotor-basedcipher machine, particularly to route their navy and merchant marine convoys to the conflict in North Africa.[112]As a consequence,JRM Butlerrecruited his former studentBernard Willsonto join a team with two others in Hut 4.[86][113]In June 1941, Willson became the first of the team to decode the Hagelin system, thus enabling military commanders to direct theRoyal NavyandRoyal Air Forceto sink enemy ships carrying supplies from Europe to Rommel'sAfrika Korps. This led to increased shipping losses and, from reading the intercepted traffic, the team learnt that between May and September 1941 the stock of fuel for theLuftwaffein North Africa reduced by 90 per cent.[114]After an intensive language course, in March 1944 Willson switched to Japanese language-based codes.[115]
A Middle East Intelligence Centre (MEIC) was set up in Cairo in 1939. When Italy entered the war in June 1940, delays in forwarding intercepts to Bletchley via congested radio links resulted in cryptanalysts being sent to Cairo. A Combined Bureau Middle East (CBME) was set up in November, though the Middle East authorities made "increasingly bitter complaints" that GC&CS was giving too little priority to work on Italian cyphers. However, the principle of concentrating high-grade cryptanalysis at Bletchley was maintained.[116]John Chadwickstarted cryptanalysis work in 1942 on Italian signals at the naval base 'HMS Nile' in Alexandria. Later, he was with GC&CS; in the Heliopolis Museum, Cairo and then in the Villa Laurens, Alexandria.[117]
Soviet signals had been studied since the 1920s. In 1939–40,John Tiltman(who had worked on Russian Army traffic from 1930) set up two Russian sections at Wavendon (a country house near Bletchley) and atSarafandin Palestine. Two Russian high-grade army and navy systems were broken early in 1940. Tiltman spent two weeks in Finland, where he obtained Russian traffic from Finland and Estonia in exchange for radio equipment. In June 1941, when the Soviet Union became an ally, Churchill ordered a halt to intelligence operations against it. In December 1941, the Russian section was closed down, but in late summer 1943 or late 1944, a small GC&CS Russian cypher section was set up in London overlooking Park Lane, then in Sloane Square.[118]
An outpost of the Government Code and Cypher School had been set up in Hong Kong in 1935, theFar East Combined Bureau(FECB). The FECB naval staff moved in 1940 to Singapore, thenColombo,Ceylon, thenKilindini,Mombasa, Kenya. They succeeded in deciphering Japanese codes with a mixture of skill and good fortune.[119]The Army and Air Force staff went from Singapore to theWireless Experimental CentreatDelhi, India.[120]
In early 1942, a six-month crash course in Japanese, for 20 undergraduates from Oxford and Cambridge, was started by the Inter-Services Special Intelligence School in Bedford, in a building across from the main Post Office. This course was repeated every six months until war's end. Most of those completing these courses worked on decoding Japanese naval messages inHut 7, underJohn Tiltman.[120]
By mid-1945, well over 100 personnel were involved with this operation, which co-operated closely with the FECB and the US Signal intelligence Service atArlington Hall, Virginia. In 1999, Michael Smith wrote that: "Only now are the British codebreakers (likeJohn Tiltman,Hugh Foss, andEric Nave) beginning to receive the recognition they deserve for breaking Japanese codes and cyphers".[121]
Until the mid-1970s the thirty-year rule meant that there was no official mention of Bletchley Park. As a result, codes broken at Bletchley Park played an important role in many operations, yet that role was absent from the published histories of those events.[122]
With the publication of F. W. Winterbotham's The Ultra Secret in 1974,[123][b] public discussion of Bletchley Park's work in the English-speaking world finally became accepted, although some former staff considered themselves bound to silence forever.[124] Winterbotham's book was written from memory; although its publication was officially allowed, he had no access to the archives.[125]
Not until July 2009 did the British government fully acknowledge the contribution of the many people working at Bletchley Park.[126]Only then was a commemorative medal struck to be presented to those involved.[127]The gilded medal bears the inscriptionGC&CS 1939–1945 Bletchley Park and its Outstations.[128]
The Bletchley Park Museum operates on the current site[129] with a learning centre and a science centre. Other organisations share the campus, including The National Museum of Computing and the Radio Society of Great Britain's National Radio Centre. The construction of a National College of Cyber Security had previously been envisaged on the site.[130]
Bletchley Park is opposite Bletchley railway station. It is close to junctions 13 and 14 of the M1, about 50 miles (80 km) northwest of London.[131] | https://en.wikipedia.org/wiki/Bletchley_Park
Incomputer science, aradix tree(alsoradix trieorcompact prefix treeorcompressed trie) is adata structurethat represents aspace-optimizedtrie(prefix tree) in which each node that is the only child is merged with its parent. The result is that the number of children of every internal node is at most theradixrof the radix tree, wherer= 2xfor some integerx≥ 1. Unlike regular trees, edges can be labeled with sequences of elements as well as single elements. This makes radix trees much more efficient for small sets (especially if the strings are long) and for sets of strings that share long prefixes.
Unlike regular trees (where whole keys are compareden massefrom their beginning up to the point of inequality), the key at each node is compared chunk-of-bits by chunk-of-bits, where the quantity of bits in that chunk at that node is the radixrof the radix trie. Whenris 2, the radix trie is binary (i.e., compare that node's 1-bit portion of the key), which minimizes sparseness at the expense of maximizing trie depth—i.e., maximizing up to conflation of nondiverging bit-strings in the key. Whenr≥ 4 is a power of 2, then the radix trie is anr-ary trie, which lessens the depth of the radix trie at the expense of potential sparseness.
As an optimization, edge labels can be stored in constant size by using two pointers to a string (for the first and last elements).[1]
Note that although the examples in this article show strings as sequences of characters, the type of the string elements can be chosen arbitrarily; for example, as a bit or byte of the string representation when usingmultibyte characterencodings orUnicode.
Radix trees are useful for constructingassociative arrayswith keys that can be expressed as strings. They find particular application in the area ofIProuting,[2][3][4]where the ability to contain large ranges of values with a few exceptions is particularly suited to the hierarchical organization ofIP addresses.[5]They are also used forinverted indexesof text documents ininformation retrieval.
Radix trees support insertion, deletion, and searching operations. Insertion adds a new string to the trie while trying to minimize the amount of data stored. Deletion removes a string from the trie. Searching operations include (but are not necessarily limited to) exact lookup, find predecessor, find successor, and find all strings with a prefix. All of these operations are O(k) where k is the maximum length of all strings in the set, where length is measured in the quantity of bits equal to the radix of the radix trie.
The lookup operation determines if a string exists in a trie. Most operations modify this approach in some way to handle their specific tasks. For instance, the node where a string terminates may be of importance. This operation is similar to tries except that some edges consume multiple elements.
The pseudo code for these operations assumes two types with the following members: an Edge, which stores a target node and a string label, and a Node, which stores its outgoing edges and a flag (isLeaf) indicating whether a stored string terminates at that node.
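A minimal Python sketch of this structure and of the lookup operation follows; the class and method names are illustrative assumptions, and edges are stored as a mapping from label to child node.

```python
# Each node keeps its outgoing edges (label -> child) and an is_leaf flag
# marking the end of a stored key.
class RadixNode:
    def __init__(self, is_leaf=False):
        self.edges = {}        # maps edge label (a string chunk) -> child RadixNode
        self.is_leaf = is_leaf # True if a stored string terminates here

def lookup(root, word):
    """Exact-match lookup: follow the edge whose label is a prefix of the
    remaining suffix, consuming several characters per edge (unlike a plain trie).
    As noted in the text, this does not handle empty-string edge labels."""
    node, i = root, 0
    while i < len(word):
        for label, child in node.edges.items():
            if word.startswith(label, i):   # this edge consumes len(label) elements
                node, i = child, i + len(label)
                break
        else:
            return False                    # no edge matches the remaining suffix
    return node.is_leaf
```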
To insert a string, we search the tree until we can make no further progress. At this point we either add a new outgoing edge labeled with all remaining elements in the input string, or if there is already an outgoing edge sharing a prefix with the remaining input string, we split it into two edges (the first labeled with the common prefix) and proceed. This splitting step ensures that no node has more children than there are possible string elements.
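Continuing the sketch above, insertion can be outlined as follows. This is an illustrative rendering of the described splitting step, reusing the RadixNode class and compatible with the lookup sketch; it is not the article's own pseudo code.

```python
def _common_prefix_len(a, b):
    """Length of the longest common prefix of two strings."""
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def insert(root, word):
    """Descend as far as possible, then either attach a new leaf edge with the
    remaining suffix or split an existing edge at the shared prefix."""
    node, i = root, 0
    while True:
        if i == len(word):                      # word fully consumed: mark terminal node
            node.is_leaf = True
            return
        rest = word[i:]
        # At most one sibling edge can share a first character with `rest`.
        label = next((l for l in node.edges if _common_prefix_len(l, rest) > 0), None)
        if label is None:                       # no shared prefix: add a new leaf edge
            node.edges[rest] = RadixNode(is_leaf=True)
            return
        k = _common_prefix_len(label, rest)
        child = node.edges[label]
        if k < len(label):                      # split the edge at the common prefix
            mid = RadixNode()
            mid.edges[label[k:]] = child
            del node.edges[label]
            node.edges[label[:k]] = mid
            child = mid
        node, i = child, i + k                  # continue below the (possibly new) node

root = RadixNode()
for w in ["test", "team", "toast"]:
    insert(root, w)
print(lookup(root, "team"), lookup(root, "te"))   # True False
```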
Several cases of insertion are shown below, though more may exist. Note that r simply represents the root. It is assumed that edges can be labelled with empty strings to terminate strings where necessary and that the root has no incoming edge. (The lookup algorithm described above will not work when using empty-string edges.)
To delete a string x from a tree, we first locate the leaf representing x. Then, assuming x exists, we remove the corresponding leaf node. If the parent of our leaf node has only one other child, then that child's incoming label is appended to the parent's incoming label and the child is removed.
The datastructure was invented in 1968 by Donald R. Morrison,[6]with whom it is primarily associated, and by Gernot Gwehenberger.[7]
Donald Knuth, pages 498–500 in Volume III of The Art of Computer Programming, calls these "Patricia's trees", presumably after the acronym in the title of Morrison's paper: "PATRICIA - Practical Algorithm to Retrieve Information Coded in Alphanumeric". Today, Patricia trees are seen as radix trees with radix equal to 2, which means that each bit of the key is compared individually and each node is a two-way (i.e., left versus right) branch.
(In the following comparisons, it is assumed that the keys are of lengthkand the data structure containsnmembers.)
Unlikebalanced trees, radix trees permit lookup, insertion, and deletion in O(k) time rather than O(logn). This does not seem like an advantage, since normallyk≥ logn, but in a balanced tree every comparison is a string comparison requiring O(k) worst-case time, many of which are slow in practice due to long common prefixes (in the case where comparisons begin at the start of the string). In a trie, all comparisons require constant time, but it takesmcomparisons to look up a string of lengthm. Radix trees can perform these operations with fewer comparisons, and require many fewer nodes.
Radix trees also share the disadvantages of tries, however: as they can only be applied to strings of elements or elements with an efficiently reversible mapping to strings, they lack the full generality of balanced search trees, which apply to any data type with atotal ordering. A reversible mapping to strings can be used to produce the required total ordering for balanced search trees, but not the other way around. This can also be problematic if a data type onlyprovidesa comparison operation, but not a (de)serializationoperation.
Hash tablesare commonly said to have expected O(1) insertion and deletion times, but this is only true when considering computation of the hash of the key to be a constant-time operation. When hashing the key is taken into account, hash tables have expected O(k) insertion and deletion times, but may take longer in the worst case depending on how collisions are handled. Radix trees have worst-case O(k) insertion and deletion. The successor/predecessor operations of radix trees are also not implemented by hash tables.
A common extension of radix trees uses two colors of nodes, 'black' and 'white'. To check if a given string is stored in the tree, the search starts from the top and follows the edges of the input string until no further progress can be made. If the search string is consumed and the final node is a black node, the search has failed; if it is white, the search has succeeded. This enables us to add a large range of strings with a common prefix to the tree, using white nodes, then remove a small set of "exceptions" in a space-efficient manner byinsertingthem using black nodes.
The HAT-trie is a cache-conscious data structure based on radix trees that offers efficient string storage and retrieval, and ordered iterations. Performance, with respect to both time and space, is comparable to the cache-conscious hash table.[8][9]
APATRICIAtrie is a special variant of the radix 2 (binary) trie, in which rather than explicitly store every bit of every key, the nodes store only the position of the first bit which differentiates two sub-trees. During traversal the algorithm examines the indexed bit of the search key and chooses the left or right sub-tree as appropriate. Notable features of the PATRICIA trie include that the trie only requires one node to be inserted for every unique key stored, making PATRICIA much more compact than a standard binary trie. Also, since the actual keys are no longer explicitly stored it is necessary to perform one full key comparison on the indexed record in order to confirm a match. In this respect PATRICIA bears a certain resemblance to indexing using a hash table.[6]
Theadaptive radix treeis a radix tree variant that integrates adaptive node sizes to the radix tree. One major drawback of the usual radix trees is the use of space, because it uses a constant node size in every level. The major difference between the radix tree and the adaptive radix tree is its variable size for each node based on the number of child elements, which grows while adding new entries. Hence, the adaptive radix tree leads to a better use of space without reducing its speed.[10][11][12]
A common practice is to relax the criteria of disallowing parents with only one child in situations where the parent represents a valid key in the data set. This variant of radix tree achieves a higher space efficiency than the one which only allows internal nodes with at least two children.[13] | https://en.wikipedia.org/wiki/Radix_tree |
Metacomputingis allcomputingand computing-oriented activity which involves computingknowledge(science and technology) utilized for theresearch, development and application of different types of computing. It may also deal with numerous types of computing applications, such as: industry, business, management and human-related management. New emerging fields of metacomputing focus on the methodological and technological aspects of the development of largecomputer networks/grids, such as theInternet,intranetand other territorially distributed computer networks for special purposes.[1]
Metacomputing, as acomputing of computing, includes: the organization of large computer networks, choice of the design criteria (for example:peer-to-peeror centralized solution) and metacomputing software (middleware,metaprogramming) development where, in the specific domains, the concept metacomputing is used as a description of softwaremeta-layerswhich are networked platforms for the development of user-oriented calculations, for example forcomputational physicsandbio-informatics.
Here, serious scientific problems ofsystems/networkscomplexityemerge, not only related to domain-dependentcomplexitiesbut focused onsystemicmeta-complexityof computer network infrastructures.
Metacomputing is also a useful descriptor for self-referential programming systems. Often these systems are functional asfifth-generation computer languageswhich require the use of an underlying metaprocessor software operating system in order to be operative. Typically metacomputing occurs in an interpreted or real-time compiling system since the changing nature of information in processing results may result in an unpredictable compute state throughout the existence of the metacomputer (the information state operated upon by the metacomputing platform).
From the human and social perspectives, metacomputing is especially focused on: human-computer software, cognitive interrelations/interfaces, the possibilities of the development of intelligent computer grids for the cooperation of human organizations, and onubiquitous computingtechnologies. In particular, it relates to the development of software infrastructures for the computational modeling and simulation ofcognitive architecturesfor variousdecision support systems.
Metacomputing refers to the general problems ofcomputationalityof human knowledge, to the limits of the transformation of human knowledge and individual thinking to the form of computer programs. These and similar questions are also of interest ofmathematical psychology. | https://en.wikipedia.org/wiki/Metacomputing |
Inpsychoacoustics, apure toneis a sound with asinusoidalwaveform; that is, asinewaveof constantfrequency,phase-shift, andamplitude.[1]By extension, insignal processinga single-frequency tone or pure tone is a purely sinusoidalsignal(e.g., a voltage).
A pure tone has the property – unique among real-valued wave shapes – that its wave shape is unchanged bylinear time-invariant systems; that is, only the phase and amplitude change between such a system's pure-tone input and its output.
Sine and cosine waves can be used asbasicbuilding blocks of more complex waves. As additional sine waves having different frequencies arecombined, the waveform transforms from a sinusoidal shape into a more complex shape.
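A short NumPy sketch of this idea: a pure tone is a single sinusoid, and summing sinusoids of different frequencies produces a more complex waveform. The sampling rate, frequencies, and amplitudes below are arbitrary illustrative choices.

```python
# A pure tone versus a sum of sinusoids with different frequencies.
import numpy as np

sample_rate = 8000                        # samples per second (illustrative)
t = np.arange(0, 0.01, 1 / sample_rate)   # 10 ms of time points

pure_tone = np.sin(2 * np.pi * 440 * t)   # a single 440 Hz sinusoid: a pure tone
complex_wave = (np.sin(2 * np.pi * 440 * t)
                + 0.5 * np.sin(2 * np.pi * 880 * t)
                + 0.25 * np.sin(2 * np.pi * 1320 * t))   # sum of three sinusoids

print(pure_tone[:5])      # still a sinusoid
print(complex_wave[:5])   # no longer sinusoidal in shape
```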
When considered as part of a wholespectrum, a pure tone may also be called aspectral component.
In clinicalaudiology, pure tones are used forpure-tone audiometryto characterize hearing thresholds at different frequencies.Sound localizationis often more difficult with pure tones than with other sounds.[2][3]
Pure tones were used by 19th-century physicists such as Georg Ohm and Hermann von Helmholtz to support theories asserting that the ear functions in a way equivalent to a Fourier frequency analysis.[4][5] In Ohm's acoustic law, later further elaborated by Helmholtz, musical tones are perceived as a set of pure tones. The percept of pitch depends on the frequency of the most prominent tone, and the phases of the individual components are discarded. This theory has often been blamed for creating a confusion between pitch, frequency and pure tones.[6]
Unlikemusical tonesthat are composed of the sum of a number of harmonically related sinusoidal components, pure tones only contain one such sinusoidal waveform. When presented in isolation, and when its frequency pertains to a certain range, pure tones give rise to a single pitch percept, which can be characterized by its frequency. In this situation, the instantaneous phase of the pure tone varies linearly with time. If a pure tone gives rise to a constant, steady-state percept, then it can be concluded that its phase does not influence this percept. However, when multiple pure tones are presented at once, like in musical tones, their relative phase plays a role in the resulting percept. In such a situation, the perceived pitch is not determined by the frequency of any individual component, but by the frequency relationship between these components (seemissing fundamental). | https://en.wikipedia.org/wiki/Pure_tone |
Indatabases, andtransaction processing(transaction management),snapshot isolationis a guarantee that all reads made in atransactionwill see a consistent snapshot of the database (in practice it reads the last committed values that existed at the time it started), and the transaction itself will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot.
Snapshot isolation has been adopted by several majordatabase management systems, such asInterBase,Firebird,Oracle,MySQL,[1]PostgreSQL,SQL Anywhere,MongoDB[2]andMicrosoft SQL Server(2005 and later). The main reason for its adoption is that it allows better performance thanserializability, yet still avoids most of the concurrency anomalies that serializability avoids (but not all). In practice snapshot isolation is implemented withinmultiversion concurrency control(MVCC), where generational values of each data item (versions) are maintained: MVCC is a common way to increase concurrency and performance by generating a new version of adatabase objecteach time the object is written, and allowing transactions' read operations of several last relevant versions (of each object). Snapshot isolation has been used[3]to criticize theANSISQL-92 standard's definition ofisolationlevels, as it exhibits none of the "anomalies" that the SQL standard prohibited, yet is not serializable (the anomaly-free isolation level defined by ANSI).
In spite of its distinction from serializability, snapshot isolation is sometimes referred to asserializableby Oracle.
A transaction executing under snapshot isolation appears to operate on a personalsnapshotof the database, taken at the start of the transaction. When the transaction concludes, it will successfully commit only if the values updated by the transaction have not been changed externally since the snapshot was taken. Such awrite–write conflictwill cause the transaction to abort.
In awrite skewanomaly, two transactions (T1 and T2) concurrently read an overlapping data set (e.g. values V1 and V2), concurrently make disjoint updates (e.g. T1 updates V1, T2 updates V2), and finally concurrently commit, neither having seen the update performed by the other. Were the system serializable, such an anomaly would be impossible, as either T1 or T2 would have to occur "first", and be visible to the other. In contrast, snapshot isolation permits write skew anomalies.
As a concrete example, imagine V1 and V2 are two balances held by a single person, Phil. The bank will allow either V1 or V2 to run a deficit, provided the total held in both is never negative (i.e. V1 + V2 ≥ 0). Both balances are currently $100. Phil initiates two transactions concurrently, T1 withdrawing $200 from V1, and T2 withdrawing $200 from V2.
If the database guaranteed serializable transactions, the simplest way of coding T1 is to deduct $200 from V1, and then verify that V1 + V2 ≥ 0 still holds, aborting if not. T2 similarly deducts $200 from V2 and then verifies V1 + V2 ≥ 0. Since the transactions must serialize, either T1 happens first, leaving V1 = −$100, V2 = $100, and preventing T2 from succeeding (since V1 + (V2 − $200) is now −$200), or T2 happens first and similarly prevents T1 from committing.
If the database is under snapshot isolation (MVCC), however, T1 and T2 operate on private snapshots of the database: each deducts $200 from an account and then verifies, using the other account's value as of the snapshot, that the new total is zero, so the constraint appears to hold. Since neither update conflicts with the other, both commit successfully, leaving V1 = V2 = −$100, and V1 + V2 = −$200.
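The scenario can be mimicked with a small, self-contained simulation. This toy model (an illustrative assumption, not the behaviour of any particular database engine) shows why validating only against write–write conflicts lets both transactions commit.

```python
# A toy model of snapshot isolation write skew: each transaction validates the
# constraint against its own snapshot, and only overlapping write sets abort.
def run_write_skew():
    committed = {"V1": 100, "V2": 100}

    # Both transactions take their snapshots before either commits.
    snap1 = dict(committed)
    snap2 = dict(committed)

    # T1 withdraws 200 from V1, checking V1 + V2 >= 0 against its snapshot.
    t1_writes = {"V1": snap1["V1"] - 200}
    t1_ok = t1_writes["V1"] + snap1["V2"] >= 0          # uses stale V2 = 100

    # T2 withdraws 200 from V2, checking the constraint against its snapshot.
    t2_writes = {"V2": snap2["V2"] - 200}
    t2_ok = snap2["V1"] + t2_writes["V2"] >= 0          # uses stale V1 = 100

    # Commit-time check: only write-write conflicts abort, and the write sets
    # {V1} and {V2} are disjoint, so both transactions commit.
    if t1_ok:
        committed.update(t1_writes)
    if t2_ok and not set(t2_writes) & set(t1_writes):
        committed.update(t2_writes)
    return committed

print(run_write_skew())   # {'V1': -100, 'V2': -100}: the constraint V1 + V2 >= 0 is violated
```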
Some systems built usingmultiversion concurrency control(MVCC) may support (only) snapshot isolation to allow transactions to proceed without worrying about concurrent operations, and more importantly without needing to re-verify all read operations when the transaction finally commits. This is convenient because MVCC maintains a series of recent history consistent states. The only information that must be stored during the transaction is a list of updates made, which can be scanned for conflicts fairly easily before being committed. However, MVCC systems (such as MarkLogic) will use locks to serialize writes together with MVCC to obtain some of the performance gains and still support the stronger "serializability" level of isolation.
Potential inconsistency problems arising from write skew anomalies can be fixed by adding (otherwise unnecessary) updates to the transactions in order to enforce theserializabilityproperty.[4][5][6][7]
In the example above, we can materialize the conflict by adding a new table which makes the hidden constraint explicit, mapping each person to theirtotal balance. Phil would start off with a total balance of $200, and each transaction would attempt to subtract $200 from this, creating a write–write conflict that would prevent the two from succeeding concurrently. However, this approach violates thenormal form.
Alternatively, we can promote one of the transaction's reads to a write. For instance, T2 could set V1 = V1, creating an artificial write–write conflict with T1 and, again, preventing the two from succeeding concurrently. This solution may not always be possible.
In general, therefore, snapshot isolation puts some of the problem of maintaining non-trivial constraints onto the user, who may not appreciate either the potential pitfalls or the possible solutions. The upside to this transfer is better performance.
Snapshot isolation is called "serializable" mode inOracle[8][9][10]andPostgreSQLversions prior to 9.1,[11][12][13]which may cause confusion with the "realserializability" mode. There are arguments both for and against this decision; what is clear is that users must be aware of the distinction to avoid possible undesired anomalous behavior in their database system logic.
Snapshot isolation arose from work onmultiversion concurrency controldatabases, where multiple versions of the database are maintained concurrently to allow readers to execute without colliding with writers. Such a system allows a natural definition and implementation of such an isolation level.[3]InterBase, later owned byBorland, was acknowledged to provide SI rather than full serializability in version 4,[3]and likely permitted write-skew anomalies since its first release in 1985.[14]
Unfortunately, the ANSISQL-92standard was written with alock-based database in mind, and hence is rather vague when applied to MVCC systems. Berensonet al.wrote a paper in 1995[3]critiquing the SQL standard, and cited snapshot isolation as an example of an isolation level that did not exhibit the standard anomalies described in the ANSI SQL-92 standard, yet still had anomalous behaviour when compared withserializabletransactions.
In 2008, Cahillet al.showed that write-skew anomalies could be prevented by detecting and aborting "dangerous" triplets of concurrent transactions.[15]This implementation of serializability is well-suited tomultiversion concurrency controldatabases, and has been adopted in PostgreSQL 9.1,[12][13][16]where it is known as Serializable Snapshot Isolation (SSI). When used consistently, this eliminates the need for the above workarounds. The downside over snapshot isolation is an increase in aborted transactions. This can perform better or worse than snapshot isolation with the above workarounds, depending on workload. | https://en.wikipedia.org/wiki/Snapshot_isolation |
In statistics, the jackknife (jackknife cross-validation) is a cross-validation technique and, therefore, a form of resampling.
It is especially useful for bias and variance estimation. The jackknife pre-dates other common resampling methods such as the bootstrap. Given a sample of size $n$, a jackknife estimator can be built by aggregating the parameter estimates from each subsample of size $(n-1)$ obtained by omitting one observation.[1]
The jackknife technique was developed by Maurice Quenouille (1924–1973) from 1949 and refined in 1956. John Tukey expanded on the technique in 1958 and proposed the name "jackknife" because, like a physical jack-knife (a compact folding knife), it is a rough-and-ready tool that can improvise a solution for a variety of problems even though specific problems may be more efficiently solved with a purpose-designed tool.[2]
The jackknife is a linear approximation of the bootstrap.[2]
The jackknife estimator of a parameter is found by systematically leaving out each observation from a dataset and calculating the parameter estimate over the remaining observations and then aggregating these calculations.
For example, if the parameter to be estimated is the population mean of random variable $x$, then for a given set of i.i.d. observations $x_1,\ldots,x_n$ the natural estimator is the sample mean: $\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_{i}=\frac{1}{n}\sum_{i\in[n]}x_{i},$
where the last sum uses another way to indicate that the index $i$ runs over the set $[n]=\{1,\ldots,n\}$.
Then we proceed as follows: For each $i\in[n]$ we compute the mean $\bar{x}_{(i)}$ of the jackknife subsample consisting of all but the $i$-th data point, and this is called the $i$-th jackknife replicate: $\bar{x}_{(i)}=\frac{1}{n-1}\sum_{j\in[n],\,j\neq i}x_{j}.$
It could help to think that these $n$ jackknife replicates $\bar{x}_{(1)},\ldots,\bar{x}_{(n)}$ approximate the distribution of the sample mean $\bar{x}$; a larger $n$ improves the approximation. Then finally, to get the jackknife estimator, the $n$ jackknife replicates are averaged: $\bar{x}_{\mathrm{jack}}=\frac{1}{n}\sum_{i=1}^{n}\bar{x}_{(i)}.$
One may ask about the bias and the variance of $\bar{x}_{\mathrm{jack}}$. From the definition of $\bar{x}_{\mathrm{jack}}$ as the average of the jackknife replicates one could try to calculate these explicitly. The bias is a trivial calculation, but the variance of $\bar{x}_{\mathrm{jack}}$ is more involved since the jackknife replicates are not independent.
For the special case of the mean, one can show explicitly that the jackknife estimate equals the usual estimate: $\frac{1}{n}\sum_{i=1}^{n}\bar{x}_{(i)}=\bar{x}.$
This establishes the identity $\bar{x}_{\mathrm{jack}}=\bar{x}$. Then taking expectations we get $E[\bar{x}_{\mathrm{jack}}]=E[\bar{x}]=E[x]$, so $\bar{x}_{\mathrm{jack}}$ is unbiased, while taking the variance we get $V[\bar{x}_{\mathrm{jack}}]=V[\bar{x}]=V[x]/n$. However, these properties do not generally hold for parameters other than the mean.
This simple example for the case of mean estimation is just to illustrate the construction of a jackknife estimator, while the real subtleties (and the usefulness) emerge for the case of estimating other parameters, such as higher moments than the mean or other functionals of the distribution.
$\bar{x}_{\mathrm{jack}}$ could be used to construct an empirical estimate of the bias of $\bar{x}$, namely $\widehat{\operatorname{bias}}(\bar{x})_{\mathrm{jack}}=c(\bar{x}_{\mathrm{jack}}-\bar{x})$ with some suitable factor $c>0$, although in this case we know that $\bar{x}_{\mathrm{jack}}=\bar{x}$, so this construction does not add any meaningful knowledge, but it gives the correct estimation of the bias (which is zero).
A jackknife estimate of the variance of $\bar{x}$ can be calculated from the variance of the jackknife replicates $\bar{x}_{(i)}$:[3][4] $\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}}=\frac{n-1}{n}\sum_{i=1}^{n}(\bar{x}_{(i)}-\bar{x}_{\mathrm{jack}})^{2}=\frac{1}{n(n-1)}\sum_{i=1}^{n}(x_{i}-\bar{x})^{2}.$
The left equality defines the estimator $\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}}$ and the right equality is an identity that can be verified directly. Then taking expectations we get $E[\widehat{\operatorname{var}}(\bar{x})_{\mathrm{jack}}]=V[x]/n=V[\bar{x}]$, so this is an unbiased estimator of the variance of $\bar{x}$.
The jackknife technique can be used to estimate (and correct) the bias of an estimator calculated over the entire sample.
Suppose $\theta$ is the target parameter of interest, which is assumed to be some functional of the distribution of $x$. Based on a finite set of observations $x_1,\ldots,x_n$, which is assumed to consist of i.i.d. copies of $x$, the estimator $\hat{\theta}$ is constructed: $\hat{\theta}=\hat{\theta}(x_1,\ldots,x_n).$
The value of $\hat{\theta}$ is sample-dependent, so this value will change from one random sample to another.
By definition, the bias of $\hat{\theta}$ is as follows: $\operatorname{bias}(\hat{\theta})=E[\hat{\theta}]-\theta.$
One may wish to compute several values of $\hat{\theta}$ from several samples, and average them, to calculate an empirical approximation of $E[\hat{\theta}]$, but this is impossible when there are no "other samples": the entire set of available observations $x_1,\ldots,x_n$ was used to calculate $\hat{\theta}$. In this kind of situation the jackknife resampling technique may be of help.
We construct the jackknife replicates $\hat{\theta}_{(1)},\ldots,\hat{\theta}_{(n)},$
where each replicate is a "leave-one-out" estimate based on the jackknife subsample consisting of all but one of the data points: $\hat{\theta}_{(i)}=\hat{\theta}(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n).$
Then we define their average: $\bar{\theta}_{\mathrm{jack}}=\frac{1}{n}\sum_{i=1}^{n}\hat{\theta}_{(i)}.$
The jackknife estimate of the bias of $\hat{\theta}$ is given by: $\widehat{\operatorname{bias}}(\hat{\theta})_{\mathrm{jack}}=(n-1)(\bar{\theta}_{\mathrm{jack}}-\hat{\theta}),$
and the resulting bias-corrected jackknife estimate of $\theta$ is given by: $\hat{\theta}_{\mathrm{jack}}^{*}=\hat{\theta}-\widehat{\operatorname{bias}}(\hat{\theta})_{\mathrm{jack}}=n\hat{\theta}-(n-1)\bar{\theta}_{\mathrm{jack}}.$
This removes the bias in the special case that the bias is $O(n^{-1})$ and reduces it to $O(n^{-2})$ in other cases.[2]
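As an illustration of this construction, the sketch below (plain Python; the function names are ours, not from any standard library) jackknifes the plug-in variance estimator, which divides by $n$ and therefore carries an $O(n^{-1})$ bias; the bias-corrected value coincides with the usual unbiased estimator that divides by $n-1$.

```python
import random

def plug_in_variance(xs):
    """Biased plug-in estimator of the variance (divides by n rather than n - 1)."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def jackknife_bias_correct(xs, estimator):
    """Return (jackknife bias estimate, bias-corrected estimate) for a generic estimator."""
    n = len(xs)
    theta_hat = estimator(xs)
    replicates = [estimator(xs[:i] + xs[i + 1:]) for i in range(n)]  # leave-one-out estimates
    theta_bar = sum(replicates) / n
    bias_hat = (n - 1) * (theta_bar - theta_hat)
    return bias_hat, theta_hat - bias_hat        # equals n*theta_hat - (n-1)*theta_bar

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(20)]

bias_hat, corrected = jackknife_bias_correct(sample, plug_in_variance)
print("plug-in estimate:       ", plug_in_variance(sample))
print("jackknife bias estimate:", bias_hat)
print("bias-corrected estimate:", corrected)
print("unbiased (n-1) estimate:", plug_in_variance(sample) * len(sample) / (len(sample) - 1))
```

For this particular estimator the bias-corrected value and the unbiased $n-1$ estimator agree exactly, which makes it a convenient check on the implementation.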
The jackknife technique can also be used to estimate the variance of an estimator calculated over the entire sample. | https://en.wikipedia.org/wiki/Jackknife_(statistics) |
The following is a list of PowerPC processors.
32-bit and 64-bit PowerPC processors have been a favorite of embedded computer designers. To keep costs low on high-volume competitive products, the CPU core is usually bundled into a system-on-chip (SOC) integrated circuit. SOCs contain the processor core, cache and the processor's local data on-chip, along with clocking, timers, memory (SDRAM), peripheral (network, serial I/O), and bus (PCI, PCI-X, ROM/Flash bus, I2C) controllers. IBM also offers an open bus architecture (called CoreConnect) to facilitate connection of the processor core to memory and peripherals in a SOC design. IBM and Motorola have competed along parallel development lines in overlapping markets. A later development was the Book E PowerPC Specification, implemented by both IBM and Freescale Semiconductor, which defines embedded extensions to the PowerPC programming model.
A northbridge or host bridge for a PowerPC CPU is an integrated circuit (IC) for interfacing the PowerPC CPU with memory and with the southbridge IC. Some northbridges also provide an interface for the Accelerated Graphics Port (AGP) bus, Peripheral Component Interconnect (PCI), PCI-X, PCI Express, or HyperTransport bus. A northbridge IC specific to the PowerPC must be used; a northbridge designed for an Intel or AMD x86 CPU cannot be used with a PowerPC CPU. However, it is possible to use certain types of x86 southbridge in PowerPC-based motherboards, for example the VIA 686B and the AMD Geode CS5536.
Apple used its own type of northbridge: custom ASICs manufactured by VLSI (later Philips), Texas Instruments and Lucent (later Agere Systems).
List of northbridges for PowerPC: | https://en.wikipedia.org/wiki/List_of_PowerPC_processors |
The ISO/IEC 11179 metadata registry (MDR) standard is an international ISO/IEC standard for representing metadata for an organization in a metadata registry. It documents the standardization and registration of metadata to make data understandable and shareable.[1]
The ISO/IEC 11179 model is a result of two principles of semantic theory, combined with basic principles of data modelling.
The first principle from semantic theory is the thesaurus type relation between wider and more narrow (or specific) concepts, e.g. the wide concept "income" has a relation to the more narrow concept "net income".
The second principle from semantic theory is the relation between a concept and its representation, e.g., "buy" and "purchase" are the same concept although different terms are used.
A basic principle of data modelling is the combination of an object class and a characteristic. For example, "Person - hair color".
When applied to data modelling, ISO/IEC 11179 combines a wide "concept" with an "object class" to form a more specific "data element concept". For example, the high-level concept "income" is combined with the object class "person" to form the data element concept "net income of person". Note that "net income" is more specific than "income".
The different possible representations of a data element concept are then described with the use of one or more data elements. Differences in representation may be a result of the use of synonyms or different value domains in different data sets in a data holding. A value domain is the permitted range of values for a characteristic of an object class. An example of a value domain for "sex of person" is "M = Male, F = Female, U = Unknown". The letters M, F and U are then the permitted values of sex of person in a particular data set.
The data element concept "monthly net income of person" may thus have one data element called "monthly net income of individual by 100 dollar groupings" and one called "monthly net income of person range 0-1000 dollars", etc., depending on the heterogeneity of representation that exists within the data holdings covered by one ISO/IEC 11179 registry. Note that these two examples have different terms for the object class (person/individual) and different value sets (a 0-1000 dollar range as opposed to 100 dollar groupings).
The result of this is a catalogue of sorts, in which related data element concepts are grouped by a high-level concept and an object class, and data elements grouped by a shared data element concept. Strictly speaking, this is not a hierarchy, even if it resembles one.
ISO/IEC 11179 proper does not describe data as it is actually stored. It does not refer to the description of physical files, tables and columns. The ISO/IEC 11179 constructs are "semantic" as opposed to "physical" or "technical".
The standard has two main purposes: definition and exchange. The core object is the data element concept, since it defines a concept and, ideally, describes data independent of its representation in any one system, table, column or organisation.
The standard consists of seven parts:
Part 1 explains the purpose of each part. Part 3 specifies the metamodel that defines the registry. Part 7, released in December 2019, provides an extension to Part 3 for registration of metadata about data sets. The other parts specify various aspects of the use of the registry.
The data element is the foundational concept in an ISO/IEC 11179 metadata registry. The purpose of the registry is to maintain a semantically precise structure of data elements.
Each Data element in an ISO/IEC 11179 metadata registry:
Data elements that store "Codes" or enumerated values must also specify the semantics of each of the code values with precise definitions.
Software AG's COTS Metadata Registry (MDR) product supports the ISO 11179 standard and continues to be sold and used for this purpose in both commercial and government applications (see Vendor Tools section below).
While commercial adoption is increasing, the spread of ISO/IEC 11179 has been more successful in the public sector, although the reason for this is unclear. ISO membership is open to organizations through their national bodies. Countries with public sector repositories across various industries include Australia, Canada, Germany, the United States and the United Kingdom.
The United Nations and the US Government refer to and use the 11179 standards. 11179 is strongly recommended on the U.S. government's XML website[2] and is promoted by The Open Group as a foundation of the Universal Data Element Framework.[3] The Open Group is a vendor-neutral and technology-neutral consortium working to enable access to integrated information within and between enterprises based on open standards and global interoperability.
Although the ISO/IEC 11179 metadata registry is a multi-part standard comprising several hundred pages, the primary model is presented in Part 3 and depicted in UML diagrams to facilitate understanding, supported by normative text. The eXtended Metadata Registry (XMDR) initiative, led by the US, explored the use of ontologies as the basis for MDR content in order to provide a richer semantic framework than could be achieved by lexical and syntax naming conventions alone. The XMDR experimented with a prototype using OWL, RDF and SPARQL to prove the concept. The initiative resulted in Edition 3 of ISO/IEC 11179. The first part published is ISO/IEC 11179-3:2013. The primary extension in Edition 3 is the Concept Region, expanding the use of concepts to more components within the standard, and supporting registration of a concept system for use within the registry. The standard also supports the use of externally defined concept systems. Edition 3 versions of Parts 1, 5, and 6 were published in 2015. Part 2, Classifications, is subsumed by the Concept Region in Part 3, but is being updated as a Technical Report (TR) to provide guidance on the development of classification schemes. Part 4 describes principles for forming data definitions; an Edition 3 has not been proposed.
The following metadata registries state that they follow ISO/IEC 11179 guidelines although there have been no formal third party tests developed to test for metadata registry compliance.
No independent agencies certify ISO/IEC 11179 compliance. In addition, some existing software implementations suffer from poor design and potential security vulnerabilities, which hinders the adoption of ISO/IEC 11179.
Open Metadata | https://en.wikipedia.org/wiki/ISO/IEC_11179 |
L2 Syntactic Complexity Analyzer (L2SCA), developed by Xiaofei Lu at the Pennsylvania State University, is a computational tool which produces syntactic complexity indices of written English language texts.[1] Along with Coh-Metrix, the L2SCA is one of the most extensively used computational tools to compute indices of second language writing development. The L2SCA is also widely utilised in the field of corpus linguistics.[2] The L2SCA is available in a single and a batch mode. The former provides the possibility of analyzing a single written text for 14 syntactic complexity indices.[3] The latter allows the user to analyze 30 written texts simultaneously.
The L2SCA has been used in numerous studies in the field of second language writing development to compute indices of syntactic complexity.[4][5][6]
The L2SCA has also been used in various studies in the field of corpus linguistics.[7][8]
| https://en.wikipedia.org/wiki/L2_Syntactic_Complexity_Analyzer |
The Kruskal count[1][2] (also known as Kruskal's principle,[3][4][5][6][7] Dynkin–Kruskal count,[8] Dynkin's counting trick,[9] Dynkin's card trick,[10][11][12][13] coupling card trick[14][15][16] or shift coupling[10][11][12][13]) is a probabilistic concept originally demonstrated by the Russian mathematician Evgenii Borisovich Dynkin in the 1950s or 1960s[when?] discussing coupling effects[14][15][9][16] and rediscovered as a card trick by the American mathematician Martin David Kruskal in the early 1970s[17][nb 1] as a side-product while working on another problem.[18] It was published by Kruskal's friend[19] Martin Gardner[20][1] and magician Karl Fulves in 1975.[21] This is related to a similar trick published by magician Alexander F. Kraus in 1957 as Sum total[22][23][24][25] and later called Kraus principle.[2][7][25][18]
Besides uses as a card trick, the underlying phenomenon has applications in cryptography, code breaking, software tamper protection, code self-synchronization, control-flow resynchronization, design of variable-length codes and variable-length instruction sets, web navigation, object alignment, and others.
The trick is performed with cards, but is more a magical-looking effect than a conventional magic trick. The magician has no access to the cards, which are manipulated by members of the audience. Thus sleight of hand is not possible. Rather the effect is based on the mathematical fact that the output of a Markov chain, under certain conditions, is typically independent of the input.[26][27][28][29][6] A simplified version using the hands of a clock performed by David Copperfield is as follows.[30][31] A volunteer picks a number from one to twelve and does not reveal it to the magician. The volunteer is instructed to start from 12 on the clock and move clockwise by a number of spaces equal to the number of letters that the chosen number has when spelled out. This is then repeated, moving by the number of letters in the new number. The output after three or more moves does not depend on the initially chosen number and therefore the magician can predict it. | https://en.wikipedia.org/wiki/Kruskal_count |
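The convergence behind the clock version is easy to verify by brute force. The short Python sketch below (purely illustrative) walks the clock from every possible secret choice and confirms that all twelve walks land on the same number after three moves.

```python
# Simulate the clock version of the Kruskal principle: from 12, repeatedly move
# clockwise by the number of letters in the current number's English name.

NAMES = {1: "one", 2: "two", 3: "three", 4: "four", 5: "five", 6: "six",
         7: "seven", 8: "eight", 9: "nine", 10: "ten", 11: "eleven", 12: "twelve"}

def step(position, move):
    """Move 'move' places clockwise on a 12-hour clock face."""
    return (position + move - 1) % 12 + 1

def walk(chosen, moves=3):
    position = 12
    move_by = len(NAMES[chosen])          # the first move uses the secretly chosen number
    for _ in range(moves):
        position = step(position, move_by)
        move_by = len(NAMES[position])    # later moves use the number landed on
    return position

results = {chosen: walk(chosen) for chosen in range(1, 13)}
print(results)                            # every starting choice ends on the same number
assert len(set(results.values())) == 1
```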
A wide variety of different wireless data technologies exist, some in direct competition with one another, others designed for specific applications. Wireless technologies can be evaluated by a variety of different metrics, some of which are described in this entry.
Standards can be grouped as follows in increasing range order:
Personal area network (PAN) systems are intended for short range communication between devices typically controlled by a single person. Some examples include wireless headsets for mobile phones or wireless heart rate sensors communicating with a wrist watch. Some of these technologies include standards such as ANT, UWB, Bluetooth, Zigbee, and Wireless USB.
Wireless Sensor Networks (WSN / WSAN) are, generically, networks of low-power, low-cost devices that interconnect wirelessly to collect, exchange, and sometimes act on data collected from their physical environments - "sensor networks". Nodes typically connect in a star or mesh topology. While most individual nodes in a WSAN are expected to have limited range (Bluetooth, Zigbee, 6LoWPAN, etc.), particular nodes may be capable of more expansive communications (Wi-Fi, cellular networks, etc.) and any individual WSAN can span a wide geographical range. An example of a WSAN would be a collection of sensors arranged throughout an agricultural facility to monitor soil moisture levels, report the data back to a computer in the main office for analysis and trend modeling, and maybe turn on automatic watering spigots if the level is too low.
For wider area communications, wireless local area network (WLAN) is used. WLANs are often known by their commercial product name Wi-Fi. These systems are used to provide wireless access to other systems on the local network such as other computers, shared printers, and other such devices or even the internet. Typically a WLAN offers much better speeds and delays within the local network than an average consumer's Internet access. Older systems that provide WLAN functionality include DECT and HIPERLAN. These however are no longer in widespread use. One typical characteristic of WLANs is that they are mostly very local, without the capability of seamless movement from one network to another.
Cellular networks or WAN are designed for citywide/national/global coverage areas and seamless mobility from one access point (often defined as a base station) to another, allowing seamless coverage for very wide areas. Cellular network technologies are often split into 2nd generation 2G, 3G and 4G networks. Originally 2G networks were voice centric or even voice only digital cellular systems (as opposed to the analog 1G networks). Typical 2G standards include GSM and IS-95 with extensions via GPRS, EDGE and 1xRTT, providing Internet access to users of originally voice centric 2G networks. Both EDGE and 1xRTT are 3G standards, as defined by the ITU, but are usually marketed as 2.9G due to their comparatively low speeds and high delays when compared to true 3G technologies.
True 3G systems such as EV-DO, W-CDMA (including HSPA and HSPA+) provide combined circuit switched and packet switched data and voice services from the outset, usually at far better data rates than 2G networks with their extensions. All of these services can be used to provide combined mobile voice access and Internet access at remote locations.
4G networks provide even higher bitrates and many architectural improvements, which are not necessarily visible to the consumer. The current 4G systems that are deployed widely are WiMAX and LTE. The two are pure packet based networks without traditional voice circuit capabilities. These networks provide voice services via VoIP or VoLTE.
Some systems are designed for point-to-point line-of-sight communications; once two such nodes get too far apart they can no longer communicate. Other systems are designed to form a wireless mesh network using one of a variety of routing protocols. In a mesh network, when nodes get too far apart to communicate directly, they can still communicate indirectly through intermediate nodes.
The following standards are included in this comparison.
Antenna and RF front end enhancements and minor protocol timer tweaks have helped deploy long range P2P networks compromising on radial coverage, throughput and/or spectral efficiency (310 km & 382 km).
Notes: All speeds are theoretical maximums and will vary by a number of factors, including the use of external antennas, distance from the tower and the ground speed (e.g. communications on a train may be poorer than when standing still). Usually the bandwidth is shared between several terminals. The performance of each technology is determined by a number of constraints, including the spectral efficiency of the technology, the cell sizes used, and the amount of spectrum available.
For more comparison tables, see bit rate progress trends, comparison of mobile phone standards, spectral efficiency comparison table and OFDM system comparison table.
When discussing throughput, there is often a distinction between the peak data rate of the physical layer, the theoretical maximum data throughput and typical throughput.
The peak bit rate of the standard is the net bit rate provided by the physical layer in the fastest transmission mode (using the fastest modulation scheme and error code), excluding forward error correction coding and other physical layer overhead.
The theoretical maximum throughput for the end user is clearly lower than the peak data rate due to higher layer overheads. Even this is never possible to achieve unless the test is done under perfect laboratory conditions.
The typical throughput is what users experience most of the time when well within the usable range of the base station. The typical throughput is hard to measure, and depends on many protocol issues such as transmission schemes (slower schemes are used at longer distances from the access point due to better redundancy), packet retransmissions and packet size. The typical throughput is often even lower because of other traffic sharing the same network or cell, interference, or even the fixed line capacity from the base station onwards being limited.
Note that these figures cannot be used to predict the performance of any given standard in any given environment, but rather as benchmarks against which actual experience might be compared. | https://en.wikipedia.org/wiki/Comparison_of_wireless_data_standards |
MoIP or MOIP can mean: | https://en.wikipedia.org/wiki/MoIP_(disambiguation) |
In computer science, an algorithm is said to be asymptotically optimal if, roughly speaking, for large inputs it performs at worst a constant factor (independent of the input size) worse than the best possible algorithm. It is a term commonly encountered in computer science research as a result of widespread use of big-O notation.
More formally, an algorithm is asymptotically optimal with respect to a particular resource if the problem has been proven to require Ω(f(n)) of that resource, and the algorithm has been proven to use only O(f(n)).
These proofs require an assumption of a particular model of computation, i.e., certain restrictions on operations allowable with the input data.
As a simple example, it is known that all comparison sorts require at least Ω(n log n) comparisons in the average and worst cases. Mergesort and heapsort are comparison sorts which perform O(n log n) comparisons, so they are asymptotically optimal in this sense.
If the input data have some a priori properties which can be exploited in construction of algorithms, in addition to comparisons, then asymptotically faster algorithms may be possible. For example, if it is known that the N objects are integers from the range [1, N], then they may be sorted in O(N) time, e.g., by bucket sort.
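A sketch of that idea in Python (a counting-style bucket sort, written here only for illustration) sorts N integers known to lie in [1, N] in O(N) time:

```python
def bucket_sort_range(values, n):
    """Sort integers drawn from 1..n in O(n) time by counting occurrences."""
    counts = [0] * (n + 1)            # counts[v] = how many times v occurs
    for v in values:
        counts[v] += 1
    result = []
    for v in range(1, n + 1):
        result.extend([v] * counts[v])
    return result

print(bucket_sort_range([3, 1, 4, 1, 5, 2], 5))   # [1, 1, 2, 3, 4, 5]
```

Because the algorithm relies on the values being small integers rather than on pairwise comparisons, the Ω(n log n) lower bound for comparison sorts simply does not apply to it.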
A consequence of an algorithm being asymptotically optimal is that, for large enough inputs, no algorithm can outperform it by more than a constant factor. For this reason, asymptotically optimal algorithms are often seen as the "end of the line" in research, the attaining of a result that cannot be dramatically improved upon. Conversely, if an algorithm is not asymptotically optimal, this implies that as the input grows in size, the algorithm performs increasingly worse than the best possible algorithm.
In practice it's useful to find algorithms that perform better, even if they do not enjoy any asymptotic advantage. New algorithms may also present advantages such as better performance on specific inputs, decreased use of resources, or being simpler to describe and implement. Thus asymptotically optimal algorithms are not always the "end of the line".
Although asymptotically optimal algorithms are important theoretical results, an asymptotically optimal algorithm might not be used in a number of practical situations:
An example of an asymptotically optimal algorithm not used in practice is Bernard Chazelle's linear-time algorithm for triangulation of a simple polygon. Another is the resizable array data structure published in "Resizable Arrays in Optimal Time and Space",[1] which can index in constant time but on many machines carries a heavy practical penalty compared to ordinary array indexing.
Formally, suppose that we have a lower-bound theorem showing that a problem requires Ω(f(n)) time to solve for an instance (input) of size n (see Big O notation § Big Omega notation for the definition of Ω). Then, an algorithm which solves the problem in O(f(n)) time is said to be asymptotically optimal. This can also be expressed using limits: suppose that b(n) is a lower bound on the running time, and a given algorithm takes time t(n). Then the algorithm is asymptotically optimal if the limit of t(n)/b(n), as n goes to infinity, is finite.
This limit, if it exists, is always at least 1, as t(n) ≥ b(n).
Although usually applied to time efficiency, an algorithm can be said to use asymptotically optimal space, random bits, number of processors, or any other resource commonly measured using big-O notation.
Sometimes vague or implicit assumptions can make it unclear whether an algorithm is asymptotically optimal. For example, a lower bound theorem might assume a particular abstract machine model, as in the case of comparison sorts, or a particular organization of memory. By violating these assumptions, a new algorithm could potentially asymptotically outperform the lower bound and the "asymptotically optimal" algorithms.
The nonexistence of an asymptotically optimal algorithm is called speedup. Blum's speedup theorem shows that there exist artificially constructed problems with speedup. However, it is an open problem whether many of the most well-known algorithms today are asymptotically optimal or not. For example, there is an O(n α(n)) algorithm for finding minimum spanning trees, where α(n) is the very slowly growing inverse of the Ackermann function, but the best known lower bound is the trivial Ω(n). Whether this algorithm is asymptotically optimal is unknown, and would be likely to be hailed as a significant result if it were resolved either way. Coppersmith and Winograd (1982) proved that matrix multiplication has a weak form of speed-up among a restricted class of algorithms (Strassen-type bilinear identities with lambda-computation). | https://en.wikipedia.org/wiki/Asymptotically_optimal_algorithm |
Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for promotion and regulation of algorithms, particularly in artificial intelligence and machine learning.[1][2][3] For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union.[4] Regulation of AI is considered necessary to both encourage AI and manage associated risks, but challenging.[5] Another emerging topic is the regulation of blockchain algorithms (use of smart contracts must be regulated) and is mentioned along with regulation of AI algorithms.[6] Many countries have enacted regulations of high frequency trades, which is shifting due to technological progress into the realm of AI algorithms.[citation needed]
The motivation for regulation of algorithms is the apprehension of losing control over the algorithms, whose impact on human life increases. Multiple countries have already introduced regulations in case of automated credit score calculation; a right to explanation is mandatory for those algorithms.[7][8] For example, the IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users.[9] Bias, transparency, and ethics concerns have emerged with respect to the use of algorithms in diverse domains ranging from criminal justice[10] to healthcare;[11] many fear that artificial intelligence could replicate existing social inequalities along race, class, gender, and sexuality lines.
In 2016, Joy Buolamwini founded the Algorithmic Justice League after a personal experience with biased facial detection software in order to raise awareness of the social implications of artificial intelligence through art and research.[12]
In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence.[13][14][15] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation."[13]
In response, some politicians expressed skepticism about the wisdom of regulating a technology that is still in development.[14] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich has argued that artificial intelligence is in its infancy and that it is too early to regulate the technology.[15] Instead of trying to regulate the technology itself, some scholars suggest developing common norms including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.[16] One suggestion has been for the development of a global governance board to regulate AI development.[17] In 2020, the European Union published its draft strategy paper for promoting and regulating AI.[18]
Algorithmic tacit collusion is a legally dubious antitrust practice committed by means of algorithms, which the courts are not able to prosecute.[19] This danger concerns scientists and regulators in the EU, US and beyond.[19] European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows:[20]
"A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy."
In 2018, the Netherlands employed an algorithmic system SyRI (Systeem Risico Indicatie) to detect citizens perceived to be at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators.[21] This caused a public protest. The district court of The Hague shut down SyRI referencing Article 8 of the European Convention on Human Rights (ECHR).[22]
In 2020, algorithms assigning exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm."[23]This protest was successful and the grades were taken back.[24]
AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues.[5] The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national,[25] and international levels[18] and in fields from public service management[26] to law enforcement,[18] the financial sector,[25] robotics,[27] the military,[28] and international law.[29][30] There are many concerns that there is not enough visibility and monitoring of AI in these sectors.[31] In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data.[32]
In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI.[33][34] In response, the National Institute of Standards and Technology has released a position paper,[35] the National Security Commission on Artificial Intelligence has published an interim report,[36] and the Defense Innovation Board has issued recommendations on the ethical use of AI.[37]
In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information, the General Data Protection Regulation (GDPR) (European Union, Parliament and Council 2016).[6] The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design.[38]
In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue,[29] and leading to proposals for global regulation.[39] In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence.[40]
In 2017, the U.K. Vehicle Technology and Aviation Bill imposes liability on the owner of an uninsured automated vehicle when driving itself and makes provisions for cases where the owner has made "unauthorized alterations" to the vehicle or failed to update its software. Further ethical issues arise when, e.g., a self-driving car swerves to avoid a pedestrian and causes a fatal accident.[41]
In 2021, the European Commission proposed the Artificial Intelligence Act.[42]
There is a concept of algorithm certification emerging as a method of regulating algorithms. Algorithm certification involves auditing whether the algorithm used during the life cycle 1) conforms to the protocoled requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies the standards, practices, and conventions; and 3) solves the right problem (e.g., correctly model physical laws), and satisfies the intended use and user needs in the operational environment.[9]
Blockchain systems provide transparent and fixed records of transactions and thereby contradict the goal of the European GDPR, which is to give individuals full control of their private data.[43][44]
By implementing the Decree on Development of Digital Economy, Belarus has become the first-ever country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered to be the author of a smart contract legal concept introduced by the decree.[45][46][47] There are strong arguments that the existing US state laws are already a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts.[48]
There have been proposals to regulate robots and autonomous algorithms. These include:
In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:
The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, thinking machines is a collective term for artificial intelligence, which were completely destroyed and banned after a revolt known as the Butlerian Jihad:[50]
JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind."[51] | https://en.wikipedia.org/wiki/Regulation_of_algorithms |
Morphophonology (also morphophonemics or morphonology) is the branch of linguistics that studies the interaction between morphological and phonological or phonetic processes. Its chief focus is the sound changes that take place in morphemes (minimal meaningful units) when they combine to form words.
The origins of morphophonology trace back to the early 20th century with foundational works in structural linguistics. Notable contributions include Roman Jakobson's insights into phonological alternations and Chomsky & Halle's The Sound Pattern of English (1968), which formalized the relationship between phonology and morphology within generative grammar. Subsequent theories, such as Autosegmental Phonology and Optimality Theory, have refined the analysis of morphophonological patterns.
Morphophonological analysis often involves an attempt to give a series of formal rules or constraints that successfully predict the regular sound changes occurring in the morphemes of a given language. Such a series of rules converts a theoretical underlying representation into a surface form that is actually heard. The units of which the underlying representations of morphemes are composed are sometimes called morphophonemes. The surface form produced by the morphophonological rules may consist of phonemes (which are then subject to ordinary phonological rules to produce speech sounds or phones), or else the morphophonological analysis may bypass the phoneme stage and produce the phones itself.
Morphophonology bridges the gap between morphology and phonology, offering insights into the dynamic interactions between word formation and sound patterns. It continues to evolve as a field, integrating innovative approaches and broadening our understanding of linguistic systems globally.
When morphemes combine, they influence each other's sound structure (whether analyzed at a phonetic or phonemic level), resulting in different variant pronunciations for the same morpheme. Morphophonology attempts to analyze these processes. A language's morphophonological structure is generally described with a series of rules which, ideally, can predict every morphophonological alternation that takes place in the language.
An example of a morphophonological alternation in English is provided by the plural morpheme, written as "-s" or "-es". Its pronunciation varies among [s], [z], and [ɪz], as in cats, dogs, and horses respectively. A purely phonological analysis would most likely assign to these three endings the phonemic representations /s/, /z/, /ɪz/. On a morphophonological level, however, they may all be considered to be forms of the underlying object ⫽z⫽, which is a morphophoneme realized as one of the phonemic forms {s, z, ɪz}. The different forms it takes are dependent on the segment at the end of the morpheme to which it attaches: the dependencies are described by morphophonological rules. (The behaviour of the English past tense ending "-ed" is similar: it can be pronounced /t/, /d/ or /ɪd/, as in hoped, bobbed and added.)
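The conditioning of the plural morpheme ⫽z⫽ on the final segment of the stem can be rendered as a small Python sketch (the segment inventories below are deliberately simplified and illustrative only):

```python
SIBILANTS = {"s", "z", "ʃ", "ʒ", "tʃ", "dʒ"}
VOICELESS = {"p", "t", "k", "f", "θ", "tʃ"}

def plural_allomorph(stem_segments):
    """Choose the surface form of the plural morpheme //z// from the stem's final segment."""
    last = stem_segments[-1]
    if last in SIBILANTS:
        return "ɪz"        # horses -> [ɪz]
    if last in VOICELESS:
        return "s"         # cats   -> [s]
    return "z"             # dogs   -> [z]

for stem in (["k", "æ", "t"], ["d", "ɒ", "ɡ"], ["h", "ɔː", "r", "s"]):
    print("".join(stem), "+ //z// ->", plural_allomorph(stem))
```

The ordering of the checks mirrors the rule system: the sibilant case must be tested before voicing, since sibilant-final stems take [ɪz] regardless of whether the final segment is voiced.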
The plural suffix "-s" can also influence the form taken by the preceding morpheme, as in the case of the wordsleafandknife, which end with[f]in the singular/but have[v]in the plural (leaves,knives). On a morphophonological level, the morphemes may be analyzed as ending in a morphophoneme⫽F⫽, which becomesvoicedwhen a voiced consonant (in this case the⫽z⫽of the plural ending) is attached to it. The rule may be written symbolically as/F/→ [αvoice] /__[αvoice]. This expression is called Alpha Notation in which α can be + (positive value) or − (negative value).
Common conventions to indicate a morphophonemic rather than phonemic representation include double slashes (⫽ ⫽) (as above, implying that the transcription is 'more phonemic than simply phonemic'). This is the only convention consistent with the IPA. Other conventions include pipes (| |), double pipes (‖ ‖)[a] and braces ({ }).[b] Braces, from a convention in set theory, tend to be used when the phonemes are all listed, as in {s, z, ɪz} and {t, d, ɪd} for the English plural and past-tense morphemes ⫽z⫽ and ⫽d⫽ above.[1]
For instance, the English word cats may be transcribed phonetically as [ˈkʰæʔts], phonemically as /ˈkæts/ and morphophonemically as ⫽ˈkætz⫽, if the plural is argued to be underlyingly ⫽z⫽, assimilating to /s/ after a voiceless nonsibilant. The tilde ~ may indicate morphological alternation, as in ⫽ˈniːl~nɛl+t⫽ or {niː~ɛl}, {niː~ɛl+t} for kneel~knelt (the plus sign '+' indicates a morpheme boundary).[2]
Inflected and agglutinating languages may have extremely complicated systems of morphophonemics. Examples of complex morphophonological systems include:
Until the 1950s, many phonologists assumed that neutralizing rules generally applied before allophonic rules. Thus phonological analysis was split into two parts: a morphophonological part, where neutralizing rules were developed to derive phonemes from morphophonemes; and a purely phonological part, where phones were derived from the phonemes. Since the 1960s (in particular with the work of the generative school, such as Chomsky and Halle's The Sound Pattern of English) many linguists have moved away from making such a split, instead regarding the surface phones as being derived from the underlying morphophonemes (which may be referred to using various terminology) through a single system of (morpho)phonological rules.
The purpose of both phonemic and morphophonemic analysis is to produce simpler underlying descriptions for what appear on the surface to be complicated patterns. In purely phonemic analysis the data is just a set of words in a language, while for the purposes of morphophonemic analysis the words must be considered in grammatical paradigms to take account of the underlying morphemes. It is postulated that morphemes are recorded in the speaker's "lexicon" in an invariant (morphophonemic) form, which, in a given environment, is converted by rules into a surface form. The analyst attempts to present as completely as possible a system of underlying units (morphophonemes) and a series of rules that act on them, so as to produce surface forms consistent with the linguistic data.
The isolation form of a morpheme is the form in which that morpheme appears in isolation (when it is not subject to the effects of any other morpheme). In the case of a bound morpheme, such as the English past tense ending "-ed", it is generally not possible to identify an isolation form since such a morpheme does not occur in isolation.
It is often reasonable to assume that the isolation form of a morpheme provides its underlying representation. For example, in some varieties of American English, plant is pronounced [plænt], while planting is [ˈplænɪŋ], where the morpheme "plant-" appears in the form [plæn]. Here, the underlying form can be assumed to be ⫽plænt⫽, corresponding to the isolation form, since rules can be set up to derive the reduced form [plæn] from this (but it would be difficult or impossible to set up rules that would derive the isolation form [plænt] from an underlying ⫽plæn⫽).
That is not always the case, however; the isolation form itself is sometimes subject to neutralization that does not apply to some other instances of the morpheme. For example, the French word petit ("small") is pronounced in isolation without the final [t] sound, but in certain derived forms (such as the feminine petite), the [t] is heard. If the isolation form were adopted as the underlying form, the information that there is a final "t" would be lost, and it would then be difficult to explain the appearance of the "t" in the inflected forms. Similar considerations apply to languages with final obstruent devoicing, in which the isolation form undergoes loss of voicing contrast, but other forms may not.
If the grammar of a language is assumed to have two rules, rule A and rule B, with A ordered before B, a given derivation may cause the application of rule A to create the environment for rule B to apply, which was not present before the application of rule A. Both rules are then in a feeding relationship.
If rule A is ordered before B in the derivation in which rule A destroys the environment to which rule B applies, both rules are in a bleeding order.
If A is ordered before B, and B creates an environment in which A could have applied, B is then said to counterfeed A, and the relationship is counterfeeding.
If A is ordered before B, there is a counterbleeding relationship if B destroys the environment that A applies to, but A has already applied, so B has missed its chance to bleed A.
Conjunctive ordering is the ordering that ensures that all rules are applied in a derivation before the surface representation occurs. Rules applied in a feeding relationship are said to be conjunctively ordered.
Disjunctive ordering is an ordering in which one rule applies and prevents the other rule from applying in the surface representation. Such rules have a bleeding relationship and are said to be disjunctively ordered.
The principle behind alphabetic writing systems is that the letters (graphemes) represent phonemes. However, many orthographies based on such systems have correspondences between graphemes and phonemes that are not exact, and it is sometimes the case that certain spellings better represent a word's morphophonological structure rather than the purely phonological structure. An example is that the English plural morpheme is written -s, regardless of whether it is pronounced /s/ or /z/: cats and dogs, not dogz.
The above example involves active morphology (inflection), and morphophonemic spellings are common in this context in many languages. Another type of spelling that can be described as morphophonemic is the kind that reflects the etymology of words. Such spellings are particularly common in English; examples include science /saɪ/ vs. unconscious /ʃ/, prejudice /prɛ/ vs. prequel /priː/, sign /saɪn/ vs. signature /sɪɡn/, nation /neɪ/ vs. nationalism /næ/, and special /spɛ/ vs. species /spiː/.
For more detail on this topic, see Phonemic orthography, particularly the section on Morphophonemic features. | https://en.wikipedia.org/wiki/Morphophonology |
Distributed networking is a distributed computing network system where components of the program and data depend on multiple sources.
Distributed networking, used in distributed computing, is the network system over which computer programming, software, and its data are spread out across more than one computer, but communicate complex messages through their nodes (computers), and are dependent upon each other. The goal of a distributed network is to share resources, typically to accomplish a single or similar goal.[1][2] Usually, this takes place over a computer network,[1] however, internet-based computing is rising in popularity.[3] Typically, a distributed networking system is composed of processes, threads, agents, and distributed objects.[3] Merely having distributed physical components is not enough to constitute a distributed network; typically, distributed networking uses concurrent program execution.[2]
Client/server computing is a type of distributed computing where one computer, a client, requests data from the server, a primary computing center, which responds to the client directly with the requested data, sometimes through an agent. Client/server distributed networking is also popular in web-based computing.[3] Client/Server is the principle that a client computer can provide certain capabilities for a user and request others from other computers that provide services for the clients. The Web's Hypertext Transfer Protocol is basically all client/server.[1][4][5][6]
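A bare-bones illustration of that request–response pattern, using only the Python standard library (the address, port and message contents are arbitrary choices made for the demonstration), is sketched below: the server answers each client request directly.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007            # arbitrary local address for the demo

srv = socket.create_server((HOST, PORT))   # bind and listen before any client connects

def serve_one():
    conn, _ = srv.accept()                 # wait for a single client
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"data for {request!r}".encode())   # respond directly to the client

threading.Thread(target=serve_one, daemon=True).start()

with socket.create_connection((HOST, PORT)) as client:
    client.sendall(b"report-42")           # the client requests data from the server
    print(client.recv(1024).decode())      # ... and reads the direct response

srv.close()
```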
A distributed network can also be agent-based, where what controls the agent or component is loosely defined, and the components can have either pre-configured or dynamic settings.[3]
Decentralization is where each computer on the network can be used for the computing task at hand, which is the opposite of the client/server model. Typically, only idle computers are used, and in this way, it is thought that networks are more efficient.[5] Peer-to-peer (P2P) computation is based on a decentralized, distributed network, including the distributed ledger technology such as blockchain.[7][8]
Mesh networking is a local network composed of devices (nodes) that was originally designed to communicate through radio waves, allowing for different types of devices. Each node is able to communicate with every other node on the network.
Prior to the 1980s, computing was typically centralized on a single low-cost desktop computer.[9] But today, computing resources (computers or servers) are typically physically distributed in many places, which distributed networking excels at. Some types of computing don't scale well past a certain level of parallelism and the gains of superior hardware components, and thus are bottlenecked, such as by Very Large Scale Instruction Words. By increasing the number of computers rather than the power of their components, these bottlenecks are overcome. Situations where resource sharing becomes an issue, or where higher fault tolerance is needed, also benefit from distributed networking.[2]
Enterprises with rapid growth and scaling needs may find it challenging to maintain their own distributed network under the traditional client/server computing model. Cloud computing is the utility of distributed computing over Internet-based applications, storage, and computing services. A cloud is a cluster of computers or servers that are closely connected to provide scalable, high-capacity computing or related tasks.[2][11] | https://en.wikipedia.org/wiki/Distributed_networking |
A telegraph code is one of the character encodings used to transmit information by telegraphy. Morse code is the best-known such code. Telegraphy usually refers to the electrical telegraph, but telegraph systems using the optical telegraph were in use before that. A code consists of a number of code points, each corresponding to a letter of the alphabet, a numeral, or some other character. In codes intended for machines rather than humans, code points for control characters, such as carriage return, are required to control the operation of the mechanism. Each code point is made up of a number of elements arranged in a unique way for that character. There are usually two types of element (a binary code), but more element types were employed in some codes not intended for machines. For instance, American Morse code had about five elements, rather than the two (dot and dash) of International Morse Code.
Codes meant for human interpretation were designed so that the characters that occurred most often had the fewest elements in the corresponding code point. For instance, Morse code for E, the most common letter in English, is a single dot (▄), whereas Q is ▄▄▄ ▄▄▄ ▄ ▄▄▄. These arrangements meant the message could be sent more quickly and it would take longer for the operator to become fatigued. Telegraphs were always operated by humans until late in the 19th century. When automated telegraph messages came in, codes with variable-length code points were inconvenient for machine design of the period. Instead, codes with a fixed length were used. The first of these was the Baudot code, a five-bit code. Baudot has only enough code points to print in upper case. Later codes had more bits (ASCII has seven) so that both upper and lower case could be printed. Beyond the telegraph age, modern computers require a very large number of code points (Unicode has 21 bits) so that multiple languages and alphabets (character sets) can be handled without having to change the character encoding. Modern computers can easily handle variable-length codes such as UTF-8 and UTF-16, which have now become ubiquitous.
Prior to the electrical telegraph, a widely used method of building national telegraph networks was the optical telegraph, consisting of a chain of towers from which signals could be sent by semaphore or shutters from tower to tower. This was particularly highly developed in France and had its beginnings during the French Revolution. The code used in France was the Chappe code, named after Claude Chappe the inventor. The British Admiralty also used the semaphore telegraph, but with their own code. The British code was necessarily different from that used in France because the British optical telegraph worked in a different way. The Chappe system had moveable arms, as if it were waving flags as in flag semaphore. The British system used an array of shutters that could be opened or closed.[1]
The Chappe system consisted of a large pivoted beam (the regulator) with an arm at each end (the indicators) which pivoted around the regulator on one extremity. The angles these components were allowed to take were limited to multiples of 45° to aid readability. This gave a code space of 8×4×8 code points, but the indicator position in line with the regulator was never used because it was hard to distinguish from the indicator being folded back on top of the regulator, leaving a code space of 7×4×7 = 196. Symbols were always formed with the regulator on either the left- or right-leaning diagonal (oblique) and only accepted as valid when the regulator moved to either the vertical or horizontal position. The left oblique was always used for messages, with the right oblique being used for control of the system. This further reduced the code space to 98, of which either four or six code points (depending on version) were control characters, leaving a code space for text of 94 or 92 respectively.
The Chappe system mostly transmitted messages using a code book with a large number of set words and phrases. It was first used on an experimental chain of towers in 1793 and put into service from Paris to Lille in 1794. The code book used this early is not known for certain, but an unidentified code book in the Paris Postal Museum may have been for the Chappe system. The arrangement of this code in columns of 88 entries led Holzmann & Pehrson to suggest that 88 code points might have been used. However, the proposal in 1793 was for ten code points representing the numerals 0–9, and Bouchet says this system was still in use as late as 1800 (Holzmann & Pehrson put the change at 1795). The code book was revised and simplified in 1795 to speed up transmission. The code was in two divisions: the first division was 94 alphabetic and numeric characters plus some commonly used letter combinations, and the second division was a code book of 94 pages with 94 entries on each page. A code point was assigned for each number up to 94. Thus, only two symbols needed to be sent to transmit an entire sentence – the page and line numbers of the code book – compared to four symbols using the ten-symbol code.
In 1799, three additional divisions were added. These had additional words and phrases, geographical places, and names of people. These three divisions required extra symbols to be added in front of the code symbol to identify the correct book. The code was revised again in 1809 and remained stable thereafter. In 1837 a horizontal-only coding system was introduced by Gabriel Flocon, which did not require the heavy regulator to be moved. Instead, an additional indicator was provided in the centre of the regulator to transmit that element of the code.[2]
TheEdelcrantzsystem was used in Sweden and was the second largest network built after that of France. The telegraph consisted of a set of ten shutters. Nine of these were arranged in a 3×3 matrix. Each column of shutters represented a binary-coded octal digit with a closed shutter representing "1" and the most significant digit at the bottom. Each symbol of telegraph transmission was thus a three-digit octal number. The tenth shutter was an extra-large one at the top. Its meaning was that the codepoint should be preceded by "A".
One use of the "A" shutter was that a numeral codepoint preceded by "A" meant add a zero (multiply by ten) to the digit. Larger numbers could be indicated by following the numeral with the code for hundreds (236), thousands (631) or a combination of these. This required fewer symbols to be transmitted than sending all the zero digits individually. However, the main purpose of the "A" codepoints was for a codebook of predetermined messages, much like the Chappe codebook.
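A minimal sketch of the shutter-to-codepoint mapping described above, assuming the layout given here (three columns of three shutters read as octal digits, most significant at the bottom, plus the large "A" shutter); the example shutter patterns are arbitrary.

```python
# Edelcrantz shutter telegraph: 3 columns x 3 rows of shutters plus a large "A" shutter.
# Each column is read as one octal digit, closed shutter = 1, most significant row at the bottom.
def shutters_to_symbol(columns, a_shutter=False):
    """columns: three tuples (top, middle, bottom) of 0/1 shutter states."""
    digits = []
    for top, middle, bottom in columns:
        digits.append(str(bottom * 4 + middle * 2 + top * 1))
    code = "".join(digits)
    return ("A" + code) if a_shutter else code

# All shutters closed in the first column, only the bottom shutter closed in the last column:
print(shutters_to_symbol([(1, 1, 1), (0, 0, 0), (0, 0, 1)]))          # "704"
print(shutters_to_symbol([(0, 1, 0), (0, 0, 0), (0, 0, 0)], True))    # "A200"
```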
The symbols without "A" were a large set of numerals, letters, common syllables and words to aidcode compaction. Around 1809, Edelcrantz introduced a new codebook with 5,120 codepoints, each requiring a two-symbol transmission to identify.
There were many codepoints for error correction (272, error), flow control, and supervisory messages. Usually, messages were expected to be passed all the way down the line, but there were circumstances when individual stations needed to communicate directly, usually for managerial purposes. The most common, and simplest situation was communication between adjacent stations. Codepoints 722 and 227 were used for this purpose, to get the attention of the next station towards, or away from, the sun, respectively. For more remote stations codepoints 557 and 755 respectively were used, followed by the identification of the requesting and target stations.[3]
Flag signalling was widely used for point-to-point signalling prior to the optical telegraph, but it was difficult to construct a nationwide network with hand-held flags. The much larger mechanical apparatus of the semaphore telegraph towers was needed so that a greater distance between links could be achieved. However, an extensive network with hand-held flags was constructed during theAmerican Civil War. This was thewig-wagsystem which used the code invented byAlbert J. Myer. Some of the towers used were enormous, up to 130 feet, to get a good range. Myer's code required only one flag using aternary code. That is, each code element consisted of one of three distinct flag positions. However, the alphabetical codepoints required only two positions, the third position only being used incontrol characters. Using a ternary code in the alphabet would have resulted in shorter messages because fewer elements are required in each codepoint, but a binary system is easier to read at long distance since fewer flag positions need to be distinguished. Myer's manual also describes a ternary-coded alphabet with a fixed length of three elements for each codepoint.[4]
Many different codes were invented during the early development of theelectrical telegraph. Virtually every inventor produced a different code to suit their particular apparatus. The earliest code used commercially on an electrical telegraph was theCooke and Wheatstone telegraph five needle code(C&W5). This was first used on theGreat Western Railwayin 1838. C&W5 had the major advantage that the code did not need to be learned by the operator; the letters could be read directly off the display board. However, it had the disadvantage that it required too many wires. A one needle code, C&W1, was developed that required only one wire. C&W1 was widely used in the UK and the British Empire.
Some other countries used C&W1, but it never became an international standard and generally each country developed their own code. In the US,American Morse codewas used, whose elements consisted of dots and dashes distinguished from each other by the length of the pulse of current on the telegraph line. This code was used on the telegraph invented bySamuel MorseandAlfred Vailand was first used commercially in 1844. Morse initially had code points only for numerals. He planned that numbers sent over the telegraph would be used as an index to a dictionary with a limited set of words. Vail invented an extended code that included code points for all the letters so that any desired word could be sent. It was Vail's code that became American Morse. In France, the telegraph used theFoy-Breguet telegraph, a two-needle telegraph that displayed the needles in Chappe code, the same code as the French optical telegraph, which was still more widely used than the electrical telegraph in France. To the French, this had the great advantage that they did not need to retrain their operators in a new code.[5]
In Germany in 1848,Friedrich Clemens Gerkedeveloped a heavily modified version of American Morse for use on German railways. American Morse had three different lengths of dashes and two different lengths of space between the dots and dashes in a code point. The Gerke code had only one length of dash and all inter-element spaces within a code point were equal. Gerke also created code points for the Germanumlautletters, which do not exist in English. Many central European countries belonged to the German-Austrian Telegraph Union. In 1851, the Union decided to adopt a common code across all its countries so that messages could be sent between them without the need for operators to recode them at borders. The Gerke code was adopted for this purpose.
In 1865, a conference in Paris adopted the Gerke code as the international standard, calling itInternational Morse Code. With some very minor changes, this is theMorse codeused today. The Cooke and Wheatstone telegraph needle instruments were capable of using Morse code since dots and dashes could be sent as left and right movements of the needle. By this time, the needle instruments were being made with end stops that made two distinctly different notes as the needle hit them. This enabled the operator to write the message without looking up at the needle which was much more efficient. This was a similar advantage to the Morse telegraph in which the operators could hear the message from the clicking of the relay armature. Nevertheless, after the British telegraph companies were nationalised in 1870 theGeneral Post Officedecided to standardise on the Morse telegraph and get rid of the many different systems they had inherited from private companies.
In the US, telegraph companies refused to use International Morse because of the cost of retraining operators. They opposed attempts by the government to make it law. In most other countries, the telegraph was state controlled so the change could simply be mandated. In the US, there was no single entity running the telegraph. Rather, it was a multiplicity of private companies. This resulted in international operators needing to be fluent in both versions of Morse and to recode both incoming and outgoing messages. The US continued to use American Morse on landlines (radiotelegraphygenerally used International Morse) and this remained the case until the advent of teleprinters which required entirely different codes and rendered the issue moot.[6]
The speed of sending in a manual telegraph is limited by the speed the operator can send each code element. Speeds are typically stated inwords per minute. Words are not all the same length, so literally counting the words will get a different result depending on message content. Instead, a word is defined as five characters for the purpose of measuring speed, regardless of how many words are actually in the message. Morse code, and many other codes, also do not have the same length of code for each character of the word, again introducing a content-related variable. To overcome this, the speed of the operator repeatedly transmitting a standard word is used. PARIS is classically chosen as this standard because that is the length of an average word in Morse.[7]
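The PARIS convention can be made concrete by counting timing units: a dot is 1 unit, a dash 3, gaps within a character 1, gaps between letters 3, and the gap before the next word 7, giving 50 units per word, so the dot duration in seconds is 1.2 divided by the speed in words per minute. A small sketch of that calculation:

```python
# Length of the standard word PARIS in International Morse timing units.
MORSE = {"P": ".--.", "A": ".-", "R": ".-.", "I": "..", "S": "..."}

def char_units(code):
    elements = sum(1 if e == "." else 3 for e in code)   # dot = 1 unit, dash = 3 units
    return elements + (len(code) - 1)                    # 1-unit gap between elements

units = sum(char_units(MORSE[c]) for c in "PARIS")
units += 3 * (len("PARIS") - 1)    # 3-unit gaps between letters
units += 7                         # 7-unit gap before the next word
print(units)                       # 50

wpm = 20
dot_seconds = 60 / (units * wpm)
print(round(dot_seconds, 3))       # 0.06 s per dot at 20 words per minute
```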
In American Morse, the characters are generally shorter than in International Morse. This is partly because American Morse uses more dot elements, and partly because the most common dash, the short dash, is shorter than the International Morse dash (two dot elements long against three). In principle, American Morse will be transmitted faster than International Morse if all other variables are equal. In practice, two things detract from this. Firstly, American Morse, with its roughly five distinct coding elements, made it harder to get the timings right when sending quickly; inexperienced operators were apt to send garbled messages, an effect known as hog Morse. The second reason is that American Morse is more prone to intersymbol interference (ISI) because of the larger density of closely spaced dots. This problem was particularly severe on submarine telegraph cables, making American Morse less suitable for international communications. The only remedy immediately available to an operator dealing with ISI was to slow down the transmission speed.[8]
Morse code for non-Latin alphabets, such asCyrillicorArabic script, is achieved by constructing acharacter encodingfor the alphabet in question using the same, or nearly the same code points as used in theLatin alphabet.Syllabaries, such as Japanesekatakana, are also handled this way (Wabun code). The alternative of adding more code points to Morse code for each new character would result in code transmissions being very long in some languages.[9]
Languages that uselogogramsare more difficult to handle due to the much larger number of characters required. TheChinese telegraph codeuses a codebook of around 9,800 characters (7,000 when originally launched in 1871) which are each assigned a four-digit number. It is these numbers that are transmitted, so Chinese Morse code consists entirely of numerals. The numbers must be looked up at the receiving end making this a slow process, but in the era when telegraph was widely used, skilled Chinesetelegrapherscould recall many thousands of the common codes from memory. The Chinese telegraph code is still used by law enforcement because it is an unambiguous method of recording Chinese names in non-Chinese scripts.[10]
Earlyprinting telegraphscontinued to use Morse code, but the operator no longer sent the dots and dashes directly with a single key. Instead they operated a piano keyboard with the characters to be sent marked on each key. The machine generated the appropriate Morse code point from the key press. An entirely new type of code was developed byÉmile Baudot, patented in 1874. TheBaudot codewas a 5-bit binary code, with the bits sentserially. Having a fixed length code greatly simplified the machine design. The operator entered the code from a small 5-key piano keyboard, each key corresponding to one bit of the code. Like Morse, Baudot code was organised to minimise operator fatigue with the code points requiring the fewest key presses assigned to the most common letters.
Early printing telegraphs required mechanical synchronisation between the sending and receiving machine. TheHughes printing telegraphof 1855 achieved this by sending a Morse dash every revolution of the machine. A different solution was adopted in conjunction with the Baudot code. Start and stop bits were added to each character on transmission, which allowedasynchronous serial communication. This scheme of start and stop bits was followed on all the later major telegraph codes.[11]
On busy telegraph lines, a variant of the Baudot code was used withpunched paper tape. This was the Murray code, invented byDonald Murrayin 1901. Instead of directly transmitting to the line, the keypresses of the operator punched holes in the tape. Each row of holes across the tape had five possible positions to punch, corresponding to the five bits of the Murray code. The tape was then run through a tape reader which generated the code and sent it down the telegraph line. The advantage of this system was that multiple messages could be sent to line very fast from one tape, making better use of the line than direct manual operation could.
Murray completely rearranged the character encoding to minimise wear on the machine since operator fatigue was no longer an issue. Thus, the character sets of the original Baudot and the Murray codes are not compatible. The five bits of the Baudot code are insufficient to represent all the letters, numerals, and punctuation required in a text message. Further, additional characters are required by printing telegraphs to better control the machine. Examples of thesecontrol charactersareline feedandcarriage return. Murray solved this problem by introducingshift codes. These codes instruct the receiving machine to change the character encoding to a different character set. Two shift codes were used in the Murray code; figure shift and letter shift. Another control character introduced by Murray was thedelete character(DEL, code 11111) which punched out all five holes on the tape. Its intended purpose was to remove erroneous characters from the tape, but Murray also used multiple DELs to mark the boundary between messages. Having all the holes punched out made a perforation which was easy to tear into separate messages at the receiving end. A variant of the Baudot–Murray code became an international standard as International Telegraph Alphabet no. 2 (ITA 2) in 1924. The "2" in ITA 2 is because the original Baudot code became the basis for ITA 1. ITA 2 remained the standard telegraph code in use until the 1960s and was still in use in places well beyond then.[12]
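The shift-code mechanism can be sketched as a small stateful decoder. The five-bit values and character assignments below are invented for illustration only; they are not the actual Murray or ITA 2 tables, but the letter-shift/figure-shift behaviour follows the description above.

```python
# Stateful decoding of a 5-bit code with letter-shift and figure-shift.
# Code assignments here are purely illustrative, not the real Murray/ITA 2 values.
LTRS, FIGS = 0b11011, 0b11010
LETTERS = {0b00001: "A", 0b00010: "B", 0b00011: "C"}
FIGURES = {0b00001: "1", 0b00010: "2", 0b00011: "3"}

def decode(codes):
    table, out = LETTERS, []
    for c in codes:
        if c == LTRS:
            table = LETTERS        # switch to the letters character set
        elif c == FIGS:
            table = FIGURES        # switch to the figures character set
        else:
            out.append(table.get(c, "?"))
    return "".join(out)

print(decode([0b00001, 0b00010, FIGS, 0b00001, LTRS, 0b00011]))   # "AB1C"
```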
Theteleprinterwas invented in 1915. This is a printing telegraph with a typewriter-like keyboard on which the operator types the message. Nevertheless,telegramscontinued to be sent inupper caseonly because there was not room for a lower case character set in Baudot–Murray or ITA 2 codes.
Teleprinters were quickly adopted by news organizations, and "wire services" supplying stories to multiple newspapers developed, but an additional application soon arose: sending finishedcopyfrom an urbannewsroomto a remote printing plant. The limited character repertoire of the 5-level codes meant that someone had to manually retype the telegram in mixed case, a laborious and error-prone operation.
TheMonotype systemalready had separate keyboards and casters communicating by a paper tape, but it used a very wide 28-position paper tape to select one of 15 rows and 15 columns in thematrix case. To compete, theMergenthaler Linotype Companydeveloped aTeleTypeSetter(TTS) system which functioned similarly, but using a narrower 6-level code (the name "bit" would not be coineduntil 1948) which was more economical to transmit. TTS retained shift and unshiftcontrol characters, but they operated much like a modern keyboard: the unshift state provided lower-case letters, digits, and common punctuation, while the shift state provided upper-case letters and special symbols. TTS also included Linotype-specific features such asligaturesand a second "upper rail" shift function usually used foritalic type.
A typewriter-like "perforator" would create a paper tape, and had a large dial showing the length of the line so far at the minimum and maximum spaceband width so the typist could decide where to break lines. This tape was then transmitted to a "reperforator", and the recreated paper tape was fed into a Linotype machine with a tape reader at the printing plant. (The tape reader could be retrofitted to an existing Linotype machine, but special high-speed Linotype machines were also made which could operate faster than a manual operator could type.)
An operator was still required to handle the tapes, take the finished type to layout, addtype metalas needed, clear jams, and so on, but one operator could manage multiple Linotype machines.
To keep the feed perforations in the middle of the tape, the TTS code added a "0" row beside the "1" row in ITA-2. To show the similarity to the ITA-2 code, the following tables are sorted as if this is the most-significant bit.
Each shift state has 41 unique characters, making 82 in total. Adding the 8 fixed-width characters which are duplicated in the two shift states, this matches the 90-matrix capacity of a standard Linotype machine. (The variable-width space bands are a 91st character.)
The first computers used existing 5-bit ITA-2 keyboards and printers due to their easy availability, but the limited character repertoire quickly became a pain point.
By the 1960s, improving teleprinter technology meant that longer codes were nowhere near as significant a factor in teleprinter costs as they once were. The computer users wanted lowercase characters and additional punctuation, while both teleprinter and computer manufacturers wished to get rid of shift codes. This led theAmerican Standards Associationto develop a 7-bit code, the American Standard Code for Information Interchange (ASCII). The final form of ASCII was published in 1964 and it rapidly became the standard teleprinter code. ASCII was the last major code developed explicitly with telegraphy equipment in mind. Telegraphy rapidly declined after this and was largely replaced bycomputer networks, especially theInternetin the 1990s.
ASCII had several features geared to aid computer programming. The letter characters were in numerical order of code point, so an alphabetical sort could be achieved simply by sorting the data numerically. The code point for corresponding upper and lower case letters differed only by the value of bit 6, allowing a mix of cases to be sorted alphabetically if this bit was ignored. Other codes were introduced, notablyIBM'sEBCDICderived from thepunched cardmethod of input, but it was ASCII and its derivatives that won out as thelingua francaof computer information exchange.[18]
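The case-bit property can be exercised directly: upper- and lower-case ASCII letters differ only in the 0x20 bit, so clearing that bit on letters before comparison gives a case-insensitive ordering. A brief sketch:

```python
# ASCII upper/lower case differ only in one bit (0x20).
print(hex(ord("A")), hex(ord("a")), hex(ord("A") ^ ord("a")))   # 0x41 0x61 0x20

def caseless_key(s):
    # Clear the case bit on lowercase letters, leaving other characters untouched.
    return bytes(b & ~0x20 if 0x61 <= b <= 0x7A else b for b in s.encode("ascii"))

words = ["apple", "Banana", "Cherry", "date"]
print(sorted(words))                        # plain numeric sort puts all capitals first
print(sorted(words, key=caseless_key))      # ['apple', 'Banana', 'Cherry', 'date']
```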
The arrival of the microprocessor in the 1970s and the personal computer in the 1980s with their 8-bit architecture led to the 8-bit byte becoming the standard unit of computer storage. Packing 7-bit data into 8-bit storage is inconvenient for data retrieval. Instead, most computers stored one ASCII character per byte. This left one bit over that was not doing anything useful. Computer manufacturers used this bit in extended ASCII to overcome some of the limitations of standard ASCII. The main issue was that ASCII was geared to English, particularly American English, and lacked the accented vowels used in other European languages such as French. Currency symbols for other countries were also added to the character set. Unfortunately, different manufacturers implemented different extended ASCIIs, making them incompatible across platforms. In 1987, the International Organization for Standardization issued the standard ISO 8859-1, for an 8-bit character encoding based on 7-bit ASCII, which was widely taken up.
ISO 8859character encodings were developed for non-Latin scriptssuch asCyrillic,Hebrew,Arabic, andGreek. This was still problematic if a document or data used more than one script. Multiple switches between character encodings was required. This was solved by the publication in 1991 of the standard for 16-bitUnicode, in development since 1987. Unicode maintained ASCII characters at the same code points for compatibility. As well as support for non-Latin scripts, Unicode provided code points for logograms such asChinese charactersand many specialist characters such as astrological and mathematical symbols. In 1996, Unicode 2.0 allowed code points greater than 16-bit; up to 20-bit, and 21-bit with an additional private use area. 20-bit Unicode provided support for extinct languages such asOld Italic scriptand many rarely used Chinese characters.[19]
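The difference between a single-byte extension such as ISO 8859-1 and a variable-length Unicode encoding can be seen directly; a brief sketch using Python's built-in codecs:

```python
# ISO 8859-1 ("latin-1") packs Western European accented letters into one byte,
# while UTF-8 spends more bytes but can encode any Unicode code point.
text = "é"
print(text.encode("latin-1"))    # b'\xe9'       (one byte)
print(text.encode("utf-8"))      # b'\xc3\xa9'   (two bytes)

print("Ж".encode("utf-8"))       # Cyrillic needs UTF-8 (or ISO 8859-5), not latin-1
print(hex(ord("𝄞")))             # a code point above 16 bits (U+1D11E), fine in Unicode
```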
In 1931, theInternational Code of Signals, originally created for ship communication by signalling using flags, was expanded by adding a collection of five-letter codes to be used by radiotelegraph operators.
An alternative representation of needle codes is to use the numeral "1" for needle left, and "3" for needle right. The numeral "2", which does not appear in most codes represents the needle in the neutral upright position. The codepoints using this scheme are marked on the face of some needle instruments, especially those used for training.[32]
When used with aprinting telegraphorsiphon recorder, the "dashes" of dot-dash codes are often made the same length as the "dot". Typically, the mark on the tape for a dot is made above the mark for a dash. An example of this can be seen in the 1837 Steinheil code, which is nearly identical to the 1849 Steinheil code, except that they are represented differently in the table. International Morse code was commonly used in this form onsubmarine telegraph cables.[40] | https://en.wikipedia.org/wiki/Telegraph_code |
NMOSornMOSlogic (from N-type metal–oxide–semiconductor) usesn-type(-)MOSFETs(metal–oxide–semiconductorfield-effect transistors) to implementlogic gatesand otherdigital circuits.[1][2]
NMOS transistors operate by creating aninversion layerin ap-typetransistor body. This inversion layer, called the n-channel, can conductelectronsbetweenn-typesourceanddrainterminals. The n-channel is created by applying voltage to the third terminal, called thegate. Like other MOSFETs, nMOS transistors have four modes of operation: cut-off (or subthreshold), triode, saturation (sometimes called active), and velocity saturation.
NMOS AND-by-default logic can produce unusual glitches or buggy behavior in NMOS components, such as the6502"illegal opcodes" which are absent in CMOS 6502s. In some cases such as Commodore'sVIC-IIchip, the bugs present in the chip's logic were extensively exploited by programmers for graphics effects.
For many years, NMOS circuits were much faster than comparable PMOS and CMOS circuits, which had to use much slower p-channel transistors. It was also easier to manufacture NMOS than CMOS, as the latter has to implement p-channel transistors in special n-wells on the p-substrate; NMOS circuits were also not prone to damage from bus conflicts and not as vulnerable to electrostatic discharge damage. The major drawback with NMOS (and most other logic families) is that a direct current must flow through a logic gate even when the output is in a steady state (low in the case of NMOS). This means static power dissipation, i.e. power drain even when the circuit is not switching, leading to high power consumption.
Another disadvantage of NMOS circuits is their thermal output. Because a steady current must flow through the load of any gate whose output is held low, NMOS circuits can generate a considerable amount of heat in operation, which can reduce the device's reliability. This was especially problematic with the early, large process nodes of the 1970s. CMOS circuits, by contrast, generate almost no heat unless the transistor count approaches 1 million.
CMOS components were relatively uncommon in the 1970s and early 1980s and were typically indicated with a "C" in the part number. Throughout the 1980s, both NMOS and CMOS parts were widely used, with CMOS becoming more widespread as the decade went along. NMOS was preferred for components that performed active processing, such as CPUs or graphics processors, because of its higher speed and cheaper manufacturing; these were costly parts compared to simpler devices such as memory chips. Some chips, such as the Motorola 68030, were hybrids with both NMOS and CMOS sections. CMOS has been near-universal in integrated circuits since the 1990s.
Additionally, just like indiode–transistor logic,transistor–transistor logic,emitter-coupled logicetc., the asymmetric input logic levels make NMOS and PMOS circuits more susceptible to noise than CMOS. These disadvantages are whyCMOS logichas supplanted most of these types in most high-speed digital circuits such asmicroprocessorsdespite the fact that CMOS was originally very slow compared tologic gatesbuilt withbipolar transistors.
MOS stands formetal-oxide-semiconductor, reflecting the way MOS-transistors were originally constructed, predominantly before the 1970s, with gates of metal, typically aluminium. Since around 1970, however, most MOS circuits have usedself-aligned gatesmade ofpolycrystalline silicon, a technology first developed byFederico FagginatFairchild Semiconductor. Thesesilicon gatesare still used in most types of MOSFET basedintegrated circuits, although metal gates (AlorCu) started to reappear in the early 2000s for certain types of high speed circuits, such as high performance microprocessors.
The MOSFETs are n-typeenhancement modetransistors, arranged in a so-called "pull-down network" (PDN) between the logic gate output and negative supply voltage (typically the ground). Apull up(i.e. a "load" that can be thought of as a resistor, see below) is placed between the positive supply voltage and each logic gate output. Anylogic gate, including thelogical inverter, can then be implemented by designing a network of parallel and/or series circuits, such that if the desired output for a certain combination ofbooleaninput values iszero(orfalse), the PDN will be active, meaning that at least one transistor is allowing a current path between the negative supply and the output. This causes a voltage drop over the load, and thus a low voltage at the output, representing thezero.
As an example, here is a NOR gate implemented in schematic NMOS. If either input A or input B is high (logic 1, = True), the respective MOS transistor acts as a very low resistance between the output and the negative supply, forcing the output to be low (logic 0, = False). When both A and B are high, both transistors are conductive, creating an even lower resistance path to ground. The only case where the output is high is when both transistors are off, which occurs only when both A and B are low, thus satisfying the truth table of a NOR gate, as sketched below.
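A minimal behavioural sketch of that two-input NMOS NOR stage (a passive pull-up with two parallel pull-down transistors), ignoring analog effects such as rise and fall times:

```python
# Behavioural model of a 2-input NMOS NOR gate: two n-channel pull-down transistors
# in parallel to ground, with a passive pull-up to the positive supply.
def nmos_nor(a, b):
    pull_down_active = a or b          # either transistor conducting pulls the output low
    return 0 if pull_down_active else 1

print(" A B | out")
for a in (0, 1):
    for b in (0, 1):
        print(f" {a} {b} |  {nmos_nor(a, b)}")
```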
A MOSFET can be made to operate as a resistor, so the whole circuit can be made with n-channel MOSFETs only. NMOS circuits are slow to transition from low to high. When transitioning from high to low, the transistors provide low resistance, and the capacitive charge at the output drains away very quickly (similar to discharging a capacitor through a very low resistor). But the resistance between the output and the positive supply rail is much greater, so the low to high transition takes longer (similar to charging a capacitor through a high value resistor). Using a resistor of lower value will speed up the process but also increases static power dissipation. However, a better (and the most common) way to make the gates faster is to usedepletion-modetransistors instead ofenhancement-modetransistors as loads. This is calleddepletion-load NMOS logic. | https://en.wikipedia.org/wiki/NMOS_logic |
Anindependent test organizationis an organization, person, or company that tests products, materials, software, etc. according to agreed requirements. The test organization can be affiliated with the government or universities or can be anindependent testing laboratory. They are independent because they are not affiliated with the producer nor the user of the item being tested: no commercial bias is present. These "contract testing" facilities are sometimes called "third party" testing or evaluation facilities.
Many suppliers or vendors offer somechemical testing,physical testing, andsoftware testingas a free service to customers. It is common for businesses to partner with reputable suppliers: Many suppliers have certified quality management systems such asISO 9000or allow customers to conduct technical and quality audits. Data from testing is commonly shared. There is sometimes a risk that supplier testing may tend to be self-serving and not completely impartial.
Large companies often have their own specialized staff and in-house testing laboratories. Corporate engineers know their products, manufacturing capabilities, logistics systems, and customers best. Cost reduction of existing products and cost avoidance for new products have been documented.
Another option is to use paidconsultants,Independent contractors, and third-party test laboratories. They are commonly chosen for specialized expertise, for access to certain test equipment, for surge projects, or where independent testing is otherwise required. Many have certifications andaccreditations: ISO 9000,ISO/IEC 17025, and various governing agencies.
Independent third party laboratories should not be affiliated with any supplier as such affiliation creates bias.
Independent testing might have a variety of purposes, such as:
There are varioustechnical standardsavailable which organizations can use to evaluate products and services.Test methodsare published by regulators or can be included inspecificationsorcontracts. International standards organizations also publish test methods:
For example, in software, the Capability Maturity Model Integration (CMMI) is a process improvement approach that “provides organizations with the essential elements of effective processes.” There are various levels attainable within CMMI, the highest of which is Level 5. Attaining this level of certification verifies that the practices of the organization are exemplary.
The Testing Maturity Model (TMM) has been designed to complement CMMI and is based on best industry practices. The TMM has two components: first, a set of five levels that define testing capability, covering maturity goals, subgoals, activities, tasks and responsibilities; and second, an assessment model consisting of a maturity questionnaire and an assessment procedure.
There is also the Test Process Improvement model from Sogeti. This supports the improvement of test processes by looking at 20 key areas and has different levels therein to enable insight into the state of the key areas. In order to satisfy the criteria stipulated in the best practice guidelines, organizations must be committed and must invest time and money to implement and adhere to the processes as defined by such guidelines.
Typically, companies have a small test team which coordinates the entire testing activity. During the testing cycle, the test team is supplemented with the readily available developers. | https://en.wikipedia.org/wiki/Independent_test_organization |
Acolor spaceis a specific organization ofcolors. In combination with color profiling supported by various physical devices, it supports reproducible representations of color – whether such representation entails ananalogor adigitalrepresentation. A color space may be arbitrary, i.e. with physically realized colors assigned to a set of physicalcolor swatcheswith corresponding assignedcolor names(including discrete numbers in – for example – thePantonecollection), or structured with mathematical rigor (as with theNCS System,Adobe RGBandsRGB). A "color space" is a useful conceptual tool for understanding the color capabilities of a particular device or digital file. When trying to reproduce color on another device, color spaces can show whether shadow/highlight detail and color saturation can be retained, and by how much either will be compromised.
A "color model" is an abstract mathematical model describing the way colors can be represented astuplesof numbers (e.g. triples inRGBor quadruples inCMYK); however, a color model with no associated mapping function to anabsolute color spaceis a more or less arbitrary color system with no connection to any globally understood system of color interpretation. Adding a specific mapping function between a color model and a reference color space establishes within the reference color space a definite "footprint", known as agamut, and for a given color model, this defines a color space. For example, Adobe RGB and sRGB are two different absolute color spaces, both based on the RGB color model. When defining a color space, the usual reference standard is theCIELABorCIEXYZcolor spaces, which were specifically designed to encompass all colors the average human can see.[1]
Since "color space" identifies a particular combination of the color model and the mapping function, the word is often used informally to identify a color model. However, even though identifying a color space automatically identifies the associated color model, this usage is incorrect in a strict sense. For example, although several specific color spaces are based on theRGB color model, there is no such thing as the singularRGB color space.
In 1802,Thomas Youngpostulated the existence of three types ofphotoreceptors(now known ascone cells) in the eye, each of which was sensitive to a particular range of visible light.[2]Hermann von Helmholtzdeveloped theYoung–Helmholtz theoryfurther in 1850: that the three types of cone photoreceptors could be classified as short-preferring (blue), middle-preferring (green), and long-preferring (red), according to their response to thewavelengthsof light striking theretina. The relative strengths of the signals detected by the three types of cones are interpreted by thebrainas a visible color. But it is not clear that they thought of colors as being points in color space.
The color-space concept was likely due toHermann Grassmann, who developed it in two stages. First, he developed the idea ofvector space, which allowed the algebraic representation of geometric concepts inn-dimensionalspace.[3]Fearnley-Sander (1979) describes Grassmann's foundation of linear algebra as follows:[4]
The definition of alinear space(vector space)... became widely known around 1920, whenHermann Weyland others published formal definitions. In fact, such a definition had been given thirty years previously byPeano, who was thoroughly acquainted with Grassmann's mathematical work. Grassmann did not put down a formal definition—the language was not available—but there is no doubt that he had the concept.
With this conceptual background, in 1853, Grassmann published a theory of how colors mix; it and its three color laws are still taught, asGrassmann's law.[5]
As noted first by Grassmann... the light set has the structure of a cone in the infinite-dimensional linear space. As a result, a quotient set (with respect to metamerism) of the light cone inherits the conical structure, which allows color to be represented as a convex cone in the 3-D linear space, which is referred to as the color cone.[6]
Colors can be created inprintingwithcolorspaces based on theCMYK color model, using the subtractiveprimary colorsofpigment(cyan,magenta,yellow, andkey[black]). To create a three-dimensional representation of a given color space, we can assign the amount of magenta color to the representation's Xaxis, the amount of cyan to its Y axis, and the amount of yellow to its Z axis. The resulting 3-D space provides a unique position for every possible color that can be created by combining those three pigments.
Colors can be created oncomputer monitorswith color spaces based on theRGB color model, using the additive primary colors (red,green, andblue). A three-dimensional representation would assign each of the three colors to the X, Y, and Z axes. Colors generated on a given monitor will be limited by the reproduction medium, such as the phosphor (in aCRT monitor) or filters and backlight (LCDmonitor).
Another way of creating colors on a monitor is with anHSL or HSVcolor model, based onhue,saturation,brightness(value/lightness). With such a model, the variables are assigned tocylindrical coordinates.
Many color spaces can be represented as three-dimensional values in this manner, but some have more, or fewer dimensions, and some, such asPantone, cannot be represented in this way at all.
Color space conversion is the translation of the representation of a color from one basis to another. This typically occurs in the context of converting an image that is represented in one color space to another color space, the goal being to make the translated image look as similar as possible to the original.
The RGB color model is implemented in different ways, depending on the capabilities of the system used. The most common incarnation in general use as of 2021 is the 24-bit implementation, with 8 bits, or 256 discrete levels of color per channel.[7] Any color space based on such a 24-bit RGB model is thus limited to a range of 256×256×256 ≈ 16.7 million colors. Some implementations use 16 bits per component for 48 bits total, resulting in the same gamut with a larger number of distinct colors. This is especially important when working with wide-gamut color spaces (where most of the more common colors are located relatively close together), or when a large number of digital filtering algorithms are used consecutively. The same principle applies for any color space based on the same color model, but implemented at different bit depths.
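The 16.7 million figure follows from 256 levels per channel; the sketch below also shows one common (but not universal) convention for packing an RGB triple into a single 24-bit integer:

```python
# 8 bits per channel gives 256 levels, and three channels give ~16.7 million colors.
levels = 2 ** 8
print(levels ** 3)                 # 16777216

def pack_rgb(r, g, b):             # one common 0xRRGGBB packing convention
    return (r << 16) | (g << 8) | b

def unpack_rgb(value):
    return (value >> 16) & 0xFF, (value >> 8) & 0xFF, value & 0xFF

c = pack_rgb(255, 128, 0)
print(hex(c), unpack_rgb(c))       # 0xff8000 (255, 128, 0)
```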
CIE 1931 XYZ color spacewas one of the first attempts to produce a color space based on measurements of human color perception (earlier efforts were byJames Clerk Maxwell, König & Dieterici, and Abney atImperial College)[8]and it is the basis for almost all other color spaces. TheCIERGBcolor space is a linearly-related companion of CIE XYZ. Additional derivatives of CIE XYZ include theCIELUV,CIEUVW, andCIELAB.
RGBusesadditive colormixing, because it describes what kind oflightneeds to beemittedto produce a given color. RGB stores individual values for red, green and blue.RGBAis RGB with an additional channel, alpha, to indicate transparency. Common color spaces based on the RGB model includesRGB,Adobe RGB,ProPhoto RGB,scRGB, andCIE RGB.
CMYKusessubtractive colormixing used in the printing process, because it describes what kind ofinksneed to be applied so the lightreflectedfrom thesubstrateand through the inks produces a given color. One starts with a white substrate (canvas, page, etc.), and uses ink to subtract color from white to create an image. CMYK stores ink values for cyan, magenta, yellow and black. There are many CMYK color spaces for different sets of inks, substrates, and press characteristics (which change the dot gain or transfer function for each ink and thus change the appearance).
YIQwas formerly used inNTSC(North America,Japanand elsewhere) television broadcasts for historical reasons. This system stores alumavalue roughly analogous to (and sometimes incorrectly identified as)[9][10]luminance, along with twochromavalues as approximate representations of the relative amounts of blue and red in the color. It is similar to theYUVscheme used in most video capture systems[11]and inPAL(Australia,Europe, exceptFrance, which usesSECAM) television, except that the YIQ color space is rotated 33° with respect to the YUV color space and the color axes are swapped. TheYDbDrscheme used by SECAM television is rotated in another way.
YPbPris a scaled version of YUV. It is most commonly seen in its digital form,YCbCr, used widely invideoandimage compressionschemes such asMPEGandJPEG.
xvYCCis a new international digital video color space standard published by theIEC(IEC 61966-2-4). It is based on theITUBT.601andBT.709standards but extends the gamut beyond the R/G/B primaries specified in those standards.
HSV(hue,saturation,value), also known as HSB (hue, saturation,brightness) is often used by artists because it is often more natural to think about a color in terms of hue and saturation than in terms of additive or subtractive color components. HSV is a transformation of an RGB color space, and its components and colorimetry are relative to the RGB color space from which it was derived.
HSL(hue,saturation,lightness/luminance), also known as HLS or HSI (hue, saturation,intensity) is quite similar toHSV, with "lightness" replacing "brightness". The difference is that thebrightnessof a pure color is equal to the brightness of white, while thelightnessof a pure color is equal to the lightness of a medium gray.
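Both of these derived-from-RGB models are available in Python's standard colorsys module, which works on components in the 0–1 range; a quick sketch, which also illustrates the lightness-versus-value distinction for a pure color:

```python
import colorsys

# Convert a pure red RGB triple (components in 0..1) to HSV and HLS.
r, g, b = 1.0, 0.0, 0.0
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)                     # 0.0 1.0 1.0  (hue 0 = red, full saturation, full value)

h, l, s = colorsys.rgb_to_hls(r, g, b)
print(h, l, s)                     # 0.0 0.5 1.0  (a pure color has lightness 0.5 in HSL)
```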
Early color spaces had two components. They largely ignored blue light because the added complexity of a 3-component process provided only a marginal increase in fidelity when compared to the jump from monochrome to 2-component color.
Incolor science, there are two meanings of the termabsolute color space:
In this article, we concentrate on the second definition.
CIEXYZ,sRGB, andICtCpare examples of absolute color spaces, as opposed to a genericRGB color space.
A non-absolute color space can be made absolute by defining its relationship to absolute colorimetric quantities. For instance, if the red, green, and blue colors in a monitor are measured exactly, together with other properties of the monitor, then RGB values on that monitor can be considered as absolute. TheCIE 1976 L*, a*, b* color spaceis sometimes referred to as absolute, though it also needs awhite pointspecification to make it so.[16]
A popular way to make a color space like RGB into an absolute color is to define anICCprofile, which contains the attributes of the RGB. This is not the only way to express an absolute color, but it is the standard in many industries. RGB colors defined by widely accepted profiles include sRGB andAdobe RGB. The process of adding anICC profileto a graphic or document is sometimes calledtaggingorembedding; tagging, therefore, marks the absolute meaning of colors in that graphic or document.
A color in one absolute color space can be converted into another absolute color space, and back again, in general; however, some color spaces may havegamutlimitations, and converting colors that lie outside that gamut will not produce correct results. There are also likely to be rounding errors, especially if the popular range of only 256 distinct values per component (8-bit color) is used.
One part of the definition of an absolute color space is the viewing conditions. The same color, viewed under different natural or artificiallightingconditions, will look different. Those involved professionally with color matching may use viewing rooms, lit by standardized lighting.
Occasionally, there are precise rules for converting between non-absolute color spaces. For example,HSL and HSVspaces are defined as mappings of RGB. Both are non-absolute, but the conversion between them should maintain the same color. However, in general, converting between two non-absolute color spaces (for example, RGB toCMYK) or between absolute and non-absolute color spaces (for example, RGB to L*a*b*) is almost a meaningless concept.
A different method of defining absolute color spaces is familiar to many consumers as the swatch card, used to select paint, fabrics, and the like. This is a way of agreeing a color between two parties. A more standardized method of defining absolute colors is thePantone Matching System, a proprietary system that includes swatch cards and recipes that commercial printers can use to make inks that are a particular color. | https://en.wikipedia.org/wiki/Color_space |
TheFinnish Defence Forcesswitched over to theNATO phonetic alphabetin 2005, but the Finnish one is used for Å, Ä, Ö and digits.[1]International operations use only the NATO alphabet.
On the Finnish rail network, the Finnish Armed Forces spelling alphabet was used until May 31, 2020; starting on July 1, the railways switched to the NATO phonetic alphabet, while still retaining the Finnish spelling words for Å, Ä, Ö and numbers.[2] | https://en.wikipedia.org/wiki/Finnish_Armed_Forces_radio_alphabet
In mathematics, the distributive property of binary operations is a generalization of the distributive law, which asserts that the equality x⋅(y + z) = x⋅y + x⋅z is always true in elementary algebra.
For example, in elementary arithmetic, one has 2⋅(1 + 3) = (2⋅1) + (2⋅3). Therefore, one would say that multiplication distributes over addition.
This basic property of numbers is part of the definition of most algebraic structures that have two operations called addition and multiplication, such as complex numbers, polynomials, matrices, rings, and fields. It is also encountered in Boolean algebra and mathematical logic, where each of the logical and (denoted ∧) and the logical or (denoted ∨) distributes over the other.
Given a set S and two binary operators ∗ and + on S, the operation ∗ is said to be left-distributive over + if, for all x, y, z in S,
x∗(y + z) = (x∗y) + (x∗z);
it is said to be right-distributive over + if, for all x, y, z in S,
(y + z)∗x = (y∗x) + (z∗x);
and it is said to be distributive over + if it is both left- and right-distributive.
When ∗ is commutative, the three conditions above are logically equivalent.
The operators used for examples in this section are those of the usual addition + and multiplication ⋅.
If the operation denoted ⋅ is not commutative, there is a distinction between left-distributivity and right-distributivity:
a⋅(b ± c) = a⋅b ± a⋅c  (left-distributive)
(a ± b)⋅c = a⋅c ± b⋅c  (right-distributive).
In either case, the distributive property can be described in words as:
To multiply asum(ordifference) by a factor, each summand (orminuendandsubtrahend) is multiplied by this factor and the resulting products are added (or subtracted).
If the operation outside the parentheses (in this case, the multiplication) is commutative, then left-distributivity implies right-distributivity and vice versa, and one talks simply ofdistributivity.
One example of an operation that is "only" right-distributive is division, which is not commutative: (a ± b) ÷ c = a÷c ± b÷c. In this case, left-distributivity does not apply: a ÷ (b ± c) ≠ a÷b ± a÷c.
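This asymmetry is easy to check with exact rational arithmetic; a small sketch:

```python
from fractions import Fraction as F

a, b, c = F(1), F(2), F(4)

# Right-distributivity of division over addition holds:
print((a + b) / c == a / c + b / c)        # True

# Left-distributivity generally fails:
print(a / (b + c) == a / b + a / c)        # False  (1/6 != 3/4)
```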
The distributive laws are among the axioms forrings(like the ring ofintegers) andfields(like the field ofrational numbers). Here multiplication is distributive over addition, but addition is not distributive over multiplication. Examples of structures with two operations that are each distributive over the other areBoolean algebrassuch as thealgebra of setsor theswitching algebra.
Multiplying sums can be put into words as follows: When a sum is multiplied by a sum, multiply each summand of a sum with each summand of the other sum (keeping track of signs) then add up all of the resulting products.
In the following examples, the use of the distributive law on the set of real numbers ℝ is illustrated. When multiplication is mentioned in elementary mathematics, it usually refers to this kind of multiplication. From the point of view of algebra, the real numbers form a field, which ensures the validity of the distributive law.
The distributive law is valid for matrix multiplication. More precisely, (A + B)⋅C = A⋅C + B⋅C for all l×m matrices A, B and m×n matrices C, as well as A⋅(B + C) = A⋅B + A⋅C for all l×m matrices A and m×n matrices B, C. Because the commutative property does not hold for matrix multiplication, the second law does not follow from the first law. In this case, they are two different laws.
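A quick numerical spot-check of both matrix distributive laws, assuming NumPy is available and using integer matrices so that exact equality comparison is safe:

```python
import numpy as np

A = np.array([[1, 2], [3, 4], [5, 6]])      # 3x2
B = np.array([[7, 8], [9, 10], [11, 12]])   # 3x2
C = np.array([[1, 0, 2], [3, 1, 4]])        # 2x3
D = np.array([[2, 1, 0], [0, 1, 2]])        # another 2x3

print(np.array_equal((A + B) @ C, A @ C + B @ C))   # right distributive law: True
print(np.array_equal(A @ (C + D), A @ C + A @ D))   # left distributive law: True
```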
In standard truth-functional propositional logic, distribution[3][4] in logical proofs uses two valid rules of replacement to expand individual occurrences of certain logical connectives, within some formula, into separate applications of those connectives across subformulas of the given formula. The rules are
(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R))  and  (P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R)),
where "⇔", also written ≡, is a metalogical symbol representing "can be replaced in a proof with" or "is logically equivalent to".
Distributivity is a property of some logical connectives of truth-functional propositional logic. The following logical equivalences demonstrate that distributivity is a property of particular connectives. The following are truth-functional tautologies.
(P ∧ (Q ∨ R)) ⇔ ((P ∧ Q) ∨ (P ∧ R))   distribution of conjunction over disjunction
(P ∨ (Q ∧ R)) ⇔ ((P ∨ Q) ∧ (P ∨ R))   distribution of disjunction over conjunction
(P ∧ (Q ∧ R)) ⇔ ((P ∧ Q) ∧ (P ∧ R))   distribution of conjunction over conjunction
(P ∨ (Q ∨ R)) ⇔ ((P ∨ Q) ∨ (P ∨ R))   distribution of disjunction over disjunction
(P → (Q → R)) ⇔ ((P → Q) → (P → R))   distribution of implication
(P → (Q ↔ R)) ⇔ ((P → Q) ↔ (P → R))   distribution of implication over equivalence
(P → (Q ∧ R)) ⇔ ((P → Q) ∧ (P → R))   distribution of implication over conjunction
(P ∨ (Q ↔ R)) ⇔ ((P ∨ Q) ↔ (P ∨ R))   distribution of disjunction over equivalence
Double distribution:
((P ∧ Q) ∨ (R ∧ S)) ⇔ (((P ∨ R) ∧ (P ∨ S)) ∧ ((Q ∨ R) ∧ (Q ∨ S)))
((P ∨ Q) ∧ (R ∨ S)) ⇔ (((P ∧ R) ∨ (P ∧ S)) ∨ ((Q ∧ R) ∨ (Q ∧ S)))
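Any of these equivalences can be verified by brute force over all truth assignments; a minimal sketch checking two of them:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# Check two of the distribution tautologies over every truth assignment.
for P, Q, R in product([False, True], repeat=3):
    assert (P and (Q or R)) == ((P and Q) or (P and R))                         # ∧ over ∨
    assert implies(P, implies(Q, R)) == implies(implies(P, Q), implies(P, R))   # → over →
print("both equivalences hold for all assignments")
```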
In approximate arithmetic, such as floating-point arithmetic, the distributive property of multiplication (and division) over addition may fail because of the limitations of arithmetic precision. For example, the identity 1/3 + 1/3 + 1/3 = (1 + 1 + 1)/3 fails in decimal arithmetic, regardless of the number of significant digits. Methods such as banker's rounding may help in some cases, as may increasing the precision used, but ultimately some calculation errors are inevitable.
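The failure in decimal arithmetic can be reproduced with Python's decimal module, which computes to a fixed number of significant digits (the precision is set low here only to keep the output short):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6                 # 6 significant digits
third = Decimal(1) / Decimal(3)       # 0.333333

lhs = third + third + third           # 0.999999
rhs = (Decimal(1) + Decimal(1) + Decimal(1)) / Decimal(3)   # 1
print(lhs, rhs, lhs == rhs)           # 0.999999 1 False
```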
Distributivity is most commonly found insemirings, notably the particular cases ofringsanddistributive lattices.
A semiring has two binary operations, commonly denoted + and ∗, and requires that ∗ must distribute over +.
A ring is a semiring with additive inverses.
A lattice is another kind of algebraic structure with two binary operations, ∧ and ∨. If either of these operations distributes over the other (say ∧ distributes over ∨), then the reverse also holds (∨ distributes over ∧), and the lattice is called distributive. See also Distributivity (order theory).
ABoolean algebracan be interpreted either as a special kind of ring (aBoolean ring) or a special kind of distributive lattice (aBoolean lattice). Each interpretation is responsible for different distributive laws in the Boolean algebra.
Similar structures without distributive laws arenear-ringsandnear-fieldsinstead of rings anddivision rings. The operations are usually defined to be distributive on the right but not on the left.
In several mathematical areas, generalized distributivity laws are considered. This may involve the weakening of the above conditions or the extension to infinitary operations. Especially in order theory one finds numerous important variants of distributivity, some of which include infinitary operations, such as the infinite distributive law; others are defined in the presence of only one binary operation; the corresponding definitions and their relations are given in the article distributivity (order theory). This also includes the notion of a completely distributive lattice.
In the presence of an ordering relation, one can also weaken the above equalities by replacing = by either ≤ or ≥. Naturally, this will lead to meaningful concepts only in some situations. An application of this principle is the notion of sub-distributivity as explained in the article on interval arithmetic.
In category theory, if (S, μ, ν) and (S′, μ′, ν′) are monads on a category C, a distributive law S.S′ → S′.S is a natural transformation λ : S.S′ → S′.S such that (S′, λ) is a lax map of monads S → S and (S, λ) is a colax map of monads S′ → S′. This is exactly the data needed to define a monad structure on S′.S: the multiplication map is S′μ.μ′S².S′λS and the unit map is η′S.η. See: distributive law between monads.
Ageneralized distributive lawhas also been proposed in the area ofinformation theory.
The ubiquitous identity that relates inverses to the binary operation in any group, namely (xy)⁻¹ = y⁻¹x⁻¹, which is taken as an axiom in the more general context of a semigroup with involution, has sometimes been called an antidistributive property (of inversion as a unary operation).[5]
In the context of a near-ring, which removes the commutativity of the additively written group and assumes only one-sided distributivity, one can speak of (two-sided) distributive elements but also of antidistributive elements. The latter reverse the order of (the non-commutative) addition; assuming a left nearring (i.e. one in which all elements distribute when multiplied on the left), an antidistributive element a reverses the order of addition when multiplied on the right: (x + y)a = ya + xa.[6]
In the study of propositional logic and Boolean algebra, the term antidistributive law is sometimes used to denote the interchange between conjunction and disjunction when implication factors over them:[7] (a ∨ b) ⇒ c ≡ (a ⇒ c) ∧ (b ⇒ c) and (a ∧ b) ⇒ c ≡ (a ⇒ c) ∨ (b ⇒ c).
These twotautologiesare a direct consequence of the duality inDe Morgan's laws. | https://en.wikipedia.org/wiki/Distributivity |
Theampere-turn(symbolA⋅t) is theMKS(metre–kilogram–second) unit ofmagnetomotive force(MMF), represented by adirect currentof oneampereflowing in a single-turn loop.[1]Turnsrefers to thewinding numberof an electrical conductor composing anelectromagnetic coil.
For example, a current of2 Aflowing through a coil of 10 turns produces an MMF of20 A⋅t.
The corresponding physical quantity isNI, the product of thenumber of turns,N, and the current,I; it has been used in industry, specifically, US-basedcoil-making industries.[citation needed]
By maintaining the same current and increasing the number of loops or turns of the coil, the strength of the magnetic field increases because each loop or turn of the coil sets up its own magnetic field. The magnetic field unites with the fields of the other loops to produce the field around the entire coil, making the total magnetic field stronger.
The strength of the magnetic field is not linearly related to the ampere-turns when a magnetic material is used as a part of the system. Also, the material within the magnet carrying the magnetic flux "saturates" at some point, after which adding more ampere-turns has little effect.
The ampere-turn corresponds to 4π/10 gilberts, the equivalent CGS unit.
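For quick numerical checks, the product MMF = N·I and the CGS conversion quoted above fit in a few lines of Python (a minimal sketch using only the figures given here):

```python
import math

def mmf_ampere_turns(turns: int, current_amperes: float) -> float:
    """Magnetomotive force in ampere-turns: the product N * I."""
    return turns * current_amperes

def ampere_turns_to_gilberts(ampere_turns: float) -> float:
    """Convert ampere-turns to gilberts using 1 A*t = 4*pi/10 Gb."""
    return ampere_turns * 4 * math.pi / 10

mmf = mmf_ampere_turns(turns=10, current_amperes=2.0)   # the example above: 20 A*t
print(mmf, round(ampere_turns_to_gilberts(mmf), 2))     # 20.0 25.13
```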
In Thomas Edison's laboratory, Francis Upton was the lead mathematician. Trained with Helmholtz in Germany, he used weber as the name of the unit of current, modified to ampere later. | https://en.wikipedia.org/wiki/Ampere-turn
In mathematics, a residuated Boolean algebra is a residuated lattice whose lattice structure is that of a Boolean algebra. Examples include Boolean algebras with the monoid taken to be conjunction, the set of all formal languages over a given alphabet Σ under concatenation, the set of all binary relations on a given set X under relational composition, and more generally the power set of any equivalence relation, again under relational composition. The original application was to relation algebras as a finitely axiomatized generalization of the binary relation example, but there exist interesting examples of residuated Boolean algebras that are not relation algebras, such as the language example.
A residuated Boolean algebra is an algebraic structure (L, ∧, ∨, ¬, 0, 1, •, I, \, /) such that
An equivalent signature better suited to the relation algebra application is (L, ∧, ∨, ¬, 0, 1, •, I, ▷, ◁), where the unary operations x\ and x▷ are intertranslatable in the manner of De Morgan's laws via x\y = ¬(x▷¬y) and x▷y = ¬(x\¬y),
and dually /y and ◁y as z/y = ¬(¬z◁y) and z◁y = ¬(¬z/y),
with the residuation axioms in the residuated lattice article reorganized accordingly (replacing z by ¬z) to read
This De Morgan dual reformulation is motivated and discussed in more detail in the section below on conjugacy.
Since residuated lattices and Boolean algebras are each definable with finitely many equations, so are residuated Boolean algebras, whence they form a finitely axiomatizable variety.
The De Morgan duals ▷ and ◁ of residuation arise as follows. Among residuated lattices, Boolean algebras are special by virtue of having a complementation operation ¬. This permits an alternative expression of the three inequalities y ≤ x\z ⇔ x•y ≤ z ⇔ x ≤ z/y
in the axiomatization of the two residuals in terms of disjointness, via the equivalence x ≤ y ⇔ x∧¬y = 0. Abbreviating x∧y = 0 to x # y as the expression of their disjointness, and substituting ¬z for z in the axioms, they become with a little Boolean manipulation
Now ¬(x\¬z) is reminiscent of De Morgan duality, suggesting that x\ be thought of as a unary operation f, defined by f(y) = x\y, that has a De Morgan dual ¬f(¬y), analogous to ∀x φ(x) = ¬∃x ¬φ(x). Denoting this dual operation as x▷, we define x▷z as ¬(x\¬z). Similarly we define another operation z◁y as ¬(¬z/y). By analogy with x\ as the residual operation associated with the operation x•, we refer to x▷ as the conjugate operation, or simply conjugate, of x•. Likewise ◁y is the conjugate of •y. Unlike residuals, conjugacy is an equivalence relation between operations: if f is the conjugate of g then g is also the conjugate of f, i.e. the conjugate of the conjugate of f is f. Another advantage of conjugacy is that it becomes unnecessary to speak of right and left conjugates, that distinction now being inherited from the difference between x• and •x, which have as their respective conjugates x▷ and ◁x. (But this advantage accrues also to residuals when x\ is taken to be the residual operation to x•.)
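In the binary-relation example (Example 2), composition, the right residual, and its conjugate can be computed directly; the following minimal Python sketch checks, on a three-element set, that x▷I recovers the converse of x and that the residual satisfies the defining inequality:

```python
from itertools import product

X = range(3)
ALL_PAIRS = set(product(X, X))
I = {(u, u) for u in X}                       # identity relation

def compose(x, y):                            # x • y
    return {(u, w) for (u, v) in x for (v2, w) in y if v == v2}

def complement(x):                            # ¬x
    return ALL_PAIRS - x

def right_residual(x, z):                     # x \ z: the largest y with x • y ⊆ z
    return {(v, w) for (v, w) in ALL_PAIRS
            if all((u, w) in z for u in X if (u, v) in x)}

def conjugate(x, z):                          # x ▷ z := ¬(x \ ¬z)
    return complement(right_residual(x, complement(z)))

def converse(x):                              # x˘
    return {(v, u) for (u, v) in x}

x = {(0, 1), (1, 2), (2, 2)}
z = {(0, 2), (1, 2), (2, 2)}
assert conjugate(x, I) == converse(x)         # x ▷ I equals the converse of x
assert compose(x, right_residual(x, z)) <= z  # residuation: x • (x\z) ⊆ z
print("checks passed")
```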
All this yields (along with the Boolean algebra and monoid axioms) the following equivalent axiomatization of a residuated Boolean algebra.
With this signature it remains the case that this axiomatization can be expressed as finitely many equations.
In Examples 2 and 3 it can be shown that x▷I = I◁x. In Example 2 both sides equal the converse x˘ of x, while in Example 3, both sides are I when x contains the empty word and 0 otherwise. In the former case x˘˘ = x. This is impossible for the latter because x▷I retains hardly any information about x. Hence in Example 2 we can substitute x˘ for x in x▷I = x˘ = I◁x and cancel (soundly) to give
x˘˘ = x can be proved from these two equations. Tarski's notion of a relation algebra can be defined as a residuated Boolean algebra having an operation x˘ satisfying these two equations.
The cancellation step in the above is not possible for Example 3, which therefore is not a relation algebra, x˘ being uniquely determined as x▷I.
Consequences of this axiomatization of converse include x˘˘ = x, ¬(x˘) = (¬x)˘, (x∨y)˘ = x˘ ∨ y˘, and (x•y)˘ = y˘•x˘. | https://en.wikipedia.org/wiki/Residuated_Boolean_algebra
In machine learning, kernel machines are a class of algorithms for pattern analysis, whose best known member is the support-vector machine (SVM). These methods involve using linear classifiers to solve nonlinear problems.[1] The general task of pattern analysis is to find and study general types of relations (for example clusters, rankings, principal components, correlations, classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed into feature vector representations via a user-specified feature map: in contrast, kernel methods require only a user-specified kernel, i.e., a similarity function over all pairs of data points computed using inner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user input, according to the representer theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2] Kernel functions have been introduced for sequence data, graphs, text, images, as well as vectors.
Algorithms capable of operating with kernels include the kernel perceptron, support-vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others.
Most kernel algorithms are based on convex optimization or eigenproblems and are statistically well-founded. Typically, their statistical properties are analyzed using statistical learning theory (for example, using Rademacher complexity).
Kernel methods can be thought of as instance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" the $i$-th training example $(\mathbf{x}_i, y_i)$ and learn for it a corresponding weight $w_i$. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of a similarity function $k$, called a kernel, between the unlabeled input $\mathbf{x'}$ and each of the training inputs $\mathbf{x}_i$. For instance, a kernelized binary classifier typically computes a weighted sum of similarities $\hat{y} = \operatorname{sgn} \sum_{i=1}^{n} w_i y_i k(\mathbf{x}_i, \mathbf{x'}),$ where
Kernel classifiers were described as early as the 1960s, with the invention of the kernel perceptron.[3] They rose to great prominence with the popularity of the support-vector machine (SVM) in the 1990s, when the SVM was found to be competitive with neural networks on tasks such as handwriting recognition.
The kernel trick avoids the explicit mapping that is needed to get linear learning algorithms to learn a nonlinear function or decision boundary. For all $\mathbf{x}$ and $\mathbf{x'}$ in the input space $\mathcal{X}$, certain functions $k(\mathbf{x}, \mathbf{x'})$ can be expressed as an inner product in another space $\mathcal{V}$. The function $k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is often referred to as a kernel or a kernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum or integral.
Certain problems in machine learning have more structure than an arbitrary weighting function $k$. The computation is made much simpler if the kernel can be written in the form of a "feature map" $\varphi \colon \mathcal{X} \to \mathcal{V}$ which satisfies $k(\mathbf{x}, \mathbf{x'}) = \langle \varphi(\mathbf{x}), \varphi(\mathbf{x'}) \rangle_{\mathcal{V}}.$ The key restriction is that $\langle \cdot, \cdot \rangle_{\mathcal{V}}$ must be a proper inner product. On the other hand, an explicit representation for $\varphi$ is not necessary, as long as $\mathcal{V}$ is an inner product space. The alternative follows from Mercer's theorem: an implicitly defined function $\varphi$ exists whenever the space $\mathcal{X}$ can be equipped with a suitable measure ensuring the function $k$ satisfies Mercer's condition.
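For instance, the homogeneous polynomial kernel $k(\mathbf{x}, \mathbf{x'}) = (\mathbf{x}^\top \mathbf{x'})^2$ on $\mathbb{R}^2$ coincides with the ordinary inner product of an explicit three-dimensional feature map; the NumPy sketch below illustrates the identity (the feature map shown is specific to this kernel):

```python
import numpy as np

def poly2_kernel(x, xp):
    """Homogeneous degree-2 polynomial kernel k(x, x') = (x . x')**2."""
    return float(np.dot(x, xp)) ** 2

def poly2_feature_map(x):
    """Explicit map phi: R^2 -> R^3 with <phi(x), phi(x')> = (x . x')**2."""
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

rng = np.random.default_rng(0)
x, xp = rng.normal(size=2), rng.normal(size=2)
lhs = poly2_kernel(x, xp)                                         # implicit: no coordinates computed
rhs = float(np.dot(poly2_feature_map(x), poly2_feature_map(xp)))  # explicit feature space
print(np.isclose(lhs, rhs))  # True
```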
Mercer's theorem is similar to a generalization of the result from linear algebra that associates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure the counting measure $\mu(T) = |T|$ for all $T \subset X$, which counts the number of points inside the set $T$, then the integral in Mercer's theorem reduces to a summation $\sum_{i=1}^{n} \sum_{j=1}^{n} k(\mathbf{x}_i, \mathbf{x}_j) c_i c_j \geq 0.$ If this summation holds for all finite sequences of points $(\mathbf{x}_1, \dotsc, \mathbf{x}_n)$ in $\mathcal{X}$ and all choices of $n$ real-valued coefficients $(c_1, \dots, c_n)$ (cf. positive definite kernel), then the function $k$ satisfies Mercer's condition.
Some algorithms that depend on arbitrary relationships in the native space $\mathcal{X}$ would, in fact, have a linear interpretation in a different setting: the range space of $\varphi$. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to compute $\varphi$ directly during computation, as is the case with support-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, a Gram matrix $\mathbf{K} \in \mathbb{R}^{n \times n}$ with respect to $\{\mathbf{x}_1, \dotsc, \mathbf{x}_n\}$ (sometimes also called a "kernel matrix"[4]), where $K_{ij} = k(\mathbf{x}_i, \mathbf{x}_j)$, must be positive semi-definite (PSD).[5] Empirically, for machine learning heuristics, choices of a function $k$ that do not satisfy Mercer's condition may still perform reasonably if $k$ at least approximates the intuitive idea of similarity.[6] Regardless of whether $k$ is a Mercer kernel, $k$ may still be referred to as a "kernel".
If the kernel function $k$ is also a covariance function as used in Gaussian processes, then the Gram matrix $\mathbf{K}$ can also be called a covariance matrix.[7]
Application areas of kernel methods are diverse and include geostatistics,[8] kriging, inverse distance weighting, 3D reconstruction, bioinformatics, cheminformatics, information extraction and handwriting recognition. | https://en.wikipedia.org/wiki/Kernel_trick
In cryptography, a key distribution center (KDC) is part of a cryptosystem intended to reduce the risks inherent in exchanging keys. KDCs often operate in systems within which some users may have permission to use certain services at some times and not at others.
For instance, an administrator may have established a policy that only certain users may back up to tape. Many operating systems can control access to the tape facility via a "system service". If that system service further restricts the tape drive to operate only on behalf of users who can submit a service-granting ticket when they wish to use it, there remains only the task of distributing such tickets to the appropriately permitted users. If the ticket consists of (or includes) a key, one can then term the mechanism which distributes it a KDC. Usually, in such situations, the KDC itself also operates as a system service.
A typical operation with a KDC involves a request from a user to use some service. The KDC will use cryptographic techniques, mostly using symmetric encryption, to authenticate requesting users as themselves. It will also check whether an individual user has the right to access the service requested. If the authenticated user meets all prescribed conditions, the KDC can issue a ticket permitting access.
In most (but not all) cases the KDC shares a key with each of the other parties.
The KDC produces a ticket based on a server key.
The client receives the ticket and submits it to the appropriate server.
The server can verify the submitted ticket and grant access to the user submitting it.
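A toy sketch of this ticket flow is given below; it is purely structural, uses a stand-in XOR "cipher" in place of real symmetric encryption, and its function names (such as issue_ticket) are illustrative rather than drawn from any particular KDC implementation:

```python
import hashlib

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    # Stand-in for symmetric encryption: XOR with a key-derived stream (NOT secure).
    stream = hashlib.sha256(key).digest() * (len(message) // 32 + 1)
    return bytes(m ^ s for m, s in zip(message, stream))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# The KDC shares a long-term key with each party.
keys = {"alice": b"alice-key", "tape-service": b"tape-service-key"}

def issue_ticket(user: str, service: str) -> bytes:
    """KDC step: authenticate/authorize the user, then seal a ticket with the server key."""
    if user not in keys:                     # authorization check (greatly simplified)
        raise PermissionError(user)
    return toy_encrypt(keys[service], f"{user} may use {service}".encode())

def verify_ticket(service: str, ticket: bytes) -> str:
    """Server step: open the ticket with its own key and grant access accordingly."""
    return toy_decrypt(keys[service], ticket).decode()

ticket = issue_ticket("alice", "tape-service")   # client requests, KDC issues
print(verify_ticket("tape-service", ticket))     # server verifies: "alice may use tape-service"
```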
Security systems using KDCs include Kerberos. (Actually, Kerberos partitions KDC functionality between two different agents: the AS (Authentication Server) and the TGS (Ticket Granting Service).) | https://en.wikipedia.org/wiki/Key_distribution_center
TrustRank is an algorithm that conducts link analysis to separate useful webpages from spam and helps search engines rank pages in SERPs (Search Engine Results Pages). It is a semi-automated process, which means that it needs some human assistance in order to function properly. Search engines have many different algorithms and ranking factors that they use when measuring the quality of webpages. TrustRank is one of them.
Because manual review of the Internet is impractical and very expensive, TrustRank was introduced in order to help achieve this task much more quickly and cheaply. It was first introduced by researchers Zoltan Gyongyi and Hector Garcia-Molina of Stanford University and Jan Pedersen of Yahoo! in their paper "Combating Web Spam with TrustRank" in 2004.[1] Today, this algorithm is a part of major web search engines like Yahoo! and Google.[2]
One of the most important factors that help a web search engine determine the quality of a web page when returning results is backlinks. Search engines take the number and quality of backlinks into consideration when assigning a place to a certain web page in SERPs. Many web spam pages are created only with the intention of misleading search engines. These pages, chiefly created for commercial reasons, use various techniques to achieve higher-than-deserved rankings in the search engines' result pages. While human experts can easily identify spam, search engines are still being improved daily in order to do it without the help of humans.
One popular method for improving rankings is to increase the perceived importance of a document through complex linking schemes. Google's PageRank and other search ranking algorithms have been subjected to such manipulation.
TrustRank seeks to combat spam by filtering the web based upon reliability. The method calls for selecting a small set of seed pages to be evaluated by an expert. Once the reputable seed pages are manually identified, a crawl extending outward from the seed set seeks out similarly reliable and trustworthy pages. TrustRank's reliability diminishes with increased distance between documents and the seed set.
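Trust propagation outward from a vetted seed set can be approximated by a biased, PageRank-style iteration over the link graph; the sketch below is a simplified illustration of that idea rather than the authors' reference implementation (the damping factor and the tiny example graph are arbitrary choices):

```python
import numpy as np

def trustrank(adjacency: np.ndarray, seed_trust: np.ndarray,
              damping: float = 0.85, iterations: int = 50) -> np.ndarray:
    """Propagate trust from manually vetted seed pages along outgoing links."""
    out_degree = adjacency.sum(axis=1, keepdims=True)
    transition = np.zeros_like(adjacency, dtype=float)
    np.divide(adjacency, out_degree, out=transition, where=out_degree > 0)
    d = seed_trust / seed_trust.sum()            # normalized static trust of the seed set
    t = d.copy()
    for _ in range(iterations):
        # Biased PageRank step: follow links with probability `damping`, otherwise jump to seeds.
        t = damping * transition.T @ t + (1 - damping) * d
    return t

# Page 0 is the trusted seed; pages 1-3 sit at increasing link distance from it.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
print(trustrank(A, seed_trust=np.array([1.0, 0.0, 0.0, 0.0])))
# Trust scores decay with distance from the seed, as described above.
```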
The logic works in the opposite way as well, which is called Anti-Trust Rank. The closer a site is to spam resources, the more likely it is to be spam as well.[3]
The researchers who proposed the TrustRank methodology have continued to refine their work by evaluating related topics, such as measuring spam mass. | https://en.wikipedia.org/wiki/TrustRank
Graph neural networks (GNN) are specialized artificial neural networks that are designed for tasks whose inputs are graphs.[1][2][3][4][5]
One prominent example is molecular drug design.[6][7][8] Each input sample is a graph representation of a molecule, where atoms form the nodes and chemical bonds between atoms form the edges. In addition to the graph representation, the input also includes known chemical properties for each of the atoms. Dataset samples may thus differ in length, reflecting the varying numbers of atoms in molecules, and the varying number of bonds between them. The task is to predict the efficacy of a given molecule for a specific medical application, like eliminating E. coli bacteria.
The key design element of GNNs is the use of pairwise message passing, such that graph nodes iteratively update their representations by exchanging information with their neighbors. Several GNN architectures have been proposed,[2][3][9][10][11] which implement different flavors of message passing,[12][13] started by recursive[2] or convolutional constructive[3] approaches. As of 2022[update], it is an open question whether it is possible to define GNN architectures "going beyond" message passing, or instead every GNN can be built on message passing over suitably defined graphs.[14]
In the more general subject of "geometric deep learning", certain existing neural network architectures can be interpreted as GNNs operating on suitably defined graphs.[12] A convolutional neural network layer, in the context of computer vision, can be considered a GNN applied to graphs whose nodes are pixels and only adjacent pixels are connected by edges in the graph. A transformer layer, in natural language processing, can be considered a GNN applied to complete graphs whose nodes are words or tokens in a passage of natural language text.
Relevant application domains for GNNs include natural language processing,[15] social networks,[16] citation networks,[17] molecular biology,[18] chemistry,[19][20] physics[21] and NP-hard combinatorial optimization problems.[22]
Open source libraries implementing GNNs include PyTorch Geometric[23] (PyTorch), TensorFlow GNN[24] (TensorFlow), Deep Graph Library[25] (framework agnostic), jraph[26] (Google JAX), and GraphNeuralNetworks.jl[27]/GeometricFlux.jl[28] (Julia, Flux).
The architecture of a generic GNN implements the following fundamental layers:[12]
It has been demonstrated that GNNs cannot be more expressive than the Weisfeiler–Leman graph isomorphism test.[32][33] In practice, this means that there exist different graph structures (e.g., molecules with the same atoms but different bonds) that cannot be distinguished by GNNs. More powerful GNNs operating on higher-dimension geometries such as simplicial complexes can be designed.[34][35][13] As of 2022[update], whether or not future architectures will overcome the message passing primitive is an open research question.[14]
Message passing layers are permutation-equivariant layers mapping a graph into an updated representation of the same graph. Formally, they can be expressed as message passing neural networks (MPNNs).[12]
Let $G = (V, E)$ be a graph, where $V$ is the node set and $E$ is the edge set. Let $N_u$ be the neighbourhood of some node $u \in V$. Additionally, let $\mathbf{x}_u$ be the features of node $u \in V$, and $\mathbf{e}_{uv}$ be the features of edge $(u, v) \in E$. An MPNN layer can be expressed as follows:[12] $\mathbf{h}_u = \phi\left(\mathbf{x}_u, \bigoplus_{v \in N_u} \psi(\mathbf{x}_u, \mathbf{x}_v, \mathbf{e}_{uv})\right),$
where $\phi$ and $\psi$ are differentiable functions (e.g., artificial neural networks), and $\bigoplus$ is a permutation-invariant aggregation operator that can accept an arbitrary number of inputs (e.g., element-wise sum, mean, or max). In particular, $\phi$ and $\psi$ are referred to as update and message functions, respectively. Intuitively, in an MPNN computational block, graph nodes update their representations by aggregating the messages received from their neighbours.
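A minimal NumPy sketch of one such layer follows; the choice of sum aggregation and of single-matrix message and update functions is an assumption of the example, not a requirement of the general framework:

```python
import numpy as np

def mpnn_layer(X, edges, W_msg, W_upd):
    """One message-passing step: sum incoming messages, then update each node.

    X      : (n, d) node feature matrix
    edges  : list of directed pairs (u, v), meaning v receives a message from u
    W_msg  : (d, d) weights of the message function psi
    W_upd  : (2d, d) weights of the update function phi
    """
    n, d = X.shape
    messages = np.zeros((n, d))
    for u, v in edges:                         # psi(x_u) sent along edge (u, v)
        messages[v] += X[u] @ W_msg            # sum is a permutation-invariant aggregator
    combined = np.concatenate([X, messages], axis=1)
    return np.maximum(combined @ W_upd, 0.0)   # phi: linear map followed by ReLU

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a directed 4-cycle
H = mpnn_layer(X, edges, rng.normal(size=(8, 8)), rng.normal(size=(16, 8)))
print(H.shape)                                 # (4, 8): updated node representations
```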
The outputs of one or more MPNN layers are node representations $\mathbf{h}_u$ for each node $u \in V$ in the graph. Node representations can be employed for any downstream task, such as node/graph classification or edge prediction.
Graph nodes in an MPNN update their representation by aggregating information from their immediate neighbours. As such, stacking $n$ MPNN layers means that one node will be able to communicate with nodes that are at most $n$ "hops" away. In principle, to ensure that every node receives information from every other node, one would need to stack a number of MPNN layers equal to the graph diameter. However, stacking many MPNN layers may cause issues such as oversmoothing[36] and oversquashing.[37] Oversmoothing refers to the issue of node representations becoming indistinguishable. Oversquashing refers to the bottleneck that is created by squeezing long-range dependencies into fixed-size representations. Countermeasures such as skip connections[10][38] (as in residual neural networks), gated update rules[39] and jumping knowledge[40] can mitigate oversmoothing. Modifying the final layer to be a fully-adjacent layer, i.e., by considering the graph as a complete graph, can mitigate oversquashing in problems where long-range dependencies are required.[37]
Other "flavours" of MPNN have been developed in the literature,[12] such as graph convolutional networks[9] and graph attention networks,[11] whose definitions can be expressed in terms of the MPNN formalism.
The graph convolutional network (GCN) was first introduced by Thomas Kipf and Max Welling in 2017.[9]
A GCN layer defines a first-order approximation of a localized spectral filter on graphs. GCNs can be understood as a generalization of convolutional neural networks to graph-structured data.
The formal expression of a GCN layer reads as follows: $\mathbf{H} = \sigma\left({\tilde{\mathbf{D}}}^{-\frac{1}{2}}\, {\tilde{\mathbf{A}}}\, {\tilde{\mathbf{D}}}^{-\frac{1}{2}}\, \mathbf{X}\, \mathbf{\Theta}\right),$
where $\mathbf{H}$ is the matrix of node representations $\mathbf{h}_u$, $\mathbf{X}$ is the matrix of node features $\mathbf{x}_u$, $\sigma(\cdot)$ is an activation function (e.g., ReLU), $\tilde{\mathbf{A}}$ is the graph adjacency matrix with the addition of self-loops, $\tilde{\mathbf{D}}$ is the graph degree matrix with the addition of self-loops, and $\mathbf{\Theta}$ is a matrix of trainable parameters.
In particular, let $\mathbf{A}$ be the graph adjacency matrix: then, one can define $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ and $\tilde{D}_{ii} = \sum_{j \in V} \tilde{A}_{ij}$, where $\mathbf{I}$ denotes the identity matrix. This normalization ensures that the eigenvalues of $\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}$ are bounded in the range $[0, 1]$, avoiding numerical instabilities and exploding/vanishing gradients.
A limitation of GCNs is that they do not allow multidimensional edge features $\mathbf{e}_{uv}$.[9] It is however possible to associate scalar weights $w_{uv}$ to each edge by imposing $A_{uv} = w_{uv}$, i.e., by setting each nonzero entry in the adjacency matrix equal to the weight of the corresponding edge.
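Putting these definitions together, a single GCN layer can be written in a few lines of NumPy; the sketch below uses random weights and a ReLU activation and is illustrative rather than training-ready:

```python
import numpy as np

def gcn_layer(A, X, Theta):
    """H = ReLU(D^-1/2 (A + I) D^-1/2 X Theta), with self-loops added to A."""
    A_tilde = A + np.eye(A.shape[0])                    # adjacency with self-loops
    d = A_tilde.sum(axis=1)                             # degrees of A_tilde (always >= 1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt           # symmetric normalization
    return np.maximum(A_hat @ X @ Theta, 0.0)           # sigma = ReLU

# Tiny undirected graph: a path 0-1-2-3, with 5-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
H = gcn_layer(A, rng.normal(size=(4, 5)), rng.normal(size=(5, 3)))
print(H.shape)  # (4, 3): one 3-dimensional representation per node
```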
The graph attention network (GAT) was introduced by Petar Veličković et al. in 2018.[11]
A graph attention network is a combination of a GNN and an attention layer.
The implementation of an attention layer in graph neural networks helps the model focus on the important information in the data instead of weighting all of it equally.
A multi-head GAT layer can be expressed as follows: $\mathbf{h}_u = \overset{K}{\underset{k=1}{\Big\Vert}} \sigma\left(\sum_{v \in N_u} \alpha_{uv}^{k} \mathbf{W}^{k} \mathbf{x}_v\right),$
where $K$ is the number of attention heads, $\Big\Vert$ denotes vector concatenation, $\sigma(\cdot)$ is an activation function (e.g., ReLU), $\alpha_{ij}$ are attention coefficients, and $W^{k}$ is a matrix of trainable parameters for the $k$-th attention head.
For the final GAT layer, the outputs from each attention head are averaged before the application of the activation function. Formally, the final GAT layer can be written as:
Attention in machine learning is a technique that mimics cognitive attention. In the context of learning on graphs, the attention coefficient $\alpha_{uv}$ measures how important node $u \in V$ is to node $v \in V$.
Normalized attention coefficients are computed as follows:
where $\mathbf{a}$ is a vector of learnable weights, $\cdot^{T}$ indicates transposition, $\mathbf{e}_{uv}$ are the edge features (if present), and $\text{LeakyReLU}$ is a modified ReLU activation function. Attention coefficients are normalized to make them easily comparable across different nodes.[11]
A GCN can be seen as a special case of a GAT where attention coefficients are not learnable, but fixed and equal to the edge weights $w_{uv}$.
The gated graph sequence neural network (GGS-NN) was introduced by Yujia Li et al. in 2015.[39] The GGS-NN extends the GNN formulation by Scarselli et al.[2] to output sequences. The message passing framework is implemented as an update rule to a gated recurrent unit (GRU) cell.
A GGS-NN can be expressed as follows:
where $\Vert$ denotes vector concatenation, $\mathbf{0}$ is a vector of zeros, $\mathbf{\Theta}$ is a matrix of learnable parameters, $\text{GRU}$ is a GRU cell, and $l$ denotes the sequence index. In a GGS-NN, the node representations are regarded as the hidden states of a GRU cell. The initial node features $\mathbf{x}_u^{(0)}$ are zero-padded up to the hidden state dimension of the GRU cell. The same GRU cell is used for updating representations for each node.
Local pooling layers coarsen the graph via downsampling. We present here several learnable local pooling strategies that have been proposed.[31] For each case, the input is the initial graph, represented by a matrix $\mathbf{X}$ of node features and the graph adjacency matrix $\mathbf{A}$. The output is the new matrix $\mathbf{X}'$ of node features, and the new graph adjacency matrix $\mathbf{A}'$.
We first set
$\mathbf{y} = \dfrac{\mathbf{X}\mathbf{p}}{\Vert \mathbf{p} \Vert},$
where $\mathbf{p}$ is a learnable projection vector. The projection vector $\mathbf{p}$ computes a scalar projection value for each graph node.
The top-k pooling layer[29] can then be formalised as follows: $\mathbf{X}' = (\mathbf{X} \odot \operatorname{sigmoid}(\mathbf{y}))_{\mathbf{i}}, \qquad \mathbf{A}' = \mathbf{A}_{\mathbf{i},\mathbf{i}},$
where $\mathbf{i} = \operatorname{top}_k(\mathbf{y})$ is the subset of nodes with the top-k highest projection scores, $\odot$ denotes element-wise matrix multiplication, and $\operatorname{sigmoid}(\cdot)$ is the sigmoid function. In other words, the nodes with the top-k highest projection scores are retained in the new adjacency matrix $\mathbf{A}'$. The $\operatorname{sigmoid}(\cdot)$ operation makes the projection vector $\mathbf{p}$ trainable by backpropagation, which otherwise would produce discrete outputs.[29]
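The projection-and-selection step just described translates directly into NumPy, as in the following illustrative sketch (sigmoid gating of the retained node features, per the description above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def top_k_pooling(X, A, p, k):
    """Keep the k nodes with the highest projection scores y = X p / ||p||."""
    y = X @ p / np.linalg.norm(p)              # scalar score per node
    idx = np.argsort(y)[-k:]                   # indices of the top-k scores
    X_new = X[idx] * sigmoid(y[idx])[:, None]  # gate kept features by their scores
    A_new = A[np.ix_(idx, idx)]                # adjacency restricted to kept nodes
    return X_new, A_new

rng = np.random.default_rng(0)
X, A, p = rng.normal(size=(6, 4)), (rng.random((6, 6)) < 0.4).astype(float), rng.normal(size=4)
X_new, A_new = top_k_pooling(X, A, p, k=3)
print(X_new.shape, A_new.shape)                # (3, 4) (3, 3): a coarsened graph
```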
We first set $\mathbf{y} = \operatorname{GNN}(\mathbf{X}, \mathbf{A}),$
where $\operatorname{GNN}$ is a generic permutation-equivariant GNN layer (e.g., GCN, GAT, MPNN).
The Self-attention pooling layer[30]can then be formalised as follows:
where $\mathbf{i} = \operatorname{top}_k(\mathbf{y})$ is the subset of nodes with the top-k highest projection scores, and $\odot$ denotes element-wise matrix multiplication.
The self-attention pooling layer can be seen as an extension of the top-k pooling layer. Differently from top-k pooling, the self-attention scores computed in self-attention pooling account both for the graph features and the graph topology.
The homophily principle, i.e., that nodes with the same labels or similar attributes are more likely to be connected, has been commonly believed to be the main reason for the superiority of graph neural networks (GNNs) over traditional neural networks (NNs) on graph-structured data, especially on node-level tasks.[41] However, recent work has identified a non-trivial set of datasets where GNN performance compared to NN performance is not satisfactory.[42] Heterophily, i.e., low homophily, has been considered the main cause of this empirical observation.[43] People have begun to revisit and re-evaluate most existing graph models in the heterophily scenario across various kinds of graphs, e.g., heterogeneous graphs, temporal graphs and hypergraphs. Moreover, numerous graph-related applications are found to be closely related to the heterophily problem, e.g. graph fraud/anomaly detection, graph adversarial attacks and robustness, privacy, federated learning and point cloud segmentation, graph clustering, recommender systems, generative models, link prediction, graph classification and coloring, etc. In the past few years, considerable effort has been devoted to studying and addressing the heterophily issue in graph learning.[41][43][44]
Graph neural networks are one of the main building blocks of AlphaFold, an artificial intelligence program developed by Google's DeepMind for solving the protein folding problem in biology. AlphaFold achieved first place in several CASP competitions.[45][46][40]
Social networks are a major application domain for GNNs due to their natural representation as social graphs. GNNs are used to develop recommender systems based on both social relations and item relations.[47][16]
GNNs are used as fundamental building blocks for several combinatorial optimization algorithms.[48] Examples include computing shortest paths or Eulerian circuits for a given graph,[39] deriving chip placements superior or competitive to handcrafted human solutions,[49] and improving expert-designed branching rules in branch and bound.[50]
When viewed as a graph, a network of computers can be analyzed with GNNs for anomaly detection. Anomalies within provenance graphs often correlate to malicious activity within the network. GNNs have been used to identify these anomalies on individual nodes[51] and within paths[52] to detect malicious processes, or on the edge level[53] to detect lateral movement.
Water distribution systems can be modelled as graphs, and are then a straightforward application of GNNs. This kind of algorithm has been applied to water demand forecasting,[54] interconnecting District Measuring Areas to improve the forecasting capacity. Another application of this algorithm in water distribution modelling is the development of metamodels.[55] | https://en.wikipedia.org/wiki/Graph_neural_network
A sanity check or sanity test is a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material's creator was thinking rationally, applying sanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. A rule-of-thumb or back-of-the-envelope calculation may be checked to perform the test. The advantage of performing an initial sanity test is that of speedily evaluating basic function.
In arithmetic, for example, when multiplying by 9, using the divisibility rule for 9 to verify that the sum of digits of the result is divisible by 9 is a sanity test: it will not catch every multiplication error, but is a quick and simple method to discover many possible errors.
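That rule of thumb is easy to express in code; a minimal Python sketch of the specific check (a product of 9 must have a digit sum divisible by 9) follows:

```python
def digit_sum_divisible_by_nine(n: int) -> bool:
    """Divisibility rule for 9: a multiple of 9 has a digit sum divisible by 9."""
    return sum(int(d) for d in str(abs(n))) % 9 == 0

# Sanity-check a claimed result of 9 * 347.
print(digit_sum_divisible_by_nine(3123))  # True: 3+1+2+3 = 9, so the claim is plausible
print(digit_sum_divisible_by_nine(3124))  # False: this result is certainly wrong
```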
In computer science, a sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing.
A sanity test can refer to various orders of magnitude and other simple rule-of-thumb devices applied to cross-check mathematical calculations. For example:
In software development, a sanity test (a form of software testing which offers "quick, broad, and shallow testing"[1]) evaluates the result of a subset of application functionality to determine whether it is possible and reasonable to proceed with further testing of the entire application.[2] Sanity tests may sometimes be used interchangeably with smoke tests[3] insofar as both terms denote tests which determine whether it is possible and reasonable to continue testing further. On the other hand, a distinction is sometimes made that a smoke test is a non-exhaustive test that ascertains whether the most crucial functions of a programme work before proceeding with further testing, whereas a sanity test refers to whether specific functionality such as a particular bug fix works as expected without testing the wider functionality of the software.[citation needed] In other words, a sanity test determines whether the intended result of a code change works correctly while a smoke test ensures that nothing else important was broken in the process. Sanity testing and smoke testing avoid wasting time and effort by quickly determining whether an application is too flawed to merit more rigorous QA testing, but needs more developer debugging.
Groups of sanity tests are often bundled together for automated unit testing of functions, libraries, or applications prior to merging development code into a testing or trunk version control branch,[4] for automated building,[5] or for continuous integration and continuous deployment.[6]
Another common usage of sanity test is to denote checks which are performed within programme code, usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more complicated the routine, the more important that its response be checked. The trivial case is checking to see whether the return value of a function indicated success or failure, and to therefore cease further processing upon failure. This return value is actually often itself the result of a sanity check. For example, if the function attempted to open, write to, and close a file, a sanity check may be used to ensure that it did not fail on any of these actions, which is a sanity check often ignored by programmers.[7]
These kinds of sanity checks may be used during development for debugging purposes and also to aid in troubleshooting software runtime errors. For example, in a bank account management application, a sanity check will fail if a withdrawal requests more money than the total account balance rather than allowing the account to go negative (which wouldn't be sane). Another sanity test might be that deposits or purchases correspond to patterns established by historical data; for example, large purchase transactions or ATM withdrawals in foreign locations never before visited by the cardholder may be flagged for confirmation.[citation needed]
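The bank-account example translates directly into a guard clause; the function below is a hypothetical illustration with invented names and amounts:

```python
def withdraw(balance: float, amount: float) -> float:
    """Apply a withdrawal after sanity-checking the request."""
    if amount <= 0:
        raise ValueError(f"withdrawal amount must be positive, got {amount}")
    if amount > balance:
        # Sanity check: the account is never allowed to go negative.
        raise ValueError(f"requested {amount}, but only {balance} is available")
    return balance - amount

print(withdraw(100.0, 30.0))   # 70.0
# withdraw(100.0, 250.0) raises ValueError instead of producing a negative balance
```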
Sanity checks are also performed upon installation of stable, production software code into a new computing environment to ensure that all dependencies are met, such as a compatible operating system and link libraries. When a computing environment has passed all the sanity checks, it's known as a sane environment for the installation programme to proceed with reasonable expectation of success.
A "Hello, World!" program is often used similarly as a sanity test for a development environment. Rather than a complicated script running a set of unit tests, if this simple programme fails to compile or execute, it proves that the supporting environment likely has a configuration problem that will prevent any code from compiling or executing. But if "Hello world" executes, then any problems experienced with other programmes likely can be attributed to errors in that application's code rather than the environment.
The Association for Computing Machinery,[8] and software projects such as Android,[9] MediaWiki[10] and Twitter,[11] discourage use of the phrase sanity check in favour of other terms such as confidence test, coherence check, or simply test, as part of a wider attempt to avoid ableist language and increase inclusivity. | https://en.wikipedia.org/wiki/Sanity_testing
The New York Central Rail Road introduced a container system in 1922.[1]
Details include:
| https://en.wikipedia.org/wiki/NYC_container
An ambigram is a calligraphic composition of glyphs (letters, numbers, symbols or other shapes) that can yield different meanings depending on the orientation of observation.[2][3] Most ambigrams are visual palindromes that rely on some kind of symmetry, and they can often be interpreted as visual puns.[4] The term was coined by Douglas Hofstadter in 1983–1984.[2][5]
Most often, ambigrams appear as visually symmetrical words. When flipped, they remain unchanged, or they mutate to reveal another meaning. "Half-turn" ambigrams undergo a point reflection (180-degree rotational symmetry) and can be read upside down (for example, the word "swims"), while mirror ambigrams have axial symmetry and can be read through a reflective surface like a mirror. Many other types of ambigrams exist.[6]
Ambigrams can be constructed in various languages and alphabets, and the notion often extends to numbers and other symbols. It is a recent interdisciplinary concept, combining art, literature, mathematics, cognition, and optical illusions. Drawing symmetrical words also constitutes a recreational activity for amateurs. Numerous ambigram logos are famous, and ambigram tattoos have become increasingly popular. There are methods to design an ambigram, a field in which some artists have become specialists.
The wordambigramwas coined in 1983 byDouglas Hofstadter, an American scholar ofcognitive sciencebest known as thePulitzer Prize-winning author of the bookGödel, Escher, Bach.[7][4][5]It is aneologismcomposed of the Latin prefixambi-("both") and the Greek suffix-gram("drawing, writing").[2]
Hofstadter describes ambigrams as "calligraphic designs that manage to squeeze in two different readings."[8]"The essence is imbuing a singlewritten formwithambiguity".[9][10]
Anambigramis avisual punof a special kind: acalligraphic designhaving two or more (clear)interpretationsaswritten words. One can voluntarily jump back and forth between the rivalreadingsusually by shifting one's physicalpoint of view(moving the design in some way) but sometimes by simply altering one'sperceptualbias towards a design (clicking an internal mental switch, so to speak). Sometimes the readings will say identical things, sometimes they will say different things.[11][4]
Hofstadter attributes the origin of the wordambigramto conversations among a small group of friends in 1983.[12]
Prior to Hofstadter'sterminology, other names were used to refer to ambigrams. Among them, the expressions "verticalpalindromes"[13]byDmitri Borgmann[14](1965) andGeorges Perec,[15][16]"designatures" (1979),[17]"inversions" (1980) byScott Kim,[18][19]or simply "upside-down words" byJohn Langdonand Robert Petrick.[20]
Ambigramwas added to theOxford English Dictionaryin March 2011,[6][21]and to theMerriam-Websterdictionary in September 2020.[2][22]Scrabbleincluded the word in its database in November 2022.[23][3][24]
Many ambigrams can be described asgraphicpalindromes.
The firstSator squarepalindrome was found in the ruins ofPompeii, meaning it was created before theEruption of Mount Vesuvius in 79 AD.
A sator square using themirror writingfor the representation of the letters S and N was carved in a stone wall inOppède(France) between theRoman Empireand theMiddle Ages,[26]thus producing a work made up of 25 letters and 8 differentcharacters, 3 naturally symmetrical (A, T, O), 3 others decipherable from left to right (R, P, E), and 2 others from right to left (S, N). This engraving is therefore readable in four directions.[27]
Although the term is recent, the existence ofmirrorambigrams has been attested since at least thefirst millennium. They are generallypalindromesstylizedto be visuallysymmetrical.
Inancient Greek, the phrase"ΝΙΨΟΝ ΑΝΟΜΗΜΑΤΑ ΜΗ ΜΟΝΑΝ ΟΨΙΝ"(wash the sins, not only the face), is apalindromefound in several locations, including the site of the churchHagia Sophiain Turkey.[28][29]It is sometimes turned into a mirror ambigram when written in capital letters with the removal ofspaces, and the stylization of the letter Ν (Ν).
Aboustrophedonis a type ofbi-directional text, mostly seen in ancient manuscripts and other inscriptions. Every other line of writing is flipped or reversed, with reversed letters. Rather than going left-to-right as in modern European languages, or right-to-left as inArabicandHebrew, alternate lines in boustrophedon must be read in opposite directions. Also, the individual characters are reversed, or mirrored. This two-way writing system reveals that modern ambigrams can have quite ancient origins, with an intuitive component in some minds.
Mirror writing inIslamic calligraphyflourished during the early modern period, but its origins may stretch as far back as pre-Islamic mirror-image rock inscriptions in theHejaz.[30]
The earliest known non-naturalrotationalambigram dates to 1893 by artistPeter Newell.[31]Although better known for his children's books and illustrations forMark TwainandLewis Carroll, he published two books ofreversible illustrations, in which the picture turns into a different image entirely when flippedupside down. The last page in his bookTopsys & Turvyscontains the phraseThe end, which, when inverted, readsPuzzle. InTopsys & Turvys Number 2(1902), Newell ended with a variation on the ambigram in whichThe endchanges intoPuzzle 2.[32]
In March 1904 the Dutch-AmericancomicartistGustave Verbeekused ambigrams in three consecutive strips ofThe UpsideDowns of old man Muffaroo and little lady Lovekins.[33]His comics wereambiguous images, made in such a way that one could read the six-panel comic, flip the book and keep reading.
From June to September 1908, the British monthlyThe Strand Magazinepublished a series of ambigrams by different people in its "Curiosities" column.[34]Of particular interest is the fact that all four of the people submitting ambigrams believed them to be a rare property of particular words. Mitchell T. Lavin, whose "chump" was published in June, wrote, "I think it is in the only word in the English language which has this peculiarity," while Clarence Williams wrote, about his "Bet" ambigram, "Possibly B is the only letter of the alphabet that will produce such an interesting anomaly."[34][35]
In the Latin alphabet, many letters are symmetrical glyphs. The capital letters B, C, D, E, H, I, K, O, and X have a horizontal symmetry axis. This means that all words that can be written using only these letters are natural lake-reflection ambigrams; examples include BOOK, CHOICE, or DECIDE.
The lowercase letters o, s, x, and z are rotationally symmetric, while pairs such as b/q, d/p, n/u, and in some typefaces a/e, h/y and m/w, are rotations of each other. Among the lowercase letters, "l" is unique since its symmetry is broken if it is close to a reference character which establishes a clear x-height. When rotated around the middle of the x-height, l/ȷ or lo/oȷ does not appear the same, but it does when rotated around its center like the uppercase I. Thus, the words "sos", "pod", "suns", "yeah", "swims", "passed", or "dollop" form natural rotational ambigrams.
More generally, a "natural ambigram" is a word that possesses one or more symmetries when written in its natural state, requiring no typographic styling. The words "bud", "bid", or "mom" form natural mirror ambigrams when reflected over a vertical axis, as does "ليبيا", the name of the country Libya in Arabic. The words "HIM", "TOY", "TOOTH" or "MAXIMUM", in all capitals, form natural mirror ambigrams when their letters are stacked vertically and reflected over a vertical axis. The uppercase word "OHIO" can be flipped a quarter turn to produce a 90° rotational ambigram when written in serif style (with large "feet" above and below the "I").
Like all strobogrammatic numbers, 69 is a natural rotational ambigram.
Patterns in natureare regularities found in the natural world.[36]Similarly,patternsin ambigrams are regularities found ingraphemes.
As a consequence to this "natural" property, someshapesappear more or less appropriate to handle for thedesigner. Ambigram candidates can become "almostnatural", when all the letters except maybe one or two are symmetrically cooperative, for example the word "awesome" possesses 5 compatible letters (the central s that flips around itself, and the couples a/e and w/m).
A symmetrical ambigram can be called "homogram" (contraction of "homo-ambigram") when it remains unchanged after reflection, and "heterogram" when it transforms.[11][37]In the most common type of ambigram, the twointerpretationsarise when the image is rotated 180 degrees with respect to each other (in other words, a second reading is obtained from the first by simply rotating the sheet).
Douglas Hofstadtercoined the word "homogram" to define an ambigram with identical letters.[11][37]In this case, the first half of the word turns into the last half.[38]
A symmetrical ambigram may be called a "heterogram"[11][37](contraction of "hetero-ambigram") when it becomes a different word after rotation. Visually, a hetero-ambigram is symmetrical only when both versions of the pairing are shown together. Theaestheticappearance is more difficult to design when a changing ambigram is intended to be shown in one way only, becausesymmetrygenerally enhances the visual appearance of artwork. Technically, there are two times more combinations of letters involved in ahetero-ambigramthan in ahomo-ambigram. For example, the 180° rotational ambigram "yeah" contains only two pairs of letters: y/h and e/a, whereas the heterogram "yeah / good" contains four : y/d, e/o, a/o, and h/g.
There is no limitation to the number of words that can potentially be paired up as hetero-ambigrams, and full ambigramsentenceshave even been published.[15][38]
Ambigrams are exercises ingraphic designthat play withoptical illusions,symmetryandvisual perception.
Some ambigrams feature a relationship between theirformand theircontent. Ambigrams usually fall into one of several categories.
"Half-turn" ambigrams orpoint reflectionambigrams, commonly called "upside-down words", are 180°rotational symmetricalcalligraphies.[7]They can be read right side up or upside down, or both.
Rotation ambigrams are the most common type of ambigrams for good reason. When a word isturned upside down, the top halves of the letters turn into the bottom halves.
And because our eyes pay attention primarily to the top halves of letters when we read, that means that you can essentially chop off the top half of a word, turn it upside down, and glue it to itself to make an ambigram. [...][41]
Amirrororreflectionambigram is a design that can be read when reflected in amirrorvertically, horizontally, or at 45 degrees,[42]giving either the same word or another word or phrase.
When thereflectingsurface is vertical (like amirrorfor example), the calligraphic design is avertical axis mirror ambigram.
The "museum" ambigram is almost natural with mirror symmetry, because the first two letters are easily exchanged with the last two, and the lowercase letter e can be transformed into s by a fairly obvious typographical acrobatics.[43]
Vertical axis mirror ambigrams find clever applications inmirror writing(orspecular writing), that is formed by writing in the direction that is the reverse of the natural way for a given language, such that the result is themirror imageof normal writing: it appears normal when it is reflected in amirror. For example, the word "ambulance" could be read frontward and backward in a vertical axis reflective ambigram. Following this idea, the French artist Patrice Hamel created a mirror ambigram saying "entrée" (entrance, in French) one way, and "sortie" (exit) the other way, displayed in the giant glass façade of theGare du NordinParis, so that the travelers coming in readentrance, and those leaving readway out.[44]
When the reflecting surface is horizontal (like amirroring lakefor example), the calligraphic design is ahorizontal axis mirror ambigram.
The bookAmbigrams Revealedfeatures several creations of this type, like the word "Failure" mirroring in the water of a pond to give "Success", or "Love" changing into "Lust".[45]
In afigure / groundambigram, letters fit together so thenegative spacearound and between one word spells another word.[42]
InGestalt psychology,figure–ground perceptionis known as identifying afigurefrom the background. For example, black words on a printed paper are seen as the "figure", and the white sheet as the "background".
In ambigrams, thetypographic spaceof the background is used asnegative spaceto form new letters and new words. For example, inside acapitalH, one can easily insert a lowercasei.
The oil paintingYou & Me(US) byJohn Langdon(1996) belongs to this category. The word "me" fills the space between the letters of "you".[46]
WithEscher-liketessellationsassociated to wordpatterns, ambigrams can be oriented in three, four, and up to six directions via rotational symmetries of 120°, 90° and 60° respectively,[47]such as those created by French artist Alain Nicolas.[48]Some words can also transform in thenegative space, but the multiplication of constraints often has the effect of reducing either the readability or thecomplexityof thedesignedwords.
Ambigram tessellations are wordpuzzles, in whichgeometrysets the rules.[48]
Media related toAmbigram tessellationsat Wikimedia Commons.
A chain ambigram is a design where a word (or sometimes words) are interlinked, forming a repeating chain.[42]Letters are usually overlapped: a word will start partway through another word.
Sometimes chain ambigrams are presented in the form of a circle.
For example, the chain "...sunsunsunsun..." can flip upside down, but not the word "sun" alone, written horizontally.
A chain ambigram can be constituted of one to several elements. A single element ambigram chain is like asnake eating its own tail. A two-elements ambigram chain is like a snake eating the neighbor's tail with the neighbor eating the first snake's, and so on.
Scott Kim's "Infinity" works, and that ofJohn Langdon"Chain reaction", are alsoself-referential, since the first is infinite in the literal sense of the word, and the second, both reversible at 180° and interfering around the letter O, evokes a chain reaction.[49]
Aspinonym[de]is a type of ambigram in which a word is written using the sameglyphrepeated in differentorientations.[38]WEB is an example of a word that can easily be made into a spinonym.
Perceptualshiftambigrams, also called "oscillation" ambigrams, are designs with nosymmetrybut can be read astwo different wordsdepending on how the curves of the letters are interpreted.[42]These ambigrams work on the principle ofrabbit-duck-style ambiguous images.
For exampleDouglas Hofstadterexpresses the dual nature of light as revealed by physics with his perceptual shift ambigramWave / Particle.
"Quarter-turn" ambigrams or 90°rotationalambigrams turnclockwiseorcounterclockwiseto express different meanings.[4]For example, the letter U can turn into a C and reciprocally, or the letters M or W into an E.[38]
A totem ambigram is an ambigram whose letters are stacked like atotem, most often offering a vertical axismirror symmetry.
This type helps when several letters fit together, but hardly the whole word.
For example, in theMariamonogram[hu], the letters M, A and I are individually symmetrical, and the pairing R/A is almost naturally mirroring.
When adequately stacked, the 5 letters produce a nice totem ambigram, whereas the whole name "Maria" would not offer the same cooperativeness.
The ambigrammist artistJohn Langdondesigned several totemic assemblages, such as the word "METRO" composed of the symmetrical letter M, then section ETR, and below O; or the sentence "THANK YOU", vertical assembly of T, H, A, then of the symmetric NK couple, then finally Y, O, U.[50]
In mathematics, afractalis a geometricalshapethat exhibitsinvarianceunder scaling.
A piece of the whole, if enlarged, has the same geometrical features as the entire object itself.
A fractal ambigram is a sort of space-filling ambigrams where thetiledword branches from itself and then shrinks in aself-similarmanner, forming afractal.[51]In general, only a few letters are constrained in a fractal ambigram. The other letters don't need to look like any other, and thus can be shaped freely.
A3Dambigram is a design where an object is presented that will appear to read several letters or words when viewed from different angles.
Such designs can be generated usingconstructive solid geometry, a technique used insolid modeling, and then physically constructed with therapid prototypingmethod.
3-dimensional ambigramsculpturescan also be achieved inplastic arts. They arevolumeambigrams.
The original 1979 edition ofHofstadter'sGödel, Escher, Bachfeatured two 3-D ambigrams on the cover.[52]
Complex ambigrams are ambigrams involving more than one symmetry, or satisfying the criteria for several types. For example, a complex ambigram can be both rotational and mirror with a 4-folddihedralsymmetry. Or a spinonym that reads upside down is also a complex ambigram.
Ambigrams exist in many languages. With theLatin alphabet, they generally mixlowercaseanduppercaseletters. But words can also be symmetrical in other alphabets, likeArabic,Bengali,Cyrillic,Greek, and even inChinese charactersand Japanesekanji.
InKorean,곰(bear) and문(door),공(ball) and운(luck), or물(water) and롬(ROM) form a natural rotational ambigram. Some syllables like응(yes),표(ticket/signage) or를(object particle), and words like "허리피라우" (straighten your back) also make full ambigrams.
The han character meaning "hundred" is written 百, which makes a natural 90° rotational ambigram: when the glyph makes a quarter turn counterclockwise, one sees "100".[53]
Media related toAmbigrams by languageat Wikimedia Commons.
An ambigram of numbers, ornumeral ambigram, containsnumerical digits, like1,2,3...[38]
In mathematics, a palindromic number (also known as a numeral palindrome) is a number that remains the same when its digits are reversed through a vertical axis (but not necessarily visually). The palindromic numbers containing only 1, 8, and 0 constitute natural numeric ambigrams (visually symmetrical through a mirror). Also, because the glyph 2 is graphically the mirror image of 5, numbers like 205 or 85128 are natural numeral mirror ambigrams. Though not palindromic in the mathematical sense, they read frontward and backward like real ambigrams.
A strobogrammatic number is a number whose numeral is rotationally symmetric, so that it appears the same when rotated 180 degrees. The numeral looks the same right-side up and upside down (e.g., 69, 96, 1001).[54][55][56]
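Checking this property programmatically is straightforward; the helper below maps each digit to its 180-degree counterpart and rejects digits that have none (an illustrative sketch):

```python
ROTATED = {"0": "0", "1": "1", "8": "8", "6": "9", "9": "6"}

def is_strobogrammatic(numeral: str) -> bool:
    """True if the numeral reads the same after a 180-degree rotation."""
    if any(d not in ROTATED for d in numeral):
        return False
    return "".join(ROTATED[d] for d in reversed(numeral)) == numeral

print([n for n in ("69", "96", "1001", "1961", "1881", "1234") if is_strobogrammatic(n)])
# ['69', '96', '1001', '1961', '1881']
```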
Somedatesare natural numeral ambigrams.[57]In March 1961, artistNorman Mingocreated an upside-down cover forMad magazinefeaturing an ambigram of the current year. The title says "No matter how you look at it... it's gonna be aMadyear. 1961, the first upside-down year since 1881."[58]Tuesday, 22 February 2022, was a palindrome and ambigram date called "Twosday" because it contained reversible 2 (two).[59][60][61]
Ambigrams of numbers receive most attention in the realm ofrecreational mathematics.[4][62]
Ambigrams with numbers sometimes combine letters and numerical digits. Because the number 5 is approximately shaped like the letter S, the number 6 like a lowercase b, the number 9 like the letter g, it is possible to play on these similarities to design ambigrams. A good example is theSochi 2014 (Olympic games)logo where the fourglyphscontained in 2014 are exact symmetries of the four letters S, o, i and h, individually.[63]
Asalphabet lettersareglyphsused in thewriting systemsto express thelanguagesvisually, othersymbolsare also used in the world to code other fields, like theprosignsin theMorse codeor themusical notesinmusic.
Similarly to the ambigrams of letters, the ambigrams with other symbols are generally visually symmetrical, eitherpoint reflectiveorreflective through an axis.
The internationalMorse codedistress signalSOS▄ ▄ ▄ ▄▄▄ ▄▄▄ ▄▄▄ ▄ ▄ ▄is a natural ambigram constituted of dots and dashes. It flips upside down or through a mirror.
In morse code, the letter P coded▄ ▄▄▄ ▄▄▄ ▄and the letter R coded▄ ▄▄▄ ▄are individually symmetrical, like many other letters and numbers. Also, the letter G coded▄▄▄ ▄▄▄ ▄is the exact reverse of the letter W coded▄ ▄▄▄ ▄▄▄. Thus, the combination▄▄▄ ▄▄▄ ▄/▄ ▄▄▄ ▄▄▄coding the pairing G/W constitutes a natural ambigram. Consequently, meaningful natural ambigrams written in morse code certainly exist, like for example the words "gnaw"▄▄▄ ▄▄▄ ▄▄▄▄ ▄▄ ▄▄▄▄ ▄▄▄ ▄▄▄, "Dou"▄▄▄ ▄ ▄▄▄▄ ▄▄▄ ▄▄▄▄ ▄ ▄▄▄or "mom"▄▄▄ ▄▄▄▄▄▄ ▄▄▄ ▄▄▄▄▄▄ ▄▄▄.[7][64][65]
Inmusic, the interlude fromAlban Berg's operaLuluis apalindrome, thus thescoremade up ofmusical notesis almost symmetrical through a vertical axis.[66]
Inbiology, researchers study the ambigrammatic property ofnarnavirusesby using visual representations of the symmetrical sequences.[36][1][67]
Instead of simply writing them, ambigram lettering covers the art of drawing letters. In ambigram calligraphy, each letter acts as an illustration, created with attention to detail and playing a unique role within a composition. Lettering ambigrams do not translate into combinations of alphabet letters that can be reused like a typeface, since they are created with a specific word in mind.
Thecalligrapher,graffitiwriter andgraphic designerNiels Shoe Meulmancreated several rotational ambigrams like the number "fifty",[69]the names "Shoe / Patta",[70]and the opposition "Love / Fear".[71]
Thecoverof the 7th volume of thetypographybookTypismis an ambigram drawn byNikita Prokhorov.[72]
The AmericantypedesignerMark Simonsondesigned poetic andhumorousambigrams, such as the words "Revelation", "Typophile", and the symbiosis "Drink / Drunk".[73]The last one makes avisual punwhen printed on ashot glass, sold commercially.[74]
Since they are visually striking, and sometimes surprising, ambigram words find large application incorporate logosandwordmarks, setting the visualidentityof many organizations, trademarks and brands.[75]
In 1968[76]or 1969,Raymond Loewydesigned the rotationalNew Man[fr]ambigram logo.[77][78][79]
The mirror ambigramDeLorean Motor Companylogo, designed by Phil Gibbon, was first used in 1975.[80][81][82]
Robert Petrick designed the invertibleAngellogo[83]in 1976.
The logoSun(Microsystems) designed by professorVaughan Pratt[84]in 1982 fulfills the criteria of several types: chain ambigram, spinonym, 90° and 180° rotational symmetries.
The Swedish pop groupABBAowns a mirror ambigram logo stylizedAᗺBAwith a reversed B, designed byRune Söderqvist[sv][85]in 1976.[86]
TheVenturalogo of the Visitors & Convention Bureau's board, in California, costUS$25,000 and was created in 2014 by the DuPuis group. It uses a 180° rotational symmetry.[87][88]
Other famous ambigram logos include:
the insurance companyAviva;[89]theacronymCRD(Capital Regional District) in the Canadian province of British Columbia;[90]the American multinational corporationDXCTechnology;
the two-sided marketplace for residential cleaningHandy;[91][92]the brand name of French premium high-speed train servicesInOui;[93]the French company specializing in ticketing and passenger information systemsIXXI;
the century-old brandMaoamof the confectionery manufacturer Haribo;[94]the American industrial rock bandNIͶ;
the Japanese food companyNissin;
the biotechnology companyNoxxonPharma, founded in 1997;
the online travel agencyOpodoin 2001;[95]the brand of food productsOXO[96]born in 1899;
the video gamePod;
the American developer and manufacturer of audio productsSonos;[97]the American professional basketball team PhoenixSuns;[98][99]the German manufacturer of adhesive productsUHU;
the quadruple symmetrical logoUAfrom the American clothing brandUnder Armour;
the Canadian corporation mandated to operate intercity passenger rail serviceVIAin 1978;[100]the American international broadcasterVOA, born in 1942;
and the Malaysian mobile virtual network operatorXOX. The student edition of theTesco Clubcardused 180° rotational symmetry.[101]
Because they arevisual puns,[4]ambigrams generally attract attention, and thus can be used invisual communicationto broadcast amarketingorpoliticalmessage.
In France, a mirror ambigram "Penelope / benevole", legible through a horizontal axis, became a meme on the web after its diffusion on Wikimedia Commons.[102] Penelope Fillon, wife of French politician and former Prime Minister of France François Fillon, is suspected of having received wages for a fictitious job. Ironically, her name seen in the mirror becomes bénévole ("volunteer" in French), suggesting dedication to a free service. Shared tens of thousands of times on social networks, this humorous ambigram created a buzz in several French,[103] Belgian[104][105] and Swiss[102] media.
Ambigrams are regularly used by communication agencies such as Publicis to engage the reader or the consumer through two-way messages.[106] Thus, in 2021, male first names transformed into female first names were included in a Swiss advertising campaign aimed at raising awareness about gender equality. An intriguing catchphrase printed upside down invites the reader to rotate the magazine, in which the first names "Michael" or "Peter" are transformed into "Nathalie" or "Alice".[107][108]
In 2015 iSmart's logo on one of its travelchargerswentviralbecause the brand's name turned out to be a natural ambigram that read "+Jews!" upside down. The company noted that "...we learned a powerful lesson of what not to do when creating alogo."[109]
Cinema posters sometimes seduce observers with ambigram titles, such as that of Tenet by Christopher Nolan (by central symmetry)[27] or Anna by Luc Besson (around a vertical axis).[110][111]
The American artist and writerPeter Newellpublished arotationalambigram in 1893 saying "Puzzle / The end" in the book containingreversible illustrationsTopsys & Turvys.[31]
In March 1904 the Dutch-American comic artist Gustave Verbeek used ambigrams in three consecutive strips of The UpsideDowns of old man Muffaroo and little lady Lovekins.[33] His comics were ambiguous images, made in such a way that one could read the six-panel comic, flip the book and keep reading. In The Wonderful Cure of the Waterfall (13 March 1904) an Indian medicine man says 'Big waters would make her very sound', while when flipped the medicine man turns into an Indian woman who says 'punos dery, ery apew poom, serlem big'. This is explained as 'poor deary' followed by several foreign words meaning that she would call the 'Serlem Big'. The next comic, At the House of the Writing Pig (20 March 1904), features two ambigram word balloons. The first shows an angry pig trying to make the main protagonist leave with a sign that says 'big boy go away, dis am home of mr h hog', which upside down reads 'Boy yew go away. We sip. Home of hog pig.' The protagonist asks the pig if it wants a big bun, upon which it replies 'Why big buns? Am mad u!', which flips into 'In pew we sang big hym'. Finally, in The Bad Snake and the Good Wizard (27 March 1904) there are two more ambigrams. The first turns 'How do you do' into the name of a wizard called 'Opnohop Moy'; the second features a squirrel telling the protagonist 'Yes further on', only to inform it that there are 'No serpents here' on his way back. In a 2012 Swedish remake of the book,[112] the artist Marcus Ivarsson redraws The Bad Snake and the Good Wizard in his own style. He removes the squirrel but keeps the other ambigram. 'How do you do' is replaced by 'Nejnej' (Swedish for no) and the wizard is now called 'Laulau'.
Oubapo, workshop of potential comic book art, is a comics movement which believes in the use of formal constraints to push the boundaries of the medium. Étienne Lécroart, cartoonist, is a founder and key member of the Oubapo association, and has composed cartoons that can be read horizontally, vertically or diagonally, and vice versa, sometimes including appropriate ambigrams.[113]
The British painter, designer and illustrator Rex Whistler published in 1946 a rotational ambigram "¡OHO!" for the cover of a book gathering reversible drawings.[114]
The artistJohn Langdon, specialist of ambigrams,[75]designed many colorpaintingsfeaturing ambigrams of all kinds, figure-ground, rotational, mirror or totem. Among other influences, he particularly admiresM. C. Escher'sdrawings.[115]
The Canadian artist Kelly Klages painted severalacrylicsoncanvaswith ambigram words and sentences referring to famous writers' novels written byWilliam ShakespeareorAgatha Christie, such asThird Girl,The Tempest,After the Funeral,The Hollow, Reformation,Sherlock Holmes, andElephants Can Remember.[116]
The GermanconceptualartistMia Florentine Weissbuilt a sculptural ambigramLove Hate[de],[117]that has traveled Europe as a symbol of peace and change of perspective.[118]Depending on which side the viewer looks at it, the sculpture says "Love" or "Hate". A similar concept was installed in front of theReichstag buildinginBerlinwith the words "Now / Won". Both sculptures are mirror type ambigrams, symmetrical around a vertical axis.[119]
The Swiss sculptor Markus Raetz made several three-dimensional ambigram works, featuring words generally with related meanings, such as YES-NO (2003),[120] ME-WE (2004, 2010),[121] OUI-NON (2000–2002) in French,[122][123] SI–NO (1996)[124] and TODO-NADA (1998) in Spanish.[125][126] These are anamorphic works, which change in appearance depending on the angle of view of the observer.
The OUI–NON ambigram is installed on the Place du Rhône, inGeneva,Switzerland, at the top of a metal pole. Physically, the letters have the appearance of iron twists. With the perspective, this work demonstrates that reality can beambiguous.[123]
Some ambigram sculptures by the French conjurerFrancis Tabary[fr]are reversible by a half-turn rotation, and can therefore be exhibited on a support in two different ways.[127][128]
One of the most dynamic sectors that harbors ambigrams istattooing. Because they possess two ways of reading, ambigram tattoos inked on the skin benefit from a "mind-blowing" effect. On the arm,sleeve tattoosflip upside-down, on the back or jointly on two wrists they are more striking with amirror symmetry. A large range ofscriptsandfontsis available. Experienced ambigram artists can create anoptical illusionwith a complexvisual design.[129]
In 2015, an ambigramtattoowentviralfollowing anadvertising campaigndeveloped by thePublicisgroup two years earlier. TheSamaritans of Singaporeorganization, active in suicide prevention, has a 180° reversible "SOS" ambigram logo,acronymof its name andhomonymof the famousSOSdistress signal.
In 2013, the center ordered advertisements to be placed in magazines to raise readers' awareness of depression among young people, and the communication agency noticed the symmetrical aspect of the logo. As a result, it produced several ambigrammatic visuals, staged in photographic contexts, in which sentences such as "I'm fine", "I feel fantastic" or "Life is great" turn into "Save me", "I'm falling apart", and "I hate myself". Readers who notice the logo placed at the upper left corner of the page, with an upside-down typographical catchphrase, rotate the newspaper and see the double calligraphed messages, which echo the SOS.[106][130] These ads were so influential that Bekah Miles, an American student who had herself come out of a severe depression, chose the "I'm fine / Save me" ambigram for a tattoo on her thigh. Posted on Facebook, the two-sided photograph immediately appealed to many young people, impressed by or sensitive to this difficulty.[131][132] To educate its students, George Fox University in the United States then relayed the optical illusion in its official journal, through a video totalling more than three million views,[133] and the story was also reproduced in several local media and by international organizations, helping to popularize this famous two-way tattoo.[134][135] Less fortunately, another teenage girl, aged 16, committed suicide; the same ambigram, "I'm fine / Save me", was found on a note in her room, a reversible calligraphy today printed on badges and bracelets for educational purposes.[136]
Ambigrams are sorts of visual palindromes.[137] Some words turn upside down, others are symmetrical through a mirror. Natural ambigram palindromes exist, like the words "wow" and "malayalam"[138] (a Dravidian language), or the biotechnology company Noxxon, whose palindromic name is associated with a rotational ambigram logo. But some words are natural ambigrams without being palindromes in the literary sense, like "bud" for example, because b and d are different letters. As a result, some words and sentences are good candidates for ambigrammists but not for palindromists, and vice versa, since the constraints differ slightly. Authors of ambigrams also benefit from a certain flexibility by playing on the typeface and graphical adjustments to influence the reading of their visual palindromes.
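The distinction can be made concrete with a small sketch that tests whether a word survives a vertical mirror; which letters count as mirror-symmetric is font-dependent, so the letter sets below are only an assumption.

```python
# Approximate mirror behaviour of lowercase letters about a vertical axis.
# Which letters qualify depends on the typeface; these sets are assumptions.
MIRROR_PAIRS = {"b": "d", "d": "b", "p": "q", "q": "p"}
SELF_MIRROR = set("ilmnouvwx")

def is_natural_mirror_ambigram(word: str) -> bool:
    """True if the word reads the same in a vertical mirror (font permitting)."""
    mirrored = []
    for letter in reversed(word.lower()):
        if letter in MIRROR_PAIRS:
            mirrored.append(MIRROR_PAIRS[letter])
        elif letter in SELF_MIRROR:
            mirrored.append(letter)
        else:
            return False
    return "".join(mirrored) == word.lower()

print([w for w in ("bud", "wow", "mom", "cat") if is_natural_mirror_ambigram(w)])
# ['bud', 'wow', 'mom']
```

Note that "bud" passes although it is not a literary palindrome, because b and d map to each other in the mirror.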
Oulipo, workshop of potential literature, seeks to create works using constrained writing techniques.[13] Georges Perec, French novelist and member of the Oulipo group, designed a rotational ambigram that he called a "vertical palindrome".[15] The sibylline sentence "Andin Basnoda a une épouse qui pue" means, in French, "Andin Basnoda has a smelly wife". Perec did not care about punctuation or spaces, but his creation flips easily with a classical font like Arial.
Visual palindromes sometimes perfectly illustrate literary contents. The American authorDan BrownincorporatedJohn Langdon's designs into the plot of his bestsellerAngels & Demons, and his fictional characterRobert Langdon's surname was a homage to theambigram artist.[139]
The fantasy novelAbarat, written and illustrated byClive Barker, features an ambigram of the title on its cover.[140]
Acalligramis text arranged in such a way that it forms a thematically related image. It can be a poem, a phrase, a portion ofscripture, or a single word. The visual arrangement can rely on certain use of thetypeface,calligraphyorhandwriting. The image created by the words illustrates the text by expressing visually what it says, or something closely associated.
InIslamic calligraphy, symmetrical calligrams appear in ancient and modern periods, forming mirror ambigrams inArabiclanguage.[30]
The word "OK" turned 90°counterclockwiseevokes a human icon, with the letter O forming the head and the letter K the arms and the legs. The Norwegian Climbing ClubOslo Klatreklubb[no](acronym"OK") borrowed the concept of this naturalcalligramfor their official logo.[141]
As described byDouglas Hofstadter, ambigrams arevisual punshaving two or more (clear)interpretationsaswritten words.[4]
Multilingualambigrams can be read one way in alanguage, and another way in a different language oralphabet.[42]Multi-lingual ambigrams can occur in all of the various types of ambigrams, with multi-lingual perceptual shift ambigrams being particularly striking.
Like certain anagrams with providential meanings such as "Listen / Silent" or "The eyes / They see", ambigrams also sometimes take on a timely sense; for example "up" becomes the abbreviation "dn" quite naturally by a 180° rotation.[142] On the other hand, chance can also work against the letters. This is the case with the odd anagram "Santa / Satan", and with a rotational ambigram that went viral because of the paradoxical and unintentional message it expresses: spotted in 2015 on a metal medal marketed without ill intent, the word "hope" reads fairly plainly as "Adolf" upside down. The coincidence, photographed by an Internet user, was relayed by several media outlets and constitutes an ambiguous image.[143][144]
Recreational mathematicsis carried out forentertainmentrather than as a strictly research and application-based professional activity.[62]An ambigrammagic squareexists, with the sums of the numbers in each row, each column, and both main diagonals the same right side up and upside down (180° rotational design). Numeral ambigrams also associate with alphabet letters. A "dissection" ambigram of "squaring the circle" was achieved in a puzzle where each piece of the word "circle" fits inside a perfect square.[4]
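A classic example of such a square, with every line summing to 264 both ways, can be verified with a short script; the square itself is a well-known recreational-mathematics example chosen here for illustration, not one cited by the article.

```python
# A classic "upside-down" magic square: every row, column and main diagonal
# sums to 264, both as printed and after a 180-degree rotation of the page.
ROTATE = {"0": "0", "1": "1", "6": "9", "8": "8", "9": "6"}

square = [
    [96, 11, 89, 68],
    [88, 69, 91, 16],
    [61, 86, 18, 99],
    [19, 98, 66, 81],
]

def rotated(sq):
    """Rotate the whole square 180 degrees, rotating each numeral as well."""
    flip = lambda n: int("".join(ROTATE[d] for d in reversed(str(n))))
    return [[flip(n) for n in reversed(row)] for row in reversed(sq)]

def magic_sums(sq):
    rows = [sum(r) for r in sq]
    cols = [sum(c) for c in zip(*sq)]
    diags = [sum(sq[i][i] for i in range(4)), sum(sq[i][3 - i] for i in range(4))]
    return set(rows + cols + diags)

print(magic_sums(square), magic_sums(rotated(square)))   # {264} {264}
```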
Burkard Polster, professor of mathematics in Melbourne,[145] has conducted research on ambigrams and published several books dealing with the topic, including Eye Twisters, Ambigrams & Other Visual Puzzles to Amaze and Entertain.[146] In the abstract Mathemagical Ambigrams, Polster presents several ambigrams closely related to his field, like the words "algebra", "geometry", "math", "maths", or "mathematics".[4]
Calculator spellingis anunintended characteristicof theseven-segment displaytraditionally used bycalculators, in which, when read upside-down, the digits resemble letters of theLatin alphabet. Also,palindromic numbersandstrobogrammatic numberssometimes attract attention of mathematician ambigrammists.[55][54]
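The effect can be sketched with one common digit-to-letter mapping; the mapping and the example numbers below are illustrative.

```python
# Conventional upside-down readings of seven-segment digits (one common mapping).
UPSIDE_DOWN = {"0": "O", "1": "I", "2": "Z", "3": "E", "4": "h",
               "5": "S", "6": "g", "7": "L", "8": "B", "9": "G"}

def calculator_spelling(number: str) -> str:
    """Read a calculator display upside down: reverse the digits and map each one."""
    return "".join(UPSIDE_DOWN[d] for d in reversed(number) if d in UPSIDE_DOWN)

print(calculator_spelling("7734"))    # hELL
print(calculator_spelling("0.7734"))  # hELLO
print(calculator_spelling("35007"))   # LOOSE
```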
Ambigramtessellationsand3Dambigrams are two types particularly fun for the mathematician ingeometry. Wordpatternsin tessellations can start from 35 different fundamentalpolygons, such as therhombus, theisoscelesright triangle, or theparallelogram.[47]
Word puzzles are used as a source of entertainment, but can additionally serve an educational purpose. The American puzzle designer Scott Kim published several ambigrams in Scientific American in Martin Gardner's "Mathematical Games" column, among them long sentences like "Martin Gardner's celebration of mind" turning into "Physics, patterns and prestidigitation".[147]
Legibilityis an important aspect in successful ambigrams. It concerns the ease with which a reader decodes symbols. If the message is lost or difficult to perceive, an ambigram does not work.[8]Readability is related toperception, or how our braininterpretsthe forms we see through our eyes.[148]
Symmetry in ambigrams generally improves the visual appearance of the calligraphic words.[38] Hermann Rorschach, inventor of the Rorschach Test, noticed that asymmetric figures are rejected by many subjects. Symmetry supplies part of the necessary artistic composition.[149]
For manyamateurs, designing ambigrams represents arecreational activity, whereserendipitycan play a fertile role, when the author makes an unplanned fortunate discovery.[4][34]
In the word "ambigram", the rootambi-means "both" and is a popular prefix in aworld of dualities, such as day/night, left/right, birth/death, good/evil.[150]InWordplay: The Philosophy, Art, and Science of Ambigrams,[151]John Langdonmentions theyin and yangsymbol as one of his major influences to create upside down words.
Ambigrams are mentioned inMetamagical Themas, an eclectic collection of articles thatDouglas Hofstadterwrote for thepopular sciencemagazineScientific Americanduring the early 1980s.[9]
Seeking the balance point ofanalogiesis anaestheticexercise closely related to the aesthetically pleasing activity of doing ambigrams, where shapes must be concocted that are poised exactly at the midpoint between twointerpretations. But seeking the balance point is far more than just aesthetic play; it probes the very core of how people perceiveabstractions, and it does so without their even knowing it. It is a crucial aspect ofCopycatresearch.[9]
Inmagic, ambigrams work likevisual illusions, revealing an unexpected new message from a particular written word.[153]
In the first series of the British showTrick or Treat, the show's host and creatorDerren Brownuses cards with rotational ambigrams.[154][155]These cards can read either 'Trick' or 'Treat'.
Ambiguous images, of which ambigrams are a part, cause ambiguity in different ways. For example, by rotational symmetry, as in the Illusion ofThe CookbyGiuseppe Arcimboldo(1570);[156]sometimes by afigure-groundambivalence as inRubin vase; by perceptual shift as in therabbit–duck illusion, or throughpareidolias; or again, by the representation ofimpossible objects, such asNecker cubeorPenrose triangle. For all these types of images, certain ambigrams exist, and can be combined withvisualsof the same type.
John Langdondesigned afigure-groundambigram "optical illusion" with the two words "optical" and "illusion", one forming the figure and the other the background. "Optical" is easier to see initially but "illusion" emerges with longer observation.[157]
Adidasmarketed a line ofsneakerscalled "Bounce", with an ambigramtypographyprinted inside the shoe.
Several clothing brands, such asHelly Hansen(HH),Under Armour(UA), orNew Man[fr], raise an ambigram logo as their visualidentity.[79]
Mirror ambigrams are also sometimes placed onT-shirts,towelsandhats, whilesocksare more adapted to rotational ambigrams. TheconceptualartistMia Florentine Weissmarketed T-shirts and other products with her mirror ambigramLove Hate[de].[158][118]Likewise, the city ofVenturain California sells sweatshirts, caps, jackets, and other fashion accessories printed with its rotational ambigram logo.[159]
TheCD coverof the thirteenth studio albumFuneralby American rapperLil Waynefeatures a 180° rotational ambigram reading "Funeral / Lil Wayne".[160]
Thespecial editionpaper sleeve (CD with DVD) of the solo albumChaos and Creation in the BackyardbyPaul McCartneyfeatures an ambigram of the singer's name.[161]
TheGrateful Deadhave used ambigrams several times, including on their albumsAoxomoxoa[162]andAmerican Beauty.[163]
Although the words spelled by most ambigrams are relatively short in length, oneDVDcover forThe Princess Bridemovie creates a rotational ambigram out of two words "Princess Bride", whether viewed right side up or upside down.[164]
The cover of the studio albumCreate/Destroy/Createby rock bandGoodnight, Sunriseis an ambigram composition constituted of two invariant words, "create" and "destroy", designed by Polish artist Daniel Dostal.[165]
The reversibleshot glasscontaining a changing message "Drink / Drunk", created by thetypographerMark Simonsonwas manufactured and sold in the market.[74]
The concept of the reversible sign that some merchants display in their windows to indicate that the store is sometimes "open", sometimes "closed", was inaugurated at the beginning of the 2000s by a rotational ambigram "Open / Closed" developed by David Holst.[43]
Different ambigramartists, sometimes calledambigrammists,[9][166]may create distinctive ambigrams from the same words, differing in bothstyleandform.
There are no universal guidelines for creating ambigrams, and differentwaysof approaching problems coexist.
A number of books suggestmethodsforcreation, includingWordPlay,[75]Eye Twisters,[146]andAmbigrams Revealed,[38]in English.
Computerizedmethods toautomaticallycreate ambigrams have been developed.[167][168]
John LangdonandScott Kimeach believed that they had invented ambigrams in the 1970s.[169]
Douglas Hofstadtercoined the term.[4]
To explain visually the numerous types of possible ambigrams, Hofstadter created many pieces with different constraints and symmetries.[170]Hofstadter has had several exhibitions of his artwork in various university galleries.[171][172]
According toScott Kim, Hofstadter once created a series of 50 ambigrams on the name of all the states in the US.[173]
In 1987 a book of 200 of his ambigrams, together with a long dialogue with his alter ego Egbert G. Gebstadter on ambigrams andcreativity, was published in Italy.[5][12]
John Langdon is a self-taught artist, graphic designer and painter who started designing ambigrams in the late 1960s and early 70s. A lettering specialist, Langdon is a professor of typography and corporate identity at Drexel University in Philadelphia.[174]
John Langdon produced a mirror image logo "Starship" in 1972-1973,[175][176]that was sold to the rock bandJefferson Starship.
Langdon's ambigram bookWordplaywas published in 1992. It contains about 60 ambigrams. Each design is accompanied by a brief essay that explores the word's definition, its etymology, its relationship to philosophy and science, and its use in everyday life.[75]
Ambigrams became more popular as a result ofDan Brownincorporating John Langdon's designs into the plot of his bestseller,Angels & Demons, and the DVD release of theAngels & Demonsmovie contains a bonus chapter called "This is an Ambigram". Langdon also produced the ambigram that was used for some versions of the book's cover.[169]Brown used the nameRobert Langdonfor the hero in his novels as an homage to John Langdon.[139][177]
Blacksmith Records, the music management company andrecord label, possesses a rotational ambigram logo[178]designed by John Langdon.[179]
Scott Kimis one of the best-known masters of the art of ambigrams.[78]He is an Americanpuzzledesigner andartistwho published in 1981 a book calledInversionswith ambigrams of many types.[18][177]
Nikita Prokhorovis agraphic designer,lettering artistand ambigram designer. His bookAmbigrams Revealedshowcases ambigram designs of all types, from all around the world.[38][180]
Born in 1946,Alain Nicolasis a specialist of figurative and ambigramtessellations. In his book, he performed many tilings with various words like "infinity", "Einstein" or "inversion" legible in many orientations.[47]According toThe Guardian, Nicolas has been called "the world's finest artist ofEscher-styletilings".[181] | https://en.wikipedia.org/wiki/Ambigram |
TPT (time partition testing) is a systematic test methodology for the automated software testing and verification of embedded control systems, cyber-physical systems, and dataflow programs. TPT is specialised in the testing and validation of embedded systems whose inputs and outputs can be represented as signals, and is a dedicated method for testing the continuous behaviour of systems.[1] Most control systems belong to this system class. The outstanding characteristic of control systems is that they are closely interlinked with a real-world environment: controllers need to observe their environment and react correspondingly to its behaviour.[2] The system works in an interactional cycle with its environment and is subject to temporal constraints. Testing these systems means stimulating them and checking their timing behaviour. Traditional functional testing methods use scripts – TPT uses model-based testing.
TPT combines a systematic and graphic modelling technique for test cases with a fully automated test execution in different environments and automatic test evaluation. TPT covers four test activities: test case modelling, test execution, test evaluation (assessment), and test documentation (reporting).
In TPT, tests are modelled graphically with the aid of special state machines and time partitioning.[1][3] All test cases for one system under test can be modelled using one hybrid automaton. Tests often consist of a sequence of logical phases. The states of the finite-state machine represent the logical phases of a test, which are similar for all tests. Trigger conditions model the transitions between the test phases. Each state and transition of the automaton may have different variants. The combinations of the variants model the individual test cases.
Natural languagetexts become part of the graphics, supporting the simple and demonstrative readability even for non-programmers. Substantial techniques such as parallel and hierarchical branchingstate machines, conditional branching,reactivity, signal description,measuredsignals as well as lists of simple test steps allow an intuitive and graphic modelling even of complex test cases.
The test's complexity is hidden behind graphics. The lowest level signal description consists of either test step lists or so called direct definitions.
Through the use of the Test-Step List, one can model simple sequences of test steps that do not need to execute in parallel, such as setting signals (Set channel), ramping signals (Ramp channel), setting parameters (Set parameter), and waiting (Wait). Requests for the expected test results can be made within the test sequence to evaluate the system under test as it runs. It is also possible to place subautomatons in the Test-Step List, which in turn contain automatons and sequences, resulting in hierarchical Test-Step Lists. The test sequences can also be combined and parallelised with other modelling methods, allowing for a great deal of complexity (or simplicity) in one's test.
Within the Test-Step-List it is possible to implement so-called "Direct Definitions". Using this type of modelling, one can define signals as a function of time, past variables/test events, and other signals. It is also possible to define these signals by writing "C-Style" code as well as importing measurement data and using a manual signal editor.
It is possible to define functions that can act as clients or servers. Client functions are called from TPT in the system under test, whereas server functions implemented in TPT can be called as "stub functions" from the system under test. TPT itself may also call the server functions.
TPT was developed specifically for testing of continuous and reactive behaviour of embedded systems.[4]TPT can be seen as the extension of theClassification Tree Methodin terms of timing behaviour. Because of its systematic approach intest casegeneration, TPT even keeps track of very complex systems whose thorough testing requires a large number of test cases thus making it possible to find failures in the system under test with an ideal number of test cases.
The underlying idea of TPT's systematic is the separation of similarities and differences among the test cases: most test cases are very similar in their structural process and can "only" be differentiated in a few, but crucial details.[5]TPT makes use of this fact by jointly modelling and using joint structures. On the one hand, redundancies are thus avoided. On the other hand, it is made very clear what the test cases actually differ in – i.e. which specific aspect they respectively test. The comparability of test cases and thus the overview is improved in this approach and the attention of the tester is focused on the essential – the differentiating features of the test cases.
The hierarchical structure of the test cases makes it possible to break complex test problems down into sub-problems thus also improving the clarity and – as a result – the quality of the test.
These modelling techniques support the tester in finding the actually relevant cases, avoiding redundancies and keeping track of even large numbers of test cases.[6]
TPT comprises several possibilities to automatically generate test cases.
With TPT, each test case can specifically react to the system's behaviour[8]during the testing process in real time – for instance to react on the system exactly when a certain system-state occurs or a sensor signal exceeds a certain threshold. If, for example, a sensor failure for an engine controller is to be simulated when the engine idling speed is exceeded, it has to be possible to react to the event "engine idling speed exceeded" in the description of the test case.
TPT test cases are defined independently of their execution. Thanks to the so-called virtual machine (VM) concept, the test cases can be executed in almost any environment, including real-time environments. Examples are MATLAB/Simulink, TargetLink, ASCET, C-code, CAN, AUTOSAR, SystemDesk, DaVinci CT, LABCAR, INCA, Software-in-the-Loop (SiL) and HiL. Thus TPT is an integrated tool to be used in all testing phases of the development, such as unit testing, integration testing, system testing and regression testing.
For analysis and measurement ofcode coverage, TPT can interact with coverage tools like Testwell CTC++ forC-code.
A configurable graphical user interface (Dashboard), based onGUI widgets, can be used to interact with tests.
The modelled test cases in TPT are compiled and, during test execution, interpreted by the so-called virtual machine (VM). The VM is the same for all platforms and all tests; only a platform adapter realises the signal mapping for the individual application. The TPT-VM is implemented in ANSI C, requires only a few kilobytes of memory and can do entirely without dynamic memory allocation, allowing it to be applied in minimalist environments with few resources. There are also APIs for C and .NET.
TPT's Virtual Machine is able to process tests in real time with defined response behaviour. The response times of TPT test cases are normally given within micro seconds – depending on the complexity and test hardware.
The expected system behaviour for individual test cases should also be automatically tested to assure efficient test processes. TPT offers the possibility to compute the properties for the expected behaviour online (during test execution) and offline (after test execution). While online evaluation uses the same modelling techniques as test modelling, offline evaluation offers decidedly more far-reaching possibilities for more complex evaluations, including operations such as comparisons with external reference data, limit-value monitoring, signal filters, analyses of state sequences and time conditions.
The offline evaluation is, technically speaking, based on thePythonscript language, which has been extended by specific syntactic language elements and a specialised evaluation library to give optimal support to the test evaluation. The use of a script language ensures a high degree of flexibility in the test evaluation: access to reference data, communication with other tools and development of one's own domain-specific libraries for test evaluation is supported. Besides of the script based test result evaluation user interfaces provide simple access to the test assessments and help non-programmers to avoid scripting.
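The article does not reproduce TPT's actual assessment API, but a generic Python sketch conveys the kind of limit check an offline assessment performs after execution; all names below are illustrative plain Python, not TPT syntax.

```python
# Illustrative only: a tolerance-band check of the kind an offline assessment performs.
# Function and variable names are generic, not TPT's actual API.

def assess_within_band(time, measured, reference, tolerance):
    """Flag every sample where the measured signal leaves the reference band."""
    violations = [
        (t, m, r)
        for t, m, r in zip(time, measured, reference)
        if abs(m - r) > tolerance
    ]
    verdict = "PASSED" if not violations else "FAILED"
    return verdict, violations

time      = [0.0, 0.1, 0.2, 0.3, 0.4]
reference = [0.0, 1.0, 1.0, 1.0, 1.0]     # expected step response
measured  = [0.0, 0.8, 1.05, 1.02, 1.0]   # logged from the test run

print(assess_within_band(time, measured, reference, tolerance=0.1))
# ('FAILED', [(0.1, 0.8, 1.0)])
```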
Measurement data from other sources likeTargetLinkandSimulinksignal logging or MCD-3 measurement data can be assessed automatically. This data can be independent from the test execution.
TPT test documentation according to IEEE 829 presents the result of the test evaluation to the tester in an HTML report, in which not only the pure information "success", "failed" or "unknown" is depicted as the test result for each test case, but also details such as characteristic parameters or signals that have been observed in the test execution or computed in the test evaluation. Since the test assessment returns proper information about the timing and the checked behaviour, this information can be made available in the report.
The content of the test documentation as well as the structure of the document can be freely configured with the help of a template.
TPT supports the test management of TPT test projects.
Industry norms such asIEC 61508,DO-178B, EN 50128 andISO 26262requiretraceability of requirements and tests. TPT offers an interface torequirementstools likeTelelogicDOORS to support these activities.
TPT is a model-based testing tool applied mainly in automotive controller development[9] and was originally developed within Daimler AG for its own development processes. Daimler coordinated the development of the testing tool for years.[10] Since 2007, PikeTec has continued the development of the tool. TPT is used by many other car manufacturers such as BMW, Volkswagen, Audi, Porsche and General Motors, as well as suppliers such as Robert Bosch GmbH, Continental and Hella.[11] | https://en.wikipedia.org/wiki/Time_Partition_Testing |
Artificial intelligence(AI) refers to the capability ofcomputational systemsto perform tasks typically associated withhuman intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is afield of researchincomputer sciencethat develops and studies methods andsoftwarethat enable machines toperceive their environmentand uselearningandintelligenceto take actions that maximize their chances of achieving defined goals.[1]Such machines may be called AIs.
High-profileapplications of AIinclude advancedweb search engines(e.g.,Google Search);recommendation systems(used byYouTube,Amazon, andNetflix);virtual assistants(e.g.,Google Assistant,Siri, andAlexa);autonomous vehicles(e.g.,Waymo);generativeandcreativetools (e.g.,ChatGPTandAI art); andsuperhumanplay and analysis instrategy games(e.g.,chessandGo). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it'snot labeled AI anymore."[2][3]
Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning,reasoning,knowledge representation,planning,natural language processing,perception, and support forrobotics.[a]To reach these goals, AI researchers have adapted and integrated a wide range of techniques, includingsearchandmathematical optimization,formal logic,artificial neural networks, and methods based onstatistics,operations research, andeconomics.[b]AI also draws uponpsychology,linguistics,philosophy,neuroscience, and other fields.[4]Some AI companies, such asOpenAI,Google DeepMindandMeta, aim to createartificial general intelligence(AGI)—AI that can complete virtually any cognitive task at least as well as humans.[5]
Artificial intelligence was founded as an academic discipline in 1956,[6]and the field went through multiple cycles of optimism throughoutits history,[7][8]followed by periods of disappointment and loss of funding, known asAI winters.[9][10]Funding and interest vastly increased after 2012 whengraphics processing unitsstarted being used to accelerate neural networks, anddeep learningoutperformed previous AI techniques.[11]This growth accelerated further after 2017 with thetransformer architecture.[12]In the 2020s, the period of rapidprogressmarked by advanced generative AI became known as theAI boom. Generative AI and its ability to create and modify content exposed several unintended consequences and harms in the present and raisedethical concernsaboutAI's long-term effectsand potentialexistential risks, prompting discussions aboutregulatory policiesto ensure thesafetyand benefits of the technology.
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logicaldeductions.[13]By the late 1980s and 1990s, methods were developed for dealing withuncertainor incomplete information, employing concepts fromprobabilityandeconomics.[14]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow.[15]Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16]Accurate and efficient reasoning is an unsolved problem.
Knowledge representationandknowledge engineering[17]allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18]scene interpretation,[19]clinical decision support,[20]knowledge discovery (mining "interesting" and actionable inferences from largedatabases),[21]and other areas.[22]
Aknowledge baseis a body of knowledge represented in a form that can be used by a program. Anontologyis the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23]Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24]situations, events, states, and time;[25]causes and effects;[26]knowledge about knowledge (what we know about what other people know);[27]default reasoning(things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28]and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29]and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16]There is also the difficulty ofknowledge acquisition, the problem of obtaining knowledge for AI applications.[c]
An "agent" is anything that perceives and takes actions in the world. Arational agenthas goals or preferences and takes actions to make them happen.[d][32]Inautomated planning, the agent has a specific goal.[33]Inautomated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": theutilityof all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34]
Inclassical planning, the agent knows exactly what the effect of any action will be.[35]In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36]
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., withinverse reinforcement learning), or the agent can seek information to improve its preferences.[37]Information value theorycan be used to weigh the value of exploratory or experimental actions.[38]The space of possible future actions and situations is typicallyintractablylarge, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
AMarkov decision processhas atransition modelthat describes the probability that a particular action will change the state in a particular way and areward functionthat supplies the utility of each state and the cost of each action. Apolicyassociates a decision with each possible state. The policy could be calculated (e.g., byiteration), beheuristic, or it can be learned.[39]
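A minimal value-iteration sketch for a toy Markov decision process; the states, transition probabilities and rewards are invented for illustration.

```python
# Toy MDP: for each (state, action), a list of (probability, next_state, reward).
transitions = {
    ("cool", "fast"): [(0.9, "cool", 2.0), (0.1, "hot", 2.0)],
    ("cool", "slow"): [(1.0, "cool", 1.0)],
    ("hot", "fast"):  [(0.8, "broken", -10.0), (0.2, "hot", 2.0)],
    ("hot", "slow"):  [(0.7, "cool", 1.0), (0.3, "hot", 1.0)],
    ("broken", "fast"): [(1.0, "broken", 0.0)],
    ("broken", "slow"): [(1.0, "broken", 0.0)],
}
states, actions, gamma = ["cool", "hot", "broken"], ("fast", "slow"), 0.9

V = {s: 0.0 for s in states}
for _ in range(100):                      # value iteration until (near) convergence
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[(s, a)])
            for a in actions
        )
        for s in states
    }

policy = {
    s: max(actions,
           key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in transitions[(s, a)]))
    for s in states
}
print(V)
print(policy)   # the learned policy, e.g. drive "fast" when cool, "slow" when hot
```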
Game theorydescribes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40]
Machine learningis the study of programs that can improve their performance on a given task automatically.[41]It has been a part of AI from the beginning.[e]
There are several kinds of machine learning.Unsupervised learninganalyzes a stream of data and finds patterns and makes predictions without any other guidance.[44]Supervised learningrequires labeling the training data with the expected answers, and comes in two main varieties:classification(where the program must learn to predict what category the input belongs in) andregression(where the program must deduce a numeric function based on numeric input).[45]
Inreinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46]Transfer learningis when the knowledge gained from one problem is applied to a new problem.[47]Deep learningis a type of machine learning that runs inputs through biologically inspiredartificial neural networksfor all of these types of learning.[48]
Computational learning theorycan assess learners bycomputational complexity, bysample complexity(how much data is required), or by other notions ofoptimization.[49]
Natural language processing(NLP)[50]allows programs to read, write and communicate in human languages such asEnglish. Specific problems includespeech recognition,speech synthesis,machine translation,information extraction,information retrievalandquestion answering.[51]
Early work, based onNoam Chomsky'sgenerative grammarandsemantic networks, had difficulty withword-sense disambiguation[f]unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[29]).Margaret Mastermanbelieved that it was meaning and not grammar that was the key to understanding languages, and thatthesauriand not dictionaries should be the basis of computational language structure.
Modern deep learning techniques for NLP includeword embedding(representing words, typically asvectorsencoding their meaning),[52]transformers(a deep learning architecture using anattentionmechanism),[53]and others.[54]In 2019,generative pre-trained transformer(or "GPT") language models began to generate coherent text,[55][56]and by 2023, these models were able to get human-level scores on thebar exam,SATtest,GREtest, and many other real-world applications.[57]
Machine perceptionis the ability to use input from sensors (such as cameras, microphones, wireless signals, activelidar, sonar, radar, andtactile sensors) to deduce aspects of the world.Computer visionis the ability to analyze visual input.[58]
The field includesspeech recognition,[59]image classification,[60]facial recognition,object recognition,[61]object tracking,[62]androbotic perception.[63]
Affective computingis a field that comprises systems that recognize, interpret, process, or simulate humanfeeling, emotion, and mood.[65]For example, somevirtual assistantsare programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitatehuman–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66]Moderate successes related to affective computing include textualsentiment analysisand, more recently,multimodal sentiment analysis, wherein AI classifies the effects displayed by a videotaped subject.[67]
A machine withartificial general intelligenceshould be able to solve a wide variety of problems with breadth and versatility similar tohuman intelligence.[68]
AI research uses a wide variety of techniques to accomplish the goals above.[b]
AI can solve many problems by intelligently searching through many possible solutions.[69]There are two very different kinds of search used in AI:state space searchandlocal search.
State space searchsearches through a tree of possible states to try to find a goal state.[70]For example,planningalgorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process calledmeans-ends analysis.[71]
Simple exhaustive searches[72]are rarely sufficient for most real-world problems: thesearch space(the number of places to search) quickly grows toastronomical numbers. The result is a search that istoo slowor never completes.[15]"Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.[73]
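A compact sketch of heuristic state-space search on a toy graph, using the standard A* formulation; the graph and heuristic estimates are invented for illustration.

```python
import heapq

# Toy map: neighbours with step costs, plus a heuristic estimate of the remaining cost.
graph = {
    "A": {"B": 2, "C": 5},
    "B": {"C": 2, "D": 4},
    "C": {"D": 1},
    "D": {},
}
heuristic = {"A": 4, "B": 3, "C": 1, "D": 0}   # must not overestimate the true cost

def a_star(start, goal):
    frontier = [(heuristic[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, step in graph[node].items():
            g2 = g + step
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic[nxt], g2, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "D"))   # (['A', 'B', 'C', 'D'], 5)
```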
Adversarial searchis used forgame-playingprograms, such as chess or Go. It searches through atreeof possible moves and countermoves, looking for a winning position.[74]
Local searchusesmathematical optimizationto find a solution to a problem. It begins with some form of guess and refines it incrementally.[75]
Gradient descentis a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize aloss function. Variants of gradient descent are commonly used to trainneural networks,[76]through thebackpropagationalgorithm.
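A minimal gradient-descent sketch, fitting a single parameter by repeatedly stepping against the gradient of a squared-error loss; the data and learning rate are invented for illustration.

```python
# Fit y ≈ w * x by gradient descent on the mean squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # (x, y) pairs, roughly y = 2x

w, learning_rate = 0.0, 0.05
for step in range(200):
    # d/dw of mean((w*x - y)^2) = mean(2 * (w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad

print(round(w, 3))   # about 2.036, the least-squares slope for this data
```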
Another type of local search isevolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them,selectingonly the fittest to survive each generation.[77]
Distributed search processes can coordinate viaswarm intelligencealgorithms. Two popular swarm algorithms used in search areparticle swarm optimization(inspired by birdflocking) andant colony optimization(inspired byant trails).[78]
Formallogicis used forreasoningandknowledge representation.[79]Formal logic comes in two main forms:propositional logic(which operates on statements that are true or false and useslogical connectivessuch as "and", "or", "not" and "implies")[80]andpredicate logic(which also operates on objects, predicates and relations and usesquantifierssuch as "EveryXis aY" and "There aresomeXs that areYs").[81]
Deductive reasoningin logic is the process ofprovinga new statement (conclusion) from other statements that are given and assumed to be true (thepremises).[82]Proofs can be structured as prooftrees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes byinference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whoseleaf nodesare labelled by premises oraxioms. In the case ofHorn clauses, problem-solving search can be performed by reasoningforwardsfrom the premises orbackwardsfrom the problem.[83]In the more general case of the clausal form offirst-order logic,resolutionis a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[84]
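A small forward-chaining sketch over propositional Horn clauses; the rules and facts are invented for illustration.

```python
# Each rule is (body, head): if every proposition in the body is known, add the head.
rules = [
    ({"rain", "outside"}, "wet"),
    ({"wet"}, "cold"),
    ({"cold", "tired"}, "grumpy"),
]
facts = {"rain", "outside", "tired"}

changed = True
while changed:                       # forward chaining: apply rules until a fixpoint
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)
            changed = True

print(sorted(facts))
# ['cold', 'grumpy', 'outside', 'rain', 'tired', 'wet']
```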
Inference in both Horn clause logic and first-order logic isundecidable, and thereforeintractable. However, backward reasoning with Horn clauses, which underpins computation in thelogic programminglanguageProlog, isTuring complete. Moreover, its efficiency is competitive with computation in othersymbolic programminglanguages.[85]
Fuzzy logicassigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.[86]
Non-monotonic logics, including logic programming withnegation as failure, are designed to handledefault reasoning.[28]Other specialized versions of logic have been developed to describe many complex domains.
Many problems in AI (including reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods fromprobabilitytheory and economics.[87]Precise mathematical tools have been developed that analyze how an agent can make choices and plan, usingdecision theory,decision analysis,[88]andinformation value theory.[89]These tools include models such asMarkov decision processes,[90]dynamicdecision networks,[91]game theoryandmechanism design.[92]
Bayesian networks[93]are a tool that can be used forreasoning(using theBayesian inferencealgorithm),[g][95]learning(using theexpectation–maximization algorithm),[h][97]planning(usingdecision networks)[98]andperception(usingdynamic Bayesian networks).[91]
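A minimal sketch of the kind of inference a Bayesian network supports, here a single cause-effect pair updated with Bayes' rule; the probabilities are invented for illustration.

```python
# Two-node network: Fault -> AlarmOn, with made-up probabilities.
p_fault = 0.01
p_alarm_given_fault = 0.95
p_alarm_given_no_fault = 0.02

# Bayes' rule: P(fault | alarm) = P(alarm | fault) * P(fault) / P(alarm)
p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_no_fault * (1 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm

print(round(p_fault_given_alarm, 3))   # about 0.324: the alarm raises belief from 1% to ~32%
```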
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g.,hidden Markov modelsorKalman filters).[91]
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand.Classifiers[99]are functions that usepattern matchingto determine the closest match. They can be fine-tuned based on chosen examples usingsupervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as adata set. When a new observation is received, that observation is classified based on previous experience.[45]
There are many kinds of classifiers in use.[100]Thedecision treeis the simplest and most widely used symbolic machine learning algorithm.[101]K-nearest neighboralgorithm was the most widely used analogical AI until the mid-1990s, andKernel methodssuch as thesupport vector machine(SVM) displaced k-nearest neighbor in the 1990s.[102]Thenaive Bayes classifieris reportedly the "most widely used learner"[103]at Google, due in part to its scalability.[104]Neural networksare also used as classifiers.[105]
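A minimal k-nearest-neighbour sketch of the classification setting described above; the observations and labels are invented for illustration.

```python
from collections import Counter

# Labelled observations: (feature vector, class label).
training = [
    ((1.0, 1.0), "small"), ((1.2, 0.8), "small"), ((0.8, 1.1), "small"),
    ((4.0, 4.2), "large"), ((4.5, 3.8), "large"), ((3.9, 4.1), "large"),
]

def classify(x, k=3):
    """Label a new observation by majority vote among its k nearest neighbours."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(training, key=lambda item: dist(item[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(classify((1.1, 0.9)))   # small
print(classify((4.2, 4.0)))   # large
```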
An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function, and once the weighted sum of its inputs crosses a specified threshold, data is transmitted to the next layer. A network is typically called a deep neural network if it has at least two hidden layers.[105]
Learning algorithms for neural networks uselocal searchto choose the weights that will get the right output for each input during training. The most common training technique is thebackpropagationalgorithm.[106]Neural networks learn to model complex relationships between inputs and outputs andfind patternsin data. In theory, a neural network can learn any function.[107]
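A minimal backpropagation sketch for a tiny two-layer network, with a finite-difference check of one gradient; the weights and the training example are invented for illustration.

```python
import math

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

def forward(w, x):
    """Two-input, two-hidden, one-output network with sigmoid units."""
    h = [sigmoid(w["w1"][j][0] * x[0] + w["w1"][j][1] * x[1] + w["b1"][j]) for j in range(2)]
    out = sigmoid(w["w2"][0] * h[0] + w["w2"][1] * h[1] + w["b2"])
    return h, out

def loss(w, x, y):
    return (forward(w, x)[1] - y) ** 2

def backprop(w, x, y):
    """Analytic gradients of the squared error, layer by layer (the chain rule)."""
    h, out = forward(w, x)
    d_out = 2 * (out - y) * out * (1 - out)          # error signal at the output
    grad = {"w2": [d_out * h[0], d_out * h[1]], "b2": d_out,
            "w1": [[0, 0], [0, 0]], "b1": [0, 0]}
    for j in range(2):                                # error propagated to the hidden layer
        d_h = d_out * w["w2"][j] * h[j] * (1 - h[j])
        grad["w1"][j] = [d_h * x[0], d_h * x[1]]
        grad["b1"][j] = d_h
    return grad

weights = {"w1": [[0.5, -0.3], [0.2, 0.4]], "b1": [0.1, -0.1], "w2": [0.7, -0.6], "b2": 0.05}
x, y = (1.0, 0.0), 1.0

g = backprop(weights, x, y)
# Sanity check against a finite difference for one parameter:
eps = 1e-6
w_plus = {**weights, "b2": weights["b2"] + eps}
numeric = (loss(w_plus, x, y) - loss(weights, x, y)) / eps
print(round(g["b2"], 6), round(numeric, 6))   # the two values agree closely
```

In training, each weight would be nudged a small step against its gradient, exactly the local-search update described above.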
Infeedforward neural networksthe signal passes in only one direction.[108]Recurrent neural networksfeed the output signal back into the input, which allows short-term memories of previous input events.Long short term memoryis the most successful network architecture for recurrent networks.[109]Perceptrons[110]use only a single layer of neurons; deep learning[111]uses multiple layers.Convolutional neural networksstrengthen the connection between neurons that are "close" to each other—this is especially important inimage processing, where a local set of neurons mustidentify an "edge"before the network can identify an object.[112]
Deep learning[111]uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, inimage processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[113]
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, includingcomputer vision,speech recognition,natural language processing,image classification,[114]and others. The reason that deep learning performs so well in so many applications is not known as of 2021.[115]The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i]but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching toGPUs) and the availability of vast amounts of training data, especially the giantcurated datasetsused for benchmark testing, such asImageNet.[j]
Generative pre-trained transformers(GPT) arelarge language models(LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a largecorpus of textthat can be from the Internet. The pretraining consists of predicting the nexttoken(a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique calledreinforcement learning from human feedback(RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems.[123]Such systems are used inchatbots, which allow people to ask a question or request a task in simple text.[124][125]
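A toy sketch of the autoregressive loop described above; a hand-written probability table stands in for the learned transformer, so every name and number here is illustrative.

```python
import random

# Toy stand-in for a language model: a table of next-token probabilities.
# Real GPT models compute these probabilities with a learned transformer over subword tokens.
next_token_probs = {
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("cat",): {"sat": 0.7, "ran": 0.3},
    ("dog",): {"sat": 0.4, "ran": 0.6},
    ("sat",): {"down": 1.0},
    ("ran",): {"away": 1.0},
    ("down",): {"<end>": 1.0},
    ("away",): {"<end>": 1.0},
}

def generate(prompt, max_tokens=10):
    """Autoregressive generation: repeatedly sample the next token and append it."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs.get((tokens[-1],), {"<end>": 1.0})
        choices, weights = zip(*probs.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["the"]))   # e.g. "the cat sat down" or "the dog ran away"
```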
Current models and services includeGemini(formerly Bard),ChatGPT,Grok,Claude,Copilot, andLLaMA.[126]MultimodalGPT models can process different types of data (modalities) such as images, videos, sound, and text.[127]
In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for training large-scale (commercial and academic) machine learning models.[128] Specialized programming languages such as Prolog were used in early AI research,[129] but general-purpose programming languages like Python have become predominant.[130]
The transistor density inintegrated circuitshas been observed to roughly double every 18 months—a trend known asMoore's law, named after theIntelco-founderGordon Moore, who first identified it. Improvements inGPUshave been even faster,[131]a trend sometimes calledHuang's law,[132]named afterNvidiaco-founder and CEOJensen Huang.
AI and machine learning technology is used in most of the essential applications of the 2020s, including:search engines(such asGoogle Search),targeting online advertisements,recommendation systems(offered byNetflix,YouTubeorAmazon), drivinginternet traffic,targeted advertising(AdSense,Facebook),virtual assistants(such asSiriorAlexa),autonomous vehicles(includingdrones,ADASandself-driving cars),automatic language translation(Microsoft Translator,Google Translate),facial recognition(Apple'sFaceIDorMicrosoft'sDeepFaceandGoogle'sFaceNet) andimage labeling(used byFacebook, Apple'sPhotosandTikTok). The deployment of AI may be overseen by aChief automation officer(CAO).
The application of AI in medicine and medical research has the potential to improve patient care and quality of life.[133] Viewed through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI if applications can diagnose and treat patients more accurately.[134][135]
For medical research, AI is an important tool for processing and integratingbig data. This is particularly important fororganoidandtissue engineeringdevelopment which usemicroscopyimaging as a key technique in fabrication.[136]It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[136][137]New AI tools can deepen the understanding of biomedically relevant pathways. For example,AlphaFold 2(2021) demonstrated the ability to approximate, in hours rather than months, the 3Dstructure of a protein.[138]In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[139]In 2024, researchers used machine learning to accelerate the search forParkinson's diseasedrug treatments. Their aim was to identify compounds that block the clumping, or aggregation, ofalpha-synuclein(the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[140][141]
Game playingprograms have been used since the 1950s to demonstrate and test AI's most advanced techniques.[142]Deep Bluebecame the first computer chess-playing system to beat a reigning world chess champion,Garry Kasparov, on 11 May 1997.[143]In 2011, in aJeopardy!quiz showexhibition match,IBM'squestion answering system,Watson, defeated the two greatestJeopardy!champions,Brad RutterandKen Jennings, by a significant margin.[144]In March 2016,AlphaGowon 4 out of 5 games ofGoin a match with Go championLee Sedol, becoming the firstcomputer Go-playing system to beat a professional Go player withouthandicaps. Then, in 2017, itdefeated Ke Jie, who was the best Go player in the world.[145]Other programs handleimperfect-informationgames, such as thepoker-playing programPluribus.[146]DeepMinddeveloped increasingly generalisticreinforcement learningmodels, such as withMuZero, which could be trained to play chess, Go, orAtarigames.[147]In 2019, DeepMind's AlphaStar achieved grandmaster level inStarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[148]In 2021, an AI agent competed in a PlayStationGran Turismocompetition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[149]In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseenopen-worldvideo games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[150]
Large language models, such asGPT-4,Gemini,Claude,LLaMaorMistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form ofhallucinations. They sometimes need a large database of mathematical problems to learn from, but also methods such assupervisedfine-tuning[151]or trainedclassifierswith human-annotated data to improve answers for new problems and learn from corrections.[152]A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.[153]One technique to improve their performance involves training the models to produce correctreasoningsteps, rather than just the correct result.[154]TheAlibaba Groupdeveloped a version of itsQwenmodels calledQwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems.[155]In January 2025, Microsoft proposed the techniquerStar-Maththat leveragesMonte Carlo tree searchand step-by-step reasoning, enabling a relatively small language model likeQwen-7Bto solve 53% of theAIME2024 and 90% of the MATH benchmark problems.[156]
Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such asAlphaTensor,AlphaGeometryandAlphaProofall fromGoogle DeepMind,[157]LlemmafromEleutherAI[158]orJulius.[159]
When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such asLeanto define mathematical tasks.
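For example, the informal statement "every natural number is unchanged by adding zero" might be rendered as a machine-checkable goal; the sketch below assumes Lean 4 syntax and its standard `Nat` lemmas:

```lean
-- "For every natural number n, n + 0 equals n."
example (n : Nat) : n + 0 = n := rfl

-- "Addition of natural numbers is commutative."
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```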
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[160]
Topological deep learningintegrates varioustopologicalapproaches.
Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[161]
According to Nicolas Firzli, director of theWorld Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[162]
Various countries are deploying AI military applications.[163]The main applications enhancecommand and control, communications, sensors, integration and interoperability.[164]Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous andautonomous vehicles.[163]AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions,target acquisition, coordination and deconfliction of distributedJoint Firesbetween networked combat vehicles, both human operated andautonomous.[164]
AI has been used in military operations in Iraq, Syria, Israel and Ukraine.[163][165][166][167]
Generative artificial intelligence(Generative AI, GenAI,[168]or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data.[169][170][171]These modelslearnthe underlying patterns and structures of theirtraining dataand use them to produce new data[172][173]based on the input, which often comes in the form of natural languageprompts.[174][175]
Generative AI tools have become more common since an "AI boom" in the 2020s. This boom was made possible by improvements intransformer-baseddeepneural networks, particularlylarge language models(LLMs). Major tools includechatbotssuch asChatGPT,DeepSeek,Copilot,Gemini,Llama, andGrok;text-to-imageartificial intelligence image generationsystems such asStable Diffusion,Midjourney, andDALL-E; andtext-to-videoAI generators such asSora.[176][177][178][179]Technology companies developing generative AI includeOpenAI,Anthropic,Microsoft,Google,DeepSeek, andBaidu.[180][181][182]
Artificially intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[186][187][188]
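A minimal sketch of this perceive-decide-act cycle is given below in Python; the thermostat-style environment and agent are hypothetical illustrations, not any particular framework:

```python
class ThermostatAgent:
    """Toy agent: perceives a temperature, decides, and acts on the environment."""
    def __init__(self, target_temp=21.0):
        self.target = target_temp

    def perceive(self, env):
        return env["temperature"]                    # read a (simulated) sensor

    def decide(self, temperature):
        return "heat_on" if temperature < self.target else "heat_off"

    def act(self, action, env):
        env["heater_on"] = (action == "heat_on")

env = {"temperature": 18.0, "heater_on": False}
agent = ThermostatAgent()
for step in range(5):                                # bounded loop: finite time and resources
    action = agent.decide(agent.perceive(env))
    agent.act(action, env)
    env["temperature"] += 0.5 if env["heater_on"] else -0.2
    print(step, action, round(env["temperature"], 1))
```

Real systems replace the hand-written decision rule with learned policies and much richer perception, but the basic loop is the same.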
Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction,[189]AI-integrated sex toys (e.g.,teledildonics),[190]AI-generated sexual education content,[191]and AI agents that simulate sexual and romantic partners (e.g.,Replika).[192]AI is also used for the production of non-consensualdeepfake pornography, raising significant ethical and legal concerns.[193]
AI technologies have also been used to attempt to identifyonline gender-based violenceand onlinesexual groomingof minors.[194][195]
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[196]A few examples areenergy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions,foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate whether and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos, or social media. Further, AI can provide real-time information on evacuation conditions.[197][198][199]
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conductpredictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
During the2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creatingdeepfakesof allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.[200]
AI has potential benefits and potential risks.[201] AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".[202] However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[203] Systems deployed in production may fail to account for ethics and bias in their training processes, especially when the underlying deep learning algorithms are inherently unexplainable.[204]
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns aboutprivacy,surveillanceandcopyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video, or audio.[205]For example, in order to buildspeech recognitionalgorithms,Amazonhas recorded millions of private conversations and allowedtemporary workersto listen to and transcribe some of them.[206]Opinions about this widespread surveillance range from those who see it as anecessary evilto those for whom it is clearlyunethicaland a violation of theright to privacy.[207]
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such asdata aggregation,de-identificationanddifferential privacy.[208]Since 2016, some privacy experts, such asCynthia Dwork, have begun to view privacy in terms offairness.Brian Christianwrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[209]
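As an illustration of one privacy-preserving technique mentioned above, the sketch below implements the Laplace mechanism for differential privacy on a simple counting query; the parameter values and data are illustrative only:

```python
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Return a differentially private count using the Laplace mechanism."""
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1   # adding or removing one person changes the count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise   # smaller epsilon = more noise = stronger privacy

ages = [23, 35, 41, 29, 62, 55]
print(private_count(ages, lambda a: a >= 40))   # noisy answer to "how many are 40 or older?"
```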
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[210][211]Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[212]In 2023, leading authors (includingJohn GrishamandJonathan Franzen) sued AI companies for using their work to train generative AI.[213][214]Another discussed approach is to envision a separatesui generissystem of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[215]
The commercial AI scene is dominated byBig Techcompanies such asAlphabet Inc.,Amazon,Apple Inc.,Meta Platforms, andMicrosoft.[216][217][218]Some of these players already own the vast majority of existingcloud infrastructureandcomputingpower fromdata centers, allowing them to entrench further in the marketplace.[219][220]
In January 2024, theInternational Energy Agency(IEA) releasedElectricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[221]This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[222]
Prodigious power consumption by AI is responsible for the growth of fossil fuel use, and might delay the closing of obsolete, carbon-emitting coal plants. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) voracious consumers of electric power. Projected electricity consumption is so immense that there is concern it will be met regardless of the source. A ChatGPT search uses roughly 10 times as much electrical energy as a Google search. The large firms are racing to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that, in the long view, AI will eventually be kinder to the environment, but they need the energy now. According to technology firms, AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and can track overall carbon emissions.[223]
A 2024Goldman SachsResearch Paper,AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[224]Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[225]
In 2024, the Wall Street Journal reported that big AI companies had begun negotiations with US nuclear power providers to supply electricity to their data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for US$650 million.[226] Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.[227]
In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of the electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes, including extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this would be the first-ever US re-commissioning of a nuclear plant), over 835 megawatts of power – enough for 800,000 homes – will be produced. The cost of re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act.[228] The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon, who was responsible for the Exelon spinoff of Constellation.[229]
Having issued its last approval in September 2023, Taiwan suspended in 2024 the approval of data centers north of Taoyuan with a capacity of more than 5 MW, due to power supply shortages.[230] Taiwan aims to phase out nuclear power by 2025.[230] Singapore, by contrast, banned the opening of new data centers in 2019 due to constraints on electric power, but lifted this ban in 2022.[230]
Although most nuclear plants in Japan have been shut down since the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, the cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new generative AI data center.[231] Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap, and stable source of power for AI.[231]
On 1 November 2024, theFederal Energy Regulatory Commission(FERC) rejected an application submitted byTalen Energyfor approval to supply some electricity from the nuclear power stationSusquehannato Amazon's data center.[232]According to the Commission ChairmanWillie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.[232]
In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tonnes. By 2035, these emissions could rise to 300–500 million tonnes, depending on what measures are taken. This is below 1.5% of energy sector emissions. The emissions reduction potential of AI was estimated at 5% of energy sector emissions, but rebound effects (for example, if people switch from public transport to autonomous cars) could reduce it.[233]
YouTube,Facebookand others userecommender systemsto guide users to more content. These AI programs were given the goal ofmaximizinguser engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choosemisinformation,conspiracy theories, and extremepartisancontent, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people intofilter bubbleswhere they received multiple versions of the same misinformation.[234]This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[235]The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.[236]
In 2022,generative AIbegan to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[237]One such potential malicious use is deepfakes forcomputational propaganda.[238]AI pioneerGeoffrey Hintonexpressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[239]
AI researchers atMicrosoft,OpenAI, universities and other organisations have suggested using "personhood credentials" as a way to overcome online deception enabled by AI models.[240]
Machine learning applications will bebiased[k]if they learn from biased data.[242]The developers may not be aware that the bias exists.[243]Bias can be introduced by the waytraining datais selected and by the way a model is deployed.[244][242]If a biased algorithm is used to make decisions that can seriouslyharmpeople (as it can inmedicine,finance,recruitment,housingorpolicing) then the algorithm may causediscrimination.[245]The field offairnessstudies how to prevent harms from algorithmic biases.
On June 28, 2015,Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[246]a problem called "sample size disparity".[247]Google "fixed" this problem by preventing the system from labellinganythingas a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[248]
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different: the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend.[249] In 2017, several researchers[l] showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[251]
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[252]Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[253]
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions asrecommendations, some of these "recommendations" will likely be racist.[254]Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will bebetterthan the past. It is descriptive rather than prescriptive.[m]
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[247]
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category isdistributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negativestereotypesor render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict withanti-discrimination laws.[241]
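As a concrete (and deliberately simplified) example of a distributive-fairness check, the sketch below compares the rate of favourable decisions across two groups, a quantity sometimes called demographic parity; the data are invented for illustration:

```python
def selection_rates(decisions, groups):
    """Rate of favourable decisions (1 = favourable) per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "B", "B", "B", "A", "B"]
rates = selection_rates(decisions, groups)
print(rates)                                          # {'A': 0.75, 'B': 0.25}
print(max(rates.values()) - min(rates.values()))      # disparity; 0 would indicate parity
```

Other definitions of fairness (equalised error rates, calibration, procedural criteria) would require different checks, and they can conflict with one another, which is part of why operationalising fairness is difficult.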
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems are demonstrated to be free of bias mistakes, they be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data be curtailed.[256]
Many AI systems are so complex that their designers cannot explain how they reach their decisions.[257] This is particularly true of deep neural networks, which contain a large number of non-linear relationships between inputs and outputs. Nevertheless, some popular explainability techniques exist.[258]
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with aruleras "cancerous", because pictures of malignancies typically include a ruler to show the scale.[259]Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[260]
People who have been harmed by an algorithm's decision have a right to an explanation.[261]Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union'sGeneral Data Protection Regulationin 2016 included an explicit statement that this right exists.[n]Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[262]
DARPAestablished theXAI("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[263]
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[264] LIME can locally approximate a model's outputs with a simpler, interpretable model.[265] Multitask learning provides a large number of outputs in addition to the target classification; these other outputs can help developers deduce what the network has learned.[266] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[267] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[268]
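To illustrate the general idea of attributing a model's behaviour to its input features (related in spirit to SHAP and LIME, though much simpler), the sketch below uses scikit-learn's permutation importance, which measures how much accuracy drops when each feature is shuffled; the dataset and model are arbitrary examples:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record the average drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```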
Artificial intelligence provides a number of tools that are useful tobad actors, such asauthoritarian governments,terrorists,criminalsorrogue states.
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction.[270] Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person.[270] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.[271] By 2015, over fifty countries were reported to be researching battlefield robots.[272]
AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware.[273] All these technologies have been available since 2020 or earlier; AI facial recognition systems are already being used for mass surveillance in China.[274][275]
There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[276]
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[277]
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[278]A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-termunemployment, but they generally agree that it could be a net benefit ifproductivitygains areredistributed.[279]Risk estimates vary; for example, in the 2010s, Michael Osborne andCarl Benedikt Freyestimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][281]The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[277]In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[282][283]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence;The Economiststated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[284]Jobs at extreme risk range fromparalegalsto fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[285]
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward byJoseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[286]
It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicistStephen Hawkingstated, "spell the end of the human race".[287]This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q]These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[289] Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[290] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[291]
Second,Yuval Noah Harariargues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things likeideologies,law,government,moneyand theeconomyare built onlanguage; they exist because there are stories that billions of people believe. The current prevalence ofmisinformationsuggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[292]
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[293]Personalities such asStephen Hawking,Bill Gates, andElon Musk,[294]as well as AI pioneers such asYoshua Bengio,Stuart Russell,Demis Hassabis, andSam Altman, have expressed concerns about existential risk from AI.
In May 2023,Geoffrey Hintonannounced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google".[295]He notably mentioned risks of anAI takeover,[296]and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[297]
In 2023, many leading AI experts endorsedthe joint statementthat "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[298]
Some other researchers were more optimistic. AI pioneerJürgen Schmidhuberdid not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[299]While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[300][301]Andrew Ngalso argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[302]Yann LeCun"scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[303]In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[304]However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[305]
Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans.Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[306]
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[307]The field of machine ethics is also called computational morality,[307]and was founded at anAAAIsymposium in 2005.[308]
Other approaches includeWendell Wallach's "artificial moral agents"[309]andStuart J. Russell'sthree principlesfor developing provably beneficial machines.[310]
Active organizations in the AI open-source community includeHugging Face,[311]Google,[312]EleutherAIandMeta.[313]Various AI models, such asLlama 2,MistralorStable Diffusion, have been made open-weight,[314][315]meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freelyfine-tuned, which allows companies to specialize them with their own data and for their own use-case.[316]Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitatebioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[317]
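A minimal sketch of using an open-weight model, assuming the Hugging Face `transformers` library is installed and the model's weights have been downloaded, is shown below; the model identifier is illustrative and any open-weight causal language model could be substituted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-v0.1"   # illustrative open-weight model identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short continuation from a prompt.
inputs = tokenizer("Open-weight models can be fine-tuned to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are local, the same model object can subsequently be fine-tuned on custom data, which is both the main benefit and the main misuse risk discussed above.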
Artificial intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and based on the SUM values, outlines four main ethical dimensions.[318][319]
Other developments in ethical frameworks include those decided upon during theAsilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[320]however, these principles are not without criticism, especially regards to the people chosen to contribute to these frameworks.[321]
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[322]
In 2024, the UK AI Safety Institute released a testing toolset called 'Inspect' for AI safety evaluations, available under an MIT open-source licence, freely downloadable on GitHub, and extensible with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[323]
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[324] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[325] According to Stanford's AI Index, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[326][327] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[328] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, the United Arab Emirates, the U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[328] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[328] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[329] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[330] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[331] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[332]
In a 2022Ipsossurvey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[326]A 2023Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[333]In a 2023Fox Newspoll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[334][335]
In November 2023, the first globalAI Safety Summitwas held inBletchley Parkin the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[336]28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[337][338]In May 2024 at theAI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[339][340]
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[342][343] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain".[r] They developed several areas of research that would become part of AI,[345] such as McCulloch and Pitts's design for "artificial neurons" in 1943,[116] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.[346][343]
The field of AI research was founded ata workshopatDartmouth Collegein 1956.[s][6]The attendees became the leaders of AI research in the 1960s.[t]They and their students produced programs that the press described as "astonishing":[u]computers were learningcheckersstrategies, solving word problems in algebra, provinglogical theoremsand speaking English.[v][7]Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[343]
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine withgeneral intelligenceand considered this the goal of their field.[350]In 1965Herbert Simonpredicted, "machines will be capable, within twenty years, of doing any work a man can do".[351]In 1967Marvin Minskyagreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[352]They had, however, underestimated the difficulty of the problem.[w]In 1974, both the U.S. and British governments cut off exploratory research in response to thecriticismofSir James Lighthill[354]and ongoing pressure from the U.S. Congress tofund more productive projects.[355]Minsky's andPapert's bookPerceptronswas understood as proving thatartificial neural networkswould never be useful for solving real-world tasks, thus discrediting the approach altogether.[356]The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9]
In the early 1980s, AI research was revived by the commercial success ofexpert systems,[357]a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan'sfifth generation computerproject inspired the U.S. and British governments to restore funding foracademic research.[8]However, beginning with the collapse of theLisp Machinemarket in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10]
Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition,[358] and began to look into "sub-symbolic" approaches.[359] Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive.[x] Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[87][364] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[365] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[366]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such asstatistics,economicsandmathematics).[367]By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as theAI effect).[368]However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield ofartificial general intelligence(or "AGI"), which had several well-funded institutions by the 2010s.[68]
Deep learningbegan to dominate industry benchmarks in 2012 and was adopted throughout the field.[11]For many specific tasks, other methods were abandoned.[y]Deep learning's success was based on both hardware improvements (faster computers,[370]graphics processing units,cloud computing[371]) and access tolarge amounts of data[372](including curated datasets,[371]such asImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z]The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[328]
In 2016, issues offairnessand the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. Thealignment problembecame a serious field of academic study.[305]
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat Go champion Lee Sedol; the program was taught only the game's rules and developed its strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[373] ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months.[374] It marked what is widely regarded as AI's breakout year, bringing AI into the public consciousness.[375] These programs, and others, inspired an aggressive AI boom, in which large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone, and about 20% of new U.S. Computer Science PhD graduates have specialized in "AI".[376] About 800,000 "AI"-related U.S. job openings existed in 2022.[377] According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies.[378]
Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines.[379]Another major focus has been whether machines can be conscious, and the associated ethical implications.[380]Many other topics in philosophy are relevant to AI, such asepistemologyandfree will.[381]Rapid advancements have intensified public discussions on the philosophy andethics of AI.[380]
Alan Turingwrote in 1950 "I propose to consider the question 'can machines think'?"[382]He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour".[382]He devised the Turing test, which measures the ability of a machine to simulate human conversation.[346]Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes thatwe can not determine these things about other peoplebut "it is usual to have a polite convention that everyone thinks."[383]
RussellandNorvigagree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1]However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineeringtexts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly likepigeonsthat they can fool other pigeons.'"[385]AI founderJohn McCarthyagreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[386]
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[387] Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems".[388] The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the machine's "intelligence"; under this view, no further philosophical discussion is required, and such discussion may not even be possible.
Another definition has been adopted by Google,[389]a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Some authors have suggested that, in practice, the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI,[390] and with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did "not actually use AI in a material way".[391]
No established unifying theory orparadigmhas guided AI research for most of its history.[aa]The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostlysub-symbolic,softandnarrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.
Symbolic AI(or "GOFAI")[393]simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed thephysical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[394]
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning.Moravec's paradoxis the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[395]PhilosopherHubert Dreyfushadarguedsince the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[396]Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]
The issue is not resolved:sub-symbolicreasoning can make many of the same inscrutable mistakes that human intuition does, such asalgorithmic bias. Critics such asNoam Chomskyargue continuing research into symbolic AI will still be necessary to attain general intelligence,[398][399]in part because sub-symbolic AI is a move away fromexplainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field ofneuro-symbolic artificial intelligenceattempts to bridge the two approaches.
"Neats" hope that intelligent behavior is described using simple, elegant principles (such aslogic,optimization, orneural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[400]but eventually was seen as irrelevant. Modern AI has elements of both.
Finding a provably correct or optimal solution isintractablefor many important problems.[15]Soft computing is a set of techniques, includinggenetic algorithms,fuzzy logicand neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
AI researchers are divided as to whether to pursue the goals of artificial general intelligence andsuperintelligencedirectly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[401][402]General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.
Thephilosophy of minddoes not know whether a machine can have amind,consciousnessandmental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence.RussellandNorvigadd that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[403]However, the question has become central to the philosophy of mind. It is also typically the central question at issue inartificial intelligence in fiction.
David Chalmersidentified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[404]The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how thisfeelsor why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While humaninformation processingis easy to explain, humansubjective experienceis difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person toknow what red looks like.[405]
Computationalism is the position in thephilosophy of mindthat the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to themind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophersJerry FodorandHilary Putnam.[406]
PhilosopherJohn Searlecharacterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac]Searle challenges this claim with hisChinese roomargument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.[410]
It is difficult or impossible to reliably evaluate whether an advancedAI is sentient(has the ability to feel), and if so, to what degree.[411]But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[412][413]Sapience(a set of capacities related to high intelligence, such as discernment orself-awareness) may provide another moral basis for AI rights.[412]Robot rightsare also sometimes proposed as a practical way to integrate autonomous agents into society.[414]
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[415]Critics argued in 2018 that granting rights to AI systems would downplay the importance ofhuman rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own.[416][417]
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be amoral blind spotanalogous toslaveryorfactory farming, which could lead tolarge-scale sufferingif sentient AI is created and carelessly exploited.[413][412]
Asuperintelligenceis a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[402]If research intoartificial general intelligenceproduced sufficiently intelligent software, it might be able toreprogram and improve itself. The improved software would be even better at improving itself, leading to whatI. J. Goodcalled an "intelligence explosion" andVernor Vingecalled a "singularity".[418]
However, technologies cannot improve exponentially indefinitely, and typically follow anS-shaped curve, slowing when they reach the physical limits of what the technology can do.[419]
Robot designerHans Moravec, cyberneticistKevin Warwickand inventorRay Kurzweilhave predicted that humans and machines may merge in the future intocyborgsthat are more capable and powerful than either. This idea, called transhumanism, has roots in the writings ofAldous HuxleyandRobert Ettinger.[420]
Edward Fredkinargues that "artificial intelligence is the next step in evolution", an idea first proposed bySamuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon byGeorge Dysonin his 1998 bookDarwin Among the Machines: The Evolution of Global Intelligence.[421]
Arguments fordecomputinghave been raised byDan McQuillan(Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022), meaning an opposition to the sweeping application and expansion of artificial intelligence. Similar todegrowth, the approach criticizes AI as an outgrowth of the systemic issues and capitalist world we live in. It argues that a different future is possible, in which distance between people is reduced rather than increased through AI intermediaries.[422]
Thought-capable artificial beings have appeared as storytelling devices since antiquity,[423]and have been a persistent theme inscience fiction.[424]
A commontropein these works began withMary Shelley'sFrankenstein, where a human creation becomes a threat to its masters. This includes such works asArthur C. Clarke'sandStanley Kubrick's2001: A Space Odyssey(both 1968), withHAL 9000, the murderous computer in charge of theDiscovery Onespaceship, as well asThe Terminator(1984) andThe Matrix(1999). In contrast, the rare loyal robots such as Gort fromThe Day the Earth Stood Still(1951) and Bishop fromAliens(1986) are less prominent in popular culture.[425]
Isaac Asimovintroduced theThree Laws of Roboticsin many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[426]while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[427]
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that havethe ability to feel, and thus to suffer. This appears inKarel Čapek'sR.U.R., the filmsA.I. Artificial IntelligenceandEx Machina, as well as the novelDo Androids Dream of Electric Sheep?, byPhilip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[428]
| https://en.wikipedia.org/wiki/Artificial_intelligence
GhostNet(simplified Chinese:幽灵网;traditional Chinese:幽靈網;pinyin:YōuLíngWǎng) is the name given by researchers at theInformation Warfare Monitorto a large-scalecyber spying[1][2]operation discovered in March 2009. The operation is likely associated with anadvanced persistent threat, or a network actor that spies undetected.[3]Its command and control infrastructure is based mainly in thePeople's Republic of Chinaand GhostNet has infiltrated high-value political, economic and media locations[4]in 103 countries. Computer systems belonging toembassies, foreign ministries and other government offices, and theDalai Lama'sTibetanexile centers in India, London and New York City were compromised.
GhostNet was discovered and named following a 10-month investigation by theInfowar Monitor(IWM), carried out after IWM researchers approached theDalai Lama's representative in Geneva[5]suspecting that their computer network had been infiltrated.[6]The IWM is composed of researchers from The SecDev Group, a Canadian consultancy, and theCitizen Lab,Munk School of Global Affairsat theUniversity of Toronto; the research findings were published in theInfowar Monitor, an affiliated publication.[7]Researchers from theUniversity of Cambridge'sComputer Laboratory, supported by theInstitute for Information Infrastructure Protection,[8]also contributed to the investigation at one of the three locations inDharamshala, where the Tibetan government-in-exile is located. The discovery of the 'GhostNet', and details of its operations, were reported byThe New York Timeson March 29, 2009.[7][9]Investigators focused initially on allegations of Chinese cyber-espionage against theTibetan exilecommunity, such as instances where email correspondence and other data were extracted.[10]
Compromised systems were discovered in theembassiesofIndia,South Korea,Indonesia,Romania,Cyprus,Malta,Thailand,Taiwan,Portugal,GermanyandPakistanand the office of the Prime Minister ofLaos. Theforeign ministriesofIran,Bangladesh,Latvia,Indonesia,Philippines,Brunei,BarbadosandBhutanwere also targeted.[1][11]No evidence was found thatU.S.orU.K.government offices were infiltrated, although aNATOcomputer was monitored for half a day and the computers of theIndian embassyinWashington, D.C., were infiltrated.[4][11][12]
Since its discovery, GhostNet has attacked other government networks, for example Canadian official financial departments in early 2011, forcing them off-line. Governments commonly do not admit such attacks, which must be verified by official but anonymous sources.[13]
Emails containing contextually relevant information are sent to target organizations. These emails contain malicious attachments that, when opened, enable aTrojan horseto access the system.[citation needed]This Trojan connects back to a control server, usually located in China, to receive commands. The infected computer will then execute the command specified by the control server. Occasionally, the command specified by the control server will cause the infected computer to download and install a Trojan known asGh0st Ratthat allows attackers to gain complete, real-time control of computers runningMicrosoft Windows.[4]Such a computer can be controlled or inspected by attackers, and the software even has the ability to turn on camera and audio-recording functions of infected computers, enabling attackers to perform surveillance.[7]
The researchers from the IWM stated they could not conclude that the Chinese government was responsible for the spy network.[14]However, a report from researchers at theUniversity of Cambridgesays they believe that the Chinese government is behind the intrusions they analyzed at the Office of the Dalai Lama.[15]
Researchers have also noted the possibility that GhostNet was an operation run by private citizens in China for profit or for patriotic reasons, or created by intelligence agencies from other countries such as Russia or the United States.[7]The Chinese government has stated that China "strictly forbids any cyber crime."[1][10]
The "Ghostnet Report" documents several unrelated infections at Tibetan-related organizations in addition to the Ghostnet infections. By using the email addresses provided by the IWM report, Scott J. Henderson had managed to trace one of the operators of one of the infections (non-Ghostnet) toChengdu. He identifies the hacker as a 27-year-old man who had attended theUniversity of Electronic Science and Technology of China, and currently connected with the Chinese hackerunderground.[16]
Despite the lack of evidence to pinpoint the Chinese government as responsible for intrusions against Tibetan-related targets, researchers at Cambridge have found actions taken by Chinese government officials that corresponded with the information obtained via computer intrusions. One such incident involved a diplomat who was pressured by Beijing after receiving an email invitation to a visit with theDalai Lamafrom his representatives.[15]
Another incident involved a Tibetan woman who was interrogated by Chinese intelligence officers and was shown transcripts of her online conversations.[14][17]However, there are other possible explanations for this event. Drelwa usesQQand other instant messengers to communicate with Chinese Internet users. In 2008, IWM found that TOM-Skype, the Chinese version of Skype, was logging and storing text messages exchanged between users. It is possible that the Chinese authorities acquired the chat transcripts through these means.[18]
IWM researchers have also found that when detected, GhostNet is consistently controlled from IP addresses located on the island ofHainan, China, and have pointed out that Hainan is home to the Lingshui signals intelligence facility and the Third Technical Department of the People's Liberation Army.[4]Furthermore, one of GhostNet's four control servers has been revealed to be agovernment server.[clarify][19] | https://en.wikipedia.org/wiki/GhostNet |
This is alist of abbreviations used in medical prescriptions, including hospital orders (the patient-directed part of which is referred to assig codes). This list does not include abbreviations for pharmaceuticals or drug name suffixes such as CD, CR, ER, XT (SeeTime release technology § List of abbreviationsfor those).
Capitalisationand the use offull stopsare a matter ofstyle. In the list, abbreviations in English are capitalized whereas those in Latin are not.
These abbreviations can be verified inreference works, both recent[1]and older.[2][3][4]Some of those works (such as Wyeth 1901[4]) are so comprehensive that their entire content cannot be reproduced here. This list includes all that are frequently encountered in today'shealth careinEnglish-speakingregions.
Some of these are obsolete; others remain current.
There is a risk of serious consequences when abbreviations are misread or misinterpreted. In the United Kingdom, all prescriptions should be in English without abbreviation (apart from some units such as mg and mL; micrograms and nanograms shouldnotbe abbreviated).[5]In the United States, abbreviations which aredeprecatedby theJoint Commissionare marked in red; those abbreviations which are deprecated by other organizations, such as theInstitute for Safe Medication Practices(ISMP) and theAmerican Medical Association(AMA), are marked in orange.
The Joint Commission is an independent, non-profit, non-governmental organization which offersaccreditationto hospitals and other health care organizations in the United States. While their recommendations are not binding on U.S. physicians, they are required of organizations who wish accreditation by the Joint Commission.
| https://en.wikipedia.org/wiki/List_of_abbreviations_used_in_medical_prescriptions
Apassword policyis a set of rules designed to enhance computer security by encouraging users to employ strongpasswordsand use them properly. A password policy is often part of an organization's official regulations and may be taught as part ofsecurity awarenesstraining. Either the password policy is merely advisory, or the computer systems force users to comply with it. Some governments have national authentication frameworks[1]that define requirements for user authentication to government services, including requirements for passwords.
The United States Department of Commerce'sNational Institute of Standards and Technology(NIST) has put out two standards for password policies which have been widely followed.
From 2004, the "NIST Special Publication 800-63. Appendix A,"[2]advised people to use irregular capitalization, special characters, and at least one numeral. This was the advice that most systems followed, and was "baked into" a number of standards that businesses needed to follow.
However, a major update in 2017 reversed much of this advice; in particular, forcing password complexity rules and regular password changes is now regarded as bad practice.[3][4]: 5.1.1.2
The key points of these are:
NIST included a rationale for the new guidelines in its Appendix A.
Typical components of a password policy include:
Many policies require a minimum password length. Eight characters is typical but may not be appropriate.[6][7][8]Longer passwords are almost always more secure, but some systems impose a maximum length for compatibility withlegacy systems.
Some policies suggest or impose requirements on what type of password a user can choose, such as:
Other systems create an initial password for the user but then require them to change it to one of their own choosing within a short interval.
Password block lists are lists of passwords that are always blocked from use. Block lists contain passwords constructed of character combinations that otherwise meet company policy, but should no longer be used because they have been deemed insecure for one or more reasons, such as being easily guessed, following a common pattern, or public disclosure from previousdata breaches. Common examples are Password1, Qwerty123, or Qaz123wsx.
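As an illustration only (not drawn from any cited policy), a minimal Python sketch of how a minimum-length rule and a block list might be enforced together; the blocked entries are the examples mentioned above and the length threshold is an assumed value:

```python
# Hypothetical sketch of a password-policy check combining a minimum
# length with a block list; the listed entries mirror the examples above.
BLOCK_LIST = {"password1", "qwerty123", "qaz123wsx"}
MIN_LENGTH = 8  # assumed policy value

def is_acceptable(password: str) -> bool:
    """Return True if the password satisfies this illustrative policy."""
    if len(password) < MIN_LENGTH:
        return False
    # Compare case-insensitively so trivial variants of blocked words are caught.
    if password.lower() in BLOCK_LIST:
        return False
    return True

print(is_acceptable("Qwerty123"))                      # False: on the block list
print(is_acceptable("correct horse battery staple"))   # True: long passphrase
```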
Some policies require users to change passwords periodically, often every 90 or 180 days. The benefit of password expiration, however, is debatable.[9][10]Systems that implement such policies sometimes prevent users from picking a password too close to a previous selection.[11]
This policy can often backfire. Some users find it hard to devise "good" passwords that are also easy to remember, so if people are required to choose many passwords because they have to change them often, they end up using much weaker passwords; the policy also encourages users to write passwords down. Also, if the policy prevents a user from repeating a recent password, this requires that there is a database in existence of everyone's recent passwords (or theirhashes) instead of having the old ones erased from memory. Finally, users may change their password repeatedly within a few minutes, and then change back to the one they really want to use, circumventing the password change policy altogether.
The human aspects of passwords must also be considered. Unlike computers, human users cannot delete one memory and replace it with another. Consequently, frequently changing a memorized password is a strain on the human memory, and most users resort to choosing a password that is relatively easy to guess (SeePassword fatigue). Users are often advised to usemnemonicdevices to remember complex passwords. However, if the password must be repeatedly changed, mnemonics are useless because the user would not remember which mnemonic to use. Furthermore, the use of mnemonics (leading to passwords such as "2BOrNot2B") makes the password easier to guess.
Administration factors can also be an issue. Users sometimes have older devices that require a password that was used before the password duration expired.[clarification needed]In order to manage these older devices, users may have to resort to writing down all old passwords in case they need to log into an older device.
Requiring a very strong password and not requiring it be changed is often better.[12]However, this approach does have a major drawback: if an unauthorized person acquires a password and uses it without being detected, that person may have access for an indefinite period.
It is necessary to weigh these factors: the likelihood of someone guessing a password because it is weak, versus the likelihood of someone managing to steal, or otherwise acquire without guessing, a stronger password.
Bruce Schneierargues that "pretty much anything that can be remembered can be cracked", and recommends a scheme that uses passwords which will not appear in any dictionaries.[13]
Password policies may include progressive sanctions beginning with warnings and ending with possible loss of computer privileges or job termination. Where confidentiality is mandated by law, e.g. withclassified information, a violation of password policy could be a criminal offense in some jurisdictions.[14]Some[who?]consider a convincing explanation of the importance of security to be more effective than threats of sanctions[citation needed].
The level of password strength required depends, among other things, on how easy it is for an attacker to submit multiple guesses. Some systems limit the number of times a user can enter an incorrect password before some delay is imposed or the account is frozen. At the other extreme, some systems make available aspecially hashedversion of the password, so that anyone can check its validity. When this is done, an attacker can try passwords very rapidly; so much stronger passwords are necessary for reasonable security. (Seepassword crackingandpassword length equation.) Stricter requirements are also appropriate for accounts with higher privileges, such as root or system administrator accounts.
Password policies are usually a tradeoff between theoretical security and the practicalities of human behavior. For example:
A 2010 examination of the password policies of 75 different websites concludes that security only partly explains more stringent policies:monopolyproviders of a service, such as government sites, have more stringent policies than sites where consumers have choice (e.g. retail sites and banks). The study concludes that sites with more stringent policies "do not have greater security concerns, they are simply better insulated from the consequences from poor usability."[15]
Other approaches are available that are generally considered to be more secure than simple passwords. These include use of asecurity tokenorone-time passwordsystem, such asS/Key, ormulti-factor authentication.[16]However, these systems heighten the tradeoff between security and convenience: according toShuman Ghosemajumder, these systems all improve security, but come "at the cost of moving the burden to the end user."[17] | https://en.wikipedia.org/wiki/Password_policy |
Incomputational complexity theory, a problem isNP-completewhen: it is a decision problem whose candidate solutions can be checked quickly (in polynomial time), and every other problem whose solutions can be checked quickly can be transformed into it in polynomial time, so that a fast algorithm for it would yield fast algorithms for all such problems.
The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, "nondeterministic" refers tonondeterministic Turing machines, a way of mathematically formalizing the idea of a brute-force search algorithm.Polynomial timerefers to an amount of time that is considered "quick" for adeterministic algorithmto check a single solution, or for a nondeterministic Turing machine to perform the whole search. "Complete" refers to the property of being able to simulate everything in the samecomplexity class.
More precisely, each input to the problem should be associated with a set of solutions of polynomial length, the validity of each of which can be tested quickly (inpolynomial time),[2]such that the output for any input is "yes" if the solution set is non-empty and "no" if it is empty. The complexity class of problems of this form is calledNP, an abbreviation for "nondeterministic polynomial time". A problem is said to beNP-hardif everything in NP can be transformed in polynomial time into it even though it may not be in NP. A problem is NP-complete if it is both in NP and NP-hard. The NP-complete problems represent the hardest problems in NP. If some NP-complete problem has a polynomial time algorithm, all problems in NP do. The set of NP-complete problems is often denoted byNP-CorNPC.
Although a solution to an NP-complete problem can beverified"quickly", there is no known way tofinda solution quickly. That is, the time required to solve the problem using any currently knownalgorithmincreases rapidly as the size of the problem grows. As a consequence, determining whether it is possible to solve these problems quickly, called theP versus NP problem, is one of the fundamentalunsolved problems in computer sciencetoday.
While a method for computing the solutions to NP-complete problems quickly remains undiscovered,computer scientistsandprogrammersstill frequently encounter NP-complete problems. NP-complete problems are often addressed by usingheuristicmethods andapproximation algorithms.
NP-complete problems are inNP, the set of alldecision problemswhose solutions can be verified in polynomial time;NPmay be equivalently defined as the set of decision problems that can be solved in polynomial time on anon-deterministic Turing machine. A problem p in NP is NP-complete if every other problem in NP can be transformed (or reduced) into p in polynomial time.[citation needed]
It is not known whether every problem in NP can be quickly solved—this is called theP versus NP problem. But ifany NP-complete problemcan be solved quickly, thenevery problem in NPcan, because the definition of an NP-complete problem states that every problem in NP must be quickly reducible to every NP-complete problem (that is, it can be reduced in polynomial time). Because of this, it is often said that NP-complete problems areharderormore difficultthan NP problems in general.[citation needed]
A decision problem C is NP-complete if:[citation needed](1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time.
C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time.
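For example, a certificate for the Boolean satisfiability problem is a truth assignment, and checking it against every clause takes time linear in the size of the formula. A minimal sketch in Python, with the clause encoding chosen here purely for illustration:

```python
# Verify a candidate solution (certificate) for Boolean satisfiability.
# A CNF formula is encoded, for illustration, as a list of clauses,
# each clause a list of signed integers: 3 means x3, -3 means NOT x3.
def verify_sat(clauses, assignment):
    """Check in polynomial (here linear) time that the assignment
    satisfies every clause; assignment maps variable index -> bool."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is left unsatisfied
    return True

# (x1 OR NOT x2) AND (x2 OR x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: False, 3: True}))    # True
print(verify_sat(formula, {1: False, 2: False, 3: False}))  # False
```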
Note that a problem satisfying condition 2 is said to beNP-hard, whether or not it satisfies condition 1.[4]
A consequence of this definition is that if we had a polynomial time algorithm (on aUTM, or any otherTuring-equivalentabstract machine) for C, we could solve all problems in NP in polynomial time.
The concept of NP-completeness was introduced in 1971 (seeCook–Levin theorem), though the termNP-completewas introduced later. At the 1971STOCconference, there was a fierce debate between the computer scientists about whether NP-complete problems could be solved in polynomial time on adeterministicTuring machine.John Hopcroftbrought everyone at the conference to a consensus that the question of whether NP-complete problems are solvable in polynomial time should be put off to be solved at some later date, since nobody had any formal proofs for their claims one way or the other.[citation needed]This is known as "the question of whether P=NP".
Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the greatunsolved problems of mathematics. TheClay Mathematics Instituteis offering a US$1 million reward (Millennium Prize) to anyone who has a formal proof that P=NP or that P≠NP.[5]
The existence of NP-complete problems is not obvious. TheCook–Levin theoremstates that theBoolean satisfiability problemis NP-complete, thus establishing that such problems do exist. In 1972,Richard Karpproved that several other problems were also NP-complete (seeKarp's 21 NP-complete problems); thus, there is a class of NP-complete problems (besides the Boolean satisfiability problem). Since the original results, thousands of other problems have been shown to be NP-complete by reductions from other problems previously shown to be NP-complete; many of these problems are collected inGarey & Johnson (1979).
The easiest way to prove that some new problem is NP-complete is first to prove that it is in NP, and then to reduce some known NP-complete problem to it. Therefore, it is useful to know a variety of NP-complete problems. The list below contains some well-known problems that are NP-complete when expressed as decision problems.
To the right is a diagram of some of the problems and thereductionstypically used to prove their NP-completeness. In this diagram, problems are reduced from bottom to top. Note that this diagram is misleading as a description of the mathematical relationship between these problems, as there exists apolynomial-time reductionbetween any two NP-complete problems; but it indicates where demonstrating this polynomial-time reduction has been easiest.
There is often only a small difference between a problem in P and an NP-complete problem. For example, the3-satisfiabilityproblem, a restriction of the Boolean satisfiability problem, remains NP-complete, whereas the slightly more restricted2-satisfiabilityproblem is in P (specifically, it isNL-complete), but the slightly more general max. 2-sat. problem is again NP-complete. Determining whether a graph can be colored with 2 colors is in P, but with 3 colors is NP-complete, even when restricted toplanar graphs. Determining if a graph is acycleor isbipartiteis very easy (inL), but finding a maximum bipartite or a maximum cycle subgraph is NP-complete. A solution of theknapsack problemwithin any fixed percentage of the optimal solution can be computed in polynomial time, but finding the optimal solution is NP-complete.
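For instance, the 2-coloring case mentioned above amounts to testing whether the graph is bipartite, which a breadth-first search settles in linear time. A sketch in Python (the adjacency-list encoding is an illustrative choice):

```python
from collections import deque

def is_two_colorable(adj):
    """Breadth-first 2-coloring test; adj maps each vertex to its neighbors.
    Runs in time linear in the number of vertices and edges, so the
    2-color decision problem is in P (3 colors is NP-complete)."""
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return False  # odd cycle: not bipartite
    return True

square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_two_colorable(square))    # True
print(is_two_colorable(triangle))  # False
```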
An interesting example is thegraph isomorphism problem, thegraph theoryproblem of determining whether agraph isomorphismexists between two graphs. Two graphs areisomorphicif one can betransformedinto the other simply by renamingvertices. Consider these two problems: Graph Isomorphism (is graph G1 isomorphic to graph G2?) and Subgraph Isomorphism (is graph G1 isomorphic to a subgraph of graph G2?).
The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete, though it is in NP. This is an example of a problem that is thought to behard, but is not thought to be NP-complete. This class is calledNP-Intermediate problemsand exists if and only if P≠NP.
At present, all known algorithms for NP-complete problems require time that issuperpolynomialin the input size. Thevertex coverproblem has an algorithm running in time O(1.2738^k + nk)[6]for some k > 0, and it is unknown whether there are any faster algorithms.
The following techniques can be applied to solve computational problems in general, and they often give rise to substantially faster algorithms:
One example of a heuristic algorithm is a suboptimal O(n log n)greedy coloring algorithmused forgraph coloringduring theregister allocationphase of some compilers, a technique calledgraph-coloring global register allocation. Each vertex is a variable, edges are drawn between variables which are being used at the same time, and colors indicate the register assigned to each variable. Because mostRISCmachines have a fairly large number of general-purpose registers, even a heuristic approach is effective for this application.
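A minimal sketch of such a greedy coloring heuristic on an interference graph; the variable names, the visiting order, and the data are illustrative assumptions, and the result is not guaranteed to use the fewest possible registers:

```python
def greedy_color(interference):
    """Assign each variable (vertex) the smallest 'register' (color) not
    already used by a neighbor. A fast heuristic; it may use more colors
    than the optimum, which is acceptable when many registers exist."""
    assignment = {}
    for var in interference:                      # visiting order is a heuristic choice
        used = {assignment[n] for n in interference[var] if n in assignment}
        reg = 0
        while reg in used:
            reg += 1
        assignment[var] = reg
    return assignment

# Variables live at the same time are linked by an edge (illustrative data).
graph = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
print(greedy_color(graph))  # {'a': 0, 'b': 1, 'c': 2, 'd': 1}
```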
In the definition of NP-complete given above, the termreductionwas used in the technical meaning of a polynomial-timemany-one reduction.
Another type of reduction is polynomial-timeTuring reduction. A problem X is polynomial-time Turing-reducible to a problem Y if, given a subroutine that solves Y in polynomial time, one could write a program that calls this subroutine and solves X in polynomial time. This contrasts with many-one reducibility, which has the restriction that the program can only call the subroutine once, and the return value of the subroutine must be the return value of the program.
If one defines the analogue to NP-complete with Turing reductions instead of many-one reductions, the resulting set of problems won't be smaller than NP-complete; it is an open question whether it will be any larger.
Another type of reduction that is also often used to define NP-completeness is thelogarithmic-space many-one reductionwhich is a many-one reduction that can be computed with only a logarithmic amount of space. Since every computation that can be done inlogarithmic spacecan also be done in polynomial time it follows that if there is a logarithmic-space many-one reduction then there is also a polynomial-time many-one reduction. This type of reduction is more refined than the more usual polynomial-time many-one reductions and it allows us to distinguish more classes such asP-complete. Whether under these types of reductions the definition of NP-complete changes is still an open problem. All currently known NP-complete problems are NP-complete under log space reductions. All currently known NP-complete problems remain NP-complete even under much weaker reductions such as AC0 reductions and NC0 reductions. Some NP-complete problems such as SAT are known to be complete even under polylogarithmic time projections.[7]It is known, however, that AC0 reductions define a strictly smaller class than polynomial-time reductions.[8]
According toDonald Knuth, the name "NP-complete" was popularized byAlfred Aho,John HopcroftandJeffrey Ullmanin their celebrated textbook "The Design and Analysis of Computer Algorithms". He reports that they introduced the change in thegalley proofsfor the book (from "polynomially-complete"), in accordance with the results of a poll he had conducted of thetheoretical computer sciencecommunity.[9]Other suggestions made in the poll[10]included "Herculean", "formidable",Steiglitz's "hard-boiled" in honor of Cook, and Shen Lin's acronym "PET", which stood for "probably exponential time", but depending on which way theP versus NP problemwent, could stand for "provably exponential time" or "previously exponential time".[11]
The following misconceptions are frequent.[12]
Viewing adecision problemas a formal language in some fixed encoding, the set NPC of all NP-complete problems isnot closedunder:
It is not known whether NPC is closed undercomplementation, since NPC=co-NPCif and only if NP=co-NP, and since NP=co-NP is anopen question.[16] | https://en.wikipedia.org/wiki/NP-complete |
This is a list of well-knowndata structures. For a wider list of terms, seelist of terms relating to algorithms and data structures. For a comparison ofrunning timesfor a subset of this list seecomparison of data structures.
Some properties of abstract data types:
"Ordered" means that the elements of the data type have some kind of explicit order to them, where an element can be considered "before" or "after" another element. This order is usually determined by the order in which the elements are added to the structure, but the elements can be rearranged in some contexts, such assortinga list. For a structure that isn't ordered, on the other hand, no assumptions can be made about the ordering of the elements (although a physical implementation of these data types will often apply some kind of arbitrary ordering). "Uniqueness" means that duplicate elements are not allowed. Depending on the implementation of the data type, attempting to add a duplicate element may either be ignored, overwrite the existing element, or raise an error. The detection for duplicates is based on some inbuilt (or alternatively, user-defined) rule for comparing elements.
A data structure is said to be linear if its elements form a sequence.
Trees are a subset ofdirected acyclic graphs.
In these data structures each tree node compares a bit slice of key values.
These are data structures used forspace partitioningorbinary space partitioning.
Manygraph-based data structures are used in computer science and related fields: | https://en.wikipedia.org/wiki/List_of_data_structures |
Feature creepis the excessive ongoing expansion or addition of newfeaturesin a product,[1]especially incomputer software,video games(where it should not be confused withpower creep) andconsumer and business electronics. These extra features go beyond the basic function of the product and can result insoftware bloatand over-complication, rather than simple design.
The definition of what qualifies as "feature creep" varies amongend users, where what is perceived as such by some users may be considered practical functionality by others.[2]Feature creep is one of the most common sources ofcostand schedule overruns.[3][verification needed]It thus endangers and can even kill products and projects.
Feature creep may arise from the desire to provide the consumer with a more useful or desirable product in order to increase sales or distribution. Once a product does everything that it is designed to do, the manufacturer may add functions some users might consider unneeded (sometimes at the cost of efficiency) or continue with the original version (at the cost of a perceived lack of improvement).
Feature creep may also arise as a result ofcompromise from a committeeimplementing several different viewpoints oruse casesin the same product, even for opportunistic reasons.[4]As more features are added to support each approach, cross-conversion features between the multiple paradigms may further complicate the total features.
There are several methods to control feature creep, including: strict limits for allowable features, multiple variations, and pruning excess features.
Later feature creep may be avoided by basing initial design on strong software fundamentals, such as logical separation of functionality and data access, e.g. using submenus that are optionally accessible bypower userswho desire more functionality and a higher verbosity of information. It can be actively controlled with rigorouschange managementand by delaying changes to later delivery phases of a project.[5]
Another method of controlling feature creep is maintaining multiple variations of products, where features are limited and reduced in the more basic variations, e.g.Microsoft Windowseditions. For softwareuser interfaces, viewing modes or operation modes can be used (e.g. basic mode or expert mode), between which the users can select to match their own needs.
Both in manygraphical user interfacesandcommand-line interfaces, users are able to opt in for a higher verbosity manually. In the latter case, in many command-line programs, adding a-vor--verboseoption manually, does show more detailed information that might be less relevant to minimal users, but useful to power users or for debugging and troubleshooting purposes.
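A minimal, hypothetical sketch of that pattern using Python's standard argparse module; the flag names simply follow the common -v/--verbose convention rather than any particular program:

```python
import argparse

# Expose extra detail only to users who ask for it, keeping the
# default output minimal for everyone else.
parser = argparse.ArgumentParser(description="example tool")
parser.add_argument("-v", "--verbose", action="store_true",
                    help="show detailed diagnostic output")
args = parser.parse_args()

print("done")                                          # always shown
if args.verbose:
    print("details: 3 files processed, 0 errors")      # opt-in verbosity
```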
Because the ever-growing, ever-expanding addition of new features might exceed available resources, a minimal core "basic" version of a product can be maintained separately, to ensure operation in smaller operating environments. Using the "80/20 rule", the more basic product variations might fulfill the needs of the majority (e.g. ~80%) of the users, so they would not be subjected to the complexity (or extra expense) of features requested by the advanced 20% of users. The extra features are still available, but optional and ready to be utilized for those who solicit them, but they have not been implemented into the basic versions of the products.
Another solution for feature creep ismodularity. Power users who require more functionality can retrofit needed features by downloading software modules,plug-ins, add-ons (also known as add-ins) and custom themes to match their personal requirements.
At some point, the cost of maintaining a particular subset of features might become prohibitive, and pruning can be used. A new product version can omit the extra features, or perhaps a transition period would be used, where old features weredeprecatedbefore eventual removal from the system. If there are multiple variations of products, then some of them might be phased out of use. One major example is theSamsung Galaxy S6, released March 2015, of which significantly many software/menu features and also some hardware features were pruned. A “more functional” variation of it hasn't been released.[citation needed]
Occasionally, uncontrolled feature creep can lead to products that surpass the scope of what was originally intended; this is known asscope creep. A common consequence of feature creep is the delay or cancellation of a product, which may become more expensive than was originally intended.[citation needed]
Often, a reasonably feature-complete software project, or one with moderate amounts of feature creep, can survive and even thrive through many iterations, but its successor release may suffer substantial delays once a decision is taken to rewrite the whole code base in addition to introducing new technologies. For example, Microsoft'sWindows Vistawas planned to be a minor release betweenWindows XPand its successor codenamedWindows "Blackcomb"(released as Windows 7), but after adapting more and more features from Blackcomb (many of which were eventually cancelled), Vista turned out to become a major release which took five years of development.
A similar fate was suffered byNetscape 6, which was originally supposed to beNetscape 5. The 1998 decision by Netscape Communications to open-source its Netscape Navigator browser and Communicator Internet suite (both code-named Mozilla) soon made it obvious that the underlying code was too difficult, and required a complete rewrite of Mozilla, which fostered the creation of theMozilla application framework. This caused significant delays, Netscape 5 was skipped, and the company was purchased by AOL. The subsequent release of Netscape 6.00 in 2000 was widely criticized as alpha-level code, and the project reached stability by Netscape 6.1 in 2001, three years after the decision to rework the Internet suite. By that time, Microsoft's Internet Explorer browser had long-eclipsed Netscape in usage share, which had diminished to single digits.
Even after reaching stability and attaining some necessary new features, the open-sourceMozilla Application Suite(then named just Mozilla), on which AOL built Netscape, was viewed as "bloated". Just a year later, a group of Mozilla developers decided to separate the browser component, which eventually becameFirefox.
Double Fine Adventures'KickstarterprojectBroken Ageis another example of a project being delayed by feature creep. Originally supposed to have a release date of October 2012, the first half of the game was released in January 2014 while the second half followed late April 2015, and required two separate funding rounds to complete.[6]
Feature creep combined with short deadlines will often lead to a"hacky solution". The desired change may be large enough to warrant a redesign of the existing project foundation, but deadline pressure instead requires developers to rush and put out a less-refined product. Thespoonerism"feeping creaturism" was coined to emphasize a developer's dislike of this situation,[7]personifying the scope-crept product as "a misshapen creature of hacks ... prowling about in the dark",[8]and the harbinger of more creep to come.[9]("Feeping" is a jargon synonym of "beeping".)[10] | https://en.wikipedia.org/wiki/Feature_creep |
Inlinguistics, afalse friendis a word in a different language that looks or sounds similar to a word in a given language, but differs significantly in meaning. Examples of false friends includeEnglishembarrassedandSpanishembarazado('pregnant'); EnglishparentsversusPortugueseparentesandItalianparenti(the latter two both meaning 'relatives'); EnglishdemandandFrenchdemander('ask'); and Englishgift,GermanGift('poison'), andNorwegiangift(both 'married' and 'poison').
The term was introduced by a French book,Les faux amis: ou, Les trahisons du vocabulaire anglais(False friends: or, the betrayals of English vocabulary), published in 1928.
As well as producing completely false friends, the use ofloanwordsoften results in the use of a word in a restrictedcontext, which may then develop new meanings not found in the original language. For example,angstmeans 'fear' in a general sense (as well as 'anxiety') in German, but when it was borrowed into English in the context ofpsychology, its meaning was restricted to a particular type of fear described as "a neurotic feeling of anxiety and depression".[1]Also,gymnasiummeant both 'a place of education' and 'a place for exercise' inLatin, but its meaning became restricted tothe formerin German and tothe latterin English, making the expressions into false friends in those languages as well as inAncient Greek, where it started out as 'a place for naked exercise'.[2]
False friends are bilingualhomophonesor bilingualhomographs,[3]i.e., words in two or more languages that look similar (homographs) or sound similar (homophones), but differ significantly in meaning.[3][4]
The origin of the term is as a shortened version of the expression "false friend of a translator", the English translation of a French expression (French:faux amis du traducteur) introduced by Maxime Kœssler and Jules Derocquigny in their 1928 book,[5]with a sequel,Autres Mots anglais perfides.
From theetymologicalpoint of view, false friends can be created in several ways.
If language A borrowed a word from language B, or both borrowed the word from a third language or inherited it from a common ancestor, and later the word shifted in meaning or acquired additional meanings in at least one of these languages, anative speakerof one language will face a false friend when learning the other. Sometimes, presumably both senses were present in the common ancestor language, but the cognate words took on different restricted senses in Language A and Language B.[6]
Actual, which in English is usually a synonym ofreal, has a different meaning in other European languages, in which it means 'current' or 'up-to-date', and has the logical derivative as averb, meaning 'to make current' or 'to update'.Actualise(oractualize) in English means 'to make a reality of'.[7]
The Italian wordconfetti('sugared almonds') has acquired a new meaning in English, French and Dutch; in Italian, the corresponding word iscoriandoli.[8]
English and Spanish, both of which have borrowed from Ancient Greek and Latin, have multiple false friends, such as:
English andJapanesealso have diverse false friends, many of them beingwasei-eigoandgairaigowords.[9]
The wordfrienditself has cognates in the other Germanic languages, but the Scandinavian ones (likeSwedishfrände,Danishfrænde) predominantly mean 'relative'. The originalProto-Germanicword meant simply 'someone whom one cares for' and could therefore refer to both a friend and a relative, but it lost various degrees of the 'friend' sense in the Scandinavian languages, while it mostly lost the sense of 'relative' in English (the pluralfriendsis still, rarely, used for "kinsfolk", as in the Scottish proverbFriends agree best at a distance, quoted in 1721).
TheEstonianandFinnish languagesare related, which gives rise to false friends such as swapped forms for south and south-west:[4]
Or Estonianvaim('spirit' or 'ghost') and Finnishvaimo('wife');[3]or Estoniankoristaja('a cleaner') and Finnishkoristaja('a decorator').
A high level of lexical similarity exists between German andDutch,[10]but shifts in meaning of words with a shared etymology have in some instances resulted in 'bi-directional false friends':[11][12]
Note thatdie Seemeans 'sea', and thus is not a false friend.
The meanings could diverge significantly. For example, theProto-Malayo-Polynesianword*qayam('domesticated animal') became specialized in descendant languages:Malay/Indonesianayam('chicken'),Cebuanoayam('dog'), andGaddangayam('pig').[6]
In Swedish, the wordroligmeans 'fun':ett roligt skämt'a funny joke', while in the closely related languages Danish and Norwegian it means 'calm' (as in "he was calm despite all the commotion around him"). However, the Swedish original meaning of 'calm' is retained in some related words such asro'calmness', andorolig'worrisome, anxious', literally 'un-calm'.[13]The Danish and Norwegian wordsemestermeans term (as in school term), but the Swedish wordsemestermeans holiday. The Danish wordfrokostmeans lunch, while the Norwegian wordfrokostand the Swedish wordfrukostboth mean breakfast.
Pseudo-anglicismsare new words formed from Englishmorphemesindependently from an analogous English construct and with a different intended meaning.[14]
Japaneseis notable for its pseudo-anglicisms, known aswasei-eigo('Japan-made English').[15][16]
In bilingual situations, false friends often result in asemantic change—a real new meaning that is then commonly used in a language. For example, the Portuguesehumoroso('capricious') changed its meaning in American Portuguese to 'humorous', owing to the English surface-cognatehumorous.[17]
TheAmerican Italianfattorialost its original meaning, "farm", in favor of "factory", owing to the phonetically similar surface-cognate Englishfactory(cf. Standard Italianfabbrica, 'factory'). Instead of the originalfattoria, the phonetic adaptation American Italianfarmabecame the new signifier for "farm" (Weinreich 1963: 49; see "one-to-one correlation between signifiers and referents").[full citation needed]
Due to the closeness between Italianterra rossa('red soil') and Portugueseterra roxa'purple soil',Italian farmers in Brazilusedterra roxato describe a type of soil similar to thered Mediterranean soil.[18]The actual Portuguese word for "red" isvermelha. Nevertheless,terra roxaandterra vermelhaare still used interchangeably in Brazilian agriculture.[19]
Quebec Frenchis also known for shifting the meanings of some words toward those of their English cognates, but such words are considered false friends in European French. For example,éventuellementis commonly used as "eventually" in Quebec but means "perhaps" in Europe.
This phenomenon is analyzed byGhil'ad Zuckermannas "(incestuous)phono-semantic matching".[20] | https://en.wikipedia.org/wiki/False_friend |
This is alist ofalgorithmgeneral topics. | https://en.wikipedia.org/wiki/List_of_algorithm_general_topics |
Windows Anytime Upgrade(Add Features to Windows) was a service byMicrosoftintroduced inWindows Vistathat facilitated upgrades across successiveeditions of Windows Vista.[1]Prices for upgrades purchased through Windows Anytime Upgrade were lower than prices for upgrades purchased at retail.[2][3]Windows Anytime Upgrade is included inWindows 7to allow users to upgrade toWindows 7 editions. InWindows 8andWindows 8.1it was rebranded as Add Features to Windows and was used to purchase an upgrade license for thePro editionor to addWindows Media Centerto an existing Pro installation. Support for this feature was discontinued on October 31, 2015.[4]
Windows Anytime Upgrade was in development prior to thedevelopment reset of Windows Vista, then known by its codename "Longhorn." A preliminary version of the feature can be seen inbuild 4093.
On February 26, 2006, Microsoft announced the editions of Windows Vista to be released to retail andoriginal equipment manufacturers(OEMs).[5][6]After this announcement, various technology-related outlets reported that Anytime Upgrade would enable users to upgrade to successive editions.[1][7][8]
All editions of Windows Vista (excluding Enterprise) are stored on the same retail and OEM optical media—a license key for the edition purchased determines which edition is eligible for installation.[9]When first announced, Anytime Upgrade enabled users to purchase a digital license from an online merchant to upgrade their edition of Windows Vista. Once a license had been purchased, a user's product license, billing and other information would be stored within a user's digital locker at theWindows Marketplacedigital distributionplatform; this would allow a user to retain this information at anoff-sitelocation for reference purposes and to reinstall the operating system, if necessary.[10]A user could then initiate an upgrade to the edition for which the license was purchased either through components stored on thehard driveby the OEM of thepersonal computer, through an Anytime UpgradeDVDsupplied by the OEM, or through retail installation media compatible with Anytime Upgrade.[11]If none of these options were available, Anytime Upgrade provided an option for a user to purchase a DVD online and have it delivered by mail.[2][3]
Microsoft also released retail packaging for Anytime Upgrade. The retail products were made available during the consumer launch of Windows Vista on January 30, 2007.[10]The initial version of these products included only an upgrade license, but this was later modified in May 2007 to include both a DVD and a product license.[12]In an effort to streamline the upgrade process, Microsoft announced that digital license distribution would cease on February 20, 2008; licenses purchased prior to this date would not be affected. As a result of this change, users would be required to purchase the aforementioned retail packaging in order to use Anytime Upgrade functionality[2][13]and Windows VistaService Pack 1omitted the option to purchase a license online.[14]DVDs for Anytime Upgrade were only produced for Windows Vista.
Anytime Upgrade in Windows Vista performs a full reinstallation of the new product edition while retaining the user's data, programs, and settings.[15]This process can take a considerable amount of time, up to a few hours.[2]
Anytime Upgrade in Windows 7 no longer performs a full reinstallation of Windows. Components for the upgraded editions are instead pre-installed directly in the operating system; a notable result of this change is that the speed of the upgrade process has been significantly increased. Microsoft stated that an upgrade should take approximately 10 minutes.[14]Anytime Upgrade also does not require physical media or additional software.[16][15]Instead, Windows 7 requires a user to purchase a license online, in a manner similar to the initial functionality that was later removed from Windows Vista starting with Service Pack 1.[14]Microsoft would also release Anytime Upgrade packaging for Windows 7 at retail. The packaging, however, would only include a license for the edition to be upgraded, as Anytime Upgrade in the operating system does not require physical media.[17]
In Windows 8, the process has changed. Users will need to go to the Control Panel and search for Add Features to Windows. In Windows 10, this is located in Settings > System > About > Change Product Key or Upgrade Your Version of Windows.
This process works the same way as in Windows 7, with a few exceptions:
When first announced, Anytime Upgrade was available in theUnited States,Canada,EMEA,European Union,Norway,Switzerland, andJapan, with Microsoft stating that availability of the program would expand after launch of Windows Vista.[11]English version retail packaging for Anytime Upgrade was made available at the consumer launch of Windows Vista for North America andAsia-Pacificregions.[12]
In 2009,Ars Technicareported that Anytime Upgrade retail packaging for Windows 7 may only have been available in regions without broadband Internet access or where retail packaging was ineligible to be offered.[17]Anytime Upgrade was available for Windows 7 in select regions.[18] | https://en.wikipedia.org/wiki/Windows_Anytime_Upgrade |
Incomputing, the termremote desktoprefers to asoftware- oroperating systemfeature that allows apersonal computer'sdesktop environmentto be run remotely from one system (usually a PC, but the concept applies equally to aserveror asmartphone), while being displayed on a separateclient device. Remote desktop applications have varying features. Some allow attaching to an existing user'ssessionand "remote controlling", either displaying the remote control session or blanking the screen. Taking over a desktop remotely is a form of remote administration.
Remote access can also be explained as the remote control of a computer by using another device connected via the internet or another network. This is widely used by many computer manufacturers, and by the help desks of large businesses, for technical troubleshooting of their customers' problems.
Remote desktop software captures the mouse and keyboard inputs from the local computer (client) and sends them to theremote computer(server).[1]The remote computer in turn sends the display commands to the local computer. When applications with many graphics including video or 3D models need to be controlled remotely, a remote workstation software that sends the pixels rather than the display commands must be used to provide a smooth, like-local experience.
Remote desktop sharing is accomplished through a common client/server model. The client, orVNCviewer, is installed on a local computer and then connects via a network to a server component, which is installed on the remote computer. In a typical VNC session, all keystrokes and mouse clicks are registered as if the client were actually performing tasks on the end-user machine.[2]
Remote desktops also have a major advantage for security development: companies are able to permit software engineers who may be dispersed geographically to operate and develop from a computer which can be held within the company's office or cloud environment.
The target computer in a remote desktop scenario is still able to access all of its core functions. Many of these core functions, including the mainclipboard, can be shared between the target computer and remote desktop client.
Following the onset ofCOVID-19, the shift to remote-work environments has led many to work from home with devices without enterprise IT support. As a result, these workers were reliant on remote desktop software to collaborate and keep their systems available and secure.[3]
A main use of remote desktop software is remote administration and remote implementation. This need arises when software buyers are far away from their software vendor. Most remote access software can be used for "headless computers": instead of each computer having its own monitor, keyboard, and mouse, or using aKVM switch, one computer can have a monitor, keyboard, mouse, and remote control software, and control many headless computers. The duplicate desktop mode is useful for user support and education. Remote control software combined with telephone communication can be nearly as helpful for novice computer-users as if the support staff were actually there.
Remote desktop software can be used to access a remote computer: a physical personalcomputerto which a user does not have physical access, but that can be accessed or interacted with.[4]Unlikeservers, remote computers are mainly used for peer to peer connections, where one device is unattended. A remote computer connection is generally only possible if both devices have anetworkconnection.
Since the advent ofcloud computingremote desktop software can be housed onUSB hardware devices, allowing users to connect the device to any PC connected to their network or the Internet and recreate their desktop via a connection to the cloud. This model avoids one problem with remote desktop software, which requires the local computer to be switched on at the time when the user wishes to access it remotely. (It is possible with a router with C2S VPN support, andwake on LANequipment, to establish avirtual private network(VPN) connection with the router over the Internet if not connected to theLAN, switch on a computer connected to the router, then connect to it.)
Remote desktop products are available in three models: hosted service, software, and appliance.
Tech support scammersuse remote desktop software to connect to their victim's computer and will often lock out the computer if the victim does not cooperate.
Remote desktopprotocolsinclude Microsoft's Remote Desktop Protocol (RDP), the Remote Framebuffer (RFB) protocol used by VNC, Citrix's Independent Computing Architecture (ICA), SPICE, NX, and the X Window System (X11) protocol.
Aremote access trojan(RAT, sometimes calledcreepware)[6]is a type ofmalwarethat controls a system through a remote network connection. Whiledesktop sharingandremote administrationhave many legal uses, "RAT" connotes criminal or malicious activity. A RAT is typically installed without the victim's knowledge, often as payload of aTrojan horse, and will try to hide its operation from the victim and fromcomputer security softwareand other anti-virus software.[7][8][9][10][11][12] | https://en.wikipedia.org/wiki/Remote_desktop_software |
Arandom permutationis asequencewhere any order of its items is equally likely atrandom, that is, it is apermutation-valuedrandom variableof a set of objects. The use of random permutations is common ingames of chanceand inrandomized algorithmsincoding theory,cryptography, andsimulation. A good example of a random permutation is the fairshufflingof a standarddeck of cards: this is ideally a random permutation of the 52 cards.
One algorithm for generating a random permutation of a set of sizenuniformly at random, i.e., such that each of then!permutationsis equally likely to appear, is to generate asequenceby uniformly randomly selecting an integer between 1 andn(inclusive), sequentially and without replacementntimes, and then to interpret this sequence (x1, ...,xn) as the permutation

$$\begin{pmatrix} 1 & 2 & \cdots & n \\ x_1 & x_2 & \cdots & x_n \end{pmatrix},$$

shown here intwo-line notation.
An inefficient brute-force method for sampling without replacement could select from the numbers between 1 andnat every step, retrying the selection whenever the random number picked is a repeat of a number already selected, until selecting a number that has not yet been selected. The expected number of retries per step in such cases will scale with the inverse of the fraction of numbers not yet selected, and the overall number of retries as the sum of those inverses (roughlynlnndraws in total for the whole permutation), making this an inefficient approach.
Such retries can be avoided using an algorithm where, on eachith step whenx1, ...,xi− 1have already been chosen, one chooses a uniformly random numberjfrom between 1 andn−i+ 1 (inclusive) and setsxiequal to thejth largest of the numbers that have not yet been selected. This selects uniformly randomly among the remaining numbers at every step without retries.
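A minimal Python sketch of this retry-free selection (the helper name is ours; for simplicity it picks the j-th remaining number counting from the smallest, which yields the same uniform distribution as counting from the largest):

```python
import random

def random_permutation_by_selection(n):
    """Return a uniformly random permutation of 1..n by repeatedly
    selecting one of the numbers that has not yet been chosen."""
    remaining = list(range(1, n + 1))          # numbers not yet selected, in order
    result = []
    for _ in range(n):
        j = random.randrange(len(remaining))   # uniform index into the remainder
        result.append(remaining.pop(j))        # no retries are ever needed
    return result

print(random_permutation_by_selection(10))
```

Because removing an element from the middle of a Python list takes linear time, this sketch runs in quadratic time overall; the Fisher–Yates shuffle described next avoids that overhead.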
A simplealgorithmto generate a permutation ofnitems uniformly at random without retries, known as theFisher–Yates shuffle, is to start with any permutation (for example, theidentity permutation), and then go through the positions 0 throughn− 2 (we use a convention where the first element has index 0, and the last element has indexn− 1), and for each positioniswapthe element currently there with a randomly chosen element from positionsithroughn− 1 (the end), inclusive. Any permutation ofnelements will be produced by this algorithm with probability exactly 1/n!, thus yielding a uniform distribution of the permutations.
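A direct Python rendering of this description, using the standard library's `random` module as the source of randomness; its `randint(i, n - 1)` call plays the role of the uniform selection over positions i through n − 1:

```python
import random

def fisher_yates_shuffle(items):
    """Shuffle the list in place so that each of the n! orderings
    appears with probability exactly 1/n! (given perfect randomness)."""
    n = len(items)
    for i in range(n - 1):                 # positions 0 .. n-2
        j = random.randint(i, n - 1)       # uniform over i .. n-1, inclusive
        items[i], items[j] = items[j], items[i]
    return items

print(fisher_yates_shuffle(list(range(10))))
```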
If the uniform selection in such an algorithm (often written as auniform(i, n − 1)call in pseudocode) is implemented simply asrandom() % m, then there will be a bias in the distribution of permutations whenever the number of return values ofrandom()is not a multiple ofm. However, this effect is small if the number of return values ofrandom()is orders of magnitude greater thanm.
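One standard way to avoid this modulo bias is rejection sampling: discard raw values that fall into the incomplete final block of size (number of return values) mod m and draw again. The sketch below assumes an underlying generator returning uniform integers in 0 .. RAND_MAX; the constant and function names are illustrative.

```python
import random

RAND_MAX = 2**31 - 1   # assumed range of the underlying generator: 0 .. RAND_MAX

def raw_random():
    """Stand-in for a low-level generator of uniform integers in 0 .. RAND_MAX."""
    return random.randint(0, RAND_MAX)

def unbiased_below(m):
    """Uniform integer in 0 .. m-1 with no modulo bias: raw values from the
    incomplete final block are rejected and the draw is repeated."""
    limit = (RAND_MAX + 1) - (RAND_MAX + 1) % m    # largest multiple of m in range
    while True:
        r = raw_random()
        if r < limit:
            return r % m
```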
As with all computational implementations of random processes, the quality of the distribution generated by an implementation of a randomized algorithm such as the Fisher-Yates shuffle, i.e., how close the actually generated distribution is to the desired distribution, will depend on the quality of underlying sources of randomness in the implementation such aspseudorandom number generatorsorhardware random number generators. There are manyrandomness testsfor random permutations, such as the "overlapping permutations" test of theDiehard tests. A typical form of such tests is to take somepermutation statisticfor which the distribution is theoretically known and then test whether the distribution of that statistic on a set of randomly generated permutations from an implementation closely approximates the distribution of that statistic from the true distribution.
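As a concrete illustration of such a test, the sketch below compares the empirical distribution of one permutation statistic — the number of fixed points, whose limiting distribution is described next — against the theoretical Poisson(1) values, using Python's built-in shuffle as the implementation under test:

```python
import math
import random
from collections import Counter

def fixed_points(perm):
    """Count positions i with perm[i] == i."""
    return sum(1 for i, x in enumerate(perm) if i == x)

n, trials = 52, 100_000
observed = Counter()
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)                      # the implementation under test
    observed[fixed_points(p)] += 1

for k in range(5):
    empirical = observed[k] / trials
    poisson = math.exp(-1) / math.factorial(k)   # Poisson(1) reference value
    print(f"{k} fixed points: observed {empirical:.4f}, expected {poisson:.4f}")
```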
Theprobability distributionfor the number offixed pointsof a uniformly distributed random permutation ofnelements approaches aPoisson distributionwithexpected value1 asngrows.[1]The firstnmomentsof this distribution are exactly those of the Poisson distribution. In particular, the probability that a random permutation has no fixed points (i.e., that the permutation is aderangement) approaches 1/easnincreases. | https://en.wikipedia.org/wiki/Random_permutation |
In themathematicalfield ofcategory theory, thecategory of sets, denoted bySet, is thecategorywhoseobjectsaresets. The arrows ormorphismsbetween setsAandBare thefunctionsfromAtoB, and the composition of morphisms is thecomposition of functions.
Many other categories (such as thecategory of groups, withgroup homomorphismsas arrows) add structure to the objects of the category of sets or restrict the arrows to functions of a particular kind (or both).
The axioms of a category are satisfied bySetbecause composition of functions isassociative, and because every setXhas anidentity functionidX:X→Xwhich serves as identity element for function composition.
TheepimorphismsinSetare thesurjectivemaps, themonomorphismsare theinjectivemaps, and theisomorphismsare thebijectivemaps.
Theempty setserves as theinitial objectinSetwithempty functionsas morphisms. Everysingletonis aterminal object, with the functions mapping all elements of the source sets to the single target element as morphisms. There are thus nozero objectsinSet.
The categorySetiscomplete and co-complete. Theproductin this category is given by thecartesian productof sets. Thecoproductis given by thedisjoint union: given setsAiwhereiranges over some index setI, we construct the coproduct as the union ofAi×{i} (the cartesian product withiserves to ensure that all the components stay disjoint).
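In symbols, the product and coproduct of a family of sets Ai indexed by I, as described above, can be written

$$\prod_{i \in I} A_i = \{\,(a_i)_{i \in I} \mid a_i \in A_i\,\}, \qquad \coprod_{i \in I} A_i = \bigcup_{i \in I} \bigl(A_i \times \{i\}\bigr).$$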
Setis the prototype of aconcrete category; other categories are concrete if they are "built on"Setin some well-defined way.
Every two-element set serves as asubobject classifierinSet. The power object of a setAis given by itspower set, and theexponential objectof the setsAandBis given by the set of all functions fromAtoB.Setis thus atopos(and in particularcartesian closedandexact in the sense of Barr).
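The cartesian closed structure mentioned here amounts to the familiar currying bijection, natural in all three sets:

$$\operatorname{Hom}(A \times B,\, C) \;\cong\; \operatorname{Hom}(A,\, C^{B}), \qquad C^{B} = \{\, f \mid f\colon B \to C \,\}.$$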
Setis notabelian,additivenorpreadditive.
Every non-empty set is aninjective objectinSet. Every set is aprojective objectinSet(assuming theaxiom of choice).
Thefinitely presentable objectsinSetare the finite sets. Since every set is adirect limitof its finite subsets, the categorySetis alocally finitely presentable category.
IfCis an arbitrary category, thecontravariant functorsfromCtoSetare often an important object of study. IfAis an object ofC, then the functor fromCtoSetthat sendsXto HomC(X,A) (the set of morphisms inCfromXtoA) is an example of such a functor. IfCis asmall category(i.e. the collection of its objects forms a set), then the contravariant functors fromCtoSet, together with natural transformations as morphisms, form a new category, afunctor categoryknown as the category ofpresheavesonC.
InZermelo–Fraenkel set theorythe collection of all sets is not a set; this follows from theaxiom of foundation. One refers to collections that are not sets asproper classes. One cannot handle proper classes as one handles sets; in particular, one cannot write that those proper classes belong to a collection (either a set or a proper class). This is a problem because it means that the category of sets cannot be formalized straightforwardly in this setting. Categories likeSetwhose collection of objects forms a proper class are known aslarge categories, to distinguish them from the small categories whose objects form a set.
One way to resolve the problem is to work in a system that gives formal status to proper classes, such asNBG set theory. In this setting, categories formed from sets are said to besmalland those (likeSet) that are formed from proper classes are said to belarge.
Another solution is to assume the existence ofGrothendieck universes. Roughly speaking, a Grothendieck universe is a set which is itself a model of ZF(C) (for instance if a set belongs to a universe, its elements and its powerset will belong to the universe). The existence of Grothendieck universes (other than the empty set and the set $V_\omega$ of allhereditarily finite sets) is not implied by the usual ZF axioms; it is an additional, independent axiom, roughly equivalent to the existence ofstrongly inaccessible cardinals. Assuming this extra axiom, one can limit the objects ofSetto the elements of a particular universe. (There is no "set of all sets" within the model, but one can still reason about the classUof all inner sets, i.e., elements ofU.)
In one variation of this scheme, the class of sets is the union of the entire tower of Grothendieck universes. (This is necessarily aproper class, but each Grothendieck universe is a set because it is an element of some larger Grothendieck universe.) However, one does not work directly with the "category of all sets". Instead, theorems are expressed in terms of the categorySetUwhose objects are the elements of a sufficiently large Grothendieck universeU, and are then shown not to depend on the particular choice ofU. As a foundation forcategory theory, this approach is well matched to a system likeTarski–Grothendieck set theoryin which one cannot reason directly about proper classes; its principal disadvantage is that a theorem can be true of allSetUbut not ofSet.
Various other solutions, and variations on the above, have been proposed.[1][2][3]
The same issues arise with other concrete categories, such as thecategory of groupsor thecategory of topological spaces. | https://en.wikipedia.org/wiki/Category_of_sets |
Rule 184is a one-dimensional binarycellular automatonrule, notable for solving themajority problemas well as for its ability to simultaneously describe several, seemingly quite different,particle systems: it models traffic flow in a single lane of a highway, the deposition of particles onto an irregular surface, and the ballistic annihilation of particles that move in opposite directions through a one-dimensional medium, each of which is described in more detail below.
The apparent contradiction between these descriptions is resolved by different ways of associating features of the automaton's state with particles.
The name of Rule 184 is aWolfram codethat defines the evolution of its states. The earliest research on Rule 184 is byLi (1987)andKrug & Spohn (1988). In particular, Krug and Spohn already describe all three types of particle system modeled by Rule 184.[2]
A state of the Rule 184 automaton consists of a one-dimensionalarrayof cells, each containing abinary value(0 or 1). In each step of its evolution, the Rule 184 automaton applies the following rule to each of the cells in the array, simultaneously for all cells, to determine the new state of the cell:[3]

current pattern (left, center, right):  111  110  101  100  011  010  001  000
new state for center cell:               1    0    1    1    1    0    0    0

An entry in this table defines the new state of each cell as a function of the previous state and the previous values of the neighboring cells on either side.
The name for this rule, Rule 184, is theWolfram codedescribing the state table above: the bottom row of the table, 10111000, when viewed as abinary number, is equal to the decimal number184.[4]
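A minimal Python sketch of one synchronous update, reading each cell's new state directly from the bits of the Wolfram code 184; periodic boundary conditions are assumed at the ends of the array:

```python
RULE = 184  # binary 10111000

def step(cells, rule=RULE):
    """Apply one synchronous Rule 184 update to a list of 0/1 cells,
    treating the array as circular (periodic boundary conditions)."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right   # value 0..7
        new_cells.append((rule >> neighborhood) & 1)          # corresponding bit of 184
    return new_cells

state = [1, 0, 0, 1, 1, 0, 1, 0]
for _ in range(4):
    print(state)
    state = step(state)
```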
The rule set for Rule 184 may also be described intuitively, in several different ways: a cell whose current state is 0 takes its new state from the cell on its left, while a cell whose current state is 1 takes its new state from the cell on its right; equivalently, each 1 may be viewed as a particle that moves one cell to the right whenever the cell to its right holds a 0, and stays in place otherwise.
From the descriptions of the rules above, two important properties of its dynamics may immediately be seen. First, in Rule 184, for any finite set of cells withperiodic boundary conditions, the number of 1s and the number of 0s in a pattern remains invariant throughout the pattern's evolution. Rule 184 and its reflection are the only nontrivial[7]elementary cellular automatato have this property of number conservation.[8]Similarly, if the density of 1s is well-defined for an infinite array of cells, it remains invariant as the automaton carries out its steps.[9]And second, although Rule 184 is not symmetric under left-right reversal, it does have a different symmetry: reversing left and right and at the same time swapping the roles of the 0 and 1 symbols produces a cellular automaton with the same update rule.
Patterns in Rule 184 typically quickly stabilize, either to a pattern in which the cell states move in lockstep one position leftwards at each step, or to a pattern that moves one position rightwards at each step. Specifically, if the initial density of cells with state 1 is less than 50%, the pattern stabilizes into clusters of cells in state 1, spaced two units apart, with the clusters separated by blocks of cells in state 0. Patterns of this type move rightwards. If, on the other hand, the initial density is greater than 50%, the pattern stabilizes into clusters of cells in state 0, spaced two units apart, with the clusters separated by blocks of cells in state 1, and patterns of this type move leftwards. If the density is exactly 50%, the initial pattern stabilizes (more slowly) to a pattern that can equivalently be viewed as moving either leftwards or rightwards at each step: an alternating sequence of 0s and 1s.[10]
Themajority problemis the problem of constructing a cellular automaton that, when run on any finite set of cells, can compute the value held by a majority of its cells.
In a sense, Rule 184 solves this problem, as follows: if Rule 184 is run on a finite set of cells with periodic boundary conditions, with an unequal number of 0s and 1s, then each cell will eventually see two consecutive states of the majority value infinitely often, but will see two consecutive states of the minority value only finitely many times.[11]The majority problem cannot be solved perfectly if it is required that all cells eventually stabilize to the majority state,[12]but the Rule 184 solution avoids this impossibility result by relaxing the criterion by which the automaton recognizes a majority.
If one interprets each 1-cell in Rule 184 as containing a particle, these particles behave in many ways similarly to automobiles in a single lane of traffic: they move forward at a constant speed if there is open space in front of them, and otherwise they stop. Traffic models such as Rule 184 and its generalizations that discretize both space and time are commonly calledparticle-hopping models.[13]Although very primitive, the Rule 184 model of traffic flow already predicts some of the familiar emergent features of real traffic: clusters of freely moving cars separated by stretches of open road when traffic is light, andwaves of stop-and-go trafficwhen it is heavy.[14]
It is difficult to pinpoint the first use of Rule 184 for traffic flow simulation, in part because the focus of research in this area has been less on achieving the greatest level of mathematical abstraction and more on verisimilitude: even the earlier papers on cellular automaton based traffic flow simulation typically make the model more complex in order to more accurately simulate real traffic. Nevertheless, Rule 184 is fundamental to traffic simulation by cellular automata.Wang, Kwong & Hui (1998), for instance, state that "the basic cellular automaton model describing a one-dimensional traffic flow problem is rule 184."Nagel (1996)writes "Much work using CA models for traffic is based on this model." Several authors describe one-dimensional models with vehicles moving at multiple speeds; such models degenerate to Rule 184 in the single-speed case.[15]Gaylord & Nishidate (1996)extend the Rule 184 dynamics to two-lane highway traffic with lane changes; their model shares with Rule 184 the property that it is symmetric under simultaneous left-right and 0-1 reversal.Biham, Middleton & Levine (1992)describe atwo-dimensional city grid modelin which the dynamics of individual lanes of traffic is essentially that of Rule 184.[16]For an in-depth survey of cellular automaton traffic modeling and associated statistical mechanics, seeMaerivoet & De Moor (2005)andChowdhury, Santen & Schadschneider (2000).
When viewing Rule 184 as a traffic model, it is natural to consider the average speed of the vehicles. When the density of traffic is less than 50%, this average speed is simply one unit of distance per unit of time: after the system stabilizes, no car ever slows. However, when the density is a number ρ greater than 1/2, the average speed of traffic is $\tfrac{1-\rho}{\rho}$. Thus, the system exhibits a second-order kineticphase transitionatρ= 1/2. When Rule 184 is interpreted as a traffic model, and started from a random configuration whose density is at this critical valueρ= 1/2, then the average speed approaches its stationary limit as the square root of the number of steps. Instead, for random configurations whose density is not at the critical value, the approach to the limiting speed is exponential.[17]
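A sketch (assumptions: a ring of cells, a random initial placement of cars, and a fixed warm-up period before measuring) that estimates the long-run average vehicle speed under the traffic reading of Rule 184 and compares it with the value min(1, (1 − ρ)/ρ) stated above; convergence is slow near the critical density, so the measured value at ρ = 1/2 will still fall slightly short of 1:

```python
import random

def step(cells):
    """One Rule 184 update in the traffic reading: each car (1) advances
    one cell to the right exactly when that cell is empty (0)."""
    n = len(cells)
    nxt = cells[:]
    for i in range(n):
        j = (i + 1) % n
        if cells[i] == 1 and cells[j] == 0:
            nxt[i], nxt[j] = 0, 1
    return nxt

def average_speed(density, n=1000, warmup=2000, measure=500):
    """Fraction of cars that move per step, averaged after a warm-up phase."""
    cars = int(density * n)
    cells = [1] * cars + [0] * (n - cars)
    random.shuffle(cells)
    for _ in range(warmup):
        cells = step(cells)
    moves = 0
    for _ in range(measure):
        nxt = step(cells)
        moves += sum(1 for i in range(n) if cells[i] == 1 and nxt[i] == 0)
        cells = nxt
    return moves / (measure * cars)

for rho in (0.3, 0.5, 0.7):
    print(rho, round(average_speed(rho), 3), "theory:", round(min(1, (1 - rho) / rho), 3))
```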
As shown in the figure, and as originally described byKrug & Spohn (1988),[18]Rule 184 may be used to model deposition of particles onto a surface. In this model, one has a set of particles that occupy a subset of the positions in asquare latticeoriented diagonally (the darker particles in the figure). If a particle is present at some position of the lattice, the lattice positions below and to the right, and below and to the left of the particle must also be filled, so the filled part of the lattice extends infinitely downward to the left and right. The boundary between filled and unfilled positions (the thin black line in the figure) is interpreted as modeling a surface, onto which more particles may be deposited. At each time step, the surface grows by the deposition of new particles in each local minimum of the surface; that is, at each position where it is possible to add one new particle that has existing particles below it on both sides (the lighter particles in the figure).
To model this process by Rule 184, observe that the boundary between filled and unfilled lattice positions can be marked by a polygonal line, the segments of which separate adjacent lattice positions and have slopes +1 and −1. Model a segment with slope +1 by an automaton cell with state 0, and a segment with slope −1 by an automaton cell with state 1. The local minima of the surface are the points where a segment of slope −1 lies to the left of a segment of slope +1; that is, in the automaton, a position where a cell with state 1 lies to the left of a cell with state 0. Adding a particle to that position corresponds to changing the states of these two adjacent cells from 1,0 to 0,1, so advancing the polygonal line. This is exactly the behavior of Rule 184.[19]
Related work on this model concerns deposition in which the arrival times of additional particles are random, rather than having particles arrive at all local minima simultaneously.[20]These stochastic growth processes can be modeled as anasynchronous cellular automaton.
Ballistic annihilationdescribes a process by which moving particles andantiparticlesannihilateeach other when they collide. In the simplest version of this process, the system consists of a single type of particle and antiparticle, moving at equal speeds in opposite directions in a one-dimensional medium.[21]
This process can be modeled by Rule 184, as follows. The particles are modeled as points that are aligned, not with the cells of the automaton, but rather with the interstices between cells. Two consecutive cells that both have state 0 model a particle at the space between these two cells that moves rightwards one cell at each time step. Symmetrically, two consecutive cells that both have state 1 model an antiparticle that moves leftwards one cell at each time step. The remaining possibilities for two consecutive cells are that they both have differing states; this is interpreted as modeling a background material without any particles in it, through which the particles move. With this interpretation, the particles and antiparticles interact by ballistic annihilation: when a rightwards-moving particle and a leftwards-moving antiparticle meet, the result is a region of background from which both particles have vanished, without any effect on any other nearby particles.[22]
The behavior of certain other systems, such as one-dimensionalcyclic cellular automata, can also be described in terms of ballistic annihilation.[23]There is a technical restriction on the particle positions for the ballistic annihilation view of Rule 184 that does not arise in these other systems, stemming from the alternating pattern of the background: in the particle system corresponding to a Rule 184 state, if two consecutive particles are both of the same type they must be an odd number of cells apart, while if they are of opposite types they must be an even number of cells apart. However this parity restriction does not play a role in the statistical behavior of this system.
Pivato (2007)uses a similar but more complicated particle-system view of Rule 184: he not only views alternating 0–1 regions as background, but also considers regions consisting solely of a single state to be background as well. Based on this view he describes seven different particles formed by boundaries between regions, and classifies their possible interactions. SeeChopard & Droz (1998, pp. 188–190) for a more general survey of the cellular automaton models of annihilation processes.
In his bookA New Kind of Science,Stephen Wolframpoints out that rule 184, when run on patterns with density 50%, can be interpreted as parsing thecontext-free languagedescribing strings formed from nestedparentheses. This interpretation is closely related to the ballistic annihilation view of rule 184: in Wolfram's interpretation, an open parenthesis corresponds to a left-moving particle while a close parenthesis corresponds to a right-moving particle.[24] | https://en.wikipedia.org/wiki/Rule_184 |
Proactive cyber defensemeans acting in anticipation to oppose an attack through cyber and cognitive domains.[1]Proactive cyber defense can be understood as options between offensive and defensive measures. It includes interdicting, disrupting or deterring an attack or a threat's preparation to attack, either pre-emptively or in self-defence.
Proactive cyber defense differs from active defence in that the former is pre-emptive: it does not wait for an attack to occur. Furthermore, active cyber defense differs from offensive cyber operations (OCO) in that the latter requires legislative exceptions to undertake. Hence, offensive cyber capabilities may be developed in collaboration with industry and facilitated by the private sector; these operations are often led by nation-states.
Common methods of proactive cyber defense include cyber deception, attribution, threat hunting and adversarial pursuit. The mission of the pre-emptive and proactive operations is to conduct aggressive interception and disruption activities against an adversary using:psychological operations, managed information dissemination, precision targeting, information warfare operations, computer network exploitation, and other active threat reduction measures.
The proactive defense strategy is meant to improve information collection by stimulating reactions of the threat agents, to provide strike options, and to enhance operational preparation of the real or virtual battlespace. Proactive cyber defence can be a measure for detecting and obtaining information before a cyber attack, or it can be an impending cyber operation in its own right, determining the origin of an operation and then launching a pre-emptive, preventive, or cyber counter-operation.
The offensive capacity includes the manipulation and/or disruption of networks and systems with the purpose of limiting or eliminating the adversary's operational capability. This capability can be required to guarantee one's freedom of action in the cyber domain.Cyber-attackscan be launched to repel an attack (active defence) or to support the operational action.
Strategically, cyber defence refers to operations that are conducted in the cyber domain in support of mission objectives. The main difference betweencyber securityand cyber defence is that cyber defence requires a shift fromnetwork assurance(security) tomission assurance. Cyber defence focuses on sensing, detecting, orienting, and engaging adversaries in order to assure mission success and to outmanoeuvre the adversary. This shift from security to defence requires a strong emphasis on intelligence and reconnaissance, and the integration of staff activities to include intelligence, operations, communications, and planning.
Defensive cyber operations refer to activities on or through the global information infrastructure to help protect an institution's electronic information and information infrastructures as a matter of mission assurance. Defensive cyber operations do not normally involve direct engagement with the adversary.
Active cyber operations refer to activities on the global information infrastructure to degrade, disrupt, influence, respond to, and interfere with the capabilities, intentions, and activities of foreign individuals, states, organizations, and terrorist groups. Active cyber defence decisively engages the adversary and includes adversarial pursuit activities.
In the fifth century, B.C.,Sun Tzuadvocated foreknowledge (predictive analysis) as part of a winning strategy. He warned that planners must have a precise understanding of the active threat and not "remain ignorant of the enemy's condition". The thread of proactive defense is spun throughout his teachings. PsychiatristViktor Franklwas likely the first to use the term proactive in his 1946 bookMan's Search for Meaningto distinguish the act of taking responsibility for one's own circumstances rather than attributing one's condition to external factors.
Later, in 1982, theUnited States Department of Defense(DoD) used "proactive" as a contrary concept to "reactive" inassessing risk. In the framework of risk management "proactive" meant taking initiative by acting rather than reacting to threat events. Conversely "reactive" measures respond to a stimulus or past events rather than predicting the event.Military scienceconsiders defence as the science-art of thwarting an attack. Furthermore, doctrine holds that if a party attacks an enemy who is about to attack, this could be called active defence. Defence is also aeuphemismfor war but does not carry the negative connotation of an offensive war. Usage in this way has broadened the concept of proactive defence to include most military issues including offensive, which is implicitly referred to as active defence. Politically, the concept of national self-defence to counter a war of aggression refers to a defensive war involving pre-emptive offensive strikes and is one possible criterion in the 'Just War Theory'. Proactive defence has moved beyond theory, and it has been put into practice in theatres of operation. In 1989,Stephen Covey's work transformed the meaning of proactive as "to act before a situation becomes a source of confrontation or crisis".[2]Since then, "proactive" has been placed in opposition to the words "reactive" or "passive".
Cyber is derived from "cybernetics", a word originally coined by a group of scientists led byNorbert Wienerand made popular by Wiener's book of 1948,Cybernetics or Control and Communication in the Animal and the Machine.[3]Cyberspace typically refers to the vast and growing logical domain composed of public and private networks; it means independently managed networks linked together through the Internet. The definition of cyberspace has been extended to include all network-space which at some point, through some path, may have eventual access to the public internet. Under this definition, cyberspace becomes virtually every networked device in the world that has any kind of network interface. With the rapid evolution of information warfare operations doctrine in the 1990s, proactive and pre-emptive cyber defence concepts have begun to be used by policymakers and scholars.
The National Strategy to Secure Cyberspace, a strategy document published by the George W. Bush administration in February 2003, outlined the initial framework for both organizing and prioritizing efforts to secure cyberspace. It highlighted the necessity for public-private partnerships. Its proactive threads include the call to deter malicious activity and prevent cyber attacks against America's critical infrastructures.
The notion of "proactive defence" has a rich history. The hype of "proactive cyber defence" reached its zenith around 1994, under the auspices of Information Warfare. Much of the current doctrine related to proactive cyber defence was fully developed by 1995. Now most of the discussions around proactive defence in the literature are much less "proactive" than the earlier discussions in 1994. Present-day proactive cyber defence strategy was conceived within the context of the rich discussion that preceded it, existing doctrine and real proactive cyber defence programs that have evolved globally over the past decade.
As one of the founding members of Canada's interdepartmental committee on Information Warfare, Dr. Robert Garigue and Dave McMahon pointed out that "strategic listening, core intelligence, and proactive defence provide time and precision. Conversely, reacting in surprise is ineffective, costly and leaves few options. Strategic deterrence needs a credible offensive, proactive defence and information peacekeeping capability in which to project power and influence globally through Cyberspace in the defence of the nation. Similarly, deterrence and diplomacy are required in the right dosage to dissuade purposeful interference with the national critical cyber infrastructures in influence in the democratic process by foreign states."[4]
Intelligence agencies, such as the National Security Agency, were criticized for buying up and stockpilingzero-day vulnerabilitiesand keeping them secret and developing mainlyoffensive capabilitiesinstead of defensive measures and, thereby, helping patch vulnerabilities.[5][6][7][8]This criticism was widely reiterated and recognized after the May 2017WannaCry ransomware attack.[9][10][11][12][13][14]
The notion of a proactive pre-emptive operations group (P2OG) emerged from a report of theDefense Science Board's(DSB) 2002 briefing. The briefing was reported by Dan Dupont inInside the Pentagonon September 26, 2002, and was also discussed by William M. Arkin in theLos Angeles Timeson October 27, 2002.[15]TheLos Angeles Timeshas subsequently quotedU.S. Secretary of DefenseDonald Rumsfeldrevealing the creation of the "Proactive, Pre-emptive Operations Group". The mission was to conduct aggressive, proactive, pre-emptive operations to interdict and disrupt the threat using psychological operations, managed information dissemination, precision targeting, and information warfare operations.[16]Today, the proactive defence strategy means improving information collection by stimulating reactions of the threat agents and providing strike options to enhance operational preparation of the real as well as the virtual battle space. The P2OG was recommended to consist of one hundred highly specialized people with unique technical and intelligence skills. The group would be overseen by the White House's deputy national security adviser and would carry out missions coordinated by the secretary of defence. Proactive measures, according to the DoD, are those actions taken directly against the preventive stage of an attack by the enemy.
The discipline of world politics and the notion of pre-emptive cyber defence are two important concepts that need to be examined together, because we live in a dynamic international system in which actors (countries) update their threat perceptions according to developments in the technological realm.[17]Given this logic, frequently employed by policymakers, countries prefer using pre-emptive measures before being targeted. The topic is extensively studied by political scientists focusing on power transition theory (PTT), in which Organski and Kugler first argued that powerful countries start the attack before the balance of power changes in favor of the relatively weaker but rising state.[18]Although the PTT has relevance for explaining the use of pre-emptive cyber defence policies, the theory can be difficult to apply to cyber defence because it is not easy to assess the relative power differentials of international actors in terms of their cyber capabilities. On the other hand, the PTT can still be used to explain the security perceptions of the United States and of China, as a rising country, in terms of their use of pre-emptive cyber defence policies. Many scholars have already begun to examine the likelihood of cyber war between these countries and have examined the relevance of the PTT and other similar international relations theories.[19][20][21] | https://en.wikipedia.org/wiki/Proactive_Cyber_Defence
Instatisticsandbusiness, along tailof somedistributionsof numbers is the portion of the distribution having many occurrences far from the "head" or central part of the distribution. The distribution could involve popularities, random numbers of occurrences of events with variousprobabilities, etc.[1]The term is often used loosely, with no definition or an arbitrary definition, but precise definitions are possible.
In statistics, the termlong-tailed distributionhas a narrow technical meaning, and is a subtype ofheavy-tailed distribution.[2][3][4]Intuitively, a distribution is (right) long-tailed if, for any fixed amount, when a quantity exceeds a high level, it almost certainly exceeds it by at least that amount: large quantities are probably even larger.[a]Note that there is no sense ofthe"long tail" of a distribution, but only thepropertyof a distribution being long-tailed.
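The standard formal version of this property in the heavy-tail literature is that a (right) long-tailed random variable X satisfies, for every fixed t > 0,

$$\lim_{x \to \infty} \Pr(X > x + t \mid X > x) \;=\; \lim_{x \to \infty} \frac{\Pr(X > x + t)}{\Pr(X > x)} \;=\; 1.$$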
In business, the termlong tailis applied torank-size distributionsorrank-frequency distributions(primarily of popularity), which often formpower lawsand are thus long-tailed distributions in the statistical sense. This is used to describe the retailing strategy of selling many unique items with relatively small quantities sold of each (the "long tail")—usually in addition to selling fewer popular items in large quantities (the "head"). Sometimes an intermediate category is also included, variously called thebody,belly,torso, ormiddle. The specific cutoff of what part of a distribution isthe"long tail" is often arbitrary, but in some cases may be specified objectively; seesegmentation of rank-size distributions.
The long tail concept has found some ground for application, research, and experimentation. It is a term used in online business,mass media,micro-finance(Grameen Bank, for example), user-driven innovation (Eric von Hippel), knowledge management, and social network mechanisms (e.g.crowdsourcing,crowdcasting,peer-to-peer), economic models, marketing (viral marketing), and IT Security threat hunting within a SOC (Information security operations center).
Frequency distributionswith long tails have been studied by statisticians since at least 1946.[5]The term has also been used in the finance[6]and insurance business[7]for many years. The work ofBenoît Mandelbrotin the 1950s and later has led to him being referred to as "the father of long tails".[8]
The long tail was popularized byChris Andersonin an October 2004Wiredmagazine article, in which he mentionedAmazon.com,AppleandYahoo!as examples of businesses applying this strategy.[7][9]Anderson elaborated the concept in his bookThe Long Tail: Why the Future of Business Is Selling Less of More.
Anderson cites research published in 2003 byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith, who first used a log-linear curve on an XY graph to describe the relationship betweenAmazon.comsales and sales ranking. They showed that the primary value of the internet to consumers comes from releasing new sources of value by providing access to products in the long tail.[10]
The distribution and inventory costs of businesses successfully applying a long tail strategy allow them to realize significant profit out of selling small volumes of hard-to-find items to many customers instead of only selling large volumes of a reduced number of popular items. The total sales of this large number of "non-hit items" is called "the long tail".
Given enough choice, a large population of customers, and negligible stocking and distribution costs, the selection and buying pattern of the population results in the demand across products having apower lawdistribution orPareto distribution.
It is important to understand why some quantities follow normal distributions while others follow long-tailed (power-law) distributions. Chris Anderson argues that while quantities such as human height orIQfollow a normal distribution, inscale-free networkswithpreferential attachment, power law distributions are created, i.e. because some nodes are more connected than others (likeMalcolm Gladwell's "mavens" inThe Tipping Point).[11][12]
The long tailis the name for a long-known feature of some statistical distributions (such asZipf,power laws,Pareto distributionsandgeneral Lévy distributions). In "long-tailed" distributions a high-frequency or high-amplitude population is followed by a low-frequency or low-amplitude population which gradually "tails off"asymptotically. The events at the far end of the tail have a very low probability of occurrence.
As arule of thumb, for such population distributions the majority of occurrences (more than half, and where thePareto principleapplies, 80%) are accounted for by the first 20% of items in the distribution.
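A small sketch that makes this rule of thumb concrete under an assumed Zipf-like rank-frequency law (the exponent s and the number of items are illustrative choices, not empirical values): it computes the share of all occurrences captured by the top 20% of items.

```python
def head_share(n_items=10_000, s=1.0, head_fraction=0.2):
    """Share of total occurrences captured by the top `head_fraction` of
    items when the item of rank r has weight 1 / r**s (Zipf-like law)."""
    weights = [1 / r**s for r in range(1, n_items + 1)]
    head = sum(weights[: int(head_fraction * n_items)])
    return head / sum(weights)

for s in (0.8, 1.0, 1.2):
    print(f"s = {s}: top 20% of items account for {head_share(s=s):.0%} of occurrences")
```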
Power lawdistributions or functions characterize an important number of behaviors from nature and human endeavor. This fact has given rise to a keen scientific and social interest in such distributions, and the relationships that create them. The observation of such a distribution often points to specific kinds of mechanisms, and can often indicate a deep connection with other, seemingly unrelated systems. Examples of behaviors that exhibit long-tailed distribution are the occurrence of certain words in a given language, the income distribution of a business or the intensity of earthquakes (see:Gutenberg–Richter law).
Chris Anderson's andClay Shirky's articles highlight special cases in which we are able to modify the underlying relationships and evaluate the impact on the frequency of events. In those cases the infrequent, low-amplitude (or low-revenue) events – the long tail, represented here by the portion of the curve to the right of the 20th percentile – can become the largest area under the line. This suggests that a variation of one mechanism (internet access) or relationship (the cost of storage) can significantly shift the frequency of occurrence of certain events in the distribution. The shift has a crucial effect in probability and in the customer demographics of businesses likemass mediaand online sellers.
However, the long tails characterizing distributions such as theGutenberg–Richter lawor the words-occurrenceZipf's law, and those highlighted by Anderson and Shirky are of very different, if not opposite, nature: Anderson and Shirky refer to frequency-rank relations, whereas the Gutenberg–Richter law and the Zipf's law are probability distributions. Therefore, in these latter cases "tails" correspond to large-intensity events such as large earthquakes and most popular words, which dominate the distributions. By contrast, the long tails in the frequency-rank plots highlighted by Anderson and Shirky would rather correspond to short tails in the associated probability distributions, and therefore illustrate an opposite phenomenon compared to the Gutenberg–Richter and the Zipf's laws.
Use of the phrasethe long tailin business as "the notion of looking at the tail itself as a new market" of consumers was first coined byChris Anderson.[13]The concept drew in part from a February 2003 essay byClay Shirky, "Power Laws, Weblogs and Inequality",[14]which noted that a relative handful ofweblogshave many links going into them but "the long tail" of millions of weblogs may have only a handful of links going into them. Anderson described the effects of the long tail on current and future business models beginning with a series of speeches in early 2004 and with the publication of aWiredmagazine article in October 2004. Anderson later extended it into the bookThe Long Tail: Why the Future of Business is Selling Less of More(2006).
Anderson argues that products in low demand or that have a low sales volume can collectively make up a market share that rivals or exceeds the relatively few current bestsellers and blockbusters, if the store or distribution channel is large enough. Anderson cites earlier research byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith, that showed that a significant portion of Amazon.com's sales come from obscure books that are not available in brick-and-mortar stores. The long tail is a potential market and, as the examples illustrate, the distribution and sales channel opportunities created by the Internet often enable businesses to tap that market successfully.
In his Wired article Anderson opens with an anecdote about creating a niche market for books on Amazon. He writes about a book titledTouching the Voidabout a near-death mountain climbing accident that took place in the Peruvian Andes. Anderson states the book got good reviews, but didn't have much commercial success. However, ten years later a book titledInto Thin AirbyJon Krakauerwas published and Touching the Void began to sell again. Anderson realized that this was due to Amazon's recommendations. This created a niche market for those who enjoy books about mountain climbing, even though it is not considered a popular genre, supporting the long tail theory.
An Amazon employee described the long tail as follows: "We sold more books today that didn't sell at all yesterday than we sold today of all the books that did sell yesterday."[15]
Anderson has explained the term as a reference to the tail of ademand curve.[16]The term has since been rederived from an XY graph that is created when charting popularity to inventory. In such a graph, Amazon's book sales would be represented along the vertical axis, while the book or movie ranks are along the horizontal axis. The total volume of low popularity items exceeds the volume of high popularity items.
Erik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smithfound that a large proportion ofAmazon.com's book sales come from obscure books that were not available in brick-and-mortar stores. They then quantified the potential value of the long tail to consumers. In an article published in 2003, these authors showed that, while most of the discussion about the value of the Internet to consumers has revolved around lower prices, consumer benefit (a.k.a.consumer surplus) from access to increased product variety in online book stores is ten times larger than their benefit from access to lower prices online.[17]
A subsequent study byErik Brynjolfsson,Yu (Jeffrey) Hu, andMichael D. Smith[18]found that the long tail has grown longer over time, with niche books accounting for a larger share of total sales. Their analyses suggested that by 2008, niche books accounted for 36.7% of Amazon's sales while the consumer surplus generated by niche books has increased at least fivefold from 2000 to 2008. In addition, their new methodology finds that, while the widely used power laws are a good first approximation for the rank-sales relationship, the slope may not be constant for all book ranks, with the slope becoming progressively steeper for more obscure books.
In support of their findings, Wenqi Zhou and Wenjing Duan not only find a longer tail but also a fatter tail by an in-depth analysis on consumer software downloading pattern in their paper "Online user reviews, product variety, and the long tail".[19]The demand for all products decreases, but the decrease for the hits is more pronounced, indicating the demand shifting from the hits to the niches over time. In addition, they also observe a superstar effect in the presence of the long tail. A small number of very popular products still dominates the demand.
In a 2006 working paper titled "Goodbye Pareto Principle, Hello Long Tail",[20]Erik Brynjolfsson,Yu (Jeffrey) Hu, and Duncan Simester found that, by greatly loweringsearchcosts, information technology in general and Internet markets in particular could substantially increase the collective share of hard-to-find products, thereby creating a longer tail in the distribution of sales.
They used a theoretical model to show how a reduction in search costs will affect the concentration in product sales. By analyzing data collected from a multi-channel retailing company, they showed empirical evidence that the Internet channel exhibits a significantly less concentrated sales distribution, when compared with traditional channels. An 80/20 rule fits the distribution of product sales in the catalog channel quite well, but in the Internet channel, this rule needs to be modified to a 72/28 rule in order to fit the distribution of product sales in that channel. The difference in the sales distribution is highly significant, even after controlling for consumer differences.
The key supply-side factor that determines whether a sales distribution has a long tail is the cost of inventory storage and distribution. Where inventory storage and distribution costs are insignificant, it becomes economically viable to sell relatively unpopular products; however, when storage and distribution costs are high, only the most popular products can be sold. For example, a traditional movie rental store has limited shelf space, which it pays for in the form of buildingoverhead; to maximize its profits, it must stock only the most popular movies to ensure that no shelf space is wasted. Because an online video rental provider (such asAmazon.comorNetflix) stocks movies in centralized warehouses, its storage costs are far lower and its distribution costs are the same for a popular or unpopular movie. It is therefore able to build a viable business stocking a far wider range of movies than a traditional movie rental store. Those economics of storage and distribution then enable the advantageous use of the long tail: for example, Netflix finds that in aggregate, "unpopular" movies are rented more than popular movies.
AnMIT Sloan Management Reviewarticle titled "From Niches to Riches: Anatomy of the Long Tail"[21]examined the long tail from both the supply side and the demand side and identifies several key drivers. On the supply side, the authors point out howe-tailers' expanded, centralized warehousing allows for more offerings, thus making it possible for them to cater to more varied tastes.[22]
On the demand side, tools such as search engines, recommendation software, and sampling tools are allowing customers to find products outside their geographic area. The authors also look toward the future to discuss second-order, amplified effects of Long Tail, including the growth of markets serving smaller niches.
Not all recommender systems are equal, however, when it comes to expanding the long tail. Some recommenders (i.e. certain collaborative filters) can exhibit a bias toward popular products, creatingpositive feedback, and actually reduce the long tail. AWhartonstudy details this phenomenon along with several ideas that may promote the long tail and greater diversity.[23]
A 2010 study conducted by Wenqi Zhou and Wenjing Duan[19]further points out that the demand side factor (online user reviews) and the supply side factor (product variety) interplay to influence the long tail formation of user choices. Consumers' reliance on online user reviews to choose products is significantly influenced by the quantity of products available. Specifically, they find that the impacts of both positive and negative user reviews are weakened as product variety goes up. In addition, the increase in product variety reduces the impact of user reviews on popular products more than it does on niche products.
The "crowds" of customers, users and small companies that inhabit the long-tail distribution can perform collaborative and assignment work. Some relevant forms of these new production models are:
The demand-side factors that lead to the long tail can be amplified by the "networks of products" which are created by hyperlinked recommendations across products. AnMIS Quarterlyarticle by Gal Oestreicher-Singer andArun Sundararajanshows that categories of books onAmazon.comwhich are more central and thus influenced more by their recommendation network have significantly more pronounced long-tail distributions. Their data across 200 subject areas shows that a doubling of this influence leads to a 50% increase in revenues from the least popular one-fifth of books.[25]
The long-tail distribution applies at a given point in time, but over time the relative popularity of the sales of the individual products will change.[26]Although the distribution of sales may appear to be similar over time, the positions of the individual items within it will vary. For example, new items constantly enter most fashion markets. A recent fashion-based model[27]ofconsumer choice, which is capable of generating power law distributions of sales similar to those observed in practice,[28]takes into account turnover in the relative sales of a given set of items, as well as innovation, in the sense that entirely new items become offered for sale.
There may be an optimal inventory size, given the balance between sales and the cost of keeping up with the turnover. An analysis based on this pure fashion model[29]indicates that, even for digital retailers, the optimal inventory may in many cases be less than the millions of items that they can potentially offer. In other words, by proceeding further and further into the long tail, sales may become so small that the marginal cost of tracking them in rank order, even at a digital scale, might be optimised well before a million titles, and certainly before infinite titles. This model can provide further predictions into markets with long-tail distribution, such as the basis for a model for optimizing the number of each individual item ordered, given its current sales rank and the total number of different titles stocked.
From a given country's viewpoint, diplomatic interactions with other countries likewise exhibit a long tail.[30]Strategic partners receive the largest amount of diplomatic attention, while a long tail of remote states obtains just an occasional signal of peace. The fact that even allegedly "irrelevant" countries obtain at least rare amicable interactions by virtually all other states was argued to create a societal surplus of peace, a reservoir that can be mobilized in case a state needs it. The long tail thus functionally resembles "weak ties" in interpersonal networks.
Before a long tail works, only the most popular products are generally offered. When the cost of inventory storage and distribution falls, a wide range of products becomes available. This can, in turn, have the effect of reducing demand for the most popular products. For example, a small website that focuses on niches of content can be threatened by a larger website (such as Yahoo) which offers a wide variety of Web content. The big website covers more variety while the small website has only a few niches to choose from.
The competitive threat from these niche sites is reduced by the cost of establishing and maintaining them and the effort required for readers to track multiple small web sites. These factors have been transformed by easy and cheap web site software and the spread ofRSS. Similarly, mass-market distributors likeBlockbustermay be threatened by distributors likeLoveFilm, which supply the titles that Blockbuster doesn't offer because they are not already very popular.
Some of the most successful Internet businesses have used the long tail as part of their business strategy. Examples includeeBay(auctions),Yahoo!andGoogle(web search),Amazon(retail), andiTunes Store(music andpodcasts), amongst the major companies, along with smaller Internet companies likeAudible(audio books) andLoveFilm(video rental). These purely digital retailers also have almost negligible marginal costs, which benefits online services, unlike physical retailers that face fixed limits on the products they can stock. The internet can still be used to sell physical goods, but with an unlimited selection and with reviews and recommendations.[31]The internet has opened up larger territories in which to sell and provide products, without being confined to the "local markets" of physical retailers such asTargetor evenWalmart. For digital and hybrid retailers there is no longer such a limit on the market demand they can serve.[32]
The adoption ofvideo gamesandmassively multiplayer online gamessuch asSecond Lifeas tools for education and training is starting to show a long-tailed pattern. It costs significantly less to modify a game than it has been to create unique training applications, such as those for training in business, commercial flight, and military missions. This has led some[who?]to envision a time in which game-based training devices or simulations will be available for thousands of different job descriptions.[citation needed]
The banking business has used internet technology to reach an increasing number of customers. The most important shift in business model due to the long tail has come from the various forms ofmicrofinancedeveloped.[citation needed]
As opposed to e-tailers, micro-finance is a distinctly low technology business. Its aim is to offer very small credits to lower-middle-class, lower-class and poor people, who would otherwise be ignored by the traditional banking business. The banks that have followed this strategy of selling services to the low-frequency long tail of the sector have found that it can be an important niche, long ignored by consumer banks.[33]The recipients of small credits tend to be very good payers of loans, despite their non-existent credit history. They are also willing to pay higher interest rates than the standard bank or credit card customer. It is also a business model that fills an important developmental role in an economy.[34]
Grameen BankinBangladeshhas successfully followed this business model. In Mexico the banks Compartamos andBanco Aztecaalso service this customer demographic, with an emphasis on consumer credit.Kiva.orgis an organization that provides micro credits to people worldwide, by using intermediaries called small microfinance organizations (SMOs) to distribute crowdsourced donations made by Kiva.org lenders.
According to theuser-driven innovationmodel, companies can rely on users of their products and services to do a significant part of theinnovationwork. Users want products that are customized to their needs. They are willing to tell the manufacturer what they really want and how it should work. Companies can make use of a series of tools, such as interactive and internet based technologies, to give their users a voice and to enable them to do innovation work that is useful to the company.
Given the diminishing cost of communication and information sharing (by analogy to the low cost of storage and distribution, in the case ofe-tailers), long-tailed user driven innovation will gain importance for businesses.
In following a long-tailed innovation strategy, the company is using the model to tap into a large group of users that are in the low-intensity area of the distribution. It is theircollaborationand aggregated work that results in an innovation effort.Social innovationcommunities formed by groups of users can perform rapidly thetrial and errorprocess of innovation, share information, test and diffuse the results.
Eric von Hippelof MIT's Sloan School of Management defined the user-led innovation model in his bookDemocratizing Innovation.[35]Among his conclusions is the insight that as innovation becomes more user-centered the information needs to flow freely, in a more democratic way, creating a "rich intellectual commons" and "attacking a major structure of the social division of labor".
In today's world, customers are eager to voice their opinions and shape the products and services they use. This presents a unique opportunity for companies to leverage interactive and internet-based technologies to give their users a voice and enable them to participate in the innovation process. By doing so, companies can gain valuable insights into their customer's needs and preferences, which can help drive product development and innovation. By creating a platform for their users to share their ideas and feedback, companies can harness the power of collaborative innovation and stay ahead of the competition. Ultimately, involving users in the innovation process is a win-win for both companies and their customers, as it leads to more tailored, effective products and services that better meet the needs of the end user.
The drive to build a market and obtain revenue from the consumer demographic of the long tail has led businesses to implement a series of long-tail marketing techniques, most of them based on extensive use of internet technologies. Among the most representative are:
The long tail has possible implications for culture and politics. Where the opportunity cost of inventory storage and distribution is high, only the most popular products are sold. But where the long tail works, minority tastes become available and individuals are presented with a wider array of choices. The long tail presents opportunities for various suppliers to introduce products in the niche category, encouraging the diversification of products. These niche products open opportunities for suppliers while satisfying the demands of many individuals – thereby lengthening the tail portion of the long tail. In situations where popularity is currently determined by the lowest common denominator, a long-tail model may lead to improvement in a society's level of culture. The opportunities that arise because of the long tail greatly affect society's cultures, because suppliers gain effectively unlimited capacity thanks to inexpensive storage, and demands that could not be met prior to the long tail are realized. At the end of the long tail, the conventional profit-making business model ceases to exist; instead, people tend to create products for varied reasons, such as self-expression rather than monetary benefit. In this way, the long tail opens up a large space for authentic works of creativity.
Television is a good example of this: Chris Anderson defines long-tail TV in the context of "content that is not available through traditional distribution channels but could nevertheless find an audience."[37]Thus, the advent of services such as television on demand, pay-per-view, and even premium cable subscription services such as HBO and Showtime opens up the opportunity for niche content to reach the right audiences in an otherwise mass medium. These may not always attract the highest level of viewership, but their distribution models make that less important. As the opportunity cost goes down, the choice of TV programs grows and cultural diversity increases.
Often presented as a phenomenon of interest primarily to mass-market retailers and web-based businesses, the long tail also has implications for the producers of content, especially those whose products could not – for economic reasons – find a place in pre-Internet information distribution channels controlled by book publishers, record companies, movie studios, and television networks. Looked at from the producers' side, the long tail has made possible a flowering of creativity across all fields of human endeavour.[citation needed]One example of this is YouTube, where thousands of diverse videos – whose content, production value, or lack of popularity make them inappropriate for traditional television – are easily accessible to a wide range of viewers.
The intersection of viral marketing, online communities, and new technologies that operate within the long tail of consumers and business is described in William Gibson's novel Pattern Recognition.
In military thinking, John Robb applies the long tail to developments in insurgency and terrorist movements, showing how technology and networking allow the long tail of disgruntled groups and criminals to take on the nation state with a chance of winning.
A 2008 study by Anita Elberse, professor of business administration at Harvard Business School, calls the long tail theory into question, citing sales data which shows that the Web magnifies the importance of blockbuster hits.[38]On his blog, Chris Anderson responded to the study, praising Elberse and the academic rigor with which she explores the issue but drawing a distinction between their respective interpretations of where the "head" and "tail" begin. Elberse defined head and tail using percentages, while Anderson uses absolute numbers.[39]Similar results were published by Serguei Netessine and Tom F. Tan, who suggest that head and tail should be defined by percentages rather than absolute numbers.[40]
Also in 2008, a sales analysis of an unnamed UK digital music service by economist Will Page and high-tech entrepreneur Andrew Bud found that sales exhibited a log-normal distribution rather than a power law; they reported that 80% of the music tracks available sold no copies at all over a one-year period. Anderson responded by stating that the study's findings are difficult to assess without access to its data.[41][42] | https://en.wikipedia.org/wiki/Long_tail |
Spectral music uses the acoustic properties of sound – or sound spectra – as a basis for composition.[1]
Defined in technical language, spectral music is an acoustic musical practice where compositional decisions are often informed by sonographic representations and mathematical analysis of sound spectra, or by mathematically generated spectra. The spectral approach focuses on manipulating the spectral features, interconnecting them, and transforming them. In this formulation, computer-based sound analysis and representations of audio signals are treated as being analogous to a timbral representation of sound.
The (acoustic-composition) spectral approach originated in France in the early 1970s, and techniques were developed, and later refined, primarily at IRCAM, Paris, with the Ensemble l'Itinéraire, by composers such as Gérard Grisey and Tristan Murail. Hugues Dufourt is commonly credited with introducing the term musique spectrale (spectral music) in an article published in 1979.[1][2]Murail has described spectral music as an aesthetic rather than a style, not so much a set of techniques as an attitude; as Joshua Fineberg puts it, a recognition that "music is ultimately sound evolving in time".[3]Julian Anderson indicates that a number of major composers associated with spectralism consider the term inappropriate, misleading, and reductive.[4]The Istanbul Spectral Music Conference of 2003 suggested a redefinition of the term "spectral music" to encompass any music that foregrounds timbre as an important element of structure or language.[5]
While spectralism as a historical movement is generally considered to have begun in France and Germany in the 1970s, precursors to the philosophy and techniques of spectralism – prizing the nature and properties of sound above all else as an organizing principle for music – go back at least to the early twentieth century. Proto-spectral composers include Claude Debussy, Edgard Varèse, Giacinto Scelsi, Olivier Messiaen, György Ligeti, Iannis Xenakis, La Monte Young, and Karlheinz Stockhausen.[6][7][8]Other composers who anticipated spectralist ideas in their theoretical writings include Harry Partch, Henry Cowell, and Paul Hindemith.[9]Also crucial to the origins of spectralism was the development of techniques of sound analysis and synthesis in computer music and acoustics during this period, especially focused around IRCAM in France and Darmstadt in Germany.[10]
Julian Anderson considers Danish composer Per Nørgård's Voyage into the Golden Screen for chamber orchestra (1968) to be the first "properly instrumental piece of spectral composition".[11]Spectralism as a recognizable and unified movement, however, arose during the early 1970s, in part as a reaction against, and alternative to, the primarily pitch-focused aesthetics of the serialism and post-serialism that were ascendant at the time.[a]Early spectral composers were centered in the cities of Paris and Cologne and associated with the composers of the Ensemble l'Itinéraire and the Feedback group, respectively. In Paris, Gérard Grisey and Tristan Murail were the most prominent pioneers of spectral techniques; Grisey's Espaces Acoustiques and Murail's Gondwana were two influential works of this period. Their early work emphasized the use of the overtone series, techniques of spectral analysis and ring and frequency modulation, and slowly unfolding processes to create music which gave a new attention to timbre and texture.[12]
The German Feedback group, including Johannes Fritsch, Mesías Maiguashca, Péter Eötvös, Claude Vivier, and Clarence Barlow, was primarily associated with students and disciples of Karlheinz Stockhausen, and began to pioneer spectral techniques around the same time. Their work generally placed more emphasis on linear and melodic writing within a spectral context as compared to that of their French contemporaries, though with significant variations.[13]Another important group of early spectral composers was centered in Romania, where a unique form of spectralism arose, in part inspired by Romanian folk music.[14]This folk tradition, as collected by Béla Bartók (1904–1918), with its acoustic scales derived directly from resonance and natural wind instruments of the alphorn family, like the buciume and tulnice, as well as the cimpoi bagpipe, inspired several spectral composers, including Corneliu Cezar, Anatol Vieru, Aurel Stroe, Ștefan Niculescu, Horațiu Rădulescu, Iancu Dumitrescu, and Octavian Nemescu.[15]
Towards the end of the twentieth century, techniques associated with spectralist composers began to be adopted more widely, and the original pioneers of spectralism began to integrate their techniques more fully with those of other traditions. For example, in their works from the later 1980s and into the 1990s, both Grisey and Murail began to shift their emphasis away from the more gradual and regular processes which characterized their early work to include more sudden dramatic contrasts as well as more linear and contrapuntal writing.[16]Likewise, spectral techniques were adopted by composers from a wider variety of traditions and countries, including the UK (with composers like Julian Anderson and Jonathan Harvey), Finland (composers like Magnus Lindberg and Kaija Saariaho), and the United States.[17]A further development is the emergence of "hyper-spectralism"[clarification needed]in the works of Iancu Dumitrescu and Ana-Maria Avram.[18][19]
The spectral adventure has allowed the renovation, without imitation of the foundations of occidental music, because it is not a closed technique but an attitude.—Gérard Grisey[20]
The "panoply of methods and techniques" used are secondary, being only "the means of achieving a sonic end".[3]
Spectral music focuses on the phenomenon and acoustics of sound as well as its potential semantic qualities. Pitch material and intervallic content are often derived from the harmonic series, including the use of microtones. Spectrographic analysis of acoustic sources is used as inspiration for orchestration. The reconstruction of electroacoustic source materials by using acoustic instruments is another common approach to spectral orchestration. In "additive instrumental synthesis", instruments are assigned to play discrete components of a sound, such as an individual partial. Amplitude modulation, frequency modulation, difference tones, harmonic fusion, residue pitch, Shepard-tone phenomena, and other psychoacoustic concepts are applied to music materials.[21]
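The derivation of pitch material from the harmonic series can be illustrated with a short computation. The sketch below is purely illustrative (the fundamental of 41.2 Hz and the equal-tempered reference of A4 = 440 Hz are assumptions chosen for the example, not taken from any particular score); it lists the first sixteen partials of a low E and their deviation in cents from the nearest equal-tempered pitch, the kind of data a spectral composer might approximate in microtonal notation.

```python
import math

FUNDAMENTAL_HZ = 41.2   # illustrative low E; any fundamental works
A4_HZ = 440.0           # reference for 12-tone equal temperament

def nearest_equal_tempered(freq_hz):
    """Return (MIDI note number, deviation in cents) for a frequency."""
    midi = 69 + 12 * math.log2(freq_hz / A4_HZ)
    nearest = round(midi)
    cents = (midi - nearest) * 100
    return nearest, cents

# First 16 partials of the harmonic series on the fundamental.
for n in range(1, 17):
    partial_hz = n * FUNDAMENTAL_HZ
    note, cents = nearest_equal_tempered(partial_hz)
    print(f"partial {n:2d}: {partial_hz:8.1f} Hz -> MIDI {note}, {cents:+6.1f} cents")
```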
Formal concepts important in spectral music include process and the stretching of time.[further explanation needed]Though development is "significantly different from those of minimalist music" in that all musical parameters may be affected, it similarly draws attention to very subtle aspects of the music. These processes most often achieve a smooth transition through interpolation.[22]Any or all of these techniques may be operating in a particular work, though this list is not exhaustive.
The Romanian spectral tradition focuses more on the study of how sound itself behaves in a "live" environment. Sound work is not restricted to harmonic spectra but includes transitory aspects of timbre and non-harmonic musical components (e.g., rhythm, tempo, dynamics). Furthermore, sound is treated phenomenologically as a dynamic presence to be encountered in listening (rather than as an object of scientific study). This approach results in a transformational musical language in which continuous change of the material displaces the central role accorded to structure in spectralism of the "French school".[23]
Spectral music was initially associated with composers of the French Ensemble l'Itinéraire, including Hugues Dufourt, Gérard Grisey, Tristan Murail, and Michaël Lévinas. For these composers, musical sound (or natural sound) is taken as a model for composition, leading to an interest in the exploration of the interior of sounds.[24]Giacinto Scelsi was an important influence on Grisey, Murail, and Lévinas; his approach of exploring a single sound in his works and a "smooth" conception of time (such as in his Quattro pezzi su una nota sola) greatly influenced these composers to include new instrumental techniques and variations of timbre in their works.[25]
Other spectral music composers include those from the German Feedback group, principally Johannes Fritsch, Mesías Maiguashca, Péter Eötvös, Claude Vivier, and Clarence Barlow. Features of spectralism are also seen independently in the contemporary work of Romanian composers Corneliu Cezar, Ștefan Niculescu, Horațiu Rădulescu, and Iancu Dumitrescu.[1]
Independent of spectral music developments in Europe, American composer James Tenney's output included more than fifty significant works that feature spectralist traits.[26]His influences came from encounters with the scientific culture that pervaded the postwar era, and from the "quasi-empiricist musical aesthetic" of John Cage.[27]His works, although having similarities with European spectral music, are distinctive in some ways, for example in his interest in "post-Cageian indeterminacy".
The spectralist movement inspired more recent composers such as Julian Anderson, Ana-Maria Avram, Joshua Fineberg, Georg Friedrich Haas, Jonathan Harvey, Fabien Lévy, Magnus Lindberg, and Kaija Saariaho.
Some of the "post-spectralist" French composers include Éric Tanguy, Philippe Hurel, François Paris, Philippe Leroux, and Thierry Blondeau.[28]
In the United States, composers such as Alvin Lucier, La Monte Young, Terry Riley, Maryanne Amacher, Phill Niblock, and Glenn Branca incorporate some of the influences of spectral music into their own work. Tenney's work has also influenced a number of composers such as Larry Polansky and John Luther Adams.[29]
In the US, jazz saxophonist and composer Steve Lehman, and in Europe, French composer Frédéric Maurin, have both introduced spectral techniques into the domain of jazz.[30][31]
Characteristic spectral pieces include:
Other pieces that utilise spectral ideas or techniques include:[11][27][32]
Post-spectral pieces include:[33][34]
Stria and Mortuos Plango, Vivos Voco are examples of electronic music that embrace spectral techniques.[35][36] | https://en.wikipedia.org/wiki/Spectral_music |
Sidorenko's conjectureis a majorconjecturein the field ofextremal graph theory, posed byAlexander Sidorenkoin 1986. Roughly speaking, the conjecture states that for anybipartite graphH{\displaystyle H}andgraphG{\displaystyle G}onn{\displaystyle n}vertices with average degreepn{\displaystyle pn}, there are at leastp|E(H)|n|V(H)|{\displaystyle p^{|E(H)|}n^{|V(H)|}}labeled copies ofH{\displaystyle H}inG{\displaystyle G}, up to a small error term. Formally, it provides an intuitive inequality aboutgraph homomorphismdensities ingraphons. The conjectured inequality can be interpreted as a statement that the density of copies ofH{\displaystyle H}in a graph is asymptotically minimized by a random graph, as one would expect ap|E(H)|{\displaystyle p^{|E(H)|}}fraction of possible subgraphs to be a copy ofH{\displaystyle H}if each edge exists with probabilityp{\displaystyle p}.
Let H be a graph. Then H is said to have Sidorenko's property if, for all graphons W, the inequality

{\displaystyle t(H,W)\geq t(K_{2},W)^{|E(H)|}}

is true, where t(H,W) is the homomorphism density of H in W.
Sidorenko's conjecture (1986) states that every bipartite graph has Sidorenko's property.[1]
IfW{\displaystyle W}is a graphG{\displaystyle G}, this means that the probability of a uniform random mapping fromV(H){\displaystyle V(H)}toV(G){\displaystyle V(G)}being a homomorphism is at least the product over each edge inH{\displaystyle H}of the probability of that edge being mapped to an edge inG{\displaystyle G}. This roughly means that a randomly chosen graph with fixed number of vertices and average degree has the minimum number of labeled copies ofH{\displaystyle H}. This is not a surprising conjecture because the right hand side of the inequality is the probability of the mapping being a homomorphism if each edge map is independent. So one should expect the two sides to be at least of the same order. The natural extension to graphons would follow from the fact that every graphon is thelimit pointof some sequence of graphs.
The requirement that H is bipartite is necessary for Sidorenko's property — if W is a bipartite graph with at least one edge, then t(K3,W) = 0 since W is triangle-free, while t(K2,W) > 0, so Sidorenko's property does not hold for K3. A similar argument shows that no graph with an odd cycle has Sidorenko's property. Since a graph is bipartite if and only if it has no odd cycles, this implies that the only possible graphs that can have Sidorenko's property are bipartite graphs.
Sidorenko's property is equivalent to the following reformulation: for every graph G on n vertices with average degree pn,

{\displaystyle \operatorname {hom} (H,G)\geq p^{|E(H)|}n^{|V(H)|}.}
This is equivalent because the number of homomorphisms fromK2{\displaystyle K_{2}}toG{\displaystyle G}is twice the number of edges inG{\displaystyle G}, and the inequality only needs to be checked whenW{\displaystyle W}is a graph as previously mentioned.
In this formulation, since the number of non-injective homomorphisms fromH{\displaystyle H}toG{\displaystyle G}is at most a constant timesn|V(H)|−1{\displaystyle n^{|V(H)|-1}}, Sidorenko's property would imply that there are at least(p|E(H)|−o(1))n|V(H)|{\displaystyle (p^{|E(H)|}-o(1))n^{|V(H)|}}labeled copies ofH{\displaystyle H}inG{\displaystyle G}.
As previously noted, to prove Sidorenko's property it suffices to demonstrate the inequality for all graphsG{\displaystyle G}. Throughout this section,G{\displaystyle G}is a graph onn{\displaystyle n}vertices with average degreepn{\displaystyle pn}. The quantityhom(H,G){\displaystyle \operatorname {hom} (H,G)}refers to the number of homomorphisms fromH{\displaystyle H}toG{\displaystyle G}. This quantity is the same asn|V(H)|t(H,G){\displaystyle n^{|V(H)|}t(H,G)}.
Elementary proofs of Sidorenko's property for some graphs follow from theCauchy–Schwarz inequalityorHölder's inequality. Others can be done by usingspectral graph theory, especially noting the observation that the number of closed paths of lengthℓ{\displaystyle \ell }from vertexi{\displaystyle i}to vertexj{\displaystyle j}inG{\displaystyle G}is the component in thei{\displaystyle i}th row andj{\displaystyle j}th column of the matrixAℓ{\displaystyle A^{\ell }}, whereA{\displaystyle A}is theadjacency matrixofG{\displaystyle G}.
By fixing two vertices u and v of G, each copy of C4 that has u and v on opposite ends can be identified by choosing two (not necessarily distinct) common neighbors of u and v. Letting codeg(u,v) denote the codegree of u and v (i.e. the number of common neighbors), this implies

{\displaystyle \operatorname {hom} (C_{4},G)=\sum _{u,v\in V(G)}\operatorname {codeg} (u,v)^{2}\geq {\frac {1}{n^{2}}}\left(\sum _{u,v\in V(G)}\operatorname {codeg} (u,v)\right)^{2}}

by the Cauchy–Schwarz inequality. The sum has now become a count of all pairs of vertices and their common neighbors, which is the same as the count of all vertices and pairs of their neighbors. So

{\displaystyle \sum _{u,v\in V(G)}\operatorname {codeg} (u,v)=\sum _{w\in V(G)}\deg(w)^{2}\geq {\frac {1}{n}}\left(\sum _{w\in V(G)}\deg(w)\right)^{2}={\frac {1}{n}}\left(pn^{2}\right)^{2}=p^{2}n^{3}}

by Cauchy–Schwarz again. So

{\displaystyle \operatorname {hom} (C_{4},G)\geq {\frac {1}{n^{2}}}\left(p^{2}n^{3}\right)^{2}=p^{4}n^{4}=p^{|E(C_{4})|}n^{|V(C_{4})|},}

as desired.
Although the Cauchy–Schwarz approach forC4{\displaystyle C_{4}}is elegant and elementary, it does not immediately generalize to all even cycles. However, one can apply spectral graph theory to prove that all even cycles have Sidorenko's property. Note that odd cycles are not accounted for in Sidorenko's conjecture because they are not bipartite.
Using the observation about closed paths, it follows that hom(C2k, G) is the sum of the diagonal entries in A^{2k}. This is equal to the trace of A^{2k}, which in turn is equal to the sum of the 2k-th powers of the eigenvalues of A. If λ1 ≥ λ2 ≥ ⋯ ≥ λn are the eigenvalues of A, then the min-max theorem implies that

{\displaystyle \lambda _{1}\geq {\frac {\mathbf {1} ^{\mathsf {T}}A\mathbf {1} }{\mathbf {1} ^{\mathsf {T}}\mathbf {1} }}={\frac {2|E(G)|}{n}}=pn,}

where 1 is the vector with n components, all of which are 1. But then

{\displaystyle \operatorname {hom} (C_{2k},G)=\operatorname {tr} \left(A^{2k}\right)=\sum _{i=1}^{n}\lambda _{i}^{2k}\geq \lambda _{1}^{2k}}

because the eigenvalues of a real symmetric matrix are real, so every term in the sum is non-negative. So

{\displaystyle \operatorname {hom} (C_{2k},G)\geq (pn)^{2k}=p^{|E(C_{2k})|}n^{|V(C_{2k})|},}

as desired.
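The bound can also be checked numerically on small graphs. The following sketch is illustrative only (it assumes NumPy and an Erdős–Rényi random graph and is not part of any proof): it counts homomorphisms from C2k as the trace of A^{2k} and compares the count against p^{2k} n^{2k}, with p defined from the actual edge count of the sampled graph.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph_adjacency(n, edge_prob):
    """Adjacency matrix of an Erdos-Renyi graph: symmetric 0/1, no self-loops."""
    upper = np.triu(rng.random((n, n)) < edge_prob, k=1)
    return (upper | upper.T).astype(float)

n, k = 60, 3                                   # check C_6 on a 60-vertex graph
A = random_graph_adjacency(n, 0.3)

# hom(C_{2k}, G) = number of closed walks of length 2k = trace of A^(2k).
hom_c2k = np.trace(np.linalg.matrix_power(A, 2 * k))

p = A.sum() / n**2                             # so the average degree is p*n
bound = p ** (2 * k) * n ** (2 * k)            # Sidorenko lower bound p^|E| n^|V|

print(f"hom(C_{2*k}, G) = {hom_c2k:.4e}")
print(f"p^(2k) * n^(2k) = {bound:.4e}")
print("bound holds:", hom_c2k >= bound)
```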
J.L. Xiang Li andBalázs Szegedy(2011) introduced the idea of usingentropyto prove some cases of Sidorenko's conjecture. Szegedy (2015) later applied the ideas further to prove that an even wider class of bipartite graphs have Sidorenko's property.[2]While Szegedy's proof wound up being abstract and technical,Tim Gowersand Jason Long reduced the argument to a simpler one for specific cases such as paths of length3{\displaystyle 3}.[3]In essence, the proof chooses a niceprobability distributionof choosing the vertices in the path and appliesJensen's inequality(i.e. convexity) to deduce the inequality.
Here is a list of some bipartite graphsH{\displaystyle H}which have been shown to have Sidorenko's property. LetH{\displaystyle H}have bipartitionA⊔B{\displaystyle A\sqcup B}.
However, there are graphs for which Sidorenko's conjecture is still open. An example is the "Möbius strip" graphK5,5∖C10{\displaystyle K_{5,5}\setminus C_{10}}, formed by removing a10{\displaystyle 10}-cycle from the complete bipartite graph with parts of size5{\displaystyle 5}.
László Lovászproved a local version of Sidorenko's conjecture, i.e. for graphs that are "close" torandom graphsin a sense of cut norm.[11]
A sequence of graphs{Gn}n=1∞{\displaystyle \{G_{n}\}_{n=1}^{\infty }}is calledquasi-random with densityp{\displaystyle p}for some density0<p<1{\displaystyle 0<p<1}if for every graphH{\displaystyle H}:
The sequence of graphs would thus have properties of theErdős–Rényi random graphG(n,p){\displaystyle G(n,p)}.
If the edge densityt(K2,Gn){\displaystyle t(K_{2},G_{n})}is fixed at(1+o(1))p{\displaystyle (1+o(1))p}, then the condition implies that the sequence of graphs is near the equality case in Sidorenko's property for every graphH{\displaystyle H}.
From Chung, Graham, and Wilson's 1989 paper about quasi-random graphs, it suffices for theC4{\displaystyle C_{4}}count to match what would be expected of a random graph (i.e. the condition holds forH=C4{\displaystyle H=C_{4}}).[12]The paper also asks which graphsH{\displaystyle H}have this property besidesC4{\displaystyle C_{4}}. Such graphs are calledforcing graphsas their count controls the quasi-randomness of a sequence of graphs.
The forcing conjecture states the following: a bipartite graph H is forcing if and only if it contains a cycle.
It is straightforward to see that ifH{\displaystyle H}is forcing, then it is bipartite and not a tree. Some examples of forcing graphs are even cycles (shown by Chung, Graham, and Wilson). Skokan and Thoma showed that all complete bipartite graphs that are not trees are forcing.[13]
Sidorenko's conjecture for graphs of densityp{\displaystyle p}follows from the forcing conjecture. Furthermore, the forcing conjecture would show that graphs that are close to equality in Sidorenko's property must satisfy quasi-randomness conditions.[14] | https://en.wikipedia.org/wiki/Sidorenko%27s_conjecture |
In algebra, anelliptic algebrais a certainregular algebraof aGelfand–Kirillov dimensionthree (quantum polynomial ringin three variables) that corresponds to a cubic divisor in the projective spaceP2. If the cubic divisor happens to be anelliptic curve, then the algebra is called aSklyanin algebra. The notion is studied in the context ofnoncommutative projective geometry.
Thisalgebra-related article is astub. You can help Wikipedia byexpanding it. | https://en.wikipedia.org/wiki/Elliptic_algebra |
Flat Neighborhood Network (FNN) is a topology for distributed computing and other computer networks. Each node connects to two or more switches which, ideally, entirely cover the node collection, so that each node can connect to any other node in two "hops" (up to one switch and down to the other node). This contrasts with topologies with fewer cables per node, which communicate with remote nodes via intermediate nodes, as in a hypercube (see The Connection Machine).
Thiscomputer sciencearticle is astub. You can help Wikipedia byexpanding it.
This supercomputer-related article is astub. You can help Wikipedia byexpanding it. | https://en.wikipedia.org/wiki/Flat_neighborhood_network |
In computing, the process identifier (a.k.a. process ID or PID) is a number used by most operating system kernels—such as those of Unix, macOS and Windows—to uniquely identify an active process. This number may be used as a parameter in various function calls, allowing processes to be manipulated, such as adjusting the process's priority or killing it altogether.
In Unix-like operating systems, new processes are created by the fork() system call. The PID is returned to the parent process, enabling it to refer to the child in further function calls. The parent may, for example, wait for the child to terminate with the waitpid() function, or terminate the process with kill().
There are two tasks with specially distinguished process IDs: PID 0 is used forswapperorsched, which is part of the kernel and is a process that runs on a CPU core whenever that CPU core has nothing else to do.[1]Linux also calls the threads of this processidle tasks.[2]In some APIs, PID 0 is also used as a special value that always refers to the calling thread, process, or process group.[3][4]Process ID 1 is usually theinitprocess primarily responsible for starting and shutting down the system. Originally, process ID 1 was not specifically reserved for init by any technical measures: it simply had this ID as a natural consequence of being the first process invoked by the kernel. More recent Unix systems typically have additional kernel components visible as 'processes', in which case PID 1 is actively reserved for the init process to maintain consistency with older systems.
Process IDs are usually allocated on a sequential basis,[5]beginning at 0 and rising to a maximum value which varies from system to system. Once this limit is reached, allocation restarts at 300 and again increases. In macOS and HP-UX, allocation restarts at 100.[6]However, for this and subsequent passes any PIDs still assigned to processes are skipped. Some consider this to be a potential security vulnerability in that it allows information about the system to be extracted, or messages to be covertly passed between processes. As such, implementations that are particularly concerned about security may choose a different method of PID assignment.[7]On some systems, like MPE/iX, the lowest available PID is used, sometimes in an effort to minimize the number of process information kernel pages in memory.
The current process ID is provided by a getpid() system call,[8]or as the variable $$ in the shell. The process ID of a parent process is obtainable by a getppid() system call.[9]
On Linux, the maximum process ID is given by the pseudo-file /proc/sys/kernel/pid_max.[10]
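As a minimal illustration, the Python os module exposes thin wrappers around these system calls, and on Linux the pid_max pseudo-file can be read like an ordinary file (the snippet below is illustrative; the /proc path exists only on Linux):

```python
import os

# Wrappers around the getpid() and getppid() system calls.
print("current PID:", os.getpid())
print("parent PID: ", os.getppid())

# On Linux, the largest PID the kernel will allocate (pseudo-file).
try:
    with open("/proc/sys/kernel/pid_max") as f:
        print("pid_max:    ", f.read().strip())
except FileNotFoundError:
    print("pid_max:     not available on this system")
```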
Some processes, for example the moc music player and the MySQL daemon, write their PID to a documented file location, to allow other processes to look it up.[citation needed]
On the Windows family of operating systems, one can get the current process's ID using the GetCurrentProcessId() function of the Windows API,[11]and the ID of other processes using GetProcessId().[12]Internally, the process ID is called a client ID, and is allocated from the same namespace as thread IDs, so these two never overlap. The System Idle Process is given process ID 0. The System Process is given process ID 8 on Windows 2000 and 4 on Windows XP and Windows Server 2003.[13]On the Windows NT family of operating systems, process and thread identifiers are all multiples of 4, but this is not part of the specification.[14]
In mathematics, quantales are certain partially ordered algebraic structures that generalize locales (point-free topologies) as well as various multiplicative lattices of ideals from ring theory and functional analysis (C*-algebras, von Neumann algebras).[1]Quantales are sometimes referred to as complete residuated semigroups.
A quantale is a complete lattice Q with an associative binary operation ∗ : Q × Q → Q, called its multiplication, satisfying a distributive property such that

{\displaystyle x*\left(\bigvee _{i\in I}y_{i}\right)=\bigvee _{i\in I}(x*y_{i})}

and

{\displaystyle \left(\bigvee _{i\in I}y_{i}\right)*x=\bigvee _{i\in I}(y_{i}*x)}

for all x, y_i ∈ Q and i ∈ I (here I is any index set). The quantale is unital if it has an identity element e for its multiplication:

{\displaystyle x*e=x=e*x}

for all x ∈ Q. In this case, the quantale is naturally a monoid with respect to its multiplication ∗.
A unital quantale may be defined equivalently as amonoidin the categorySupof completejoin-semilattices.
A unital quantale is an idempotentsemiringunder join and multiplication.
A unital quantale in which the identity is thetop elementof the underlying lattice is said to bestrictly two-sided(or simplyintegral).
Acommutative quantaleis a quantale whose multiplication iscommutative. Aframe, with its multiplication given by themeetoperation, is a typical example of a strictly two-sided commutative quantale. Another simple example is provided by theunit intervaltogether with its usualmultiplication.
Anidempotent quantaleis a quantale whose multiplication isidempotent. Aframeis the same as an idempotent strictly two-sided quantale.
An involutive quantale is a quantale with an involution

{\displaystyle x^{\circ \circ }=x,\qquad (x*y)^{\circ }=y^{\circ }*x^{\circ }}

that preserves joins:

{\displaystyle \left(\bigvee _{i\in I}x_{i}\right)^{\circ }=\bigvee _{i\in I}x_{i}^{\circ }.}

A quantale homomorphism is a map f : Q1 → Q2 that preserves joins and multiplication for all x, y, x_i ∈ Q1 and i ∈ I:

{\displaystyle f\left(\bigvee _{i\in I}x_{i}\right)=\bigvee _{i\in I}f(x_{i})\qquad {\text{and}}\qquad f(x*y)=f(x)*f(y).}
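A concrete finite example is the power set of a monoid, with unions as joins and elementwise products as multiplication; the identity element is the singleton containing the monoid identity. The sketch below is an illustration only (it uses the additive monoid of integers modulo 4, and none of the names come from an existing library); it checks the binary distributive laws and the unit law by brute force:

```python
from itertools import chain, combinations

N = 4                                   # the monoid (Z/4Z, +) with identity 0
ELEMENTS = list(range(N))

def mul(xs, ys):
    """Multiplication of subsets: elementwise monoid operation (addition mod N)."""
    return frozenset((x + y) % N for x in xs for y in ys)

def join(sets):
    """Join in the power-set lattice: union."""
    return frozenset(chain.from_iterable(sets))

subsets = [frozenset(c) for r in range(N + 1) for c in combinations(ELEMENTS, r)]
identity = frozenset({0})

# Distributivity of * over (binary) joins, on both sides.
distributive = all(
    mul(x, join([y1, y2])) == join([mul(x, y1), mul(x, y2)])
    and mul(join([y1, y2]), x) == join([mul(y1, x), mul(y2, x)])
    for x in subsets for y1 in subsets for y2 in subsets
)
unital = all(mul(x, identity) == x == mul(identity, x) for x in subsets)

print("distributive over joins:", distributive)   # True
print("{0} is a two-sided unit:", unital)          # True
```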
Thismathematics-related article is astub. You can help Wikipedia byexpanding it. | https://en.wikipedia.org/wiki/Quantale |
Instatistics,maximum likelihood estimation(MLE) is a method ofestimatingtheparametersof an assumedprobability distribution, given some observed data. This is achieved bymaximizingalikelihood functionso that, under the assumedstatistical model, theobserved datais most probable. Thepointin theparameter spacethat maximizes the likelihood function is called the maximum likelihood estimate.[1]The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means ofstatistical inference.[2][3][4]
If the likelihood function isdifferentiable, thederivative testfor finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, theordinary least squaresestimator for alinear regressionmodel maximizes the likelihood when the random errors are assumed to havenormaldistributions with the same variance.[5]
From the perspective ofBayesian inference, MLE is generally equivalent tomaximum a posteriori (MAP) estimationwith aprior distributionthat isuniformin the region of interest. Infrequentist inference, MLE is a special case of anextremum estimator, with the objective function being the likelihood.
We model a set of observations as a randomsamplefrom an unknownjoint probability distributionwhich is expressed in terms of a set ofparameters. The goal of maximum likelihood estimation is to determine the parameters for which the observed data have the highest joint probability. We write the parameters governing the joint distribution as a vectorθ=[θ1,θ2,…,θk]T{\displaystyle \;\theta =\left[\theta _{1},\,\theta _{2},\,\ldots ,\,\theta _{k}\right]^{\mathsf {T}}\;}so that this distribution falls within aparametric family{f(⋅;θ)∣θ∈Θ},{\displaystyle \;\{f(\cdot \,;\theta )\mid \theta \in \Theta \}\;,}whereΘ{\displaystyle \,\Theta \,}is called theparameter space, a finite-dimensional subset ofEuclidean space. Evaluating the joint density at the observed data sampley=(y1,y2,…,yn){\displaystyle \;\mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})\;}gives a real-valued function,Ln(θ)=Ln(θ;y)=fn(y;θ),{\displaystyle {\mathcal {L}}_{n}(\theta )={\mathcal {L}}_{n}(\theta ;\mathbf {y} )=f_{n}(\mathbf {y} ;\theta )\;,}which is called thelikelihood function. Forindependent random variables,fn(y;θ){\displaystyle f_{n}(\mathbf {y} ;\theta )}will be the product of univariatedensity functions:fn(y;θ)=∏k=1nfkunivar(yk;θ).{\displaystyle f_{n}(\mathbf {y} ;\theta )=\prod _{k=1}^{n}\,f_{k}^{\mathsf {univar}}(y_{k};\theta )~.}
The goal of maximum likelihood estimation is to find the values of the model parameters that maximize the likelihood function over the parameter space,[6]that is:θ^=argmaxθ∈ΘLn(θ;y).{\displaystyle {\hat {\theta }}={\underset {\theta \in \Theta }{\operatorname {arg\;max} }}\,{\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}
Intuitively, this selects the parameter values that make the observed data most probable. The specific valueθ^=θ^n(y)∈Θ{\displaystyle ~{\hat {\theta }}={\hat {\theta }}_{n}(\mathbf {y} )\in \Theta ~}that maximizes the likelihood functionLn{\displaystyle \,{\mathcal {L}}_{n}\,}is called the maximum likelihood estimate. Further, if the functionθ^n:Rn→Θ{\displaystyle \;{\hat {\theta }}_{n}:\mathbb {R} ^{n}\to \Theta \;}so defined ismeasurable, then it is called the maximum likelihoodestimator. It is generally a function defined over thesample space, i.e. taking a given sample as its argument. Asufficient but not necessarycondition for its existence is for the likelihood function to becontinuousover a parameter spaceΘ{\displaystyle \,\Theta \,}that iscompact.[7]For anopenΘ{\displaystyle \,\Theta \,}the likelihood function may increase without ever reaching a supremum value.
In practice, it is often convenient to work with thenatural logarithmof the likelihood function, called thelog-likelihood:ℓ(θ;y)=lnLn(θ;y).{\displaystyle \ell (\theta \,;\mathbf {y} )=\ln {\mathcal {L}}_{n}(\theta \,;\mathbf {y} )~.}Since the logarithm is amonotonic function, the maximum ofℓ(θ;y){\displaystyle \;\ell (\theta \,;\mathbf {y} )\;}occurs at the same value ofθ{\displaystyle \theta }as does the maximum ofLn.{\displaystyle \,{\mathcal {L}}_{n}~.}[8]Ifℓ(θ;y){\displaystyle \ell (\theta \,;\mathbf {y} )}isdifferentiableinΘ,{\displaystyle \,\Theta \,,}sufficient conditionsfor the occurrence of a maximum (or a minimum) are∂ℓ∂θ1=0,∂ℓ∂θ2=0,…,∂ℓ∂θk=0,{\displaystyle {\frac {\partial \ell }{\partial \theta _{1}}}=0,\quad {\frac {\partial \ell }{\partial \theta _{2}}}=0,\quad \ldots ,\quad {\frac {\partial \ell }{\partial \theta _{k}}}=0~,}known as the likelihood equations. For some models, these equations can be explicitly solved forθ^,{\displaystyle \,{\widehat {\theta \,}}\,,}but in general no closed-form solution to the maximization problem is known or available, and an MLE can only be found vianumerical optimization. Another problem is that in finite samples, there may exist multiplerootsfor the likelihood equations.[9]Whether the identified rootθ^{\displaystyle \,{\widehat {\theta \,}}\,}of the likelihood equations is indeed a (local) maximum depends on whether the matrix of second-order partial and cross-partial derivatives, the so-calledHessian matrix
H(θ^)=[∂2ℓ∂θ12|θ=θ^∂2ℓ∂θ1∂θ2|θ=θ^…∂2ℓ∂θ1∂θk|θ=θ^∂2ℓ∂θ2∂θ1|θ=θ^∂2ℓ∂θ22|θ=θ^…∂2ℓ∂θ2∂θk|θ=θ^⋮⋮⋱⋮∂2ℓ∂θk∂θ1|θ=θ^∂2ℓ∂θk∂θ2|θ=θ^…∂2ℓ∂θk2|θ=θ^],{\displaystyle \mathbf {H} \left({\widehat {\theta \,}}\right)={\begin{bmatrix}\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{1}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{2}\,\partial \theta _{k}}}\right|_{\theta ={\widehat {\theta \,}}}\\\vdots &\vdots &\ddots &\vdots \\\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{1}}}\right|_{\theta ={\widehat {\theta \,}}}&\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}\,\partial \theta _{2}}}\right|_{\theta ={\widehat {\theta \,}}}&\dots &\left.{\frac {\partial ^{2}\ell }{\partial \theta _{k}^{2}}}\right|_{\theta ={\widehat {\theta \,}}}\end{bmatrix}}~,}
isnegative semi-definiteatθ^{\displaystyle {\widehat {\theta \,}}}, as this indicates localconcavity. Conveniently, most commonprobability distributions– in particular theexponential family– arelogarithmically concave.[10][11]
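As an illustration of maximizing a log-likelihood numerically (the example below is a sketch, not from the text: it assumes SciPy, simulated exponential data, and the exponential model, for which the closed-form MLE of the rate is the reciprocal of the sample mean):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
y = rng.exponential(scale=1 / 2.5, size=1000)        # simulated data, true rate 2.5

def neg_log_likelihood(rate):
    """Negative log-likelihood of the sample under an Exponential(rate) model."""
    if rate <= 0:
        return np.inf
    return -(len(y) * np.log(rate) - rate * y.sum())

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100.0), method="bounded")
print("numerical MLE of the rate:", res.x)
print("closed-form MLE (1/mean) :", 1 / y.mean())
```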
While the domain of the likelihood function—theparameter space—is generally a finite-dimensional subset ofEuclidean space, additionalrestrictionssometimes need to be incorporated into the estimation process. The parameter space can be expressed asΘ={θ:θ∈Rk,h(θ)=0},{\displaystyle \Theta =\left\{\theta :\theta \in \mathbb {R} ^{k},\;h(\theta )=0\right\}~,}
whereh(θ)=[h1(θ),h2(θ),…,hr(θ)]{\displaystyle \;h(\theta )=\left[h_{1}(\theta ),h_{2}(\theta ),\ldots ,h_{r}(\theta )\right]\;}is avector-valued functionmappingRk{\displaystyle \,\mathbb {R} ^{k}\,}intoRr.{\displaystyle \;\mathbb {R} ^{r}~.}Estimating the true parameterθ{\displaystyle \theta }belonging toΘ{\displaystyle \Theta }then, as a practical matter, means to find the maximum of the likelihood function subject to theconstrainth(θ)=0.{\displaystyle ~h(\theta )=0~.}
Theoretically, the most natural approach to thisconstrained optimizationproblem is the method of substitution, that is "filling out" the restrictionsh1,h2,…,hr{\displaystyle \;h_{1},h_{2},\ldots ,h_{r}\;}to a seth1,h2,…,hr,hr+1,…,hk{\displaystyle \;h_{1},h_{2},\ldots ,h_{r},h_{r+1},\ldots ,h_{k}\;}in such a way thath∗=[h1,h2,…,hk]{\displaystyle \;h^{\ast }=\left[h_{1},h_{2},\ldots ,h_{k}\right]\;}is aone-to-one functionfromRk{\displaystyle \mathbb {R} ^{k}}to itself, and reparameterize the likelihood function by settingϕi=hi(θ1,θ2,…,θk).{\displaystyle \;\phi _{i}=h_{i}(\theta _{1},\theta _{2},\ldots ,\theta _{k})~.}[12]Because of the equivariance of the maximum likelihood estimator, the properties of the MLE apply to the restricted estimates also.[13]For instance, in amultivariate normal distributionthecovariance matrixΣ{\displaystyle \,\Sigma \,}must bepositive-definite; this restriction can be imposed by replacingΣ=ΓTΓ,{\displaystyle \;\Sigma =\Gamma ^{\mathsf {T}}\Gamma \;,}whereΓ{\displaystyle \Gamma }is a realupper triangular matrixandΓT{\displaystyle \Gamma ^{\mathsf {T}}}is itstranspose.[14]
In practice, restrictions are usually imposed using the method of Lagrange which, given the constraints as defined above, leads to therestricted likelihood equations∂ℓ∂θ−∂h(θ)T∂θλ=0{\displaystyle {\frac {\partial \ell }{\partial \theta }}-{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\lambda =0}andh(θ)=0,{\displaystyle h(\theta )=0\;,}
whereλ=[λ1,λ2,…,λr]T{\displaystyle ~\lambda =\left[\lambda _{1},\lambda _{2},\ldots ,\lambda _{r}\right]^{\mathsf {T}}~}is a column-vector ofLagrange multipliersand∂h(θ)T∂θ{\displaystyle \;{\frac {\partial h(\theta )^{\mathsf {T}}}{\partial \theta }}\;}is thek × rJacobian matrixof partial derivatives.[12]Naturally, if the constraints are not binding at the maximum, the Lagrange multipliers should be zero.[15]This in turn allows for a statistical test of the "validity" of the constraint, known as theLagrange multiplier test.
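A routine worked example of the restricted likelihood equations (illustrative, not drawn from the references above) is the multinomial model with observed counts x1, …, xk, total n, and the single constraint that the probabilities sum to one:

{\displaystyle \ell (\theta )={\text{const}}+\sum _{i=1}^{k}x_{i}\ln \theta _{i},\qquad h(\theta )=\sum _{i=1}^{k}\theta _{i}-1=0.}

The restricted likelihood equations give

{\displaystyle {\frac {x_{i}}{\theta _{i}}}-\lambda =0\quad \Longrightarrow \quad \theta _{i}={\frac {x_{i}}{\lambda }},\qquad \sum _{i=1}^{k}\theta _{i}=1\quad \Longrightarrow \quad \lambda =\sum _{i=1}^{k}x_{i}=n,\qquad {\widehat {\theta }}_{i}={\frac {x_{i}}{n}}.}

Here the multiplier λ = n is nonzero because the constraint is binding at the maximum.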
Nonparametric maximum likelihood estimation can be performed using theempirical likelihood.
A maximum likelihood estimator is anextremum estimatorobtained by maximizing, as a function ofθ, theobjective functionℓ^(θ;x){\displaystyle {\widehat {\ell \,}}(\theta \,;x)}. If the data areindependent and identically distributed, then we haveℓ^(θ;x)=∑i=1nlnf(xi∣θ),{\displaystyle {\widehat {\ell \,}}(\theta \,;x)=\sum _{i=1}^{n}\ln f(x_{i}\mid \theta ),}this being the sample analogue of the expected log-likelihoodℓ(θ)=E[lnf(xi∣θ)]{\displaystyle \ell (\theta )=\operatorname {\mathbb {E} } [\,\ln f(x_{i}\mid \theta )\,]}, where this expectation is taken with respect to the true density.
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter value.[16]However, like other estimation methods, maximum likelihood estimation possesses a number of attractive limiting properties: as the sample size increases to infinity, sequences of maximum likelihood estimators are consistent, equivariant under reparameterization, asymptotically normal and efficient (attaining the Cramér–Rao bound), and second-order efficient after correction for bias; these properties are discussed in the subsections below.
Under the conditions outlined below, the maximum likelihood estimator isconsistent. The consistency means that if the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}and we have a sufficiently large number of observationsn, then it is possible to find the value ofθ0with arbitrary precision. In mathematical terms this means that asngoes to infinity the estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges in probabilityto its true value:θ^mle→pθ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{p}}}\ \theta _{0}.}
Under slightly stronger conditions, the estimator convergesalmost surely(orstrongly):θ^mle→a.s.θ0.{\displaystyle {\widehat {\theta \,}}_{\mathrm {mle} }\ {\xrightarrow {\text{a.s.}}}\ \theta _{0}.}
In practical applications, data is never generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}. Rather,f(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}is a model, often in idealized form, of the process generated by the data. It is a common aphorism in statistics thatall models are wrong. Thus, true consistency does not occur in practical applications. Nevertheless, consistency is often considered to be a desirable property for an estimator to have.
To establish consistency, the following conditions are sufficient.[17]
θ≠θ0⇔f(⋅∣θ)≠f(⋅∣θ0).{\displaystyle \theta \neq \theta _{0}\quad \Leftrightarrow \quad f(\cdot \mid \theta )\neq f(\cdot \mid \theta _{0}).}In other words, different parameter valuesθcorrespond to different distributions within the model. If this condition did not hold, there would be some valueθ1such thatθ0andθ1generate an identical distribution of the observable data. Then we would not be able to distinguish between these two parameters even with an infinite amount of data—these parameters would have beenobservationally equivalent.
The identification condition establishes that the log-likelihood has a unique global maximum. Compactness implies that the likelihood cannot approach the maximum value arbitrarily close at some other point (as demonstrated for example in the picture on the right).
Compactness is only a sufficient condition and not a necessary condition. Compactness can be replaced by some other conditions, such as:
P[lnf(x∣θ)∈C0(Θ)]=1.{\displaystyle \operatorname {\mathbb {P} } {\Bigl [}\;\ln f(x\mid \theta )\;\in \;C^{0}(\Theta )\;{\Bigr ]}=1.}
The dominance condition can be employed in the case ofi.i.d.observations. In the non-i.i.d. case, the uniform convergence in probability can be checked by showing that the sequenceℓ^(θ∣x){\displaystyle {\widehat {\ell \,}}(\theta \mid x)}isstochastically equicontinuous.
If one wants to demonstrate that the ML estimatorθ^{\displaystyle {\widehat {\theta \,}}}converges toθ0almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:supθ∈Θ‖ℓ^(θ∣x)−ℓ(θ)‖→a.s.0.{\displaystyle \sup _{\theta \in \Theta }\left\|\;{\widehat {\ell \,}}(\theta \mid x)-\ell (\theta )\;\right\|\ \xrightarrow {\text{a.s.}} \ 0.}
Additionally, if (as assumed above) the data were generated byf(⋅;θ0){\displaystyle f(\cdot \,;\theta _{0})}, then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. Specifically,[18]n(θ^mle−θ0)→dN(0,I−1){\displaystyle {\sqrt {n}}\left({\widehat {\theta \,}}_{\mathrm {mle} }-\theta _{0}\right)\ \xrightarrow {d} \ {\mathcal {N}}\left(0,\,I^{-1}\right)}whereIis theFisher information matrix.
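This limiting behaviour can be illustrated by simulation. The sketch below is illustrative only (it assumes NumPy, exponential data with rate λ0 = 2, and uses the fact that the Fisher information for the exponential rate is 1/λ²; none of this is from the text above): repeated samples give maximum likelihood estimates whose standardized values are approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(7)
rate0, n, reps = 2.0, 500, 20_000

# MLE of the exponential rate is 1 / (sample mean); Fisher information is 1/rate^2.
samples = rng.exponential(scale=1 / rate0, size=(reps, n))
mle = 1 / samples.mean(axis=1)
standardized = np.sqrt(n) * (mle - rate0) / rate0    # sqrt(n)(mle - rate0) * sqrt(I(rate0))

print("mean of standardized MLEs:", round(standardized.mean(), 3))   # close to 0
print("std  of standardized MLEs:", round(standardized.std(), 3))    # close to 1
```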
The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, ifθ^{\displaystyle {\widehat {\theta \,}}}is the MLE forθ{\displaystyle \theta }, and ifg(θ){\displaystyle g(\theta )}is any transformation ofθ{\displaystyle \theta }, then the MLE forα=g(θ){\displaystyle \alpha =g(\theta )}is by definition[19]
α^=g(θ^).{\displaystyle {\widehat {\alpha }}=g(\,{\widehat {\theta \,}}\,).\,}
It maximizes the so-calledprofile likelihood:
L¯(α)=supθ:α=g(θ)L(θ).{\displaystyle {\bar {L}}(\alpha )=\sup _{\theta :\alpha =g(\theta )}L(\theta ).\,}
The MLE is also equivariant with respect to certain transformations of the data. Ify=g(x){\displaystyle y=g(x)}whereg{\displaystyle g}is one to one and does not depend on the parameters to be estimated, then the density functions satisfy
fY(y)=fX(g−1(y))|(g−1(y))′|{\displaystyle f_{Y}(y)=f_{X}(g^{-1}(y))\,|(g^{-1}(y))^{\prime }|}
and hence the likelihood functions forX{\displaystyle X}andY{\displaystyle Y}differ only by a factor that does not depend on the model parameters.
For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data. In fact, in the log-normal case ifX∼N(0,1){\displaystyle X\sim {\mathcal {N}}(0,1)}, thenY=g(X)=eX{\displaystyle Y=g(X)=e^{X}}follows alog-normal distribution. The density of Y follows withfX{\displaystyle f_{X}}standardNormalandg−1(y)=log(y){\displaystyle g^{-1}(y)=\log(y)},|(g−1(y))′|=1y{\displaystyle |(g^{-1}(y))^{\prime }|={\frac {1}{y}}}fory>0{\displaystyle y>0}.
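This can be checked directly; the sketch below is illustrative only (it assumes NumPy and SciPy, and uses SciPy's parameterization of the log-normal, in which the shape parameter is σ and the scale is exp(μ)):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.5, sigma=1.2, size=10_000)

# MLE of a normal distribution fitted to the logarithm of the data:
mu_hat, sigma_hat = np.log(x).mean(), np.log(x).std()   # note the 1/n variance

# MLE of the log-normal distribution fitted to the data itself
# (scipy parameterizes it as shape = sigma, scale = exp(mu), loc fixed at 0):
shape, loc, scale = stats.lognorm.fit(x, floc=0)

print("normal fit of log(x):", mu_hat, sigma_hat)
print("log-normal fit of x :", np.log(scale), shape)    # essentially the same values
```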
As assumed above, if the data were generated byf(⋅;θ0),{\displaystyle ~f(\cdot \,;\theta _{0})~,}then under certain conditions, it can also be shown that the maximum likelihood estimatorconverges in distributionto a normal distribution. It is√n-consistent and asymptotically efficient, meaning that it reaches theCramér–Rao bound. Specifically,[18]
n(θ^mle−θ0)→dN(0,I−1),{\displaystyle {\sqrt {n\,}}\,\left({\widehat {\theta \,}}_{\text{mle}}-\theta _{0}\right)\ \ \xrightarrow {d} \ \ {\mathcal {N}}\left(0,\ {\mathcal {I}}^{-1}\right)~,}whereI{\displaystyle ~{\mathcal {I}}~}is theFisher information matrix:Ijk=E[−∂2lnfθ0(Xt)∂θj∂θk].{\displaystyle {\mathcal {I}}_{jk}=\operatorname {\mathbb {E} } \,{\biggl [}\;-{\frac {\partial ^{2}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{j}\,\partial \theta _{k}}}\;{\biggr ]}~.}
In particular, it means that thebiasof the maximum likelihood estimator is equal to zero up to the order1/√n.
However, when we consider the higher-order terms in theexpansionof the distribution of this estimator, it turns out thatθmlehas bias of order1⁄n. This bias is equal to (componentwise)[20]
bh≡E[(θ^mle−θ0)h]=1n∑i,j,k=1mIhiIjk(12Kijk+Jj,ik){\displaystyle b_{h}\;\equiv \;\operatorname {\mathbb {E} } {\biggl [}\;\left({\widehat {\theta }}_{\mathrm {mle} }-\theta _{0}\right)_{h}\;{\biggr ]}\;=\;{\frac {1}{\,n\,}}\,\sum _{i,j,k=1}^{m}\;{\mathcal {I}}^{hi}\;{\mathcal {I}}^{jk}\left({\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\right)}
whereIjk{\displaystyle {\mathcal {I}}^{jk}}(with superscripts) denotes the (j,k)-th component of theinverseFisher information matrixI−1{\displaystyle {\mathcal {I}}^{-1}}, and
12Kijk+Jj,ik=E[12∂3lnfθ0(Xt)∂θi∂θj∂θk+∂lnfθ0(Xt)∂θj∂2lnfθ0(Xt)∂θi∂θk].{\displaystyle {\frac {1}{\,2\,}}\,K_{ijk}\;+\;J_{j,ik}\;=\;\operatorname {\mathbb {E} } \,{\biggl [}\;{\frac {1}{2}}{\frac {\partial ^{3}\ln f_{\theta _{0}}(X_{t})}{\partial \theta _{i}\;\partial \theta _{j}\;\partial \theta _{k}}}+{\frac {\;\partial \ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{j}}}\,{\frac {\;\partial ^{2}\ln f_{\theta _{0}}(X_{t})\;}{\partial \theta _{i}\,\partial \theta _{k}}}\;{\biggr ]}~.}
Using these formulae it is possible to estimate the second-order bias of the maximum likelihood estimator, andcorrectfor that bias by subtracting it:θ^mle∗=θ^mle−b^.{\displaystyle {\widehat {\theta \,}}_{\text{mle}}^{*}={\widehat {\theta \,}}_{\text{mle}}-{\widehat {b\,}}~.}This estimator is unbiased up to the terms of order1/n, and is called thebias-corrected maximum likelihood estimator.
This bias-corrected estimator issecond-order efficient(at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of the order1/n2. It is possible to continue this process, that is to derive the third-order bias-correction term, and so on. However, the maximum likelihood estimator isnotthird-order efficient.[21]
A maximum likelihood estimator coincides with themost probableBayesian estimatorgiven auniformprior distributionon theparameters. Indeed, themaximum a posteriori estimateis the parameterθthat maximizes the probability ofθgiven the data, given by Bayes' theorem:
P(θ∣x1,x2,…,xn)=f(x1,x2,…,xn∣θ)P(θ)P(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (\theta \mid x_{1},x_{2},\ldots ,x_{n})={\frac {f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}{\operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}}}
whereP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is the prior distribution for the parameterθand whereP(x1,x2,…,xn){\displaystyle \operatorname {\mathbb {P} } (x_{1},x_{2},\ldots ,x_{n})}is the probability of the data averaged over all parameters. Since the denominator is independent ofθ, the Bayesian estimator is obtained by maximizingf(x1,x2,…,xn∣θ)P(θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )\operatorname {\mathbb {P} } (\theta )}with respect toθ. If we further assume that the priorP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}is a uniform distribution, the Bayesian estimator is obtained by maximizing the likelihood functionf(x1,x2,…,xn∣θ){\displaystyle f(x_{1},x_{2},\ldots ,x_{n}\mid \theta )}. Thus the Bayesian estimator coincides with the maximum likelihood estimator for a uniform prior distributionP(θ){\displaystyle \operatorname {\mathbb {P} } (\theta )}.
In many practical applications inmachine learning, maximum-likelihood estimation is used as the model for parameter estimation.
Bayesian decision theory is about designing a classifier that minimizes total expected risk; in particular, when the costs (the loss function) associated with different decisions are equal, the classifier minimizes the error over the whole distribution.[22]
Thus, the Bayes decision rule is stated as: decide w1 if P(w1 ∣ x) > P(w2 ∣ x), and decide w2 otherwise,
wherew1,w2{\displaystyle \;w_{1}\,,w_{2}\;}are predictions of different classes. From a perspective of minimizing error, it can also be stated asw=argmaxw∫−∞∞P(error∣x)P(x)dx{\displaystyle w={\underset {w}{\operatorname {arg\;max} }}\;\int _{-\infty }^{\infty }\operatorname {\mathbb {P} } ({\text{ error}}\mid x)\operatorname {\mathbb {P} } (x)\,\operatorname {d} x~}whereP(error∣x)=P(w1∣x){\displaystyle \operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{1}\mid x)~}if we decidew2{\displaystyle \;w_{2}\;}andP(error∣x)=P(w2∣x){\displaystyle \;\operatorname {\mathbb {P} } ({\text{ error}}\mid x)=\operatorname {\mathbb {P} } (w_{2}\mid x)\;}if we decidew1.{\displaystyle \;w_{1}\;.}
By applyingBayes' theoremP(wi∣x)=P(x∣wi)P(wi)P(x){\displaystyle \operatorname {\mathbb {P} } (w_{i}\mid x)={\frac {\operatorname {\mathbb {P} } (x\mid w_{i})\operatorname {\mathbb {P} } (w_{i})}{\operatorname {\mathbb {P} } (x)}}},
and if we further assume the zero-or-one loss function, which is a same loss for all errors, the Bayes Decision rule can be reformulated as:hBayes=argmaxw[P(x∣w)P(w)],{\displaystyle h_{\text{Bayes}}={\underset {w}{\operatorname {arg\;max} }}\,{\bigl [}\,\operatorname {\mathbb {P} } (x\mid w)\,\operatorname {\mathbb {P} } (w)\,{\bigr ]}\;,}wherehBayes{\displaystyle h_{\text{Bayes}}}is the prediction andP(w){\displaystyle \;\operatorname {\mathbb {P} } (w)\;}is theprior probability.
Findingθ^{\displaystyle {\hat {\theta }}}that maximizes the likelihood is asymptotically equivalent to finding theθ^{\displaystyle {\hat {\theta }}}that defines a probability distribution (Qθ^{\displaystyle Q_{\hat {\theta }}}) that has a minimal distance, in terms ofKullback–Leibler divergence, to the real probability distribution from which our data were generated (i.e., generated byPθ0{\displaystyle P_{\theta _{0}}}).[23]In an ideal world, P and Q are the same (and the only thing unknown isθ{\displaystyle \theta }that defines P), but even if they are not and the model we use is misspecified, still the MLE will give us the "closest" distribution (within the restriction of a model Q that depends onθ^{\displaystyle {\hat {\theta }}}) to the real distributionPθ0{\displaystyle P_{\theta _{0}}}.[24]
For simplicity of notation, let's assume that P=Q. Let there beni.i.ddata samplesy=(y1,y2,…,yn){\displaystyle \mathbf {y} =(y_{1},y_{2},\ldots ,y_{n})}from some probabilityy∼Pθ0{\displaystyle y\sim P_{\theta _{0}}}, that we try to estimate by findingθ^{\displaystyle {\hat {\theta }}}that will maximize the likelihood usingPθ{\displaystyle P_{\theta }}, then:θ^=argmaxθLPθ(y)=argmaxθPθ(y)=argmaxθP(y∣θ)=argmaxθ∏i=1nP(yi∣θ)=argmaxθ∑i=1nlogP(yi∣θ)=argmaxθ(∑i=1nlogP(yi∣θ)−∑i=1nlogP(yi∣θ0))=argmaxθ∑i=1n(logP(yi∣θ)−logP(yi∣θ0))=argmaxθ∑i=1nlogP(yi∣θ)P(yi∣θ0)=argminθ∑i=1nlogP(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nlogP(yi∣θ0)P(yi∣θ)=argminθ1n∑i=1nhθ(yi)⟶n→∞argminθE[hθ(y)]=argminθ∫Pθ0(y)hθ(y)dy=argminθ∫Pθ0(y)logP(y∣θ0)P(y∣θ)dy=argminθDKL(Pθ0∥Pθ){\displaystyle {\begin{aligned}{\hat {\theta }}&={\underset {\theta }{\operatorname {arg\,max} }}\,L_{P_{\theta }}(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P_{\theta }(\mathbf {y} )={\underset {\theta }{\operatorname {arg\,max} }}\,P(\mathbf {y} \mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\prod _{i=1}^{n}P(y_{i}\mid \theta )={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log P(y_{i}\mid \theta )\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\left(\sum _{i=1}^{n}\log P(y_{i}\mid \theta )-\sum _{i=1}^{n}\log P(y_{i}\mid \theta _{0})\right)={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\left(\log P(y_{i}\mid \theta )-\log P(y_{i}\mid \theta _{0})\right)\\&={\underset {\theta }{\operatorname {arg\,max} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta )}{P(y_{i}\mid \theta _{0})}}={\underset {\theta }{\operatorname {arg\,min} }}\,\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}\log {\frac {P(y_{i}\mid \theta _{0})}{P(y_{i}\mid \theta )}}\\&={\underset {\theta }{\operatorname {arg\,min} }}\,{\frac {1}{n}}\sum _{i=1}^{n}h_{\theta }(y_{i})\quad {\underset {n\to \infty }{\longrightarrow }}\quad {\underset {\theta }{\operatorname {arg\,min} }}\,E[h_{\theta }(y)]\\&={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)h_{\theta }(y)dy={\underset {\theta }{\operatorname {arg\,min} }}\,\int P_{\theta _{0}}(y)\log {\frac {P(y\mid \theta _{0})}{P(y\mid \theta )}}dy\\&={\underset {\theta }{\operatorname {arg\,min} }}\,D_{\text{KL}}(P_{\theta _{0}}\parallel P_{\theta })\end{aligned}}}
Here,hθ(x)=logP(x∣θ0)P(x∣θ){\displaystyle h_{\theta }(x)=\log {\frac {P(x\mid \theta _{0})}{P(x\mid \theta )}}}. Using h helps to see how the law of large numbers is used to move from the average of h(x) to its expectation, via the law of the unconscious statistician. The first several transitions rely on the laws of logarithms and on the fact that the θ^{\displaystyle {\hat {\theta }}} that maximizes some function also maximizes any monotonic transformation of that function (e.g., adding or multiplying by a constant).
Sincecross entropyis justShannon's entropyplus KL divergence, and since the entropy ofPθ0{\displaystyle P_{\theta _{0}}}is constant, then the MLE is also asymptotically minimizing cross entropy.[25]
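A small numerical illustration (assumed setup: Bernoulli data with success probability 0.3 and a grid of candidate parameters; this is not from the references above) shows the average negative log-likelihood and the Kullback–Leibler divergence being minimized at essentially the same parameter value:

```python
import numpy as np

rng = np.random.default_rng(3)
p_true = 0.3
y = rng.binomial(1, p_true, size=50_000)

theta = np.linspace(0.001, 0.999, 999)

# Average negative log-likelihood of the sample under Bernoulli(theta).
nll = -(y.mean() * np.log(theta) + (1 - y.mean()) * np.log(1 - theta))

# KL divergence from Bernoulli(p_true) to Bernoulli(theta).
kl = (p_true * np.log(p_true / theta)
      + (1 - p_true) * np.log((1 - p_true) / (1 - theta)))

print("argmin of average NLL:", theta[np.argmin(nll)])   # ~ sample mean ~ 0.3
print("argmin of KL         :", theta[np.argmin(kl)])    # p_true on the grid
```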
Consider a case wherentickets numbered from 1 tonare placed in a box and one is selected at random (seeuniform distribution); thus, the sample size is 1. Ifnis unknown, then the maximum likelihood estimatorn^{\displaystyle {\widehat {n}}}ofnis the numbermon the drawn ticket. (The likelihood is 0 forn<m,1⁄nforn≥m, and this is greatest whenn=m. Note that the maximum likelihood estimate ofnoccurs at the lower extreme of possible values {m,m+ 1, ...}, rather than somewhere in the "middle" of the range of possible values, which would result in less bias.) Theexpected valueof the numbermon the drawn ticket, and therefore the expected value ofn^{\displaystyle {\widehat {n}}}, is (n+ 1)/2. As a result, with a sample size of 1, the maximum likelihood estimator fornwill systematically underestimatenby (n− 1)/2.
Suppose one wishes to determine just how biased anunfair coinis. Call the probability of tossing a 'head'p. The goal then becomes to determinep.
Suppose the coin is tossed 80 times: i.e. the sample might be something likex1= H,x2= T, ...,x80= T, and the count of the number ofheads"H" is observed.
The probability of tossing tails is 1 − p (so here p is θ above). Suppose the outcome is 49 heads and 31 tails, and suppose the coin was taken from a box containing three coins: one which gives heads with probability p = 1⁄3, one which gives heads with probability p = 1⁄2, and another which gives heads with probability p = 2⁄3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80, number of successes equal to 49, but for different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:
P[H=49∣p=13]=(8049)(13)49(1−13)31≈0.000,P[H=49∣p=12]=(8049)(12)49(1−12)31≈0.012,P[H=49∣p=23]=(8049)(23)49(1−23)31≈0.054.{\displaystyle {\begin{aligned}\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{3}})^{49}(1-{\tfrac {1}{3}})^{31}\approx 0.000,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {1}{2}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {1}{2}})^{49}(1-{\tfrac {1}{2}})^{31}\approx 0.012,\\[6pt]\operatorname {\mathbb {P} } {\bigl [}\;\mathrm {H} =49\mid p={\tfrac {2}{3}}\;{\bigr ]}&={\binom {80}{49}}({\tfrac {2}{3}})^{49}(1-{\tfrac {2}{3}})^{31}\approx 0.054~.\end{aligned}}}
The likelihood is maximized whenp=2⁄3, and so this is themaximum likelihood estimateforp.
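The three likelihood values can be reproduced directly from the binomial probability mass function (a minimal sketch; the numbers come from the example above):

```python
from math import comb

n, h = 80, 49
for p in (1 / 3, 1 / 2, 2 / 3):
    likelihood = comb(n, h) * p ** h * (1 - p) ** (n - h)
    print(f"p = {p:.4f}   P[H = 49 | p] = {likelihood:.4f}")
# Output is approximately 0.0000, 0.0120, 0.0540;
# the largest value occurs at p = 2/3, the maximum likelihood estimate.
```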
Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is {\displaystyle L(p)=f_{D}(\mathrm {H} =49\mid p)={\binom {80}{49}}p^{49}(1-p)^{31}~,}
and the maximisation is over all possible values 0 ≤ p ≤ 1.
One way to maximize this function is bydifferentiatingwith respect topand setting to zero:
0=∂∂p((8049)p49(1−p)31),0=49p48(1−p)31−31p49(1−p)30=p48(1−p)30[49(1−p)−31p]=p48(1−p)30[49−80p].{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial p}}\left({\binom {80}{49}}p^{49}(1-p)^{31}\right)~,\\[8pt]0&=49p^{48}(1-p)^{31}-31p^{49}(1-p)^{30}\\[8pt]&=p^{48}(1-p)^{30}\left[49(1-p)-31p\right]\\[8pt]&=p^{48}(1-p)^{30}\left[49-80p\right]~.\end{aligned}}}
This is a product of three terms. The first term is 0 when p = 0. The second is 0 when p = 1. The third is zero when p = 49⁄80. The solution that maximizes the likelihood is clearly p = 49⁄80 (since p = 0 and p = 1 result in a likelihood of 0). Thus the maximum likelihood estimator for p is 49⁄80.
This result is easily generalized by substituting a letter such as s in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields s⁄n, which is the maximum likelihood estimator for any sequence of n Bernoulli trials resulting in s 'successes'.
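The closed-form answer s/n can be checked numerically by scanning the log-likelihood over a fine grid of p (a minimal sketch; the grid resolution is arbitrary, and the constant binomial coefficient is dropped):

```python
import numpy as np

n, s = 80, 49
p_grid = np.linspace(0.001, 0.999, 9_999)
log_lik = s * np.log(p_grid) + (n - s) * np.log(1 - p_grid)   # binomial coefficient is constant in p

p_hat = p_grid[np.argmax(log_lik)]
print(p_hat, s / n)   # both approximately 0.6125 = 49/80
```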
For thenormal distributionN(μ,σ2){\displaystyle {\mathcal {N}}(\mu ,\sigma ^{2})}which hasprobability density function
f(x∣μ,σ2)=12πσ2exp(−(x−μ)22σ2),{\displaystyle f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp \left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right),}
the correspondingprobability density functionfor a sample ofnindependent identically distributednormal random variables (the likelihood) is
f(x1,…,xn∣μ,σ2)=∏i=1nf(xi∣μ,σ2)=(12πσ2)n/2exp(−∑i=1n(xi−μ)22σ2).{\displaystyle f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})=\prod _{i=1}^{n}f(x_{i}\mid \mu ,\sigma ^{2})=\left({\frac {1}{2\pi \sigma ^{2}}}\right)^{n/2}\exp \left(-{\frac {\sum _{i=1}^{n}(x_{i}-\mu )^{2}}{2\sigma ^{2}}}\right).}
This family of distributions has two parameters:θ= (μ,σ); so we maximize the likelihood,L(μ,σ2)=f(x1,…,xn∣μ,σ2){\displaystyle {\mathcal {L}}(\mu ,\sigma ^{2})=f(x_{1},\ldots ,x_{n}\mid \mu ,\sigma ^{2})}, over both parameters simultaneously, or if possible, individually.
Since the logarithm function itself is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm (the log-likelihood itself is not necessarily strictly increasing). The log-likelihood can be written as follows:
log(L(μ,σ2))=−n2log(2πσ2)−12σ2∑i=1n(xi−μ)2{\displaystyle \log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{2}}\log(2\pi \sigma ^{2})-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}}
(Note: the log-likelihood is closely related toinformation entropyandFisher information.)
We now compute the derivatives of this log-likelihood as follows.
0=∂∂μlog(L(μ,σ2))=0−−2n(x¯−μ)2σ2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \mu }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=0-{\frac {\;-2n({\bar {x}}-\mu )\;}{2\sigma ^{2}}}.\end{aligned}}}wherex¯{\displaystyle {\bar {x}}}is thesample mean. This is solved by
μ^=x¯=∑i=1nxin.{\displaystyle {\widehat {\mu }}={\bar {x}}=\sum _{i=1}^{n}{\frac {\,x_{i}\,}{n}}.}
This is indeed the maximum of the function, since it is the only turning point in μ and the second derivative is strictly less than zero. Its expected value is equal to the parameter μ of the given distribution,
E[μ^]=μ,{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\mu }}\;{\bigr ]}=\mu ,\,}
which means that the maximum likelihood estimatorμ^{\displaystyle {\widehat {\mu }}}is unbiased.
Similarly we differentiate the log-likelihood with respect toσand equate to zero:
0=∂∂σlog(L(μ,σ2))=−nσ+1σ3∑i=1n(xi−μ)2.{\displaystyle {\begin{aligned}0&={\frac {\partial }{\partial \sigma }}\log {\Bigl (}{\mathcal {L}}(\mu ,\sigma ^{2}){\Bigr )}=-{\frac {\,n\,}{\sigma }}+{\frac {1}{\sigma ^{3}}}\sum _{i=1}^{n}(\,x_{i}-\mu \,)^{2}.\end{aligned}}}
which is solved by
σ^2=1n∑i=1n(xi−μ)2.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}.}
Inserting the estimateμ=μ^{\displaystyle \mu ={\widehat {\mu }}}we obtain
σ^2=1n∑i=1n(xi−x¯)2=1n∑i=1nxi2−1n2∑i=1n∑j=1nxixj.{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}x_{i}x_{j}.}
To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error)δi≡μ−xi{\displaystyle \delta _{i}\equiv \mu -x_{i}}. Expressing the estimate in these variables yields
σ^2=1n∑i=1n(μ−δi)2−1n2∑i=1n∑j=1n(μ−δi)(μ−δj).{\displaystyle {\widehat {\sigma }}^{2}={\frac {1}{n}}\sum _{i=1}^{n}(\mu -\delta _{i})^{2}-{\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}(\mu -\delta _{i})(\mu -\delta _{j}).}
Simplifying the expression above, utilizing the facts thatE[δi]=0{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;\delta _{i}\;{\bigr ]}=0}andE[δi2]=σ2{\displaystyle \operatorname {E} {\bigl [}\;\delta _{i}^{2}\;{\bigr ]}=\sigma ^{2}}, allows us to obtain
E[σ^2]=n−1nσ2.{\displaystyle \operatorname {\mathbb {E} } {\bigl [}\;{\widehat {\sigma }}^{2}\;{\bigr ]}={\frac {\,n-1\,}{n}}\sigma ^{2}.}
This means that the estimatorσ^2{\displaystyle {\widehat {\sigma }}^{2}}is biased forσ2{\displaystyle \sigma ^{2}}. It can also be shown thatσ^{\displaystyle {\widehat {\sigma }}}is biased forσ{\displaystyle \sigma }, but that bothσ^2{\displaystyle {\widehat {\sigma }}^{2}}andσ^{\displaystyle {\widehat {\sigma }}}are consistent.
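A short Monte Carlo sketch of these two results, the unbiasedness of μ̂ and the (n − 1)/n bias of σ̂² (all parameter values and the seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n, reps = 5.0, 4.0, 10, 50_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
mu_hat = x.mean(axis=1)
sigma2_hat = ((x - mu_hat[:, None]) ** 2).mean(axis=1)   # MLE: divide by n, not n - 1

print("E[mu_hat]     ~", mu_hat.mean())        # close to 5.0: unbiased
print("E[sigma2_hat] ~", sigma2_hat.mean())    # close to (n - 1)/n * 4.0 = 3.6: biased downward
```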
Formally we say that themaximum likelihood estimatorforθ=(μ,σ2){\displaystyle \theta =(\mu ,\sigma ^{2})}is
θ^=(μ^,σ^2).{\displaystyle {\widehat {\theta \,}}=\left({\widehat {\mu }},{\widehat {\sigma }}^{2}\right).}
In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
The normal log-likelihood at its maximum takes a particularly simple form:
log(L(μ^,σ^))=−n2(log(2πσ^2)+1){\displaystyle \log {\Bigl (}{\mathcal {L}}({\widehat {\mu }},{\widehat {\sigma }}){\Bigr )}={\frac {\,-n\;\;}{2}}{\bigl (}\,\log(2\pi {\widehat {\sigma }}^{2})+1\,{\bigr )}}
This maximum log-likelihood can be shown to be the same for more generalleast squares, even fornon-linear least squares. This is often used in determining likelihood-based approximateconfidence intervalsandconfidence regions, which are generally more accurate than those using the asymptotic normality discussed above.
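The closed form for the normal log-likelihood at its maximum can be verified directly (a minimal sketch; the sample is simulated and the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(5.0, 2.0, size=1_000)
n = x.size

mu_hat = x.mean()
sigma2_hat = ((x - mu_hat) ** 2).mean()

# Full log-likelihood evaluated at the MLE, versus the simplified closed form.
log_lik_at_mle = (-n / 2) * np.log(2 * np.pi * sigma2_hat) \
                 - ((x - mu_hat) ** 2).sum() / (2 * sigma2_hat)
closed_form = (-n / 2) * (np.log(2 * np.pi * sigma2_hat) + 1)
print(np.isclose(log_lik_at_mle, closed_form))   # True, since sum((x - mu_hat)^2) = n * sigma2_hat
```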
It may be the case that variables are correlated, or more generally, not independent. Two random variablesy1{\displaystyle y_{1}}andy2{\displaystyle y_{2}}are independent only if their joint probability density function is the product of the individual probability density functions, i.e.
f(y1,y2)=f(y1)f(y2){\displaystyle f(y_{1},y_{2})=f(y_{1})f(y_{2})\,}
Suppose one constructs an order-nGaussian vector out of random variables(y1,…,yn){\displaystyle (y_{1},\ldots ,y_{n})}, where each variable has means given by(μ1,…,μn){\displaystyle (\mu _{1},\ldots ,\mu _{n})}. Furthermore, let thecovariance matrixbe denoted byΣ{\displaystyle {\mathit {\Sigma }}}. The joint probability density function of thesenrandom variables then follows amultivariate normal distributiongiven by:
f(y1,…,yn)=1(2π)n/2det(Σ)exp(−12[y1−μ1,…,yn−μn]Σ−1[y1−μ1,…,yn−μn]T){\displaystyle f(y_{1},\ldots ,y_{n})={\frac {1}{(2\pi )^{n/2}{\sqrt {\det({\mathit {\Sigma }})}}}}\exp \left(-{\frac {1}{2}}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]{\mathit {\Sigma }}^{-1}\left[y_{1}-\mu _{1},\ldots ,y_{n}-\mu _{n}\right]^{\mathrm {T} }\right)}
In thebivariatecase, the joint probability density function is given by:
f(y1,y2)=12πσ1σ21−ρ2exp[−12(1−ρ2)((y1−μ1)2σ12−2ρ(y1−μ1)(y2−μ2)σ1σ2+(y2−μ2)2σ22)]{\displaystyle f(y_{1},y_{2})={\frac {1}{2\pi \sigma _{1}\sigma _{2}{\sqrt {1-\rho ^{2}}}}}\exp \left[-{\frac {1}{2(1-\rho ^{2})}}\left({\frac {(y_{1}-\mu _{1})^{2}}{\sigma _{1}^{2}}}-{\frac {2\rho (y_{1}-\mu _{1})(y_{2}-\mu _{2})}{\sigma _{1}\sigma _{2}}}+{\frac {(y_{2}-\mu _{2})^{2}}{\sigma _{2}^{2}}}\right)\right]}
In this and other cases where a joint density function exists, the likelihood function is defined as above, in the section "principles," using this density.
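The explicit bivariate formula can be checked against the general multivariate expression by building Σ from σ1, σ2 and ρ (a minimal sketch; all numeric values are arbitrary):

```python
import numpy as np

mu = np.array([1.0, -2.0])
s1, s2, rho = 1.5, 0.8, 0.3
Sigma = np.array([[s1 ** 2,       rho * s1 * s2],
                  [rho * s1 * s2, s2 ** 2      ]])
y = np.array([1.3, -1.7])
d = y - mu

# General multivariate normal density (n = 2, so (2*pi)^{n/2} = 2*pi).
f_general = np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) \
            / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))

# Explicit bivariate form from the text.
z = d[0] ** 2 / s1 ** 2 - 2 * rho * d[0] * d[1] / (s1 * s2) + d[1] ** 2 / s2 ** 2
f_bivariate = np.exp(-z / (2 * (1 - rho ** 2))) \
              / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho ** 2))

print(np.isclose(f_general, f_bivariate))   # True
```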
Let {\displaystyle X_{1},\ X_{2},\ldots ,\ X_{m}} be counts in cells/boxes 1 up to m; each box has a different probability (think of the boxes being bigger or smaller) and we fix the number of balls that fall to be {\displaystyle n}: {\displaystyle x_{1}+x_{2}+\cdots +x_{m}=n}. The probability of each box is {\displaystyle p_{i}}, with the constraint {\displaystyle p_{1}+p_{2}+\cdots +p_{m}=1}. This is a case in which the {\displaystyle X_{i}} are not independent; the joint probability of a vector {\displaystyle x_{1},\ x_{2},\ldots ,x_{m}} is called the multinomial distribution and has the form:
f(x1,x2,…,xm∣p1,p2,…,pm)=n!∏xi!∏pixi=(nx1,x2,…,xm)p1x1p2x2⋯pmxm{\displaystyle f(x_{1},x_{2},\ldots ,x_{m}\mid p_{1},p_{2},\ldots ,p_{m})={\frac {n!}{\prod x_{i}!}}\prod p_{i}^{x_{i}}={\binom {n}{x_{1},x_{2},\ldots ,x_{m}}}p_{1}^{x_{1}}p_{2}^{x_{2}}\cdots p_{m}^{x_{m}}}
Each box taken separately against all the other boxes is a binomial and this is an extension thereof.
The log-likelihood of this is:
ℓ(p1,p2,…,pm)=logn!−∑i=1mlogxi!+∑i=1mxilogpi{\displaystyle \ell (p_{1},p_{2},\ldots ,p_{m})=\log n!-\sum _{i=1}^{m}\log x_{i}!+\sum _{i=1}^{m}x_{i}\log p_{i}}
The constraint has to be taken into account, so we use Lagrange multipliers:
L(p1,p2,…,pm,λ)=ℓ(p1,p2,…,pm)+λ(1−∑i=1mpi){\displaystyle L(p_{1},p_{2},\ldots ,p_{m},\lambda )=\ell (p_{1},p_{2},\ldots ,p_{m})+\lambda \left(1-\sum _{i=1}^{m}p_{i}\right)}
Setting all the derivatives to 0, the most natural estimate is derived:
p^i=xin{\displaystyle {\hat {p}}_{i}={\frac {x_{i}}{n}}}
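A small sketch confirming that the relative frequencies x_i/n maximize the multinomial log-likelihood (the counts and the comparison set of probability vectors are illustrative):

```python
import numpy as np

x = np.array([12, 7, 21, 10])          # observed counts in m = 4 boxes
n = x.sum()
p_hat = x / n                          # closed-form MLE

def log_lik(p):
    # The log n! - sum(log x_i!) term is constant in p, so it is omitted.
    return (x * np.log(p)).sum()

# Any other probability vector on the simplex yields a lower log-likelihood.
rng = np.random.default_rng(4)
others = rng.dirichlet(np.ones(4), size=1_000)
print(all(log_lik(p_hat) >= log_lik(p) for p in others))   # True
```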
Maximizing the log-likelihood, with or without constraints, may not have a closed-form solution; in such cases, iterative procedures have to be used.
Except for special cases, the likelihood equations∂ℓ(θ;y)∂θ=0{\displaystyle {\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}=0}
cannot be solved explicitly for an estimatorθ^=θ^(y){\displaystyle {\widehat {\theta }}={\widehat {\theta }}(\mathbf {y} )}. Instead, they need to be solvediteratively: starting from an initial guess ofθ{\displaystyle \theta }(sayθ^1{\displaystyle {\widehat {\theta }}_{1}}), one seeks to obtain a convergent sequence{θ^r}{\displaystyle \left\{{\widehat {\theta }}_{r}\right\}}. Many methods for this kind ofoptimization problemare available,[26][27]but the most commonly used ones are algorithms based on an updating formula of the formθ^r+1=θ^r+ηrdr(θ^){\displaystyle {\widehat {\theta }}_{r+1}={\widehat {\theta }}_{r}+\eta _{r}\mathbf {d} _{r}\left({\widehat {\theta }}\right)}
where the vectordr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)}indicates thedescent directionof therth "step," and the scalarηr{\displaystyle \eta _{r}}captures the "step length,"[28][29]also known as thelearning rate.[30]
Gradient descent uses a step length {\displaystyle \eta _{r}\in \mathbb {R} ^{+}} that is small enough for convergence and the direction {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=\nabla \ell \left({\widehat {\theta }}_{r};\mathbf {y} \right)}. (Note: here it is a maximization problem, so the sign before the gradient is flipped.)
The gradient descent method requires calculating the gradient at the rth iteration, but it does not require calculating the inverse of the second-order derivative, i.e., the Hessian matrix. Therefore, each iteration is computationally faster than in the Newton–Raphson method.
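A minimal sketch of such an iterative (gradient-ascent) update, assuming a Cauchy location model, chosen because its likelihood equation has no closed-form solution; the step length, starting value, and iteration count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_cauchy(500) + 3.0            # Cauchy sample with true location 3.0

def score(theta):
    # Derivative of the Cauchy log-likelihood with respect to the location parameter.
    d = x - theta
    return np.sum(2 * d / (1 + d ** 2))

theta = np.median(x)                          # initial guess theta_1
eta = 1.0 / x.size                            # fixed small step length
for _ in range(200):
    theta = theta + eta * score(theta)        # ascent step: move along the gradient

print(theta)                                  # close to the true location 3.0
```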
The Newton–Raphson method uses {\displaystyle \eta _{r}=1} and {\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)\mathbf {s} _{r}\left({\widehat {\theta }}\right)},
where {\displaystyle \mathbf {s} _{r}({\widehat {\theta }})} is the score and {\displaystyle \mathbf {H} _{r}^{-1}\left({\widehat {\theta }}\right)} is the inverse of the Hessian matrix of the log-likelihood function, both evaluated at the rth iteration.[31][32] But because the calculation of the Hessian matrix is computationally costly, numerous alternatives have been proposed. The popular Berndt–Hall–Hall–Hausman algorithm approximates the Hessian with the outer product of the expected gradient, such that
dr(θ^)=−[1n∑t=1n∂ℓ(θ;y)∂θ(∂ℓ(θ;y)∂θ)T]−1sr(θ^){\displaystyle \mathbf {d} _{r}\left({\widehat {\theta }}\right)=-\left[{\frac {1}{n}}\sum _{t=1}^{n}{\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\left({\frac {\partial \ell (\theta ;\mathbf {y} )}{\partial \theta }}\right)^{\mathsf {T}}\right]^{-1}\mathbf {s} _{r}\left({\widehat {\theta }}\right)}
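A minimal sketch of the Newton–Raphson update above for a one-parameter problem, assuming a gamma shape model with unit scale (chosen because its log-likelihood is strictly concave in the shape parameter, so the iteration is well behaved); the sample, seed, starting value, and iteration count are illustrative:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(6)
a_true = 2.5
x = rng.gamma(a_true, 1.0, size=5_000)        # Gamma(shape = a, scale = 1) sample
n, sum_log_x = x.size, np.log(x).sum()

def score(a):                                  # d l / d a
    return sum_log_x - n * digamma(a)

def hessian(a):                                # d^2 l / d a^2, always negative here
    return -n * polygamma(1, a)

a = 1.0                                        # initial guess
for _ in range(20):
    a = a - score(a) / hessian(a)              # Newton-Raphson step with eta_r = 1

print(a)                                       # close to a_true = 2.5
```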
Other quasi-Newton methods use more elaborate secant updates to give an approximation of the Hessian matrix.
The Davidon–Fletcher–Powell (DFP) formula finds a solution that is symmetric, positive-definite and closest to the current approximate value of the second-order derivative: {\displaystyle \mathbf {H} _{k+1}=\left(I-\gamma _{k}y_{k}s_{k}^{\mathsf {T}}\right)\mathbf {H} _{k}\left(I-\gamma _{k}s_{k}y_{k}^{\mathsf {T}}\right)+\gamma _{k}y_{k}y_{k}^{\mathsf {T}},}
where
yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}γk=1ykTsk,{\displaystyle \gamma _{k}={\frac {1}{y_{k}^{T}s_{k}}},}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.}
The BFGS method also gives a solution that is symmetric and positive-definite:
Bk+1=Bk+ykykTykTsk−BkskskTBkTskTBksk,{\displaystyle B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathsf {T}}}{y_{k}^{\mathsf {T}}s_{k}}}-{\frac {B_{k}s_{k}s_{k}^{\mathsf {T}}B_{k}^{\mathsf {T}}}{s_{k}^{\mathsf {T}}B_{k}s_{k}}}\ ,}
where
yk=∇ℓ(xk+sk)−∇ℓ(xk),{\displaystyle y_{k}=\nabla \ell (x_{k}+s_{k})-\nabla \ell (x_{k}),}sk=xk+1−xk.{\displaystyle s_{k}=x_{k+1}-x_{k}.}
The BFGS method is not guaranteed to converge unless the function has a quadratic Taylor expansion near an optimum. However, BFGS can have acceptable performance even for non-smooth optimization instances.
Another popular method is to replace the Hessian with theFisher information matrix,I(θ)=E[Hr(θ^)]{\displaystyle {\mathcal {I}}(\theta )=\operatorname {\mathbb {E} } \left[\mathbf {H} _{r}\left({\widehat {\theta }}\right)\right]}, giving us the Fisher scoring algorithm. This procedure is standard in the estimation of many methods, such asgeneralized linear models.
Although popular, quasi-Newton methods may converge to astationary pointthat is not necessarily a local or global maximum,[33]but rather a local minimum or asaddle point. Therefore, it is important to assess the validity of the obtained solution to the likelihood equations, by verifying that the Hessian, evaluated at the solution, is bothnegative definiteandwell-conditioned.[34]
Early users of maximum likelihood include Carl Friedrich Gauss, Pierre-Simon Laplace, Thorvald N. Thiele, and Francis Ysidro Edgeworth.[35][36] It was Ronald Fisher, however, who between 1912 and 1922 singlehandedly created the modern version of the method.[37][38]
Maximum-likelihood estimation finally transcendedheuristicjustification in a proof published bySamuel S. Wilksin 1938, now calledWilks' theorem.[39]The theorem shows that the error in the logarithm of likelihood values for estimates from multiple independent observations is asymptoticallyχ2-distributed, which enables convenient determination of aconfidence regionaround any estimate of the parameters. The only difficult part of Wilks' proof depends on the expected value of theFisher informationmatrix, which is provided by a theorem proven by Fisher.[40]Wilks continued to improve on the generality of the theorem throughout his life, with his most general proof published in 1962.[41]
Reviews of the development of maximum likelihood estimation have been provided by a number of authors.[42][43][44][45][46][47][48][49] | https://en.wikipedia.org/wiki/Maximum_likelihood_estimation |
In amultitaskingcomputersystem,processesmay occupy a variety ofstates. These distinct states may not be recognized as such by theoperating systemkernel. However, they are a useful abstraction for the understanding of processes.
The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" onmain memory.
When a process is first created, it occupies the "created" or "new" state. In this state, the process awaits admission to the "ready" state. Admission will be approved or delayed by a long-term, or admission, scheduler. Typically in most desktop computer systems, this admission will be approved automatically. However, for real-time operating systems this admission may be delayed. In a real-time system, admitting too many processes to the "ready" state may lead to oversaturation and over-contention of the system's resources, leading to an inability to meet process deadlines.
A "ready" or "waiting" process has been loaded intomain memoryand is awaiting execution on aCPU(to be context switched onto the CPU by the dispatcher, or short-term scheduler). There may be many "ready" processes at any one point of the system's execution—for example, in a one-processor system, only one process can be executing at any one time, and all other "concurrently executing" processes will be waiting for execution.
Aready queueorrun queueis used incomputer scheduling. Modern computers are capable of running many different programs or processes at the same time. However, the CPU is only capable of handling one process at a time. Processes that are ready for the CPU are kept in aqueuefor "ready" processes. Other processes that are waiting for an event to occur, such as loading information from a hard drive or waiting on an internet connection, are not in the ready queue.
A process moves into the running state when it is chosen for execution. The process's instructions are executed by one of the CPUs (or cores) of the system. There is at most one running process per CPU or core. A process can run in either of the two modes, namelykernel modeoruser mode.[1][2]
A process transitions to ablockedstate when it cannot carry on without an external change in state or event occurring. For example, a process may block on a call to an I/O device such as a printer, if the printer is not available. Processes also commonly block when they require user input, or require access to acritical sectionwhich must be executed atomically. Such critical sections are protected using a synchronization object such as a semaphore or mutex.
A process may be terminated, either from the "running" state by completing its execution or by explicitly being killed. In either of these cases, the process moves to the "terminated" state. The underlying program is no longer executing, but the process remains in the process table as a zombie process until its parent process calls the wait system call to read its exit status, at which point the process is removed from the process table, finally ending the process's lifetime. If the parent fails to call wait, this continues to consume the process table entry (concretely the process identifier or PID), and causes a resource leak.
Two additional states are available for processes in systems that supportvirtual memory. In both of these states, processes are "stored" on secondary memory (typically ahard disk).
(Also calledsuspended and waiting.) In systems that support virtual memory, a process may be swapped out, that is, removed from main memory and placed on external storage by the scheduler. From here the process may be swapped back into the waiting state.
(Also calledsuspended and blocked.) Processes that are blocked may also be swapped out. In this event the process is both swapped out and blocked, and may be swapped back in again under the same circumstances as a swapped out and waiting process (although in this case, the process will move to the blocked state, and may still be waiting for a resource to become available). | https://en.wikipedia.org/wiki/Process_state |
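The states and transitions described above can be summarized as a small state machine. The following Python sketch is illustrative only; the state names and the transition table are a simplification and are not taken from any particular kernel:

```python
from enum import Enum, auto

class ProcessState(Enum):
    CREATED = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    SWAPPED_OUT_WAITING = auto()
    SWAPPED_OUT_BLOCKED = auto()
    TERMINATED = auto()

# Allowed transitions, following the description above.
TRANSITIONS = {
    ProcessState.CREATED: {ProcessState.READY},                     # admitted by the long-term scheduler
    ProcessState.READY: {ProcessState.RUNNING,                      # dispatched onto a CPU
                         ProcessState.SWAPPED_OUT_WAITING},         # swapped out to secondary memory
    ProcessState.RUNNING: {ProcessState.READY,                      # preempted
                           ProcessState.BLOCKED,                    # waits on I/O, input, or a lock
                           ProcessState.TERMINATED},                # exits or is killed
    ProcessState.BLOCKED: {ProcessState.READY,                      # awaited event has occurred
                           ProcessState.SWAPPED_OUT_BLOCKED},
    ProcessState.SWAPPED_OUT_WAITING: {ProcessState.READY},         # swapped back in
    ProcessState.SWAPPED_OUT_BLOCKED: {ProcessState.SWAPPED_OUT_WAITING,
                                       ProcessState.BLOCKED},
    ProcessState.TERMINATED: set(),                                  # zombie until the parent calls wait()
}

def move(state, new_state):
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For example, move(ProcessState.CREATED, ProcessState.READY) succeeds, while attempting to go straight from CREATED to RUNNING raises an error, mirroring the requirement that a process be admitted and dispatched before it can execute.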
OpenACC(foropen accelerators) is a programming standard forparallel computingdeveloped byCray, CAPS,NvidiaandPGI. The standard is designed to simplify parallel programming ofheterogeneousCPU/GPUsystems.[1]
As in OpenMP, the programmer can annotate C, C++ and Fortran source code to identify the areas that should be accelerated using compiler directives and additional functions.[2] Like OpenMP 4.0 and newer, OpenACC can target both the CPU and GPU architectures and launch computational code on them.
OpenACC members have worked as members of the OpenMP standard group to create a common specification which extends OpenMP to support accelerators in a future release of OpenMP.[3][4] These efforts resulted in a technical report[5] for comment and discussion, timed to coincide with the annual Supercomputing Conference (November 2012, Salt Lake City) and to address non-Nvidia accelerator support with input from hardware vendors who participate in OpenMP.[6]
At ISC’12 OpenACC was demonstrated to work onNvidia,AMDandIntelaccelerators, without performance data.[7]
On November 12, 2012, at the SC12 conference, a draft of the OpenACC version 2.0 specification was presented.[8]New suggested capabilities include new controls over data movement (such as better handling ofunstructured dataand improvements in support for non-contiguous memory), and support for explicit function calls and separate compilation (allowing the creation and reuse of libraries of accelerated code). OpenACC 2.0 was officially released in June 2013.[9]
Version 2.5 of the specification was released in October 2015,[10]while version 2.6 was released in November 2017.[11]Subsequently, version 2.7 was released in November 2018.[12]
The latest version is version 3.3, which was released in November 2022.[13]
Support of OpenACC is available in commercial compilers from PGI (from version 12.6), and (for Cray hardware only) Cray.[7][14]
OpenUH[15]is anOpen64based open source OpenACC compiler supporting C and FORTRAN, developed by HPCTools group fromUniversity of Houston.
OpenARC[16]is an open source C compiler developed atOak Ridge National Laboratoryto support all features in the OpenACC 1.0 specification. An experimental[17]open source compiler, accULL, is developed by theUniversity of La Laguna(C languageonly).[18]
Omni Compiler[19][20] is an open source compiler developed at the HPCS Laboratory of the University of Tsukuba and the Programming Environment Research Team of the RIKEN Center for Computational Science, Japan. It supports OpenACC, XcalableMP[ja] and XcalableACC[ja], the latter combining XcalableMP and OpenACC.
IPMACC[21] is an open source C compiler developed by the University of Victoria that translates OpenACC to CUDA, OpenCL, and ISPC. Currently, only the following directives are supported: data, kernels, loop, and cache.
GCCsupport for OpenACC was slow in coming.[22]A GPU-targeting implementation from Samsung was announced in September 2013; this translated OpenACC 1.1-annotated code toOpenCL.[17]The announcement of a "real" implementation followed two months later, this time from NVIDIA and based on OpenACC 2.0.[23]This sparked some controversy, as the implementation would only target NVIDIA's ownPTXassembly language, for which no open source assembler or runtime was available.[24][25]Experimental support for OpenACC/PTX did end up in GCC as of version 5.1. GCC6 and GCC7 release series include a much improved implementation of the OpenACC 2.0a specification.[26][27]GCC 9.1 offers nearly complete OpenACC 2.5 support.[28]
In a way similar to OpenMP 3.x on homogeneous systems or the earlier OpenHMPP, the primary mode of programming in OpenACC is directives.[29] The specifications also include a runtime library defining several support functions. To exploit them, the user should include "openacc.h" in C or "openacc_lib.h" in Fortran, and then call the acc_init() function.
OpenACC defines an extensive list of pragmas (directives),[31]for example:
The parallel and kernels directives are both used to define parallel computation kernels to be executed on the accelerator, using distinct semantics.[32][33]
The data directive is the main directive used to define and copy data to and from the accelerator.
The loop directive is used to define the type of parallelism in a parallel or kernels region.
There are some runtimeAPIfunctions defined too:acc_get_num_devices(),acc_set_device_type(),acc_get_device_type(),acc_set_device_num(),acc_get_device_num(),acc_async_test(),acc_async_test_all(),acc_async_wait(),acc_async_wait_all(),acc_init(),acc_shutdown(),acc_on_device(),acc_malloc(),acc_free().
OpenACC generally takes care of work organisation for the target device; however, this can be overridden through the use of gangs and workers. A gang consists of workers and operates over a number of processing elements (as with a workgroup in OpenCL).
Indiscrete geometry, anopaque setis a system of curves or other set in theplanethat blocks alllines of sightacross apolygon, circle, or other shape. Opaque sets have also been calledbarriers,beam detectors,opaque covers, or (in cases where they have the form of aforestofline segmentsor other curves)opaque forests. Opaque sets were introduced byStefan Mazurkiewiczin 1916,[1]and the problem of minimizing their total length was posed byFrederick Bagemihlin 1959.[2]
For instance, visibility through aunit squarecan be blocked by its four boundary edges, with length 4, but a shorter opaque forest blocks visibility across the square with length2+126≈2.639{\displaystyle {\sqrt {2}}+{\tfrac {1}{2}}{\sqrt {6}}\approx 2.639}. It is unproven whether this is the shortest possible opaque set for the square, and for most other shapes this problem similarly remains unsolved. The shortest opaque set for any boundedconvex setin the plane has length at most theperimeterof the set, and at least half the perimeter. For the square, a slightly stronger lower bound than half the perimeter is known. Another convex set whose opaque sets are commonly studied is theunit circle, for which the shortestconnectedopaque set has length2+π{\displaystyle 2+\pi }. Without the assumption of connectivity, the shortest opaque set for the circle has length at leastπ{\displaystyle \pi }and at most4.7998{\displaystyle 4.7998}.
Several publishedalgorithmsclaiming to find the shortest opaque set for aconvex polygonwere later shown to be incorrect. Nevertheless, it is possible to find an opaque set with a guaranteedapproximation ratioinlinear time, or to compute the subset of the plane whose visibility is blocked by a given system of line segments inpolynomial time.
Every setS{\displaystyle S}in the plane blocks the visibility through a superset ofS{\displaystyle S}, itscoverageC{\displaystyle C}.C{\displaystyle C}consists of points for which all lines through the point intersectS{\displaystyle S}. If a given setK{\displaystyle K}forms a subset of the coverage ofS{\displaystyle S}, thenS{\displaystyle S}is said to be anopaque set,barrier,beam detector, oropaque coverforK{\displaystyle K}. If, additionally,S{\displaystyle S}has a special form, consisting of finitely manyline segmentswhose union forms aforest, it is called anopaque forest. There are many possible opaque sets for any given setK{\displaystyle K}, includingK{\displaystyle K}itself, and many possible opaque forests. For opaque forests, or more generally for systems ofrectifiable curves, their length can be measured in the standard way. For more general point sets, one-dimensionalHausdorff measurecan be used, and agrees with the standard length in the cases of line segments and rectifiable curves.[3]
Most research on this problem assumes that the given setK{\displaystyle K}is aconvex set. When it is not convex but merely aconnected set, it can be replaced by itsconvex hullwithout changing its opaque sets. Some variants of the problem restrict the opaque set to lie entirely inside or entirely outsideK{\displaystyle K}. In this case, it is called aninterior barrieror anexterior barrier, respectively. When this is not specified, the barrier is assumed to have no constraints on its location. Versions of the problem in which the opaque set must be connected or form a single curve have also been considered. It is not known whether everyconvex setP{\displaystyle P}has a shortest opaque set, or whether instead the lengths of its opaque sets might approach aninfimumwithout ever reaching it.[3]Every opaque set forP{\displaystyle P}can be approximated arbitrarily closely in length by an opaque forest,[4]and it has been conjectured that everyconvex polygonhas an opaque forest as its shortest opaque set, but this has not been proven.[3]
When the region to be covered is aconvex set, the length of its shortest opaque set must be at least half its perimeter and at most its perimeter. For some regions, additional improvements to these bounds can be made.
IfK{\displaystyle K}is a bounded convex set to be covered, then itsboundary∂K{\displaystyle \partial K}forms an opaque set whose length is the perimeter|∂K|{\displaystyle |\partial K|}. Therefore, the shortest possible length of an opaque set is at most the perimeter. For setsK{\displaystyle K}that are strictly convex, meaning that there are no line segments on the boundary, and for interior barriers, this bound is tight. Every point on the boundary must be contained in the opaque set, because every boundary point has atangent linethrough it that cannot be blocked by any other points.[5]The same reasoning shows that for interior barriers ofconvex polygons, allverticesmust be included. Therefore, theminimum Steiner treeof the vertices is the shortestconnectedopaque set, and thetraveling salesperson pathof the vertices is the shortestsingle-curveopaque set.[4]However, for interior barriers of non-polygonal convex sets that are not strictly convex, or for barriers that are not required to be connected, other opaque sets may be shorter; for instance, it is always possible to omit the longest line segment of the boundary. In these cases, the perimeter or Steiner tree length provide anupper boundon the length of an opaque set.[3][4]
There are several proofs that an opaque set for any convex setK{\displaystyle K}must have total length at least|∂K|/2{\displaystyle |\partial K|/2}, half the perimeter. One of the simplest involves theCrofton formula, according to which the length of any curve is proportional to its expected number of intersection points with a random line from an appropriateprobability distributionon lines. It is convenient to simplify the problem by approximatingK{\displaystyle K}by a strictly convex superset, which can be chosen to have perimeter arbitrarily close to the original set. Then, except for the tangent lines toK{\displaystyle K}(which form a vanishing fraction of all lines), a line that intersectsK{\displaystyle K}crosses its boundary twice. Therefore, if a random line intersectsK{\displaystyle K}with probabilityp{\displaystyle p}, the expected number of boundary crossings is2p{\displaystyle 2p}. But each line that intersectsK{\displaystyle K}intersects its opaque set, so the expected number of intersections with the opaque set is at leastp{\displaystyle p}, which is at least half that forK{\displaystyle K}. By the Crofton formula, the lengths of the boundary and barrier have the same proportion as these expected numbers.[6]
This lower bound of|∂K|/2{\displaystyle |\partial K|/2}on the length of an opaque set cannot be improved to have a larger constant factor than 1/2, because there exist examples of convex sets that have opaque sets whose length is close to this lower bound. In particular, for very long thin rectangles, one long side and two short sides form a barrier, with total length that can be made arbitrarily close to half the perimeter. Therefore, among lower bounds that consider only the perimeter of the coverage region, the bound of|∂K|/2{\displaystyle |\partial K|/2}is best possible.[6]However, getting closer to|∂K|/2{\displaystyle |\partial K|/2}in this way involves considering a sequence of shapes rather than just a single shape, because for any convex setK{\displaystyle K}that is not a triangle, there exists aδ{\displaystyle \delta }such that all opaque sets have length at least|∂K|/2+δ{\displaystyle |\partial K|/2+\delta }.[7]
For atriangle, as for any convex polygon, the shortest connected opaque set is its minimum Steiner tree.[8]In the case of a triangle, this tree can be described explicitly: if the widest angle of the triangle is2π/3{\displaystyle 2\pi /3}(120°) or more, it uses the two shortest edges of the triangle, and otherwise it consists of three line segments from the vertices to theFermat pointof the triangle.[9]However, without assuming connectivity, the optimality of the Steiner tree has not been demonstrated. Izumi has proven a small improvement to the perimeter-halving lower bound for theequilateral triangle.[10]
For aunit square, the perimeter is 4, the perimeter minus the longest edge is 3, and the length of the minimum Steiner tree is1+3≈2.732{\displaystyle 1+{\sqrt {3}}\approx 2.732}. However, a shorter, disconnected opaque forest is known, with length2+126≈2.639{\displaystyle {\sqrt {2}}+{\tfrac {1}{2}}{\sqrt {6}}\approx 2.639}. It consists of the minimum Steiner tree of three of the square's vertices, together with a line segment connecting the fourth vertex to the center.Ross Honsbergercredits its discovery to Maurice Poirier, a Canadian schoolteacher,[11]but it was already described in 1962 and 1964 by Jones.[12][13]It is known to be optimal among forests with only two components,[5][14]and has been conjectured to be the best possible more generally, but this remains unproven.[7]The perimeter-halving lower bound of 2 for the square, already proven by Jones,[12][13]can be improved slightly, to2.00002{\displaystyle 2.00002}, for any barrier that consists of at most countably manyrectifiable curves,[7]improving similar previous bounds that constrained the barrier to be placed only near to the given square.[6]
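The candidate lengths for the unit square quoted above are easy to compare numerically (a minimal sketch; the names are descriptive labels only):

```python
from math import sqrt

perimeter = 4.0                            # full boundary of the unit square
three_sides = 3.0                          # perimeter minus the longest edge
steiner_tree = 1 + sqrt(3)                 # minimum Steiner tree of the four vertices
two_component_forest = sqrt(2) + sqrt(6) / 2   # Steiner tree of three vertices plus a segment to the centre
lower_bound = 2.0                          # half the perimeter

for name, value in [("perimeter", perimeter),
                    ("three sides", three_sides),
                    ("Steiner tree", steiner_tree),
                    ("two-component forest", two_component_forest),
                    ("perimeter / 2", lower_bound)]:
    print(f"{name:22s} {value:.4f}")
# 4.0000, 3.0000, 2.7321, 2.6390, 2.0000
```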
The case of theunit circlewas described in a 1995Scientific Americancolumn byIan Stewart, with a solution of length2+π{\displaystyle 2+\pi },[15]optimal for a single curve or connected barrier[8][16][17]but not for an opaque forest with multiple curves.Vance FaberandJan Mycielskicredit this single-curve solution toMenachem Magidorin 1974.[8]By 1980, E. Makai had already provided a better three-component solution, with length approximately4.7998{\displaystyle 4.7998},[18]rediscovered by John Day in a followup to Stewart's column.[19]The unknown length of the optimal solution has been called thebeam detection constant.[20]
Two published algorithms claim to generate the optimal opaque forest for arbitrary polygons, based on the idea that the optimal solution has a special structure: a Steiner tree for one triangle in atriangulation of the polygon, and a segment in each remaining triangle from one vertex to the opposite side, of length equal to the height of the triangle. This structure matches the conjectured structure of the optimal solution for a square. Although the optimal triangulation for a solution of this form is not part of the input to these algorithms, it can be found by the algorithms inpolynomial timeusingdynamic programming.[21][22]However, these algorithms do not correctly solve the problem for all polygons, because some polygons have shorter solutions with a different structure than the ones they find. In particular, for a long thin rectangle, the minimum Steiner tree of all four vertices is shorter than the triangulation-based solution that these algorithms find.[23]No known algorithm has been guaranteed to find a correct solution to the problem, regardless of its running time.[3]
Despite this setback, the shortest single-curve barrier of a convex polygon, which is the traveling salesperson path of its vertices, can be computed exactly inpolynomial timefor convex polygons by adynamic programmingalgorithm, in models of computation for whichsums of radicalscan be computed exactly.[4]There has also been more successful study ofapproximation algorithmsfor the problem, and for determining the coverage of a given barrier.
By the general bounds for opaque forest length in terms of perimeter, the perimeter of a convex set approximates its shortest opaque forest to within a factor of two in length. In two papers, Dumitrescu, Jiang, Pach, and Tóth provide severallinear-timeapproximation algorithms for the shortest opaque set for convex polygons, with betterapproximation ratiosthan two:
Additionally, because the shortest connected interior barrier of a convex polygon is given by the minimum Steiner tree, it has apolynomial-time approximation scheme.[4]
The region covered by a given forest can be determined as follows:
If the input consists ofn{\displaystyle n}line segments formingm{\displaystyle m}connected components, then each of then{\displaystyle n}setsCp{\displaystyle C_{p}}consists of at most2m{\displaystyle 2m}wedges. It follows that the combinatorial complexity of the coverage region, and the time to construct it, isO(m2n2){\displaystyle O(m^{2}n^{2})}as expressed inbig O notation.[25]
Although optimal in the worst case for inputs whose coverage region has combinatorial complexity matching this bound, this algorithm can be improved heuristically in practice by a preprocessing phase that merges overlapping pairs of hulls until all remaining hulls are disjoint, in timeO(nlog2n){\displaystyle O(n\log ^{2}n)}. If this reduces the input to a single hull, the more expensive sweeping and intersecting algorithm need not be run: in this case the hull is the coverage region.[26]
Mazurkiewicz (1916)showed that it is possible for an opaque set to avoid containing any nontrivial curves and still have finite total length.[1]A simplified construction ofBagemihl (1959), shown in the figure, produces an example for the unit square. The construction begins with line segments that form an opaque set with an additional property: the segments of negative slope block all lines of non-negative slope, while the segments of positive slope block all lines of non-positive slope. In the figure, the initial segments with this property are four disjoint segments along the diagonals of the square. Then, it repeatedly subdivides these segments while maintaining this property. At each level of the construction, each line segment is split by a small gap near its midpoint into two line segments, with slope of the same sign, that together block all lines of the opposite sign that were blocked by the original line segment. Thelimit setof this construction is aCantor spacethat, like all intermediate stages of the construction, is an opaque set for the square. With quickly decreasing gap sizes, the construction produces a set whoseHausdorff dimensionis one, and whose one-dimensionalHausdorff measure(a notion of length suitable for such sets) is finite.[2]
Thedistance setsof the boundary of a square, or of the four-segment shortest known opaque set for the square, both contain all distances in the interval from 0 to2{\displaystyle {\sqrt {2}}}. However, by using similar fractal constructions, it is also possible to find fractal opaque sets whose distance sets omit infinitely many of the distances in this interval, or that (assuming thecontinuum hypothesis) form aset of measure zero.[2]
Opaque sets were originally studied byStefan Mazurkiewiczin 1916.[1]Other early works on opaque sets include the papers ofH. M. Sen Guptaand N. C. Basu Mazumdar in 1955,[27]and byFrederick Bagemihlin 1959,[2]but these are primarily about the distance sets and topological properties of barriers rather than about minimizing their length. In a postscript to his paper, Bagemihl asked for the minimum length of an interior barrier for the square,[2]and subsequent work has largely focused on versions of the problem involving length minimization. They have been repeatedly posed, with multiple colorful formulations: digging a trench of as short a length as possible to find a straight buried telephone cable,[8]trying to find a nearby straight road while lost in a forest,[17]swimming to a straight shoreline while lost at sea,[4]efficiently painting walls to render a glass house opaque,[28]etc.
The problem has also been generalized to sets that block allgeodesicson aRiemannian manifold,[29][30]or that block lines through sets in higher-dimensions. In three dimensions, the corresponding question asks for a collection of surfaces of minimum total area that blocks all visibility across a solid. However, for some solids, such as a ball, it is not clear whether such a collection exists, or whether instead the area has aninfimumthat cannot be attained.[8][31] | https://en.wikipedia.org/wiki/Opaque_forest_problem |
Document retrievalis defined as the matching of some stated user query against a set offree-textrecords. These records could be any type of mainlyunstructured text, such asnewspaper articles, real estate records or paragraphs in a manual. User queries can range from multi-sentence full descriptions of an information need to a few words.
Document retrieval is sometimes referred to as, or as a branch of,text retrieval. Text retrieval is a branch ofinformation retrievalwhere the information is stored primarily in the form oftext. Text databases became decentralized thanks to thepersonal computer. Text retrieval is a critical area of study today, since it is the fundamental basis of allinternetsearch engines.
Document retrieval systems find information to given criteria by matching text records (documents) against user queries, as opposed toexpert systemsthat answer questions byinferringover a logicalknowledge database. A document retrieval system consists of a database of documents, aclassification algorithmto build a full text index, and a user interface to access the database.
A document retrieval system has two main tasks:
Internetsearch enginesare classical applications of document retrieval. The vast majority of retrieval systems currently in use range from simple Boolean systems through to systems usingstatisticalornatural language processingtechniques.
There are two main classes of indexing schemata for document retrieval systems:form based(orword based), andcontent basedindexing. The document classification scheme (orindexing algorithm) in use determines the nature of the document retrieval system.
Form based document retrieval addresses the exact syntactic properties of a text, comparable to substring matching in string searches. The text is generally unstructured and not necessarily in a natural language, the system could for example be used to process large sets of chemical representations in molecular biology. Asuffix treealgorithm is an example for form based indexing.
The content based approach exploits semantic connections between documents and parts thereof, and semantic connections between queries and documents. Most content based document retrieval systems use aninverted indexalgorithm.
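A minimal sketch of an inverted index of the kind most content-based systems use (the toy documents, query, and function names are illustrative):

```python
from collections import defaultdict

documents = {
    1: "maximum likelihood estimation of parameters",
    2: "document retrieval with an inverted index",
    3: "likelihood based retrieval of text records",
}

# Map each term to the set of document identifiers containing it.
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        inverted_index[term].add(doc_id)

def search(query):
    # Conjunctive (AND) query: return documents containing every query term.
    postings = [inverted_index.get(term, set()) for term in query.lower().split()]
    return set.intersection(*postings) if postings else set()

print(search("likelihood retrieval"))   # {3}
```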
A signature file is a technique that creates a quick-and-dirty filter, for example a Bloom filter, that will keep all the documents that match the query and, hopefully, only a few that do not. The way this is done is by creating a signature for each file, typically a hash-coded version. One method is superimposed coding. A post-processing step is done to discard the false alarms. Since in most cases this structure is inferior to inverted files in terms of speed, size and functionality, it is not used widely. However, with proper parameters it can beat the inverted files in certain environments.
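A minimal sketch of a superimposed-coding signature filter in this spirit (the signature width, number of hashes, hash choice, and toy documents are all illustrative assumptions):

```python
from hashlib import blake2b

M = 128          # signature width in bits
K = 3            # hashed bit positions per term

def term_bits(term):
    # Derive K bit positions for a term (superimposed coding).
    return {int.from_bytes(blake2b(f"{term}:{i}".encode(), digest_size=4).digest(), "big") % M
            for i in range(K)}

def signature(text):
    bits = 0
    for term in text.lower().split():
        for b in term_bits(term):
            bits |= 1 << b
    return bits

def maybe_matches(doc_signature, query):
    q = signature(query)
    return doc_signature & q == q    # may yield false positives, never false negatives

docs = ["signature files give a quick and dirty filter",
        "inverted files are usually faster"]
sigs = [signature(d) for d in docs]
candidates = [d for d, s in zip(docs, sigs) if maybe_matches(s, "quick filter")]
print(candidates)   # a post-processing step would then confirm the true matches
```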
ThePubMed[1]form interface features the "related articles" search which works through a comparison of words from the documents' title, abstract, andMeSHterms using a word-weighted algorithm.[2][3] | https://en.wikipedia.org/wiki/Document_retrieval |
TheResource Description Framework(RDF) is a method to describe and exchangegraphdata. It was originally designed as a data model formetadataby theWorld Wide Web Consortium(W3C). It provides a variety of syntax notations and formats, of which the most widely used is Turtle (Terse RDF Triple Language).
RDF is adirected graphcomposed of triple statements. An RDF graph statement is represented by: (1) a node for the subject, (2) an arc from subject to object, representing a predicate, and (3) a node for the object. Each of these parts can be identified by aUniform Resource Identifier(URI). An object can also be a literal value. This simple, flexible data model has a lot ofexpressive powerto represent complex situations, relationships, and other things of interest, while also being appropriately abstract.
RDF was adopted as a W3C recommendation in 1999. The RDF 1.0 specification was published in 2004, and the RDF 1.1 specification in 2014.SPARQLis a standard query language for RDF graphs.RDF Schema(RDFS),Web Ontology Language(OWL) andSHACL(Shapes Constraint Language) are ontology languages that are used to describe RDF data.
The RDF data model[1]is similar to classical conceptual modeling approaches (such asentity–relationshiporclass diagrams). It is based on the idea of makingstatementsaboutresources(in particular web resources) in expressions of the formsubject–predicate–object, known astriples. Thesubjectdenotes the resource; thepredicatedenotes traits or aspects of the resource, and expresses a relationship between thesubjectand theobject.
For example, one way to represent the notion "The sky has the color blue" in RDF is as the triple: asubjectdenoting "the sky", apredicatedenoting "has the color", and anobjectdenoting "blue". Therefore, RDF usessubjectinstead ofobject(orentity) in contrast to the typical approach of anentity–attribute–value modelinobject-oriented design: entity (sky), attribute (color), and value (blue).
RDF is an abstract model with severalserialization formats(being essentially specializedfile formats). In addition the particular encoding for resources or triples can vary from format to format.
This mechanism for describing resources is a majorcomponentin the W3C'sSemantic Webactivity: an evolutionary stage of theWorld Wide Webin which automated software can store, exchange, and usemachine-readable informationdistributed throughout the Web, in turn enabling users to deal with the information with greater efficiency andcertainty. RDF's simple data model and ability to model disparate, abstract concepts has also led to its increasing use inknowledge managementapplications unrelated to Semantic Web activity.
A collection of RDF statements intrinsically represents alabeled,directedmultigraph. This makes an RDFdata modelbetter suited to certain kinds ofknowledge representationthan otherrelationalorontologicalmodels.
AsRDFS,OWLandSHACLdemonstrate, one can build additionalontology languagesupon RDF.
The initial RDF design, intended to "build a vendor-neutral and operating system-independent system of metadata",[2] derived from the W3C's Platform for Internet Content Selection (PICS), an early web content labelling system,[3] but the project was also shaped by ideas from Dublin Core, and from the Meta Content Framework (MCF),[2] which had been developed during 1995 to 1997 by Ramanathan V. Guha at Apple and Tim Bray at Netscape.[4]
A first public draft of RDF appeared in October 1997,[5][6]issued by a W3C working group that included representatives fromIBM,Microsoft,Netscape,Nokia,Reuters,SoftQuad, and theUniversity of Michigan.[3]
In 1999, the W3C published the first recommended RDF specification, theModel and Syntax Specification("RDF M&S").[7]This described RDF's data model and anXMLserialization.[8]
Two persistent misunderstandings about RDF developed at this time: firstly, due to the MCF influence and the RDF "Resource Description" initialism, the idea that RDF was specifically for use in representing metadata; secondly that RDF was an XML format rather than a data model, and only the RDF/XML serialisation being XML-based. RDF saw little take-up in this period, but there was significant work done inBristol, around ILRT atBristol UniversityandHP Labs, and in Boston atMIT.RSS 1.0andFOAFbecame exemplar applications for RDF in this period.
The recommendation of 1999 was replaced in 2004 by a set of six specifications:[9]"The RDF Primer",[10]"RDF Concepts and Abstract",[11]"RDF/XML Syntax Specification (revised)",[12]"RDF Semantics",[13]"RDF Vocabulary Description Language 1.0",[14]and "The RDF Test Cases".[15]
This series was superseded in 2014 by the following six "RDF 1.1" documents: "RDF 1.1 Primer",[16]"RDF 1.1 Concepts and Abstract Syntax",[17]"RDF 1.1 XML Syntax",[18]"RDF 1.1 Semantics",[19]"RDF Schema 1.1",[20]and "RDF 1.1 Test Cases".[21]
The vocabulary defined by the RDF specification is as follows:[22]
rdf:Statement,rdf:subject,rdf:predicate,rdf:objectare used forreification(seebelow).
This vocabulary is used as a foundation forRDF Schema, where it is extended.
Several common serialization formats are in use, including Turtle, N-Triples, N-Quads, JSON-LD, Notation3 (N3), and RDF/XML.
RDF/XML is sometimes misleadingly called simply RDF because it was introduced among the other W3C specifications defining RDF and it was historically the first W3C standard RDF serialization format. However, it is important to distinguish the RDF/XML format from the abstract RDF model itself. Although the RDF/XML format is still in use, other RDF serializations are now preferred by many RDF users, both because they are more human-friendly,[34]and because some RDF graphs are not representable in RDF/XML due to restrictions on the syntax of XMLQNames.
With a little effort, virtually any arbitraryXMLmay also be interpreted as RDF usingGRDDL(pronounced 'griddle'), Gleaning Resource Descriptions from Dialects of Languages.
RDF triples may be stored in a type of database called atriplestore.
The subject of an RDF statement is either auniform resource identifier(URI) or ablank node, both of which denoteresources. Resources indicated byblank nodesare called anonymous resources. They are not directly identifiable from the RDF statement. The predicate is a URI which also indicates a resource, representing a relationship. The object is a URI, blank node or aUnicodestring literal.
As of RDF 1.1 resources are identified byInternationalized Resource Identifiers(IRIs); IRI are a generalization of URI.[35]
In Semantic Web applications, and in relatively popular applications of RDF likeRSSandFOAF(Friend of a Friend), resources tend to be represented by URIs that intentionally denote, and can be used to access, actual data on the World Wide Web. But RDF, in general, is not limited to the description of Internet-based resources. In fact, the URI that names a resource does not have to be dereferenceable at all. For example, a URI that begins with "http:" and is used as the subject of an RDF statement does not necessarily have to represent a resource that is accessible viaHTTP, nor does it need to represent a tangible, network-accessible resource—such a URI could represent absolutely anything. However, there is broad agreement that a bare URI (without a # symbol) which returns a 300-level coded response when used in an HTTP GET request should be treated as denoting the internet resource that it succeeds in accessing.
Therefore, producers and consumers of RDF statements must agree on the semantics of resource identifiers. Such agreement is not inherent to RDF itself, although there are some controlled vocabularies in common use, such as Dublin Core Metadata, which is partially mapped to a URI space for use in RDF. The intent of publishing RDF-based ontologies on the Web is often to establish, or circumscribe, the intended meanings of the resource identifiers used to express data in RDF. For example, the URI:
is intended by its owners to refer to the class of allMerlotred wines by vintner (i.e., instances of the above URI each represent the class of all wine produced by a single vintner), a definition which is expressed by the OWL ontology—itself an RDF document—in which it occurs. Without careful analysis of the definition, one might erroneously conclude that an instance of the above URI was something physical, instead of a type of wine.
Note that this is not a 'bare' resource identifier, but is rather aURI reference, containing the '#' character and ending with afragment identifier.
The body of knowledge modeled by a collection of statements may be subjected toreification, in which eachstatement(that is each triplesubject-predicate-objectaltogether) is assigned a URI and treated as a resource about which additional statements can be made, as in "Jane says thatJohn is the author of document X". Reification is sometimes important in order to deduce a level of confidence or degree of usefulness for each statement.
In a reified RDF database, each original statement, being a resource, itself, most likely has at least three additional statements made about it: one to assert that its subject is some resource, one to assert that its predicate is some resource, and one to assert that its object is some resource or literal. More statements about the original statement may also exist, depending on the application's needs.
Borrowing from concepts available inlogic(and as illustrated in graphical notations such asconceptual graphsandtopic maps), some RDF model implementations acknowledge that it is sometimes useful to group statements according to different criteria, calledsituations,contexts, orscopes, as discussed in articles by RDF specification co-editorGraham Klyne.[36][37]For example, a statement can be associated with a context, named by a URI, in order to assert an "is true in" relationship. As another example, it is sometimes convenient to group statements by their source, which can be identified by a URI, such as the URI of a particular RDF/XML document. Then, when updates are made to the source, corresponding statements can be changed in the model, as well.
Implementation of scopes does not necessarily require fully reified statements. Some implementations allow a single scope identifier to be associated with a statement that has not been assigned a URI, itself.[38][39]Likewisenamed graphsin which a set of triples is named by a URI can represent context without the need to reify the triples.[40]
The predominant query language for RDF graphs isSPARQL. SPARQL is anSQL-like language, and arecommendationof theW3Cas of January 15, 2008.
The following is an example of a SPARQL query to show country capitals in Africa, using a fictional ontology:
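A hedged sketch of running such a query with the Python rdflib library; the ontology terms and the data below are invented for illustration, mirroring the fictional ontology mentioned above:

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.com/exampleOntology#")   # fictional ontology namespace
g = Graph()
g.add((EX.Cairo, RDF.type, EX.Capital))
g.add((EX.Cairo, EX.cityName, Literal("Cairo")))
g.add((EX.Cairo, EX.isCapitalOf, EX.Egypt))
g.add((EX.Egypt, EX.countryName, Literal("Egypt")))
g.add((EX.Egypt, EX.isInContinent, EX.Africa))

query = """
PREFIX ex: <http://example.com/exampleOntology#>
SELECT ?capital ?country
WHERE {
  ?city ex:cityName ?capital ;
        ex:isCapitalOf ?ctry .
  ?ctry ex:countryName ?country ;
        ex:isInContinent ex:Africa .
}
"""
for capital, country in g.query(query):
    print(capital, country)     # Cairo Egypt
```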
Other non-standard ways to query RDF graphs include:
SHACL Advanced Features specification[42](W3C Working Group Note), the most recent version of which is maintained by the SHACL Community Group,[43]defines support for SHACL Rules, used for data transformations, inferences and mappings of RDF based on SHACL shapes.
The predominant language for describing and validating RDF graphs is SHACL (Shapes Constraint Language).[44] The SHACL specification is divided into two parts: SHACL Core and SHACL-SPARQL. SHACL Core consists of a list of built-in constraints such as cardinality, range of values, and many others. SHACL-SPARQL describes SPARQL-based constraints and an extension mechanism for declaring new constraint components.
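As a rough illustration (a minimal sketch; the ex: namespace and Person class are invented), a SHACL Core shape requiring exactly one string-valued name might be written in Turtle as:

    @prefix sh:  <http://www.w3.org/ns/shacl#> .
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
    @prefix ex:  <http://example.org/> .

    ex:PersonShape
        a sh:NodeShape ;
        sh:targetClass ex:Person ;
        sh:property [
            sh:path ex:name ;
            sh:datatype xsd:string ;   # range-of-values constraint
            sh:minCount 1 ;            # cardinality constraints
            sh:maxCount 1 ;
        ] .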
Other non-standard ways to describe and validate RDF graphs include:
The following example is taken from the W3C website,[48] describing a resource with statements "there is a Person identified by http://www.w3.org/People/EM/contact#me, whose name is Eric Miller, whose email address is e.miller123(at)example (changed for security purposes), and whose title is Dr."
The resource "http://www.w3.org/People/EM/contact#me" is the subject.
The objects are:
The subject is a URI.
The predicates also have URIs. For example, the URI for each predicate:
In addition, the subject has a type (with URI http://www.w3.org/1999/02/22-rdf-syntax-ns#type), which is person (with URI http://www.w3.org/2000/10/swap/pim/contact#Person).
Therefore, the following "subject, predicate, object" RDF triples can be expressed:
In standard N-Triples format, this RDF can be written as:
Equivalently, it can be written in standard Turtle (syntax) format as:
Or more concisely, using a common shorthand syntax of Turtle as:
Or, it can be written in RDF/XML format as:
Certain concepts in RDF are taken from logic and linguistics, where subject-predicate and subject-predicate-object structures have meanings similar to, yet distinct from, the uses of those terms in RDF. This example demonstrates:
In the English-language statement 'New York has the postal abbreviation NY', 'New York' would be the subject, 'has the postal abbreviation' the predicate and 'NY' the object.
Encoded as an RDF triple, the subject and predicate would have to be resources named by URIs. The object could be a resource or literal element. For example, in the N-Triples form of RDF, the statement might look like:
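The N-Triples line itself is not reproduced in this text; reconstructed from the components described in the next paragraph, it would be:

    <urn:x-states:New%20York> <http://purl.org/dc/terms/alternative> "NY" .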
In this example, "urn:x-states:New%20York" is the URI for a resource that denotes the US stateNew York, "http://purl.org/dc/terms/alternative" is the URI for a predicate (whose human-readable definition can be found here[49]), and "NY" is a literal string. Note that the URIs chosen here are not standard, and do not need to be, as long as their meaning is known to whatever is reading them.
In a like manner, given that "https://en.wikipedia.org/wiki/Tony_Benn" identifies a particular resource (regardless of whether that URI could be traversed as a hyperlink, or whether the resource isactuallytheWikipediaarticle aboutTony Benn), to say that the title of this resource is "Tony Benn" and its publisher is "Wikipedia" would be two assertions that could be expressed as valid RDF statements. In the N-Triples form of RDF, these statements might look like the following:
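The original lines are not reproduced in this text; a sketch using the Dublin Core title element discussed below, together with the analogous Dublin Core publisher element, would be:

    <https://en.wikipedia.org/wiki/Tony_Benn> <http://purl.org/dc/elements/1.1/title> "Tony Benn" .
    <https://en.wikipedia.org/wiki/Tony_Benn> <http://purl.org/dc/elements/1.1/publisher> "Wikipedia" .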
To an English-speaking person, the same information could be represented simply as:
The title of this resource, which is published by Wikipedia, is 'Tony Benn'
However, RDF puts the information in a formal way that a machine can understand. The purpose of RDF is to provide an encoding and interpretation mechanism so that resources can be described in a way that particular software can understand it; in other words, so that software can access and use information that it otherwise could not use.
Both versions of the statements above are wordy because one requirement for an RDF resource (as a subject or a predicate) is that it be unique. The subject resource must be unique in an attempt to pinpoint the exact resource being described. The predicate needs to be unique in order to reduce the chance that the idea of Title or Publisher will be ambiguous to software working with the description. If the software recognizes http://purl.org/dc/elements/1.1/title (a specific definition for the concept of a title established by the Dublin Core Metadata Initiative), it will also know that this title is different from a land title or an honorary title or just the letters t-i-t-l-e put together.
The following example, written in Turtle, shows how such simple claims can be elaborated on, by combining multiple RDF vocabularies. Here, we note that the primary topic of the Wikipedia page is a "Person" whose name is "Tony Benn":
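The snippet itself is not reproduced in this text; a sketch of such a combination, using the Dublin Core elements together with the FOAF vocabulary's primaryTopic, Person and name terms, might look like:

    @prefix dc:   <http://purl.org/dc/elements/1.1/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .

    <https://en.wikipedia.org/wiki/Tony_Benn>
        dc:publisher "Wikipedia" ;
        dc:title "Tony Benn" ;
        foaf:primaryTopic [
            a foaf:Person ;
            foaf:name "Tony Benn"
        ] .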
Some uses of RDF include research into social networking. RDF can also help people in business fields better understand their relationships with members of industries that could be useful for product placement,[58] and it can help scientists understand how people are connected to one another.
RDF is also being used to gain a better understanding of road traffic patterns, because information about traffic patterns is spread across different websites and RDF can integrate information from those different sources on the web. Previously, the common methodology was keyword searching, but this method is problematic because it does not account for synonyms, which is why ontologies are useful in this situation. One issue that arises when trying to study traffic efficiently is that fully understanding traffic requires concepts related to people, streets, and roads to be well understood. Since these are human concepts, they require the addition of fuzzy logic: values that are useful when describing roads, such as slipperiness, are not precise concepts and cannot be measured exactly. This implies that the best solution would incorporate both fuzzy logic and ontologies.[59] | https://en.wikipedia.org/wiki/Resource_Description_Framework
In linguistic morphology, a transfix is a discontinuous affix which is inserted into a word root, as in root-and-pattern systems of morphology, like those of many Semitic languages.
A discontinuous affix is an affix whose phonetic components are not sequential within a word, and instead, are spread out between or around the phones that comprise the root. The word root is often an abstract series of three consonants, though single-consonant, biliteral, and quadriliteral roots do exist.[1] An example of a triconsonantal root would be ḍ–r–b (ض ر ب) in Arabic, which can be inflected to create forms such as ḍaraba 'he beat' and yaḍribu 'he beats'. While triconsonantal roots are widely considered to be the most common state, some linguists posit that biliteral roots may in fact be the default, though at least one scholar is skeptical of the legitimacy of these claims.[1]
Transfixes are placed into these roots in assigned positions, dictated by templates which are tied to the specific meaning of a given inflection or derivation.[2] The transfixes in the examples above are –a–a–a and ya––i–u.
Transfixes are different from prefixes, suffixes, and infixes in that a complete transfix is the entire structure which is placed into a root. A transfix is not a combination of prefixes, suffixes, and infixes, but its own unique structure which is split through a word. Similarly, another difference transfixes hold from other affixes is that the individual components of the transfix are meaningless on their own. If we look again at ḍaraba, the components of the –a–a–a transfix do not encode any meaning individually. Only together do they create the tense meaning.
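A rough computational sketch of how a template interleaves with a consonantal root (in Python; the underscore notation for root slots is an invented convenience, not standard linguistic notation):

    # Minimal sketch: slot a triconsonantal root into a vowel template.
    # "_" marks a position to be filled by the next root consonant.
    def apply_transfix(root, template):
        consonants = iter(root)
        return "".join(next(consonants) if ch == "_" else ch for ch in template)

    root = ["ḍ", "r", "b"]                  # the Arabic root ḍ–r–b
    print(apply_transfix(root, "_a_a_a"))   # ḍaraba  'he beat'
    print(apply_transfix(root, "ya__i_u"))  # yaḍribu 'he beats'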
The following are examples of verb inflection in Maltese, noun derivation in Arabic, and noun pluralization in Hausa, all three of which are Afro-Asiatic languages.
The Maltese example efficiently demonstrates the broad nature of transfixes and how they can be inserted into a root.
The Arabic example shows the ways in which a great variety of different nouns and verbs can be derived from a single root through the use of transfixes.
The Hausa example demonstrates the presence of transfixation in non-Semitic languages, though the phenomenon does not seem to be attested outside the Afro-Asiatic family. | https://en.wikipedia.org/wiki/Transfix |
A telecommand or telecontrol is a command sent to control a remote system or systems not directly connected (e.g. via wires) to the place from which the telecommand is sent. The word is derived from tele = remote (Greek) and command = to entrust/order (Latin). Systems that need remote measurement and reporting of information of interest to the system designer or operator require the counterpart of telecommand, telemetry. A telecommand can be executed in real time or not, depending on the circumstances (in space, the delay may be days), as was the case of Marsokhod.[1]
For a telecommand (TC) to be effective, it must be compiled into a pre-arranged format (which may follow a standard structure), modulated onto a carrier wave, and then transmitted with adequate power to the remote system. The remote system then demodulates the digital signal from the carrier, decodes the TC, and executes it. Transmission of the carrier can be by ultrasound, or by infrared or other electromagnetic means.
Infrared light makes up an invisible section of the electromagnetic spectrum.[2] This light, often associated with heat, transmits signals between the transmitter and receiver of the remote system.[2] Telecommand systems usually include a physical remote, which contains four key parts: buttons, an integrated circuit, button contacts, and a light-emitting diode.[3] When a button on the remote is pressed, it touches and closes its corresponding contact below it within the remote.[3] This completes the necessary circuit on the circuit board, along with a change in electrical resistance, which is detected by the integrated circuit. Based on the change in electrical resistance, the integrated circuit determines which button was pushed and sends a corresponding binary code to the light-emitting diode (LED), usually located at the front of the remote.[3] To transfer the information from the remote to the receiver, the LED turns the electrical signals into an invisible beam of infrared light that corresponds with the binary code and sends this light to the receiver.[3] The receiver detects the light signal via a photodiode, and it is transformed into an electrical signal for the command and sent to the receiver's integrated circuit/microprocessor to process and complete the command.[3] The strength of the transmitting LED can vary and determines the required positioning accuracy of the remote relative to the receiver.[2] Infrared remotes have a maximum range of approximately 30 feet and require the remote control or transmitter and receiver to be within a line of sight.[2]
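As a purely illustrative sketch (in Python; the button codes and bit width are invented, and real remotes add carrier modulation and pulse timing not shown here), the button-to-LED step could be modelled as:

    # Map a button press to a binary code and then to a sequence of LED
    # on/off states, one state per bit of the code.
    BUTTON_CODES = {"power": 0b0001, "volume_up": 0b0010}   # invented codes

    def button_to_led_states(button, width=4):
        code = BUTTON_CODES[button]
        bits = [(code >> i) & 1 for i in reversed(range(width))]
        return ["LED on" if b else "LED off" for b in bits]

    print(button_to_led_states("power"))   # ['LED off', 'LED off', 'LED off', 'LED on']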
Ultrasonic technology was used more frequently in the past for telecommand. Inventor Robert Adler is known for inventing a remote control which did not require batteries and used ultrasonic technology.[4] There are four aluminum rods inside the transmitter that produce high-frequency sounds when they are struck at one end. Each rod is a different length, which enables them to produce varying sound pitches that control the receiving unit.[5] This technology was widely used but had certain issues, such as dogs being bothered by the high-frequency sounds.[6]
Small remote-controlled airplanes and helicopters are often incorrectly advertised as radio-controlled devices (see Radio control), but they are actually controlled either via infrared transmission or by electromagnetic guidance. Both of these systems are part of the telecommand area.
To prevent unauthorised access to the remote system, TC encryption may be employed. Secret sharing may also be used. | https://en.wikipedia.org/wiki/Telecommand
The XOP (eXtended Operations[1]) instruction set, announced by AMD on May 1, 2009, is an extension to the 128-bit SSE core instructions in the x86 and AMD64 instruction set for the Bulldozer processor core, which was released on October 12, 2011.[2] However, AMD removed support for XOP from the Zen microarchitecture onward.[3]
The XOP instruction set contains several different types of vector instructions, since it was originally intended as a major upgrade to SSE. Most of the instructions are integer instructions, but it also contains floating-point permutation and floating-point fraction-extraction instructions. See the index for a list of instruction types.
XOP is a revised subset of what was originally intended as SSE5. It was changed to be similar to but not overlapping with AVX; parts that overlapped with AVX were removed or moved to separate standards such as FMA4 (floating-point vector multiply–accumulate) and CVT16 (half-precision floating-point conversion, implemented as F16C by Intel).[1]
All SSE5 instructions that were equivalent or similar to instructions in the AVX and FMA4 instruction sets announced by Intel were changed to use the coding proposed by Intel. Integer instructions without equivalents in AVX were classified as the XOP extension.[1] The XOP instructions have an opcode byte of 8F (hexadecimal), but otherwise use an almost identical coding scheme to AVX with the 3-byte VEX prefix.
Commentators[4]have seen this as evidence that Intel has not allowed AMD to use any part of the large VEX coding space. AMD has been forced to use different codes in order to avoid using any code combination that Intel might possibly be using in its development pipeline for something else. The XOP coding scheme is as close to the VEX scheme as technically possible without risking that the AMD codes overlap with future Intel codes. This inference is speculative, since no public information is available about negotiations between the two companies on this issue.
The use of the 8F byte requires that the m-bits (see the VEX coding scheme) have a value larger than or equal to 8 in order to avoid overlap with existing instructions.[Note 1] The C4 byte used in the VEX scheme has no such restriction. This may prevent the use of the m-bits for other purposes in the future in the XOP scheme, but not in the VEX scheme. Another possible problem is that the pp bits have the value 00 in the XOP scheme, while they have the value 01 in the VEX scheme for instructions that have no legacy equivalent. This may complicate the use of the pp bits for other purposes in the future.
A similar compatibility issue is the difference between the FMA3 and FMA4 instruction sets. Intel initially proposed FMA4 in AVX/FMA specification version 3 to supersede the 3-operand FMA proposed by AMD in SSE5. After AMD adopted FMA4, Intel canceled FMA4 support and reverted to FMA3 in AVX/FMA specification version 5 (see FMA history).[1][5][6]
In March 2015, AMD explicitly revealed in the description of a patch for the GNU Binutils package that Zen, its third-generation x86-64 architecture in its first iteration (znver1 – Zen, version 1), would not support the TBM, FMA4, XOP and LWP instructions developed specifically for the "Bulldozer" family of micro-architectures.[7][8]
These are integer versions of the FMA instruction set. They are all four-operand instructions, similar to FMA4, and they all operate on signed integers.
r0 = a0 * b0 + c0,r1 = a1 * b1 + c1, ..
r0 = a0 * b0 + c0,r1 = a2 * b2 + c1, .[2]
r0 = a0 * b0 + c0,r1 = a1 * b1 + c1, ..
r0 = a0 * b0 + c0,r1 = a2 * b2 + c1
r0 = a1 * b1 + c0,r1 = a3 * b3 + c1
r0 = a0 * b0 + a1 * b1 + c0,r1 = a2 * b2+a3 * b3 + c1, ..
Horizontal addition instructions add adjacent values in the input vector to each other. The output size in the instructions below describes how wide the horizontal addition performed is. For instance, horizontal byte-to-word addition adds two bytes at a time and returns the result as a vector of words, while byte-to-quadword addition adds eight bytes together at a time and returns the result as a vector of quadwords. Six additional horizontal addition and subtraction instructions can be found in SSSE3, but they operate on two input vectors and only perform two-and-two operations.
r0 = a0+a1,r1 = a2+a3,r2 = a4+a5, ...
r0 = a0+a1+a2+a3,r1 = a4+a5+a6+a7, ...
r0 = a0+a1+a2+a3+a4+a5+a6+a7, ...
r0 = a0+a1,r1 = a2+a3,r2 = a4+a5, ...
r0 = a0+a1+a2+a3,r1 = a4+a5+a6+a7
r0 = a0+a1,r1 = a2+a3
r0 = a0-a1,r1 = a2-a3,r2 = a4-a5, ...
r0 = a0-a1,r1 = a2-a3,r2 = a4-a5, ...
r0 = a0-a1,r1 = a2-a3
This set of vector compare instructions all take an immediate as an extra argument. The immediate controls what kind of comparison is performed. There are eight possible comparisons for each instruction. The vectors are compared, and all comparisons that evaluate to true set all corresponding bits in the destination to 1, while false comparisons set all the same bits to 0. This result can be used directly in the VPCMOV instruction for a vectorized conditional move.
VPCMOV works as a bitwise variant of the blend instructions in SSE4. Like the AVX instruction VPBLENDVB, it is a four-operand instruction with three source operands and a destination. For each bit in the third operand (which acts as a selector), 1 selects the same bit in the first source, and 0 selects the same bit in the second source. When used together with the XOP vector comparison instructions above, this can be used to implement a vectorized ternary move, or, if the second input is the same as the destination, a conditional move (CMOV).
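A sketch of the selection semantics (in Python, with integers standing in for 128-bit registers):

    # For each bit of the selector, 1 takes the bit from src1 and 0 takes it from src2.
    def vpcmov(src1, src2, selector, width=128):
        mask = (1 << width) - 1
        return ((src1 & selector) | (src2 & ~selector)) & mask

    # With an all-ones selector the result is src1; with all zeros it is src2.
    assert vpcmov(0xAAAA, 0x5555, 0xFFFF, width=16) == 0xAAAA
    assert vpcmov(0xAAAA, 0x5555, 0x0000, width=16) == 0x5555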
The shift instructions here differ from those in SSE2 in that they can shift each unit by a different amount, using a vector register interpreted as packed signed integers. The sign indicates the direction of the shift or rotate, with positive values causing a left shift and negative values a right shift.[10] Intel has specified a different, incompatible set of variable vector shift instructions in AVX2.[11]
VPPERM is a single instruction that combines the SSSE3 instructions PALIGNR and PSHUFB, and adds more to both. Some compare it to the AltiVec instruction VPERM.[12] It takes three registers as input: the first two are source registers and the third is the selector register. Each byte in the selector selects one of the bytes in one of the two input registers for the output. The selector can also apply effects to the selected bytes, such as setting them to 0, reversing the bit order, and repeating the most significant bit. In addition, any of the effects or inputs can be inverted.
The VPERMIL2PD and VPERMIL2PS instructions are two-source versions of the VPERMILPD and VPERMILPS instructions in AVX, which means that, like VPPERM, they can select output from any of the fields in the two inputs.
These instructions extract the fractional part of floating-point values, that is, the part that would be lost in conversion to an integer. | https://en.wikipedia.org/wiki/XOP_instruction_set
In mathematics, the orbit method (also known as the Kirillov theory, the method of coadjoint orbits, and by a few similar names) establishes a correspondence between irreducible unitary representations of a Lie group and its coadjoint orbits: orbits of the action of the group on the dual space of its Lie algebra. The theory was introduced by Kirillov (1961, 1962) for nilpotent groups and later extended by Bertram Kostant, Louis Auslander, Lajos Pukánszky and others to the case of solvable groups. Roger Howe found a version of the orbit method that applies to p-adic Lie groups.[1] David Vogan proposed that the orbit method should serve as a unifying principle in the description of the unitary duals of real reductive Lie groups.[2]
One of the key observations of Kirillov was that coadjoint orbits of a Lie group G have a natural structure of symplectic manifolds whose symplectic structure is invariant under G. If an orbit is the phase space of a G-invariant classical mechanical system, then the corresponding quantum mechanical system ought to be described via an irreducible unitary representation of G. Geometric invariants of the orbit translate into algebraic invariants of the corresponding representation. In this way the orbit method may be viewed as a precise mathematical manifestation of a vague physical principle of quantization. In the case of a nilpotent group G the correspondence involves all orbits, but for a general G additional restrictions on the orbit are necessary (polarizability, integrality, the Pukánszky condition). This point of view has been significantly advanced by Kostant in his theory of geometric quantization of coadjoint orbits.
For a Lie group $G$, the Kirillov orbit method gives a heuristic method in representation theory. It connects the Fourier transforms of coadjoint orbits, which lie in the dual space of the Lie algebra of $G$, to the infinitesimal characters of the irreducible representations. The method is named after the Russian mathematician Alexandre Kirillov.
At its simplest, it states that a character of a Lie group may be given by the Fourier transform of the Dirac delta function supported on the coadjoint orbits, weighted by the square root of the Jacobian of the exponential map, denoted by $j$. It does not apply to all Lie groups, but works for a number of classes of connected Lie groups, including nilpotent, some semisimple groups, and compact groups.
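In its simplest form, the formula is often written (here only as a sketch; normalization conventions for the orbital measure vary between sources) as

    \chi_\pi(\exp X) = \frac{1}{\sqrt{j(X)}} \int_{\mathcal{O}_\pi} e^{i\langle \lambda, X \rangle} \, d\mu_{\mathcal{O}_\pi}(\lambda),

where $\pi$ is the irreducible representation, $\mathcal{O}_\pi$ is the coadjoint orbit attached to it, $X$ is an element of the Lie algebra, and $\mu_{\mathcal{O}_\pi}$ is the canonical measure on the orbit.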
Let $G$ be a connected, simply connected nilpotent Lie group. Kirillov proved that the equivalence classes of irreducible unitary representations of $G$ are parametrized by the coadjoint orbits of $G$, that is, the orbits of the action of $G$ on the dual space $\mathfrak{g}^*$ of its Lie algebra. The Kirillov character formula expresses the Harish-Chandra character of the representation as a certain integral over the corresponding orbit.
Complex irreducible representations of compact Lie groups have been completely classified. They are always finite-dimensional, unitarizable (i.e. admit an invariant positive definite Hermitian form) and are parametrized by their highest weights, which are precisely the dominant integral weights for the group. If $G$ is a compact semisimple Lie group with a Cartan subalgebra $\mathfrak{h}$, then its coadjoint orbits are closed and each of them intersects the positive Weyl chamber $\mathfrak{h}^*_+$ in a single point. An orbit is integral if this point belongs to the weight lattice of $G$.
The highest weight theory can be restated in the form of a bijection between the set of integral coadjoint orbits and the set of equivalence classes of irreducible unitary representations of $G$: the highest weight representation $L(\lambda)$ with highest weight $\lambda \in \mathfrak{h}^*_+$ corresponds to the integral coadjoint orbit $G \cdot \lambda$. The Kirillov character formula amounts to the character formula earlier proved by Harish-Chandra. | https://en.wikipedia.org/wiki/Kirillov_orbit_theory
A frame injection attack is an attack on Internet Explorer 5, Internet Explorer 6 and Internet Explorer 7 that loads arbitrary code in the browser.[1] The attack is caused by Internet Explorer not checking the destination of the resulting frame,[2] thereby allowing arbitrary code such as JavaScript or VBScript. Frame injection also happens when code gets injected through frames due to scripts not validating their input.[3] This other type of frame injection affects all browsers and scripts that do not validate untrusted input.[4] | https://en.wikipedia.org/wiki/Frame_injection
A Data Matrix is a two-dimensional code consisting of black and white "cells" or dots arranged in either a square or rectangular pattern, also known as a matrix. The information to be encoded can be text or numeric data. The usual data size is from a few bytes up to 1556 bytes. The length of the encoded data depends on the number of cells in the matrix. Error correction codes are often used to increase reliability: even if one or more cells are damaged so as to be unreadable, the message can still be read. A Data Matrix symbol can store up to 2,335 alphanumeric characters.
Data Matrix symbols are rectangular, usually square in shape, and composed of square "cells" which represent bits. Depending on the coding used, a "light" cell represents a 0 and a "dark" cell is a 1, or vice versa. Every Data Matrix is composed of two solid adjacent borders in an "L" shape (called the "finder pattern") and two other borders consisting of alternating dark and light "cells" or modules (called the "timing pattern"). Within these borders are rows and columns of cells encoding information. The finder pattern is used to locate and orient the symbol while the timing pattern provides a count of the number of rows and columns in the symbol. As more data is encoded in the symbol, the number of cells (rows and columns) increases. Each code is unique. Symbol sizes vary from 10×10 to 144×144 in the newer version ECC 200, and from 9×9 to 49×49 in the older versions ECC 000–140.
The most popular application for Data Matrix is marking small items, due to the code's ability to encode fifty characters in a symbol that is readable at 2 or 3 mm² (0.003 or 0.005 sq in) and the fact that the code can be read with only a 20% contrast ratio.[1] A Data Matrix is scalable; commercial applications exist with images as small as 300 micrometres (0.012 in) (laser-etched on a 600-micrometre (0.024 in) silicon device) and as large as a 1 metre (3 ft) square (painted on the roof of a boxcar). Fidelity of the marking and reading systems is the only limitation.
The US Electronic Industries Alliance (EIA) recommends using Data Matrix for labeling small electronic components.[2]
Data Matrix codes are becoming common on printed media such as labels and letters. The code can be read quickly by a barcode reader, which allows the media to be tracked, for example when a parcel has been dispatched to the recipient.
For industrial engineering purposes, Data Matrix codes can be marked directly onto components, ensuring that only the intended component is identified with the data-matrix-encoded data. The codes can be marked onto components with various methods, but within the aerospace industry these are commonly industrial ink-jet, dot-peen marking, laser marking, and electrolytic chemical etching (ECE). These methods give a permanent mark which can last up to the lifetime of the component.
Data Matrix codes are usually verified using specialist camera equipment and software. This verification ensures the code conforms to the relevant standards, and ensures readability for the lifetime of the component. After a component enters service, the Data Matrix code can be read by a reader camera, which decodes the Data Matrix data; the data can then be used for a number of purposes, such as movement tracking or inventory stock checks.
Data Matrix codes, along with other open-source codes such as 1D barcodes can also be read with mobile phones by downloading code specific mobile applications. Although many mobile devices are able to read 2D codes including Data Matrix Code,[3]few extend the decoding to enable mobile access and interaction, whereupon the codes can be used securely and across media; for example, in track and trace, anti-counterfeit, e.govt, and banking solutions.
Data Matrix codes are used in the food industry in autocoding systems to prevent food products being packaged and dated incorrectly. Codes are maintained internally on a food manufacturer's database and associated with each unique product, e.g. ingredient variations. For each product run, the unique code is supplied to the printer. Label artwork is required to allow the 2D Data Matrix to be positioned for optimal scanning. For black-on-white codes, testing isn't required unless print quality is an issue, but all color variations need to be tested before production to ensure they are readable.[citation needed]
In May 2006 a German computer programmer, Bernd Hopfengärtner, created a large Data Matrix in a wheat field (in a fashion similar to crop circles). The message read "Hello, World!".[4]
Data Matrix symbols are made up of modules arranged within a perimeter finder and timing pattern. It can encode up to 3,116 characters from the entire ASCII character set (with extensions). The symbol consists of data regions which contain modules set out in a regular array. Large symbols contain several regions. Each data region is delimited by a finder pattern, and this is surrounded on all four sides by a quiet zone border (margin). (Note: the modules may be round or square; no specific shape is defined in the standard. For example, dot-peened cells are generally round.)
ECC 200, the newer version of Data Matrix, uses Reed–Solomon codes for error and erasure recovery. ECC 200 allows the routine reconstruction of the entire encoded data string when the symbol has sustained 30% damage, assuming the matrix can still be accurately located. Data Matrix has an error rate of less than 1 in 10 million characters scanned.[5]
Symbols have an even number of rows and an even number of columns. Most of the symbols are square with sizes from 10 × 10 to 144 × 144. Some symbols, however, are rectangular with sizes from 8×18 to 16×48 (even values only). All symbols using the ECC 200 error correction can be recognized by the upper-right corner module being the same as the background color (binary 0).
Additional capabilities that differentiate ECC 200 symbols from the earlier standards include:[6]
Older versions of Data Matrix include ECC 000, ECC 050, ECC 080, ECC 100 and ECC 140. Instead of using Reed–Solomon codes like ECC 200, ECC 000–140 use a convolution-based error correction. Each varies in the amount of error correction it offers, with ECC 000 offering none and ECC 140 offering the greatest. For error detection at decode time, even in the case of ECC 000, each of these versions also encodes a cyclic redundancy check (CRC) on the bit pattern. As an added measure, the placement of each bit in the code is determined by bit-placement tables included in the specification. These older versions always have an odd number of modules and can be made in sizes ranging from 9 × 9 to 49 × 49. All symbols utilizing the ECC 000 through 140 error correction can be recognized by the upper-right corner module being the inverse of the background color (binary 1).
According to ISO/IEC 16022, "ECC 000–140 should only be used in closed applications where a single party controls both the production and reading of the symbols and is responsible for overall system performance."
Data Matrix was invented by International Data Matrix, Inc. (ID Matrix), which was merged into RVSI/Acuity CiMatrix, which was acquired by Siemens AG in October 2005 and by Microscan Systems in September 2008. Data Matrix is covered today by several ISO/IEC standards and is in the public domain for many applications, which means it can be used free of any licensing or royalties.
Data Matrix codes use Reed–Solomon error correction over the finite field $\mathbb{F}_{256}$ (or GF($2^8$)), the elements of which are encoded as bytes of 8 bits; the byte $b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0$ with standard numerical value $\sum_{i=0}^{7} b_i 2^i$ encodes the field element $\sum_{i=0}^{7} b_i \alpha^i$, where $\alpha \in \mathbb{F}_{256}$ is taken to be a primitive element satisfying $\alpha^8 + \alpha^5 + \alpha^3 + \alpha^2 + 1 = 0$. The primitive polynomial is $x^8 + x^5 + x^3 + x^2 + 1$, corresponding to the polynomial number 301, with initial root 1 used to obtain the generator polynomials. The Reed–Solomon code uses different generator polynomials over $\mathbb{F}_{256}$, depending on how many error correction bytes the code adds. The number of bytes added is equal to the degree of the generator polynomial.
For example, in the 10 × 10 symbol, there are 3 data bytes and 5 error correction bytes. The generator polynomial is obtained as $g(x) = (x+\alpha)(x+\alpha^2)(x+\alpha^3)(x+\alpha^4)(x+\alpha^5)$,
which gives $g(x) = x^5 + \alpha^{235}x^4 + \alpha^{207}x^3 + \alpha^{210}x^2 + \alpha^{244}x + \alpha^{15}$,
or, with decimal coefficients, $g(x) = x^5 + 62x^4 + 111x^3 + 15x^2 + 48x + 228$.
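A short sketch (in Python; not taken from the standard) that reproduces these decimal coefficients:

    # Compute the Data Matrix Reed-Solomon generator polynomial over GF(256),
    # using the primitive polynomial x^8 + x^5 + x^3 + x^2 + 1 (0x12D) and alpha = 2.
    def gf_mul(a, b, prim=0x12D):
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            if a & 0x100:
                a ^= prim
            b >>= 1
        return result

    def generator_poly(nsym):
        # g(x) = (x + alpha)(x + alpha^2) ... (x + alpha^nsym),
        # coefficients listed from the highest degree down.
        g = [1]
        r = 1
        for _ in range(nsym):
            r = gf_mul(r, 2)                   # next power of alpha
            out = g + [0]                      # multiply g(x) by x
            for i in range(len(g)):
                out[i + 1] ^= gf_mul(g[i], r)  # add r * g(x)
            g = out
        return g

    print(generator_poly(5))   # [1, 62, 111, 15, 48, 228], matching g(x) above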
The encoding process is described in the ISO/IEC standard 16022:2006.[7] Open-source software for encoding and decoding the ECC 200 variant of Data Matrix has been published.[8][9]
The diagrams below illustrate the placement of the message data within a Data Matrix symbol. The message is "Wikipedia", and it is arranged in a somewhat complicated diagonal pattern starting near the upper-left corner. Some characters are split in two pieces, such as the initial W, and the third 'i' is in "corner pattern 2" rather than the usual L-shaped arrangement. Also shown are the end-of-message code (marked End), the padding (P) and error correction (E) bytes, and four modules of unused space (X).
The symbol is of size 16×16 (14×14 data area), with 12 data bytes (including 'End' and padding) and 12 error correction bytes. A (255,243,6) Reed Solomon code shortened to (24,12,6) is used. It can correct up to 6 byte errors or erasures.
To obtain the error correction bytes, the following procedure may be carried out:
The generator polynomial specified for the (24,12,6) code is $g(x) = x^{12} + 242x^{11} + 100x^{10} + 178x^{9} + 97x^{8} + 213x^{7} + 142x^{6} + 42x^{5} + 61x^{4} + 91x^{3} + 158x^{2} + 153x + 41$,
which may also be written in the form of a matrix of decimal coefficients:
The 12-byte long message "Wikipedia" including 'End', P1 and P2, in decimal coefficients (see the diagrams below for the computation method using ASCII values), is:
Using the procedure for Reed–Solomon systematic encoding, the 12 error correction bytes (E1 through E12) are obtained as the remainder after dividing the message polynomial by the generator polynomial.
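The specific byte values are not reproduced in this text. A generic sketch of the systematic encoding step (in Python, reusing gf_mul and generator_poly from the earlier sketch) is:

    # Append nsym zero bytes to the message, divide by the generator polynomial
    # over GF(256), and take the remainder as the error-correction bytes.
    def rs_ecc_bytes(message, nsym):
        gen = generator_poly(nsym)
        buf = list(message) + [0] * nsym
        for i in range(len(message)):
            factor = buf[i]
            if factor:
                for j in range(1, len(gen)):
                    buf[i + j] ^= gf_mul(gen[j], factor)
        return buf[-nsym:]      # e.g. the 12 E-bytes for the (24,12,6) code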
These error correction bytes are then appended to the original message. The resulting coded message has 24 bytes, and is in the form:
or in decimal coefficients:
and in hexadecimal coefficients:
Multiple encoding modes are used to store different kinds of messages. The default mode stores one ASCII character per 8-bit codeword. Control codes are provided to switch between modes, as shown below.
The C40, Text and X12 modes are potentially more compact for storing text messages. They are similar to DEC Radix-50, using character codes in the range 0–39, and three of these codes are combined to make a number up to 40³ = 64000, which is packed into two bytes (maximum value 65536) as follows:
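A sketch of the packing step (in Python; the exact formula, including the +1 offset, is stated here as commonly published and should be treated as an assumption rather than a quotation from the standard):

    # Pack three C40/Text/X12 character codes (each 0-39) into two bytes.
    def pack_c40_triple(c1, c2, c3):
        v = 1600 * c1 + 40 * c2 + c3 + 1   # 1 .. 64000
        return v // 256, v % 256            # (B1, B2)

    print(pack_c40_triple(39, 39, 39))      # (250, 0): B1 tops out at 250

The maximum triple (39, 39, 39) gives 64000 = 250 × 256, which is why B1 tops out at 250 as noted below.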
The resulting value of B1 is in the range 0–250. The special value 254 is used to return to ASCII encoding mode.
Character code interpretations are shown in the table below. The C40 and Text modes have four separate sets. Set 0 is the default and contains codes that temporarily select a different set for the next character. The only difference between C40 and Text is that they reverse upper- and lower-case letters: C40 is primarily upper-case, with lower-case letters in set 3, while Text is the other way around. Set 1, containing ASCII control codes, and set 2, containing punctuation symbols, are identical in C40 and Text modes.
EDIFACT mode uses six bits per character, with four characters packed into three bytes. It can store digits, upper-case letters, and many punctuation marks, but has no support for lower-case letters.
Base 256 mode data starts with a length indicator, followed by a number of data bytes. A length of 1 to 249 is encoded as a single byte, and longer lengths are stored as two bytes. It is desirable to avoid long strings of zeros in the coded message, because they become large blank areas in the Data Matrix symbol, which may cause a scanner to lose synchronization. (The default ASCII encoding does not use zero for this reason.) To make that less likely, the length and data bytes are obscured by adding a pseudorandom value R(n), where n is the position in the byte stream.
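A sketch of this randomising step (in Python; the constant 149, the mod-255 form, and the 1-based position follow the commonly published description of the 255-state algorithm and should be treated as assumptions):

    # Obscure one codeword by adding the position-dependent pseudorandom value R(n).
    def randomize_255_state(codeword, position):
        r = ((149 * position) % 255) + 1    # R(n), n = 1-based position in the byte stream
        total = codeword + r
        return total if total <= 255 else total - 256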
Prior to the expiration of US patent 5,612,524[10] in November 2007, the intellectual property company Acacia Technologies claimed that Data Matrix was partially covered by the patent. As the patent owner, Acacia allegedly contacted Data Matrix users demanding license fees related to the patent.
Cognex Corporation, a large manufacturer of 2D barcode devices, filed a declaratory judgment complaint on 13 March 2006 after receiving information that Acacia had contacted its customers demanding licensing fees. On 19 May 2008, Judge Joan N. Ericksen of the U.S. District Court in Minnesota ruled in favor of Cognex.[11] The ruling held that the '524 patent, which claimed to cover a system for capturing and reading 2D symbology codes, is both invalid and unenforceable due to inequitable conduct by the defendants during the procurement of the patent.
While the ruling was delivered after the patent expired, it precluded claims for infringement based on use of Data Matrix prior to November 2007.
A German patent application DE 4107020 was filed in 1991, and published in 1992. This patent is not cited in the above US patent applications and might invalidate them.[citation needed] | https://en.wikipedia.org/wiki/Data_Matrix |
PAN truncation is an anti-fraud measure available on some credit-card-processing point of sale (POS) terminals as part of a merchant account service.
"PAN" is an acronym for primary account number, i.e., the "card number" on either a debit or a credit card. PAN truncation simply replaces the card number printed on a customer receipt with a printout of only the last four digits, the remainder being replaced, usually by asterisks. This hides the card number from anyone who obtains the receipt when discarded, or by other means, while still allowing a card holder with multiple cards to identify which was used, and thus accurately record the transaction.
PAN truncation is a measure to combat payment card fraud, which is increasing worldwide,[1] particularly in a global market where "card not present" (CNP) transactions are increasingly[2] popular over the Internet, by mail, and by telephone. | https://en.wikipedia.org/wiki/PAN_truncation
In linguistics, a word stem is a word part responsible for a word's lexical meaning. The term is used with slightly different meanings depending on the morphology of the language in question. For instance, in Athabaskan linguistics, a verb stem is a root that cannot appear on its own and that carries the tone of the word.
Typically, a stem remains unmodified during inflection, with few exceptions due to apophony (for example, in Polish, miast-o ("city") and w mieść-e ("in the city"); in English, sing, sang, and sung, where it can be modified according to morphological rules or peculiarities, such as sandhi).
Word stem comparisons across languages have helped reveal cognates that have allowed comparative linguists to determine language families and their history.[1]
The word friendship is made by attaching the morpheme -ship to the root word friend (which some linguists[2] also call a stem). While the inflectional plural morpheme -s can be attached to friendship to form friendships, it cannot be attached to the root friend within friendship to form friendsship. A stem is a base from which all of its inflected variants are formed.[3] For example, stabil- (a variant of stable that cannot stand alone) is the root of destabilized, while the stem consists of de·stabil·ize, including de- and -ize. The -(e)d, on the other hand, is not part of the stem.
A stem can be a lone root, such as run, or a compound or derived word, such as the compound nouns meatball and bottleneck or the derived verbs blacken and standardize.
The stem of the verb to wait is wait: the stem is the word part that is common to all of its inflected variants.
In languages with very little inflection, such as English and Chinese, the stem is usually not distinct from the "normal" form of the word (the lemma, citation, or dictionary form). However, in other languages, word stems may rarely or never occur on their own. For example, the English verb stem run is indistinguishable from its present tense form (except in the third person singular). However, the equivalent Spanish verb stem corr- never appears as such, because it is cited with the infinitive inflection (correr) and always appears in actual speech as a non-finite (infinitive or participle) or conjugated form. Such morphemes that cannot occur on their own in this way are usually referred to as bound morphemes.
In computational linguistics, the term "stem" is used for the part of the word that never changes, even morphologically, when inflected, while a lemma is the base form of the word.[citation needed] For example, given the word "produced", its lemma is "produce", but the stem is "produc-", because of the inflected form "producing".
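A small sketch of the distinction using the NLTK library (assuming NLTK and its WordNet data are installed; the outputs shown in comments are the expected results):

    # Stem vs. lemma for the same underlying word.
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    print(stemmer.stem("producing"))              # produc  (a stem, not a full word)
    print(lemmatizer.lemmatize("produced", "v"))  # produce (the lemma, a citation form)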
A list of all the inflected forms of a word stem is called its inflectional paradigm. The paradigm of the adjective tall is given below, and the stem of this adjective is tall.
Some paradigms do not make use of the same stem throughout; this phenomenon is called suppletion. An example of a suppletive paradigm is the paradigm for the adjective good: its stem changes from good to the bound morpheme bet-.
Both in Latin and Greek, the declension (inflection) of some nouns uses a different stem in the oblique cases than in the nominative and vocative singular cases. Such words belong to, respectively, the so-called third declension of the Latin grammar and the so-called third declension of the Ancient Greek grammar. For example, the genitive singular is formed by adding -is (Latin) or -ος (Greek) to the oblique stem, and the genitive singular is conventionally listed in Greek and Latin dictionaries to illustrate the oblique.
English words derived from Latin or Greek often involve the oblique stem: adipose, altitudinal, android, and mathematics.
Historically, the difference in stems arose due to sound changes in the nominative. In the Latin third declension, for example, the nominative singular suffix -s is combined with a stem-final consonant. If that consonant was c, the result was x (a mere orthographic change), while if it was g, the -s caused it to devoice, again resulting in x. If the stem-final consonant was another alveolar consonant (t, d, r), it elided before the -s. In a later era, n before the nominative ending was also lost, producing pairs like atlas, atlant- (for English Atlas, Atlantic). | https://en.wikipedia.org/wiki/Word_stem