ORYX is an encryption algorithm used in cellular communications to protect data traffic. It is a stream cipher designed to have a very strong 96-bit key strength, with a way to reduce the strength to 32 bits for export. However, due to design flaws, the actual strength is a trivial 16 bits, and any signal can be cracked after the first 25–27 bytes.[1]
It is one of the four cryptographic primitives standardized by the TIA for use in its digital cellular communications standards TDMA and CDMA.[1]
ORYX is a simple stream cipher based on binary linear-feedback shift registers (LFSRs), used to protect cellular data transmissions (for wireless data services).
The cipher has four components: three 32-bit LFSRs, labeled LFSRA, LFSRB and LFSRK, and an S-box containing a known permutation P of the integer values 0 to 255.
The feedback function for LFSRK is defined as:
Lt+32 = Lt+28 ⊕ Lt+19 ⊕ Lt+18 ⊕ Lt+16 ⊕ Lt+14 ⊕ Lt+11 ⊕ Lt+10 ⊕ Lt+9 ⊕ Lt+6 ⊕ Lt+5 ⊕ Lt+1 ⊕ Lt
The two feedback functions for LFSRA are defined as:
Lt+32 = Lt+26 ⊕ Lt+23 ⊕ Lt+22 ⊕ Lt+16 ⊕ Lt+12 ⊕ Lt+11 ⊕ Lt+10 ⊕ Lt+8 ⊕ Lt+7 ⊕ Lt+5 ⊕ Lt+4 ⊕ Lt+2 ⊕ Lt+1 ⊕ Lt
and
Lt+32 = Lt+27 ⊕ Lt+26 ⊕ Lt+25 ⊕ Lt+24 ⊕ Lt+23 ⊕ Lt+22 ⊕ Lt+17 ⊕ Lt+13 ⊕ Lt+11 ⊕ Lt+10 ⊕ Lt+9 ⊕ Lt+8 ⊕ Lt+7 ⊕ Lt+2 ⊕ Lt+1 ⊕ Lt
The feedback function for LFSRB is:
Lt+32 = Lt+31 ⊕ Lt+21 ⊕ Lt+20 ⊕ Lt+16 ⊕ Lt+15 ⊕ Lt+6 ⊕ Lt+3 ⊕ Lt+1 ⊕ Lt
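As an illustration, a recurrence of this kind can be implemented as a 32-bit Fibonacci-style shift register. The sketch below encodes only the LFSRB register update; the ORYX keystream combiner, which mixes the three registers through the S-box, is not shown, and the bit ordering (Lt in bit 0, the new feedback bit entering at the top) is an assumption made for illustration.

```python
# Tap positions read off the LFSRB feedback function:
# L(t+32) = L(t+31) ^ L(t+21) ^ L(t+20) ^ L(t+16) ^ L(t+15) ^ L(t+6) ^ L(t+3) ^ L(t+1) ^ L(t)
TAPS_B = (31, 21, 20, 16, 15, 6, 3, 1, 0)

def lfsr_step(state: int, taps: tuple = TAPS_B) -> int:
    """Advance a 32-bit LFSR one step.

    Bit i of `state` holds L(t+i); the returned state holds
    L(t+1) .. L(t+32), with the new feedback bit in the top position.
    """
    fb = 0
    for i in taps:
        fb ^= (state >> i) & 1
    return ((state >> 1) | (fb << 31)) & 0xFFFFFFFF
```

For example, with state = 1 (only Lt set), the tapped XOR is 1, so one step yields 0x80000000.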
|
https://en.wikipedia.org/wiki/ORYX
|
Shibboleth is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems run by federations of different organizations or institutions. The federations are often universities or public service organizations.
The Shibboleth Internet2 middleware initiative created an architecture and open-source implementation for identity management and federated identity-based authentication and authorization (or access control) infrastructure based on Security Assertion Markup Language (SAML). Federated identity allows the sharing of information about users from one security domain to the other organizations in a federation. This allows for cross-domain single sign-on and removes the need for content providers to maintain usernames and passwords. Identity providers (IdPs) supply user information, while service providers (SPs) consume this information and give access to secure content.
The Shibboleth project grew out of Internet2. Today, the project is managed by the Shibboleth Consortium. Two of the most popular software components managed by the Shibboleth Consortium are the Shibboleth Identity Provider and the Shibboleth Service Provider, both of which are implementations of SAML.
The project was named after an identifying passphrase used in the Bible (Judges 12:4–6), because Ephraimites were not able to pronounce "sh".
The Shibboleth project was started in 2000 to facilitate the sharing of resources between organizations with incompatible authentication and authorization infrastructures. Architectural work was performed for over a year prior to any software development. After development and testing, Shibboleth IdP 1.0 was released in July 2003.[1] This was followed by the release of Shibboleth IdP 1.3 in August 2005.
Version 2.0 of the Shibboleth software was a major upgrade released in March 2008.[2] It included both IdP and SP components, but, more importantly, Shibboleth 2.0 supported SAML 2.0.
The Shibboleth and SAML protocols were developed during the same timeframe. From the beginning, Shibboleth was based on SAML, but, where SAML was found lacking, Shibboleth improvised, and the Shibboleth developers implemented features that compensated for missing features in SAML 1.1. Some of these features were later incorporated into SAML 2.0, and, in that sense, Shibboleth contributed to the evolution of the SAML protocol.
Perhaps the most important contributed feature was the legacy Shibboleth AuthnRequest protocol. Since the SAML 1.1 protocol was inherently an IdP-first protocol, Shibboleth invented a simple HTTP-based authentication request protocol that turned SAML 1.1 into an SP-first protocol. This protocol was first implemented in Shibboleth IdP 1.0 and later refined in Shibboleth IdP 1.3.
Building on that early work, the Liberty Alliance introduced a fully expanded AuthnRequest protocol into the Liberty Identity Federation Framework. Eventually, Liberty ID-FF 1.2 was contributed to OASIS, which formed the basis for the OASIS SAML 2.0 standard.
Shibboleth is a web-based technology that implements the HTTP/POST, artifact, and attribute push profiles of SAML, including both Identity Provider (IdP) and Service Provider (SP) components. Shibboleth 1.3 has its own technical overview,[3] architectural document,[4] and conformance document[5] that build on top of the SAML 1.1 specifications.
In the canonical use case, a browser user attempts to access a resource protected by the SP; the SP directs the user to the IdP with an authentication request; the user authenticates at the IdP, which returns an assertion about the user to the SP; and the SP uses the attributes in that assertion to make an access-control decision.
Shibboleth supports a number of variations on this base case, including portal-style flows whereby the IdP mints an unsolicited assertion to be delivered in the initial access to the SP, and lazy session initiation, which allows an application to trigger content protection through a method of its choice as required.
Shibboleth 1.3 and earlier do not provide a built-in authentication mechanism, but any Web-based authentication mechanism can be used to supply user data for Shibboleth to use. Common systems for this purpose include CAS or Pubcookie. The authentication and single sign-on features of the Java container in which the IdP runs (Tomcat, for example) can also be used.
Shibboleth 2.0 builds on SAML 2.0 standards. The IdP in Shibboleth 2.0 has to do additional processing in order to support passive and forced authentication requests in SAML 2.0. The SP can request a specific method of authentication from the IdP. Shibboleth 2.0 also supports additional encryption capabilities.
Shibboleth's access control is performed by matching attributes supplied by IdPs against rules defined by SPs. An attribute is any piece of information about a user, such as "member of this community", "Alice Smith", or "licensed under contract A". User identity is considered an attribute and is only passed when explicitly required, which preserves user privacy. Attributes can be written in Java or pulled from directories and databases. Standard X.520 attributes are most commonly used, but new attributes can be arbitrarily defined, as long as they are understood and interpreted similarly by the IdP and SP in a transaction.
Trust between domains is implemented using public key cryptography (often simply TLS server certificates) and metadata that describes providers. The use of the information passed is controlled through agreements. Federations are often used to simplify these relationships by aggregating large numbers of providers that agree to use common rules and contracts.
Shibboleth is open-source and provided under the Apache 2 license. Many extensions have been contributed by other groups.
|
https://en.wikipedia.org/wiki/Shibboleth_(Internet2)
|
VOMS is an acronym for Virtual Organization Membership Service in grid computing. It is structured as a simple account database with fixed formats for the information exchange, and features single login, expiration time, backward compatibility, and multiple virtual organizations.
The database is populated with authorization data that defines specific capabilities and roles for users. Administrative tools can be used by administrators to assign roles and capability information in the database. A command-line tool allows users to generate a local proxy credential based on the contents of the VOMS database. This credential includes the basic authentication information that standard Grid proxy credentials contain, but it also includes role and capability information from the VOMS server.
VOMS-aware applications can use the VOMS data to make authorization decisions regarding user requests. VOMS was originally developed by the European DataGrid and Enabling Grids for E-sciencE projects and is now maintained by the Italian National Institute for Nuclear Physics (INFN).
VOMS is also an acronym for VOucher Management System, used for providing recharge management services for prepaid systems of telecom service providers. Typically, external voucher management systems are used with Intelligent Network-based prepaid systems.
|
https://en.wikipedia.org/wiki/Voms
|
The Automatic Certificate Management Environment (ACME) protocol is a communications protocol for automating interactions between certificate authorities and their users' servers, allowing the automated deployment of public key infrastructure at very low cost.[1][2] It was designed by the Internet Security Research Group (ISRG) for their Let's Encrypt service.[1]
The protocol, based on passing JSON-formatted messages over HTTPS,[2][3] has been published as an Internet Standard in RFC 8555[4] by its own chartered IETF working group.[5]
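Those JSON messages are carried in a JSON Web Signature (JWS) envelope using unpadded base64url encoding. The sketch below shows only the shape of such a request body; the `nonce`, URLs, and account identifier are hypothetical placeholder values, and a real client would sign the payload with its account key rather than insert a placeholder signature string.

```python
import base64
import json

def b64url(data: bytes) -> str:
    # ACME (RFC 8555) uses the unpadded base64url alphabet from the JWS spec
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Hypothetical values, for illustration only.
protected_header = {
    "alg": "ES256",
    "kid": "https://acme.example/acct/1",
    "nonce": "example-nonce",
    "url": "https://acme.example/new-order",
}
payload = {"identifiers": [{"type": "dns", "value": "example.com"}]}

request_body = {
    "protected": b64url(json.dumps(protected_header).encode()),
    "payload": b64url(json.dumps(payload).encode()),
    "signature": "<computed-with-account-key>",  # placeholder, not a real JWS signature
}
```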
The ISRG provides free and open-source reference implementations for ACME: certbot is a Python-based implementation of server certificate management software using the ACME protocol,[6][7][8] and boulder is a certificate authority implementation, written in Go.[9]
Since 2015, a large variety of client options have appeared for all operating systems.[10]
The API v1 specification was published on April 12, 2016. It supports issuing certificates for fully qualified domain names, such as example.com or cluster.example.com, but not wildcards like *.example.com. Let's Encrypt turned off API v1 support on 1 June 2021.[11]
API v2 was released on March 13, 2018, after being pushed back several times. ACME v2 is not backwards compatible with v1. Version 2 supports wildcard domains, such as *.example.com, allowing many subdomains to have trusted TLS, e.g. https://cluster01.example.com, https://cluster02.example.com, https://example.com, on private networks under a single domain using a single shared "wildcard" certificate.[12] A major new requirement in v2 is that requests for wildcard certificates require the modification of a Domain Name System TXT record, verifying control over the domain.
Changes to the ACME v2 protocol since v1 include:[13]
|
https://en.wikipedia.org/wiki/Automated_Certificate_Management_Environment
|
A biordered set (otherwise known as a boset) is a mathematical object that occurs in the description of the structure of the set of idempotents in a semigroup.
The set of idempotents in a semigroup is a biordered set, and every biordered set is the set of idempotents of some semigroup.[1][2] A regular biordered set is a biordered set with an additional property. The set of idempotents in a regular semigroup is a regular biordered set, and every regular biordered set is the set of idempotents of some regular semigroup.[1]
The concept and the terminology were developed by K. S. S. Nambooripad in the early 1970s.[3][4][1] In 2002, Patrick Jordan introduced the term boset as an abbreviation of biordered set.[5] The defining properties of a biordered set are expressed in terms of two quasiorders defined on the set, hence the name biordered set.
According to Mohan S. Putcha, "The axioms defining a biordered set are quite complicated. However, considering the general nature of semigroups, it is rather surprising that such a finite axiomatization is even possible."[6] Since the publication of the original definition of the biordered set by Nambooripad, several variations in the definition have been proposed. David Easdown simplified the definition and formulated the axioms in a special arrow notation invented by him.[7]
If X and Y are sets and ρ ⊆ X × Y, let ρ(y) = { x ∈ X : x ρ y }.
Let E be a set in which a partial binary operation, indicated by juxtaposition, is defined. If D_E is the domain of the partial binary operation on E, then D_E is a relation on E, and (e, f) is in D_E if and only if the product ef exists in E. The following relations can be defined in E:
If T is any statement about E involving the partial binary operation and the above relations in E, one can define the left-right dual of T, denoted by T*. If D_E is symmetric, then T* is meaningful whenever T is.
The set E is called a biordered set if the following axioms and their duals hold for arbitrary elements e, f, g, etc. in E.
In M(e, f) = ω^l(e) ∩ ω^r(f) (the M-set of e and f in that order), define a relation ≺ by
Then the set
is called the sandwich set of e and f in that order.
We say that a biordered set E is an M-biordered set if M(e, f) ≠ ∅ for all e and f in E.
Also, E is called a regular biordered set if S(e, f) ≠ ∅ for all e and f in E.
In 2012, Roman S. Gigoń gave a simple proof that M-biordered sets arise from E-inversive semigroups.[8]
A subset F of a biordered set E is a biordered subset (subboset) of E if F is a biordered set under the partial binary operation inherited from E.
For any e in E, the sets ω^r(e), ω^l(e) and ω(e) are biordered subsets of E.[1]
A mapping φ : E → F between two biordered sets E and F is a biordered set homomorphism (also called a bimorphism) if, for all (e, f) in D_E, we have (eφ)(fφ) = (ef)φ.
Let V be a vector space and
where V = A ⊕ B means that A and B are subspaces of V and V is the internal direct sum of A and B.
The partial binary operation ⋆ on E defined by
makes E a biordered set. The quasiorders in E are characterised as follows:
The set E of idempotents in a semigroup S becomes a biordered set if a partial binary operation is defined in E as follows: ef is defined in E if and only if ef = e, ef = f, fe = e or fe = f holds in S. If S is a regular semigroup, then E is a regular biordered set.
As a concrete example, let S be the semigroup of all mappings of X = {1, 2, 3} into itself. Let the symbol (abc) denote the map for which 1 → a, 2 → b, and 3 → c. The set E of idempotents in S contains the following elements:
The following table (taking composition of mappings in the diagram order) describes the partial binary operation in E. An X in a cell indicates that the corresponding multiplication is not defined.
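For this concrete example, the idempotents of S and the partial operation on E can be enumerated mechanically. The sketch below follows the article's conventions (maps encoded as triples (a, b, c), composition in diagram order); the helper names are illustrative, not standard.

```python
from itertools import product

X = (1, 2, 3)
S = list(product(X, repeat=3))             # all 27 maps, f = (f(1), f(2), f(3))

def compose(f, g):
    # diagram order: apply f first, then g
    return tuple(g[f[x - 1] - 1] for x in X)

E = [f for f in S if compose(f, f) == f]   # the idempotents of S

def partial_product(e, f):
    """Return ef if it is defined in the biordered set E, else None."""
    ef, fe = compose(e, f), compose(f, e)
    if ef in (e, f) or fe in (e, f):
        return ef                          # one of ef=e, ef=f, fe=e, fe=f holds
    return None                            # undefined: the "X" cells of the table
```

Running this shows that E has 10 elements; for instance partial_product((1, 2, 1), (1, 1, 3)) is undefined, since both products equal the constant map (1, 1, 1).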
|
https://en.wikipedia.org/wiki/Biordered_set
|
In mathematics, a compact semigroup is a semigroup in which the sets of solutions to equations can be described by finite sets of equations. The term "compact" here does not refer to any topology on the semigroup.
Let S be a semigroup and X a finite set of letters. A system of equations is a subset E of the Cartesian product X* × X* of the free monoid (finite strings) over X with itself. The system E is satisfiable in S if there is a map f from X to S, which extends to a semigroup morphism f from X+ to S, such that for all (u, v) in E we have f(u) = f(v) in S. Such an f is a solution, or satisfying assignment, for the system E.[1]
Two systems of equations are equivalent if they have the same set of satisfying assignments. A system of equations is independent if it is not equivalent to a proper subset of itself.[1] A semigroup is compact if every independent system of equations is finite.[2]
The class of compact semigroups does not form an equational variety. However, a variety of monoids has the property that all its members are compact if and only if all finitely generated members satisfy the maximal condition on congruences (any family of congruences, ordered by inclusion, has a maximal element).[8]
|
https://en.wikipedia.org/wiki/Compact_semigroup
|
In mathematics, a semigroup with no elements (the empty semigroup) is a semigroup in which the underlying set is the empty set. Many authors do not admit the existence of such a semigroup: for them, a semigroup is by definition a non-empty set together with an associative binary operation.[1][2] However, not all authors insist on the underlying set of a semigroup being non-empty.[3] One can logically define a semigroup in which the underlying set S is empty. The binary operation in the semigroup is the empty function from S × S to S. This operation vacuously satisfies the closure and associativity axioms of a semigroup. Not excluding the empty semigroup simplifies certain results on semigroups. For example, the result that the intersection of two subsemigroups of a semigroup T is a subsemigroup of T becomes valid even when the intersection is empty.
When a semigroup is defined to have additional structure, the issue may not arise. For example, the definition of a monoid requires an identity element, which rules out the empty semigroup as a monoid.
In category theory, the empty semigroup is always admitted. It is the unique initial object of the category of semigroups.
A semigroup with no elements is an inverse semigroup, since the necessary condition is vacuously satisfied.
|
https://en.wikipedia.org/wiki/Empty_semigroup
|
In algebra, the principal factor of a 𝒥-class J of a semigroup S is equal to J if J is the kernel of S, and to J ∪ {0} otherwise.
|
https://en.wikipedia.org/wiki/Principal_factor
|
In quantum mechanics, a quantum Markov semigroup describes the dynamics in a Markovian open quantum system. The axiomatic definition of the prototype of quantum Markov semigroups was first introduced by A. M. Kossakowski[1] in 1972, and then developed by V. Gorini, A. M. Kossakowski, E. C. G. Sudarshan[2] and Göran Lindblad[3] in 1976.[4]
An ideal quantum system is not realistic, because in practice it cannot be completely isolated: it is influenced by the coupling to an environment, which typically has a large number of degrees of freedom (for example, an atom interacting with the surrounding radiation field). A complete microscopic description of the degrees of freedom of the environment is typically too complicated. Hence, one looks for simpler descriptions of the dynamics of the open system. In principle, one should investigate the unitary dynamics of the total system, i.e. the system and the environment, to obtain information about the reduced system of interest by averaging the appropriate observables over the degrees of freedom of the environment. To model the dissipative effects due to the interaction with the environment, the Schrödinger equation is replaced by a suitable master equation, such as a Lindblad equation or a stochastic Schrödinger equation in which the infinite degrees of freedom of the environment are "synthesized" as a few quantum noises. Mathematically, time evolution in a Markovian open quantum system is no longer described by means of one-parameter groups of unitary maps; instead, one needs to introduce quantum Markov semigroups.
In general, quantum dynamical semigroups can be defined on von Neumann algebras, so the dimensionality of the system can be infinite. Let 𝒜 be a von Neumann algebra acting on a Hilbert space ℋ. A quantum dynamical semigroup on 𝒜 is a collection of bounded operators on 𝒜, denoted 𝒯 := (𝒯_t)_{t ≥ 0}, with the following properties:[5]
Under the condition of complete positivity, the operators 𝒯_t are σ-weakly continuous if and only if the 𝒯_t are normal.[5] Recall that, letting 𝒜₊ denote the convex cone of positive elements in 𝒜, a positive operator T : 𝒜 → 𝒜 is said to be normal if, for every increasing net (x_α)_α in 𝒜₊ with least upper bound x in 𝒜₊, one has
lim_α ⟨u, T(x_α) u⟩ = ⟨u, T(x) u⟩
for each u in a norm-dense linear sub-manifold of ℋ.
A quantum dynamical semigroup 𝒯 is said to be identity-preserving (or conservative, or Markovian) if
𝒯_t(1) = 1 for every t ≥ 0,
where 1 ∈ 𝒜 is the identity element. For simplicity, such a 𝒯 is called a quantum Markov semigroup. Notice that the identity-preserving property and positivity of 𝒯_t imply ‖𝒯_t‖ = 1 for all t ≥ 0, so that 𝒯 is a contraction semigroup.[6]
Condition (1) plays an important role not only in the proof of uniqueness and unitarity of the solution of a Hudson–Parthasarathy quantum stochastic differential equation, but also in deducing regularity conditions for paths of classical Markov processes in view of operator theory.[7]
The infinitesimal generator of a quantum dynamical semigroup 𝒯 is the operator ℒ with domain Dom(ℒ), where Dom(ℒ) consists of those a ∈ 𝒜 for which the limit
b = lim_{t → 0⁺} (𝒯_t(a) − a) / t
exists, and ℒ(a) := b.
If the quantum Markov semigroup 𝒯 is, in addition, uniformly continuous, which means lim_{t → 0⁺} ‖𝒯_t − 𝒯_0‖ = 0, then ℒ is a bounded operator defined on all of 𝒜 and 𝒯_t = e^{tℒ}.
Under such an assumption, the infinitesimal generator ℒ has the characterization[3]
where a ∈ 𝒜, V_j ∈ B(ℋ), Σ_j V_j† V_j ∈ B(ℋ), and H ∈ B(ℋ) is self-adjoint. Moreover, above, [·, ·] denotes the commutator, and {·, ·} the anti-commutator.
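The characterization referred to is the Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) form of the generator. A standard Heisenberg-picture statement consistent with the operators named in the text is the following (sign and ordering conventions vary between references):

```latex
\mathcal{L}(a) = i[H, a] + \sum_j \left( V_j^{\dagger}\, a\, V_j - \tfrac{1}{2}\{ V_j^{\dagger} V_j,\, a \} \right)
```

Here the commutator term generates the Hamiltonian (unitary) part of the evolution, while the sum over j accounts for dissipation through the operators V_j.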
|
https://en.wikipedia.org/wiki/Quantum_dynamical_semigroup
|
In abstract algebra, a monoid ring is a ring constructed from a ring and a monoid, just as a group ring is constructed from a ring and a group.
Let R be a ring and let G be a monoid. The monoid ring or monoid algebra of G over R, denoted R[G] or RG, is the set of formal sums Σ_{g ∈ G} r_g g,
where r_g ∈ R for each g ∈ G and r_g = 0 for all but finitely many g, equipped with coefficient-wise addition, and the multiplication in which the elements of R commute with the elements of G. More formally, R[G] is the free R-module on the set G, endowed with R-linear multiplication defined on the base elements by g · h := gh, where the left-hand side is understood as the multiplication in R[G] and the right-hand side is understood in G.
Alternatively, one can identify the element g ∈ R[G] with the function e_g that maps g to 1 and every other element of G to 0. This way, R[G] is identified with the set of functions φ : G → R such that {g : φ(g) ≠ 0} is finite, equipped with addition of functions, and with multiplication defined by the convolution
(φ ψ)(g) = Σ_{kl = g} φ(k) ψ(l).
If G is a group, then R[G] is also called the group ring of G over R.
Given R and G, there is a ring homomorphism α : R → R[G] sending each r to r1 (where 1 is the identity element of G), and a monoid homomorphism β : G → R[G] (where the latter is viewed as a monoid under multiplication) sending each g to 1g (where 1 is the multiplicative identity of R). We have that α(r) commutes with β(g) for all r in R and g in G.
The universal property of the monoid ring states that, given a ring S, a ring homomorphism α′ : R → S, and a monoid homomorphism β′ : G → S to the multiplicative monoid of S such that α′(r) commutes with β′(g) for all r in R and g in G, there is a unique ring homomorphism γ : R[G] → S such that composing α and β with γ produces α′ and β′.
The augmentation is the ring homomorphism η : R[G] → R defined by
η(Σ_{g ∈ G} r_g g) = Σ_{g ∈ G} r_g.
The kernel of η is called the augmentation ideal. It is a free R-module with basis consisting of 1 − g for all g in G not equal to 1.
Given a ring R and the (additive) monoid of natural numbers N (or {x^n} viewed multiplicatively), we obtain the ring R[{x^n}] =: R[x] of polynomials over R. The monoid N^n (with the addition) gives the polynomial ring with n variables: R[N^n] =: R[X1, ..., Xn].
If G is a semigroup, the same construction yields a semigroup ring R[G].
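The functional description of R[G] translates directly into code. Below is a minimal sketch with G = (N, +) and R = Z, so the convolution product reproduces ordinary polynomial multiplication in Z[x]; the dictionary encoding (exponent → coefficient, zeros omitted) is an implementation choice, not part of the definition.

```python
from collections import defaultdict

# An element of Z[N], i.e. Z[x], is a finitely supported function N -> Z,
# stored as {exponent: coefficient} with zero coefficients omitted.

def add(phi, psi):
    out = defaultdict(int)
    for g, r in list(phi.items()) + list(psi.items()):
        out[g] += r
    return {g: r for g, r in out.items() if r}

def mul(phi, psi):
    # convolution: (phi * psi)(g) = sum of phi(k) * psi(l) over k + l = g
    out = defaultdict(int)
    for k, a in phi.items():
        for l, b in psi.items():
            out[k + l] += a * b
    return {g: r for g, r in out.items() if r}

one_plus_x = {0: 1, 1: 1}   # the polynomial 1 + x
```

For example, mul(one_plus_x, one_plus_x) returns {0: 1, 1: 2, 2: 1}, i.e. (1 + x)² = 1 + 2x + x².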
|
https://en.wikipedia.org/wiki/Semigroup_ring
|
In mathematics, the term weak inverse is used with several meanings.
In the theory of semigroups, a weak inverse of an element x in a semigroup (S, •) is an element y such that y • x • y = y. If every element has a weak inverse, the semigroup is called an E-inversive or E-dense semigroup. An E-inversive semigroup may equivalently be defined by requiring that for every element x ∈ S, there exists y ∈ S such that x • y and y • x are idempotents.[1]
An element x of S for which there is an element y of S such that x • y • x = x is called regular. A regular semigroup is a semigroup in which every element is regular. This is a stronger notion than weak inverse: every regular semigroup is E-inversive, but not vice versa.[1]
If every element x in S has a unique inverse y in S, in the sense that x • y • x = x and y • x • y = y, then S is called an inverse semigroup.
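For a finite semigroup given by its multiplication, these notions can be checked by brute force. The sketch below uses the two-element semigroup ({0, 1}, ·) under ordinary multiplication as an assumed example; the helper names are illustrative.

```python
S = (0, 1)

def op(x, y):
    return x * y            # ordinary multiplication; associative, so S is a semigroup

def is_weak_inverse(y, x):
    # y is a weak inverse of x  iff  y * x * y == y
    return op(op(y, x), y) == y

def is_regular(x):
    # x is regular iff x * y * x == x for some y in S
    return any(op(op(x, y), x) == x for y in S)

# Every element has a weak inverse, so this semigroup is E-inversive;
# here it happens to be regular as well (the converse fails in general).
assert all(any(is_weak_inverse(y, x) for y in S) for x in S)
assert all(is_regular(x) for x in S)
```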
In category theory, a weak inverse of an object A in a monoidal category C with monoidal product ⊗ and unit object I is an object B such that both A ⊗ B and B ⊗ A are isomorphic to the unit object I of C. A monoidal category in which every morphism is invertible and every object has a weak inverse is called a 2-group.
|
https://en.wikipedia.org/wiki/Weak_inverse
|
A Cartesian monoid is a monoid with the additional structure of pairing and projection operators. It was first formulated by Dana Scott and Joachim Lambek independently.[1]
A Cartesian monoid is a structure with signature ⟨∗, e, (−, −), L, R⟩ where ∗ and (−, −) are binary operations, and L, R and e are constants, satisfying the following axioms for all x, y, z in its universe:
The interpretation is that L and R are left and right projection functions, respectively, for the pairing function (−, −).
|
https://en.wikipedia.org/wiki/Cartesian_monoid
|
In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility.
Instead of working directly with a semigroup S, it is convenient to define Green's relations over the monoid S¹. (S¹ is "S with an identity adjoined if necessary"; if S is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by a semigroup element do indeed contain that element. For an element a of S, the relevant ideals are the principal left ideal S¹a, the principal right ideal aS¹, and the principal two-sided ideal S¹aS¹.
For elements a and b of S, Green's relations L, R and J are defined by
a L b if and only if S¹a = S¹b;  a R b if and only if aS¹ = bS¹;  a J b if and only if S¹aS¹ = S¹bS¹.
That is, a and b are L-related if they generate the same left ideal; R-related if they generate the same right ideal; and J-related if they generate the same two-sided ideal. These are equivalence relations on S, so each of them yields a partition of S into equivalence classes. The L-class of a is denoted L_a (and similarly for the other relations). The L-classes and R-classes can be equivalently understood as the strongly connected components of the left and right Cayley graphs of S¹.[1] Further, the L, R, and J relations define three preorders ≤_L, ≤_R, and ≤_J, where a ≤_J b holds for two elements a and b of S if the ideal generated by a is included in that of b, i.e., S¹aS¹ ⊆ S¹bS¹, and ≤_L and ≤_R are defined analogously.[2]
Green used the lowercase blackletter 𝔩, 𝔯 and 𝔣 for these relations, and wrote a ≡ b (𝔩) for a L b (and likewise for R and J).[3] Mathematicians today tend to use script letters such as ℛ instead, and replace Green's modular-arithmetic-style notation with the infix style used here. Ordinary letters are used for the equivalence classes.
The L and R relations are left-right dual to one another; theorems concerning one can be translated into similar statements about the other. For example, L is right-compatible: if a L b and c is another element of S, then ac L bc. Dually, R is left-compatible: if a R b, then ca R cb.
If S is commutative, then L, R and J coincide.
The remaining relations are derived from L and R. Their intersection is H:
a H b if and only if a L b and a R b.
This is also an equivalence relation on S. The class H_a is the intersection of L_a and R_a. More generally, the intersection of any L-class with any R-class is either an H-class or the empty set.
Green's Theorem states that for any H-class H of a semigroup S, either (i) H² ∩ H = ∅ or (ii) H² ⊆ H and H is a subgroup of S. An important corollary is that the equivalence class H_e, where e is an idempotent, is a subgroup of S (its identity is e, and all elements have inverses), and indeed is the largest subgroup of S containing e. No H-class can contain more than one idempotent; thus H is idempotent separating. In a monoid M, the class H_1 is traditionally called the group of units.[4] (Beware that unit does not mean identity in this context, i.e. in general there are non-identity elements in H_1. The "unit" terminology comes from ring theory.) For example, in the transformation monoid on n elements, T_n, the group of units is the symmetric group S_n.
Finally, D is defined: a D b if and only if there exists a c in S such that a L c and c R b. In the language of lattices, D is the join of L and R. (The join for equivalence relations is normally more difficult to define, but is simplified in this case by the fact that a L c and c R b for some c if and only if a R d and d L b for some d.)
As D is the smallest equivalence relation containing both L and R, we know that a D b implies a J b, so J contains D. In a finite semigroup, D and J are the same;[5] this is also the case in a rational monoid[6] or in an epigroup.[7]
There is also a formulation of D in terms of equivalence classes, derived directly from the above definition:[8] a D b if and only if L_a ∩ R_b is non-empty (equivalently, if and only if R_a ∩ L_b is non-empty).
Consequently, the D-classes of a semigroup can be seen as unions of L-classes, as unions of R-classes, or as unions of H-classes. Clifford and Preston (1961) suggest thinking of this situation in terms of an "egg-box":[9]
Each row of eggs represents an R-class, and each column an L-class; the eggs themselves are the H-classes. For a group, there is only one egg, because all five of Green's relations coincide and make all group elements equivalent. The opposite case, found for example in the bicyclic semigroup, is where each element is in an H-class of its own. The egg-box for this semigroup would contain infinitely many eggs, but all eggs are in the same box because there is only one D-class. (A semigroup for which all elements are D-related is called bisimple.)
It can be shown that within a D-class, all H-classes are the same size. For example, the transformation semigroup T4 contains four D-classes, within which the H-classes have 1, 2, 6, and 24 elements respectively.
Recent advances in the combinatorics of semigroups have used Green's relations to help enumerate semigroups with certain properties. A typical result (Satoh, Yama, and Tokizawa 1994) shows that there are exactly 1,843,120,128 non-equivalent semigroups of order 8, including 221,805 that are commutative; their work is based on a systematic exploration of possible D-classes. (By contrast, there are only five groups of order 8.)
The full transformation semigroup T3 consists of all functions from the set {1, 2, 3} to itself; there are 27 of these. Write (a b c) for the function that sends 1 to a, 2 to b, and 3 to c. Since T3 contains the identity map, (1 2 3), there is no need to adjoin an identity.
The egg-box diagram for T3 has three D-classes. They are also J-classes, because these relations coincide for a finite semigroup.
In T3, two functions are R-related if and only if they have the same image. Such functions appear in the same row of the table above. Likewise, the functions f and g are L-related if and only if f(x) = f(y) exactly when g(x) = g(y),
for x and y in {1, 2, 3}; such functions are in the same column. Consequently, two functions are D-related if and only if their images are the same size.
The elements in bold are the idempotents. Any H-class containing one of these is a (maximal) subgroup. In particular, the third D-class is isomorphic to the symmetric group S3. There are also six subgroups of order 2, and three of order 1 (as well as subgroups of these subgroups). Six elements of T3 are not in any subgroup.
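The classifications above are easy to check by brute force. The following sketch (illustrative code, not part of the article) computes the egg-box data for T3 directly from the characterisations just given: R-related means same image, L-related means same kernel, D-related means same image size.

```python
from itertools import product

# A map f: {1,2,3} -> {1,2,3} is stored as the tuple (f(1), f(2), f(3)).
elements = list(product((1, 2, 3), repeat=3))            # all 27 maps

def image(f):
    return frozenset(f)

def kernel(f):
    # the partition of {1, 2, 3} induced by f, as a frozenset of blocks
    return frozenset(frozenset(i for i in (1, 2, 3) if f[i - 1] == v)
                     for v in set(f))

def classes(key):
    out = {}
    for f in elements:
        out.setdefault(key(f), []).append(f)
    return out

r_classes = classes(image)                      # rows of the egg-box
l_classes = classes(kernel)                     # columns of the egg-box
d_classes = classes(lambda f: len(image(f)))    # whole boxes (by rank)
h_classes = classes(lambda f: (image(f), kernel(f)))

# three D-classes, of sizes 3 (constant maps), 18 (rank 2), 6 (rank 3)
print(sorted(len(c) for c in d_classes.values()))   # [3, 6, 18]

# the H-class of the identity (1 2 3) is the symmetric group S3, of size 6
print(len(h_classes[(image((1, 2, 3)), kernel((1, 2, 3)))]))   # 6
```

Within each D-class, all H-classes indeed come out the same size (1, 2, and 6 respectively).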
There are essentially two ways of generalising an algebraic theory. One is to change its definitions so that it covers more or different objects; the other, more subtle way, is to find some desirable outcome of the theory and consider alternative ways of reaching that conclusion.
Following the first route, analogous versions of Green's relations have been defined for semirings (Grillet 1970) and rings (Petro 2002). Some, but not all, of the properties associated with the relations in semigroups carry over to these cases. Staying within the world of semigroups, Green's relations can be extended to cover relative ideals, which are subsets that are only ideals with respect to a subsemigroup (Wallace 1963).
For the second kind of generalisation, researchers have concentrated on properties of bijections between L- and R-classes. If x R y, then it is always possible to find bijections between Lx and Ly that are R-class-preserving. (That is, if two elements of an L-class are in the same R-class, then their images under a bijection will still be in the same R-class.) The dual statement for x L y also holds. These bijections are right and left translations, restricted to the appropriate equivalence classes. The question that arises is: how else could there be such bijections?
Suppose that Λ and Ρ are semigroups of partial transformations of some semigroup S. Under certain conditions, it can be shown that if xΡ = yΡ, with xρ1 = y and yρ2 = x, then the restrictions
are mutually inverse bijections. (Conventionally, arguments are written on the right for Λ, and on the left for Ρ.) Then the L and R relations can be defined by
and D and H follow as usual. Generalisation of J is not part of this system, as it plays no part in the desired property.
We call (Λ, Ρ) a Green's pair. There are several choices of partial transformation semigroup that yield the original relations. One example would be to take Λ to be the semigroup of all left translations on S1, restricted to S, and Ρ the corresponding semigroup of restricted right translations.
These definitions are due to Clark and Carruth (1980). They subsume Wallace's work, as well as various other generalised definitions proposed in the mid-1970s. The full axioms are fairly lengthy to state; informally, the most important requirements are that both Λ and Ρ should contain the identity transformation, and that elements of Λ should commute with elements of Ρ.
|
https://en.wikipedia.org/wiki/Green%27s_relations
|
In mathematics and theoretical computer science, a Kleene algebra (/ˈkleɪni/ KLAY-nee; named after Stephen Cole Kleene) is a semiring that generalizes the theory of regular expressions: it consists of a set supporting union (addition), concatenation (multiplication), and Kleene star operations subject to certain algebraic laws. The addition is required to be idempotent (x + x = x for all x), and induces a partial order defined by x ≤ y if x + y = y. The Kleene star operation, denoted x*, must satisfy the laws of the closure operator.[1]
Kleene algebras have their origins in the theory of regular expressions and regular languages introduced by Kleene in 1951 and studied by others including V. N. Redko and John Horton Conway. The term was introduced by Dexter Kozen in the 1980s, who fully characterized their algebraic properties and, in 1994, gave a finite axiomatization.
Kleene algebras have a number of extensions that have been studied, including Kleene algebras with tests (KAT), introduced by Kozen in 1997.[2] Kleene algebras and Kleene algebras with tests have applications in formal verification of computer programs.[3] They have also been applied to specify and verify computer networks.[4]
Various inequivalent definitions of Kleene algebras and related structures have been given in the literature.[5]Here we will give the definition that seems to be the most common nowadays.
A Kleene algebra is a set A together with two binary operations + : A × A → A and · : A × A → A and one function * : A → A, written as a + b, ab and a* respectively, so that the following axioms are satisfied.
The above axioms define a semiring. We further require:
It is now possible to define a partial order ≤ on A by setting a ≤ b if and only if a + b = b (or equivalently: a ≤ b if and only if there exists an x in A such that a + x = b; with either definition, a ≤ b ≤ a implies a = b). With this order we can formulate the last four axioms about the operation *:
Intuitively, one should think of a + b as the "union" or the "least upper bound" of a and b, and of ab as some multiplication which is monotonic, in the sense that a ≤ b implies ax ≤ bx. The idea behind the star operator is a* = 1 + a + aa + aaa + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration".
Let Σ be a finite set (an "alphabet") and let A be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then A forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra.
Again let Σ be an alphabet. Let A be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of all languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of A again belong to A, and so does the Kleene star operation applied to any element of A. We obtain a Kleene algebra A with 0 being the empty set and 1 being the set that only contains the empty string.
Let M be a monoid with identity element e and let A be the set of all subsets of M. For two such subsets S and T, let S + T be the union of S and T and set ST = {st : s in S and t in T}. S* is defined as the submonoid of M generated by S, which can be described as {e} ∪ S ∪ SS ∪ SSS ∪ ... Then A forms a Kleene algebra with 0 being the empty set and 1 being {e}. An analogous construction can be performed for any small category.
The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces V and W, define V + W to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. Define V · W = span {v · w | v ∈ V, w ∈ W}, the linear span of the product of vectors from V and W respectively. Define 1 = span {I}, the span of the unit of the algebra. The closure of V is the direct sum of all powers of V:
V* = ⨁_{i=0}^{∞} V^i
Suppose M is a set and A is the set of all binary relations on M. Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra.
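The algebra of binary relations is concrete enough to sketch in a few lines. Below is an illustrative implementation (not from the article): relations on a finite set M are sets of pairs, + is union, · is relational composition, 0 is the empty relation, 1 is the identity relation, and * iterates composition until the reflexive transitive closure stabilises.

```python
# Kleene algebra of binary relations on a finite set M (a sketch).

def compose(r, s):
    # relational composition r . s
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def star(r, m):
    # reflexive transitive closure: 1 + r + rr + ... over ground set m
    result = {(x, x) for x in m}          # start from the identity (1)
    while True:
        bigger = result | compose(result, r)
        if bigger == result:
            return result
        result = bigger

M = {1, 2, 3}
R = {(1, 2), (2, 3)}
print(sorted(star(R, M)))
# [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```

The loop terminates because the closure can only grow inside the finite set M × M.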
Every Boolean algebra with operations ∨ and ∧ turns into a Kleene algebra if we use ∨ for +, ∧ for · and set a* = 1 for all a.
A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm, computing the shortest-path length between every two vertices of a weighted directed graph, by Kleene's algorithm, which computes a regular expression for every two states of a deterministic finite automaton.
Using the extended real number line, take a + b to be the minimum of a and b, and ab to be the ordinary sum of a and b (with the sum of +∞ and −∞ being defined as +∞). a* is defined to be the real number zero for nonnegative a and −∞ for negative a. This is a Kleene algebra with zero element +∞ and one element the real number zero.
A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight.
For any two graph nodes (automaton states), the regular expression computed by Kleene's algorithm evaluates, in this particular Kleene algebra, to the shortest-path length between the nodes.[7]
Zero is the smallest element: 0 ≤ a for all a in A.
The sum a + b is the least upper bound of a and b: we have a ≤ a + b and b ≤ a + b, and if x is an element of A with a ≤ x and b ≤ x, then a + b ≤ x. Similarly, a1 + ... + an is the least upper bound of the elements a1, ..., an.
Multiplication and addition are monotonic: if a ≤ b, then
for all x in A.
Regarding the star operation, we have
If A is a Kleene algebra and n is a natural number, then one can consider the set Mn(A) consisting of all n-by-n matrices with entries in A.
Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that Mn(A) becomes a Kleene algebra.
Kleene introduced regular expressions and gave some of their algebraic laws.[9][10] Although he did not define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions.[11] Redko proved that no finite set of equational axioms can characterize the algebra of regular languages.[12] Salomaa gave complete axiomatizations of this algebra, which however depend on problematic inference rules.[13] The problem of providing a complete set of axioms that would allow derivation of all equations among regular expressions was intensively studied by John Horton Conway under the name of regular algebras;[14] however, the bulk of his treatment was infinitary.
In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages.[15] In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering a ≤ b as an abbreviation for a + b = b), and is equationally complete for the algebra of regular languages: two regular expressions a and b denote the same language if and only if a = b follows from the above axioms.[16]
Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation a* = aa* + 1 = a*a + 1. This quasi-inverse is not necessarily unique.[17][18] In a Kleene algebra, a* is the least solution to the fixpoint equations X = aX + 1 and X = Xa + 1.[18]
Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem.[18]
|
https://en.wikipedia.org/wiki/Kleene_algebra
|
The star height problem in formal language theory is the question whether all regular languages can be expressed using regular expressions of limited star height, i.e. with a limited nesting depth of Kleene stars. Specifically, is a nesting depth of one always sufficient? If not, is there an algorithm to determine how many are required? The problem was first introduced by Eggan in 1963.[1]
The first question was answered in the negative when, in 1963, Eggan gave examples of regular languages of star height n for every n. Here, the star height h(L) of a regular language L is defined as the minimum star height among all regular expressions representing L. The first few languages found by Eggan are described in the following, by means of giving a regular expression for each language:
The construction principle for these expressions is that expression e_{n+1} is obtained by concatenating two copies of e_n, appropriately renaming the letters of the second copy using fresh alphabet symbols, concatenating the result with another fresh alphabet symbol, and then surrounding the resulting expression with a Kleene star. The remaining, more difficult part is to prove that for e_n there is no equivalent regular expression of star height less than n; a proof is given in Eggan (1963).
However, Eggan's examples use a large alphabet, of size 2^n − 1 for the language with star height n. He thus asked whether we can also find examples over binary alphabets. This was proved to be true shortly afterwards by Dejean and Schützenberger in 1966.[2] Their examples can be described by an inductively defined family of regular expressions over the binary alphabet {a, b} as follows (cf. Salomaa (1981)):
Again, a rigorous proof is needed for the fact that e_n does not admit an equivalent regular expression of lower star height. Proofs are given by Dejean & Schützenberger (1966) and by Salomaa (1981).
In contrast, the second question turned out to be much more difficult, and it became a famous open problem in formal language theory for over two decades.[3] For years, there was only little progress. The pure-group languages were the first interesting family of regular languages for which the star height problem was proved to be decidable.[4] But the general problem remained open for more than 25 years until it was settled by Hashiguchi, who in 1988 published an algorithm to determine the star height of any regular language.[5] The algorithm was not at all practical, being of non-elementary complexity. To illustrate the immense resource consumption of that algorithm, Lombardy & Sakarovitch (2002) give some actual numbers:
[The procedure described by Hashiguchi] leads to computations that are by far impossible, even for very small examples. For instance, if L is accepted by a 4-state automaton of loop complexity 3 (and with a small 10-element transition monoid), then a very low minorant of the number of languages to be tested with L for equality is: (10^{10^{10}})^{(10^{10^{10}})^{(10^{10^{10}})}}.
Notice that the number 10^{10^{10}} alone has 10 billion zeros when written down in decimal notation, and is already by far larger than the number of atoms in the observable universe.
A much more efficient algorithm than Hashiguchi's procedure was devised by Kirsten in 2005.[6] This algorithm runs, for a given nondeterministic finite automaton as input, within double-exponential space. Yet the resource requirements of this algorithm still greatly exceed the margins of what is considered practically feasible.
This algorithm has been optimized and generalized to trees by Colcombet and Löding in 2008,[7]as part of the theory of regular cost functions.
It was implemented in 2017 in the tool suite Stamina.[8]
|
https://en.wikipedia.org/wiki/Star_height_problem
|
In Indian mathematics, a Vedic square is a variation on a typical 9 × 9 multiplication table where the entry in each cell is the digital root of the product of the column and row headings, i.e. the remainder when the product of the row and column headings is divided by 9 (with remainder 0 represented by 9). Numerous geometric patterns and symmetries can be observed in a Vedic square, some of which can be found in traditional Islamic art.
The Vedic square can be viewed as the multiplication table of the monoid ((Z/9Z)^×, {1, ∘}), where Z/9Z is the set of positive integers partitioned by the residue classes modulo nine (the operator ∘ refers to the abstract "multiplication" between the elements of this monoid).
If a, b are elements of ((Z/9Z)^×, {1, ∘}), then a ∘ b can be defined as (a × b) mod 9, where the element 9 is representative of the residue class of 0 rather than the traditional choice of 0.
This does not form a group because not every non-zero element has a corresponding inverse element; for example 6 ∘ 3 = 9, but there is no a ∈ {1, ..., 9} such that 9 ∘ a = 6.
The subset {1, 2, 4, 5, 7, 8} forms a cyclic group with 2 as one choice of generator; this is the group of multiplicative units in the ring Z/9Z. Every column and row includes all six numbers, so this subset forms a Latin square.
A Vedic cube is defined as the layout of each digital root in a three-dimensional multiplication table.[2]
Vedic squares with a higher radix (or number base) can be calculated to analyse the symmetric patterns that arise, using the calculation above with (a × b) mod (base − 1). The images in this section are color-coded so that the digital root of 1 is dark and the digital root of (base − 1) is light.
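The definition above is a one-liner in code. The following sketch (illustrative, not from the article) generates the classical 9 × 9 square and its base-r generalisation, and checks the Latin-square property of the unit subset {1, 2, 4, 5, 7, 8}:

```python
# Vedic square: entry (a, b) is the digital root of a*b, i.e.
# (a*b) mod (base - 1), with remainder 0 represented by base - 1.

def vedic_square(base=10):
    m = base - 1
    def droot(x):
        r = x % m
        return m if r == 0 else r
    return [[droot(a * b) for b in range(1, m + 1)] for a in range(1, m + 1)]

square = vedic_square()              # the classical 9 x 9 case
print(square[0])                     # row for a = 1: [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(square[5][2])                  # 6 * 3 -> 18 -> digital root 9

# the units {1,2,4,5,7,8}: every row of the sub-table is a permutation,
# so this subset forms a Latin square
units = {1, 2, 4, 5, 7, 8}
sub = [[square[a - 1][b - 1] for b in sorted(units)] for a in sorted(units)]
assert all(set(row) == units for row in sub)
```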
|
https://en.wikipedia.org/wiki/Vedic_square
|
In mathematics, biquandles and biracks are sets with binary operations that generalize quandles and racks. Biquandles take, in the theory of virtual knots, the place that quandles occupy in the theory of classical knots. Biracks and racks have the same relation, while a biquandle is a birack which satisfies some additional conditions.
Biquandles and biracks have two binary operations on a set X, written a^b and a_b. These satisfy the following three axioms:
1. (a^b)^(c_b) = (a^c)^(b^c)
2. (a_b)_(c_b) = (a_c)_(b^c)
3. (a_b)^(c_b) = (a^c)_(b^c)
These identities appeared in 1992 in reference [FRS] where the object was called a species.
The superscript and subscript notation is useful here because it dispenses with the need for brackets. For example,
if we write a * b for a_b and a ** b for a^b, then the
three axioms above become
1. (a ** b) ** (c * b) = (a ** c) ** (b ** c)
2. (a * b) * (c * b) = (a * c) * (b ** c)
3. (a * b) ** (c * b) = (a ** c) * (b ** c)
If in addition the two operations are invertible, that is, given a, b in the set X there are unique x, y in the set X such that x^b = a and y_b = a, then the set X together with the two operations defines a birack.
For example, if X, with the operation a^b, is a rack, then it is a birack if we define the other operation to be the identity, a_b = a.
For a birack, the function S : X² → X² can be defined by
Then
1. S is a bijection
2. S1S2S1 = S2S1S2
In the second condition, S1 and S2 are defined by S1(a, b, c) = (S(a, b), c) and S2(a, b, c) = (a, S(b, c)). This condition is sometimes known as the set-theoretic Yang–Baxter equation.
To see that 1. is true, note that S′ defined by
is the inverse to S.
To see that 2. is true, let us follow the progress of the triple (c, b_c, a_{bc^b}) under S1S2S1. So
On the other hand, (c, b_c, a_{bc^b}) = (c, b_c, a_{cb_c}). Its progress under S2S1S2 is
Any S satisfying 1. and 2. is said to be a switch (a precursor of biquandles and biracks).
Examples of switches are the identity, the twist T(a, b) = (b, a), and S(a, b) = (b, a^b), where a^b is the operation of a rack.
A switch will define a birack if the operations are invertible. Note that the identity switch does not do this.
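On a finite set, conditions 1. and 2. can be verified by brute force. The sketch below (illustrative, not from the article) checks that the twist T(a, b) = (b, a) is a switch on a small set:

```python
from itertools import product

X = [0, 1, 2]

def T(a, b):
    # the twist
    return (b, a)

def S1(t):
    a, b, c = t
    x, y = T(a, b)
    return (x, y, c)

def S2(t):
    a, b, c = t
    x, y = T(b, c)
    return (a, x, y)

# 1. T is a bijection on X x X
pairs = list(product(X, repeat=2))
assert len({T(a, b) for a, b in pairs}) == len(pairs)

# 2. the set-theoretic Yang-Baxter equation S1 S2 S1 = S2 S1 S2
for t in product(X, repeat=3):
    assert S1(S2(S1(t))) == S2(S1(S2(t)))
print("twist is a switch")
```

The same brute-force check works for any candidate S on a small set, e.g. one built from a finite rack.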
A biquandle is a birack which satisfies some additional structure, as described by Nelson and Rische.[1]The axioms of a biquandle are "minimal" in the sense that they are the weakest restrictions that can be placed on the two binary operations while making the biquandle of a virtual knot invariant under Reidemeister moves.
|
https://en.wikipedia.org/wiki/Biracks_and_biquandles
|
In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his work on set theory) are tables of numbers that have certain properties of algebraic and combinatorial interest. They occur in the study of racks and quandles.
For any nonnegative integer n, the n-th Laver table is the 2^n × 2^n table whose entry in the cell at row p and column q (1 ≤ p, q ≤ 2^n) is defined as[1]
where ⋆_n is the unique binary operation on {1, ..., 2^n} that satisfies the following two equations for all p, q:
and
Note: Equation (1) uses the notation x mod 2^n to mean the unique member of {1, ..., 2^n} congruent to x modulo 2^n.
Equation (2) is known as the (left) self-distributive law, and a set endowed with any binary operation satisfying this law is called a shelf. Thus, the n-th Laver table is just the multiplication table for the unique shelf ({1, ..., 2^n}, ⋆_n) that satisfies Equation (1).
Examples: Following are the first five Laver tables,[2] i.e. the multiplication tables for the shelves ({1, ..., 2^n}, ⋆_n), n = 0, 1, 2, 3, 4:
There is no known closed-form expression to calculate the entries of a Laver table directly,[3] but Patrick Dehornoy provides a simple algorithm for filling out Laver tables.[4]
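One simple way to fill out a table (a sketch; not necessarily Dehornoy's own formulation) uses only the two defining equations. Since q = (q − 1) ⋆ 1 by Equation (1), the self-distributive law gives p ⋆ q = (p ⋆ (q − 1)) ⋆ (p + 1) for q > 1; row 2^n is forced to be the identity row, and every entry in row p < 2^n is greater than p, so computing rows from p = 2^n downwards always looks up rows that are already complete:

```python
# Fill out the n-th Laver table from the defining equations.

def laver_table(n):
    size = 2 ** n
    t = {}                                   # t[p, q] holds p * q
    for q in range(1, size + 1):
        t[size, q] = q                       # row 2^n is the identity row
    for p in range(size - 1, 0, -1):         # rows from the bottom up
        t[p, 1] = p + 1                      # Equation (1)
        for q in range(2, size + 1):
            # p * q = (p * (q-1)) * (p * 1), a case of Equation (2)
            t[p, q] = t[t[p, q - 1], p + 1]
    return [[t[p, q] for q in range(1, size + 1)] for p in range(1, size + 1)]

for row in laver_table(2):
    print(row)
# [2, 4, 2, 4]
# [3, 4, 3, 4]
# [4, 4, 4, 4]
# [1, 2, 3, 4]
```

The output reproduces the familiar periodic rows, e.g. row 1 of the n = 2 table has period 2.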
Looking at just the first row in the n-th Laver table, for n = 0, 1, 2, ..., the entries in each first row are seen to be periodic with a period that is always a power of two, as mentioned in Property 2 above. The first few periods are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... (sequence A098820 in the OEIS). This sequence is nondecreasing, and in 1995 Richard Laver proved, under the assumption that there exists a rank-into-rank (a large cardinal property), that it actually increases without bound. (It is not known whether this is also provable in ZFC without the additional large-cardinal axiom.)[5] In any case, it grows extremely slowly; Randall Dougherty showed that 32 cannot appear in this sequence (if it ever does) until n > A(9, A(8, A(8, 254))), where A denotes the Ackermann–Péter function.[6]
|
https://en.wikipedia.org/wiki/Laver_table
|
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as blocks, chosen such that the number of occurrences of each element satisfies certain conditions making the collection of blocks exhibit symmetry (balance). Block designs have applications in many areas, including experimental design, finite geometry, physical chemistry, software testing, cryptography, and algebraic geometry.
Without further specification, the term block design usually refers to a balanced incomplete block design (BIBD), specifically (and also synonymously) a 2-design, which has been the most intensely studied type historically due to its application in the design of experiments.[1][2] Its generalization is known as a t-design.
A design is said to be balanced (up to t) if all t-subsets of the original set occur in equally many (i.e., λ) blocks. When t is unspecified, it can usually be assumed to be 2, which means that each pair of elements is found in the same number of blocks and the design is pairwise balanced. For t = 1, each element occurs in the same number of blocks (the replication number, denoted r) and the design is said to be regular. A block design in which all the blocks have the same size (usually denoted k) is called uniform or proper. The designs discussed in this article are all uniform. Block designs that are not necessarily uniform have also been studied; for t = 2 they are known in the literature under the general name pairwise balanced designs (PBDs). Any uniform design balanced up to t is also balanced for all lower values of t (though with different λ-values), so for example a pairwise balanced (t = 2) design is also regular (t = 1). When the balancing requirement fails, a design may still be partially balanced if the t-subsets can be divided into n classes, each with its own (different) λ-value. For t = 2 these are known as PBIBD(n) designs, whose classes form an association scheme.
Designs are usually said (or assumed) to be incomplete, meaning that the collection of blocks is not all possible k-subsets, thus ruling out a trivial design.
Block designs may or may not have repeated blocks. Designs without repeated blocks are called simple,[3] in which case the "family" of blocks is a set rather than a multiset.
In statistics, the concept of a block design may be extended to non-binary block designs, in which blocks may contain multiple copies of an element (see blocking (statistics)). There, a design in which each element occurs the same total number of times is called equireplicate, which implies a regular design only when the design is also binary. The incidence matrix of a non-binary design lists the number of times each element is repeated in each block.
The simplest type of "balanced" design (t = 1) is known as a tactical configuration or 1-design. The corresponding incidence structure in geometry is known simply as a configuration; see Configuration (geometry). Such a design is uniform and regular: each block contains k elements and each element is contained in r blocks. The number of set elements v and the number of blocks b are related by bk = vr, which is the total number of element occurrences.
Every binary matrix with constant row and column sums is the incidence matrix of a regular uniform block design. Also, each configuration has a corresponding biregular bipartite graph known as its incidence or Levi graph.
Given a finite set X (of elements called points) and integers k, r, λ ≥ 1, we define a 2-design (or BIBD, standing for balanced incomplete block design) B to be a family of k-element subsets of X, called blocks, such that any x in X is contained in r blocks, and any pair of distinct points x and y in X is contained in λ blocks. Here, the condition that any x in X is contained in r blocks is redundant, as shown below.
Here v (the number of elements of X, called points), b (the number of blocks), k, r, and λ are the parameters of the design. (To avoid degenerate examples, it is also assumed that v > k, so that no block contains all the elements of the set. This is the meaning of "incomplete" in the name of these designs.) In a table:
The design is called a (v, k, λ)-design or a (v, b, r, k, λ)-design. The parameters are not all independent; v, k, and λ determine b and r, and not all combinations of v, k, and λ are possible. The two basic equations connecting these parameters are
obtained by counting the number of pairs (B, p) where B is a block and p is a point in that block, and
obtained by counting, for a fixed x, the triples (x, y, B) where x and y are distinct points and B is a block that contains them both. This equation for every x also proves that r is constant (independent of x) even without assuming it explicitly, thus proving that the condition that any x in X is contained in r blocks is redundant and r can be computed from the other parameters.
The resulting b and r must be integers, which imposes conditions on v, k, and λ. These conditions are not sufficient; for example, a (43,7,1)-design does not exist.[4]
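The two counting identities, bk = vr and λ(v − 1) = r(k − 1), make this admissibility check mechanical. The sketch below (illustrative, not from the article) computes r and b from v, k, λ and reports whether the integrality conditions hold:

```python
from fractions import Fraction

# Check the necessary (but not sufficient) integrality conditions for a
# 2-(v, k, lambda) design, solving lambda(v-1) = r(k-1) and bk = vr.

def bibd_parameters(v, k, lam):
    r = Fraction(lam * (v - 1), k - 1)
    b = Fraction(v * r, k)
    if r.denominator != 1 or b.denominator != 1:
        return None                       # parameters are inadmissible
    return (v, int(b), int(r), k, lam)

print(bibd_parameters(7, 3, 1))    # (7, 7, 3, 3, 1)  -- the Fano plane
print(bibd_parameters(8, 4, 3))    # (8, 14, 7, 4, 3)
print(bibd_parameters(43, 7, 1))   # (43, 43, 7, 7, 1): admissible, yet no
                                   # such design exists, so the conditions
                                   # are not sufficient
```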
The order of a 2-design is defined to be n = r − λ. The complement of a 2-design is obtained by replacing each block with its complement in the point set X. It is also a 2-design and has parameters v′ = v, b′ = b, r′ = b − r, k′ = v − k, λ′ = λ + b − 2r. A 2-design and its complement have the same order.
A fundamental theorem, Fisher's inequality, named after the statistician Ronald Fisher, is that b ≥ v in any 2-design.
A rather surprising and not very obvious (but very general) combinatorial result for these designs is that if the points are assigned any arbitrarily chosen set of equally or unequally spaced numerical values, no such assignment can make all block sums (that is, the sum of all points in a given block) constant.[5][6] For other designs, such as partially balanced incomplete block designs, this may however be possible; many such cases are discussed in [7]. It can also be observed trivially for magic squares or magic rectangles, which can be viewed as partially balanced incomplete block designs.
The unique (6,3,2)-design (v = 6, k = 3, λ = 2) has 10 blocks (b = 10) and each element is repeated 5 times (r = 5).[8] Using the symbols 0–5, the blocks are the following triples:
and the corresponding incidence matrix (a v × b binary matrix with constant row sum r and constant column sum k) is:
One of four nonisomorphic (8,4,3)-designs has 14 blocks with each element repeated 7 times. Using the symbols 0–7, the blocks are the following 4-tuples:[8]
The unique (7,3,1)-design is symmetric and has 7 blocks with each element repeated 3 times. Using the symbols 0–6, the blocks are the following triples:[8]
This design is associated with the Fano plane, with the elements and blocks of the design corresponding to the points and lines of the plane. Its corresponding incidence matrix can also be symmetric, if the labels or blocks are sorted the right way:
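The (7,3,1) parameters can be verified mechanically. The sketch below builds one standard labelling of the Fano plane by developing the difference set {0, 1, 3} modulo 7 (this labelling is an assumption for illustration and need not match the triples listed above, though it yields an isomorphic design), then checks r = 3, λ = 1, and the symmetric-design property that any two distinct blocks meet in λ points:

```python
from itertools import combinations

# Fano plane as the development of the difference set {0, 1, 3} mod 7.
blocks = [frozenset((d + i) % 7 for d in (0, 1, 3)) for i in range(7)]

# every element lies in r = 3 blocks
for x in range(7):
    assert sum(x in blk for blk in blocks) == 3

# every pair of points lies in exactly lambda = 1 block
for pair in combinations(range(7), 2):
    assert sum(set(pair) <= blk for blk in blocks) == 1

# symmetric design: b = v = 7, and any two distinct blocks meet in 1 point
assert len(blocks) == 7
for b1, b2 in combinations(blocks, 2):
    assert len(b1 & b2) == 1

print("valid symmetric 2-(7,3,1) design")
```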
The case of equality in Fisher's inequality, that is, a 2-design with an equal number of points and blocks, is called a symmetric design.[9] Symmetric designs have the smallest number of blocks among all the 2-designs with the same number of points.
In a symmetric design r = k holds as well as b = v, and, while it is generally not true in arbitrary 2-designs, in a symmetric design every two distinct blocks meet in λ points.[10] A theorem of Ryser provides the converse: if X is a v-element set and B is a v-element set of k-element subsets (the "blocks") such that any two distinct blocks have exactly λ points in common, then (X, B) is a symmetric block design.[11]
The parameters of a symmetric design satisfy
This imposes strong restrictions on v, so the number of points is far from arbitrary. The Bruck–Ryser–Chowla theorem gives necessary, but not sufficient, conditions for the existence of a symmetric design in terms of these parameters.
The following are important examples of symmetric 2-designs:
Finite projective planes are symmetric 2-designs with λ = 1 and order n > 1. For these designs the symmetric design equation becomes:
Since k = r, we can write the order of a projective plane as n = k − 1 and, from the displayed equation above, we obtain v = (n + 1)n + 1 = n² + n + 1 points in a projective plane of order n.
As a projective plane is a symmetric design, we have b = v, meaning that b = n² + n + 1 also. The number b is the number of lines of the projective plane. There can be no repeated lines since λ = 1, so a projective plane is a simple 2-design in which the number of lines and the number of points are always the same. For a projective plane, k is the number of points on each line and it is equal to n + 1. Similarly, r = n + 1 is the number of lines with which a given point is incident.
For n = 2 we get a projective plane of order 2, also called the Fano plane, with v = 4 + 2 + 1 = 7 points and 7 lines. In the Fano plane, each line has n + 1 = 3 points and each point belongs to n + 1 = 3 lines.
Projective planes are known to exist for all orders which are prime numbers or powers of primes. They form the only known infinite family (with respect to having a constant λ value) of symmetric block designs.[12]
A biplane or biplane geometry is a symmetric 2-design with λ = 2; that is, every set of two points is contained in two blocks ("lines"), while any two lines intersect in two points.[12] They are similar to finite projective planes, except that rather than two points determining one line (and two lines determining one point), two points determine two lines (respectively, points). A biplane of order n is one whose blocks have k = n + 2 points; it has v = 1 + (n + 2)(n + 1)/2 points (since r = k).
The 18 known examples[13]are listed below.
Biplanes of orders 5, 6, 8 and 10 do not exist, as shown by theBruck-Ryser-Chowla theorem.
An Hadamard matrix of size m is an m × m matrix H whose entries are ±1 such that HH⊤ = mI_m, where H⊤ is the transpose of H and I_m is the m × m identity matrix. An Hadamard matrix can be put into standardized form (that is, converted to an equivalent Hadamard matrix) where the first row and first column entries are all +1. If the size m > 2 then m must be a multiple of 4.
Given an Hadamard matrix of size 4a in standardized form, remove the first row and first column and convert every −1 to a 0. The resulting 0–1 matrix M is the incidence matrix of a symmetric 2-(4a − 1, 2a − 1, a − 1) design called an Hadamard 2-design.[19] It contains 4a − 1 blocks/points; each contains/is contained in 2a − 1 points/blocks. Each pair of points is contained in exactly a − 1 blocks.
This construction is reversible, and the incidence matrix of a symmetric 2-design with these parameters can be used to form an Hadamard matrix of size 4a.
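The construction just described can be carried out in a few lines. This sketch uses the Sylvester doubling construction (an assumption of the example; any standardized Hadamard matrix works) to build an Hadamard matrix of size 8, so 4a = 8 and a = 2, and extracts the incidence matrix of a 2-(7, 3, 1) Hadamard 2-design:

```python
# Build a standardized Hadamard matrix of size 8 by Sylvester doubling,
# then strip the first row and column and map -1 -> 0 to obtain the
# incidence matrix of a 2-(7, 3, 1) design.
H = [[1]]
for _ in range(3):                                  # sizes 2, 4, 8
    H = [row + row for row in H] + [row + [-x for x in row] for row in H]

# H is standardized: first row and first column are all +1.
assert all(H[0][j] == 1 for j in range(8)) and all(H[i][0] == 1 for i in range(8))

M = [[(x + 1) // 2 for x in row[1:]] for row in H[1:]]  # 7x7 incidence matrix

v, b = 7, 7                                    # 4a - 1 points and 4a - 1 blocks
k = sum(M[0])                                  # each block has 2a - 1 = 3 points
lam = sum(M[0][j] * M[1][j] for j in range(7)) # each pair lies in a - 1 = 1 block
print(k, lam)                                  # -> 3 1
```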
A resolvable 2-design is a BIBD whose blocks can be partitioned into sets (called parallel classes), each of which forms a partition of the point set of the BIBD. The set of parallel classes is called a resolution of the design.
If a 2-(v, k, λ) resolvable design has c parallel classes, then b ≥ v + c − 1.[20]
Consequently, a symmetric design cannot have a non-trivial (more than one parallel class) resolution.[21]
Archetypical resolvable 2-designs are the finite affine planes. A solution of the famous 15 schoolgirl problem is a resolution of a 2-(15,3,1) design.[22]
Given any positive integer t, a t-design B is a class of k-element subsets of X, called blocks, such that every point x in X appears in exactly r blocks, and every t-element subset T appears in exactly λ blocks. The numbers v (the number of elements of X), b (the number of blocks), k, r, λ, and t are the parameters of the design. The design may be called a t-(v, k, λ)-design. Again, the four numbers v, k, λ and t determine b and r, and the four numbers themselves cannot be chosen arbitrarily. The equations are
λ_i = λ (v − i choose t − i) / (k − i choose t − i) for i = 0, 1, ..., t,
where λ_i is the number of blocks that contain any i-element set of points and λ_t = λ.
Note that b = λ_0 = λ (v choose t) / (k choose t) and r = λ_1 = λ (v − 1 choose t − 1) / (k − 1 choose t − 1).
Theorem:[23] Any t-(v, k, λ)-design is also an s-(v, k, λ_s)-design for any s with 1 ≤ s ≤ t. (Note that the "lambda value" changes as above and depends on s.)
A consequence of this theorem is that every t-design with t ≥ 2 is also a 2-design.
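The identities above can be checked numerically; since every λ_s must be an integer, this also gives a quick necessary condition for admissible parameters. The example parameters below (the 3-(8, 4, 1) design) are chosen only for illustration:

```python
from math import comb

# lambda_s for a t-(v, k, lambda) design, per the identity above:
# lambda_s = lambda * C(v - s, t - s) / C(k - s, t - s).
def lambda_s(v, k, lam, t, s):
    num = lam * comb(v - s, t - s)
    den = comb(k - s, t - s)
    if num % den:
        raise ValueError("lambda_s not integral: necessary condition fails")
    return num // den

# Example: the 3-(8, 4, 1) design. As a 2-design it has lambda_2 = 3,
# and b = lambda_0 = 14, r = lambda_1 = 7.
print([lambda_s(8, 4, 1, 3, s) for s in (0, 1, 2, 3)])  # -> [14, 7, 3, 1]
```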
A t-(v, k, 1)-design is called a Steiner system.
The term block design by itself usually means a 2-design.
Let D = (X, B) be a t-(v, k, λ) design and p a point of X. The derived design D_p has point set X − {p} and as block set all the blocks of D which contain p, with p removed. It is a (t − 1)-(v − 1, k − 1, λ) design. Note that derived designs with respect to different points may not be isomorphic. A design E is called an extension of D if E has a point p such that E_p is isomorphic to D; we call D extendable if it has an extension.
Theorem:[24] If a t-(v, k, λ) design has an extension, then k + 1 divides b(v + 1).
The only extendable projective planes (symmetric 2-(n² + n + 1, n + 1, 1) designs) are those of orders 2 and 4.[25]
Every Hadamard 2-design is extendable (to an Hadamard 3-design).[26]
Theorem:[27] If D, a symmetric 2-(v, k, λ) design, is extendable, then one of the following holds:
1. D is an Hadamard 2-design;
2. v = (λ + 2)(λ² + 4λ + 2) and k = λ² + 3λ + 1;
3. v = 495, k = 39 and λ = 3.
Note that the projective plane of order two is an Hadamard 2-design; the projective plane of order four has parameters which fall in case 2; the only other known symmetric 2-designs with parameters in case 2 are the order 9 biplanes, but none of them are extendable; and there is no known symmetric 2-design with the parameters of case 3.[28]
A design with the parameters of the extension of an affine plane, i.e., a 3-(n² + 1, n + 1, 1) design, is called a finite inversive plane, or Möbius plane, of order n.
It is possible to give a geometric description of some inversive planes, indeed, of all known inversive planes. An ovoid in PG(3, q) is a set of q² + 1 points, no three collinear. It can be shown that every plane (which is a hyperplane since the geometric dimension is 3) of PG(3, q) meets an ovoid O in either 1 or q + 1 points. The plane sections of size q + 1 of O are the blocks of an inversive plane of order q. Any inversive plane arising this way is called egglike. All known inversive planes are egglike.
An example of an ovoid is the elliptic quadric, the set of zeros of the quadratic form
x_1x_2 + f(x_3, x_4),
where f is an irreducible quadratic form in two variables over GF(q). [For example, f(x, y) = x² + xy + y².]
If q is an odd power of 2, another type of ovoid is known – the Suzuki–Tits ovoid.
Theorem. Let q be a positive integer, at least 2. (a) If q is odd, then any ovoid is projectively equivalent to the elliptic quadric in a projective geometry PG(3, q); so q is a prime power and there is a unique egglike inversive plane of order q. (But it is unknown if non-egglike ones exist.) (b) If q is even, then q is a power of 2 and any inversive plane of order q is egglike (but there may be some unknown ovoids).
An n-class association scheme consists of a set X of size v together with a partition S of X × X into n + 1 binary relations, R_0, R_1, ..., R_n. A pair of elements in relation R_i are said to be ith-associates. Each element of X has n_i ith associates. Furthermore:
An association scheme is commutative if p_ij^k = p_ji^k for all i, j and k. Most authors assume this property.
A partially balanced incomplete block design with n associate classes (PBIBD(n)) is a block design based on a v-set X with b blocks each of size k and with each element appearing in r blocks, such that there is an association scheme with n classes defined on X where, if elements x and y are ith associates, 1 ≤ i ≤ n, then they are together in precisely λ_i blocks.
A PBIBD(n) determines an association scheme but the converse is false.[29]
Let A(3) be the following association scheme with three associate classes on the set X = {1, 2, 3, 4, 5, 6}. The (i, j) entry is s if elements i and j are in relation R_s.
The blocks of a PBIBD(3) based on A(3) are:
The parameters of this PBIBD(3) are: v = 6, b = 8, k = 3, r = 4 and λ_1 = λ_2 = 2 and λ_3 = 1. Also, for the association scheme we have n_0 = n_2 = 1 and n_1 = n_3 = 2.[30] The incidence matrix M is
and the concurrence matrix MM⊤ is
from which we can recover the λ and r values.
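Since the blocks of the PBIBD(3) above are given in a table not reproduced here, the following sketch illustrates the same computation on a small BIBD instead (the complete 2-(4, 2, 1) design, an assumption of this example): the concurrence matrix MM⊤ has r on the diagonal and λ off the diagonal:

```python
from itertools import combinations

points = [1, 2, 3, 4]
blocks = [set(b) for b in combinations(points, 2)]   # b = 6 blocks of size k = 2

# Incidence matrix M: rows indexed by points, columns by blocks.
M = [[int(p in blk) for blk in blocks] for p in points]

# Concurrence matrix C = M M^T: diagonal entries give r, off-diagonal lambda.
C = [[sum(M[i][t] * M[j][t] for t in range(len(blocks)))
      for j in range(4)] for i in range(4)]

r, lam = C[0][0], C[0][1]
print(r, lam)  # -> 3 1
```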
The parameters of a PBIBD(m) satisfy:[31]
A PBIBD(1) is a BIBD, and a PBIBD(2) in which λ_1 = λ_2 is a BIBD.[32]
PBIBD(2)s have been studied the most since they are the simplest and most useful of the PBIBDs.[33] They fall into six types[34] based on a classification of the then-known PBIBD(2)s by Bose & Shimamoto (1952):[35]
The mathematical subject of block designs originated in the statistical framework of design of experiments. These designs were especially useful in applications of the technique of analysis of variance (ANOVA). This remains a significant area for the use of block designs.
While the origins of the subject are grounded in biological applications (as is some of the existing terminology), the designs are used in many applications where systematic comparisons are being made, such as in software testing.
The incidence matrices of block designs provide a natural source of interesting block codes that are used as error correcting codes. The rows of their incidence matrices are also used as the symbols in a form of pulse-position modulation.[36]
Suppose that skin cancer researchers want to test three different sunscreens. They coat two different sunscreens on the upper sides of the hands of a test person. After UV irradiation they record the skin irritation in terms of sunburn. The number of treatments is 3 (sunscreens) and the block size is 2 (hands per person).
A corresponding BIBD can be generated by the R function design.bib of the R package agricolae and is specified in the following table:
The investigator chooses the parameters v = 3, k = 2 and λ = 1 for the block design, which are then inserted into the R function. Subsequently, the remaining parameters b and r are determined automatically.
Using the basic relations we calculate that we need b = 3 blocks, that is, 3 test people in order to obtain a balanced incomplete block design. Labeling the blocks A, B and C, to avoid confusion, we have the block design
A corresponding incidence matrix is specified in the following table:
Each treatment occurs in 2 blocks, so r = 2.
Just one block (C) contains the treatments 1 and 2 simultaneously, and the same applies to the pairs of treatments (1,3) and (2,3). Therefore, λ = 1.
It is impossible to use a complete design (all treatments in each block) in this example because there are 3 sunscreens to test, but only 2 hands on each person.
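The parameter arithmetic of this example can be reproduced in a few lines without R, using the basic relations λ(v − 1) = r(k − 1) and bk = vr (block labels are left out; any assignment of the three pairs to the three people works):

```python
from itertools import combinations

# Sunscreen example: v = 3 treatments, block size k = 2, lambda = 1.
v, k, lam = 3, 2, 1
r = lam * (v - 1) // (k - 1)     # r = 2: each sunscreen appears on 2 people
b = v * r // k                   # b = 3: three test people are needed
blocks = list(combinations(range(1, v + 1), 2))  # all 2-subsets of {1, 2, 3}
print(b, r, blocks)              # -> 3 2 [(1, 2), (1, 3), (2, 3)]
```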
https://en.wikipedia.org/wiki/Block_design
In mathematics, a Bose–Mesner algebra is a special set of matrices which arise from a combinatorial structure known as an association scheme, together with the usual set of rules for combining (forming the products of) those matrices, such that they form an associative algebra, or, more precisely, a unitary commutative algebra. Among these rules are:
Bose–Mesner algebras have applications in physics to spin models, and in statistics to the design of experiments. They are named for R. C. Bose and Dale Marsh Mesner.[1]
Let X be a set of v elements. Consider a partition of the 2-element subsets of X into n non-empty subsets, R_1, ..., R_n such that:
This structure is enhanced by adding all pairs of repeated elements of X and collecting them in a subset R_0. This enhancement permits the parameters i, j, and k to take on the value of zero, and lets some of x, y or z be equal.
A set with such an enhanced partition is called an association scheme.[2] One may view an association scheme as a partition of the edges of a complete graph (with vertex set X) into n classes, often thought of as color classes. In this representation, there is a loop at each vertex and all the loops receive the same 0th color.
The association scheme can also be represented algebraically. Consider the matrices D_i defined by:
Let 𝒜 be the vector space consisting of all matrices ∑_{i=0}^{n} a_i D_i, with the a_i complex.[3][4]
The definition of an association scheme is equivalent to saying that the D_i are v × v (0,1)-matrices which satisfy
The (x, y)-th entry of the left side of 4. is the number of two-colored paths of length two joining x and y (using "colors" i and j) in the graph. Note that the rows and columns of D_i contain v_i 1s:
From 1., these matrices are symmetric. From 2., D_0, ..., D_n are linearly independent, and the dimension of 𝒜 is n + 1. From 4., 𝒜 is closed under multiplication, and multiplication is always associative. This associative commutative algebra 𝒜 is called the Bose–Mesner algebra of the association scheme. Since the matrices in 𝒜 are symmetric and commute with each other, they can be simultaneously diagonalized. This means that there is a matrix S such that to each A ∈ 𝒜 there is a diagonal matrix Λ_A with S⁻¹AS = Λ_A. This means that 𝒜 is semi-simple and has a unique basis of primitive idempotents J_0, ..., J_n. These are complex n × n matrices satisfying
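The properties just listed can be checked concretely on a small scheme. The sketch below uses the 2-class association scheme of the 5-cycle (R_1 = adjacent, R_2 = at distance 2; this choice of scheme is an assumption of the example) and verifies symmetry, the partition property, and closure under multiplication:

```python
# The 2-class association scheme of the 5-cycle: D_0 = I, D_1 = adjacency,
# D_2 = distance-2 relation. Verify the Bose-Mesner axioms and one closure
# identity, D1 * D1 = 2*D0 + D2 (each vertex has degree 2; adjacent vertices
# share no common neighbor, distance-2 vertices share exactly one).
n = 5
dist = lambda x, y: min((x - y) % n, (y - x) % n)
D = [[[int(dist(x, y) == i) for y in range(n)] for x in range(n)]
     for i in range(3)]

def matmul(A, B):
    return [[sum(A[x][z] * B[z][y] for z in range(n)) for y in range(n)]
            for x in range(n)]

assert all(D[i][x][y] == D[i][y][x] for i in range(3)
           for x in range(n) for y in range(n))          # symmetric
assert all(sum(D[i][x][y] for i in range(3)) == 1
           for x in range(n) for y in range(n))          # partition of X x X
P = matmul(D[1], D[1])
expect = [[2 * D[0][x][y] + D[2][x][y] for y in range(n)] for x in range(n)]
assert P == expect                                       # closure under products
print("Bose-Mesner axioms hold for the 5-cycle scheme")
```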
The Bose–Mesner algebra has two distinguished bases: the basis consisting of the adjacency matrices D_i, and the basis consisting of the irreducible idempotent matrices J_k. By definition, there exist well-defined complex numbers such that
D_i = ∑_{k=0}^{n} p_i(k) J_k
and
J_k = (1/v) ∑_{i=0}^{n} q_k(i) D_i.
The p-numbers p_i(k), and the q-numbers q_k(i), play a prominent role in the theory.[5] They satisfy well-defined orthogonality relations. The p-numbers are the eigenvalues of the adjacency matrix D_i.
The eigenvalues p_i(k) and q_k(i) satisfy the orthogonality conditions:
Also
In matrix notation, these are
where Δ_v = diag{v_0, v_1, ..., v_n} and Δ_μ = diag{μ_0, μ_1, ..., μ_n}.
The eigenvalues of D_i D_ℓ are p_i(k) p_ℓ(k) with multiplicities μ_k. This implies that
which proves Equation (8) and Equation (11),
which gives Equations (9), (10) and (12). □
There is an analogy between extensions of association schemes and extensions of finite fields. The cases we are most interested in are those where the extended schemes are defined on the n-th Cartesian power X = 𝓕^n of a set 𝓕 on which a basic association scheme (𝓕, K) is defined. A first association scheme defined on X = 𝓕^n is called the n-th Kronecker power (𝓕, K)⊗n of (𝓕, K). Next the extension is defined on the same set X = 𝓕^n by gathering classes of (𝓕, K)⊗n. The Kronecker power corresponds to the polynomial ring F[X] first defined on a field F, while the extension scheme corresponds to the extension field obtained as a quotient. An example of such an extended scheme is the Hamming scheme.
Association schemes may be merged, but merging them leads to non-symmetric association schemes, whereas all usual codes are subgroups in symmetric Abelian schemes.[6][7][8]
https://en.wikipedia.org/wiki/Bose%E2%80%93Mesner_algebra
Combinatorial design theory is the part of combinatorial mathematics that deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. These concepts are not made precise so that a wide range of objects can be thought of as being under the same umbrella. At times this might involve the numerical sizes of set intersections as in block designs, while at other times it could involve the spatial arrangement of entries in an array as in sudoku grids.
Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography.[1]
Given a certain number n of people, is it possible to assign them to sets so that each person is in at least one set, each pair of people is in exactly one set together, every two sets have exactly one person in common, and no set contains everyone, all but one person, or exactly one person? The answer depends on n.
This has a solution only if n has the form q² + q + 1. It is less simple to prove that a solution exists if q is a prime power. It is conjectured that these are the only solutions. It has been further shown that if a solution exists for q congruent to 1 or 2 mod 4, then q is a sum of two square numbers. This last result, the Bruck–Ryser theorem, is proved by a combination of constructive methods based on finite fields and an application of quadratic forms.
When such a structure does exist, it is called a finite projective plane; thus showing how finite geometry and combinatorics intersect. When q = 2, the projective plane is called the Fano plane.
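The Bruck–Ryser test described above is easy to apply mechanically; the function below (names are illustrative) reports which small orders it rules out. Note that the test is only necessary: order 10 passes it, yet a plane of order 10 is known not to exist by other means.

```python
from math import isqrt

# Bruck-Ryser: for q = 1 or 2 (mod 4), a projective plane of order q can
# exist only if q is a sum of two squares.
def sum_of_two_squares(q):
    return any(isqrt(q - a * a) ** 2 == q - a * a for a in range(isqrt(q) + 1))

def bruck_ryser_allows(q):
    if q % 4 in (1, 2):
        return sum_of_two_squares(q)
    return True          # the theorem says nothing for q = 0 or 3 (mod 4)

print([q for q in range(2, 15) if not bruck_ryser_allows(q)])  # -> [6, 14]
```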
Combinatorial designs date to antiquity, with the Lo Shu Square being an early magic square. One of the earliest datable applications of combinatorial design is found in India in the book Brhat Samhita by Varahamihira, written around 587 AD, for the purpose of making perfumes using 4 substances selected from 16 different substances using a magic square.[2]
Combinatorial designs developed along with the general growth of combinatorics from the 18th century, for example with Latin squares in the 18th century and Steiner systems in the 19th century. Designs have also been popular in recreational mathematics, such as Kirkman's schoolgirl problem (1850), and in practical problems, such as the scheduling of round-robin tournaments (solution published 1880s). In the 20th century designs were applied to the design of experiments, notably Latin squares, finite geometry, and association schemes, yielding the field of algebraic statistics.
The classical core of the subject of combinatorial designs is built around balanced incomplete block designs (BIBDs), Hadamard matrices and Hadamard designs, symmetric BIBDs, Latin squares, resolvable BIBDs, difference sets, and pairwise balanced designs (PBDs).[3] Other combinatorial designs are related to or have been developed from the study of these fundamental ones.
The Handbook of Combinatorial Designs (Colbourn & Dinitz 2007) has, amongst others, 65 chapters, each devoted to a combinatorial design other than those given above. A partial listing is given below:
https://en.wikipedia.org/wiki/Combinatorial_design
In mathematics, the O'Nan–Scott theorem is one of the most influential theorems of permutation group theory; the classification of finite simple groups is what makes it so useful. Originally the theorem was about maximal subgroups of the symmetric group. It appeared as an appendix to a paper by Leonard Scott written for The Santa Cruz Conference on Finite Groups in 1979, with a footnote that Michael O'Nan had independently proved the same result.[1] Michael Aschbacher and Scott later gave a corrected version of the statement of the theorem.[2]
The theorem states that a maximal subgroup of the symmetric group Sym(Ω), where |Ω| = n, is one of the following:
In a survey paper written for the Bulletin of the London Mathematical Society, Peter J. Cameron seems to have been the first to recognize that the real power in the O'Nan–Scott theorem is in the ability to split the finite primitive groups into various types.[3] A complete version of the theorem with a self-contained proof was given by M. W. Liebeck, Cheryl Praeger and Jan Saxl.[4] The theorem is now a standard part of textbooks on permutation groups.[5]
The eight O'Nan–Scott types of finite primitive permutation groups are as follows:
HA (holomorph of an abelian group): These are the primitive groups which are subgroups of the affine general linear group AGL(d, p), for some prime p and positive integer d ≥ 1. For such a group G to be primitive, it must contain the subgroup of all translations, and the stabilizer G_0 in G of the zero vector must be an irreducible subgroup of GL(d, p). Primitive groups of type HA are characterized by having a unique minimal normal subgroup which is elementary abelian and acts regularly.
HS (holomorph of a simple group): Let T be a finite nonabelian simple group. Then M = T × T acts on Ω = T by t^(t_1, t_2) = t_1⁻¹ t t_2. Now M has two minimal normal subgroups N_1, N_2, each isomorphic to T, and each acts regularly on Ω, one by right multiplication and one by left multiplication. The action of M is primitive and if we take α = 1_T we have M_α = {(t, t) | t ∈ T}, which includes Inn(T) on Ω. In fact any automorphism of T will act on Ω. A primitive group of type HS is then any group G such that M ≅ T.Inn(T) ≤ G ≤ T.Aut(T). All such groups have N_1 and N_2 as minimal normal subgroups.
HC (holomorph of a compound group): Let T be a nonabelian simple group and let N_1 ≅ N_2 ≅ T^k for some integer k ≥ 2. Let Ω = T^k. Then M = N_1 × N_2 acts transitively on Ω via x^(n_1, n_2) = n_1⁻¹ x n_2 for all x ∈ Ω, n_1 ∈ N_1, n_2 ∈ N_2. As in the HS case, we have M ≅ T^k.Inn(T^k) and any automorphism of T^k also acts on Ω. A primitive group of type HC is a group G such that M ≤ G ≤ T^k.Aut(T^k) and G induces a subgroup of Aut(T^k) = Aut(T) wr S_k which acts transitively on the set of k simple direct factors of T^k. Any such G has two minimal normal subgroups, each isomorphic to T^k and regular.
A group of type HC preserves a product structure Ω = Δ^k where Δ = T and G ≤ H wr S_k where H is a primitive group of type HS on Δ.
TW (twisted wreath): Here G has a unique minimal normal subgroup N and N ≅ T^k for some finite nonabelian simple group T, and N acts regularly on Ω. Such groups can be constructed as twisted wreath products, hence the label TW. The conditions required to get primitivity imply that k ≥ 6, so the smallest degree of such a primitive group is 60⁶.
AS (almost simple): Here G is a group lying between T and Aut(T); that is, G is an almost simple group, hence the name. We are not told anything about what the action is, other than that it is primitive. Analysis of this type requires knowing about the possible primitive actions of almost simple groups, which is equivalent to knowing the maximal subgroups of almost simple groups.
SD (simple diagonal): Let N = T^k for some nonabelian simple group T and integer k ≥ 2, and let H = {(t, ..., t) | t ∈ T} ≤ N. Then N acts on the set Ω of right cosets of H in N by right multiplication. We can take {(t_1, ..., t_{k−1}, 1) | t_i ∈ T} to be a set of coset representatives for H in N, and so we can identify Ω with T^{k−1}. Now (s_1, ..., s_k) ∈ N takes the coset with representative (t_1, ..., t_{k−1}, 1) to the coset H(t_1s_1, ..., t_{k−1}s_{k−1}, s_k) = H(s_k⁻¹t_1s_1, ..., s_k⁻¹t_{k−1}s_{k−1}, 1). The group S_k induces automorphisms of N by permuting the entries and fixes the subgroup H, and so acts on the set Ω. Also, note that H acts on Ω by inducing Inn(T), and in fact any automorphism σ of T acts on Ω by taking the coset with representative (t_1, ..., t_{k−1}, 1) to the coset with representative (t_1^σ, ..., t_{k−1}^σ, 1). Thus we get a group W = N.(Out(T) × S_k) ≤ Sym(Ω). A primitive group of type SD is a group G ≤ W such that N ◅ G and G induces a primitive subgroup of S_k on the k simple direct factors of N.
CD (compound diagonal): Here Ω = Δ^k and G ≤ H wr S_k where H is a primitive group of type SD on Δ with minimal normal subgroup T^l. Moreover, N = T^{kl} is a minimal normal subgroup of G, and G induces a transitive subgroup of S_k.
PA (product action): Here Ω = Δ^k and G ≤ H wr S_k where H is a primitive almost simple group on Δ with socle T. Thus G has a product action on Ω. Moreover, N = T^k ◅ G and G induces a transitive subgroup of S_k in its action on the k simple direct factors of N.
Some authors use different divisions of the types. The most common is to include types HS and SD together as a "diagonal type" and types HC, CD and PA together as a "product action type".[6] Praeger later generalized the O'Nan–Scott theorem to quasiprimitive groups (groups with faithful actions such that the restriction to every nontrivial normal subgroup is transitive).[7]
https://en.wikipedia.org/wiki/O%27Nan%E2%80%93Scott_theorem
In computer science, an abstract state machine (ASM) is a state machine operating on states that are arbitrary data structures (structure in the sense of mathematical logic, that is, a nonempty set together with a number of functions (operations) and relations over the set).
The ASM Method is a practical and scientifically well-founded systems engineering method that bridges the gap between the two ends of system development:
The method builds upon three basic concepts:
In the original conception of ASMs, a single agent executes a program in a sequence of steps, possibly interacting with its environment. This notion was extended to capture distributed computations, in which multiple agents execute their programs concurrently.
Since ASMs model algorithms at arbitrary levels of abstraction, they can provide high-level, low-level and mid-level views of a hardware or software design. ASM specifications often consist of a series of ASM models, starting with an abstract ground model and proceeding to greater levels of detail in successive refinements or coarsenings.
Due to the algorithmic and mathematical nature of these three concepts, ASM models and their properties of interest can be analyzed using any rigorous form of verification (by reasoning) or validation (by experimentation, testing model executions).
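A single sequential ASM step can be sketched in a few lines: compute a finite update set from the current state, then fire all updates simultaneously. The state representation and the example rule below are illustrative assumptions, not a standard API:

```python
# Minimal sequential-ASM step: a state maps (function name, argument tuple)
# to a value; a program inspects the state and returns a finite update set,
# which is fired atomically.
def step(state, program):
    updates = program(state)          # finite update set from the old state
    new_state = dict(state)
    new_state.update(updates)         # all updates fire simultaneously
    return new_state

# Example rule: "if mode = running then ctr := ctr + 1". Both reads refer
# to the old state, which is what makes the updates simultaneous.
def program(state):
    if state[("mode", ())] == "running":
        return {("ctr", ()): state[("ctr", ())] + 1}
    return {}

s = {("mode", ()): "running", ("ctr", ()): 0}
for _ in range(3):
    s = step(s, program)
print(s[("ctr", ())])  # -> 3
```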
The concept of ASMs is due to Yuri Gurevich, who first proposed it in the mid-1980s as a way of improving on Turing's thesis that every algorithm is simulated by an appropriate Turing machine. He formulated the ASM Thesis: every algorithm, no matter how abstract, is step-for-step emulated by an appropriate ASM. In 2000, Gurevich axiomatized the notion of sequential algorithms, and proved the ASM thesis for them. Roughly stated, the axioms are as follows:
The axiomatization and characterization of sequential algorithms have been extended to parallel and interactive algorithms.
In the 1990s, through a community effort,[1] the ASM method was developed, using ASMs for the formal specification and analysis (verification and validation) of computer hardware and software. Comprehensive ASM specifications of programming languages (including Prolog, C, and Java) and design languages (UML and SDL) have been developed.
A detailed historical account can be found elsewhere.[2][3]
A number of software tools for ASM execution and analysis are available.
(in historical order since 2000)
https://en.wikipedia.org/wiki/Abstract_state_machine
In automata theory, an alternating finite automaton (AFA) is a nondeterministic finite automaton whose transitions are divided into existential and universal transitions. For example, let A be an alternating automaton.
Note that due to the universal quantification a run is represented by a run tree. A accepts a word w if there exists a run tree on w such that every path ends in an accepting state.
A basic theorem states that any AFA is equivalent to a deterministic finite automaton (DFA), hence AFAs accept exactly the regular languages.
An alternative model which is frequently used is the one where Boolean combinations are in disjunctive normal form, so that, e.g., {{q_1}, {q_2, q_3}} would represent q_1 ∨ (q_2 ∧ q_3). The state tt (true) is represented by {∅} in this case and ff (false) by ∅. This representation is usually more efficient.
Alternating finite automata can be extended to accept trees in the same way as tree automata, yielding alternating tree automata.
An alternating finite automaton (AFA) is a 5-tuple, (Q, Σ, q_0, F, δ), where
For each string w ∈ Σ*, we define the acceptance function A_w: Q → {0, 1} by induction on the length of w:
The automaton accepts a string w ∈ Σ* if and only if A_w(q_0) = 1.
This model was introduced by Chandra, Kozen and Stockmeyer.[1]
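The inductive definition of the acceptance function A_w can be implemented directly by sweeping over the word from back to front. The two-state automaton below is a hypothetical example (not from the text); its δ(q, a) returns a Boolean combination of the acceptance values of the states on the remaining word:

```python
# AFA acceptance: A_epsilon(q) = 1 iff q in F, and A_{aw}(q) = delta(q, a)
# applied to the values A_w. We compute A_w by induction, back to front.
def accepts(word, states, q0, F, delta):
    A = {q: (q in F) for q in states}          # A_epsilon
    for ch in reversed(word):
        A = {q: delta(q, ch, A) for q in states}
    return A[q0]

# Hypothetical AFA over {a, b}: on 'a', q0 makes a universal move (both q0
# AND q1 must accept the rest); on 'b', an existential move (q0 OR q1).
def delta(q, ch, A):
    if q == "q0":
        return (A["q0"] and A["q1"]) if ch == "a" else (A["q0"] or A["q1"])
    return not A["q1"]                          # q1 flips on every letter

states, F = {"q0", "q1"}, {"q0", "q1"}
print(accepts("ab", states, "q0", F, delta))    # -> False
```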
Even though AFA can accept exactly theregular languages, they are different from other types of finite automata in the succinctness of description, measured by the number of their states.
Chandra et al.[1] proved that converting an n-state AFA to an equivalent DFA requires 2^(2^n) states in the worst case, though a DFA for the reverse language can be constructed with only 2^n states. Another construction by Fellah, Jürgensen and Yu[2] converts an AFA with n states to a nondeterministic finite automaton (NFA) with up to 2^n states by performing a similar kind of powerset construction as used for the transformation of an NFA to a DFA.
The membership problem asks, given an AFA A and a word w, whether A accepts w. This problem is P-complete.[3] This is true even on a singleton alphabet, i.e., when the automaton accepts a unary language.
The non-emptiness problem (is the language of an input AFA non-empty?), the universality problem (is the complement of the language of an input AFA empty?), and the equivalence problem (do two input AFAs recognize the same language?) are PSPACE-complete for AFAs.[3]: Theorems 23, 24, 25
https://en.wikipedia.org/wiki/Alternating_finite_automaton
In computer science, a communicating finite-state machine is a finite-state machine labeled with "receive" and "send" operations over some alphabet of channels. They were introduced by Brand and Zafiropulo,[1] and can be used as a model of concurrent processes like Petri nets. Communicating finite-state machines are used frequently for modeling a communication protocol since they make it possible to detect major protocol design errors, including boundedness, deadlocks, and unspecified receptions.[2]
The advantage of communicating finite-state machines is that they make it possible to decide many properties in communication protocols, beyond the level of just detecting such properties. This advantage removes the need for human assistance or restriction in generality.[1]
Communicating finite-state machines can be more powerful than finite-state machines in situations where the propagation delay is not negligible (so that several messages can be in transit at one time) and in situations where it is natural to describe the protocol parties and the communication medium as separate entities.[1]
Hierarchical state machines are finite-state machines whose states themselves can be other machines. Since a communicating finite-state machine is characterized by concurrency, the most notable trait in a communicating hierarchical state machine is the coexistence of hierarchy and concurrency. This has been considered highly suitable as it signifies stronger interaction inside the machine.
However, it was proved that the coexistence of hierarchy and concurrency intrinsically costs language inclusion, language equivalence, and all of universality.[3]
For an arbitrary positive integer N, a protocol[1]: 3 with N process(es) is a quadruple ⟨(S_i)_{i=1}^N, (o_i)_{i=1}^N, (M_{i,j})_{i,j=1}^N, (succ_i)_{i=1}^N⟩ with:
A global state is a pair ⟨S, C⟩ where
The initial global state is a pair ⟨O, E⟩ where
There are two kinds of steps: steps in which a message is received and steps in which a message is sent.
A step in which the j-th process receives a message m_{i,j} previously sent by the i-th process takes a global state whose channel (i, j) holds m_{i,j} at its front to the global state in which m_{i,j} has been removed from that channel and the j-th component of the state tuple has been replaced by s'_j = succ_j(s_j, +m_{i,j}). Similarly, a step in which the i-th process sends a message m_{i,j} to the j-th process adds m_{i,j} to channel (i, j) and replaces the i-th component of the state tuple by s'_i = succ_i(s_i, −m_{i,j}); all other components and channels are unchanged.
A run is a sequence of global states such that each consecutive pair of states is related by a step, and such that the first state is initial.
A global state ⟨S,C⟩{\displaystyle \langle S,C\rangle } is said to be reachable if there exists a run passing through this state.
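The step relation and reachability notion above can be made concrete with a small sketch: a breadth-first search over global states for a toy two-process protocol. The encoding (the `TRANS` table and the state names `'p'`, `'q'`, `'u'`, `'v'`) is illustrative, not from the source.

```python
from collections import deque

# A minimal sketch of reachable-global-state search for two communicating
# finite-state machines with one FIFO channel in each direction.
# A global state is <S, C> as in the definition above.

TRANS = {
    0: {'p': [('!', 'm', 'q')]},   # process 0: send m to process 1, then stop
    1: {'u': [('?', 'm', 'v')]},   # process 1: receive m, then stop
}

def reachable(initial):
    """BFS over global states; channel contents are tuples (FIFO words)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        states, chans = queue.popleft()
        chans = dict(chans)
        for i, s in enumerate(states):
            for act, m, s2 in TRANS[i].get(s, []):
                j = 1 - i
                new_chans = dict(chans)
                if act == '!':                      # send: append m to c_{i,j}
                    new_chans[(i, j)] = chans[(i, j)] + (m,)
                else:                               # receive: m must head c_{j,i}
                    if not chans[(j, i)] or chans[(j, i)][0] != m:
                        continue
                    new_chans[(j, i)] = chans[(j, i)][1:]
                new_states = list(states)
                new_states[i] = s2
                g = (tuple(new_states), tuple(sorted(new_chans.items())))
                if g not in seen:
                    seen.add(g)
                    queue.append(g)
    return seen

init = (('p', 'u'), tuple(sorted({(0, 1): (), (1, 0): ()}.items())))
print(len(reachable(init)))  # 3 reachable global states
```

Because channels are unbounded in general, such a search terminates only for bounded protocols; this toy protocol has exactly three reachable global states.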
It was proved, together with the introduction of the concept itself, that when two finite-state machines communicate with only one type of message, boundedness, deadlocks, and unspecified reception states can be decided and identified, while this is not the case when the machines communicate with two or more types of messages. It was later further proved that when only one finite-state machine communicates with a single type of message while the communication of its partner is unconstrained, boundedness, deadlocks, and unspecified reception states can still be decided and identified.[2]
It has been further proved that when the message priority relation is empty, boundedness, deadlocks and unspecified reception state can be decided even under the condition in which there are two or more types of messages in the communication between finite-state machines.[4]
Boundedness, deadlocks, and unspecified reception states are all decidable in polynomial time, since the corresponding decision problems are complete for nondeterministic logarithmic space.[2]
Some extensions considered are:
A channel system is essentially a version of a communicating finite-state machine in which the machine is not divided into distinct processes. Thus there is a single set of states, and there is no restriction on which part of the system can read or write on any channel.
Formally, given a protocol ⟨(Si)i=1n,(oi)i=1n,(Mi,j)i,j=1n,(succ)i⟩{\displaystyle \langle (S_{i})_{i=1}^{n},(o_{i})_{i=1}^{n},(M_{i,j})_{i,j=1}^{n},({\mathtt {succ}})_{i}\rangle }, its associated channel system is ⟨∏(Si)i=1n,(oi)i=1n,⋃i,j=1n(Mi,j),Δ⟩{\displaystyle \langle \prod (S_{i})_{i=1}^{n},(o_{i})_{i=1}^{n},\bigcup _{i,j=1}^{n}(M_{i,j}),\Delta \rangle }, where Δ{\displaystyle \Delta } is the set of triples ((s1,…,sj,…,sn),?mi,j,(s1,…,succj(sj,+mi,j),…,sn)){\displaystyle ((s_{1},\dots ,s_{j},\dots ,s_{n}),?m_{i,j},(s_{1},\dots ,{\mathtt {succ}}_{j}(s_{j},+m_{i,j}),\dots ,s_{n}))} and of triples ((s1,…,si,…,sn),!mi,j,(s1,…,succi(si,−mi,j),…,sn)){\displaystyle ((s_{1},\dots ,s_{i},\dots ,s_{n}),!m_{i,j},(s_{1},\dots ,{\mathtt {succ}}_{i}(s_{i},-m_{i,j}),\dots ,s_{n}))}.
https://en.wikipedia.org/wiki/Communicating_finite-state_machine
Control tables are tables that control the control flow or play a major part in program control. There are no rigid rules about the structure or content of a control table—its qualifying attribute is its ability to direct control flow in some way through "execution" by a processor or interpreter. The design of such tables is sometimes referred to as table-driven design[1][2] (although this typically refers to generating code automatically from external tables rather than direct run-time tables). In some cases, control tables can be specific implementations of finite-state-machine-based automata-based programming. If there are several hierarchical levels of control table they may behave in a manner equivalent to UML state machines.
Control tables often have the equivalent of conditional expressions or function references embedded in them, usually implied by their relative column position in the association list. Control tables reduce the need for programming similar structures or program statements over and over again. The two-dimensional nature of most tables makes them easier to view and update than the one-dimensional nature of program code.
In some cases, non-programmers can be assigned to maintain the content of control tables. For example, if a user-entered search phrase contains a certain phrase, a URL (web address) can be assigned in a table that controls where the search user is taken. If the phrase contains "skirt", then the table can route the user to "www.shopping.example/catalogs/skirts", which is the skirts product catalog page. (The example URL doesn't work in practice). Marketing personnel may manage such a table instead of programmers.
The tables can have multiple dimensions, of fixed or variable lengths, and are usually portable between computer platforms, requiring only a change to the interpreter, not the algorithm itself – the logic of which is essentially embodied within the table structure and content. The structure of the table may be similar to a multimap associative array, where a data value (or combination of data values) may be mapped to one or more functions to be performed.
In perhaps its simplest implementation, a control table may sometimes be a one-dimensional table for directly translating a raw data value to a corresponding subroutine offset, index or pointer, using the raw data value either directly as the index to the array, or by performing some basic arithmetic on the data beforehand. This can be achieved in constant time (without a linear search or binary search using a typical lookup table on an associative array). In most architectures, this can be accomplished in two or three machine instructions – without any comparisons or loops. The technique is known as a "trivial hash function" or, when used specifically for branch tables, "double dispatch".
For this to be feasible, the range of all possible values of the data needs to be small (e.g. an ASCII or EBCDIC character value, which has a range of hexadecimal '00' – 'FF'. If the actual range is guaranteed to be smaller than this, the array can be truncated to less than 256 bytes).
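The direct-translation ("trivial hash") technique described above can be sketched as follows; the handler names are illustrative, not from the source:

```python
# Sketch of "trivial hash" dispatch: the raw byte value indexes an array of
# handlers directly, with no search and no comparison chain.

def handle_add(x):
    return x + 1          # illustrative action for input 'A'

def handle_default(x):
    return x              # illustrative catch-all action

# Build a 256-entry dispatch array so every possible byte maps to a handler.
dispatch = [handle_default] * 256
dispatch[ord('A')] = handle_add

def process(byte_value, x):
    # Constant time: one array index and one call, for any byte value.
    return dispatch[byte_value](x)

print(process(ord('A'), 41))  # 42
print(process(ord('Z'), 41))  # 41
```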
In automata-based programming andpseudoconversational transactionprocessing, if the number of distinct program states is small, a "dense sequence" control variable can be used to efficiently dictate the entire flow of the main program loop.
A two byte raw data value would require aminimumtable size of 65,536 bytes – to handle all input possibilities – whilst allowing just 256 different output values. However, this direct translation technique provides an extremely fastvalidation& conversion to a (relative) subroutine pointer if theheuristics, together with sufficient fast access memory, permits its use.
A branch table is a one-dimensional 'array' of contiguous machine code branch/jump instructions to effect a multiway branch to a program label when branched into by an immediately preceding indexed branch. It is sometimes generated by an optimizing compiler to execute a switch statement – provided that the input range is small and dense, with few gaps[3] (as created by the previous array example).
Although quite compact – compared to the multiple equivalentIfstatements – the branch instructions still carry some redundancy, since the branchopcodeand condition code mask are repeated alongside the branch offsets. Control tables containing only the offsets to the program labels can be constructed to overcome this redundancy (at least in assembly languages) and yet requiring only minor execution timeoverheadcompared to a conventional branch table.
More usually, a control table can be thought of as atruth tableor as an executable ("binary") implementation of a printeddecision table(or atreeof decision tables, at several levels). They contain (often implied)propositions, together with one or more associated 'actions'. These actions are usually performed by generic or custom-builtsubroutinesthat are called by an "interpreter" program. The interpreter in this instance effectively functions as avirtual machine, that 'executes' the control table entries and thus provides a higher level ofabstractionthan the underlying code of the interpreter.
A control table can be constructed along similar lines to a language dependentswitch statementbut with the added possibility of testing for combinations of input values (usingbooleanstyleAND/ORconditions) and potentially calling multiplesubroutines(instead of just a single set of values and 'branch to' program labels). (The switch statement construct in any case may not be available, or has confusingly differing implementations inhigh level languages(HLL). The control table concept, by comparison, has no intrinsic language dependencies, but might nevertheless beimplementeddifferently according to the available data definition features of the chosen programming language.)
A control table essentially embodies the 'essence' of a conventional program, stripped of its programming language syntax and platform dependent components (e.g.IF/THEN DO.., FOR.., DO WHILE.., SWITCH, GOTO, CALL) and 'condensed' to its variables (e.g. input1), values (e.g. 'A','S','M' and 'D'), and subroutine identities (e.g. 'Add','subtract,..' or #1, #2,..). The structure of the table itself typicallyimpliesthe (default) logical operations involved – such as 'testing for equality', performing a subroutine and 'next operation' or following the default sequence (rather than these being explicitly stated within program statements – as required in otherprogramming paradigms).
A multi-dimensional control table will normally, as a minimum, contain value/action pairs and may additionally contain operators andtypeinformation such as, the location, size and format of input or output data, whetherdata conversion(or otherrun-timeprocessing nuances) is required before or after processing (if not already implicit in the function itself). The table may or may not containindexesor relative or absolutepointersto generic or customized primitives orsubroutinesto be executed depending upon other values in the "row".
The table illustrated below applies to a single input, 'input1', since no other input is specified in the table.
(This side-by-side pairing of value and action has similarities to constructs inevent-driven programming, namely 'event-detection' and 'event-handling' but without (necessarily) theasynchronousnature of the event itself)
The variety of values that can beencodedwithin a control table is largely dependent upon thecomputer languageused.Assembly languageprovides the widest scope fordata typesincluding (for the actions), the option of directly executablemachine code. Typically a control table will contain values for each possible matching class of input together with a corresponding pointer to an action subroutine. Some languages claim not to supportpointers(directly) but nevertheless can instead support an index which can be used to represent a 'relative subroutine number' to perform conditional execution, controlled by the value in the table entry (e.g. for use in an optimizedSWITCHstatement – designed with zero gaps i.e. amultiway branch).
Comments positioned above each column (or even embedded textual documentation) can render a decision table 'human readable' even after 'condensing down' (encoding) to its essentials (and still broadly in-line with the original program specification – especially if a printed decision table,enumeratingeach unique action, is created before coding begins).
The table entries can also optionally contain counters to collect run-time statistics for 'in-flight' or later optimization.
Control tables can reside instaticstorage, onauxiliary storage, such as aflat fileor on adatabaseor may alternatively be partially or entirely built dynamically at programinitializationtime from parameters (which themselves may reside in a table). For optimum efficiency, the table should be memory resident when the interpreter begins to use it.
The interpreter can be written in any suitable programming language including ahigh level language. A suitably designedgenericinterpreter, together with a well chosen set of generic subroutines (able to process the most commonly occurring primitives), would require additional conventional coding only for new custom subroutines (in addition to specifying the control table itself). The interpreter, optionally, may only apply to some well-defined sections of a complete application program (such as themain control loop) and not other, 'less conditional', sections (such as program initialization, termination and so on).
The interpreter does not need to be unduly complex, or produced by a programmer with the advanced knowledge of a compiler writer, and can be written just as any other application program – except that it is usually designed with efficiency in mind. Its primary function is to "execute" the table entries as a set of "instructions". There need be no requirement for parsing of control table entries and these should therefore be designed, as far as possible, to be 'execution ready', requiring only the "plugging in" of variables from the appropriate columns to the already compiled generic code of the interpreter. Theprogram instructionsare, in theory, infinitelyextensibleand constitute (possibly arbitrary) values within the table that are meaningful only to the interpreter. Thecontrol flowof the interpreter is normally by sequential processing of each table row but may be modified by specific actions in the table entries.
These arbitrary values can thus be designed withefficiencyin mind – by selecting values that can be used as direct indexes to data orfunction pointers. For particular platforms/language, they can be specifically designed to minimizeinstruction path lengthsusingbranch tablevalues or even, in some cases such as inJITcompilers, consist of directly executablemachine code"snippets" (or pointers to them).
The subroutines may be coded either in the same language as the interpreter itself or any other supported program language (provided that suitable inter-language 'Call' linkage mechanisms exist). The choice of language for the interpreter and/or subroutines will usually depend upon how portable it needs to be across variousplatforms. There may be several versions of the interpreter to enhance theportabilityof a control table. A subordinate control table pointer may optionally substitute for a subroutine pointer in the 'action' columns if the interpreter supports this construct, representing a conditional 'drop' to a lower logical level, mimicking a conventionalstructured programstructure.
At first sight, the use of control tables would appear to add quite a lot to a program'soverhead, requiring, as it does, an interpreter process before the 'native' programming language statements are executed. This however is not always the case. By separating (or 'encapsulating') the executable coding from the logic, as expressed in the table, it can be more readily targeted to perform its function most efficiently. This may be experienced most obviously in aspreadsheetapplication – where the underlying spreadsheet software transparently converts complex logical 'formulae' in the most efficient manner it is able, in order to display its results.
The examples below have been chosen partly to illustrate potential performance gains that may not onlycompensatesignificantly for the additional tier of abstraction, but alsoimproveupon – what otherwise might have been – less efficient, less maintainable and lengthier code. Although the examples given are for a 'low level' assembly language and for theC language, it can be seen, in both cases, that very few lines of code are required to implement the control table approach and yet can achieve very significantconstant timeperformance improvements, reduce repetitive source coding and aid clarity, as compared withverboseconventional program language constructs. See also thequotationsbyDonald Knuth, concerning tables and the efficiency ofmultiway branchingin this article.
The following examples arearbitrary(and based upon just a single input for simplicity), however the intention is merely to demonstrate how control flow can be effected via the use of tables instead of regular program statements. It should be clear that this technique can easily be extended to deal with multiple inputs, either by increasing the number of columns or utilizing multiple table entries (with optional and/or operator). Similarly, by using (hierarchical) 'linked' control tables,structured programmingcan be accomplished (optionally using indentation to help highlight subordinate control tables).
"CT1" is an example of a control table that is a simplelookup table. The first column represents the input value to be tested (by an implied 'IF input1 = x') and, if TRUE, the corresponding 2nd column (the 'action') contains a subroutine address to perform by acall(orjumpto – similar to aSWITCHstatement). It is, in effect, amultiway branchwith return (a form of "dynamic dispatch"). The last entry is the default case where no match is found.
For programming languages that support pointers within data structures alongside other data values, the above table (CT1) can be used to direct control flow to an appropriate subroutine according to the matching value from the table (without a column to indicate otherwise, equality is assumed in this simple case).
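The CT1 scheme – value/action pairs scanned linearly with a final default entry – can be sketched as data plus a tiny interpreter (the operation names are illustrative):

```python
# CT1 as data: (input value, action subroutine) pairs; the last entry, with
# value None, is the default case that matches anything.

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def default(a, b):
    raise ValueError("unknown operation")

CT1 = [
    ('A', add),
    ('S', subtract),
    (None, default),   # default entry: always matches
]

def interpret(input1, a, b):
    # Linear search, as in the first assembly example in the text.
    for value, action in CT1:
        if value is None or value == input1:
            return action(a, b)   # "call" the subroutine from the table

print(interpret('A', 6, 7))  # 13
print(interpret('S', 6, 7))  # -1
```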
No attempt is made to optimize the lookup in coding for this first example (for IBM/360 maximum 16 MB address range or Z/Architecture); it uses instead a simple linear search technique – purely to illustrate the concept and demonstrate fewer source lines. To handle all 256 different input values, approximately 265 lines of source code would be required (mainly single-line table entries), whereas multiple 'compare and branch' instructions would have normally required around 512 source lines. The size of the binary is also approximately halved, each table entry requiring only 4 bytes instead of approximately 8 bytes for a series of 'compare immediate'/branch instructions (for larger input variables, the saving is even greater).
improving the performance of the interpreter in the above example
Improved interpreter (up to 26 times fewer executed instructions than the above example on average, where n = 1 to 64, and up to 13 times fewer than would be needed using multiple comparisons).
To handle 64 different input values, approximately 85 lines of source code (or less) are required (mainly single line table entries) whereas multiple 'compare and branch' would require around 128 lines (the size of thebinaryis also almost halved – despite the additional 256 byte table required to extract the 2nd index).
Further improved interpreter (up to 21 times fewer executed instructions (where n >= 64) than the first example on average, and up to 42 times fewer than would be needed using multiple comparisons).
To handle 256 different input values, approximately 280 lines of source code or less, would be required (mainly single line table entries), whereas multiple 'compare and branch' would require around 512 lines (the size of thebinaryis also almost halved once more).
This example inCuses two tables, the first (CT1) is a simplelinear searchone-dimensional lookup table – to obtain an index by matching the input (x), and the second, associated table (CT1p), is a table of addresses of labels to jump to.
This can be made more efficient if a 256 byte table is used to translate the raw ASCII value (x) directly to a dense sequential index value for use in directly locating the branch address from CT1p (i.e. "index mapping" with a byte-wide array). It will then execute inconstant timefor all possible values of x (If CT1p contained the names of functions instead of labels, the jump could be replaced with a dynamic function call, eliminating the switch-like goto – but decreasing performance by the additional cost of functionhousekeeping).
The next example below illustrates how a similar effect can be achieved in languages that do not support pointer definitions in data structures but do support indexed branching to a subroutine – contained within a (0-based) array of subroutine pointers. The table (CT2) is used to extract the index (from 2nd column) to the pointer array (CT2P). If pointer arrays are not supported, a SWITCH statement or equivalent can be used to alter the control flow to one of a sequence of program labels (e.g.: case0, case1, case2, case3, case4) which then either process the input directly, or else perform a call (with return) to the appropriate subroutine (default, Add, Subtract, Multiply or Divide,..) to deal with it.
As in the above examples, it is possible to very efficiently translate the potential ASCII input values (A, S, M, D or unknown) into a pointer array index without actually using a table lookup, but it is shown here as a table for consistency with the first example.
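The two-stage scheme above – a 256-entry translate table mapping a raw ASCII code to a dense index, which then selects a routine from a pointer array – can be sketched as follows (the table names CT2P and `translate` follow the text loosely; the routines themselves are illustrative):

```python
# Stage 1: translate[x] maps any byte to a dense index (0 = default).
# Stage 2: CT2P is the array of subroutine pointers selected by that index.

def default(a, b):
    return None           # illustrative: unrecognized opcode

def add(a, b): return a + b
def subtract(a, b): return a - b
def multiply(a, b): return a * b
def divide(a, b): return a / b

CT2P = [default, add, subtract, multiply, divide]

translate = [0] * 256     # every byte defaults to index 0
for i, ch in enumerate('ASMD', start=1):
    translate[ord(ch)] = i

def dispatch(x, a, b):
    # Constant time for any byte x: two array indexes and one call.
    return CT2P[translate[x]](a, b)

print(dispatch(ord('M'), 6, 7))  # 42
```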
Multi-dimensional control tables can be constructed (i.e. customized) that can be 'more complex' than the above examples that might test for multiple conditions on multiple inputs or perform more than one 'action', based on some matching criteria. An 'action' can include a pointer to another subordinate control table. The simple example below has had animplicit'OR' condition incorporated as an extra column (to handle lower case input, however in this instance, this could equally have been handled simply by having an extra entry for each of the lower case characters specifying the same subroutine identifier as the upper case characters). An extra column to count the actual run-time events for each input as they occur is also included.
The control table entries are then much more similar to conditional statements inprocedural languagesbut, crucially, without the actual (language dependent) conditional statements (i.e. instructions) being present (the generic code isphysicallyin the interpreter that processes the table entries, not in the table itself – which simply embodies the program logic via its structure and values).
In tables such as these, where a series of similar table entries defines the entire logic, a table entry number or pointer may effectively take the place of aprogram counterin more conventional programs and may be reset in an 'action', also specified in the table entry. The example below (CT4) shows how extending the earlier table, to include a 'next' entry (and/or including an 'alter flow' (jump) subroutine) can create aloop(This example is actually not the most efficient way to construct such a control table but, by demonstrating a gradual 'evolution' from the first examples above, shows how additional columns can be used to modify behaviour.) The fifth column demonstrates that more than one action can be initiated with a single table entry – in this case an action to be performedafterthe normal processing of each entry ('-' values mean 'no conditions' or 'no action').
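An interpreter in which a 'next' column takes the place of the program counter, as described for CT4 above, can be sketched like this (the table contents and actions are illustrative):

```python
# Each table row is (action, next_entry_index); the entry index acts as the
# program counter, and an action may end the run.

def say(msg):
    def action(state):
        state['log'].append(msg)
    return action

def stop(state):
    state['running'] = False

TABLE = [
    (say('init'), 1),   # entry 0
    (say('work'), 2),   # entry 1
    (stop,        0),   # entry 2: halt (next is ignored once stopped)
]

def run(table):
    state = {'log': [], 'running': True}
    pc = 0                       # table-entry number as program counter
    while state['running']:
        action, nxt = table[pc]
        action(state)            # perform the row's action
        pc = nxt                 # 'next' column replaces sequential flow
    return state['log']

print(run(TABLE))  # ['init', 'work']
```

A loop would simply be a row whose 'next' field points back to an earlier entry, with some action eventually resetting it forward.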
Structured programmingor"Goto-less" code, (incorporating the equivalent of 'DO WHILE' or 'for loop' constructs), can also be accommodated with suitably designed and 'indented' control table structures.
In the specialist field of telecommunications rating (concerned with determining the cost of a particular call), table-driven rating techniques illustrate the use of control tables in applications where the rules may change frequently because of market forces. The tables that determine the charges may be changed at short notice by non-programmers in many cases.[4][5]
If the algorithms are not pre-built into the interpreter (and therefore require additional runtime interpretation of an expression held in the table), it is known as "Rule-based Rating" rather than table-driven rating (and consequently consumes significantly more overhead).
Aspreadsheetdata sheet can be thought of as a two dimensional control table, with the non empty cells representing data to the underlying spreadsheet program (the interpreter). The cells containing formula are usually prefixed with an equals sign and simply designate a special type of data input that dictates the processing of other referenced cells – by altering the control flow within the interpreter. It is the externalization of formulae from the underlying interpreter that clearly identifies both spreadsheets, and the above cited "rule based rating" example as readily identifiable instances of the use of control tables by non programmers.
If the control tables technique could be said to belong to any particularprogramming paradigm, the closest analogy might be automata-based programming or"reflective"(a form ofmetaprogramming– since the table entries could be said to 'modify' the behaviour of the interpreter). The interpreter itself however, and the subroutines, can be programmed using any one of the available paradigms or even a mixture. The table itself can be essentially a collection of "raw data" values that do not even need to be compiled and could be read in from an external source (except in specific, platform dependent, implementations using memory pointers directly for greater efficiency).
A multi-dimensional control table has some conceptual similarities tobytecodeoperating on avirtual machine, in that aplatform dependent"interpreter"program is usually required to perform the actual execution (that is largely conditionally determined by the tables content). There are also some conceptual similarities to the recentCommon Intermediate Language(CIL) in the aim of creating a common intermediate 'instruction set' that is independent of platform (but unlike CIL, no pretensions to be used as a common resource for other languages).P-codecan also be considered a similar but earlier implementation with origins as far back as 1966.
When a multi-dimensional control table is used to determine program flow, the normal "hardware"program counterfunction is effectively simulated with either apointerto the first (or next) table entry or else anindexto it. "Fetching" the instruction involves decoding thedatain that table entry – without necessarily copying all or some of the data within the entry first. Programming languages that are able to usepointershave the dual advantage that lessoverheadis involved, both in accessing the contents and also advancing the counter to point to the next table entry after execution. Calculating the next 'instruction' address (i.e. table entry) can even be performed as an optional additional action of every individual table entry allowingloopsand orjumpinstructions at any stage.
The interpreter program can optionally save the program counter (and other relevant details depending upon instruction type) at each stage to record a full or partial trace of the actual program flow fordebuggingpurposes,hot spotdetection,code coverageanalysis andperformance analysis(see examples CT3 & CT4 above).
Optionally:
The following mainly apply to their use in multi-dimensional tables, not the one-dimensional tables discussed earlier.
Multiway branching is an important programming technique which is all too often replaced by an inefficient sequence of if tests. Peter Naur recently wrote me that he considers the use of tables to control program flow as a basic idea of computer science that has been nearly forgotten; but he expects it will be ripe for rediscovery any day now. It is the key to efficiency in all the best compilers I have studied.
There is another way to look at a program written in interpretative language. It may be regarded as a series of subroutine calls, one after another. Such a program may in fact be expanded into a long sequence of calls on subroutines, and, conversely, such a sequence can usually be packed into a coded form that is readily interpreted. The advantages of interpretive techniques are the compactness of representation, the machine independence, and the increased diagnostic capability. An interpreter can often be written so that the amount of time spent in interpretation of the code itself and branching to the appropriate routine is negligible.
The space required to represent a program can often be decreased by the use of interpreters in which common sequences of operations are represented compactly. A typical example is the use of a finite-state machine to encode a complex protocol or lexical format into a small table
Jump tables can be especially efficient if the range tests can be omitted. For example, if the control value is an enumerated type (or a character) then it can only contain a small fixed range of values and a range test is redundant provided the jump table is large enough to handle all possible values
Programs must be written for people to read, and only incidentally for machines to execute.
Show me your flowchart and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowchart; it'll be obvious.
https://en.wikipedia.org/wiki/Control_table
DEVS, abbreviating Discrete Event System Specification, is a modular and hierarchical formalism for modeling and analyzing general systems, which can be: discrete event systems, which might be described by state transition tables; continuous state systems, which might be described by differential equations; and hybrid continuous state and discrete event systems. DEVS is a timed event system.
DEVS is a formalism for modeling and analysis of discrete event systems (DESs). The DEVS formalism was invented by Bernard P. Zeigler, who is an emeritus professor at the University of Arizona. DEVS was introduced to the public in Zeigler's first book, Theory of Modeling and Simulation, in 1976,[1] while Zeigler was an associate professor at the University of Michigan. DEVS can be seen as an extension of the Moore machine formalism,[2] which is a finite state automaton where the outputs are determined by the current state alone (and do not depend directly on the input). The extension was done by
Since the lifespan of each state is a real number (more precisely, non-negative real) or infinity, it is distinguished from discrete time systems, sequential machines, andMoore machines, in which time is determined by a tick time multiplied by non-negative integers. Moreover, the lifespan can be arandom variable; for example the lifespan of a given state can be distributedexponentiallyoruniformly. The state transition and output functions of DEVS can also bestochastic.
Zeigler proposed a hierarchical algorithm for DEVS model simulation in 1984,[4] which was published in the Simulation journal in 1987. Since then, many formalisms extending DEVS have been introduced, each with its own purpose: DESS/DEVS for combined continuous and discrete event systems, P-DEVS for parallel DESs, G-DEVS for piecewise continuous state trajectory modeling of DESs, RT-DEVS for realtime DESs, Cell-DEVS for cellular DESs, Fuzzy-DEVS for fuzzy DESs, Dynamic Structuring DEVS for DESs changing their coupling structures dynamically, and so on. In addition to its extensions, some subclasses such as SP-DEVS and FD-DEVS have been researched for achieving decidability of system properties.
Due to the modular and hierarchical modeling views, as well as its simulation-based analysis capability, the DEVS formalism and its variations have been used in many applications of engineering (such as hardware design, hardware/software codesign, communications systems, manufacturing systems) and science (such as biology and sociology).
DEVS defines system behavior as well as system structure. System behavior in the DEVS formalism is described using input and output events as well as states. For example, for the ping-pong player of Fig. 1, the input event is ?receive, and the output event is !send. Each player, A, B, has its states: Send and Wait. The Send state takes 0.1 seconds to send back the ball (the output event !send), while the Wait state lasts until the player receives the ball (the input event ?receive).
The structure of the ping-pong game is to connect two players: Player A's output event !send is transmitted to Player B's input event ?receive, and vice versa.
In the classic DEVS formalism, Atomic DEVS captures the system behavior, while Coupled DEVS describes the structure of the system.
The following formal definition is for Classic DEVS.[5]In this article, we will use the time base,T=[0,∞){\displaystyle \mathbb {T} =[0,\infty )}that is the set of non-negative real numbers; the extended time base,T∞=[0,∞]{\displaystyle \mathbb {T} ^{\infty }=[0,\infty ]}that is the set of non-negative real numbers plus infinity.
An atomic DEVS model is defined as a 7-tuple
where
The atomic DEVS model for player A of Fig. 1 is given as Player =<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle <X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >} such that
{\displaystyle {\begin{aligned}X&=\{?{\textit {receive}}\}\\Y&=\{!{\textit {send}}\}\\S&=\{(d,\sigma )|d\in \{{\textit {Wait}},{\textit {Send}}\},\sigma \in \mathbb {T} ^{\infty }\}\\s_{0}&=({\textit {Send}},0.1)\\ta(s)&=\sigma {\text{ for all }}s\in S\\\delta _{ext}((({\textit {Wait}},\sigma ),t_{e}),?{\textit {receive}})&=({\textit {Send}},0.1)\\\delta _{int}({\textit {Send}},\sigma )&=({\textit {Wait}},\infty )\\\delta _{int}({\textit {Wait}},\sigma )&=({\textit {Send}},0.1)\\\lambda ({\textit {Send}},\sigma )&=!{\textit {send}}\\\lambda ({\textit {Wait}},\sigma )&=\phi \end{aligned}}}
Both Player A and Player B are atomic DEVS models.
Simply speaking, there are two cases in which an atomic DEVS model {\displaystyle M} can change its state {\displaystyle s\in S}: (1) when an external input {\displaystyle x\in X} comes into the system {\displaystyle M}; and (2) when the elapsed time {\displaystyle t_{e}} reaches the lifespan of {\displaystyle s}, which is defined by {\displaystyle ta(s)}. At the same time as (2), {\displaystyle M} generates an output {\displaystyle y\in Y}, which is defined by {\displaystyle \lambda (s)}.
For formal behavior description of given an Atomic DEVS model, refer to the sectionBehavior of atomic DEVS. Computer algorithms to implement the behavior of a given Atomic DEVS model are available in the sectionSimulation Algorithms for Atomic DEVS.
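As an illustration of the two state-change cases above, Player A of Fig. 1 can be sketched in Python. This is a minimal, assumed encoding for exposition only; the names `ta`, `delta_int`, `delta_ext`, and `out` mirror the formal functions but are not taken from any particular DEVS library.

```python
from dataclasses import dataclass

INF = float("inf")

@dataclass
class Player:
    """Atomic DEVS sketch of a ping-pong player.

    The state is a pair (phase, sigma): phase in {"Wait", "Send"} and
    sigma is the remaining lifespan of the current phase.
    """
    phase: str = "Send"
    sigma: float = 0.1

    def ta(self):                 # lifespan of the current state
        return self.sigma

    def delta_int(self):          # internal transition (case 2)
        if self.phase == "Send":
            self.phase, self.sigma = "Wait", INF
        else:
            self.phase, self.sigma = "Send", 0.1

    def delta_ext(self, e, x):    # external transition on input x (case 1)
        if self.phase == "Wait" and x == "?receive":
            self.phase, self.sigma = "Send", 0.1

    def out(self):                # output function, fired just before delta_int
        return "!send" if self.phase == "Send" else None

a = Player()                      # starts in (Send, 0.1)
y = a.out()                       # after 0.1 s: emits !send ...
a.delta_int()                     # ... and moves to (Wait, inf)
a.delta_ext(0.0, "?receive")      # ball comes back: (Send, 0.1) again
```

Calling `out` before `delta_int` follows the DEVS convention that the output function fires just before the internal transition.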
The coupled DEVS defines which sub-components belong to it and how they are connected with each other. A coupled DEVS model is defined as an 8-tuple
where
The ping-pong game of Fig. 1 can be modeled as a coupled DEVS model {\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>} where {\displaystyle X=\{\}}; {\displaystyle Y=\{\}}; {\displaystyle D=\{A,B\}}; {\displaystyle M_{A}{\text{ and }}M_{B}} are described as above; {\displaystyle C_{xx}=\{\}}; {\displaystyle C_{yx}=\{(A.!send,B.?receive),(B.!send,A.?receive)\}}; and {\displaystyle C_{yy}(A.!send)=\phi ,C_{yy}(B.!send)=\phi }.
Simply speaking, like the behavior of the atomic DEVS class, a coupled DEVS model {\displaystyle N} changes its components' states (1) when an external event {\displaystyle x\in X} comes into {\displaystyle N}; or (2) when one of the components {\displaystyle M_{i}} where {\displaystyle i\in D} executes its internal state transition and generates its output {\displaystyle y_{i}\in Y_{i}}. In both cases (1) and (2), a triggering event is transmitted to all influencees, which are defined by the coupling sets {\displaystyle C_{xx},C_{yx},} and {\displaystyle C_{yy}}.
For a formal definition of the behavior of the coupled DEVS, refer to the section Behavior of Coupled DEVS. Computer algorithms to implement the behavior of a given coupled DEVS model are available in the section Simulation Algorithms for Coupled DEVS.
The simulation algorithm of DEVS models considers two issues: time synchronization and message propagation. Time synchronization of DEVS controls all models so that they have the identical current time. However, for efficient execution, the algorithm makes the current time jump to the most urgent time at which an event is scheduled to execute its internal state transition as well as its output generation. Message propagation transmits a triggering message, which can be either an input or output event, along the associated couplings defined in a coupled DEVS model. For more detailed information, the reader can refer to Simulation Algorithms for Atomic DEVS and Simulation Algorithms for Coupled DEVS.
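The two issues can be sketched together in a minimal flat simulator loop for the ping-pong game. This is illustrative only: real DEVS simulators use the hierarchical simulator/coordinator protocol rather than this flat loop, and all names here are made up for the sketch.

```python
INF = float("inf")

# Each player is [phase, t_next]: t_next is the absolute time of its next
# internal event. Couplings route A's !send to B's ?receive and vice versa.
players = {"A": ["Send", 0.1], "B": ["Wait", INF]}
couplings = {"A": "B", "B": "A"}
trace = []

t = 0.0
for _ in range(4):                        # simulate four rallies
    # time synchronization: jump to the most imminent scheduled event
    star = min(players, key=lambda i: players[i][1])
    t = players[star][1]
    trace.append((t, star, "!send"))      # output generation
    players[star] = ["Wait", INF]         # internal transition: Send -> Wait
    # message propagation along the coupling
    peer = couplings[star]
    if players[peer][0] == "Wait":
        players[peer] = ["Send", t + 0.1]  # external transition: Wait -> Send
```

The `min` over `t_next` is the "jump to the most urgent time" step; the coupling lookup is the message-propagation step.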
By introducing a quantization method which abstracts a continuous segment as a piecewise constant segment, DEVS can simulate behaviors of continuous state systems which are described by networks of differential algebraic equations. This research was initiated by Zeigler in the 1990s.[7] Many properties have been clarified by Prof. Kofman in the 2000s and by Dr. Nutaro. In 2006, Prof. Cellier, who is the author of Continuous System Modeling,[8] and Prof. Kofman wrote a textbook, Continuous System Simulation,[9] in which Chapters 11 and 12 cover how DEVS simulates continuous state systems. Dr. Nutaro's book[10] covers the discrete event simulation of continuous state systems, too.[11]
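A minimal sketch of the idea, assuming a single quantized integrator for dx/dt = −x with an arbitrarily chosen quantum of 0.25: the continuous trajectory is replaced by a piecewise constant quantized state, and each level crossing becomes a discrete event. This is a simplified QSS-style illustration, not the full method described in the cited books.

```python
# Quantized integration of dx/dt = -x (illustrative sketch).
# An event occurs each time x crosses a quantization level; between
# events the quantized state is held piecewise constant.
q = 0.25                 # quantum (assumed value)
x, t = 1.0, 0.0          # initial condition x(0) = 1
events = []

for _ in range(3):       # three level crossings
    dx = -x              # current derivative
    dt = q / abs(dx)     # time for x to move one quantum at this rate
    t += dt
    x -= q               # x moves exactly one quantum per event
    events.append((round(t, 4), x))
```

Each `(time, level)` pair in `events` is one discrete event of the quantized system; a DEVS integrator would emit these as output events to coupled models.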
As an alternative analysis method to the sampling-based simulation method, an exhaustive behavior-generating approach, generally called verification, has been applied for the analysis of DEVS models. It is proven that the infinite states of a given DEVS model (especially a coupled DEVS model) can be abstracted by a behaviorally isomorphic finite structure, called a reachability graph, when the given DEVS model is in a subclass of DEVS such as Schedule-Preserving DEVS (SP-DEVS), Finite & Deterministic DEVS (FD-DEVS),[12] and Finite & Real-time DEVS (FRT-DEVS).[13] As a result, based on the reachability graph, (1) dead-lock and live-lock freeness as qualitative properties are decidable with SP-DEVS,[14] FD-DEVS,[15] and FRT-DEVS;[13] and (2) min/max processing time bounds as a quantitative property are decidable with SP-DEVS, as of 2012.
Numerous extensions of the classic DEVS formalism have been developed in the last decades. Among them are formalisms that allow the model structure to change while the simulation time evolves.
G-DEVS,[16][17]Parallel DEVS, Dynamic Structuring DEVS, Cell-DEVS,[18]dynDEVS, Fuzzy-DEVS, GK-DEVS, ml-DEVS, Symbolic DEVS, Real-Time DEVS, rho-DEVS
There are sub-classes known as Schedule-Preserving DEVS (SP-DEVS) and Finite & Deterministic DEVS (FD-DEVS) which were designed to support verification analysis. Their expressiveness satisfies E(SP-DEVS) {\displaystyle \subset } E(FD-DEVS) {\displaystyle \subset } E(DEVS), where E(formalism) denotes the expressiveness of formalism.
The behavior of a given DEVS model is a set of sequences of timed events, including null events, called event segments, which make the model move from one state to another within a set of legal states. To define it this way, the concepts of a set of illegal states as well as a set of legal states need to be introduced.
In addition, since the behavior of a given DEVS model needs to define how the state changes both as time passes and when an event occurs, it has been described by a more general formalism, called general system.[19] In this article, we use a subclass of the general system formalism, called timed event system, instead.
Depending on how the total state and the external state transition function of a DEVS model are defined, there are two ways to define the behavior of a DEVS model usingTimed Event System. Since thebehavior of a coupled DEVSmodel is defined as anatomic DEVSmodel, the behavior of coupled DEVS class is also defined by timed event system.
Suppose that a DEVS model,M=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}has
Then the DEVS model,M{\displaystyle {\mathcal {M}}}is aTimed Event SystemG=<Z,Q,Q0,QA,Δ>{\displaystyle {\mathcal {G}}=<Z,Q,Q_{0},Q_{A},\Delta >}where
For a total state {\displaystyle q=(s,t_{e})\in Q_{A}} at time {\displaystyle t\in \mathbb {T} } and an event segment {\displaystyle \omega \in \Omega _{Z,[t_{l},t_{u}]}}, the state transition is defined as follows.
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(x,t){\displaystyle \omega =(x,t)}where the event is an input eventx∈X{\displaystyle x\in X},
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(y,t){\displaystyle \omega =(y,t)}where the event is an output event or the unobservable eventy∈Yϕ{\displaystyle y\in Y^{\phi }},
Computer algorithms to simulate this view of behavior are available atSimulation Algorithms for Atomic DEVS.
Suppose that a DEVS model,M=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}has
Then the DEVS model {\displaystyle {\mathcal {M}}} is a timed event system {\displaystyle {\mathcal {G}}=<Z,Q,Q_{0},Q_{A},\Delta >} where
For a total state {\displaystyle q=(s,t_{s},t_{e})\in Q_{A}} at time {\displaystyle t\in \mathbb {T} } and an event segment {\displaystyle \omega \in \Omega _{Z,[t_{l},t_{u}]}}, the state transition is defined as follows.
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(x,t){\displaystyle \omega =(x,t)}where the event is an input eventx∈X{\displaystyle x\in X},
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(y,t){\displaystyle \omega =(y,t)}where the event is an output event or the unobservable eventy∈Yϕ{\displaystyle y\in Y^{\phi }},
Computer algorithms to simulate this view of behavior are available atSimulation Algorithms for Atomic DEVS.
View1 has been introduced by Zeigler[20]in which given a total stateq=(s,te)∈Q{\displaystyle q=(s,t_{e})\in Q}and
whereσ{\displaystyle \sigma }is the remaining time.[20][19]In other words, the set of partial states is indeedS={(d,σ)|d∈S′,σ∈T∞}{\displaystyle S=\{(d,\sigma )|d\in S',\sigma \in \mathbb {T} ^{\infty }\}}whereS′{\displaystyle S'}is a state set.
When a DEVS model receives an input event {\displaystyle x\in X}, View1 resets the elapsed time {\displaystyle t_{e}} to zero. If the DEVS model needs to ignore {\displaystyle x} in terms of the lifespan control, modelers have to update the remaining time

{\displaystyle \sigma =ta(s)-t_{e}}

in the external state transition function {\displaystyle \delta _{ext}}; this is the responsibility of the modelers.
Since the number of possible values of {\displaystyle \sigma } is the same as the number of possible input events coming to the DEVS model, it is unbounded. As a result, the number of states {\displaystyle s=(d,\sigma )\in S} is also unbounded, which is the reason why View2 has been proposed.
If we do not care about the finite-vertex reachability graph of a DEVS model, View1 has the advantage of simplicity: the elapsed time is treated as {\displaystyle t_{e}=0} every time any input event arrives at the DEVS model. But a disadvantage is that modelers of DEVS should know how to manage {\displaystyle \sigma } as above, which is not explicitly explained in {\displaystyle \delta _{ext}} itself but in {\displaystyle \Delta }.
View2 has been introduced by Hwang and Zeigler[21][22] in which, given a total state {\displaystyle q=(s,t_{s},t_{e})\in Q}, the remaining time {\displaystyle \sigma } is computed as

{\displaystyle \sigma =t_{s}-t_{e}.}
When a DEVS model receives an input event {\displaystyle x\in X}, View2 resets the elapsed time {\displaystyle t_{e}} to zero only if {\displaystyle \delta _{ext}(q,x)=(s',1)}. If the DEVS model needs to ignore {\displaystyle x} in terms of the lifespan control, modelers can use {\displaystyle \delta _{ext}(q,x)=(s',0)}.
Unlike View1, since the remaining time {\displaystyle \sigma } is not a component of {\displaystyle S} in nature, if the number of states, i.e. {\displaystyle |S|}, is finite, we can draw a finite-vertex (as well as finite-edge) state-transition diagram.[21][22] As a result, we can abstract the behavior of such a DEVS-class network, for example SP-DEVS and FD-DEVS, as a finite-vertex graph, called a reachability graph.[21][22]
DEVS is closed under coupling.[3][23]In other words, given acoupled DEVSmodelN{\displaystyle N}, its behavior is described as an atomic DEVS modelM{\displaystyle M}. For a given coupled DEVSN{\displaystyle N}, once we have an equivalent atomic DEVSM{\displaystyle M}, behavior ofM{\displaystyle M}can be referred tobehavior of atomic DEVSwhich is based onTimed Event System.
Similar tobehavior of atomic DEVS, behavior of the Coupled DEVS class is described depending on definition of the total state set and its handling as follows.
Given acoupled DEVSmodelN=<X,Y,D,{Mi},Cxx,Cyx,Cyy,Select>{\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>}, its behavior is described as an atomic DEVS modelM=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle M=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}
where
where
Given the partial state {\displaystyle s=(\ldots ,(s_{i},t_{ei}),\ldots )\in S}, let {\displaystyle IMM(s)=\{i\in D|ta_{i}(s_{i})=ta(s)\}} denote the set of imminent components. The firing component {\displaystyle i^{*}\in D}, which triggers the internal state transition and an output event, is determined by
where
Given acoupled DEVSmodelN=<X,Y,D,{Mi},Cxx,Cyx,Cyy,Select>{\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>}, its behavior is described as an atomic DEVS modelM=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle M=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}
where
where
and
Given the partial state {\displaystyle s=(\ldots ,(s_{i},t_{si},t_{ei}),\ldots )\in S}, let {\displaystyle IMM(s)=\{i\in D|t_{si}-t_{ei}=ta(s)\}} denote the set of imminent components. The firing component {\displaystyle i^{*}\in D}, which triggers the internal state transition and an output event, is determined by
where
In a coupled DEVS model with non-empty sub-components, i.e., {\displaystyle |D|>0}, there are multiple clocks tracing the components' elapsed times, so the time passage of the model must be handled explicitly.
Given a total stateq=(s,te)∈Q{\displaystyle q=(s,t_{e})\in Q}wheres=(…,(si,tei),…){\displaystyle s=(\ldots ,(s_{i},t_{ei}),\ldots )}
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}, the state trajectory in terms ofTimed Event Systemis
Given a total stateq=(s,ts,te)∈Q{\displaystyle q=(s,t_{s},t_{e})\in Q}wheres=(…,(si,tsi,tei),…){\displaystyle s=(\ldots ,(s_{i},t_{si},t_{ei}),\ldots )}
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}, the state trajectory in terms ofTimed Event Systemis
Given an atomic DEVS model, simulation algorithms are methods to generate the model's legal behaviors, which are trajectories that do not reach illegal states (see Behavior of DEVS). Zeigler originally introduced the algorithms that handle the time variables related to the lifespan {\displaystyle t_{s}\in [0,\infty ]} and the elapsed time {\displaystyle t_{e}\in [0,\infty )} by introducing two other time variables, the last event time, {\displaystyle t_{l}\in [0,\infty )}, and the next event time, {\displaystyle t_{n}\in [0,\infty ]}, with the following relations:[3]

{\displaystyle t_{l}=t-t_{e}}

and

{\displaystyle t_{n}=t_{l}+t_{s}}

where {\displaystyle t\in [0,\infty )} denotes the current time. And the remaining time,

{\displaystyle t_{r}=t_{s}-t_{e}}

is equivalently computed as

{\displaystyle t_{r}=t_{n}-t,}

apparently {\displaystyle t_{r}\in [0,\infty ]}.
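These bookkeeping relations can be checked with a few lines of arithmetic (the numeric values below are arbitrary):

```python
# Relations among the simulator's time variables:
#   t_n = t_l + t_s   (next event time = last event time + lifespan)
#   t_e = t - t_l     (elapsed time since the last event)
#   t_r = t_s - t_e = t_n - t   (remaining time)
t_l, t_s = 3.0, 2.0   # assumed: last event at t = 3, lifespan 2
t = 4.5               # current time

t_n = t_l + t_s       # next event scheduled at t = 5
t_e = t - t_l         # 1.5 time units have elapsed
t_r = t_s - t_e       # 0.5 time units remain
```

Both ways of computing the remaining time agree: `t_s - t_e` equals `t_n - t`.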
Since the behavior of a given atomic DEVS model can be defined in two different views depending on the total state and the external transition function (refer toBehavior of DEVS), the simulation algorithms are also introduced in two different views as below.
Regardless of two different views of total states, algorithms for initialization and internal transition cases are commonly defined as below.
As addressed in Behavior of Atomic DEVS, when a DEVS model receives an input event, upon calling {\displaystyle \delta _{ext}}, the last event time {\displaystyle t_{l}} is set to the current time {\displaystyle t}; thus the elapsed time {\displaystyle t_{e}} becomes zero because {\displaystyle t_{e}=t-t_{l}}.
Notice that, as addressed in Behavior of Atomic DEVS, depending on the value of {\displaystyle b} returned by {\displaystyle \delta _{ext}}, the last event time {\displaystyle t_{l}} and next event time {\displaystyle t_{n}}, and consequently the elapsed time {\displaystyle t_{e}} and lifespan {\displaystyle t_{s}}, are updated (if {\displaystyle b=1}) or preserved (if {\displaystyle b=0}).
Given a coupled DEVS model, simulation algorithms are methods to generate the model's legal behaviors, which are a set of trajectories that do not reach illegal states (see behavior of a Coupled DEVS model). Zeigler originally introduced the algorithms that handle the time variables related to the lifespan {\displaystyle t_{s}\in [0,\infty ]} and the elapsed time {\displaystyle t_{e}\in [0,\infty )} by introducing two other time variables, the last event time, {\displaystyle t_{l}\in [0,\infty )}, and the next event time, {\displaystyle t_{n}\in [0,\infty ]}, with the following relations:[3]

{\displaystyle t_{l}=t-t_{e}}

and

{\displaystyle t_{n}=t_{l}+t_{s}}

where {\displaystyle t\in [0,\infty )} denotes the current time. And the remaining time,

{\displaystyle t_{r}=t_{s}-t_{e}}

is equivalently computed as

{\displaystyle t_{r}=t_{n}-t,}

apparently {\displaystyle t_{r}\in [0,\infty ]}. Based on these relationships, the algorithms to simulate the behavior of a given Coupled DEVS are written as follows.
FD-DEVS(Finite & Deterministic Discrete Event System Specification) is a formalism for modeling and analyzingdiscrete event dynamic systemsin both simulation and verification ways. FD-DEVS also provides modular and hierarchical modeling features which have been inherited from Classic DEVS.
FD-DEVS was originally named "Schedule-Controllable DEVS"[24] and was designed to support verification analysis of its networks, which had been an open problem of the DEVS formalism for 30 years. In addition, it was designed to resolve the so-called "OPNA" problem of SP-DEVS. From the viewpoint of Classic DEVS, FD-DEVS has three restrictions:
The third restriction can also be seen as a relaxation of SP-DEVS, where the schedule is always preserved by any input events. Due to this relaxation, the OPNA problem no longer arises, but there is one limitation: the time-line abstraction that can be used for abstracting elapsed times of SP-DEVS networks is no longer useful for FD-DEVS networks.[24] However, another time abstraction method,[25] invented by Prof. D. Dill, is applicable for obtaining a finite-vertex reachability graph for FD-DEVS networks.
Consider a single ping-pong match with two players. Each player can be modeled by FD-DEVS such that the player model has an input event ?receive and an output event !send, and it has two states: Send and Wait. Once the player gets into "Send", it generates "!send" and goes back to "Wait" after the sending time, which is 0.1 time units. While staying at "Wait", if it gets "?receive", it changes into "Send" again. In other words, the player model stays at "Wait" forever unless it gets "?receive".
To make a complete ping-pong match, one player starts as an offender whose initial state is "Send" and the other starts as a defender whose initial state is "Wait". Thus, in Fig. 1, Player A is the initial offender and Player B is the initial defender. In addition, to make the game continue, each player's "!send" event should be coupled to the other player's "?receive", as shown in Fig. 1.
Consider a toaster with two slots that have their own start knobs, as shown in Fig. 2(a). Each slot has identical functionality except for its toasting time. Initially, a knob is not pushed, but if one pushes the knob, the associated slot starts toasting for its toasting time: 20 seconds for the left slot, 40 seconds for the right slot. After the toasting time, the slot and its knob pop up. Notice that even if one tries to push a knob while its associated slot is toasting, nothing happens.
One can model it with FD-DEVS as shown in Fig. 2(b). The two slots are modeled as atomic FD-DEVS models whose input event is "?push" and output event is "!pop", and whose states are "Idle" (I) and "Toast" (T), with "Idle" as the initial state. When a slot is "Idle" and receives "?push" (because one pushes the knob), its state changes to "Toast". In other words, it stays at "Idle" forever unless it receives the "?push" event. 20 (resp. 40) seconds later, the left (resp. right) slot returns to "Idle".
M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}
where
The formal representation of the player in the ping-pong example shown in Fig. 1 can be given as follows. {\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>} where {\displaystyle X} = {?receive}; {\displaystyle Y} = {!send}; {\displaystyle S} = {Send, Wait}; {\displaystyle s_{0}} = Send for Player A, Wait for Player B; {\displaystyle \tau }(Send) = 0.1, {\displaystyle \tau }(Wait) = {\displaystyle \infty }; {\displaystyle \delta _{x}}(Wait, ?receive) = (Send, 1), {\displaystyle \delta _{x}}(Send, ?receive) = (Send, 0); {\displaystyle \delta _{y}}(Send) = (!send, Wait), {\displaystyle \delta _{y}}(Wait) = ({\displaystyle \phi }, Wait).
The formal representation of the slot of the two-slot toaster of Fig. 2(a) and (b) can be given as follows. {\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>} where {\displaystyle X} = {?push}; {\displaystyle Y} = {!pop}; {\displaystyle S} = {I, T}; {\displaystyle s_{0}} = I; {\displaystyle \tau }(T) = 20 for the left slot, 40 for the right slot, {\displaystyle \tau }(I) = {\displaystyle \infty }; {\displaystyle \delta _{x}}(I, ?push) = (T, 1), {\displaystyle \delta _{x}}(T, ?push) = (T, 0); {\displaystyle \delta _{y}}(T) = (!pop, I), {\displaystyle \delta _{y}}(I) = ({\displaystyle \phi }, I).
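The slot model can be encoded as plain lookup tables (an illustrative sketch; the second component of each δx value is the reschedule flag b, where 1 resets the schedule to τ of the next state and 0 keeps the current schedule):

```python
INF = float("inf")

# FD-DEVS slot of the two-slot toaster (left slot) as lookup tables.
tau     = {"I": INF, "T": 20}
delta_x = {("I", "?push"): ("T", 1),   # start toasting: reschedule
           ("T", "?push"): ("T", 0)}   # push while toasting: ignored
delta_y = {"T": ("!pop", "I"), "I": (None, "I")}

def on_input(state, sigma, x):
    """Apply delta_x; sigma is the remaining lifespan of the state."""
    s2, b = delta_x[(state, x)]
    return (s2, tau[s2]) if b == 1 else (s2, sigma)

state, sigma = "I", INF
state, sigma = on_input(state, sigma, "?push")    # Idle -> Toast, 20 s left
state2, sigma2 = on_input(state, sigma, "?push")  # second push has no effect
```

The second push leaves both the state and the schedule untouched, matching δx(T, ?push) = (T, 0) above.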
As mentioned above, FD-DEVS is a relaxation of SP-DEVS; that is, FD-DEVS is a superclass of SP-DEVS. We give an FD-DEVS model of the crosswalk light controller that is used for SP-DEVS in this article. {\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>} where {\displaystyle X} = {?p}; {\displaystyle Y} = {!g:0, !g:1, !w:0, !w:1}; {\displaystyle S} = {BG, BW, G, GR, R, W, D}; {\displaystyle s_{0}} = BG; {\displaystyle \tau }(BG) = 0.5, {\displaystyle \tau }(BW) = 0.5, {\displaystyle \tau }(G) = 30, {\displaystyle \tau }(GR) = 30, {\displaystyle \tau }(R) = 2, {\displaystyle \tau }(W) = 26, {\displaystyle \tau }(D) = 2; {\displaystyle \delta _{x}}(G, ?p) = (GR, 0), {\displaystyle \delta _{x}}(s, ?p) = (s, 0) if s {\displaystyle \neq } G; {\displaystyle \delta _{y}}(BG) = (!g:1, BW), {\displaystyle \delta _{y}}(BW) = (!w:0, G), {\displaystyle \delta _{y}}(G) = ({\displaystyle \phi }, G), {\displaystyle \delta _{y}}(GR) = (!g:0, R), {\displaystyle \delta _{y}}(R) = (!w:1, W), {\displaystyle \delta _{y}}(W) = (!w:0, D), {\displaystyle \delta _{y}}(D) = (!g:1, G);
An FD-DEVS model {\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>} is a DEVS {\displaystyle {\mathcal {M}}=<X,Y,S',s_{0}',ta,\delta _{ext},\delta _{int},\lambda >} where
δext(s,ts,te,x)={(s′,ts−te)ifδx(s,x)=(s′,0)(s′,τ(s′))ifδx(s,x)=(s′,1){\displaystyle \delta _{ext}(s,t_{s},t_{e},x)={\begin{cases}(s',t_{s}-t_{e})&{\text{if }}\delta _{x}(s,x)=(s',0)\\(s',\tau (s'))&{\text{if }}\delta _{x}(s,x)=(s',1)\\\end{cases}}}
For details of DEVS behavior, the readers can refer toBehavior of Atomic DEVS
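The two cases of the induced external transition above can be sketched directly. The names are illustrative; `delta_x` and `tau` are the ping-pong player's tables from the example earlier in this section.

```python
# Induced external transition of the DEVS obtained from an FD-DEVS model:
# delta_x returns (s', b); b = 0 keeps the schedule (new lifespan is the
# remaining time ts - te), while b = 1 resets it to tau(s').
def delta_ext(s, ts, te, x, delta_x, tau):
    s2, b = delta_x[(s, x)]
    return (s2, ts - te) if b == 0 else (s2, tau[s2])

# Ping-pong player tables:
tau = {"Wait": float("inf"), "Send": 0.1}
delta_x = {("Wait", "?receive"): ("Send", 1),
           ("Send", "?receive"): ("Send", 0)}

# ?receive while waiting reschedules to tau(Send) = 0.1:
resched = delta_ext("Wait", float("inf"), 3.0, "?receive", delta_x, tau)
# ?receive while already sending is absorbed; the remaining time shrinks:
absorbed = delta_ext("Send", 0.1, 0.05, "?receive", delta_x, tau)
```

The `b = 0` branch is exactly the schedule-preserving case: the new lifespan is whatever time was left on the old schedule.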
Fig. 3 shows an event segment (top) and the associated state trajectory (bottom) of Player A, who plays the ping-pong game introduced in Fig. 1. In Fig. 3, the status of Player A is described as (state, lifespan, elapsed time) = ({\displaystyle s,t_{s},t_{e}}), and the line segments at the bottom of Fig. 3 denote the value of the elapsed time. Since the initial state of Player A is "Send" and its lifespan is 0.1 seconds, the height of (Send, 0.1, {\displaystyle t_{e}}) is 0.1, which is the value of {\displaystyle t_{s}}. After changing into (Wait, inf, 0), when {\displaystyle t_{e}} is reset to 0, Player A doesn't know when {\displaystyle t_{e}} becomes 0 again. However, since Player B sends back the ball to Player A 0.1 seconds later, Player A gets back to (Send, 0.1, 0) at time 0.2. From that time, 0.1 seconds later when Player A's status becomes (Send, 0.1, 0.1), Player A sends back the ball to Player B and gets into (Wait, inf, 0). Thus, these cyclic state transitions, moving between "Send" and "Wait", go on forever.
Fig. 4 shows an event segment (top) and the associated state trajectory (bottom) of the left slot of the two-slot toaster introduced in Fig. 2. Like Fig. 3, the status of the left slot is described as (state, lifespan, elapsed time) = ({\displaystyle s,t_{s},t_{e}}) in Fig. 4. Since the initial state of the toaster is "I" and its lifespan is infinity, the height of (I, inf, {\displaystyle t_{e}}) is determined by when ?push occurs. Fig. 4 illustrates the case where ?push happens at time 40 and the toaster changes into (T, 20, 0). From that time, 20 seconds later when its status becomes (T, 20, 20), the toaster gets back to (I, inf, 0), and we don't know when it gets back to "T" again. Fig. 4 shows the case where ?push occurs at time 90, so the toaster gets into (T, 20, 0). Notice that even though someone pushes again at time 97, the status (T, 20, 7) doesn't change at all because {\displaystyle \delta _{x}}(T, ?push) = (T, 0).
The property of non-negative rational-valued lifespans which can be preserved or changed by input events along with finite numbers of states and events guarantees that the behavior of FD-DEVS networks can be abstracted as an equivalent finite-vertex reachability graph by abstracting the infinitely-many values of the elapsed times using the time abstracting technique introduced by Prof. D. Dill.[25]An algorithm generating a finite-vertex reachability graph (RG) has been introduced by Zeigler.[22][28]
Fig. 5 shows the reachability graph of the two-slot toaster shown in Fig. 2. In the reachability graph, each vertex has its own discrete state and a time zone, which is a set of ranges of {\displaystyle t_{e1},t_{e2}} and {\displaystyle t_{e1}-t_{e2}}. For example, for node (6) of Fig. 5, the discrete state information is ((E, {\displaystyle \infty }), (T, 40)), and the time zone is {\displaystyle 0\leq t_{e1}\leq 40,0\leq t_{e2}\leq 40,-20\leq t_{e1}-t_{e2}\leq 0}. Each directed arc shows how its source vertex changes into the destination vertex along with an associated event and a set of reset models. For example, the transition arc from (6) to (5) is triggered by the push1 event. The set {1} on the arc denotes that the elapsed time of component 1 (that is, {\displaystyle t_{e1}}) is reset to 0 when the transition from (6) to (5) occurs.[22]
As a qualitative property, safety of a FD-DEVS network is decidable by (1) generating RG of the given network and (2) checking whether some bad states are reachable or not.[21]
As a qualitative property, liveness of a FD-DEVS network is decidable by (1) generating RG of the given network, (2) from RG, generating kerneldirected acyclic graph(KDAG) in which a vertex isstrongly connected component, and (3) checking if a vertex of KDAG contains a state transition cycle which contains a set of liveness states.[21]
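Step (2) of the safety procedure reduces to graph search over the reachability graph. A minimal sketch, with a made-up graph and a hypothetical set of bad vertices (the actual RG vertices carry discrete states and time zones, as described above):

```python
from collections import deque

# Toy reachability graph: vertex -> list of successor vertices.
graph = {0: [1, 2], 1: [3], 2: [0], 3: [1]}
bad = {4}                                   # hypothetical unsafe vertices

def reachable(graph, start):
    """Breadth-first search; returns the set of reachable vertices."""
    seen, frontier = {start}, deque([start])
    while frontier:
        v = frontier.popleft()
        for w in graph.get(v, []):
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen

safe = reachable(graph, 0).isdisjoint(bad)  # safety: no bad vertex reachable
```

Liveness checking additionally needs the strongly connected components of the graph (step (2) of the liveness procedure), which can be computed with Tarjan's or Kosaraju's algorithm on the same vertex set.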
The feature that all characteristic functions {\displaystyle \tau ,\delta _{x},\delta _{y}} of FD-DEVS are deterministic can be seen as a limitation when modeling systems that have non-deterministic behaviors. For example, if a player of the ping-pong game shown in Fig. 1 has a stochastic lifespan at the "Send" state, FD-DEVS doesn't capture the non-determinism effectively.
There are two open source libraries, DEVS# written in C#[29] and XSY written in Python,[30] that support some reachability graph-based verification algorithms for checking safety and liveness.
For standardization of DEVS, especially using FDDEVS, Dr. Saurabh Mittal together with co-workers has worked on defining an XML format of FDDEVS.[31] This standard XML format was used for UML execution.[32]
SP-DEVS(Schedule-Preserving Discrete Event System Specification) is a formalism for modeling and analyzing discrete event systems in both simulation and verification ways. SP-DEVS also provides modular and hierarchical modeling features which have been inherited from the Classic DEVS.
SP-DEVS has been designed to support verification analysis of its networks by guaranteeing that a finite-vertex reachability graph of the original networks can be obtained, which had been an open problem of the DEVS formalism for roughly 30 years. To get such a reachability graph of its networks, three restrictions are imposed on SP-DEVS:
Thus, SP-DEVS is a sub-class of both DEVS and FD-DEVS. These three restrictions ensure that the SP-DEVS class is closed under coupling even though the number of states is finite. This property enables finite-vertex graph-based verification of some qualitative properties and a quantitative property, even with SP-DEVS coupled models.
Consider a crosswalk system. Since a red light (resp. don't-walk light) behaves the opposite way of a green light (resp. walk light), for simplicity, we consider just two lights: a green light (G) and a walk light (W); and one push button as shown in Fig. 1. We want to control two lights of G and W with a set of timing constraints.
To initialize the two lights, it takes 0.5 seconds to turn G on, and 0.5 seconds later, W goes off. Then, every 30 seconds, there is a chance that G goes off and W comes on if someone has pushed the push button. For safety reasons, W comes on two seconds after G goes off. 26 seconds later, W goes off, and two seconds later G comes back on. These behaviors repeat.
To build a controller for the above requirements, we can consider one input event 'push-button' (abbreviated ?p) and four output events 'green-on' (!g:1), 'green-off' (!g:0), 'walk-on' (!w:1) and 'walk-off' (!w:0), which will be used as command signals for the green light and the walk light. As the set of states of the controller, we consider 'booting-green' (BG), 'booting-walk' (BW), 'green-on' (G), 'green-to-red' (GR), 'red-on' (R), 'walk-on' (W), and 'delay' (D). Let's design the state transitions as shown in Fig. 2. Initially, the controller starts at BG, whose lifespan is 0.5 seconds. After the lifespan, it moves to the BW state; at this moment, it generates the 'green-on' event, too. After staying 0.5 seconds at BW, it moves to the G state, whose lifespan is 30 seconds. The controller can keep staying at G by looping from G to G without generating any output event, or it can move to the GR state when it receives the external input event ?p. However, the actual staying time at GR is the remaining time for looping at G. From GR, it moves to the R state, generating the output event !g:0; the R state lasts two seconds, after which it moves to the W state with the output event !w:1. 26 seconds later, it moves to the D state, generating !w:0, and after staying 2 seconds at D, it moves back to G with the output event !g:1.
The above controller for crosswalk lights can be modeled by an atomic SP-DEVS model. Formally, an atomic SP-DEVS is a 7-tupleM=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}
where
The above controller shown in Fig. 2 can be written as {\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>} where {\displaystyle X} = {?p}; {\displaystyle Y} = {!g:0, !g:1, !w:0, !w:1}; {\displaystyle S} = {BG, BW, G, GR, R, W, D}; {\displaystyle s_{0}} = BG; {\displaystyle \tau }(BG) = 0.5, {\displaystyle \tau }(BW) = 0.5, {\displaystyle \tau }(G) = 30, {\displaystyle \tau }(GR) = 30, {\displaystyle \tau }(R) = 2, {\displaystyle \tau }(W) = 26, {\displaystyle \tau }(D) = 2; {\displaystyle \delta _{x}}(G, ?p) = GR, {\displaystyle \delta _{x}}(s, ?p) = s if s {\displaystyle \neq } G; {\displaystyle \delta _{y}}(BG) = (!g:1, BW), {\displaystyle \delta _{y}}(BW) = (!w:0, G), {\displaystyle \delta _{y}}(G) = ({\displaystyle \phi }, G), {\displaystyle \delta _{y}}(GR) = (!g:0, R), {\displaystyle \delta _{y}}(R) = (!w:1, W), {\displaystyle \delta _{y}}(W) = (!w:0, D), {\displaystyle \delta _{y}}(D) = (!g:1, G);
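Running the controller's autonomous phase (no ?p pressed) can be sketched from these tables. This is an illustrative encoding, not code from any SP-DEVS tool, and only the first two internal transitions are shown.

```python
INF = float("inf")

# SP-DEVS crosswalk controller of Fig. 2 as lookup tables.
tau = {"BG": 0.5, "BW": 0.5, "G": 30, "GR": 30,
       "R": 2, "W": 26, "D": 2}
delta_y = {"BG": ("!g:1", "BW"), "BW": ("!w:0", "G"), "G": (None, "G"),
           "GR": ("!g:0", "R"), "R": ("!w:1", "W"),
           "W": ("!w:0", "D"), "D": ("!g:1", "G")}

t, state, outputs = 0.0, "BG", []
for _ in range(2):            # two internal transitions: BG -> BW -> G
    t += tau[state]           # advance to the end of the state's lifespan
    y, state = delta_y[state] # emit the output event and change state
    outputs.append((t, y))
```

This reproduces the initialization described above: the green light turns on at 0.5 seconds, the walk light goes off at 1.0 seconds, and the controller then sits at G with a 30-second lifespan.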
To capture the dynamics of an atomic SP-DEVS, we need to introduce two variables associated with time. One is the lifespan; the other is the elapsed time since the last resetting. Let {\displaystyle t_{s}\in \mathbb {Q} _{[0,\infty ]}} be the lifespan, which is not continuously increasing but is set when a discrete event happens. Let {\displaystyle t_{e}\in [0,\infty ]} denote the elapsed time, which continuously increases over time if there is no resetting.
Fig. 3 shows a state trajectory associated with an event segment of the SP-DEVS model shown in Fig. 2. The top of Fig. 3 shows an event trajectory in which the horizontal axis is a time axis, so it shows that an event occurs at a certain time; for example, !g:1 occurs at 0.5 and !w:0 at 1.0 time units, and so on. The bottom of Fig. 3 shows the state trajectory associated with the above event segment, in which the state s∈S{\displaystyle s\in S} is associated with its lifespan and its elapsed time in the form (s,ts,te){\displaystyle (s,t_{s},t_{e})}. For example, (G, 30, 11) denotes that the state is G, its lifespan is 30, and the elapsed time is 11 time units. The line segments at the bottom of Fig. 3 show the time flow of the elapsed time, which is the only continuous variable in SP-DEVS.
One interesting feature of SP-DEVS is the preservation of schedules (restriction (3) of SP-DEVS), which is illustrated at time 47 in Fig. 3 when the external event ?p happens. At this moment, even though the state can change from G to GR, the elapsed time does not change, so the line segment is not broken at time 47 and te{\displaystyle t_{e}} can grow up to ts{\displaystyle t_{s}}, which is 30 in this example. Due to this preservation of the schedule under input events, as well as the restriction of the time advance to the non-negative rational numbers (see restriction (2) above), the height of every saw tooth can be a non-negative rational number or infinity (as shown in the bottom of Fig. 3) in an SP-DEVS model.
A SP-DEVS model,M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}is DEVSM=<X,Y,S′,s0′,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S',s_{0}',ta,\delta _{ext},\delta _{int},\lambda >}where
The property of non-negative rational-valued lifespans, which are not changed by input events, along with finite numbers of states and events, guarantees that the behavior of SP-DEVS networks can be abstracted as an equivalent finite-vertex reachability graph by abstracting the infinitely-many values of the elapsed times.
To abstract the infinitely-many cases of elapsed times for each component of SP-DEVS networks, a time-abstraction method, called the time-line abstraction, has been introduced, in which the orders and relative differences of schedules are preserved.[34][35] By using the time-line abstraction technique, the behavior of any SP-DEVS network can be abstracted as a reachability graph whose numbers of vertices and edges are finite.
As a qualitative property, safety of an SP-DEVS network is decidable by (1) generating the finite-vertex reachability graph of the given network and (2) checking whether some bad states are reachable or not.[34]
As a qualitative property, liveness of an SP-DEVS network is decidable by (1) generating the finite-vertex reachability graph (RG) of the given network, (2) from RG, generating the kernel directed acyclic graph (KDAG), in which each vertex is a strongly connected component, and (3) checking whether a vertex of KDAG contains a state transition cycle which contains a set of liveness states.[34]
As a quantitative property, minimum and maximum processing-time bounds between two events in SP-DEVS networks can be computed by (1) generating the finite-vertex reachability graph and (2.a) finding the shortest paths for the minimum processing-time bound and (2.b) finding the longest paths (if available) for the maximum processing-time bound.[35]
Let a total state (s,ts,te){\displaystyle (s,t_{s},t_{e})} of an SP-DEVS model be passive if ts=∞{\displaystyle t_{s}=\infty }; otherwise, let it be active.
One known limitation of SP-DEVS is the phenomenon that "once an SP-DEVS model becomes passive, it never returns to become active" (OPNA). This phenomenon was first found by Hwang,[36] although it was originally called ODNR ("once it dies, it never returns"). It happens because of restriction (3) above, in which no input event can change the schedule, so a passive state cannot be awakened into an active state.
For example, the toaster models drawn in Fig. 3(b) are not SP-DEVS because the total state associated with "idle" (I) is passive, but it moves to an active state, "toast" (T), whose toasting time is 20 seconds or 40 seconds. Actually, the model shown in Fig. 3(b) is FD-DEVS.
There is an open source library, called DEVS#,[29] that supports some algorithms for checking safety and liveness as well as min/max processing-time bounds.
https://en.wikipedia.org/wiki/DEVS
APetri net, also known as aplace/transition net(PT net), is one of severalmathematicalmodeling languagesfor the description ofdistributed systems. It is a class ofdiscrete event dynamic system. A Petri net is a directedbipartite graphthat has two types of elements: places and transitions. Place elements are depicted as white circles and transition elements are depicted as rectangles.
A place can contain any number of tokens, depicted as black circles. A transition is enabled if all places connected to it as inputs contain at least one token. Some sources[1]state that Petri nets were invented in August 1939 byCarl Adam Petri— at the age of 13 — for the purpose of describing chemical processes.
Like industry standards such asUMLactivity diagrams,Business Process Model and Notation, andevent-driven process chains, Petri nets offer agraphical notationfor stepwise processes that include choice,iteration, andconcurrent execution. Unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis[citation needed].
The German computer scientistCarl Adam Petri, after whom such structures are named, analyzed Petri nets extensively in his 1962 dissertationKommunikation mit Automaten.
A Petri net consists ofplaces,transitions, andarcs. Arcs run from a place to a transition or vice versa, never between places or between transitions. The places from which an arc runs to a transition are called theinput placesof the transition; the places to which arcs run from a transition are called theoutput placesof the transition.
Graphically, places in a Petri net may contain a discrete number of marks calledtokens. Any distribution of tokens over the places will represent a configuration of the net called amarking. In an abstract sense relating to a Petri net diagram, a transition of a Petri net mayfireif it isenabled, i.e. there are sufficient tokens in all of its input places; when the transition fires, it consumes the required input tokens, and creates tokens in its output places. A firing is atomic, i.e. a single non-interruptible step.
Unless anexecution policy(e.g. a strict ordering of transitions, describing precedence) is defined, the execution of Petri nets isnondeterministic: when multiple transitions are enabled at the same time, they will fire in any order.
Since firing is nondeterministic, and multiple tokens may be present anywhere in the net (even in the same place), Petri nets are well suited for modeling theconcurrentbehavior of distributed systems.
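The enabling and firing rule described above can be sketched in a few lines of code. This is an illustrative sketch only; the place and transition names, and the dictionary encoding of arc weights, are invented for the example.

```python
# Markings map places to token counts; a transition is described by its
# input-arc weights (pre) and output-arc weights (post).

def enabled(marking, pre):
    """A transition is enabled when every input place holds at least as
    many tokens as the weight of its arc."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Atomically consume input tokens and produce output tokens.
    Assumes the transition is enabled."""
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

m0 = {"p1": 1, "p2": 0}
t = {"pre": {"p1": 1}, "post": {"p2": 1}}   # t moves a token from p1 to p2
assert enabled(m0, t["pre"])
m1 = fire(m0, t["pre"], t["post"])
print(m1)   # {'p1': 0, 'p2': 1}
```

After firing, t is no longer enabled in m1, since its input place p1 is empty.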
Petri nets arestate-transition systemsthat extend a class of nets called elementary nets.[2]
Definition 1.Anetis atupleN=(P,T,F){\displaystyle N=(P,T,F)}where
Definition 2.Given a netN= (P,T,F), aconfigurationis a setCso thatC⊆P.
Definition 3.Anelementary netis a net of the formEN= (N,C) where
Definition 4.APetri netis a net of the formPN= (N,M,W), which extends the elementary net so that
If a Petri net is equivalent to an elementary net, thenZcan be the countable set {0,1} and those elements inPthat map to 1 underMform a configuration. Similarly, if a Petri net is not an elementary net, then themultisetMcan be interpreted as representing a non-singleton set of configurations. In this respect,Mextends the concept of configuration for elementary nets to Petri nets.
In the diagram of a Petri net (see top figure right), places are conventionally depicted with circles, transitions with long narrow rectangles and arcs as one-way arrows that show connections of places to transitions or transitions to places. If the diagram were of an elementary net, then those places in a configuration would be conventionally depicted as circles, where each circle encompasses a single dot called atoken. In the given diagram of a Petri net (see right), the place circles may encompass more than one token to show the number of times a place appears in a configuration. The configuration of tokens distributed over an entire Petri net diagram is called amarking.
In the top figure (see right), the place p1 is an input place of transition t; whereas the place p2 is an output place of the same transition. Let PN0 (top figure) be a Petri net with a marking configured M0, and PN1 (bottom figure) be a Petri net with a marking configured M1. The configuration of PN0 enables transition t through the property that all input places have a number of tokens (shown in the figures as dots) "equal to or greater" than the multiplicities on their respective arcs to t. A transition may fire only when it is enabled. In this example, the firing of transition t generates a map that has the marking configured M1 in the image of M0 and results in Petri net PN1, seen in the bottom figure. In the diagram, the firing rule for a transition can be characterised by subtracting a number of tokens from its input places equal to the multiplicity of the respective input arcs and accumulating a new number of tokens at the output places equal to the multiplicity of the respective output arcs.
Remark 1.The precise meaning of "equal to or greater" will depend on the precise algebraic properties of addition being applied onZin the firing rule, where subtle variations on the algebraic properties can lead to other classes of Petri nets; for example, algebraic Petri nets.[3]
The following formal definition is loosely based on (Peterson 1981). Many alternative definitions exist.
APetri net graph(calledPetri netby some, but see below) is a 3-tuple(S,T,W){\displaystyle (S,T,W)}, where
Theflow relationis the set of arcs:F={(x,y)∣W(x,y)>0}{\displaystyle F=\{(x,y)\mid W(x,y)>0\}}. In many textbooks, arcs can only have multiplicity 1. These texts often define Petri nets usingFinstead ofW. When using this convention, a Petri net graph is abipartitedirected graph(S∪T,F){\displaystyle (S\cup T,F)}with node partitionsSandT.
Thepresetof a transitiontis the set of itsinput places:∙t={s∈S∣W(s,t)>0}{\displaystyle {}^{\bullet }t=\{s\in S\mid W(s,t)>0\}};
itspostsetis the set of itsoutput places:t∙={s∈S∣W(t,s)>0}{\displaystyle t^{\bullet }=\{s\in S\mid W(t,s)>0\}}. Definitions of pre- and postsets of places are analogous.
Amarkingof a Petri net (graph) is a multiset of its places, i.e., a mappingM:S→N{\displaystyle M:S\to \mathbb {N} }. We say the marking assigns to each place a number oftokens.
APetri net(calledmarked Petri netby some, see above) is a 4-tuple(S,T,W,M0){\displaystyle (S,T,W,M_{0})}, where
In words
We are generally interested in what may happen when transitions may continually fire in arbitrary order.
We say that a markingM'is reachable froma markingMin one stepifM⟶GM′{\displaystyle M{\underset {G}{\longrightarrow }}M'}; we say that itis reachable fromMifM⟶G∗M′{\displaystyle M{\overset {*}{\underset {G}{\longrightarrow }}}M'}, where⟶G∗{\displaystyle {\overset {*}{\underset {G}{\longrightarrow }}}}is thereflexive transitive closureof⟶G{\displaystyle {\underset {G}{\longrightarrow }}}; that is, if it is reachable in 0 or more steps.
For a (marked) Petri netN=(S,T,W,M0){\displaystyle N=(S,T,W,M_{0})}, we are interested in the firings that can be performed starting with the initial markingM0{\displaystyle M_{0}}. Its set ofreachable markingsis the setR(N)=D{M′|M0→(S,T,W)∗M′}{\displaystyle R(N)\ {\stackrel {D}{=}}\ \left\{M'{\Bigg |}M_{0}{\xrightarrow[{(S,T,W)}]{*}}M'\right\}}
Thereachability graphofNis the transition relation⟶G{\displaystyle {\underset {G}{\longrightarrow }}}restricted to its reachable markingsR(N){\displaystyle R(N)}. It is thestate spaceof the net.
Afiring sequencefor a Petri net with graphGand initial markingM0{\displaystyle M_{0}}is a sequence of transitionsσ→=⟨t1⋯tn⟩{\displaystyle {\vec {\sigma }}=\langle t_{1}\cdots t_{n}\rangle }such thatM0⟶G,t1M1∧⋯∧Mn−1⟶G,tnMn{\displaystyle M_{0}{\underset {G,t_{1}}{\longrightarrow }}M_{1}\wedge \cdots \wedge M_{n-1}{\underset {G,t_{n}}{\longrightarrow }}M_{n}}. The set of firing sequences is denoted asL(N){\displaystyle L(N)}.
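The reachable markings and the reachability graph defined above can be enumerated by breadth-first search over the step relation, provided the set of reachable markings is finite. The net encoding below (weight dictionaries, markings as tuples over a fixed place order) is an assumption for the example, not standard notation.

```python
from collections import deque

def reachable(places, transitions, m0):
    """Enumerate R(N) and the edges of the reachability graph.
    transitions: {name: (pre, post)} with pre/post = {place: weight}."""
    idx = {p: i for i, p in enumerate(places)}

    def step(m, pre, post):
        # One step of the firing rule; None when the transition is disabled.
        if any(m[idx[p]] < w for p, w in pre.items()):
            return None
        m = list(m)
        for p, w in pre.items():
            m[idx[p]] -= w
        for p, w in post.items():
            m[idx[p]] += w
        return tuple(m)

    seen, frontier, edges = {m0}, deque([m0]), []
    while frontier:
        m = frontier.popleft()
        for name, (pre, post) in transitions.items():
            m2 = step(m, pre, post)
            if m2 is not None:
                edges.append((m, name, m2))
                if m2 not in seen:
                    seen.add(m2)
                    frontier.append(m2)
    return seen, edges

# A two-place net where t1 moves the token right and t2 moves it back:
R, E = reachable(("p1", "p2"),
                 {"t1": ({"p1": 1}, {"p2": 1}),
                  "t2": ({"p2": 1}, {"p1": 1})},
                 (1, 0))
print(sorted(R))   # [(0, 1), (1, 0)]
```

For this small cyclic net, the reachability graph has two vertices and two edges, one per firing.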
A common variation is to disallow arc multiplicities and replace thebagof arcsWwith a simple set, called theflow relation,F⊆(S×T)∪(T×S){\displaystyle F\subseteq (S\times T)\cup (T\times S)}.
This does not limitexpressive poweras both can represent each other.
Another common variation, e.g. in Desel and Juhás (2001),[4]is to allowcapacitiesto be defined on places. This is discussed underextensionsbelow.
The markings of a Petri net(S,T,W,M0){\displaystyle (S,T,W,M_{0})}can be regarded asvectorsof non-negative integers of length|S|{\displaystyle |S|}.
Its transition relation can be described as a pair of|S|{\displaystyle |S|}by|T|{\displaystyle |T|}matrices:
Then their difference
can be used to describe the reachable markings in terms of matrix multiplication, as follows.
For any sequence of transitionsw, writeo(w){\displaystyle o(w)}for the vector that maps every transition to its number of occurrences inw. Then, we have
It must be required thatwis a firing sequence; allowing arbitrary sequences of transitions will generally produce a larger set.
W−=[∗t1t2p110p201p301p400],W+=[∗t1t2p101p210p310p401],WT=[∗t1t2p1−11p21−1p31−1p401]{\displaystyle W^{-}={\begin{bmatrix}*&t1&t2\\p1&1&0\\p2&0&1\\p3&0&1\\p4&0&0\end{bmatrix}},\ W^{+}={\begin{bmatrix}*&t1&t2\\p1&0&1\\p2&1&0\\p3&1&0\\p4&0&1\end{bmatrix}},\ W^{T}={\begin{bmatrix}*&t1&t2\\p1&-1&1\\p2&1&-1\\p3&1&-1\\p4&0&1\end{bmatrix}}}
M0=[1021]{\displaystyle M_{0}={\begin{bmatrix}1&0&2&1\end{bmatrix}}}
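The state equation M′ = M0 + WT·o(w) can be checked directly against the example matrices above, using plain lists and no external libraries. As noted earlier, the equation describes a reachable marking only when w is actually a firing sequence.

```python
# W- and W+ from the example above (rows: p1..p4, columns: t1, t2).
W_minus = [[1, 0], [0, 1], [0, 1], [0, 0]]
W_plus  = [[0, 1], [1, 0], [1, 0], [0, 1]]
# W^T = W+ - W- (the "difference" matrix of the text).
W_T = [[wp - wm for wp, wm in zip(rp, rm)]
       for rp, rm in zip(W_plus, W_minus)]
M0 = [1, 0, 2, 1]

def occurrences(w, n_transitions=2):
    """o(w): number of occurrences of each transition in the sequence w."""
    o = [0] * n_transitions
    for t in w:
        o[t] += 1
    return o

def state_equation(m, w):
    """M' = m + W^T . o(w), computed by matrix-vector multiplication."""
    o = occurrences(w)
    return [mi + sum(row[j] * o[j] for j in range(len(o)))
            for mi, row in zip(m, W_T)]

# Firing t1 once from M0 (indices: t1 = 0, t2 = 1):
print(state_equation(M0, [0]))   # [0, 1, 3, 1]
```

Firing t1 consumes the token in p1 and produces one token each in p2 and p3, as the column (-1, 1, 1, 0) of WT predicts.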
Meseguerand Montanari considered a kind ofsymmetric monoidal categoriesknown asPetri categories.[5]
One thing that makes Petri nets interesting is that they provide a balance between modeling power and analyzability: many things one would like to know about concurrent systems can be automatically determined for Petri nets, although some of those things are very expensive to determine in the general case. Several subclasses of Petri nets have been studied that can still model interesting classes of concurrent systems, while these determinations become easier.
An overview of suchdecision problems, with decidability andcomplexityresults for Petri nets and some subclasses, can be found in Esparza and Nielsen (1995).[6]
Thereachability problemfor Petri nets is to decide, given a Petri netNand a markingM, whetherM∈R(N){\displaystyle M\in R(N)}.
It is a matter of walking the reachability graph defined above until either the requested marking is reached or it is known that it cannot be reached. This is harder than it may seem at first: the reachability graph is generally infinite, and it is not easy to determine when it is safe to stop.
In fact, this problem was shown to beEXPSPACE-hard[7]years before it was shown to be decidable at all (Mayr, 1981). Papers continue to be published on how to do it efficiently.[8]In 2018, Czerwiński et al. improved the lower bound and showed that the problem is notELEMENTARY.[9]In 2021, this problem was shown to beAckermann-complete(thus notprimitive recursive), independently by Jerome Leroux[10]and by Wojciech Czerwiński and Łukasz Orlikowski.[11]These results thus close the long-standing complexity gap.
While reachability seems to be a good tool to find erroneous states, for practical problems the constructed graph usually has far too many states to calculate. To alleviate this problem,linear temporal logicis usually used in conjunction with thetableau methodto prove that such states cannot be reached. Linear temporal logic uses thesemi-decision techniqueto find if indeed a state can be reached, by finding a set of necessary conditions for the state to be reached then proving that those conditions cannot be satisfied.
Petri nets can be described as having different degrees of livenessL1−L4{\displaystyle L_{1}-L_{4}}. A Petri net(N,M0){\displaystyle (N,M_{0})}is calledLk{\displaystyle L_{k}}-liveif and only ifall of its transitions areLk{\displaystyle L_{k}}-live, where a transition is
Note that these are increasingly stringent requirements:Lj+1{\displaystyle L_{j+1}}-liveness impliesLj{\displaystyle L_{j}}-liveness, forj∈1,2,3{\textstyle \textstyle {j\in {1,2,3}}}.
These definitions are in accordance with Murata's overview,[12]which additionally usesL0{\displaystyle L_{0}}-liveas a term fordead.
A place in a Petri net is calledk-boundif it does not contain more thanktokens in all reachable markings, including the initial marking; it is said to besafeif it is 1-bounded; it isboundedif it isk-boundedfor somek.
A (marked) Petri net is calledk-bounded,safe, orboundedwhen all of its places are.
A Petri net (graph) is called(structurally) boundedif it is bounded for every possible initial marking.
A Petri net is bounded if and only if its reachability graph is finite.
Boundedness is decidable by looking atcovering, by constructing theKarp–Miller Tree.
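The covering idea behind the Karp–Miller construction can be sketched as follows: if, during the search, a new marking strictly covers (is componentwise ≥ and somewhere >) a marking on the path leading to it, the intervening firing sequence can be repeated to pump tokens without limit, so the net is unbounded. This is a simplified sketch of that idea, not the full Karp–Miller tree (no ω-acceleration); the net encoding is an assumption for the example.

```python
def is_bounded(transitions, m0, limit=10_000):
    """transitions: iterable of (pre, post) weight tuples over the places;
    markings are tuples over a fixed place order."""
    def fire(m, pre, post):
        if any(mi < pi for mi, pi in zip(m, pre)):
            return None
        return tuple(mi - pi + qi for mi, pi, qi in zip(m, pre, post))

    stack, seen = [(m0, (m0,))], set()      # (marking, path of ancestors)
    while stack:
        m, path = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        if len(seen) > limit:
            raise RuntimeError("exploration limit reached")
        for pre, post in transitions:
            m2 = fire(m, pre, post)
            if m2 is None:
                continue
            # Strict cover of an ancestor on this path => tokens can be
            # pumped by repeating the firings between the two markings.
            if any(m2 != anc and all(a >= b for a, b in zip(m2, anc))
                   for anc in path):
                return False
            stack.append((m2, path + (m2,)))
    return True

# t keeps p1's token and pumps tokens into p2 -> unbounded:
assert not is_bounded([((1, 0), (1, 1))], (1, 0))
# A token cycling between two places is bounded (in fact 1-safe):
assert is_bounded([((1, 0), (0, 1)), ((0, 1), (1, 0))], (1, 0))
```

When no cover is found and the exploration exhausts the state space, the reachability graph is finite, which is exactly the boundedness criterion stated above.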
It can be useful to explicitly impose a bound on places in a given net.
This can be used to model limited system resources.
Some definitions of Petri nets explicitly allow this as a syntactic feature.[13]Formally,Petri nets with place capacitiescan be defined as tuples(S,T,W,C,M0){\displaystyle (S,T,W,C,M_{0})}, where(S,T,W,M0){\displaystyle (S,T,W,M_{0})}is a Petri net,C:P→∣N{\displaystyle C:P\rightarrow \!\!\!\shortmid \mathbb {N} }an assignment of capacities to (some or all) places, and the transition relation is the usual one restricted to the markings in which each place with a capacity has at most that many tokens.
For example, if in the netN, both places are assigned capacity 2, we obtain a Petri net with place capacities, sayN2; its reachability graph is displayed on the right.
Alternatively, places can be made bounded by extending the net. To be exact,
a place can be madek-bounded by adding a "counter-place" with flow opposite to that of the place, and adding tokens to make the total in both placesk.
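The complement ("counter-place") construction just described can be written out mechanically: for a place p to be made k-bounded, add a place whose arcs are the reverse of p's and whose initial tokens bring the total in both places to k. The net encoding and the `_bar` naming are assumptions for the example.

```python
def add_counter_place(transitions, m0, place, k):
    """Return a new net and marking in which `place` is k-bounded.
    transitions: {name: (pre, post)} with pre/post dicts place -> weight."""
    bar = place + "_bar"
    new = {}
    for name, (pre, post) in transitions.items():
        pre2, post2 = dict(pre), dict(post)
        if place in post:            # t puts tokens into place
            pre2[bar] = post[place]  #   -> it must take them from the counter
        if place in pre:             # t takes tokens from place
            post2[bar] = pre[place]  #   -> it returns them to the counter
        new[name] = (pre2, post2)
    m = dict(m0)
    m[bar] = k - m0.get(place, 0)    # totals in place and counter sum to k
    return new, m

# Make p 1-bounded in a net where t just keeps producing into p:
net, m0 = add_counter_place({"t": ({}, {"p": 1})}, {"p": 0}, "p", k=1)
print(net["t"])   # ({'p_bar': 1}, {'p': 1})
print(m0)         # {'p': 0, 'p_bar': 1}
```

In the transformed net, t can fire only while the counter place still holds a token, so p never exceeds one token.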
As well as for discrete events, there are Petri nets for continuous and hybrid discrete-continuous processes[14]that are useful in discrete, continuous and hybridcontrol theory,[15]and related to discrete, continuous and hybridautomata.
There are many extensions to Petri nets. Some of them are completely backwards-compatible (e.g.coloured Petri nets) with the original Petri net, some add properties that cannot be modelled in the original Petri net formalism (e.g. timed Petri nets). Although backwards-compatible models do not extend the computational power of Petri nets, they may have more succinct representations and may be more convenient for modeling.[16]Extensions that cannot be transformed into Petri nets are sometimes very powerful, but usually lack the full range of mathematical tools available to analyse ordinary Petri nets.
The termhigh-level Petri netis used for many Petri net formalisms that extend the basic P/T net formalism; this includes coloured Petri nets, hierarchical Petri nets such asNets within Nets, and all other extensions sketched in this section. The term is also used specifically for the type of coloured nets supported byCPN Tools.
A short list of possible extensions follows:
There are many more extensions to Petri nets. However, it is important to keep in mind that as the complexity of the net increases in terms of extended properties, it becomes harder to use standard tools to evaluate certain properties of the net. For this reason, it is a good idea to use the simplest net type adequate for a given modelling task.
Instead of extending the Petri net formalism, we can also look at restricting it, and look at particular types of Petri nets, obtained by restricting the syntax in a particular way. Ordinary Petri nets are the nets where all arc weights are 1. Restricting further, the following types of ordinary Petri nets are commonly used and studied:
Workflow nets(WF-nets) are a subclass of Petri nets intending to model theworkflowof process activities.[24]The WF-net transitions are assigned to tasks or activities, and places are assigned to the pre/post conditions.
The WF-nets have additional structural and operational requirements, mainly the addition of a single input (source) place with no previous transitions, and output place (sink) with no following transitions. Accordingly, start and termination markings can be defined that represent the process status.
WF-nets have thesoundnessproperty,[24]indicating that a process with a start marking ofktokens in its source place, can reach the termination state marking withktokens in its sink place (defined ask-sound WF-net). Additionally, all the transitions in the process could fire (i.e., for each transition there is a reachable state in which the transition is enabled).
A general sound (G-sound) WF-net is defined as beingk-sound for everyk> 0.[25]
A directedpathin the Petri net is defined as the sequence of nodes (places and transitions) linked by the directed arcs. Anelementary pathincludes every node in the sequence only once.
Awell-handledPetri net is a net in which there are no fully distinct elementary paths between a place and a transition (or transition and a place), i.e., if there are two paths between the pair of nodes then these paths share a node.
An acyclic well-handled WF-net is sound (G-sound).[26]
Extended WF-net is a Petri net that is composed of a WF-net with additional transition t (feedback transition). The sink place is connected as the input place of transition t and the source place as its output place. Firing of the transition causes iteration of the process (Note, the extended WF-net is not a WF-net).[24]
WRI (Well-handled with Regular Iteration) WF-net, is an extended acyclic well-handled WF-net.
WRI-WF-net can be built as composition of nets, i.e., replacing a transition within a WRI-WF-net with a subnet which is a WRI-WF-net. The result is also WRI-WF-net. WRI-WF-nets are G-sound,[26]therefore by using only WRI-WF-net building blocks, one can get WF-nets that are G-sound by construction.
Thedesign structure matrix(DSM) can model process relations, and be utilized for process planning. TheDSM-netsare realization of DSM-based plans into workflow processes by Petri nets, and are equivalent to WRI-WF-nets. The DSM-net construction process ensures the soundness property of the resulting net.
Other ways of modelling concurrent computation have been proposed, includingvector addition systems,communicating finite-state machines,Kahn process networks,process algebra, theactor model, andtrace theory.[27]Different models provide tradeoffs of concepts such ascompositionality,modularity, and locality.
An approach to relating some of these models of concurrency is proposed in the chapter by Winskel and Nielsen.[28]
https://en.wikipedia.org/wiki/Petri_net
In the theory of computation, a branch of theoretical computer science, a pushdown automaton (PDA) is a type of automaton that employs a stack.
Pushdown automata are used in theories about what can be computed by machines. They are more capable thanfinite-state machinesbut less capable thanTuring machines(seebelow).Deterministic pushdown automatacan recognize alldeterministic context-free languageswhile nondeterministic ones can recognize allcontext-free languages, with the former often used inparserdesign.
The term "pushdown" refers to the fact that thestackcan be regarded as being "pushed down" like a tray dispenser at a cafeteria, since the operations never work on elements other than the top element. Astack automaton, by contrast, does allow access to and operations on deeper elements. Stack automata can recognize a strictly larger set of languages than pushdown automata.[1]Anested stack automatonallows full access, and also allows stacked values to be entire sub-stacks rather than just single finite symbols.
Afinite-state machineonly considers the input signal and the current state: it has no stack to work with and therefore is unable to access previous values of the input. It can only choose a new state, the result of following the transition. Apushdown automaton (PDA)differs from a finite state machine in two ways:
A pushdown automaton reads a given input string from left to right. In each step, it chooses a transition by indexing a table by input symbol, current state, and the symbol at the top of the stack. A pushdown automaton can also manipulate the stack, as part of performing a transition. The manipulation can be to push a particular symbol to the top of the stack, or to pop off the top of the stack. The automaton can alternatively ignore the stack, and leave it as it is.
Put together: Given an input symbol, current state, and stack symbol, the automaton can follow a transition to another state, and optionally manipulate (push or pop) the stack.
If, in every situation, at most one such transition action is possible, then the automaton is called adeterministic pushdown automaton(DPDA). In general, if several actions are possible, then the automaton is called ageneral, ornondeterministic,PDA. A given input string may drive a nondeterministic pushdown automaton to one of several configuration sequences; if one of them leads to an accepting configuration after reading the complete input string, the latter is said to belong to thelanguage accepted by the automaton.
We use standard formal language notation:Γ∗{\displaystyle \Gamma ^{*}}denotes the set of finite-lengthstringsover alphabetΓ{\displaystyle \Gamma }andε{\displaystyle \varepsilon }denotes theempty string.
A PDA is formally defined as a 7-tuple:
M=(Q,Σ,Γ,δ,q0,Z,F){\displaystyle M=(Q,\Sigma ,\Gamma ,\delta ,q_{0},Z,F)}where
An element(p,a,A,q,α)∈δ{\displaystyle (p,a,A,q,\alpha )\in \delta }is a transition ofM{\displaystyle M}. It has the intended meaning thatM{\displaystyle M}, in statep∈Q{\displaystyle p\in Q}, on the inputa∈Σ∪{ε}{\displaystyle a\in \Sigma \cup \{\varepsilon \}}and withA∈Γ{\displaystyle A\in \Gamma }as topmost stack symbol, may reada{\displaystyle a}, change the state toq{\displaystyle q}, popA{\displaystyle A}, replacing it by pushingα∈Γ∗{\displaystyle \alpha \in \Gamma ^{*}}. The(Σ∪{ε}){\displaystyle (\Sigma \cup \{\varepsilon \})}component of the transition relation is used to formalize that the PDA can either read a letter from the input, or proceed leaving the input untouched.[citation needed]
In many texts[2]the transition relation is replaced by an (equivalent) formalization, where
Hereδ(p,a,A){\displaystyle \delta (p,a,A)}contains all possible actions in statep{\displaystyle p}withA{\displaystyle A}on the stack, while readinga{\displaystyle a}on the input. One writes for exampleδ(p,a,A)={(q,BA)}{\displaystyle \delta (p,a,A)=\{(q,BA)\}}precisely when(q,BA)∈{(q,BA)},(q,BA)∈δ(p,a,A),{\displaystyle (q,BA)\in \{(q,BA)\},(q,BA)\in \delta (p,a,A),}because((p,a,A),{(q,BA)})∈δ{\displaystyle ((p,a,A),\{(q,BA)\})\in \delta }. Note thatfinitein this definition is essential.
In order to formalize the semantics of the pushdown automaton a description of the current situation is introduced. Any 3-tuple(p,w,β)∈Q×Σ∗×Γ∗{\displaystyle (p,w,\beta )\in Q\times \Sigma ^{*}\times \Gamma ^{*}}is called an instantaneous description (ID) ofM{\displaystyle M}, which includes the current state, the part of the input tape that has not been read, and the contents of the stack (topmost symbol written first). The transition relationδ{\displaystyle \delta }defines the step-relation⊢M{\displaystyle \vdash _{M}}ofM{\displaystyle M}on instantaneous descriptions. For instruction(p,a,A,q,α)∈δ{\displaystyle (p,a,A,q,\alpha )\in \delta }there exists a step(p,ax,Aγ)⊢M(q,x,αγ){\displaystyle (p,ax,A\gamma )\vdash _{M}(q,x,\alpha \gamma )}, for everyx∈Σ∗{\displaystyle x\in \Sigma ^{*}}and everyγ∈Γ∗{\displaystyle \gamma \in \Gamma ^{*}}.
In general pushdown automata are nondeterministic meaning that in a given instantaneous description(p,w,β){\displaystyle (p,w,\beta )}there may be several possible steps. Any of these steps can be chosen in a computation.
With the above definition, in each step a single symbol (the top of the stack) is always popped and replaced with as many symbols as necessary. As a consequence, no step is defined when the stack is empty.
Computations of the pushdown automaton are sequences of steps. The computation starts in the initial stateq0{\displaystyle q_{0}}with the initial stack symbolZ{\displaystyle Z}on the stack, and a stringw{\displaystyle w}on the input tape, thus with initial description(q0,w,Z){\displaystyle (q_{0},w,Z)}.
There are two modes of accepting. The pushdown automaton either accepts by final state, which means after reading its input the automaton reaches an accepting state (inF{\displaystyle F}), or it accepts by empty stack (ε{\displaystyle \varepsilon }), which means after reading its input the automaton empties its stack. The first acceptance mode uses the internal memory (state), the second the external memory (stack).
Formally one defines
Here⊢M∗{\displaystyle \vdash _{M}^{*}}represents thereflexiveandtransitive closureof the step relation⊢M{\displaystyle \vdash _{M}}meaning any number of consecutive steps (zero, one or more).
For a single pushdown automaton these two languages need not be related: they may be equal, but usually this is not the case. A specification of the automaton should also include the intended mode of acceptance. Taken over all pushdown automata, both acceptance conditions define the same family of languages.
Theorem.For each pushdown automatonM{\displaystyle M}one may construct a pushdown automatonM′{\displaystyle M'}such thatL(M)=N(M′){\displaystyle L(M)=N(M')}, and vice versa, for each pushdown automatonM{\displaystyle M}one may construct a pushdown automatonM′{\displaystyle M'}such thatN(M)=L(M′){\displaystyle N(M)=L(M')}
The following is the formal description of the PDA which recognizes the language{0n1n∣n≥0}{\displaystyle \{0^{n}1^{n}\mid n\geq 0\}}by final state:
M=(Q,Σ,Γ,δ,q0,Z,F){\displaystyle M=(Q,\ \Sigma ,\ \Gamma ,\ \delta ,\ q_{0},\ Z,\ F)}, where
The transition relationδ{\displaystyle \delta }consists of the following six instructions:
In words, the first two instructions say that in statepany time the symbol0is read, oneAis pushed onto the stack. Pushing symbolAon top of anotherAis formalized as replacing topAbyAA(and similarly for pushing symbolAon top of aZ).
The third and fourth instructions say that, at any moment the automaton may move from statepto stateq.
The fifth instruction says that in stateq, for each symbol1read, oneAis popped.
Finally, the sixth instruction says that the machine may move from stateqto accepting stateronly when the stack consists of a singleZ.
There seems to be no generally used representation for PDA. Here we have depicted the instruction(p,a,A,q,α){\displaystyle (p,a,A,q,\alpha )}by an edge from statepto stateqlabelled bya;A/α{\displaystyle a;A/\alpha }(reada; replaceAbyα{\displaystyle \alpha }).
The following illustrates how the above PDA computes on different input strings. The subscriptMfrom the step symbol⊢{\displaystyle \vdash }is here omitted.
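The computations illustrated above can be reproduced with a small nondeterministic simulator that explores all instantaneous descriptions (state, remaining input, stack) in breadth-first order. The six instructions are exactly those listed in the text, and acceptance is by final state; the breadth-first driver itself is an illustrative assumption.

```python
# (state, input symbol or "" for epsilon, stack top, new state, push string)
DELTA = [
    ("p", "0", "Z", "p", "AZ"),
    ("p", "0", "A", "p", "AA"),
    ("p", "",  "Z", "q", "Z"),
    ("p", "",  "A", "q", "A"),
    ("q", "1", "A", "q", ""),
    ("q", "",  "Z", "r", "Z"),
]
ACCEPTING = {"r"}

def accepts(word, max_steps=10_000):
    """True iff the PDA above accepts `word` by final state."""
    configs, seen = [("p", word, "Z")], set()
    for _ in range(max_steps):
        if not configs:
            return False
        nxt = []
        for state, rest, stack in configs:
            if state in ACCEPTING and rest == "":
                return True
            if not stack:
                continue                      # no step on an empty stack
            for ss, a, top, st2, push in DELTA:
                if ss != state or top != stack[0]:
                    continue
                if a == "":                    # epsilon move: input untouched
                    c = (st2, rest, push + stack[1:])
                elif rest and rest[0] == a:    # read one input symbol
                    c = (st2, rest[1:], push + stack[1:])
                else:
                    continue
                if c not in seen:
                    seen.add(c)
                    nxt.append(c)
        configs = nxt
    return False

print([w for w in ("", "01", "0011", "001", "010") if accepts(w)])
# -> ['', '01', '0011']
```

A word belongs to the language as soon as one of the explored configuration sequences reaches the accepting state r with the whole input consumed, mirroring the definition of nondeterministic acceptance.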
Everycontext-free grammarcan be transformed into an equivalent nondeterministic pushdown automaton. The derivation process of the grammar is simulated in a leftmost way. Where the grammar rewrites a nonterminal, the PDA takes the topmost nonterminal from its stack and replaces it by the right-hand part of a grammatical rule (expand). Where the grammar generates a terminal symbol, the PDA reads a symbol from input when it is the topmost symbol on the stack (match). In a sense the stack of the PDA contains the unprocessed data of the grammar, corresponding to a pre-order traversal of a derivation tree.
Technically, given a context-free grammar, the PDA has a single state, 1, and its transition relation is constructed as follows.
The PDA accepts by empty stack. Its initial stack symbol is the grammar's start symbol.[3]
For a context-free grammar in Greibach normal form, defining (1, γ) ∈ δ(1, a, A) for each grammar rule A → aγ also yields an equivalent nondeterministic pushdown automaton.[4]
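The expand/match construction described above can be sketched as follows. This is an illustrative simplification with names of our own choosing, using the grammar S → 0S1 | ε generating {0^n 1^n}; the search assumes the grammar causes no unbounded ε-expansion loops (no left recursion):

```python
# Standard CFG -> single-state PDA construction: "expand" a nonterminal on
# the stack top using a rule body, or "match" a terminal on the stack top
# against the next input symbol. Acceptance is by empty stack, starting
# with the grammar's start symbol on the stack.

GRAMMAR = {              # S -> 0 S 1 | epsilon
    'S': ['0S1', ''],
}

def accepts(word, start='S'):
    configs = {(word, start)}        # (remaining input, stack as string, top first)
    seen = set()
    while configs:
        rest, stack = configs.pop()
        if rest == '' and stack == '':
            return True              # accept by empty stack
        if (rest, stack) in seen or stack == '':
            continue
        seen.add((rest, stack))
        top, below = stack[0], stack[1:]
        if top in GRAMMAR:           # expand: replace the nonterminal by a rule body
            for body in GRAMMAR[top]:
                configs.add((rest, body + below))
        elif rest and rest[0] == top:    # match: consume one terminal
            configs.add((rest[1:], below))
    return False
```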
The converse, finding a grammar for a given PDA, is not as easy. The trick is to encode two states of the PDA into the nonterminals of the grammar.
Theorem. For each pushdown automaton M one may construct a context-free grammar G such that N(M) = L(G).[5]
The language of strings accepted by a deterministic pushdown automaton (DPDA) is called a deterministic context-free language. Not all context-free languages are deterministic.[a] As a consequence, the DPDA is a strictly weaker variant of the PDA. Even for regular languages, there is a size-explosion problem: for any recursive function f and for arbitrarily large integers n, there is a PDA of size n describing a regular language whose smallest DPDA has at least f(n) states.[b] For many non-regular PDAs, any equivalent DPDA would require an unbounded number of states.
A finite automaton with access to two stacks is a more powerful device, equivalent in power to a Turing machine.[8] A linear bounded automaton is a device which is more powerful than a pushdown automaton but less so than a Turing machine.[c]
A pushdown automaton is computationally equivalent to a "restricted" Turing machine (TM) with two tapes, restricted in the following manner: on the first tape, the TM can only read the input and move from left to right (it cannot make changes). On the second tape, it can only "push" and "pop" data. Equivalently, it can read, write and move left and right, with the restriction that the only action it can perform at each step is either to delete the left-most character in the string (pop) or to add an extra character to the left of the left-most character in the string (push).
That a PDA is weaker than a TM comes down to the fact that the "pop" operation deletes data. To make a PDA as strong as a TM, we must save somewhere the data lost through "pop". We can achieve this by introducing a second stack. In the two-tape TM model of the previous paragraph, this is equivalent to a TM with three tapes, where the first tape is the read-only input tape, and the second and third tapes are the "push and pop" (stack) tapes. For such a PDA to simulate a given TM, we first give the input of the PDA on the first tape, while keeping both stacks empty. It then pushes all the input from the input tape onto the first stack. When the entire input has been transferred to the first stack, we proceed like a normal TM: moving right on the tape is the same as popping a symbol from the first stack and pushing a (possibly updated) symbol onto the second stack, and moving left corresponds to popping a symbol from the second stack and pushing a (possibly updated) symbol onto the first stack. We hence have a PDA with two stacks that can simulate any TM.
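The tape-as-two-stacks trick in the previous paragraph can be sketched like this (an illustrative class of our own devising): the symbol under the head sits on top of the "right" stack, and moving the head just transfers one symbol between the two stacks.

```python
# A Turing-machine tape represented by two stacks. The head reads the top
# of the right stack; moving the head pops from one stack and pushes onto
# the other, so no information is ever lost.

class TwoStackTape:
    def __init__(self, word, blank='_'):
        self.blank = blank
        self.left = []                                  # symbols left of the head
        self.right = list(reversed(word)) or [blank]    # head reads right[-1]

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if not self.right:
            self.right.append(self.blank)
        self.right[-1] = symbol

    def move_right(self):
        # pop the current symbol from the right stack, push it onto the left
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        # pop from the left stack, push back onto the right
        self.right.append(self.left.pop() if self.left else self.blank)
```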
A generalized pushdown automaton (GPDA) is a PDA that writes an entire string of some known length to the stack or removes an entire string from the stack in one step.
A GPDA is formally defined as a 6-tuple:
where Q, Σ, Γ, q0, and F are defined the same way as for a PDA.
is the transition function.
Computation rules for a GPDA are the same as for a PDA, except that the a_{i+1}'s and b_{i+1}'s are now strings instead of symbols.
GPDAs and PDAs are equivalent in that if a language is recognized by a PDA, it is also recognized by a GPDA, and vice versa.
One can formulate an analytic proof for the equivalence of GPDAs and PDAs using the following simulation:
Let δ(q1, w, x1 x2 ⋯ xm) ⟶ (q2, y1 y2 ⋯ yn) be a transition of the GPDA,
where q1, q2 ∈ Q, w ∈ Σ_ε, x1, x2, …, xm ∈ Γ*, m ≥ 0, y1, y2, …, yn ∈ Γ*, n ≥ 0.
Construct the following transitions for the PDA:
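A single GPDA move thus pops and pushes whole strings at once; the PDA simulation replaces each such move by a chain of single-symbol pops and pushes through fresh intermediate states. A minimal sketch of the generalized step itself (the helper name is hypothetical):

```python
# One GPDA step: pop the string pop_string from the top of the stack and
# push push_string in a single move. The stack is a string, top at index 0.

def gpda_step(stack, pop_string, push_string):
    """Return the new stack, or None if pop_string is not on top."""
    if not stack.startswith(pop_string):
        return None
    return push_string + stack[len(pop_string):]
```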
As a generalization of pushdown automata, Ginsburg, Greibach, and Harrison (1967) investigated stack automata, which may additionally step left or right in the input string (surrounded by special endmarker symbols to prevent slipping out), and step up or down in the stack in read-only mode.[11][12] A stack automaton is called nonerasing if it never pops from the stack. The class of languages accepted by nondeterministic, nonerasing stack automata is NSPACE(n²), which is a superset of the context-sensitive languages.[1] The class of languages accepted by deterministic, nonerasing stack automata is DSPACE(n·log(n)).[1]
An alternating pushdown automaton (APDA) is a pushdown automaton with a state set partitioned into two disjoint subsets Q∃ and Q∀.
States in Q∃ and Q∀ are called existential and universal, respectively. In an existential state an APDA nondeterministically chooses the next state and accepts if at least one of the resulting computations accepts. In a universal state an APDA moves to all next states and accepts if all the resulting computations accept.
The model was introduced by Chandra, Kozen and Stockmeyer.[13] Ladner, Lipton and Stockmeyer[14] proved that this model is equivalent to EXPTIME, i.e. a language is accepted by some APDA if, and only if, it can be decided by an exponential-time algorithm.
Aizikowitz and Kaminski[15] introduced synchronized alternating pushdown automata (SAPDA), which are equivalent to conjunctive grammars in the same way as nondeterministic PDAs are equivalent to context-free grammars.
|
https://en.wikipedia.org/wiki/Pushdown_automaton
|
In quantum computing, quantum finite automata (QFA) or quantum state machines are a quantum analog of probabilistic automata or a Markov decision process. They provide a mathematical abstraction of real-world quantum computers. Several types of automata may be defined, including measure-once and measure-many automata. Quantum finite automata can also be understood as the quantization of subshifts of finite type, or as a quantization of Markov chains. QFAs are, in turn, special cases of geometric finite automata or topological finite automata.
The automata work by receiving a finite-length string σ = (σ0, σ1, ⋯, σk) of letters σi from a finite alphabet Σ, and assigning to each such string a probability Pr(σ) indicating the probability of the automaton being in an accept state; that is, indicating whether the automaton accepted or rejected the string.
The languages accepted by QFAs are not the regular languages of deterministic finite automata, nor are they the stochastic languages of probabilistic finite automata. Study of these quantum languages remains an active area of research.
There is a simple, intuitive way of understanding quantum finite automata. One begins with a graph-theoretic interpretation of deterministic finite automata (DFA). A DFA can be represented as a directed graph, with states as nodes in the graph, and arrows representing state transitions. Each arrow is labelled with a possible input symbol, so that, given a specific state and an input symbol, the arrow points at the next state. One way of representing such a graph is by means of a set of adjacency matrices, with one matrix for each input symbol. In this case, the list of possible DFA states is written as a column vector. For a given input symbol, the adjacency matrix indicates how any given state (row in the state vector) will transition to the next state; a state transition is given by matrix multiplication.
One needs a distinct adjacency matrix for each possible input symbol, since each input symbol can result in a different transition. The entries in the adjacency matrix must be zeros and ones. For any given column in the matrix, only one entry can be non-zero: this is the entry that indicates the next (unique) state transition. Similarly, the state of the system is a column vector, in which only one entry is non-zero: this entry corresponds to the current state of the system. Let Σ denote the set of input symbols. For a given input symbol α ∈ Σ, write U_α for the adjacency matrix that describes the evolution of the DFA to its next state. The set {U_α | α ∈ Σ} then completely describes the state transition function of the DFA. Let Q represent the set of possible states of the DFA. If there are N states in Q, then each matrix U_α is N by N-dimensional. The initial state q0 ∈ Q corresponds to a column vector with a one in the q0'th row. A general state q is then a column vector with a one in the q'th row. By abuse of notation, let q0 and q also denote these two vectors. Then, after reading input symbols αβγ⋯ from the input tape, the state of the DFA will be given by q = ⋯ U_γ U_β U_α q0. The state transitions are given by ordinary matrix multiplication (that is, multiply q0 by U_α, etc.); the order of application is 'reversed' only because we follow the standard notation of linear algebra.
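The adjacency-matrix view can be sketched in a few lines of plain Python. The automaton and all names here are illustrative: two states, where input 1 swaps the states and input 0 leaves them fixed.

```python
# Matrix view of a DFA: one 0/1 adjacency matrix per input symbol, a
# one-hot column vector for the state, and matrix multiplication for
# each state transition.

U = {
    '0': [[1, 0],     # on input 0: state 0 -> 0, state 1 -> 1
          [0, 1]],
    '1': [[0, 1],     # on input 1: state 0 -> 1, state 1 -> 0
          [1, 0]],
}

def matvec(m, v):
    """Multiply matrix m by column vector v."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def run_dfa(word, start=0, n_states=2):
    q = [1 if i == start else 0 for i in range(n_states)]  # one-hot state vector
    for a in word:                 # apply U_a for each letter, left to right
        q = matvec(U[a], q)
    return q.index(1)              # recover the unique current state
```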
The above description of a DFA, in terms of linear operators and vectors, almost begs for generalization: replace the state-vector q by some general vector, and the matrices {U_α} by some general operators. This is essentially what a QFA does: it replaces q by a unit vector, and the {U_α} by unitary matrices. Other, similar generalizations also become obvious: the vector q can be some distribution on a manifold; the set of transition matrices become automorphisms of the manifold; this defines a topological finite automaton. Similarly, the matrices could be taken as automorphisms of a homogeneous space; this defines a geometric finite automaton.
Before moving on to the formal description of a QFA, there are two noteworthy generalizations that should be mentioned and understood. The first is the non-deterministic finite automaton (NFA). In this case, the vector q is replaced by a vector that can have more than one non-zero entry. Such a vector then represents an element of the power set of Q; it is just an indicator function on Q. Likewise, the state transition matrices {U_α} are defined in such a way that a given column can have several non-zero entries. Equivalently, the multiply–add operations performed during component-wise matrix multiplication should be replaced by Boolean and–or operations, that is, so that one is working with a ring of characteristic 2.
A well-known theorem states that, for each DFA, there is an equivalent NFA, and vice versa. This implies that the set of languages that can be recognized by DFAs and NFAs is the same; these are the regular languages. In the generalization to QFAs, the set of recognized languages will be different. Describing that set is one of the outstanding research problems in QFA theory.
Another generalization that should be immediately apparent is to use a stochastic matrix for the transition matrices, and a probability vector for the state; this gives a probabilistic finite automaton. The entries in the state vector must be real numbers, positive, and sum to one, in order for the state vector to be interpreted as a probability. The transition matrices must preserve this property: this is why they must be stochastic. Each state vector should be imagined as specifying a point in a simplex; thus, this is a topological automaton, with the simplex being the manifold, and the stochastic matrices being linear automorphisms of the simplex onto itself. Since each transition is (essentially) independent of the previous one (if we disregard the distinction between accepted and rejected languages), the PFA essentially becomes a kind of Markov chain.
By contrast, in a QFA, the manifold is complex projective space CP^N, and the transition matrices are unitary matrices. Each point in CP^N corresponds to a (pure) quantum-mechanical state; the unitary matrices can be thought of as governing the time evolution of the system (viz. in the Schrödinger picture). The generalization from pure states to mixed states should be straightforward: a mixed state is simply a measure-theoretic probability distribution on CP^N.
A worthy point to contemplate is the distributions that result on the manifold during the input of a language. In order for an automaton to be 'efficient' in recognizing a language, that distribution should be 'as uniform as possible'. This need for uniformity is the underlying principle behind maximum entropy methods: these simply guarantee crisp, compact operation of the automaton. Put another way, the machine learning methods used to train hidden Markov models generalize to QFAs as well: the Viterbi algorithm and the forward–backward algorithm generalize readily to the QFA.
Although the study of QFAs was popularized in the work of Kondacs and Watrous in 1997[1] and later by Moore and Crutchfield,[2] they were described as early as 1971, by Ion Baianu.[3][4]
Measure-once automata were introduced by Cris Moore and James P. Crutchfield.[2] They may be defined formally as follows.
As with an ordinary finite automaton, the quantum automaton is considered to have N possible internal states, represented in this case by an N-state qudit |ψ⟩. More precisely, the N-state qudit |ψ⟩ ∈ P(C^N) is an element of (N−1)-dimensional complex projective space, carrying an inner product ‖·‖ that is the Fubini–Study metric.
The state transitions, transition matrices or de Bruijn graphs are represented by a collection of N×N unitary matrices U_α, with one unitary matrix for each letter α ∈ Σ. That is, given an input letter α, the unitary matrix describes the transition of the automaton from its current state |ψ⟩ to its next state |ψ′⟩:
Thus, the triple (P(C^N), Σ, {U_α | α ∈ Σ}) forms a quantum semiautomaton.
The accept state of the automaton is given by an N×N projection matrix P, so that, given an N-dimensional quantum state |ψ⟩, the probability of |ψ⟩ being in the accept state is
The probability of the state machine accepting a given finite input string σ = (σ0, σ1, ⋯, σk) is given by
Here, the vector |ψ⟩ is understood to represent the initial state of the automaton; that is, the state the automaton was in before it started accepting the string input. The empty string ∅ is understood to be just the unit matrix, so that
is just the probability of the initial state being an accepted state.
Because the left-action of U_α on |ψ⟩ reverses the order of the letters in the string σ, it is not uncommon for QFAs to be defined using a right action on the Hermitian transpose states, simply in order to keep the order of the letters the same.
A language over the alphabet Σ is accepted with probability p by a quantum finite automaton (and a given, fixed initial state |ψ⟩) if, for all sentences σ in the language, one has p ≤ Pr(σ).
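The measure-once acceptance probability can be sketched in plain Python. This is a toy one-qubit automaton of our own choosing (all names illustrative): its single letter 'a' applies a 45° rotation, and the projection P selects the first basis state.

```python
# Measure-once QFA: Pr(sigma) = | P U_{sigma_k} ... U_{sigma_0} |psi> |^2.
# All arithmetic is done with plain Python lists.

import math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

theta = math.pi / 4
U = {'a': [[math.cos(theta), -math.sin(theta)],
           [math.sin(theta),  math.cos(theta)]]}   # a real unitary (rotation)

P = [[1, 0],
     [0, 0]]                                       # project onto accept state |0>

def accept_probability(word, psi=(1.0, 0.0)):
    v = list(psi)
    for letter in word:            # apply U_letter for each letter, in order
        v = matvec(U[letter], v)
    v = matvec(P, v)               # project onto the accept subspace
    return sum(abs(x) ** 2 for x in v)
```

Starting from |0⟩, the empty string is accepted with probability 1, a single 'a' with probability 1/2, and 'aa' (a 90° rotation to the orthogonal state) with probability 0.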
Consider the classical deterministic finite automaton given by the state transition table
The quantum state is a vector, in bra–ket notation,
with the complex numbers a1, a2 normalized so that
The unitary transition matrices are
and
Taking S1 to be the accept state, the projection matrix is
As should be readily apparent, if the initial state is the pure state |S1⟩ or |S2⟩, then the result of running the machine will be exactly identical to that of the classical deterministic finite-state machine. In particular, there is a language accepted by this automaton with probability one, for these initial states, and it is identical to the regular language for the classical DFA, given by the regular expression:
The non-classical behaviour occurs if both a1 and a2 are non-zero. More subtle behaviour occurs when the matrices U0 and U1 are not so simple; see, for example, the de Rham curve as an example of a quantum finite state machine acting on the set of all possible finite binary strings.
Measure-many automata were introduced by Kondacs and Watrous in 1997.[1] The general framework resembles that of the measure-once automaton, except that instead of one projection at the end, a projection, or quantum measurement, is performed after each letter is read. A formal definition follows.
The Hilbert space H_Q is decomposed into three orthogonal subspaces
In the literature, these orthogonal subspaces are usually formulated in terms of the set Q of orthogonal basis vectors for the Hilbert space H_Q. This set of basis vectors is divided up into subsets Q_acc ⊆ Q and Q_rej ⊆ Q, such that
is the linear span of the basis vectors in the accept set. The reject space is defined analogously, and the remaining space is designated the non-halting subspace. There are three projection matrices, P_acc, P_rej and P_non, each projecting to the respective subspace:
and so on. The parsing of the input string proceeds as follows. Consider the automaton to be in a state |ψ⟩. After reading an input letter α, the automaton will be in the state
At this point, a measurement whose three possible outcomes have eigenspaces H_accept, H_reject and H_non-halting is performed on the state |ψ′⟩, at which time its wave-function collapses into one of the three subspaces. The probability of collapse to the "accept" subspace is given by
and analogously for the other two spaces.
If the wave function has collapsed to either the "accept" or "reject" subspace, then further processing halts. Otherwise, processing continues, with the next letter read from the input and applied to what must be an eigenstate of P_non. Processing continues until the whole string is read, or the machine halts. Often, additional symbols κ and $ are adjoined to the alphabet, to act as the left and right end-markers for the string.
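The halt-or-continue loop just described can be sketched in plain Python. The automaton here is an illustrative toy of our own choosing: a 3-dimensional space whose basis states are non-halting, accept and reject, where the single letter 'a' rotates the non-halting state toward the accept state by 45°.

```python
# Measure-many QFA: after each letter, apply the unitary, accumulate the
# probability of collapsing into the accept subspace, and continue with the
# (unnormalized) non-halting component of the state.

import math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def norm2(v):
    return sum(abs(x) ** 2 for x in v)

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
U = {'a': [[c, -s, 0],
           [s,  c, 0],
           [0,  0, 1]]}                      # rotate basis 0 toward basis 1

P_ACC = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]    # accept subspace: basis state 1
P_NON = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]    # non-halting subspace: basis state 0

def measure_many_accept(word, psi=(1.0, 0.0, 0.0)):
    v = list(psi)
    p_accept = 0.0
    for letter in word:
        v = matvec(U[letter], v)              # unitary step
        p_accept += norm2(matvec(P_ACC, v))   # probability of halting in accept now
        v = matvec(P_NON, v)                  # continue with the non-halting part
    return p_accept
```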
In the literature, the measure-many automaton is often denoted by the tuple (Q; Σ; δ; q0; Q_acc; Q_rej). Here, Q, Σ, Q_acc and Q_rej are as defined above. The initial state is denoted by |q0⟩. The unitary transformations are denoted by the map δ,
so that
As of 2019, most quantum computers are implementations of measure-once quantum finite automata, and the software systems for programming them expose the state-preparation of |ψ⟩, the measurement P and a choice of unitary transformations U_α, such as the controlled NOT gate, the Hadamard transform and other quantum logic gates, directly to the programmer.
The primary difference between real-world quantum computers and the theoretical framework presented above is that the initial state preparation cannot ever result in a point-like pure state, nor can the unitary operators be applied with perfect precision. Thus, the initial state must be taken as a mixed state
for some probability distribution p(x) characterizing the ability of the machinery to prepare an initial state close to the desired initial pure state |ψ⟩. This state is not stable, but suffers some amount of quantum decoherence over time. Precise measurements are also not possible, and one instead uses positive operator-valued measures to describe the measurement process. Finally, each unitary transformation is not a single, sharply defined quantum logic gate, but rather a mixture
for some probability distribution p_α(x) describing how well the machinery can effect the desired transformation U_α.
As a result of these effects, the actual time evolution of the state cannot be taken as an infinite-precision pure point operated on by a sequence of arbitrarily sharp transformations, but rather as an ergodic process, or, more accurately, a mixing process that not only concatenates transformations onto a state, but also smears the state over time.
There is no quantum analog to the push-down automaton or stack machine. This is due to the no-cloning theorem: there is no way to make a copy of the current state of the machine, push it onto a stack for later reference, and then return to it.
The above constructions indicate how the concept of a quantum finite automaton can be generalized to arbitrary topological spaces. For example, one may take some (N-dimensional) Riemannian symmetric space to take the place of CP^N. In place of the unitary matrices, one uses the isometries of the Riemannian manifold, or, more generally, some set of open functions appropriate for the given topological space. The initial state may be taken to be a point in the space. The set of accept states can be taken to be some arbitrary subset of the topological space. One then says that a formal language is accepted by this topological automaton if the point, after iteration by the homeomorphisms, intersects the accept set. But, of course, this is nothing more than the standard definition of an M-automaton. The behaviour of topological automata is studied in the field of topological dynamics.
The quantum automaton differs from the topological automaton in that, instead of a binary result (is the iterated point in, or not in, the final set?), one has a probability. The quantum probability is the (square of) the initial state projected onto some final state P; that is, Pr = |⟨P|ψ⟩|². But this probability amplitude is just a very simple function of the distance between the point |P⟩ and the point |ψ⟩ in CP^N, under the distance metric given by the Fubini–Study metric. To recap, the quantum probability of a language being accepted can be interpreted as a metric: the probability of acceptance is unity if the metric distance between the initial and final states is zero, and less than one if the distance is non-zero. Thus, it follows that the quantum finite automaton is just a special case of a geometric automaton or a metric automaton, where CP^N is generalized to some metric space, and the probability measure is replaced by a simple function of the metric on that space.
|
https://en.wikipedia.org/wiki/Quantum_finite_automaton
|
SCXML stands for State Chart XML: State Machine Notation for Control Abstraction. It is an XML-based markup language that provides a generic state-machine-based execution environment based on Harel statecharts.
SCXML is able to describe complex finite-state machines. For example, it is possible to describe notations such as sub-states, parallel states, synchronization, or concurrency in SCXML.
The objective of this standard is to genericize the state diagram notations that are already used in other XML contexts. For example, it is expected that SCXML notations will replace the state machine notations used in the next CCXML 2.0 version (an XML standard designed to provide telephony support to VoiceXML). It could also be used as a multimodal control language in the Multimodal Interaction Activity.
One of the goals of this language is to make sure that it is compatible with CCXML and that there is an easy path for existing CCXML scripts to be converted to SCXML without major changes to the programming model or document structure (for example, by using an XSL transformation).
The current version of the specification was released by the W3C in September 2015.[1]
According to the W3C SCXML specification,[2] SCXML is a general-purpose event-based state machine language that can be used in many ways, including:
The draft W3C VoiceXML 3.0 specification[3] includes State Chart and SCXML representations to define functionality.
Multimodal application designs can use different modalities (for example, voice vs. touchscreen vs. keyboard and mouse) for different parts of a communication best suited to it. For example, voice input can be used to avoid having to type on the small screen of a mobile phone, but the screen may be a faster way of communicating a list or map, compared to listening to long descriptions of available options. SCXML makes it easy to do several things in parallel, and the Interaction Manager SCXML application will maintain the synchronization between Voice and Visual dialogues.
The W3C document Authoring Applications for the Multimodal Architecture[4] describes a multimodal system that implements the W3C Multimodal Architecture and gives an example of a simple multimodal application authored using various W3C markup languages, including SCXML, CCXML, VoiceXML 2.1 and HTML.
The following implementations are inactive, i.e., the last change to their source code was made more than two years ago:
|
https://en.wikipedia.org/wiki/SCXML
|
In mathematics and theoretical computer science, a semiautomaton is a deterministic finite automaton having inputs but no output. It consists of a set Q of states, a set Σ called the input alphabet, and a function T : Q × Σ → Q called the transition function.
Associated with any semiautomaton is a monoid called the characteristic monoid, input monoid, transition monoid or transition system of the semiautomaton, which acts on the set of states Q. This may be viewed either as an action of the free monoid of strings in the input alphabet Σ, or as the induced transformation semigroup of Q.
In older books, such as Clifford and Preston (1967), semigroup actions are called "operands".
In category theory, semiautomata essentially are functors.
A transformation semigroup or transformation monoid is a pair (M, Q) consisting of a set Q (often called the "set of states") and a semigroup or monoid M of functions, or "transformations", mapping Q to itself. They are functions in the sense that every element m of M is a map m : Q → Q. If s and t are two functions of the transformation semigroup, their semigroup product is defined as their function composition (st)(q) = (s ∘ t)(q) = s(t(q)).
Some authors regard "semigroup" and "monoid" as synonyms. Here, a semigroup need not have an identity element; a monoid is a semigroup with an identity element (also called a "unit"). Since the notion of functions acting on a set always includes the notion of an identity function, which does nothing when applied to the set, a transformation semigroup can be made into a monoid by adding the identity function.
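Composition as the semigroup product can be made concrete in a few lines (an illustrative sketch): represent each transformation of Q = {0, 1, 2} as a tuple t, with t[q] the image of q, and multiply by composing.

```python
# A transformation monoid on Q = {0, 1, 2}. Each transformation is a tuple
# t with t[q] = image of q; the semigroup product is function composition
# (st)(q) = s(t(q)).

def compose(s, t):
    return tuple(s[t[q]] for q in range(len(t)))

identity = (0, 1, 2)           # the unit of the monoid
s = (1, 2, 0)                  # a cyclic permutation
t = (0, 0, 1)                  # a non-invertible ("collapsing") map
```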
Let M be a monoid and Q be a non-empty set. If there exists a multiplicative operation
which satisfies the properties
for 1 the unit of the monoid, and
for all q ∈ Q and s, t ∈ M, then the triple (Q, M, μ) is called a right M-act, or simply a right act. In long-hand, μ is the right multiplication of elements of Q by elements of M. The right act is often written as Q_M.
A left act is defined similarly, with
and is often denoted as _M Q.
An M-act is closely related to a transformation monoid. However, the elements of M need not be functions per se; they are just elements of some monoid. Therefore, one must demand that the action of μ be consistent with multiplication in the monoid (i.e. μ(q, st) = μ(μ(q, s), t)), as, in general, this might not hold for an arbitrary μ in the way that it does for function composition.
Once one makes this demand, it is completely safe to drop all parentheses, as the monoid product and the action of the monoid on the set are completely associative. In particular, this allows elements of the monoid to be represented as strings of letters, in the computer-science sense of the word "string". This abstraction then allows one to talk about string operations in general, and eventually leads to the concept of formal languages as being composed of strings of letters.
Another difference between an M-act and a transformation monoid is that for an M-act Q, two distinct elements of the monoid may determine the same transformation of Q. If we demand that this does not happen, then an M-act is essentially the same as a transformation monoid.
For two M-acts Q_M and B_M sharing the same monoid M, an M-homomorphism f : Q_M → B_M is a map f : Q → B such that
for all q ∈ Q_M and m ∈ M. The set of all M-homomorphisms is commonly written as Hom(Q_M, B_M) or Hom_M(Q, B).
The M-acts and M-homomorphisms together form a category called M-Act.[1]
A semiautomaton is a triple (Q, Σ, T) where Σ is a non-empty set, called the input alphabet, Q is a non-empty set, called the set of states, and T is the transition function
When the set of states Q is a finite set (it need not be), a semiautomaton may be thought of as a deterministic finite automaton (Q, Σ, T, q0, A), but without the initial state q0 or the set of accept states A. Alternately, it is a finite-state machine that has no output, only an input.
Any semiautomaton induces an act of a monoid in the following way.
Let Σ* be the free monoid generated by the alphabet Σ (so that the superscript * is understood to be the Kleene star); it is the set of all finite-length strings composed of the letters in Σ.
For every word w in Σ*, let T_w : Q → Q be the function, defined recursively as follows, for all q in Q:
Let M(Q, Σ, T) be the set
The set M(Q, Σ, T) is closed under function composition; that is, for all v, w ∈ Σ*, one has T_w ∘ T_v = T_vw. It also contains T_ε, which is the identity function on Q. Since function composition is associative, the set M(Q, Σ, T) is a monoid: it is called the input monoid, characteristic monoid, characteristic semigroup or transition monoid of the semiautomaton (Q, Σ, T).
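For a finite semiautomaton, the transition monoid can be computed by starting from the identity and the letter maps T_α and closing under composition. A sketch (the function name is our own):

```python
# Compute the transition monoid of a small semiautomaton. Each map on
# Q = {0, ..., n-1} is stored as a tuple of images; composing a known map
# T_w with a letter map T_a yields T_{wa} (apply T_w first, then T_a).

def transition_monoid(letter_maps, n_states):
    identity = tuple(range(n_states))      # T_epsilon
    monoid = {identity}
    frontier = [identity]
    while frontier:
        f = frontier.pop()                 # f = T_w for some word w
        for t in letter_maps.values():
            g = tuple(t[f[q]] for q in range(n_states))   # g = T_{wa}
            if g not in monoid:
                monoid.add(g)
                frontier.append(g)
    return monoid

# Two states: letter 'a' swaps them, letter 'b' sends everything to state 0.
M = transition_monoid({'a': (1, 0), 'b': (0, 0)}, 2)
```

For this example the monoid has four elements: the identity, the swap, and the two constant maps.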
If the set of states Q is finite, then the transition functions are commonly represented as state transition tables. The structure of all possible transitions driven by strings in the free monoid has a graphical depiction as a de Bruijn graph.
The set of states Q need not be finite, or even countable. As an example, semiautomata underpin the concept of quantum finite automata. There, the set of states Q is given by the complex projective space CP^n, and individual states are referred to as n-state qubits. State transitions are given by unitary n × n matrices. The input alphabet Σ remains finite, and other typical concerns of automata theory remain in play. Thus, the quantum semiautomaton may be simply defined as the triple (CP^n, Σ, {U_{σ₁}, U_{σ₂}, …, U_{σₚ}}) when the alphabet Σ has p letters, so that there is one unitary matrix U_σ for each letter σ ∈ Σ. Stated in this way, the quantum semiautomaton has many geometrical generalizations. Thus, for example, one may take a Riemannian symmetric space in place of CP^n, and selections from its group of isometries as transition functions.
The syntactic monoid of a regular language is isomorphic to the transition monoid of the minimal automaton accepting the language.
|
https://en.wikipedia.org/wiki/Semiautomaton
|
In algebra and theoretical computer science, an action or act of a semigroup on a set is a rule which associates to each element of the semigroup a transformation of the set in such a way that the product of two elements of the semigroup (using the semigroup operation) is associated with the composite of the two corresponding transformations. The terminology conveys the idea that the elements of the semigroup are acting as transformations of the set. From an algebraic perspective, a semigroup action is a generalization of the notion of a group action in group theory. From the computer science point of view, semigroup actions are closely related to automata: the set models the state of the automaton and the action models transformations of that state in response to inputs.
An important special case is a monoid action or act, in which the semigroup is a monoid and the identity element of the monoid acts as the identity transformation of a set. From a category theoretic point of view, a monoid is a category with one object, and an act is a functor from that category to the category of sets. This immediately provides a generalization to monoid acts on objects in categories other than the category of sets.
Another important special case is a transformation semigroup. This is a semigroup of transformations of a set, and hence it has a tautological action on that set. This concept is linked to the more general notion of a semigroup by an analogue of Cayley's theorem.
(A note on terminology: the terminology used in this area varies, sometimes significantly, from one author to another. See the article for details.)
Let S be a semigroup. Then a (left) semigroup action (or act) of S is a set X together with an operation • : S × X → X which is compatible with the semigroup operation ∗ as follows: s • (t • x) = (s ∗ t) • x for all s, t in S and x in X.
This is the analogue in semigroup theory of a (left) group action, and is equivalent to a semigroup homomorphism into the set of functions on X. Right semigroup actions are defined in a similar way using an operation • : X × S → X satisfying (x • a) • b = x • (a ∗ b).
If M is a monoid, then a (left) monoid action (or act) of M is a (left) semigroup action of M with the additional property that e • x = x for all x in X,
where e is the identity element of M. This correspondingly gives a monoid homomorphism. Right monoid actions are defined in a similar way. A monoid M with an action on a set is also called an operator monoid.
A semigroup action of S on X can be made into a monoid act by adjoining an identity to the semigroup and requiring that it acts as the identity transformation on X.
If S is a semigroup or monoid, then a set X on which S acts as above (on the left, say) is also known as a (left) S-act, S-set, S-action, S-operand, or left act over S. Some authors do not distinguish between semigroup and monoid actions, by regarding the identity axiom (e • x = x) as empty when there is no identity element, or by using the term unitary S-act for an S-act with an identity.[1]
The defining property of an act is analogous to the associativity of the semigroup operation, and means that all parentheses can be omitted. It is common practice, especially in computer science, to omit the operations as well so that both the semigroup operation and the action are indicated by juxtaposition. In this way strings of letters from S act on X, as in the expression stx for s, t in S and x in X.
It is also quite common to work with right acts rather than left acts.[2] However, every right S-act can be interpreted as a left act over the opposite semigroup, which has the same elements as S, but where multiplication is defined by reversing the factors, s • t = t ∗ s, so the two notions are essentially equivalent. Here we primarily adopt the point of view of left acts.
It is often convenient (for instance if there is more than one act under consideration) to use a letter, such as T, to denote the function T : S × X → X defining the S-action, and hence write T(s, x) in place of s ⋅ x. Then for any s in S, we denote by T_s : X → X the transformation of X defined by T_s(x) = T(s, x).
By the defining property of an S-act, T satisfies T_{s∗t} = T_s ∘ T_t.
Further, consider the function s ↦ T_s. It is the same as curry(T) : S → (X → X) (see Currying). Because curry is a bijection, semigroup actions can be defined as functions S → (X → X) which satisfy curry(T)(s ∗ t) = curry(T)(s) ∘ curry(T)(t).
That is, T is a semigroup action of S on X if and only if curry(T) is a semigroup homomorphism from S to the full transformation monoid of X.
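The curried point of view can be checked mechanically on a small example. Below, the integers mod 4 under multiplication act on {0, 1, 2, 3} by T(s, x) = s·x mod 4; the choice of semigroup and action is illustrative:

```python
# Sketch: a semigroup action as a curried map S -> (X -> X).  Here S is
# the integers mod 4 under multiplication, acting on X = {0,1,2,3} by
# T(s, x) = s*x mod 4; the example is illustrative.

S = range(4)
X = range(4)

def T(s, x):            # the action S x X -> X
    return (s * x) % 4

def curry_T(s):         # T_s as an image tuple over X
    return tuple(T(s, x) for x in X)

# curry(T) is a semigroup homomorphism: T_{s*t} = T_s o T_t.
for s in S:
    for t in S:
        st = (s * t) % 4                      # product in S
        lhs = curry_T(st)                     # T_{s*t}
        rhs = tuple(curry_T(s)[curry_T(t)[x]] for x in X)  # T_s o T_t
        assert lhs == rhs

print("curry(T) is a homomorphism into the full transformation monoid")
```

The loop verifies the defining property pointwise: applying s∗t in one step agrees with applying t, then s.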
Let X and X′ be S-acts. Then an S-homomorphism from X to X′ is a map F : X → X′ such that F(s ⋅ x) = s ⋅ F(x) for all s ∈ S and x ∈ X.
The set of all such S-homomorphisms is commonly written as Hom_S(X, X′).
M-homomorphisms of M-acts, for M a monoid, are defined in exactly the same way.
For a fixed semigroup S, the left S-acts are the objects of a category, denoted S-Act, whose morphisms are the S-homomorphisms. The corresponding category of right S-acts is sometimes denoted by Act-S. (This is analogous to the categories R-Mod and Mod-R of left and right modules over a ring.)
For a monoid M, the categories M-Act and Act-M are defined in the same way.
A correspondence between transformation semigroups and semigroup actions is described below. If we restrict it to faithful semigroup actions, it has nice properties.
Any transformation semigroup can be turned into a semigroup action by the following construction. For any transformation semigroup S of X, define a semigroup action T of S on X as T(s, x) = s(x) for s ∈ S, x ∈ X. This action is faithful, which is equivalent to curry(T) being injective.
Conversely, for any semigroup action T of S on X, define a transformation semigroup S′ = {T_s | s ∈ S}. In this construction we "forget" the set S. S′ is equal to the image of curry(T). Let us denote curry(T) as f for brevity. If f is injective, then it is a semigroup isomorphism from S to S′. In other words, if T is faithful, then we forget nothing important. This claim is made precise by the following observation: if we turn S′ back into a semigroup action T′ of S′ on X, then T′(f(s), x) = T(s, x) for all s ∈ S, x ∈ X. T and T′ are "isomorphic" via f; i.e., we essentially recovered T. Thus, some authors[3] see no distinction between faithful semigroup actions and transformation semigroups.
Transformation semigroups are of essential importance for the structure theory of finite-state machines in automata theory. In particular, a semiautomaton is a triple (Σ, X, T), where Σ is a non-empty set called the input alphabet, X is a non-empty set called the set of states and T is a function T : Σ × X → X called the transition function. Semiautomata arise from deterministic automata by ignoring the initial state and the set of accept states.
Given a semiautomaton, let T_a : X → X, for a ∈ Σ, denote the transformation of X defined by T_a(x) = T(a, x). Then the semigroup of transformations of X generated by {T_a : a ∈ Σ} is called the characteristic semigroup or transition system of (Σ, X, T). This semigroup is a monoid, so this monoid is called the characteristic or transition monoid. It is also sometimes viewed as a Σ*-act on X, where Σ* is the free monoid of strings generated by the alphabet Σ,[note 1] and the action of strings extends the action of Σ via the property T_{vw} = T_w ∘ T_v.
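Extending the letter action to whole strings is naturally a fold: reading a word applies the one-letter transitions left to right. A sketch, with an illustrative three-state table:

```python
# Sketch: extending the action of the alphabet to the free monoid
# Sigma*, so a whole string drives the state.  Reading a string w from
# a state q applies the letter transitions left to right.  The
# three-state table is illustrative.

from functools import reduce

T = {("q0", "a"): "q1", ("q0", "b"): "q0",
     ("q1", "a"): "q2", ("q1", "b"): "q0",
     ("q2", "a"): "q2", ("q2", "b"): "q2"}

def act(q, w):
    """T_w(q): fold the one-letter transition function over the string."""
    return reduce(lambda state, sigma: T[(state, sigma)], w, q)

# The empty string acts as the identity, and act(q, v + w) equals
# act(act(q, v), w) -- the defining property of a monoid act.
print(act("q0", "aa"))  # -> q2
```

The two commented properties are exactly what makes X a Σ*-act: the empty word does nothing, and concatenation of words corresponds to sequential application.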
Krohn–Rhodes theory, sometimes also called algebraic automata theory, gives powerful decomposition results for finite transformation semigroups by cascading simpler components.
|
https://en.wikipedia.org/wiki/Semigroup_action
|
In automata theory, sequential logic is a type of logic circuit whose output depends on the present value of its input signals and on the sequence of past inputs, the input history.[1][2][3][4] This is in contrast to combinational logic, whose output is a function of only the present input. That is, sequential logic has state (memory) while combinational logic does not.
Sequential logic is used to construct finite-state machines, a basic building block in all digital circuitry. Virtually all circuits in practical digital devices are a mixture of combinational and sequential logic.
A familiar example of a device with sequential logic is a television set with "channel up" and "channel down" buttons.[1] Pressing the "up" button gives the television an input telling it to switch to the next channel above the one it is currently receiving. If the television is on channel 5, pressing "up" switches it to receive channel 6. However, if the television is on channel 8, pressing "up" switches it to channel 9. In order for the channel selection to operate correctly, the television must be aware of which channel it is currently receiving, which was determined by past channel selections.[1] The television stores the current channel as part of its state. When a "channel up" or "channel down" input is given to it, the sequential logic of the channel selection circuitry calculates the new channel from the input and the current channel.
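The television example can be sketched as a tiny state machine: the same input ("up") produces different results depending on the stored state. The channel range 1–99 and the wraparound behaviour are illustrative assumptions, not from the article:

```python
# Sketch of the television example: the next channel depends on both
# the input ("up"/"down") and the stored state (the current channel).
# The 1..99 range with wraparound is an illustrative assumption.

class ChannelSelector:
    LOW, HIGH = 1, 99          # assumed channel range

    def __init__(self, channel=5):
        self.channel = channel  # the stored state

    def press(self, button):
        if button == "up":
            self.channel = self.channel + 1 if self.channel < self.HIGH else self.LOW
        elif button == "down":
            self.channel = self.channel - 1 if self.channel > self.LOW else self.HIGH
        return self.channel

tv = ChannelSelector(channel=5)
print(tv.press("up"))  # -> 6: same input, but output depends on state
```

A combinational circuit could never implement `press`, because the output is not a function of the input alone.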
Digital sequential logic circuits are divided into synchronous and asynchronous types. In synchronous sequential circuits, the state of the device changes only at discrete times in response to a clock signal. In asynchronous circuits the state of the device can change at any time in response to changing inputs.
Nearly all sequential logic today is clocked or synchronous logic. In a synchronous circuit, an electronic oscillator called a clock (or clock generator) generates a sequence of repetitive pulses called the clock signal which is distributed to all the memory elements in the circuit. The basic memory element in synchronous logic is the flip-flop. The output of each flip-flop only changes when triggered by the clock pulse, so changes to the logic signals throughout the circuit all begin at the same time, at regular intervals, synchronized by the clock.
The output of all the storage elements (flip-flops) in the circuit at any given time, the binary data they contain, is called the state of the circuit. The state of the synchronous circuit only changes on clock pulses. At each cycle, the next state is determined by the current state and the value of the input signals when the clock pulse occurs.
The main advantage of synchronous logic is its simplicity. The logic gates which perform the operations on the data require a finite amount of time to respond to changes to their inputs. This is called propagation delay. The interval between clock pulses must be long enough so that all the logic gates have time to respond to the changes and their outputs "settle" to stable logic values before the next clock pulse occurs. As long as this condition is met (ignoring certain other details) the circuit is guaranteed to be stable and reliable. This determines the maximum operating speed of the synchronous circuit.
Synchronous logic has two main disadvantages:
Asynchronous (clockless or self-timed) sequential logic is not synchronized by a clock signal; the outputs of the circuit change directly in response to changes in inputs. The advantage of asynchronous logic is that it can be faster than synchronous logic, because the circuit doesn't have to wait for a clock signal to process inputs. The speed of the device is potentially limited only by the propagation delays of the logic gates used.
However, asynchronous logic is more difficult to design and is subject to problems not encountered in synchronous designs. The main problem is that digital memory elements are sensitive to the order that their input signals arrive; if two signals arrive at a flip-flop or latch at almost the same time, which state the circuit goes into can depend on which signal gets to the gate first. Therefore, the circuit can go into the wrong state, depending on small differences in the propagation delays of the logic gates. This is called a race condition. This problem is not as severe in synchronous circuits because the outputs of the memory elements only change at each clock pulse. The interval between clock signals is designed to be long enough to allow the outputs of the memory elements to "settle" so they are not changing when the next clock comes. Therefore, the only timing problems are due to "asynchronous inputs"; inputs to the circuit from other systems which are not synchronized to the clock signal.
Asynchronous sequential circuits are typically used only in a few critical parts of otherwise synchronous systems where speed is at a premium, such as parts of microprocessors and digital signal processing circuits.
The design of asynchronous logic uses different mathematical models and techniques from synchronous logic, and is an active area of research.
|
https://en.wikipedia.org/wiki/Sequential_logic
|
A state diagram is used in computer science and related fields to describe the behavior of systems. State diagrams require that the system is composed of a finite number of states. Sometimes, this is indeed the case, while at other times this is a reasonable abstraction. Many forms of state diagrams exist, which differ slightly and have different semantics.
State diagrams provide an abstract description of a system's behavior. This behavior is analyzed and represented by a series of events that can occur in one or more possible states. Hereby "each diagram usually represents objects of a single class and track the different states of its objects through the system".[1]
State diagrams can be used to graphically represent finite-state machines (also called finite automata). This was introduced by Claude Shannon and Warren Weaver in their 1949 book The Mathematical Theory of Communication. Another source is Taylor Booth in his 1967 book Sequential Machines and Automata Theory. Another possible representation is the state-transition table.
A classic form of state diagram for a finite automaton (FA) is a directed graph with the following elements (Q, Σ, Z, δ, q0, F):[2][3]
The output function ω represents the mapping of ordered pairs of input symbols and states onto output symbols, denoted mathematically as ω : Σ × Q → Z.
For a deterministic finite automaton (DFA), nondeterministic finite automaton (NFA), generalized nondeterministic finite automaton (GNFA), or Moore machine, the input is denoted on each edge. For a Mealy machine, input and output are signified on each edge, separated with a slash "/": "1/0" denotes the state change upon encountering the symbol "1" causing the symbol "0" to be output. For a Moore machine the state's output is usually written inside the state's circle, also separated from the state's designator with a slash "/". There are also variants that combine these two notations.
For example, if a state has a number of outputs (e.g. "a = motor counter-clockwise = 1, b = caution light inactive = 0") the diagram should reflect this: e.g. "q5/1,0" designates state q5 with outputs a = 1, b = 0. This designator will be written inside the state's circle.
S1 and S2 are states and S1 is an accepting state or a final state. Each edge is labeled with the input. This example shows an acceptor for binary numbers that contain an even number of zeros.
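The even-zeros acceptor can be sketched directly: reading "0" toggles between the two states and reading "1" leaves the state unchanged, with S1 both the start and the accepting state:

```python
# Sketch of the acceptor described above: S1 accepts strings containing
# an even number of zeros, S2 is reached after an odd number.  Reading
# "0" toggles the state; reading "1" leaves it unchanged.

def accepts_even_zeros(word):
    state = "S1"                      # start in the accepting state
    for symbol in word:
        if symbol == "0":
            state = "S2" if state == "S1" else "S1"
        # on "1" the state is unchanged
    return state == "S1"

print(accepts_even_zeros("1001"))  # -> True (two zeros)
```

The empty string is accepted, since zero is an even count and the machine never leaves S1.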
S0, S1, and S2 are states. Each edge is labeled with "j/k" where j is the input and k is the output.
Harel statecharts,[5] invented by computer scientist David Harel, are gaining widespread usage since a variant has become part of the Unified Modeling Language (UML).[non-primary source needed] The diagram type allows the modeling of superstates, orthogonal regions, and activities as part of a state.
Classic state diagrams require the creation of distinct nodes for every valid combination of parameters that define the state. For all but the simplest of systems, this can lead to a very large number of nodes and transitions between nodes (state and transition explosion), which reduces the readability of the state diagram. With Harel statecharts it is possible to model multiple cross-functional state diagrams within the statechart. Each of these cross-functional state machines can transition internally without affecting the other state machines. The current state of each cross-functional state machine defines the state of the system. The Harel statechart is equivalent to a state diagram but improves its readability.
There are other sets of semantics available to represent state diagrams. For example, there are tools for modeling and designing logic for embedded controllers.[6] These diagrams, like Harel's original state machines,[7] support hierarchically nested states, orthogonal regions, state actions, and transition actions.[8]
Newcomers to the state machine formalism often confuse state diagrams with flowcharts. The figure below shows a comparison of a state diagram with a flowchart. A state machine (panel (a)) performs actions in response to explicit events. In contrast, the flowchart (panel (b)) automatically transitions from node to node upon completion of activities.[9]
Nodes of flowcharts are edges in the induced graph of states.
The reason is that each node in a flowchart represents a program command.
A program command is an action to be executed.
A command is not a state, but when applied to the program's state, causes a transition to another state.
In more detail, the source code listing represents a program graph.
Executing the program graph (parsing and interpreting) results in a state graph.
So each program graph induces a state graph.
Conversion of the program graph to its associated state graph is called "unfolding" of the program graph.
The program graph is a sequence of commands.
If no variables exist, then the state consists only of the program counter, which keeps track of program location during execution (what is the next command to be applied).
Before executing a command, the program counter is at some position (state before the command is executed).
Executing the command moves the program counter to the next command.
Since the program counter is the whole state, executing the command changed the state.
Thus, the command itself corresponds to a transition between the two states.
Now consider the full case, when variables exist and are affected by the program commands being executed.
Not only does the program counter change between different program counter locations, but variables might also change values due to the commands executed.
Consequently, even if we revisit some program command (e.g. in a loop), this does not imply the program is in the same state.
In the previous case, the program would be in the same state because the whole state is just the program counter. Thus, if the program counter points to the same position (next command) it suffices to specify that we are in the same state.
However, if the state includes variables that change value, we can be at the same program location with different variable values, meaning in a different state in the program's state space.
The term "unfolding" originates from this multiplication of locations when producing the state graph from the program graph.
A self transition is a transition where the initial and the final state are the same.
A representative example is a do loop incrementing some counter until it overflows and becomes 0 again.
Although the do loop executes the same increment command iteratively, its state space is not a cycle but a line.
This results from the state being the program location (here cycling) combined with the counter value, which is strictly increasing (until the overflow). Thus, different states are visited in sequence until the overflow occurs.
After the overflow the counter becomes 0 again, so the initial state is revisited in the state space, closing a cycle in the state space (assuming the counter was initialized to 0).
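The unfolding described above can be made concrete: although the program location cycles on a single increment command, the full state (location, counter) traces a line through the state space until the counter overflows and the initial state is revisited. The 8-bit default and the `"loop"` location label are illustrative:

```python
# Sketch of the do-loop example: the program location cycles, but the
# full state (location, counter) traces a line through state space
# until the counter overflows back to 0, closing the cycle.  The bit
# width and the "loop" location label are illustrative.

def unfold_counter_states(bits=8):
    """List the successive (location, counter) states of the loop."""
    modulus = 1 << bits
    counter = 0
    visited = [("loop", counter)]
    while True:
        counter = (counter + 1) % modulus   # the increment command
        state = ("loop", counter)
        if state in visited:                # initial state revisited:
            break                           # the cycle closes here
        visited.append(state)
    return visited

states = unfold_counter_states(bits=4)
print(len(states))  # -> 16 distinct states before the overflow repeats
```

One program command (the increment) thus unfolds into 2^bits distinct transitions in the state graph, one per counter value.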
The figure above attempts to show that reversal of roles by aligning the arcs of the state diagrams with the processing stages of the flowchart.
One can compare a flowchart to an assembly line in manufacturing because the flowchart describes the progression of some task from beginning to end (e.g., transforming source code input into object code output by a compiler). A state machine generally has no notion of such a progression. The door state machine example shown above is not in a more advanced stage in the "closed" state than in the "opened" state. Rather, it simply reacts differently to the open/close events. A state in a state machine is an efficient way of specifying a behavior, rather than a stage of processing.
An interesting extension is to allow arcs to flow from any number of states to any number of states. This only makes sense if the system is allowed to be in multiple states at once, which implies that an individual state only describes a condition or other partial aspect of the overall, global state. The resulting formalism is known as aPetri net.
Another extension allows the integration of flowcharts within Harel statecharts. This extension supports the development of software that is both event driven and workflow driven.
|
https://en.wikipedia.org/wiki/State_diagram
|
In computer science, more precisely, in the theory of deterministic finite automata (DFA), a synchronizing word or reset sequence is a word in the input alphabet of the DFA that sends any state of the DFA to one and the same state.[1] That is, if an ensemble of copies of the DFA are each started in different states, and all of the copies process the synchronizing word, they will all end up in the same state. Not every DFA has a synchronizing word; for instance, a DFA with two states, one for words of even length and one for words of odd length, can never be synchronized.
Given a DFA, the problem of determining if it has a synchronizing word can be solved in polynomial time[2] using a theorem due to Ján Černý. A simple approach considers the power set of states of the DFA, and builds a directed graph where nodes belong to the power set, and a directed edge describes the action of the transition function. A path from the node of all states to a singleton state shows the existence of a synchronizing word. This algorithm is exponential in the number of states. A polynomial algorithm results however, due to a theorem of Černý that exploits the substructure of the problem, and shows that a synchronizing word exists if and only if every pair of states has a synchronizing word.
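The power-set approach in the paragraph above can be sketched as a breadth-first search over subsets of states: start from the set of all states, follow each letter, and stop at a singleton. This is the exponential method the text describes, fine for tiny automata; the 4-state example automaton (cyclic shift plus a letter that merges two states) is illustrative:

```python
# Sketch of the power-set approach: BFS from the set of all states over
# subsets, following each letter, until a singleton is reached.  This
# is the exponential method described in the text, fine for tiny DFAs.

from collections import deque

def synchronizing_word(states, alphabet, delta):
    """Return a synchronizing word, or None if none exists."""
    start = frozenset(states)
    seen = {start: ""}          # subset -> word reaching it
    queue = deque([start])
    while queue:
        subset = queue.popleft()
        if len(subset) == 1:
            return seen[subset]
        for a in alphabet:
            image = frozenset(delta[(q, a)] for q in subset)
            if image not in seen:
                seen[image] = seen[subset] + a
                queue.append(image)
    return None

# Illustrative 4-state automaton: 'a' maps i -> i+1 mod 4, while 'b'
# fixes every state except 0, which it sends to 1 (so 'b' merges 0, 1).
delta = {}
for i in range(4):
    delta[(i, "a")] = (i + 1) % 4
    delta[(i, "b")] = 1 if i == 0 else i

w = synchronizing_word(range(4), "ab", delta)
print(w is not None)  # -> True: a synchronizing word exists
```

Because BFS explores subsets in order of word length, the word it returns is a shortest one reachable through this subset graph; the pair-merging theorem mentioned above is what lets the mere existence question be answered in polynomial time instead.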
The problem of estimating the length of synchronizing words has a long history and was posed independently by several authors, but it is commonly known as the Černý conjecture. In 1969, Ján Černý conjectured that (n − 1)² is the upper bound for the length of the shortest synchronizing word for any n-state complete DFA (a DFA with complete state transition graph).[3] If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata (indexed by the number n of states) for which the shortest reset words have this length.[4] The best upper bound known is 0.1654n³, far from the lower bound.[5] For n-state DFAs over a k-letter input alphabet, an algorithm by David Eppstein finds a synchronizing word of length at most 11n³/48 + O(n²), and runs in time complexity O(n³ + kn²). This algorithm does not always find the shortest possible synchronizing word for a given automaton; as Eppstein also shows, the problem of finding the shortest synchronizing word is NP-complete. However, for a special class of automata in which all state transitions preserve the cyclic order of the states, he describes a different algorithm with time O(kn²) that always finds the shortest synchronizing word, proves that these automata always have a synchronizing word of length at most (n − 1)² (the bound given in Černý's conjecture), and exhibits examples of automata with this special form whose shortest synchronizing word has length exactly (n − 1)².[2]
The road coloring problem is the problem of labeling the edges of a regular directed graph with the symbols of a k-letter input alphabet (where k is the outdegree of each vertex) in order to form a synchronizable DFA. It was conjectured in 1970 by Benjamin Weiss and Roy Adler that any strongly connected and aperiodic regular digraph can be labeled in this way; their conjecture was proven in 2007 by Avraham Trahtman.[6][7]
A transformation semigroup is synchronizing if it contains an element of rank 1, that is, an element whose image is of cardinality 1.[8] A DFA corresponds to a transformation semigroup with a distinguished generator set.
|
https://en.wikipedia.org/wiki/Synchronizing_word
|
In algebra, a transformation semigroup (or composition semigroup) is a collection of transformations (functions from a set to itself) that is closed under function composition. If it includes the identity function, it is a monoid, called a transformation (or composition) monoid. This is the semigroup analogue of a permutation group.
A transformation semigroup of a set has a tautological semigroup action on that set. Such actions are characterized by being faithful, i.e., if two elements of the semigroup have the same action, then they are equal.
An analogue of Cayley's theorem shows that any semigroup can be realized as a transformation semigroup of some set.
In automata theory, some authors use the term transformation semigroup to refer to a semigroup acting faithfully on a set of "states" different from the semigroup's base set.[1] There is a correspondence between the two notions.
A transformation semigroup is a pair (X, S), where X is a set and S is a semigroup of transformations of X. Here a transformation of X is just a function from a subset of X to X, not necessarily invertible, and therefore S is simply a set of transformations of X which is closed under composition of functions. The set of all partial functions on a given base set, X, forms a regular semigroup called the semigroup of all partial transformations (or the partial transformation semigroup on X), typically denoted by PT_X.[2]
If S includes the identity transformation of X, then it is called a transformation monoid. Any transformation semigroup S determines a transformation monoid M by taking the union of S with the identity transformation. A transformation monoid whose elements are invertible is a permutation group.
The set of all transformations of X is a transformation monoid called the full transformation monoid (or semigroup) of X. It is also called the symmetric semigroup of X and is denoted by T_X. Thus a transformation semigroup (or monoid) is just a subsemigroup (or submonoid) of the full transformation monoid of X.
If (X, S) is a transformation semigroup then X can be made into a semigroup action of S by evaluation: s • x = s(x) for s ∈ S, x ∈ X.
This is a monoid action if S is a transformation monoid.
The characteristic feature of transformation semigroups, as actions, is that they are faithful, i.e., if s • x = t • x for all x in X, then s = t. Conversely if a semigroup S acts on a set X by T(s, x) = s • x then we can define, for s ∈ S, a transformation T_s of X by T_s(x) = T(s, x).
The map sending s to T_s is injective if and only if (X, T) is faithful, in which case the image of this map is a transformation semigroup isomorphic to S.
In group theory, Cayley's theorem asserts that any group G is isomorphic to a subgroup of the symmetric group of G (regarded as a set), so that G is a permutation group. This theorem generalizes straightforwardly to monoids: any monoid M is a transformation monoid of its underlying set, via the action given by left (or right) multiplication. This action is faithful because if ax = bx for all x in M, then by taking x equal to the identity element, we have a = b.
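The Cayley representation for monoids can be checked on a small example: each element s is sent to the transformation "left-multiply by s" of the underlying set. The monoid below, Z₆ under multiplication mod 6, is illustrative:

```python
# Sketch of the Cayley representation: each element s of a finite
# monoid is sent to the transformation x -> s*x on the underlying set.
# The monoid Z_6 under multiplication mod 6 is illustrative.

M = list(range(6))

def mul(a, b):
    return (a * b) % 6

def left_mult(s):
    """The transformation x -> s*x, as an image tuple over M."""
    return tuple(mul(s, x) for x in M)

# The map s -> left_mult(s) is a homomorphism ...
for s in M:
    for t in M:
        st_image = left_mult(mul(s, t))
        composed = tuple(left_mult(s)[left_mult(t)[x]] for x in M)
        assert st_image == composed
# ... and it is injective because M has an identity (evaluate at x = 1).
assert len({left_mult(s) for s in M}) == len(M)

print("Z_6 embeds in its full transformation monoid")
```

The injectivity argument is exactly the one in the text: two maps that agree everywhere agree in particular at the identity, forcing the elements to be equal.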
For a semigroup S without a (left or right) identity element, we take X to be the underlying set of the monoid corresponding to S to realise S as a transformation semigroup of X. In particular any finite semigroup can be represented as a subsemigroup of transformations of a set X with |X| ≤ |S| + 1, and if S is a monoid, we have the sharper bound |X| ≤ |S|, as in the case of finite groups.[3]: 21
In computer science, Cayley representations can be applied to improve the asymptotic efficiency of semigroups by reassociating multiple composed multiplications. The action given by left multiplication results in right-associated multiplication, and vice versa for the action given by right multiplication. Despite having the same results for any semigroup, the asymptotic efficiency will differ. Two examples of useful transformation monoids given by an action of left multiplication are the functional variation of the difference list data structure, and the monadic Codensity transformation (a Cayley representation of a monad, which is a monoid in a particular monoidal functor category).[4]
Let M be a deterministic automaton with state space S and alphabet A. The words in the free monoid A* induce transformations of S giving rise to a monoid morphism from A* to the full transformation monoid T_S. The image of this morphism is the transformation semigroup of M.[3]: 78
For a regular language, the syntactic monoid is isomorphic to the transformation monoid of the minimal automaton of the language.[3]: 81
|
https://en.wikipedia.org/wiki/Transformation_semigroup
|
In theoretical computer science, a transition system is a concept used in the study of computation. It is used to describe the potential behavior of discrete systems. It consists of states and transitions between states, which may be labeled with labels chosen from a set; the same label may appear on more than one transition. If the label set is a singleton, the system is essentially unlabeled, and a simpler definition that omits the labels is possible.
Transition systems coincide mathematically with abstract rewriting systems (as explained further in this article) and directed graphs. They differ from finite-state automata in several ways:
Transition systems can be represented as directed graphs.
Formally, a transition system is a pair (S, T) where S is a set of states and T, the transition relation, is a subset of S × S. We say that there is a transition from state p to state q if (p, q) ∈ T, and denote it p → q.
A labelled transition system is a tuple (S, Λ, T) where S is a set of states, Λ is a set of labels, and T, the labelled transition relation, is a subset of S × Λ × S. We say that there is a transition from state p to state q with label α iff (p, α, q) ∈ T, and denote it p →α q.
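The definition translates directly into code: a labelled transition system is just a set of (p, label, q) triples, and because T is a relation rather than a function, the same state and label may lead to several successors. The vending-machine states and labels below are illustrative:

```python
# Sketch: a labelled transition system as a set of (p, label, q)
# triples.  The vending-machine states and labels are illustrative.

transitions = {
    ("idle", "coin", "paid"),
    ("paid", "tea", "idle"),
    ("paid", "coffee", "idle"),
    # the same label may appear on more than one transition:
    ("idle", "coin", "idle"),
}

def successors(p, label):
    """All states reachable from p by one transition with this label."""
    return {q for (src, a, q) in transitions if src == p and a == label}

print(sorted(successors("idle", "coin")))  # -> ['idle', 'paid']
```

Note that `successors("idle", "coin")` has two elements: nothing in the definition forces determinism, which is one of the ways transition systems differ from deterministic finite automata.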
Labels can represent different things depending on the language of interest. Typical uses of labels include representing input expected, conditions that must be true to trigger the transition, or actions performed during the transition. Labelled transitions systems were originally introduced asnamedtransition systems.[1]
The formal definition can be rephrased as follows. Labelled state transition systems onS{\displaystyle S}with labels fromΛ{\displaystyle \Lambda }correspondone-to-onewith functionsS→P(Λ×S){\displaystyle S\to {\mathcal {P}}(\Lambda \times S)}, whereP{\displaystyle {\mathcal {P}}}is the (covariant)powerset functor. Under this bijection(S,Λ,T){\displaystyle (S,\Lambda ,T)}is sent toξT:S→P(Λ×S){\displaystyle \xi _{T}:S\to {\mathcal {P}}(\Lambda \times S)}, defined by
In other words, a labelled state transition system is acoalgebrafor the functorP(Λ×−){\displaystyle P(\Lambda \times {-})}.
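The bijection between a labelled transition relation T ⊆ S × Λ × S and its coalgebra map ξT : S → P(Λ × S) can be sketched in Python (illustrative only; the example states and labels are made up):

```python
def to_coalgebra(T):
    """Convert a labelled transition relation (a set of (p, label, q)
    triples) into the coalgebra map xi: state -> set of (label, q) pairs."""
    xi = {}
    for p, a, q in T:
        xi.setdefault(p, set()).add((a, q))
    return lambda p: xi.get(p, set())

def to_relation(states, xi):
    """Inverse direction: recover the transition triples from the map."""
    return {(p, a, q) for p in states for (a, q) in xi(p)}

# A small vending-machine-style LTS:
T = {("s0", "coin", "s1"), ("s1", "coffee", "s0"), ("s1", "tea", "s0")}
xi = to_coalgebra(T)
```

Round-tripping through `to_relation` recovers the original relation, which is exactly the one-to-one correspondence stated above.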
There are many relations between these concepts. Some are simple, such as observing that a labelled transition system where the set of labels consists of only one element is equivalent to an unlabelled transition system. However, not all these relations are equally trivial.
As a mathematical object, an unlabeled transition system is identical with an (unindexed) abstract rewriting system. If we consider the rewriting relation as an indexed set of relations, as some authors do, then a labeled transition system is equivalent to an abstract rewriting system with the indices being the labels. The focus of the study and the terminology are different, however. In a transition system one is interested in interpreting the labels as actions, whereas in an abstract rewriting system the focus is on how objects may be transformed (rewritten) into others.[2]
In model checking, a transition system is sometimes defined to include an additional labeling function for the states as well, resulting in a notion that encompasses that of a Kripke structure.[3]
Action languages are extensions of transition systems, adding a set of fluents F, a set of values V, and a function that maps F × S to V.[4]
https://en.wikipedia.org/wiki/Transition_system
A tree automaton is a type of state machine. Tree automata deal with tree structures, rather than the strings of more conventional state machines.
The following article deals with branching tree automata, which correspond to regular languages of trees.
As with classical automata, finite tree automata (FTA) can be either deterministic or not. According to how the automaton processes the input tree, finite tree automata can be of two types: (a) bottom up, (b) top down. This is an important issue, as although non-deterministic (ND) top-down and ND bottom-up tree automata are equivalent in expressive power, deterministic top-down automata are strictly less powerful than their deterministic bottom-up counterparts, because tree properties specified by deterministic top-down tree automata can only depend on path properties. (Deterministic bottom-up tree automata are as powerful as ND tree automata.)
Abottom-up finite tree automatonoverFis defined as a tuple
(Q,F,Qf, Δ),
where Q is a set of states, F is a ranked alphabet (i.e., an alphabet whose symbols have an associated arity), Qf ⊆ Q is a set of final states, and Δ is a set of transition rules of the form f(q1(x1),...,qn(xn)) → q(f(x1,...,xn)), for an n-ary f ∈ F, q, qi ∈ Q, and xi variables denoting subtrees. That is, members of Δ are rewrite rules from nodes whose children's roots are states, to nodes whose roots are states. Thus the state of a node is deduced from the states of its children.
For n = 0, that is, for a constant symbol f, the above transition rule definition reads f() → q(f()); often the empty parentheses are omitted for convenience: f → q(f).
Since these transition rules for constant symbols (leaves) do not require a state, no explicitly defined initial states are needed.
A bottom-up tree automaton is run on a ground term over F, starting at all its leaves simultaneously and moving upwards, associating a run state from Q with each subterm.
The term is accepted if its root is associated to an accepting state from Qf.[1]
A top-down finite tree automaton over F is defined as a tuple
(Q,F,Qi, Δ),
with two differences from bottom-up tree automata. First, Qi ⊆ Q, the set of its initial states, replaces Qf; second, its transition rules are oriented conversely: q(f(x1,...,xn)) → f(q1(x1),...,qn(xn)), for an n-ary f ∈ F, q, qi ∈ Q, and xi variables denoting subtrees.
That is, members of Δ are here rewrite rules from nodes whose roots are states to nodes whose children's roots are states.
A top-down automaton starts in some of its initial states at the root and moves downward along branches of the tree, associating along a run a state with each subterm inductively.
A tree is accepted if every branch can be traversed in this way.[2]
A tree automaton is called deterministic (abbreviated DFTA) if no two rules from Δ have the same left hand side; otherwise it is called nondeterministic (NFTA).[3] Non-deterministic top-down tree automata have the same expressive power as non-deterministic bottom-up ones;[4] the transition rules are simply reversed, and the final states become the initial states.
In contrast, deterministic top-down tree automata[5] are less powerful than their bottom-up counterparts, because in a deterministic tree automaton no two transition rules have the same left-hand side. For tree automata, transition rules are rewrite rules; and for top-down ones, the left-hand side will be parent nodes. Consequently, a deterministic top-down tree automaton will only be able to test for tree properties that are true in all branches, because the choice of the state to write into each child branch is determined at the parent node, without knowing the child branches' contents.
Infinite-tree automata extend top-down automata to infinite trees, and can be used to prove decidability of S2S, the monadic second-order theory with two successors. Finite tree automata (nondeterministic if top-down) suffice for WS2S.[6]
Employing coloring to distinguish members of F and Q, and using the ranked alphabet F = { false, true, nil, cons(.,.) }, with cons having arity 2 and all other symbols having arity 0, a bottom-up tree automaton accepting the set of all finite lists of boolean values can be defined as (Q, F, Qf, Δ) with Q = { Bool, BList }, Qf = { BList }, and Δ consisting of the rules
In this example, the rules can be understood intuitively as assigning to each term its type in a bottom-up manner; e.g. rule (4) can be read as "A term cons(x1,x2) has type BList, provided x1 and x2 have types Bool and BList, respectively".
An accepting example run is
Cf. the derivation of the same term from a regular tree grammar corresponding to the automaton, shown at Regular tree grammar#Examples.
A rejecting example run is
Intuitively, this corresponds to the termcons(false,true) not being well-typed.
Using the same colorization as above, this example shows how tree automata generalize ordinary string automata.
The finite deterministic string automaton shown in the picture accepts all strings of binary digits that denote a multiple of 3.
Using the notions from Deterministic finite automaton#Formal definition, it is defined by:
In the tree automaton setting, the input alphabet is changed such that the symbols 0 and 1 are both unary, and a nullary symbol, say nil, is used for tree leaves.
For example, the binary string "110" in the string automaton setting corresponds to the term "1(1(0(nil)))" in the tree automaton setting; this way, strings can be generalized to trees, or terms.
The top-down finite tree automaton accepting the set of all terms corresponding to multiples of 3 in binary string notation is then defined by:
For example, the tree "1(1(0(nil)))" is accepted by the following tree automaton run:
In contrast, the term "1(0(nil))" leads to following non-accepting automaton run:
Since there are no initial states other than S0 to start an automaton run with, the term "1(0(nil))" is not accepted by the tree automaton.
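This deterministic top-down automaton can be sketched in Python (illustrative, not from the article). The state index tracks the value of the digits read so far modulo 3: starting in S0 at the root, the rule Sv(d(x)) → d(S(2v+d) mod 3(x)) is applied at each unary digit symbol, and nil is accepted only in state S0.

```python
def run_top_down(term, state=0):
    """Deterministic top-down run on terms like ("1", ("0", ("nil",))).
    `state` is the value modulo 3 of the digits consumed so far.
    Accepts iff the leaf `nil` is reached in state S0."""
    symbol = term[0]
    if symbol == "nil":
        return state == 0               # S_v(nil) -> nil only for v == 0
    digit = int(symbol)                 # unary symbols "0" and "1"
    return run_top_down(term[1], (2 * state + digit) % 3)

# "110" (= 6, a multiple of 3) as the term 1(1(0(nil))):
t6 = ("1", ("1", ("0", ("nil",))))
# "10" (= 2, not a multiple of 3) as the term 1(0(nil)):
t2 = ("1", ("0", ("nil",)))
```

On t6 the run passes through states S0, S1, S0, S0 and accepts; on t2 it reaches nil in state S2, where no rule applies, so the term is rejected, matching the runs described above.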
For comparison purposes, the table gives in columns (A) and (D) a (right) regular (string) grammar and a regular tree grammar, respectively, each accepting the same language as its automaton counterpart.
For a bottom-up automaton, a ground term t (that is, a tree) is accepted if there exists a reduction that starts from t and ends with q(t), where q is a final state. For a top-down automaton, a ground term t is accepted if there exists a reduction that starts from q(t) and ends with t, where q is an initial state.
The tree language L(A) accepted, or recognized, by a tree automaton A is the set of all ground terms accepted by A. A set of ground terms is recognizable if there exists a tree automaton that accepts it.
A linear (that is, arity-preserving) tree homomorphism preserves recognizability.[7]
A non-deterministic finite tree automaton is complete if there is at least one transition rule available for every possible symbol–states combination.
A state q is accessible if there exists a ground term t such that there exists a reduction from t to q(t).
An NFTA is reduced if all its states are accessible.[8]
Every sufficiently large[9] ground term t in a recognizable tree language L can be vertically tripartited[10] such that arbitrary repetition ("pumping") of the middle part keeps the resulting term in L.[11][12]
For the language of all finite lists of boolean values from the above example, all terms beyond the height limit k = 2 can be pumped, since they need to contain an occurrence of cons. For example,
all belong to that language.
The class of recognizable tree languages is closed under union, under complementation, and under intersection.[13]
A congruence on the set of all trees over a ranked alphabet F is an equivalence relation such that u1 ≡ v1 and ... and un ≡ vn implies f(u1,...,un) ≡ f(v1,...,vn), for every f ∈ F.
It is of finite index if its number of equivalence classes is finite.
For a given tree language L, a congruence can be defined by u ≡L v if C[u] ∈ L ⇔ C[v] ∈ L for each context C.
The Myhill–Nerode theorem for tree automata states that the following three statements are equivalent:[14]
https://en.wikipedia.org/wiki/Tree_automaton
UML state machine,[1] formerly known as UML statechart, is an extension of the mathematical concept of a finite automaton in computer science applications as expressed in the Unified Modeling Language (UML) notation.
The concepts behind it are about organizing the way a device, computer program, or other (often technical) process works such that an entity or each of its sub-entities is always in exactly one of a number of possible states and where there are well-defined conditional transitions between these states.
UML state machine is an object-based variant of Harel statechart,[2] adapted and extended by UML.[1][3] The goal of UML state machines is to overcome the main limitations of traditional finite-state machines while retaining their main benefits.
UML statecharts introduce the new concepts of hierarchically nested states and orthogonal regions, while extending the notion of actions. UML state machines have the characteristics of both Mealy machines and Moore machines. They support actions that depend on both the state of the system and the triggering event, as in Mealy machines, as well as entry and exit actions, which are associated with states rather than transitions, as in Moore machines.[4]
The term "UML state machine" can refer to two kinds of state machines: behavioral state machines and protocol state machines.
Behavioral state machines can be used to model the behavior of individual entities (e.g., class instances), a subsystem, a package, or even an entire system.
Protocol state machines are used to express usage protocols and can be used to specify the legal usage scenarios of classifiers, interfaces, and ports.
Many software systems are event-driven, which means that they continuously wait for the occurrence of some external or internal event such as a mouse click, a button press, a time tick, or the arrival of a data packet. After recognizing the event, such systems react by performing the appropriate computation, which may include manipulating the hardware or generating "soft" events that trigger other internal software components. (That is why event-driven systems are alternatively called reactive systems.) Once the event handling is complete, the system goes back to waiting for the next event.
The response to an event generally depends on both the type of the event and the internal state of the system, and can include a change of state leading to a state transition. The pattern of events, states, and state transitions among those states can be abstracted and represented as a finite-state machine (FSM).
The concept of an FSM is important in event-driven programming because it makes the event handling explicitly dependent on both the event type and the state of the system. When used correctly, a state machine can drastically cut down the number of execution paths through the code, simplify the conditions tested at each branching point, and simplify the switching between different modes of execution.[5] Conversely, using event-driven programming without an underlying FSM model can lead programmers to produce error-prone, difficult-to-extend and excessively complex application code.[6]
UML preserves the general form of traditional state diagrams. UML state diagrams are directed graphs in which nodes denote states and connectors denote state transitions. For example, Figure 1 shows a UML state diagram corresponding to the computer keyboard state machine. In UML, states are represented as rounded rectangles labeled with state names. The transitions, represented as arrows, are labeled with the triggering events, followed optionally by the list of executed actions. The initial transition originates from the solid circle and specifies the default state when the system first begins. Every state diagram should have such a transition, which should not be labeled, since it is not triggered by an event. The initial transition can have associated actions.
An event is something that happens that affects the system.
Strictly speaking, in the UML specification,[1] the term event refers to the type of occurrence rather than to any concrete instance of that occurrence.
For example, Keystroke is an event for the keyboard, but each press of a key is not an event but a concrete instance of the Keystroke event. Another event of interest for the keyboard might be Power-on, but turning the power on tomorrow at 10:05:36 will be just an instance of the Power-on event.
An event can have associatedparameters, allowing the event instance to convey not only the occurrence of some interesting incident but also quantitative information regarding that occurrence. For example, the Keystroke event generated by pressing a key on a computer keyboard has associated parameters that convey the character scan code as well as the status of the Shift, Ctrl, and Alt keys.
An event instance outlives the instantaneous occurrence that generated it and might convey this occurrence to one or more state machines. Once generated, the event instance goes through a processing life cycle that can consist of up to three stages. First, the event instance is received when it is accepted and waiting for processing (e.g., it is placed on the event queue). Later, the event instance is dispatched to the state machine, at which point it becomes the current event. Finally, it is consumed when the state machine finishes processing the event instance. A consumed event instance is no longer available for processing.
Each state machine has a state, which governs the reaction of the state machine to events. For example, when you strike a key on a keyboard, the character code generated will be either an uppercase or a lowercase character, depending on whether the Caps Lock is active. Therefore, the keyboard's behavior can be divided into two states: the "default" state and the "caps_locked" state. (Most keyboards include an LED that indicates that the keyboard is in the "caps_locked" state.) The behavior of a keyboard depends only on certain aspects of its history, namely whether the Caps Lock key has been pressed, but not, for example, on how many and exactly which other keys have been pressed previously. A state can abstract away all possible (but irrelevant) event sequences and capture only the relevant ones.
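The two-state keyboard can be sketched as a classical FSM in Python (illustrative only; the event names and transition-table encoding are not from the article):

```python
# Transition table for the two-state keyboard: CAPS_LOCK toggles the
# state; an ordinary key yields upper- or lower-case depending on it.
TRANSITIONS = {
    ("default", "CAPS_LOCK"): "caps_locked",
    ("caps_locked", "CAPS_LOCK"): "default",
}

def dispatch(state, event, char=""):
    """Return (next_state, output) for a single event instance."""
    if event == "CAPS_LOCK":
        return TRANSITIONS[(state, event)], ""
    out = char.upper() if state == "caps_locked" else char.lower()
    return state, out

state, out = "default", ""
for ev, ch in [("KEY", "a"), ("CAPS_LOCK", ""), ("KEY", "b")]:
    state, o = dispatch(state, ev, ch)
    out += o
```

Note that the reaction to the same KEY event differs depending on the current state, which is exactly the state-dependent behavior described above.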
In the context of software state machines (and especially classical FSMs), the term state is often understood as a single state variable that can assume only a limited number of a priori determined values (e.g., two values in the case of the keyboard, or more generally, some kind of variable with an enum type in many programming languages). The idea of the state variable (and the classical FSM model) is that the value of the state variable fully defines the current state of the system at any given time. The concept of state reduces the problem of identifying the execution context in the code to testing just the state variable instead of many variables, thus eliminating a lot of conditional logic.
In practice, however, interpreting the whole state of the state machine as a single state variable quickly becomes impractical for all but the simplest state machines. Indeed, even a single 32-bit integer in the machine state could contribute over 4 billion different states, leading to a premature state explosion. Because this interpretation is not practical, in UML state machines the whole state of the state machine is commonly split into (a) an enumerable state variable and (b) all the other variables, which are named extended state. Another way to see it is to interpret the enumerable state variable as the qualitative aspect and the extended state as the quantitative aspects of the whole state. In this interpretation, a change of variable does not always imply a change of the qualitative aspects of the system behavior and therefore does not lead to a change of state.[7]
State machines supplemented with extended state variables are called extended state machines, and UML state machines belong to this category. Extended state machines can apply the underlying formalism to much more complex problems than is practical without including extended state variables. For example, if we have to implement some kind of limit in our FSM (say, limiting the number of keystrokes on a keyboard to 1000), without extended state we would need to create and process 1000 states, which is not practical; however, with an extended state machine we can introduce a key_count variable, which is initialized to 1000 and decremented by every keystroke without changing the state variable.
The state diagram from Figure 2 is an example of an extended state machine, in which the complete condition of the system (called the extended state) is the combination of a qualitative aspect, the state variable, and the quantitative aspects, the extended state variables.
The obvious advantage of extended state machines is flexibility. For example, changing the limit governed by key_count from 1000 to 10000 keystrokes would not complicate the extended state machine at all. The only modification required would be changing the initial value of the key_count extended state variable.
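A minimal Python sketch of an extended state machine with the key_count variable (illustrative only; the state names, the "broken" failure state, and the tiny limit are invented for the example):

```python
class Keyboard:
    """Extended state machine: one enumerable state variable plus the
    extended state variable key_count governing a keystroke limit."""
    def __init__(self, limit=1000):
        self.state = "default"      # qualitative aspect (state variable)
        self.key_count = limit      # quantitative aspect (extended state)

    def any_key(self):
        if self.key_count > 0:      # guard: [key_count > 0]
            self.key_count -= 1     # keystroke changes only extended state
            return "click"
        self.state = "broken"       # guard failed: qualitative change
        return None

kbd = Keyboard(limit=2)             # changing the limit = one initializer
results = [kbd.any_key() for _ in range(3)]
```

Ordinary keystrokes change only the extended state; only when the guard fails does the state variable itself change, mirroring the qualitative/quantitative split described above.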
This flexibility of extended state machines comes with a price, however, because of the complex coupling between the "qualitative" and the "quantitative" aspects of the extended state. The coupling occurs through the guard conditions attached to transitions, as shown in Figure 2.
Guard conditions (or simply guards) are Boolean expressions evaluated dynamically based on the value of extended state variables and event parameters. Guard conditions affect the behavior of a state machine by enabling actions or transitions only when they evaluate to TRUE and disabling them when they evaluate to FALSE. In the UML notation, guard conditions are shown in square brackets (e.g., [key_count == 0] in Figure 2).
The need for guards is the immediate consequence of adding memory (extended state variables) to the state machine formalism. Used sparingly, extended state variables and guards make up a powerful mechanism that can simplify designs.
On the other hand, it is possible to abuse extended states and guards quite easily.[8]
When an event instance is dispatched, the state machine responds by performing actions, such as changing a variable, performing I/O, invoking a function, generating another event instance, or changing to another state. Any parameter values associated with the current event are available to all actions directly caused by that event.
Switching from one state to another is called a state transition, and the event that causes it is called the triggering event, or simply the trigger. In the keyboard example, if the keyboard is in the "default" state when the CapsLock key is pressed, the keyboard will enter the "caps_locked" state. However, if the keyboard is already in the "caps_locked" state, pressing CapsLock will cause a different transition—from the "caps_locked" to the "default" state. In both cases, pressing CapsLock is the triggering event.
In extended state machines, a transition can have a guard, which means that the transition can "fire" only if the guard evaluates to TRUE. A state can have many transitions in response to the same trigger, as long as they have nonoverlapping guards; however, this situation could create problems in the sequence of evaluation of the guards when the common trigger occurs. The UML specification[1] intentionally does not stipulate any particular order; rather, UML puts the burden on the designer to devise guards in such a way that the order of their evaluation does not matter. Practically, this means that guard expressions should have no side effects, at least none that would alter evaluation of other guards having the same trigger.
All state machine formalisms, including UML state machines, universally assume that a state machine completes processing of each event before it can start processing the next event. This model of execution is called run to completion, or RTC.
In the RTC model, the system processes events in discrete, indivisible RTC steps. New incoming events cannot interrupt the processing of the current event and must be stored (typically in an event queue) until the state machine becomes idle again. These semantics completely avoid any internal concurrency issues within a single state machine. The RTC model also gets around the conceptual problem of processing actions associated with transitions, where the state machine is not in a well-defined state (is between two states) for the duration of the action. During event processing, the system is unresponsive (unobservable), so the ill-defined state during that time has no practical significance.
Note, however, that RTC does not mean that a state machine has to monopolize the CPU until the RTC step is complete.[1] The preemption restriction only applies to the task context of the state machine that is already busy processing events. In a multitasking environment, other tasks (not related to the task context of the busy state machine) can be running, possibly preempting the currently executing state machine. As long as other state machines do not share variables or other resources with each other, there are no concurrency hazards.
The key advantage of RTC processing is simplicity. Its biggest disadvantage is that the responsiveness of a state machine is determined by its longest RTC step. Achieving short RTC steps can often significantly complicate real-time designs.
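The run-to-completion semantics can be sketched as a simple event loop in Python (illustrative only; the class and event names are invented). Events posted while an RTC step is in progress are queued rather than processed re-entrantly:

```python
from collections import deque

class RtcStateMachine:
    """Run-to-completion sketch: events posted during a step are
    deferred to the event queue, never processed re-entrantly."""
    def __init__(self):
        self.queue = deque()
        self.busy = False
        self.log = []

    def post(self, event):
        self.queue.append(event)
        if self.busy:
            return                  # defer: an RTC step is in progress
        self.busy = True
        while self.queue:           # drain one event at a time
            self.handle(self.queue.popleft())
        self.busy = False

    def handle(self, event):
        self.log.append(event)
        if event == "A":
            self.post("B")          # internally generated "soft" event

sm = RtcStateMachine()
sm.post("A")
```

Even though handling "A" posts "B" from inside the step, "B" is only handled after "A" completes, so each step remains indivisible.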
Though traditional FSMs are an excellent tool for tackling smaller problems, it is also generally known that they tend to become unmanageable even for moderately involved systems. Due to the phenomenon known as state and transition explosion, the complexity of a traditional FSM tends to grow much faster than the complexity of the system it describes. This happens because the traditional state machine formalism inflicts repetitions. For example, if you try to represent the behavior of a simple pocket calculator with a traditional FSM, you will immediately notice that many events (e.g., the Clear or Off button presses) are handled identically in many states. A conventional FSM, shown in the figure below, has no means of capturing such a commonality and requires repeating the same actions and transitions in many states. What is missing in traditional state machines is a mechanism for factoring out the common behavior in order to share it across many states.
UML state machines address exactly this shortcoming of the conventional FSMs. They provide a number of features for eliminating the repetitions so that the complexity of a UML state machine no longer explodes but tends to faithfully represent the complexity of the reactive system it describes. Obviously, these features are very interesting to software developers, because only they make the whole state machine approach truly applicable to real-life problems.
The most important innovation of UML state machines over traditional FSMs is the introduction of hierarchically nested states (which is why statecharts are also called hierarchical state machines, or HSMs). The semantics associated with state nesting are as follows (see Figure 3): If a system is in a nested state, for example "result" (called the substate), it also (implicitly) is in the surrounding state "on" (called the superstate). This state machine will attempt to handle any event in the context of the substate, which conceptually is at the lower level of the hierarchy. However, if the substate "result" does not prescribe how to handle the event, the event is not quietly discarded as in a traditional "flat" state machine; rather, it is automatically handled at the higher-level context of the superstate "on". This is what is meant by the system being in state "result" as well as "on". Of course, state nesting is not limited to one level only, and the simple rule of event processing applies recursively to any level of nesting.
States that contain other states are called composite states; conversely, states without internal structure are called simple states. A nested state is called a direct substate when it is not contained by any other state; otherwise, it is referred to as a transitively nested substate.
Because the internal structure of a composite state can be arbitrarily complex, any hierarchical state machine can be viewed as an internal structure of some (higher-level) composite state. It is conceptually convenient to define one composite state as the ultimate root of the state machine hierarchy. In the UML specification, every state machine has a region (the abstract root of every state machine hierarchy),[9] which contains all the other elements of the entire state machine.
The graphical rendering of this all-enclosing region is optional.
As you can see, the semantics of hierarchical state decomposition are designed to facilitate reuse of behavior. The substates (nested states) need only define the differences from the superstates (containing states). A substate can easily inherit[6] the common behavior from its superstate(s) by simply ignoring commonly handled events, which are then automatically handled by higher-level states. In other words, hierarchical state nesting enables programming by difference.[10]
The aspect of state hierarchy emphasized most often is abstraction, an old and powerful technique for coping with complexity. Instead of addressing all aspects of a complex system at the same time, it is often possible to ignore (abstract away) some parts of the system. Hierarchical states are an ideal mechanism for hiding internal details because the designer can easily zoom out or zoom in to hide or show nested states.
However, composite states don't simply hide complexity; they also actively reduce it through the powerful mechanism of hierarchical event processing. Without such reuse, even a moderate increase in system complexity could lead to an explosive increase in the number of states and transitions. For example, the hierarchical state machine representing the pocket calculator (Figure 3) avoids repeating the transitions Clear and Off in virtually every state. Avoiding repetition allows the growth of HSMs to remain proportionate to growth in system complexity. As the modeled system grows, the opportunity for reuse also increases and thus potentially counteracts the disproportionate increase in numbers of states and transitions typical of traditional FSMs.
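The hierarchical event-processing rule can be sketched in Python (illustrative only; the calculator state names follow the Figure 3 discussion, but the handler encoding is invented). An event not handled by a substate bubbles up to its superstate instead of being silently discarded:

```python
class State:
    """Minimal hierarchical-state sketch: unhandled events bubble up
    to the superstate rather than being quietly discarded."""
    def __init__(self, name, parent=None, handlers=None):
        self.name, self.parent = name, parent
        self.handlers = handlers or {}

    def dispatch(self, event):
        state = self
        while state is not None:
            if event in state.handlers:
                return state.handlers[event]   # name of the target state
            state = state.parent               # bubble up the hierarchy
        return None                            # ignored at the top level

# "on" handles Clear and Off once, for all of its substates:
on = State("on", handlers={"CLEAR": "on", "OFF": "off"})
result = State("result", parent=on, handlers={"DIGIT": "operand1"})
```

The substate "result" handles DIGIT itself but inherits CLEAR and OFF from "on", so those transitions need not be repeated in every substate.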
Analysis by hierarchical state decomposition can include the application of the operation 'exclusive-OR' to any given state. For example, if a system is in the "on" superstate (Figure 3), it may be the case that it is also in either "operand1" substate OR the "operand2" substate OR the "opEntered" substate OR the "result" substate. This would lead to description of the "on" superstate as an 'OR-state'.
UML statecharts also introduce the complementary AND-decomposition. Such decomposition means that a composite state can contain two or more orthogonal regions (orthogonal means compatible and independent in this context) and that being in such a composite state entails being in all its orthogonal regions simultaneously.[11]
Orthogonal regions address the frequent problem of a combinatorial increase in the number of states when the behavior of a system is fragmented into independent, concurrently active parts. For example, apart from the main keypad, a computer keyboard has an independent numeric keypad. From the previous discussion, recall the two states of the main keypad already identified: "default" and "caps_locked" (see Figure 1). The numeric keypad also can be in two states—"numbers" and "arrows"—depending on whether Num Lock is active. The complete state space of the keyboard in the standard decomposition is therefore the Cartesian product of the two components (main keypad and numeric keypad) and consists of four states: "default–numbers," "default–arrows," "caps_locked–numbers," and "caps_locked–arrows." However, this would be an unnatural representation because the behavior of the numeric keypad does not depend on the state of the main keypad and vice versa. The use of orthogonal regions allows the mixing of independent behaviors as a Cartesian product to be avoided and, instead, for them to remain separate, as shown in Figure 4.
Note that if the orthogonal regions are fully independent of each other, their combined complexity is simply additive, which means that the number of independent states needed to model the system is simply the sum k + l + m + ..., where k, l, m, ... denote the numbers of OR-states in each orthogonal region. The general case of mutual dependency, on the other hand, results in multiplicative complexity, so in general, the number of states needed is the product k × l × m × ....
In most real-life situations, orthogonal regions would be only approximately orthogonal (i.e. not truly independent). Therefore, UML statecharts provide a number of ways for orthogonal regions to communicate and synchronize their behaviors. Among these rich sets of (sometimes complex) mechanisms, perhaps the most important feature is that orthogonal regions can coordinate their behaviors by sending event instances to each other.
Even though orthogonal regions imply independence of execution (allowing more or less concurrency), the UML specification does not require that a separate thread of execution be assigned to each orthogonal region (although this can be done if desired). In fact, most commonly, orthogonal regions execute within the same thread.[12]The UML specification requires only that the designer does not rely on any particular order for event instances to be dispatched to the relevant orthogonal regions.
Every state in a UML statechart can have optionalentry actions, which are executed upon entry to a state, as well as optionalexit actions, which are executed upon exit from a state. Entry and exit actions are associated with states, not transitions. Regardless of how a state is entered or exited, all its entry and exit actions will be executed. Because of this characteristic, statecharts behave likeMoore machines. The UML notation for state entry and exit actions is to place the reserved word "entry" (or "exit") in the state right below the name compartment, followed by the forward slash and the list of arbitrary actions (see Figure 5).
The value of entry and exit actions is that they provide means for guaranteed initialization and cleanup, very much like class constructors and destructors in object-oriented programming. For example, consider the "door_open" state from Figure 5, which corresponds to the toaster-oven behavior while the door is open. This state has a very important safety-critical requirement: always disable the heater when the door is open. Additionally, while the door is open, the internal lamp illuminating the oven should light up.
Of course, such behavior could be modeled by adding appropriate actions (disabling the heater and turning on the light) to every transition path leading to the "door_open" state (the user may open the door at any time during "baking" or "toasting," or when the oven is not used at all), and by remembering to extinguish the internal lamp with every transition leaving "door_open." However, such a solution would cause the repetition of actions in many transitions. More importantly, it leaves the design error-prone during subsequent amendments to behavior (e.g., the next programmer working on a new feature, such as top-browning, might simply forget to disable the heater on transition to "door_open").
Entry and exit actions allow the desired behavior to be implemented in a safer, simpler, and more intuitive way. As shown in Figure 5, it could be specified that the exit action from "heating" disables the heater, the entry action to "door_open" lights up the oven lamp, and the exit action from "door_open" extinguishes the lamp. Using entry and exit actions is preferable to placing actions on transitions because it avoids repetitive coding and eliminates a safety hazard (heater on while the door is open). The semantics of exit actions guarantees that, regardless of the transition path, the heater will be disabled whenever the toaster is not in the "heating" state.
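The toaster-oven guarantee above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the state names and action names are taken from the figure description, everything else is invented): exit actions run bottom-up from the current state, entry actions run top-down into the target, so "heater_off" fires on every path out of "heating."

```python
# Minimal sketch (hypothetical names): entry/exit actions guarantee the
# heater is off whenever the oven leaves the "heating" state, no matter
# which transition path is taken. Simplified: exits run all the way to
# the top state (the full UML rule stops at the LCA).
log = []

entry_actions = {
    "heating":   lambda: log.append("heater_on"),
    "door_open": lambda: log.append("lamp_on"),
}
exit_actions = {
    "heating":   lambda: log.append("heater_off"),
    "door_open": lambda: log.append("lamp_off"),
}

# state -> parent (None for a top-level state)
parents = {"toasting": "heating", "baking": "heating",
           "heating": None, "door_open": None}

def path_to_root(state):
    chain = []
    while state is not None:
        chain.append(state)
        state = parents[state]
    return chain  # innermost first

def transition(source, target):
    # Exit bottom-up from the source, then enter top-down into the target.
    for s in path_to_root(source):            # innermost -> outermost
        exit_actions.get(s, lambda: None)()
    for s in reversed(path_to_root(target)):  # outermost -> innermost
        entry_actions.get(s, lambda: None)()

transition("toasting", "door_open")
print(log)  # ['heater_off', 'lamp_on']
```

Opening the door during "toasting" exits the nested state, runs "heater_off" on exit from "heating," and only then runs "lamp_on" on entry to "door_open," which is exactly the safety ordering the text describes.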
Because entry actions are executed automatically whenever an associated state is entered, they often determine the conditions of operation or the identity of the state, very much as a class constructor determines the identity of the object being constructed. For example, the identity of the "heating" state is determined by the fact that the heater is turned on. This condition must be established before entering any substate of "heating" because entry actions to a substate of "heating," like "toasting," rely on proper initialization of the "heating" superstate and perform only the differences from this initialization. Consequently, the order of execution of entry actions must always proceed from the outermost state to the innermost state (top-down).
Not surprisingly, this order is analogous to the order in which class constructors are invoked. Construction of a class always starts at the very root of the class hierarchy and follows through all inheritance levels down to the class being instantiated. The execution of exit actions, which corresponds to destructor invocation, proceeds in the exact reverse order (bottom-up).
Very commonly, an event causes only some internal actions to execute but does not lead to a change of state (state transition). In this case, all the executed actions comprise the internal transition. For example, when one types on a keyboard, it responds by generating different character codes. However, unless the Caps Lock key is pressed, the state of the keyboard does not change (no state transition occurs). In UML, this situation should be modeled with internal transitions, as shown in Figure 6. The UML notation for internal transitions follows the general syntax used for exit (or entry) actions, except that instead of the word "entry" (or "exit") the internal transition is labeled with the triggering event (e.g., see the internal transition triggered by the ANY_KEY event in Figure 6).
In the absence of entry and exit actions, internal transitions would be identical to self-transitions (transitions in which the target state is the same as the source state). In fact, in a classical Mealy machine, actions are associated exclusively with state transitions, so the only way to execute actions without changing state is through a self-transition (depicted as a directed loop in Figure 1 at the top of this article). However, in the presence of entry and exit actions, as in UML statecharts, a self-transition involves the execution of exit and entry actions and is therefore distinctly different from an internal transition.
In contrast to a self-transition, no entry or exit actions are ever executed as a result of an internal transition, even if the internal transition is inherited from a higher level of the hierarchy than the currently active state. Internal transitions inherited from superstates at any level of nesting act as if they were defined directly in the currently active state.
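The difference between the two kinds of transition can be made concrete with a tiny sketch. This is a hypothetical illustration (the action and handler names are invented) of the keyboard example: a self-transition wraps the action in exit/entry, while an internal transition runs the action alone.

```python
# Hypothetical sketch contrasting a self-transition with an internal
# transition on the same state of the keyboard example.
log = []

def on_entry():
    log.append("entry")

def on_exit():
    log.append("exit")

def send_char():
    log.append("send_char")

def self_transition():
    # A self-transition exits and re-enters the state around the action.
    on_exit()
    send_char()
    on_entry()

def internal_transition():
    # Only the action runs; the state is neither exited nor entered.
    send_char()

self_transition()
internal_transition()
print(log)  # ['exit', 'send_char', 'entry', 'send_char']
```

With entry/exit actions present, the two produce observably different action sequences, which is why UML distinguishes them.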
State nesting combined with entry and exit actions significantly complicates the state-transition semantics in HSMs compared to traditional FSMs. When dealing with hierarchically nested states and orthogonal regions, the simple term current state can be quite confusing. In an HSM, more than one state can be active at once. If the state machine is in a leaf state that is contained in a composite state (which is possibly contained in a higher-level composite state, and so on), all the composite states that either directly or transitively contain the leaf state are also active. Furthermore, because some of the composite states in this hierarchy might have orthogonal regions, the current active state is actually represented by a tree of states starting with the single region at the root down to individual simple states at the leaves. The UML specification refers to such a state tree as a state configuration.[1]
In UML, a state transition can directly connect any two states. These two states, which may be composite, are designated as the main source and the main target of the transition. Figure 7 shows a simple transition example and explains the state roles in that transition. The UML specification prescribes that taking a state transition involves executing actions in the following predefined sequence (see Section 14.2.3.9.6 of OMG Unified Modeling Language (OMG UML)[1]):
The transition sequence is easy to interpret in the simple case where both the main source and the main target are nested at the same level. For example, transition T1 shown in Figure 7 causes the evaluation of the guard g(), followed by the sequence of actions a(); b(); t(); c(); d(); and e(), assuming that the guard g() evaluates to TRUE.
However, in the general case of source and target states nested at different levels of the state hierarchy, it might not be immediately obvious how many levels of nesting need to be exited.
The UML specification[1] prescribes that a transition involves exiting all nested states from the current active state (which might be a direct or transitive substate of the main source state) up to, but not including, the least common ancestor (LCA) state of the main source and main target states.
As the name indicates, the LCA is the lowest composite state that is simultaneously a superstate (ancestor) of both the source and the target states. As described before, the order of execution of exit actions is always from the most deeply nested state (the current active state) up the hierarchy to the LCA but without exiting the LCA. For instance, the LCA(s1,s2) of states "s1" and "s2" shown in Figure 7 is state "s."
Entering the target state configuration commences from the level where the exit actions left off (i.e., from inside the LCA). As described before, entry actions must be executed starting from the highest-level state down the state hierarchy to the main target state. If the main target state is composite, the UML semantics prescribes "drilling" into its submachine recursively using the local initial transitions. The target state configuration is completely entered only after a leaf state that has no initial transitions is reached.
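The exit-to-LCA rule can be computed mechanically from the state hierarchy. The sketch below is a hedged illustration, not a full UML implementation: it uses the state names from Figure 7 as described in the text ("s1", "s2", with LCA "s"; the substates "s11" and "s21" are assumed for illustration) and returns the exit and entry sequences for a transition.

```python
# Hypothetical sketch of the UML transition rule: exit from the active
# state up to (but not including) the least common ancestor (LCA), then
# enter top-down to the main target.
parents = {"s1": "s", "s11": "s1", "s2": "s", "s21": "s2", "s": None}

def ancestors(state):
    """Return the chain from the state itself up to the root."""
    chain = []
    while state is not None:
        chain.append(state)
        state = parents[state]
    return chain  # state itself first, root last

def transition_path(source, target):
    src, tgt = ancestors(source), ancestors(target)
    lca = next(a for a in src if a in tgt)          # lowest common ancestor
    exits = src[:src.index(lca)]                    # innermost -> LCA (excl.)
    entries = list(reversed(tgt[:tgt.index(lca)]))  # below LCA -> target
    return lca, exits, entries

lca, exits, entries = transition_path("s11", "s21")
print(lca)      # 's'
print(exits)    # ['s11', 's1']  (exit actions, bottom-up)
print(entries)  # ['s2', 's21']  (entry actions, top-down)
```

Note that the LCA itself ("s") is neither exited nor re-entered, matching the semantics described above.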
Before UML 2,[1] the only transition semantics in use was the external transition, in which the main source of the transition is always exited and the main target of the transition is always entered.
UML 2 preserved the "external transition" semantics for backward compatibility, but also introduced a new kind of transition called the local transition (see Section 14.2.3.4.4 of Unified Modeling Language (UML)[1]).
For many transition topologies, external and local transitions are actually identical. However, a local transition doesn't cause exit from and reentry to the main source state if the main target state is a substate of the main source. In addition, a local state transition doesn't cause exit from and reentry to the main target state if the main target is a superstate of the main source state.
Figure 8 contrasts local (a) and external (b) transitions. In the top row, you see the case of the main source containing the main target. The local transition does not cause exit from the source, while the external transition causes exit and reentry to the source. In the bottom row of Figure 8, you see the case of the main target containing the main source. The local transition does not cause entry to the target, whereas the external transition causes exit and reentry to the target.
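The first case in Figure 8 (main source containing the main target) can be sketched as follows. This is a hypothetical illustration with invented state names: the only behavioral difference is whether the composite source is exited and re-entered around the move into its substate.

```python
# Hypothetical sketch: for a transition whose main target is nested
# inside the main source, a *local* transition skips the exit/re-entry
# of the source, while an *external* transition performs it.
log = []
enter = lambda s: log.append(f"enter:{s}")
leave = lambda s: log.append(f"exit:{s}")

def fire(kind, source, target_inside_source):
    if kind == "external":
        leave(source)   # leave the composite source...
        enter(source)   # ...and re-enter it (entry actions run again)
    enter(target_inside_source)

fire("local", "s", "s1")
fire("external", "s", "s1")
print(log)  # ['enter:s1', 'exit:s', 'enter:s', 'enter:s1']
```

The extra exit/entry pair on the external transition is visible in the log, which is why the choice between the two kinds matters whenever entry or exit actions have side effects.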
Sometimes an event arrives at a particularly inconvenient time, when a state machine is in a state that cannot handle the event. In many cases, the nature of the event is such that it can be postponed (within limits) until the system enters another state, in which it is better prepared to handle the original event.
UML state machines provide a special mechanism for deferring events in states. In every state, you can include a clause [event list]/defer. If an event in the current state's deferred-event list occurs, the event will be saved (deferred) for future processing until a state is entered that does not list the event in its deferred-event list. Upon entry to such a state, the UML state machine will automatically recall any saved events that are no longer deferred and will then either consume or discard them. It is possible for a superstate to have a transition defined on an event that is deferred by a substate. Consistent with other areas of the UML state machine specification, the substate takes precedence over the superstate: the event will be deferred and the transition for the superstate will not be executed. In the case of orthogonal regions where one region defers an event and another consumes it, the consumer takes precedence: the event is consumed, not deferred.
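The defer-and-recall mechanism can be sketched with a simple queue. This is a hedged, hypothetical illustration (the state and event names are invented): events deferred by the current state are saved, and entering a state that no longer defers them recalls them for processing.

```python
# Hypothetical sketch of event deferral: events listed as deferred in
# the current state are queued and recalled when a state that does not
# defer them is entered.
from collections import deque

deferred_in = {"busy": {"REQUEST"}, "idle": set()}
handled = []
saved = deque()
state = "busy"

def dispatch(event):
    if event in deferred_in[state]:
        saved.append(event)          # postpone for a better time
    else:
        handled.append((state, event))

def enter(new_state):
    global state
    state = new_state
    # Recall saved events that the new state no longer defers.
    for _ in range(len(saved)):
        dispatch(saved.popleft())

dispatch("REQUEST")   # deferred while in "busy"
dispatch("DONE")      # handled immediately
enter("idle")         # "REQUEST" is recalled and handled here
print(handled)  # [('busy', 'DONE'), ('idle', 'REQUEST')]
```

The "REQUEST" event arrives at an inconvenient time, sits in the queue while the machine is "busy," and is processed automatically on entry to "idle," mirroring the recall semantics described above.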
Harel statecharts, the precursors of UML state machines, were invented as "a visual formalism for complex systems,"[2] so from their inception they have been inseparably associated with graphical representation in the form of state diagrams.
However, it is important to understand that the concept of UML state machine transcends any particular notation, graphical or textual.
The UML specification[1]makes this distinction apparent by clearly separating state machine semantics from the notation.
However, the notation of UML statecharts is not purely visual. Any nontrivial state machine requires a large amount of textual information (e.g., the specification of actions and guards). The exact syntax of action and guard expressions is not defined in the UML specification, so many people use either structured English or, more formally, expressions in an implementation language such as C, C++, or Java.[13] In practice, this means that UML statechart notation depends heavily on the specific programming language.
Nevertheless, most of the statechart semantics are heavily biased toward graphical notation. For example, state diagrams poorly represent the sequence of processing, be it the order of evaluation of guards or the order of dispatching events to orthogonal regions. The UML specification sidesteps these problems by putting the burden on the designer not to rely on any particular sequencing. However, when UML state machines are actually implemented, there is inevitably full control over the order of execution, giving rise to criticism that the UML semantics may be unnecessarily restrictive. Similarly, statechart diagrams require a lot of plumbing gear (pseudostates such as joins, forks, junctions, and choice points) to represent the flow of control graphically. In other words, these elements of the graphical notation do not add much value in representing flow of control compared to plain structured code.
The UML notation and semantics are really geared toward computerized UML tools. A UML state machine, as represented in a tool, is not just the state diagram, but rather a mixture of graphical and textual representation that precisely captures both the state topology and the actions. The users of the tool can get several complementary views of the same state machine, both visual and textual, whereas the generated code is just one of the many available views.
|
https://en.wikipedia.org/wiki/UML_state_machine
|
In public key infrastructure, a validation authority (VA) is an entity that provides a service used to verify the validity or revocation status of a digital certificate per the mechanisms described in the X.509 standard and RFC 5280 (page 69).[1]
The dominant method used for this purpose is to host a certificate revocation list (CRL) for download via the HTTP or LDAP protocols. To reduce the amount of network traffic required for certificate validation, the OCSP protocol may be used instead.
While this is a potentially labor-intensive process, the use of a dedicated validation authority allows for dynamic validation of certificates issued by an offline root certificate authority. While the root CA itself is unavailable to network traffic, certificates issued by it can still be verified via the validation authority and the protocols mentioned above.
The ongoing administrative overhead of maintaining the CRLs hosted by the validation authority is typically minimal, as it is uncommon for root CAs to issue (or revoke) large numbers of certificates.
While a validation authority is capable of responding to a network-based request for a CRL, it lacks the ability to issue or revoke certificates. It must be continuously updated with current CRL information from the certificate authority that issued the certificates contained within the CRL.
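The check a VA performs on behalf of clients is essentially a lookup against the CA-published revocation data. The following is a minimal, hypothetical sketch (the serial numbers and data layout are invented, not a real CRL encoding): a certificate is treated as revoked if its serial number appears on the list, and the list itself is only trustworthy until its scheduled next update.

```python
# Minimal illustration (hypothetical data) of the check a validation
# authority performs: a certificate is considered revoked if its serial
# number appears on the CA-published CRL that the VA re-hosts.
from datetime import datetime, timezone

# Serial numbers revoked by the issuing CA, as pushed to the VA.
crl = {
    "next_update": datetime(2030, 1, 1, tzinfo=timezone.utc),
    "revoked_serials": {0x1A2B, 0x3C4D},
}

def check_status(serial, now=None):
    now = now or datetime.now(timezone.utc)
    if now > crl["next_update"]:
        return "unknown"   # stale CRL: the VA must fetch a fresh one
    return "revoked" if serial in crl["revoked_serials"] else "good"

print(check_status(0x1A2B))  # 'revoked'
print(check_status(0x9999))  # 'good'
```

A real VA would parse the DER-encoded CRL defined in RFC 5280 (or answer OCSP queries), but the decision logic reduces to this membership-plus-freshness test.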
|
https://en.wikipedia.org/wiki/Validation_authority
|
A contact page is a common web page on a website that allows visitors to contact the organization or individual providing the website.[1]
The page contains one or more of the following items:
In the case of large organizations, the contact page may provide information for several offices (headquarters, field offices, etc.) and departments (customer support, sales, investor relations, press relations, etc.).
|
https://en.wikipedia.org/wiki/Contact_page
|
People For Internet Responsibility (PFIR) is a global, ad hoc network of individuals concerned with the responsible operation, development, management, and regulation of the Internet, co-founded by Lauren Weinstein and Peter G. Neumann in November 1999 in California. PFIR is attempting to become a nonprofit 501(c)(3) corporation, and claims to be nonpartisan, does not partake in lobbying, and has no political agenda. The main goal of PFIR is to be a resource for people around the world on critical Internet issues that significantly affect today's societies.[1]
Regarding Internet issues, PFIR is a resource for analysis, discussion, education, and data, intended to help people from around the world participate successfully in the process of Internet evolution, use, and control. PFIR uses its website, telephone and email services, workshops, television and radio broadcasts, and other venues to provide these resources worldwide.[1]
PFIR believes that, amid the extremely rapid commercialization of the World Wide Web, there is increasing concern that powerful commercial and political interests that do not necessarily share the concerns of the people at large are irresponsibly skewing decisions regarding Internet resources. Areas of concern to PFIR include spam, security, freedom of speech, domain-name policy, filtering, and other topics.[1]
This World Wide Web–related article is a stub. You can help Wikipedia by expanding it.
|
https://en.wikipedia.org/wiki/People_for_Internet_Responsibility
|
DigiNotar was a Dutch certificate authority, established in 1998 and acquired in January 2011 by VASCO Data Security International, Inc.[1][2] The company was hacked in June 2011 and issued hundreds of fake certificates, some of which were used for man-in-the-middle attacks on Iranian Gmail users. The company was declared bankrupt in September 2011.
On 3 September 2011, after it had become clear that a security breach had resulted in the fraudulent issuing of certificates, the Dutch government took over operational management of DigiNotar's systems.[3] That same month, the company was declared bankrupt.[4][5]
An investigation into the hacking by the Dutch-government-appointed Fox-IT consultancy identified 300,000 Iranian Gmail users as the main target of the hack (targeted subsequently using man-in-the-middle attacks), and suspected that the Iranian government was behind it.[6] While nobody had been charged with the break-in and compromise of the certificates (as of 2013[update]), cryptographer Bruce Schneier said the attack may have been "either the work of the NSA, or exploited by the NSA."[7] However, this has been disputed, with others saying the NSA had only detected a foreign intelligence service using the fake certificates.[8] The hack has also been claimed by the so-called Comodohacker, allegedly a 21-year-old Iranian student, who also claimed to have hacked four other certificate authorities, including Comodo, a claim found plausible by F-Secure, although one that does not fully explain how it led to the subsequent "widescale interception of Iranian citizens."[9]
After more than 500 fake DigiNotar certificates were found, major web-browser makers reacted by blacklisting all DigiNotar certificates.[10] The scale of the incident was used by some organizations, such as ENISA and AccessNow.org, to call for a deeper reform of HTTPS in order to remove the weakest-link problem whereby a single compromised CA can affect so many users.[11][12]
DigiNotar's main activity was as a certificate authority, issuing two types of certificate. First, they issued certificates under their own name (where the root CA was "DigiNotar Root CA").[13] Entrust certificates had not been issued since July 2010, but some remained valid until July 2013.[14][15] Secondly, they issued certificates for the Dutch government's PKIoverheid ("PKIgovernment") program. This issuance was via two intermediate certificates, each of which chained up to one of the two "Staat der Nederlanden" root CAs. National and local Dutch authorities and organisations offering services for the government that wanted to use certificates for secure Internet communication could request such a certificate. Some of the most-used electronic services offered by Dutch governments used certificates from DigiNotar. Examples were the authentication infrastructure DigiD and the central car-registration organisation Netherlands Vehicle Authority (RDW).
DigiNotar's root certificates were removed from the trusted-root lists of all major web browsers and consumer operating systems on or around 29 August 2011;[16][17][18] the "Staat der Nederlanden" roots were initially kept because they were not believed to be compromised. However, they have since been revoked.
DigiNotar was originally set up in 1998 by the Dutch notary Dick Batenburg from Beverwijk and the Koninklijke Notariële Beroepsorganisatie (KNB), the national body for Dutch civil-law notaries. The KNB offers all kinds of central services to the notaries, and because many of the services that notaries offer are official legal procedures, security in communications is important. The KNB offered advisory services to its members on how to implement electronic services in their business; one of these activities was offering secure certificates.
Dick Batenburg and the KNB formed the group TTP Notarissen (TTP Notaries), where TTP stands for trusted third party. A notary can become a member of TTP Notarissen if they comply with certain rules. If they comply with additional rules on training and work procedures, they can become an accredited TTP Notary.[19]
Although DigiNotar had been a general-purpose CA for several years, they still targeted the market for notaries and other professionals.
On 10 January 2011 the company was sold to VASCO Data Security International.[1] In a VASCO press release dated 20 June 2011, one day after DigiNotar first detected an incident on its systems,[20] VASCO's president and COO Jan Valcke is quoted as stating, "We believe that DigiNotar's certificates are among the most reliable in the field."[21]
On 20 September 2011 VASCO announced that its subsidiary DigiNotar had been declared bankrupt after filing for voluntary bankruptcy at the Haarlem court. Effective immediately, the court appointed a receiver, a court-appointed trustee who takes over the management of all of DigiNotar's affairs as it proceeds through the bankruptcy process to liquidation.[4][22]
The curator (court-appointed receiver) did not want the report from ITSec to be published, as it might lead to additional claims against DigiNotar.[citation needed] The report covered the way the company operated and details of the 2011 hack that led to its bankruptcy.[citation needed]
The report was made at the request of the Dutch supervisory agency OPTA, which initially refused to publish it. In a freedom-of-information (Wet openbaarheid van bestuur) procedure started by a journalist, the receiver tried to convince the court not to allow publication of the report and to confirm OPTA's initial refusal.[23]
The report was ordered to be released, and was made public in October 2012. It shows a near-total compromise of the systems.
On 10 July 2011 an attacker with access to DigiNotar's systems issued a wildcard certificate for Google. This certificate was subsequently used by unknown persons in Iran to conduct a man-in-the-middle attack against Google services.[24][25] On 28 August 2011 certificate problems were observed on multiple Internet service providers in Iran.[26] The fraudulent certificate was posted on Pastebin.[27] According to a subsequent news release by VASCO, DigiNotar had detected an intrusion into its certificate-authority infrastructure on 19 July 2011.[28] DigiNotar did not publicly reveal the security breach at the time.
After this certificate was found, DigiNotar belatedly admitted that dozens of fraudulent certificates had been created, including certificates for the domains of Yahoo!, Mozilla, WordPress and The Tor Project.[29] DigiNotar could not guarantee that all such certificates had been revoked.[30] Google blacklisted 247 certificates in Chromium,[31] but the final known total of misissued certificates is at least 531.[32] Investigation by F-Secure also revealed that DigiNotar's website had been defaced by Turkish and Iranian hackers in 2009.[33]
In reaction, Mozilla revoked trust in the DigiNotar root certificate in all supported versions of its Firefox browser, and Microsoft removed the DigiNotar root certificate from its list of trusted certificates in its browsers on all supported releases of Microsoft Windows.[34][35] Chromium/Google Chrome was able to detect the fraudulent *.google.com certificate, thanks to its "certificate pinning" security feature;[36] however, this protection was limited to Google domains, which resulted in Google removing DigiNotar from its list of trusted certificate issuers.[24] Opera always checks the certificate revocation list of the certificate's issuer, so its developers initially stated they did not need a security update.[37][38] However, later they also removed the root from their trust store.[39] On 9 September 2011 Apple issued Security Update 2011-005 for Mac OS X 10.6.8 and 10.7.1, which removes DigiNotar from the list of trusted root certificates and EV certificate authorities.[40] Without this update, Safari and Mac OS X do not detect the certificate's revocation, and users must use the Keychain utility to manually delete the certificate.[41] Apple did not patch iOS until 13 October 2011, with the release of iOS 5.[42]
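The reason certificate pinning caught the fraudulent *.google.com certificate is that pinning checks the server's public key itself, not merely whether the chain terminates at a trusted CA. The sketch below is a hedged, hypothetical illustration (the pin values and host list are made up, and real browsers pin SPKI hashes from the parsed certificate): a CA-signed certificate with the wrong key still fails the pin check.

```python
# Hypothetical sketch of certificate pinning: besides requiring a valid
# CA-signed chain, the client requires the server's public-key hash to
# match a built-in pin set (the pin values below are placeholders).
import hashlib

pinned_spki_hashes = {             # pins shipped with the browser
    "google.com": {"a" * 64},      # placeholder, not a real pin
}

def pin_check(host, spki_der: bytes) -> bool:
    digest = hashlib.sha256(spki_der).hexdigest()
    pins = pinned_spki_hashes.get(host)
    if pins is None:
        return True                # host not pinned: fall back to CA trust
    return digest in pins          # pinned: the key itself must match

# A fraudulent cert can chain to a trusted CA yet carry the wrong key:
print(pin_check("google.com", b"attacker-key"))  # False
print(pin_check("example.com", b"any-key"))      # True (not pinned)
```

This is why the protection was limited to Google domains in 2011: only hosts with shipped pins get the extra check, while every other domain still relies solely on the CA trust store.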
DigiNotar also controlled an intermediate certificate which was used for issuing certificates as part of the Dutch government's public key infrastructure "PKIoverheid" program, chaining up to the official Dutch government certification authority (Staat der Nederlanden).[43] Once this intermediate certificate was revoked or marked as untrusted by browsers, the chain of trust for their certificates was broken, and it was difficult to access services such as the identity-management platform DigiD and the Tax and Customs Administration.[44] GOVCERT.NL, the Dutch computer emergency response team, initially did not believe the PKIoverheid certificates had been compromised,[45] although security specialists were uncertain.[30][46] Because these certificates were initially thought not to be compromised by the security breach, they were, at the request of the Dutch authorities, kept exempt from the removal of trust[43][47] – although one of the two, the active "Staat der Nederlanden - G2" root certificate, was overlooked by the Mozilla engineers and accidentally distrusted in the Firefox build.[48] However, this assessment was rescinded after an audit by the Dutch government, and the DigiNotar-controlled intermediates in the "Staat der Nederlanden" hierarchy were also blacklisted by Mozilla in the next security update, as well as by other browser manufacturers.[49] The Dutch government announced on 3 September 2011 that it would switch to a different firm as certificate authority.[50]
After the initial claim that the certificates under the DigiNotar-controlled intermediate certificate in the PKIoverheid hierarchy were not affected, further investigation by an external party, the Fox-IT consultancy, showed evidence of hacker activity on those machines as well. Consequently, the Dutch government decided on 3 September 2011 to withdraw its earlier statement that nothing was wrong.[51] (The Fox-IT investigators dubbed the incident "Operation Black Tulip".[52]) The Fox-IT report identified 300,000 Iranian Gmail accounts as the main victims of the hack.[6]
DigiNotar was only one of the available CAs in PKIoverheid, so not all certificates used by the Dutch government under their root were affected. When the Dutch government decided that it had lost its trust in DigiNotar, it took back control over the company's intermediate certificate in order to manage an orderly transition, and replaced the untrusted certificates with new ones from one of the other providers.[51] The much-used DigiD platform now[when?] uses a certificate issued by Getronics PinkRoccade Nederland B.V.[53] According to the Dutch government, DigiNotar gave them its full co-operation with these procedures.
After the removal of trust in DigiNotar, there are now[when?] four Certification Service Providers (CSPs) that can issue certificates under the PKIoverheid hierarchy:[54]
All four companies have opened special help desks and/or published information on their websites as to how organisations that have a PKIoverheid certificate from DigiNotar can request a new certificate from one of the remaining four providers.[55][56][57][58]
|
https://en.wikipedia.org/wiki/DigiNotar
|
Xcitium (formerly Comodo Security Solutions, Inc.) is a cybersecurity company based in Bloomfield, New Jersey, United States.[2] The company rebranded as Xcitium in 2022 and specializes in zero-trust cybersecurity solutions. It is known for its patented ZeroDwell technology, designed to isolate and manage unknown threats.[3]
The company was founded in 1998 in the United Kingdom[1] by Melih Abdulhayoğlu and relocated to the United States in 2004. Its products are focused on computer and Internet security. The firm operates a certificate authority that issues SSL certificates. The company also helped set standards by contributing to the IETF (Internet Engineering Task Force) DNS Certification Authority Authorization (CAA) resource record.[4]
In October 2017, Francisco Partners acquired Comodo Certification Authority (Comodo CA) from Comodo Security Solutions, Inc. Francisco Partners rebranded Comodo CA as Sectigo in November 2018.[5][6]
On June 28, 2018, the new organization announced that it was expanding from TLS/SSL certificates into IoT security with the announcement of its IoT device-security platform.[7] The company announced its new headquarters in Roseland, New Jersey on July 3, 2018,[8] and its acquisition of CodeGuard, a website maintenance and disaster-recovery company, on August 16, 2018.[9]
On June 29, 2020, Comodo announced its partnership with CyberSecOp.[citation needed]The firm has partnered with Comodo in the past, and seeks to provide a range of cybersecurity products and consulting services.
Comodo is a member of the following industry organizations:
In response to Symantec's comment asserting that paid antivirus is superior to free antivirus, the CEO of Comodo Group, Melih Abdulhayoğlu, challenged Symantec on 18 September 2010 to a test of whether paid or free products can better defend the consumer against malware.[19] GCN's John Breeden understood Comodo's stance on free antivirus software and its challenge to Symantec: "This is actually a pretty smart move based on previous reviews of AV performance we've done in the GCN Lab. Our most recent AV review this year showed no functional difference between free and paid programs in terms of stopping viruses, and it's been that way for many years. In fact you have to go all the way back to 2006 to find an AV roundup where viruses were missed by some companies."[20]
Symantec responded saying that if Comodo is interested they should have their product included in tests by independent reviewers.[21]
Comodo volunteered to a Symantec vs. Comodo independent review.[22]Though this showdown did not take place, Comodo has since been included in multiple independent reviews with AV-Test,[23]PC World,[24]Best Antivirus Reviews,[25]AV-Comparatives,[26]and PC Mag.[27]
On 23 March 2011, Comodo posted a report that, 8 days earlier, on 15 March 2011, a user account with an affiliate registration authority had been compromised and was used to create a new user account that issued nine certificate signing requests.[28] Nine certificates for seven domains were issued.[28] The attack was traced to IP address 212.95.136.18, which originates in Tehran, Iran.[28] Moxie Marlinspike analyzed the IP address on his website the next day and found it to have English localization and the Windows operating system.[29] Though the firm initially reported that the breach was the result of a "state-driven attack", it subsequently stated that the origin of the attack may be the "result of an attacker attempting to lay a false trail."[28][30]
Comodo revoked all of the bogus certificates shortly after the breach was discovered. Comodo also stated that it was actively looking into ways to improve the security of its affiliates.[31]
In an update on 31 March 2011, Comodo stated that it had detected and thwarted an intrusion into a reseller user account on 26 March 2011. The new controls implemented by Comodo following the incident of 15 March 2011 removed any risk of the fraudulent issue of certificates. Comodo believed the attack was from the same perpetrator as the 15 March 2011 incident.[32]
In regards to this second incident, Comodo stated, "Our CA infrastructure was not compromised. Our keys in our HSMs were not compromised. No certificates have been fraudulently issued. The attempt to fraudulently access the certificate ordering platform to issue a certificate failed."[33]
On 26 March 2011, a person under the username "ComodoHacker" verified that they were the attacker by posting the private keys online[34] and posted a series of messages detailing how poor Comodo's security is and bragging about their abilities.[35][36]
As of 2016, all of the certificates remain revoked.[28] Microsoft issued a security advisory and update to address the issue at the time of the event.[37][38]
Citing Comodo's lacking response to the issue, computer security researcher Moxie Marlinspike called the whole event extremely embarrassing for Comodo and grounds for rethinking SSL security. It was also implied that the attacker had followed an online video tutorial and searched for basic opsec techniques.[29]
Such attacks are not unique to Comodo – the specifics will vary from CA to CA, RA to RA, but there are so many of these entities, all of them trusted by default, that further holes are deemed to be inevitable.[39]
In February 2015, Comodo was associated with a man-in-the-middle enabling tool known as PrivDog, which claims to protect users against malicious advertising.[40]
PrivDog issued a statement on 23 February 2015, saying, "A minor intermittent defect has been detected in a third party library used by the PrivDog standalone application which potentially affects a very small number of users. This potential issue is only present in PrivDog versions 3.0.96.0 and 3.0.97.0. The potential issue is not present in the PrivDog plug-in that is distributed with Comodo Browsers, and Comodo has not distributed this version to its users. There are potentially a maximum of 6,294 users in the USA and 57,568 users globally that this could potentially impact. The third party library used by PrivDog is not the same third party library used by Superfish....The potential issue has already been corrected. There will be an update tomorrow which will automatically update all 57,568 users of these specific PrivDog versions."[41]
In 2009 Microsoft MVP Michael Burgess accused Comodo of issuing digital certificates to known malware distributors.[42]Comodo responded when notified and revoked the certificates in question, which were used to sign the known malware.[43]
In January 2016, Tavis Ormandy reported that Comodo's Chromodo browser exhibited a number of vulnerabilities, including disabling of the same-origin policy.[44]
The vulnerability wasn't in the browser itself. Rather, the issue was with an add-on. As soon as Comodo became aware of the issue in early February 2016, the company released a statement and a fix: "As an industry, software in general is always being updated, patched, fixed, addressed, improved – it goes hand in hand with any development cycle...What is critical in software development is how companies address an issue if a certain vulnerability is found – ensuring it never puts the customer at risk." Those using Chromodo immediately received an update.[45] The Chromodo browser was subsequently discontinued by Comodo.
Ormandy noted that Comodo received an "Excellence in Information Security Testing" award from Verizon despite the vulnerability in its browser, despite shipping its VNC with weak default authentication, despite not enabling address space layout randomization (ASLR), and despite using access control lists (ACLs) throughout its product. Ormandy holds the opinion that Verizon's certification methodology is at fault here.[46]
In October 2015, Comodo applied for "Let's Encrypt", "Comodo Let's Encrypt", and "Let's Encrypt with Comodo" trademarks.[47][48][49] These trademark applications were filed almost a year after the Internet Security Research Group, parent organization of Let's Encrypt, started using the name Let's Encrypt publicly in November 2014,[50] and despite the fact that Comodo's "intent to use" trademark filings acknowledge that it has never used "Let's Encrypt" as a brand.
On 24 June 2016, Comodo publicly posted in its forum that it had filed for "express abandonment" of their trademark applications.[51]
Comodo's Chief Technical Officer Robin Alden said, "Comodo has filed for express abandonment of the trademark applications at this time instead of waiting and allowing them to lapse. Following collaboration between Let's Encrypt and Comodo, the trademark issue is now resolved and behind us, and we'd like to thank the Let's Encrypt team for helping to bring it to a resolution."[52]
On 25 July 2016, Matthew Bryant showed that Comodo's website is vulnerable to dangling markup injection attacks, through which an attacker can send emails to system administrators from Comodo's servers to approve a wildcard certificate issue request, which can be used to issue arbitrary wildcard certificates via Comodo's 30-Day PositiveSSL product.[53]
Bryant reached out in June 2016, and on 25 July 2016, Comodo's Chief Technical Officer Robin Alden confirmed a fix was put in place, within the responsible disclosure date per industry standards.[54]
|
https://en.wikipedia.org/wiki/Comodo_Cybersecurity#2011_breach_incident
|
Key Transparency allows communicating parties to verify public keys used in end-to-end encryption.[1] In many end-to-end encryption services, to initiate communication a user will reach out to a central server and request the public keys of the user with which they wish to communicate.[2] If the central server is malicious or becomes compromised, a man-in-the-middle attack can be launched through the issuance of incorrect public keys. The communications can then be intercepted and manipulated.[3] Additionally, legal pressure could be applied by surveillance agencies to manipulate public keys and read messages.[2]
With Key Transparency, public keys are posted to a public log that can be universally audited.[4] Communicating parties can verify that the public keys used are accurate.[4]
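As an illustrative sketch only (deployed designs such as CONIKS and Google's Key Transparency use Merkle trees for efficient proofs; the hash-chained log below is a simplification), a public, append-only key log lets anyone recompute the log head and detect tampering:

```python
import hashlib

def entry_hash(prev_hash: bytes, user: str, pubkey: bytes) -> bytes:
    """Chain each (user, key) record to the previous log head."""
    return hashlib.sha256(prev_hash + user.encode() + pubkey).digest()

class KeyLog:
    """Append-only public key log; auditors replay entries to check the head."""
    def __init__(self):
        self.entries = []          # list of (user, pubkey)
        self.head = b"\x00" * 32   # running hash over all entries

    def publish(self, user, pubkey):
        self.entries.append((user, pubkey))
        self.head = entry_hash(self.head, user, pubkey)

    def latest_key(self, user):
        for u, k in reversed(self.entries):
            if u == user:
                return k
        return None

def audit(entries, claimed_head):
    """Anyone can recompute the head from the public entries."""
    h = b"\x00" * 32
    for user, key in entries:
        h = entry_hash(h, user, key)
    return h == claimed_head

log = KeyLog()
log.publish("alice", b"alice-key-1")
log.publish("bob", b"bob-key-1")
# A communicating party checks the server's answer against the audited log.
assert audit(log.entries, log.head)
assert log.latest_key("bob") == b"bob-key-1"
```

A server that substitutes an incorrect key must either omit it from the log (so the victim's lookup fails to match) or publish it, in which case auditors and the key owner can see the substitution.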
|
https://en.wikipedia.org/wiki/Key_Transparency
|
Constrained Application Protocol (CoAP) is a specialized UDP-based Internet application protocol for constrained devices, as defined in RFC 7252 (published in 2014). It enables those constrained devices, called "nodes", to communicate with the wider Internet using similar protocols.
CoAP is designed for use between devices on the same constrained network (e.g., low-power, lossy networks), between devices and general nodes on the Internet, and between devices on different constrained networks both joined by an internet. CoAP is also being used via other mechanisms, such as SMS on mobile communication networks.
CoAP is an application-layer protocol that is intended for use in resource-constrained Internet devices, such as wireless sensor network nodes. CoAP is designed to easily translate to HTTP for simplified integration with the web, while also meeting specialized requirements such as multicast support, very low overhead, and simplicity.[1][2] Multicast, low overhead, and simplicity are important for Internet of things (IoT) and machine-to-machine (M2M) communication, which tend to be embedded and have much less memory and power supply than traditional Internet devices have. Therefore, efficiency is very important. CoAP can run on most devices that support UDP or a UDP analogue.
The Internet Engineering Task Force (IETF) Constrained RESTful Environments Working Group (CoRE) has done the major standardization work for this protocol. In order to make the protocol suitable for IoT and M2M applications, various new functions have been added.
The core of the protocol is specified in RFC 7252. Various extensions have been proposed, particularly:
CoAP makes use of two message types, requests and responses, using a simple, binary header format. CoAP is by default bound to UDP and optionally to DTLS, providing a high level of communications security. When bound to UDP, the entire message must fit within a single datagram. When used with 6LoWPAN as defined in RFC 4944, messages should fit into a single IEEE 802.15.4 frame to minimize fragmentation.
The smallest CoAP message is 4 bytes in length, if the token, options and payload fields are omitted, i.e. if it only consists of the CoAP header. The header is followed by the token value (0 to 8 bytes) which may be followed by a list of options in an optimized type–length–value format. Any bytes after the header, token and options (if any) are considered the message payload, which is prefixed by the one-byte "payload marker" (0xFF). The length of the payload is implied by the datagram length.
The first 4 bytes are mandatory in all CoAP datagrams; they constitute the fixed-size header.
These fields can be extracted from the first 4 bytes with simple bit-shift and mask operations.
The three most significant bits form a number known as the "class", which is analogous to the class of HTTP status codes. The five least significant bits form a code that communicates further detail about the request or response. The entire code is typically communicated in the form class.code.
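The fixed-header layout and the class.code notation described above can be sketched as follows (a minimal Python illustration of the bit layout in RFC 7252, section 3; the function name is our own):

```python
def parse_coap_header(datagram: bytes):
    """Extract the fixed 4-byte CoAP header fields (RFC 7252, section 3)."""
    ver  = (datagram[0] >> 6) & 0x03   # version, always 1
    typ  = (datagram[0] >> 4) & 0x03   # 0=CON, 1=NON, 2=ACK, 3=RST
    tkl  = datagram[0] & 0x0F          # token length (0..8 bytes follow)
    code = datagram[1]
    mid  = (datagram[2] << 8) | datagram[3]
    # "class.code" notation: 3-bit class, 5-bit detail
    code_str = f"{code >> 5}.{code & 0x1F:02d}"
    return ver, typ, tkl, code_str, mid

# A 2.05 Content (code byte 0x45) ACK with message ID 0x1234 and no token:
header = bytes([0x60, 0x45, 0x12, 0x34])
# parse_coap_header(header) → (1, 2, 0, '2.05', 0x1234)
```

Any bytes after these four (plus the token and options) would form the payload, introduced by the 0xFF payload marker.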
The latest CoAP request/response codes can be found at[1]; the list below gives some examples:
Every request carries a token (but it may be zero length) whose value was generated by the client. The server must echo every token value without any modification back to the client in the corresponding response. It is intended for use as a client-local identifier to match requests and responses, especially for concurrent requests.
Matching requests and responses is not done with the message ID because a response may be sent in a different message than the acknowledgement (which uses the message ID for matching). For example, this could be done to prevent retransmissions if obtaining the result takes some time. Such a detached response is called "separate response". In contrast, transmitting the response directly in the acknowledgement is called "piggybacked response" which is expected to be preferred for efficiency reasons.
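The token-based matching described above can be sketched as follows (a minimal illustration with hypothetical names, not a real CoAP client):

```python
import os

class CoapClientSketch:
    """Match responses to requests by token, not by message ID."""
    def __init__(self):
        self.pending = {}   # token -> outstanding request

    def send(self, request):
        token = os.urandom(4)       # client-generated, up to 8 bytes
        self.pending[token] = request
        return token

    def on_response(self, token, payload):
        # A separate response arrives in its own message with its own
        # message ID, but carries the same token, so it still matches
        # the original request.
        request = self.pending.pop(token, None)
        return request, payload
```

Because the server echoes the token unchanged, this lookup works whether the response was piggybacked on the acknowledgement or sent later as a separate response.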
Option delta:
Option length:
Option value:
Notable extensions include observing resources (RFC 7641) and block-wise transfers (RFC 7959).
There exist proxy implementations which provide forward or reverse proxy functionality for the CoAP protocol, and also implementations which translate between protocols like HTTP and CoAP.
The following projects provide proxy functionality:
In many CoAP application domains it is essential to have the ability to address several CoAP resources as a group, instead of addressing each resource individually (e.g. to turn on all the CoAP-enabled lights in a room with a single CoAP request triggered by toggling the light switch).
To address this need, the IETF has developed an optional extension for CoAP in the form of an experimental RFC, Group Communication for CoAP (RFC 7390).[3] This extension relies on IP multicast to deliver the CoAP request to all group members.
The use of multicast has certain benefits such as reducing the number of packets needed to deliver the request to the members.
However, multicast also has its limitations such as poor reliability and being cache-unfriendly.
An alternative method for CoAP group communication that uses unicasts instead of multicasts relies on having an intermediary where the groups are created.
Clients send their group requests to the intermediary, which in turn sends individual unicast requests to the group members, collects the replies from them, and sends back an aggregated reply to the client.[4]
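The unicast-based alternative described above can be sketched as a simple fan-out/aggregate loop (an illustration only; `send_unicast` is a hypothetical stand-in for a real CoAP client call):

```python
def group_request(send_unicast, members, request):
    """Sketch of an intermediary: fan a group request out as unicasts
    and aggregate the individual replies into one response."""
    replies = {}
    for member in members:
        replies[member] = send_unicast(member, request)
    return replies   # aggregated reply returned to the original client

# Stand-in transport for illustration:
fake_send = lambda member, req: f"2.04 Changed from {member}"
agg = group_request(fake_send, ["light1", "light2"], "POST /state on")
```

Compared with multicast, this trades more packets for reliable, cache-friendly unicast exchanges.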
CoAP defines four security modes:[5]
Research has been conducted on optimizing DTLS by implementing security associations as CoAP resources rather than using DTLS as a security wrapper for CoAP traffic. This research has indicated improvements of up to 6.5 times over non-optimized implementations.[6]
In addition to DTLS, RFC 8613[7] defines the Object Security for Constrained RESTful Environments (OSCORE) protocol, which provides security for CoAP at the application layer.
Although the protocol standard includes provisions for mitigating the threat of DDoS amplification attacks,[8] these provisions are not implemented in practice,[9] resulting in the presence of over 580,000 targets primarily located in China and attacks up to 320 Gbit/s.[10]
|
https://en.wikipedia.org/wiki/Constrained_Application_Protocol
|
In computer networking, the Datagram Congestion Control Protocol (DCCP) is a message-oriented transport layer protocol. DCCP implements reliable connection setup, teardown, Explicit Congestion Notification (ECN), congestion control, and feature negotiation. The IETF published DCCP as RFC 4340, a proposed standard, in March 2006. RFC 4336 provides an introduction.
DCCP provides a way to gain access to congestion-control mechanisms without having to implement them at the application layer. It allows for flow-based semantics like in the Transmission Control Protocol (TCP), but does not provide reliable in-order delivery. Sequenced delivery within multiple streams as in the Stream Control Transmission Protocol (SCTP) is not available in DCCP. A DCCP connection contains acknowledgment traffic as well as data traffic. Acknowledgments inform a sender whether its packets have arrived, and whether they were marked by Explicit Congestion Notification (ECN). Acknowledgements are transmitted as reliably as the congestion control mechanism in use requires, possibly completely reliably.
DCCP has the option for very long (48-bit) sequence numbers corresponding to a packet ID, rather than a byte ID as in TCP. The long length of the sequence numbers aims to guard against "some blind attacks, such as the injection of DCCP-Resets into the connection".[1]
DCCP is useful for applications with timing constraints on the delivery of data. Such applications include streaming media, multiplayer online games and Internet telephony. In such applications, old messages quickly become useless, so that getting new messages is preferred to resending lost messages. As of 2017[update] such applications have often either settled for TCP or used User Datagram Protocol (UDP) and implemented their own congestion-control mechanisms, or have no congestion control at all. While being useful for these applications, DCCP can also serve as a general congestion-control mechanism for UDP-based applications, by adding, as needed, mechanisms for reliable or in-order delivery on top of UDP/DCCP. In this context, DCCP allows the use of different, but generally TCP-friendly congestion-control mechanisms.
The following operating systems implement DCCP:
Userspace library:
The DCCP generic header takes different forms depending on the value of X, the Extended Sequence Numbers bit. If X is one, the Sequence Number field is 48 bits long, and the generic header takes 16 bytes, as follows.
If X is zero, only the low 24 bits of the Sequence Number are transmitted, and the generic header is 12 bytes long.
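The two header forms above can be distinguished by reading the X bit; a minimal sketch of that extraction, based on the field offsets in RFC 4340 section 5.1 (the function name is our own):

```python
def dccp_seqno(header: bytes):
    """Read Type, X and the sequence number from a DCCP generic header:
    16 bytes long when X=1, 12 bytes long when X=0 (RFC 4340, sec. 5.1)."""
    pkt_type = (header[8] >> 1) & 0x0F   # 4-bit packet type
    x = header[8] & 0x01                 # Extended Sequence Numbers bit
    if x:
        # byte 9 is reserved; bytes 10..15 carry the 48-bit sequence number
        seq = int.from_bytes(header[10:16], "big")
    else:
        # only the low 24 bits of the sequence number are transmitted
        seq = int.from_bytes(header[9:12], "big")
    return pkt_type, x, seq
```

A receiver must therefore parse byte 8 before it knows whether the generic header occupies 12 or 16 bytes.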
Similarly to the extension of TCP with multipath capability (MPTCP), a multipath feature for DCCP, correspondingly denoted MP-DCCP, is under discussion at the IETF.[7] First implementations have already been developed, tested, and presented in a collaborative approach between operators and academia,[8] and are available as an open source solution.
|
https://en.wikipedia.org/wiki/Datagram_Congestion_Control_Protocol
|
The Fast Adaptive and Secure Protocol (FASP) is a proprietary data transfer protocol. FASP is a network-optimized protocol created by Michelle C. Munson and Serban Simu, productized by Aspera, and now owned by IBM subsequent to its acquisition of Aspera. The associated client/server software packages are also commonly called Aspera.[1][2] The technology is patented under US Patent #8085781, Bulk Data Transfer, #20090063698, Method and system for aggregate bandwidth control,[3] and others.
Built upon the connectionless UDP protocol, FASP does not expect any feedback on every packet sent, yet provides fully reliable data transfer over best-effort IP networks. Only the packets that are actually lost must be requested again by the recipient. As a result, it does not suffer as much loss of throughput as TCP does on networks with high latency or high packet loss, and avoids the overhead of naive "UDP data blaster" protocols.[4][5] The protocol improves upon naive "data blaster" protocols through an optimal control-theoretic retransmission algorithm and implementation that achieves maximum goodput and avoids redundant retransmission of data. Its control model is designed to fill the available bandwidth of the end-to-end path over which the transfer occurs with only "good" and needed data.
Large organizations like the European Nucleotide Archive,[2] the US National Institutes of Health National Center for Biotechnology Information[6] and others[7] use the protocol. The technology was recognized with many awards, including an Engineering Emmy from the Academy of Film and Television.
FASP has built-in security mechanisms that do not affect the transmission speed. The encryption algorithms used are based exclusively on open standards. Some product implementations use secure key exchange and authentication such as SSH.
The data is optionally encrypted or decrypted immediately before sending and receiving with AES-128. To counteract attacks by monitoring the encrypted information during long transfers, AES is operated in cipher feedback mode with a random, public initialization vector for each block. In addition, an integrity check of each data block takes place, in which case, for example, a man-in-the-middle attack would be noticed.
FASP's control port is TCP port 22 – the same port that SSH uses. For data transfer, it begins at UDP port 33001, which increments with each additional connection thread.[1]
FASP's flow control algorithm, unlike TCP's, completely ignores packet drops. Instead, it acts on changes in measured packet delivery time: when delivery time is growing, queues are getting longer and the channel bandwidth is exceeded; when it is falling, queues are getting shorter. Acting on this information is complicated because the receiver has it while the sender needs it, its lifetime is often less than the transmission delay, and the measurements are noisy. Thus, the sender uses a predictive filter fed updates from the receiver.[8]
The transmission rate is chosen to match and not exceed the available channel bandwidth, and trigger no drops, accounting for all traffic on the channel.[9] By contrast, TCP slowly increases its rate until it sees a packet drop and falls back, interpreting any drop as congestion. On a channel with long delay and frequent packet loss, TCP never approaches the actual bandwidth available. FASP cooperates with TCP flows on the same channel, using up bandwidth TCP leaves unused.
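FASP's actual algorithm is proprietary, but the delay-based rate selection described above can be illustrated with a generic sketch: steer the measured queuing delay toward a small target rather than reacting to drops (all names and constants here are illustrative assumptions, not Aspera's):

```python
def update_rate(rate, base_delay, smoothed_delay, target=0.05,
                gain=0.25, min_rate=1e5):
    """Illustrative delay-based rate update (not FASP's real algorithm):
    grow while queuing delay is under a small target, shrink when over."""
    queuing = smoothed_delay - base_delay   # extra delay caused by queues
    error = (target - queuing) / target     # > 0 means there is headroom
    return max(min_rate, rate * (1 + gain * error))

# Queues nearly empty (10 ms of queuing vs a 50 ms target): rate grows.
r_up = update_rate(1e6, base_delay=0.02, smoothed_delay=0.03)
# Queues past the target (100 ms of queuing): rate is reduced.
r_down = update_rate(1e6, base_delay=0.02, smoothed_delay=0.12)
```

A drop-based sender would instead keep accelerating until loss occurred; a delay-based controller like this backs off before the queue overflows.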
|
https://en.wikipedia.org/wiki/Fast_and_Secure_Protocol
|
HTTP/3 is the third major version of the Hypertext Transfer Protocol used to exchange information on the World Wide Web, complementing the widely deployed HTTP/1.1 and HTTP/2. Unlike previous versions which relied on the well-established TCP (published in 1974),[2] HTTP/3 uses QUIC (officially introduced in 2021),[3] a multiplexed transport protocol built on UDP.[4]
HTTP/3 uses similar semantics compared to earlier revisions of the protocol, including the same request methods, status codes, and message fields, but encodes them and maintains session state differently. However, partially due to the protocol's adoption of QUIC, HTTP/3 has lower latency and loads more quickly in real-world usage when compared with previous versions: in some cases over four times as fast as with HTTP/1.1 (which, for many websites, is the only HTTP version deployed).[5][6]
As of September 2024, HTTP/3 is supported by more than 95% of major web browsers in use[7] and 34% of the top 10 million websites.[8] It has been supported by Chromium (and derived projects including Google Chrome, Microsoft Edge, Samsung Internet, and Opera)[9] since April 2020 and by Mozilla Firefox since May 2021.[7][10] Safari 14 implemented the protocol but it remains disabled by default.[11]
HTTP/3 originates from an Internet Draft adopted by the QUIC working group. The original proposal was named "HTTP/2 Semantics Using The QUIC Transport Protocol",[12] and later renamed "Hypertext Transfer Protocol (HTTP) over QUIC".[13]
On 28 October 2018 in a mailing list discussion, Mark Nottingham, Chair of the IETF HTTP and QUIC Working Groups, proposed renaming HTTP-over-QUIC to HTTP/3, to "clearly identify it as another binding of HTTP semantics to the wire protocol [...] so people understand its separation from QUIC".[14] Nottingham's proposal was accepted by fellow IETF members a few days later. The HTTP working group was chartered to assist the QUIC working group during the design of HTTP/3, then assume responsibility for maintenance after publication.[15]
Support for HTTP/3 was added to Chrome (Canary build) in September 2019 and then eventually reached stable builds, but was disabled by a feature flag. It was enabled by default in April 2020.[9] Firefox added support for HTTP/3 in November 2019 through a feature flag[7][16][17] and started enabling it by default in April 2021 in Firefox 88.[7][10] Experimental support for HTTP/3 was added to Safari Technology Preview on April 8, 2020[18] and was included with Safari 14 that ships with iOS 14 and macOS 11,[11][19] but it's still disabled by default as of Safari 16, on both macOS and iOS.[citation needed]
On 6 June 2022, the IETF published HTTP/3 as a Proposed Standard in RFC 9114.[1]
HTTP semantics are consistent across versions: the same request methods, status codes, and message fields are typically applicable to all versions. The differences are in the mapping of these semantics to underlying transports. Both HTTP/1.1 and HTTP/2 use TCP as their transport. HTTP/3 uses QUIC, a transport layer network protocol which uses user space congestion control over the User Datagram Protocol (UDP). The switch to QUIC aims to fix a major problem of HTTP/2 called "head-of-line blocking": because the parallel nature of HTTP/2's multiplexing is not visible to TCP's loss recovery mechanisms, a lost or reordered packet causes all active transactions to experience a stall regardless of whether that transaction was impacted by the lost packet. Because QUIC provides native multiplexing, lost packets only impact the streams where data has been lost.
Proposed DNS resource records SVCB (service binding) and HTTPS would allow connecting without first receiving the Alt-Svc header via previous HTTP versions, therefore removing the 1 RTT of TCP handshaking.[20][21] There is client support for HTTPS resource records since Firefox 92 and iOS 14, with reported Safari 14 support, and Chromium supports it behind a flag.[22][23][24]
Open-source libraries that implement client or server logic for QUIC and HTTP/3 include:[28]
|
https://en.wikipedia.org/wiki/HTTP/3
|
Low Extra Delay Background Transport (LEDBAT) is a way to transfer data on the Internet quickly without clogging the network.[1] LEDBAT was invented by Stanislav Shalunov[2][3] and is used by Apple for software updates, by BitTorrent for most of its transfers[4] and by Microsoft SCCM software distribution points.[5] At one point in time, LEDBAT was estimated to carry 13–20% of Internet traffic.[4][6][3]
LEDBAT is a delay-based congestion control algorithm that uses all the available bandwidth while limiting the increase in delay;[2][7] it does so by measuring one-way delay and using changes in the measurements to limit the congestion that the LEDBAT flow itself induces in the network. LEDBAT is described in RFC 6817.
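The controller at the heart of RFC 6817 can be sketched as a linear update of the congestion window driven by how far the measured queuing delay is from a fixed target (a simplified illustration of the RFC's window-growth formula; constants and names follow the RFC's spirit but omit many details such as delay filtering and slow start):

```python
TARGET = 0.1   # seconds of allowed self-induced queuing delay
GAIN = 1.0

def ledbat_cwnd(cwnd, queuing_delay, bytes_acked, mss=1452.0):
    """Grow the window while queuing delay is under TARGET; back off
    proportionally when the flow pushes the delay past it."""
    off_target = (TARGET - queuing_delay) / TARGET  # +1 .. negative
    cwnd += GAIN * off_target * bytes_acked * mss / cwnd
    return max(cwnd, mss)   # never shrink below one segment
```

Because `off_target` goes negative as soon as the flow's own queuing delay exceeds the target, a LEDBAT sender yields quickly to competing traffic instead of filling buffers the way loss-based TCP does.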
LEDBAT congestion control has the following goals:[2]
The two main implementations are uTP by BitTorrent and as part of TCP by Apple. BitTorrent uses uTP for most traffic and makes the code available under an open-source license.[8] Apple uses LEDBAT for Software Updates so that large software downloads to macOS computers and iOS devices do not interfere with normal user activities; Apple also makes the source code available.[9]
Both of the above implementations aim to limit the network queuing delay to 100 ms, the maximum allowed for by the standardized protocol. If one implementation used a lower value, it would be starved when the other was in use.[2][9]
Windows 10 Anniversary Update and Windows Server 2019 introduced support for LEDBAT via an undocumented socket option, as an experimental Windows TCP congestion control module.[10][11][12]
Assumptions:
The sender sends 5 packets of data every 10 clock counts: 10, 20, 30, 40, 50. The units are unimportant. The receiver is receiving data not only from this particular sender but also from other sources. For the 5 packets that were sent, the receiver receives them at the following clock counts: 112, 135, 176, 250, 326. The first differences (one way delay) between the received and sent clock counts are: 102, 115, 146, 210, 276. The second differences (change in one way delay) are: 13 (115 - 102), 31, 64 and 66. The receiver will infer from the positive increase in one way delays that congestion is increasing and adjust the transfer rate accordingly.
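The arithmetic in the example above can be checked directly (a short illustration in Python using the numbers from the text):

```python
sent     = [10, 20, 30, 40, 50]
received = [112, 135, 176, 250, 326]

# first differences: apparent one-way delay of each packet
one_way = [r - s for s, r in zip(sent, received)]
# → [102, 115, 146, 210, 276]

# second differences: change in one-way delay between successive packets
deltas = [b - a for a, b in zip(one_way, one_way[1:])]
# → [13, 31, 64, 66]

# consistently positive deltas signal growing queues, so the sender
# should reduce its transfer rate
congestion_rising = all(d > 0 for d in deltas)
```

Note that only the differences matter: the sender's and receiver's clocks need not be synchronized, since any constant clock offset cancels out.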
|
https://en.wikipedia.org/wiki/LEDBAT
|
Micro Transport Protocol (μTP, sometimes uTP) is an open User Datagram Protocol-based (UDP-based) variant of the BitTorrent peer-to-peer file-sharing protocol intended to mitigate poor latency and other congestion control problems found in conventional BitTorrent over Transmission Control Protocol (TCP), while providing reliable, ordered delivery.
It was devised to automatically slow down[1] the rate at which packets of data are transmitted between users of peer-to-peer file-sharing torrents when it interferes with other applications. For example, the protocol should automatically allow the sharing of a DSL line between a BitTorrent application and a web browser.
μTP emerged from research at Internet2 on QoS and high-performance bulk transport, was adapted for use as a background transport protocol by Plicto, founded by Stanislav Shalunov and Ben Teitelbaum[2] and later acquired by BitTorrent, Inc. in 2006, and was further developed within its new owner.[3] It was first introduced in the μTorrent 1.8.x beta branches, and publicized in the alpha builds of μTorrent 1.9.[4][5]
The implementation of μTP used in μTorrent was later separated into the "libutp" library and published under the MIT license.[6][7]
The first free software client to implement μTP was KTorrent 4.0.[8][9] libtorrent implements μTP since version 0.16.0[10] and it is used in qBittorrent since 2.8.0.[11] Tixati implements μTP since version 1.72.[12] Vuze (formerly Azureus) implements μTP since version 4.6.0.0.[13] Transmission implements μTP since version 2.30.[14]
The congestion control algorithm used by μTP, known as Low Extra Delay Background Transport (LEDBAT), aims to decrease the latency caused by applications using the protocol while maximizing bandwidth when latency is not excessive.[15][16] Additionally, information from the μTP congestion controller can be used to choose the transfer rate of TCP connections.[17]
LEDBAT is described in RFC 6817. As of 2009, the details of the μTP implementation were different from those of the then-current Internet Draft.[18]
μTP also adds support for NAT traversal using UDP hole punching between two port-restricted peers, where a third unrestricted peer acts as a STUN server.[19][20]
|
https://en.wikipedia.org/wiki/Micro_Transport_Protocol
|
Multipurpose Transaction Protocol (MTP) software is a proprietary transport protocol (OSI Layer 4) developed and marketed by Data Expedition, Inc. (DEI). DEI claims that MTP offers superior performance and reliability when compared to the Transmission Control Protocol (TCP) transport protocol.[1]
MTP is implemented using the User Datagram Protocol (UDP) packet format. It uses proprietary flow-control and error-correction algorithms to achieve reliable delivery of data and avoid network flooding.
Because MTP/IP uses proprietary algorithms, compatible software must be installed on both ends of a communication path. Use of the UDP packet format permits compatibility with standard Internet Protocol (IP) network hardware and software. MTP/IP applications may use any available UDP port number.
MTP and the applications which use it have been implemented for several operating systems, including versions of Microsoft Windows, macOS, iOS, and Linux. Hardware platforms include variants of x86 and ARM.[2]
MTP/IP is marketed by Data Expedition, Inc. Trial versions of applications which use MTP/IP are available on the company's website.
|
https://en.wikipedia.org/wiki/Multipurpose_Transaction_Protocol
|
The Secure Real-Time Media Flow Protocol (RTMFP) is a protocol suite developed by Adobe Systems for encrypted, efficient multimedia delivery through both client-server and peer-to-peer models over the Internet. The protocol was originally proprietary, but was later opened up and is now published as RFC 7016.[1]
RTMFP allows users of live, real-time communications, such as social networking services and multi-user games, to communicate directly with each other using their computer's microphone and webcam. RTMFP is a peer-to-peer system, but is only designed for direct end-user to end-user real-time communication, not for file sharing between multiple peers using segmented downloading.[2] Facebook uses this protocol in its Pipe application.[3]
RTMFP enables direct, live, real-time communication for applications such as audio and video chat and multi-player games. Because RTMFP flows data between the end-user clients and not through the server, no bandwidth is used at the server. RTMFP uses the User Datagram Protocol (UDP) to send video and audio data over the Internet, so it needs to handle missing, dropped, or out-of-order packets. RTMFP has two features that may help to mitigate the effects of connection errors.
Rapid Connection Restore: Connections are re-established quickly after brief outages, for example, when a wireless network connection experiences a dropout. After reconnection, the connection has full capabilities instantly.
IP Mobility: Active network peer sessions are maintained even if a client changes to a new IP address, for example, when a laptop on a wireless network is plugged into a wired connection and receives a new address.
The principal difference is how the protocols communicate over the network. RTMFP is based on the User Datagram Protocol (UDP),[1] whereas Real-Time Messaging Protocol (RTMP) is based on the Transmission Control Protocol (TCP).
UDP-based protocols have some specific advantages over TCP-based protocols when delivering live streaming media, such as decreased latency and overhead, and greater tolerance for dropped or missing packets, at the cost of decreased reliability.
Unlike RTMP, RTMFP also supports sending data directly from one Adobe Flash Player to another, without going through a server. A server-side connection will always be required to establish the initial connection between the end-users, and can be used to provide server-side data execution or gateways into other systems. The user of a Flash Media Server will also be required to authorize network address lookup and NAT traversal services for the clients, to prevent Flash Player from being used in an unmanaged way.
Flash Player 10.0 allowed only one-to-one P2P communication, but from 10.1 application-level multicast is allowed. Flash Player finds an appropriate distribution route (an overlay network) and can distribute data to the group, whose members are connected via P2P.
RTMFP's underlying protocols are the result of Adobe's acquisition of Amicima in 2006; strong architectural similarities exist between RTMFP and Amicima's GPL-licensed Secure Media Flow Protocol (MFP).
|
https://en.wikipedia.org/wiki/Real-Time_Media_Flow_Protocol
|
In computer networking, the Reliable User Datagram Protocol (RUDP) is a transport layer protocol designed at Bell Labs for the Plan 9 operating system. It aims to provide a solution where UDP is too primitive because guaranteed-order packet delivery is desirable, but TCP adds too much complexity/overhead. In order for RUDP to gain higher quality of service, RUDP implements features that are similar to TCP with less overhead.
In order to ensure quality, it extends UDP by means of adding the following features:
RUDP is not currently a formal standard; however, it was described in an IETF Internet Draft in 1999.[1] It has not been proposed for standardization.
Ciscoin its Signalling Link Terminals (either standalone or integrated in another gateway) uses RUDP forbackhaulingofSS7MTP3 orISDNsignaling.
The versions are mutually incompatible and differ slightly from the IETF draft.[citation needed]The structure of the Cisco Session Manager used on top of RUDP is also different.
Microsoft introduced another protocol which it named R-UDP and used it in its MediaRoom product (now owned by Ericsson) for IPTV service delivery over multicast networks. This is a proprietary protocol and very little is known about its operation. It is not thought to be based on the above referenced IETF draft.[2]
|
https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol
|
SPDY (pronounced "speedy")[1] is an obsolete open-specification communication protocol developed for transporting web content.[1] SPDY became the basis for the HTTP/2 specification. However, HTTP/2 diverged from SPDY, and HTTP/2 eventually subsumed all use cases of SPDY.[2] After HTTP/2 was ratified as a standard, major implementers, including Google, Mozilla, and Apple, deprecated SPDY in favor of HTTP/2. Since 2021, no modern browser supports SPDY.
Google announced SPDY in late 2009 and deployed it in 2010. SPDY manipulates HTTP traffic, with the particular goals of reducing web page load latency and improving web security. SPDY achieves reduced latency through compression, multiplexing, and prioritization,[1] although this depends on a combination of network and website deployment conditions.[3][4][5] The name "SPDY" is not an acronym.[6]
HTTP/2 was first discussed when it became apparent that SPDY was gaining traction with implementers (like Mozilla and nginx), and was showing significant improvements over HTTP/1.x. After a call for proposals and a selection process, SPDY was chosen as the basis for HTTP/2. Since then, there have been a number of changes, based on discussion in the Working Group and feedback from implementers.[2]
As of July 2012[update], the group developing SPDY stated publicly that it was working toward standardisation (available as an Internet Draft).[7] The first draft of HTTP/2 used SPDY as the working base for its specification draft and editing.[8] The IETF working group for HTTPbis has released the draft of HTTP/2.[9] SPDY (draft-mbelshe-httpbis-spdy-00) was chosen as the starting point.[10][11]
Throughout the process, the core developers of SPDY have been involved in the development of HTTP/2, including both Mike Belshe and Roberto Peon.
Chromium,[12] Mozilla Firefox,[13] Opera,[14] Amazon Silk, Internet Explorer,[15] and Safari[16] expressed support for SPDY at the time.
In February 2015, Google announced that following ratification of the HTTP/2 standard, support for SPDY would be deprecated and withdrawn.[17] On May 15, 2015, HTTP/2 was officially ratified as RFC 7540.
On February 11, 2016, Google announced that Chrome would no longer support SPDY after May 15, 2016, the one-year anniversary of RFC 7540, which standardized HTTP/2.[18]
On January 25, 2019, Apple announced that SPDY would be deprecated in favor of HTTP/2, and would be removed in future releases.[19]
Google removed SPDY support in Google Chrome 51, which was released in 2016.[20] Mozilla removed it in Firefox 50.[21] Apple deprecated the technology in macOS 10.14.4 and iOS 12.2.[19]
SPDY is a versioned protocol. SPDY control frames contain 15 dedicated bits to indicate the version of protocol used for the current session.[22]
The goal of SPDY is to reduce web page load time.[35] This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.[1][36] TLS encryption is nearly ubiquitous in SPDY implementations, and transmission headers are gzip- or DEFLATE-compressed by design[27] (in contrast to HTTP, where the headers are sent as human-readable text). Moreover, servers may hint or even push content instead of awaiting individual requests for each resource of a web page.[37]
SPDY requires the use of SSL/TLS (with the TLS extension ALPN) for security, but it also supports operation over plain TCP. The requirement for SSL is for security and to avoid incompatibility when communication is across a proxy.
SPDY does not replace HTTP; it modifies the way HTTP requests and responses are sent over the wire.[1] This means that all existing server-side applications can be used without modification if a SPDY-compatible translation layer is put in place.
SPDY is effectively a tunnel for the HTTP and HTTPS protocols. When sent over SPDY, HTTP requests are processed, tokenized, simplified and compressed. For example, each SPDY endpoint keeps track of which headers have been sent in past requests and can avoid resending the headers that have not changed; those that must be sent are compressed.
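The delta-plus-compression idea can be sketched as follows. This is an illustration of the mechanism only, not SPDY's actual frame format or preset zlib dictionary; the class and method names are invented.

```python
import zlib

class HeaderDeltaEncoder:
    """Toy model of SPDY-style header handling: an endpoint remembers the
    headers sent on earlier requests, transmits only those that changed,
    and DEFLATE-compresses what remains with a shared compression
    context, so repeated substrings from earlier frames cost little."""

    def __init__(self):
        self.last = {}
        self.z = zlib.compressobj()

    def encode(self, headers):
        # delta: only headers whose value changed since the last request
        delta = {k: v for k, v in headers.items() if self.last.get(k) != v}
        self.last = dict(headers)
        raw = "".join(f"{k}: {v}\r\n" for k, v in sorted(delta.items())).encode()
        # Z_SYNC_FLUSH emits a complete frame while keeping the stream open
        return self.z.compress(raw) + self.z.flush(zlib.Z_SYNC_FLUSH)

enc = HeaderDeltaEncoder()
first = enc.encode({"host": "example.org", "user-agent": "demo/1.0",
                    "accept": "text/html"})
second = enc.encode({"host": "example.org", "user-agent": "demo/1.0",
                     "accept": "image/png"})   # only 'accept' changed
print(len(first), len(second))   # second frame is much smaller
```

Using one long-lived compression context per direction is also why SPDY-era header compression later proved vulnerable to attacks like CRIME, which HTTP/2's HPACK was designed to avoid.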
For use within HTTPS, SPDY requires the TLS extension Next Protocol Negotiation (NPN)[38] or Application-Layer Protocol Negotiation (ALPN);[39] thus browser and server support depends on the HTTPS library.
OpenSSL 1.0.1 or greater introduces NPN.[40] Patches to add NPN support have also been written for NSS and TLSLite.[41]
Microsoft's Security Support Provider Interface (SSPI) has not implemented the NPN extension in its TLS implementation. This has prevented SPDY inclusion in the latest .NET Framework versions. Since the SPDY specification was still being refined and HTTP/2 was expected to subsume SPDY, Microsoft could be expected to add support only after HTTP/2 was finalized.
As of May 2021[update], approximately 0.1% of all websites support SPDY,[56] in part due to the transition to HTTP/2. In 2016, NGINX and Apache[57] were the major providers of SPDY traffic.[58] In 2015, NGINX 1.9.5 dropped SPDY support in favor of HTTP/2.[59]
Some Google services (e.g. Google Search, Gmail, and other SSL-enabled services) used SPDY when available.[60] Google's ads were also served from SPDY-enabled servers.[61]
A brief history of SPDY support amongst major web players:
According to W3Techs, as of May 2021[update], most SPDY-enabled websites use nginx, with the LiteSpeed web server coming second.[58]
|
https://en.wikipedia.org/wiki/SPDY
|
The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP), while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.
SCTP is standardized by the Internet Engineering Task Force (IETF) in RFC 9260. The SCTP reference implementation was released as part of FreeBSD version 7, and has since been widely ported to other platforms.
The IETF Signaling Transport (SIGTRAN) working group defined the protocol (number 132[1]) in October 2000,[2] and the IETF Transport Area (TSVWG) working group maintains it. RFC 9260 defines the protocol. RFC 3286 provides an introduction.
SCTP applications submit data for transmission in messages (groups of bytes) to the SCTP transport layer. SCTP places messages and control information into separate chunks (data chunks and control chunks), each identified by a chunk header. The protocol can fragment a message into multiple data chunks, but each data chunk contains data from only one user message. SCTP bundles the chunks into SCTP packets. The SCTP packet, which is submitted to the Internet Protocol, consists of a packet header, SCTP control chunks (when necessary), followed by SCTP data chunks (when available).
SCTP may be characterized as message-oriented, meaning it transports a sequence of messages (each being a group of bytes), rather than transporting an unbroken stream of bytes as in TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol, transporting streams of bytes reliably and in order. However, TCP does not allow the receiver to know how many times the sender application called on the TCP transport to pass it groups of bytes to be sent out. At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than keeping a queue of individual separate outbound messages which must be preserved as such.
The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images simultaneously with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages (or chunks) rather than bytes.
TCP preserves byte order in the stream by including a byte sequence number with each segment. SCTP, on the other hand, assigns a sequence number or a message-id[note 1] to each message sent in a stream. This allows independent ordering of messages in different streams. However, message ordering is optional in SCTP; a receiving application may choose to process messages in the order of receipt instead of in the order of sending.
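The per-stream ordering described above can be sketched with a toy receiver that buffers out-of-order messages independently per stream, so a gap in one stream never blocks delivery on another (this avoids the head-of-line blocking of a single TCP byte stream). The class and field names are invented for illustration.

```python
from collections import defaultdict

class StreamReceiver:
    """Toy model of SCTP's per-stream ordering: each stream carries its
    own sequence numbers, so a missing message delays only its own
    stream, not the whole association."""

    def __init__(self):
        self.next_ssn = defaultdict(int)   # next expected seq per stream
        self.pending = defaultdict(dict)   # out-of-order buffer per stream
        self.delivered = []

    def receive(self, stream_id, ssn, msg):
        self.pending[stream_id][ssn] = msg
        # deliver any in-order run now available on this stream
        while self.next_ssn[stream_id] in self.pending[stream_id]:
            ssn = self.next_ssn[stream_id]
            self.delivered.append((stream_id, self.pending[stream_id].pop(ssn)))
            self.next_ssn[stream_id] += 1

rx = StreamReceiver()
rx.receive(1, 1, "text-b")     # stream 1: message 0 still missing, buffered
rx.receive(2, 0, "image-a")    # stream 2 delivers immediately, unaffected
rx.receive(1, 0, "text-a")     # gap filled: stream 1 flushes in order
print(rx.delivered)
```

Note how the stream-2 message is delivered before stream 1's, even though a stream-1 message arrived first: ordering is enforced only within each stream.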
Features of SCTP include:
The designers of SCTP originally intended it for the transport of telephony (i.e. Signaling System 7) over Internet Protocol, with the goal of duplicating some of the reliability attributes of the SS7 signaling network in IP. This IETF effort is known as SIGTRAN. In the meantime, other uses have been proposed, for example, the Diameter protocol[3] and Reliable Server Pooling (RSerPool).[4]
TCP has provided the primary means to transfer data reliably across the Internet. However, TCP has imposed limitations on several applications. From RFC 4960:
Adoption has been slowed by lack of awareness, lack of implementations (particularly in Microsoft Windows), lack of application support and lack of network support.[6]
SCTP has seen adoption in the mobile telephony space as the transport protocol for several core network interfaces.[7]
SCTP provides redundant paths to increase reliability.
Each SCTP endpoint needs to check reachability of the primary and redundant addresses of the remote endpoint using a heartbeat. Each SCTP endpoint needs to acknowledge the heartbeats it receives from the remote endpoint.
When SCTP sends a message to a remote address, the source interface is decided only by the routing table of the host (not by SCTP).
In asymmetric multihoming, one of the two endpoints does not support multihoming.
In local multihoming and remote single homing, if the remote primary address is not reachable, the SCTP association fails even if an alternate path is possible.
An SCTP packet consists of two basic sections:
Each chunk starts with a one-byte type identifier, with 15 chunk types defined by RFC 9260, and at least 5 more defined by additional RFCs.[note 2] Eight flag bits, a two-byte length field, and the data compose the remainder of the chunk. If the chunk does not form a multiple of 4 bytes (i.e., the length is not a multiple of 4), then it is padded with zeros, which are not included in the chunk length. The two-byte length field limits each chunk to a 65,535-byte length (including the type, flags and length fields).
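The chunk layout just described (one-byte type, one-byte flags, two-byte length covering header plus payload, then zero padding to a 4-byte boundary that is excluded from the length field) can be sketched directly:

```python
import struct

def build_chunk(ctype, flags, payload):
    """Build one SCTP chunk: type (1 byte), flags (1 byte), length
    (2 bytes, header + payload), then zero padding to a multiple of
    4 bytes; the padding is not counted in the length field."""
    length = 4 + len(payload)
    pad = (-length) % 4
    return struct.pack("!BBH", ctype, flags, length) + payload + b"\x00" * pad

def parse_chunk(data):
    """Parse one chunk from the front of a buffer, returning the chunk
    and the remaining bytes (the next chunk, if any)."""
    ctype, flags, length = struct.unpack("!BBH", data[:4])
    payload = data[4:length]
    consumed = length + ((-length) % 4)      # skip the padding too
    return (ctype, flags, payload), data[consumed:]

# two chunks bundled into one packet body, as SCTP does
buf = build_chunk(0, 0x03, b"hello") + build_chunk(1, 0, b"")
c1, rest = parse_chunk(buf)
c2, rest = parse_chunk(rest)
print(c1, c2, rest == b"")
```

Here the first chunk's 9-byte length is padded to 12 bytes on the wire, while the length field still reads 9; the parser must account for that when walking a bundle of chunks.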
Although encryption was not part of the original SCTP design, SCTP was designed with features for improved security, such as a 4-way handshake (compared to the TCP 3-way handshake) to protect against SYN flooding attacks, and large "cookies" for association verification and authenticity.
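The cookie idea can be illustrated with a hedged sketch: instead of allocating state when the first handshake message arrives (as TCP does on SYN, enabling SYN floods), a server can fold the association parameters and a timestamp into a MAC-protected cookie, and only create state when a valid cookie is echoed back. The field layout and constants below are invented for illustration and are not SCTP's actual state-cookie format.

```python
import hashlib
import hmac
import os
import struct
import time

SECRET = os.urandom(16)   # server-local key; never sent on the wire

def make_cookie(peer, now=None):
    """Return peer + timestamp, protected by an HMAC-SHA256 tag."""
    now = int(now if now is not None else time.time())
    body = peer.encode() + struct.pack("!I", now)
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return body + tag

def check_cookie(cookie, peer, max_age=60, now=None):
    """Verify the tag, the peer identity, and the cookie's freshness."""
    body, tag = cookie[:-32], cookie[-32:]
    if not hmac.compare_digest(hmac.new(SECRET, body, hashlib.sha256).digest(), tag):
        return False                         # forged or corrupted
    (ts,) = struct.unpack("!I", body[-4:])
    now = int(now if now is not None else time.time())
    return body[:-4] == peer.encode() and now - ts <= max_age

c = make_cookie("192.0.2.7:5000")
ok = check_cookie(c, "192.0.2.7:5000")                       # genuine echo
forged = check_cookie(c[:-1] + bytes([c[-1] ^ 1]), "192.0.2.7:5000")
print(ok, forged)
```

Because the server can reconstruct everything it needs from the echoed cookie, it holds no per-connection state for half-open handshakes, which is the property that defeats flooding.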
Reliability was also a key part of the security design of SCTP. Multihoming enables an association to stay open even when some routes and interfaces are down. This is of particular importance for SIGTRAN, as it carries SS7 over an IP network using SCTP and requires strong resilience during link outages to maintain telecommunication service even when enduring network anomalies.
The SCTP reference implementation runs on FreeBSD, Mac OS X, Microsoft Windows, and Linux.[8]
The following operating systems implement SCTP:
Third-party drivers:
Userspace library:
The following applications implement SCTP:
In the absence of native SCTP support in operating systems, it is possible to tunnel SCTP over UDP,[22] as well as to map TCP API calls to SCTP calls so existing applications can use SCTP without modification.[23]
|
https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol
|
In computer networking, Structured Stream Transport (SST) is an experimental transport protocol that provides an ordered, reliable byte stream abstraction similar to TCP's, but enhances and optimizes stream management to permit applications to use streams in a much more fine-grained fashion than is feasible with TCP streams.
This computer networking article is a stub. You can help Wikipedia by expanding it.
|
https://en.wikipedia.org/wiki/Structured_Stream_Transport
|
UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring large volumetric datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol.
Initial versions were developed and tested on very high-speed networks (1 Gbit/s, 10 Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP.
UDT has an open-source implementation which can be found on SourceForge. It is one of the most popular solutions for supporting high-speed data transfer and is part of many research projects and commercial products.
UDT was developed by Yunhong Gu[1] during his PhD studies at the National Center for Data Mining (NCDM) of the University of Illinois at Chicago, in the laboratory of Dr. Robert Grossman. Dr. Gu has continued to maintain and improve the protocol since graduation.
The UDT project started in 2001, when inexpensive optical networks became popular and triggered a wider awareness of TCP efficiency problems over high-speed wide area networks. The first version of UDT, also known as SABUL (Simple Available Bandwidth Utility Library), was designed to support bulk data transfer for scientific data movement over private networks. SABUL used UDP for data transfer and a separate TCP connection for control messages.
In October 2003, the NCDM achieved a 6.8 gigabits per second transfer from Chicago, United States to Amsterdam, Netherlands. During the 30-minute test they transmitted approximately 1.4 terabytes of data.
SABUL was later renamed to UDT starting with version 2.0, which was released in 2004. UDT2 removed the TCP control connection in SABUL and used UDP for both data and control information. UDT2 also introduced a new congestion control algorithm that allowed the protocol to run "fairly and friendly" with concurrent UDT and TCP flows.
UDT3 (2006) extended the usage of the protocol to the commodity Internet. Congestion control was tuned to support relatively low bandwidth as well. UDT3 also significantly reduced the use of system resources (CPU and memory). Additionally, UDT3 allows users to easily define and install their own congestion control algorithms.
UDT4 (2007) introduced several new features to better support high concurrency and firewall traversing. UDT4 allowed multiple UDT connections to bind to the same UDP port, and it also supported rendezvous connection setup for easier UDP hole punching.
A fifth version of the protocol is currently in the planning stage. Possible features include the ability to support multiple independent sessions over a single connection.
Moreover, since the absence of a security feature in UDT had been an issue with its initial implementation in a commercial environment, Bernardo (2011) developed a security architecture for UDT as part of his PhD studies. This architecture is, however, undergoing enhancement to support UDT in various network environments (i.e., optical networks).
UDT is built on top of the User Datagram Protocol (UDP), adding congestion control and reliability control mechanisms. UDT is an application-level, connection-oriented, duplex protocol that supports both reliable data streaming and partially reliable messaging.
UDT uses periodic acknowledgments (ACK) to confirm packet delivery, while negative ACKs (loss reports) are used to report packet loss. Periodic ACKs help to reduce control traffic on the reverse path when the data transfer speed is high, because in these situations, the number of ACKs is proportional to time, rather than the number of data packets.
UDT uses an AIMD (additive increase, multiplicative decrease) style congestion control algorithm. The increase parameter is inversely proportional to the available bandwidth (estimated using the packet-pair technique), thus UDT can probe high bandwidth rapidly and can slow down for better stability when it approaches maximum bandwidth. The decrease factor is a random number between 1/8 and 1/2. This helps reduce the negative impact of loss synchronization.
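A toy controller in the spirit of this scheme can be sketched as follows: the additive step is scaled by the estimated headroom to the link limit (fast probing far from the limit, gentle near it), and a loss report triggers a random multiplicative cut between 1/8 and 1/2 to de-synchronize competing flows. All constants and names are invented for illustration and are not UDT's actual parameters.

```python
import random

class UdtStyleAimd:
    """Toy AIMD rate controller: headroom-scaled additive increase,
    randomized multiplicative decrease on loss."""

    def __init__(self, link_bw, rate=10.0, rng=None):
        self.link_bw = link_bw          # estimate from packet-pair probing
        self.rate = rate                # current sending rate (pkts/s)
        self.rng = rng or random.Random(7)

    def on_ack(self):
        headroom = max(self.link_bw - self.rate, 0.0)
        self.rate += 0.05 * headroom + 0.01   # additive, headroom-scaled

    def on_loss(self):
        # random cut in [1/8, 1/2] breaks loss synchronization across flows
        self.rate *= 1.0 - self.rng.uniform(1 / 8, 1 / 2)

ctl = UdtStyleAimd(link_bw=1000.0)
for _ in range(50):
    ctl.on_ack()
near_limit = ctl.rate        # climbs quickly, then flattens near link_bw
ctl.on_loss()
print(round(near_limit, 1), round(ctl.rate, 1))
```

The randomized decrease is the detail called out in the text: if every flow cut its rate by the same factor at the same loss event, flows would stay synchronized and the link would oscillate between congestion and idleness.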
In UDT, packet transmission is limited by both rate control and window control. The sending rate is updated by the AIMD algorithm described above. The congestion window, as a secondary control mechanism, is set according to the data arrival rate on the receiver side.
The UDT implementation exposes a set of variables related to congestion control in a C++ class and allows users to define a set of callback functions to manipulate these variables. Thus, users can redefine the control algorithm by overriding some or all of these callback functions. Most TCP control algorithms can be implemented using this feature with fewer than 100 lines of code.
Besides the traditional client/server connection setup (a.k.a. caller/listener, where a listener waits for connections and potentially accepts multiple connecting callers), UDT also supports a rendezvous connection setup mode. In this mode both sides listen on their port and connect to the peer simultaneously, that is, they both connect to one another. Therefore, both parties must use the same port for the connection, and both parties are role-equivalent (in contrast to the listener/caller roles in the traditional setup). Rendezvous is widely used for firewall traversal when both peers are behind firewalls.
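The symmetric roles can be sketched on localhost with plain UDP sockets: neither endpoint plays a passive listener, and both initiate simultaneously, which is the pattern that enables UDP hole punching through NATs. Since there is no NAT on localhost, and for simplicity each side here binds its own ephemeral port rather than the shared port a real rendezvous requires, this only illustrates the role-equivalence.

```python
import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0)); a.settimeout(5)
b.bind(("127.0.0.1", 0)); b.settimeout(5)

# both sides "connect" to the peer they learned out of band;
# there is no listen()/accept() anywhere -- the roles are symmetric
a.connect(b.getsockname())
b.connect(a.getsockname())

a.send(b"hello from a")       # both parties initiate the exchange
b.send(b"hello from b")
msg_at_a, msg_at_b = a.recv(64), b.recv(64)
a.close(); b.close()
print(msg_at_a, msg_at_b)
```

Behind real NATs, each side's initial outbound datagram is what opens its NAT mapping, so the peer's simultaneous datagram can then pass inward.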
UDT is widely used in high-performance computing to support high-speed data transfer over optical networks. For example, GridFTP, a popular data transfer tool in grid computing, has UDT available as a data transfer protocol.
Over the commodity Internet, UDT has been used in many commercial products for fast file transfer over wide area networks.
Because UDT is purely based on UDP, it has also been used in many situations where TCP is at a disadvantage to UDP. These scenarios include peer-to-peer applications, video and audio communication, and many others.
UDT is considered a state-of-the-art protocol, addressing infrastructure requirements for transmitting data over high-speed networks. Its development, however, creates new vulnerabilities because, like many other protocols, it relies solely on the existing security mechanisms for current protocols such as the Transmission Control Protocol (TCP) and UDP.
Research conducted by Dr. Danilo Valeros Bernardo of the University of Technology Sydney, a member of the Australian Technology Network, focusing on practical experiments on UDT using their proposed security mechanisms and exploring the use of other existing security mechanisms used on TCP/UDP for UDT, attracted interest in various network and security scientific communities.
To analyze the security mechanisms, the researchers carried out a formal proof of correctness to assist in determining their applicability, using protocol composition logic (PCL). This approach is modular, comprising[clarification needed] a separate proof of each protocol section and providing insight into the network environment in which each section can be reliably employed. Moreover, the proof holds for a variety of failure-recovery strategies and other implementation and configuration options. They derive their technique from the PCL work on TLS and Kerberos in the literature. They are working on developing and validating the security architecture using rewrite systems and automata.
The result of their work, the first in the literature, is a more robust theoretical and practical representation of a security architecture for UDT, viable for use with other high-speed network protocols.
The UDT project has been a base for the SRT project, which uses its transmission reliability for live video streaming over the public internet.
The UDT team has won the prestigious Bandwidth Challenge three times during the annual ACM/IEEE Supercomputing Conference, the world's premier conference for high-performance computing, networking, storage, and analysis.[2][3][4]
At SC06 (Tampa, FL), the team transferred an astronomy dataset at 8 Gbit/s disk-to-disk from Chicago, IL to Tampa, FL using UDT. At SC08 (Austin, TX), the team demonstrated the use of UDT in a complex high-speed data transfer involving various distributed applications over a 120-node system, across four data centers in Baltimore, Chicago (2), and San Diego. At SC09 (Portland, OR), a collaborative team from NCDM, the Naval Research Lab, and iCAIR showcased UDT-powered wide-area data-intensive cloud computing applications.
|
https://en.wikipedia.org/wiki/UDP-based_Data_Transfer_Protocol
|
FREAK ("Factoring RSA Export Keys") is a security exploit of a cryptographic weakness in the SSL/TLS protocols introduced decades earlier for compliance with U.S. cryptography export regulations. These regulations limited exportable software to public key pairs with RSA moduli of 512 bits or fewer (so-called RSA_EXPORT keys), with the intention of allowing them to be broken easily by the National Security Agency (NSA), but not by other organizations with lesser computing resources. By the early 2010s, however, increases in computing power meant that such keys could be broken by anyone with access to relatively modest computing resources using the well-known Number Field Sieve algorithm, for as little as $100 of cloud computing services. Combined with the ability of a man-in-the-middle attack to manipulate the initial cipher suite negotiation between the endpoints in the connection, and the fact that the finished hash depended only on the master secret, this meant that a man-in-the-middle attacker with only a modest amount of computation could break the security of any website that allowed the use of 512-bit export-grade keys. While the exploit was only discovered in 2015, its underlying vulnerabilities had been present for many years, dating back to the 1990s.[1]
The flaw was found by researchers from the IMDEA Software Institute, INRIA and Microsoft Research.[2][3] The FREAK attack in OpenSSL has the identifier CVE-2015-0204.[4]
Vulnerable software and devices included Apple's Safari web browser, the default browser in Google's Android operating system, Microsoft's Internet Explorer, and OpenSSL.[5][6] Microsoft has also stated that its Schannel implementation of transport-layer encryption is vulnerable to a version of the FREAK attack in all versions of Microsoft Windows.[7] The CVE ID for Microsoft's vulnerability in Schannel is CVE-2015-1637.[8] The CVE ID for Apple's vulnerability in Secure Transport is CVE-2015-1067.[9]
Sites affected by the vulnerability included the US federal government websites fbi.gov, whitehouse.gov and nsa.gov,[10] with around 36% of HTTPS-using websites tested by one security group shown as being vulnerable to the exploit.[11] Based on geolocation analysis using IP2Location LITE, 35% of vulnerable servers were located in the US.[12]
Press reports of the exploit have described its effects as "potentially catastrophic"[13]and an "unintended consequence" of US government efforts to control the spread of cryptographic technology.[10]
As of March 2015[update], vendors were in the process of releasing new software that would fix the flaw.[10][11] On March 9, 2015, Apple released security updates for both the iOS 8 and OS X operating systems which fixed this flaw.[14][15] On March 10, 2015, Microsoft released a patch which fixed this vulnerability for all supported versions of Windows (Server 2003, Vista and later).[16] Google Chrome 41 and Opera 28 also mitigated against this flaw.[3] Mozilla Firefox is not vulnerable to this flaw.[17]
The research paper explaining this flaw has been published at the 36th IEEE Symposium on Security and Privacy and has been awarded the Distinguished Paper award.[18]
|
https://en.wikipedia.org/wiki/FREAK
|
Datagram Transport Layer Security (DTLS) is a communications protocol providing security to datagram-based applications by allowing them to communicate in a way designed[1][2][3] to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the stream-oriented Transport Layer Security (TLS) protocol and is intended to provide similar security guarantees. DTLS preserves the datagram semantics of the underlying transport; the application does not suffer from the delays associated with stream protocols, but because it uses User Datagram Protocol (UDP) or Stream Control Transmission Protocol (SCTP), the application has to deal with packet reordering, loss of datagrams, and data larger than the size of a datagram network packet. Because DTLS uses UDP or SCTP rather than TCP, it avoids the TCP meltdown problem[4][5] when being used to create a VPN tunnel.
The following documents define DTLS:
DTLS 1.0 is based on TLS 1.1, DTLS 1.2 is based on TLS 1.2, and DTLS 1.3 is based on TLS 1.3. There is no DTLS 1.1, because this version number was skipped in order to harmonize version numbers with TLS.[2] Like previous DTLS versions, DTLS 1.3 is intended to provide "equivalent security guarantees [to TLS 1.3] with the exception of order protection/non-replayability".[11]
In February 2013, two researchers from Royal Holloway, University of London discovered a timing attack[46] which allowed them to recover (parts of the) plaintext from a DTLS connection using the OpenSSL or GnuTLS implementation of DTLS when Cipher Block Chaining mode encryption was used.
|
https://en.wikipedia.org/wiki/DTLS
|
In computing, Internet Protocol Security (IPsec) is a secure network protocol suite that authenticates and encrypts packets of data to provide secure encrypted communication between two computers over an Internet Protocol network. It is used in virtual private networks (VPNs).
IPsec includes protocols for establishing mutual authentication between agents at the beginning of a session and negotiation of cryptographic keys to use during the session. IPsec can protect data flows between a pair of hosts (host-to-host), between a pair of security gateways (network-to-network), or between a security gateway and a host (network-to-host).[1] IPsec uses cryptographic security services to protect communications over Internet Protocol (IP) networks. It supports network-level peer authentication, data origin authentication, data integrity, data confidentiality (encryption), and protection from replay attacks.
The protocol was designed by a committee instead of via a competition. Some experts have criticized it as complex and burdened with many options, which has a devastating effect on a security standard.[2] There are allegations that the NSA interfered to weaken its security features.
Starting in the early 1970s, the Advanced Research Projects Agency sponsored a series of experimental ARPANET encryption devices, at first for native ARPANET packet encryption and subsequently for TCP/IP packet encryption; some of these were certified and fielded. From 1986 to 1991, the NSA sponsored the development of security protocols for the Internet under its Secure Data Network Systems (SDNS) program.[3] This brought together various vendors including Motorola, who produced a network encryption device in 1988. The work was openly published from about 1988 by NIST and, of these, Security Protocol at Layer 3 (SP3) would eventually morph into the ISO standard Network Layer Security Protocol (NLSP).[4]
In 1992, the US Naval Research Laboratory (NRL) was funded by DARPA CSTO to implement IPv6 and to research and implement IP encryption in 4.4BSD, supporting both SPARC and x86 CPU architectures. DARPA made its implementation freely available via MIT. Under NRL's DARPA-funded research effort, NRL developed the IETF standards-track specifications (RFC 1825 through RFC 1827) for IPsec.[5] NRL's IPsec implementation was described in their paper in the 1996 USENIX Conference Proceedings.[6] NRL's open-source IPsec implementation was made available online by MIT and became the basis for most initial commercial implementations.[5]
The Internet Engineering Task Force (IETF) formed the IP Security Working Group in 1992[7] to standardize openly specified security extensions to IP, called IPsec.[8] The NRL-developed standards were published by the IETF as RFC 1825 through RFC 1827.[9]
The initial IPv4 suite was developed with few security provisions. As a part of the IPv4 enhancement, IPsec is a layer 3 OSI model or internet layer end-to-end security scheme. In contrast, while some other Internet security systems in widespread use operate above the network layer, such as Transport Layer Security (TLS) that operates above the transport layer and Secure Shell (SSH) that operates at the application layer, IPsec can automatically secure applications at the internet layer.
IPsec is an open standard as a part of the IPv4 suite and uses the following protocols to perform various functions:[10][11]
The Security Authentication Header (AH) was developed at the US Naval Research Laboratory in the early 1990s and is derived in part from previous IETF standards' work for authentication of the Simple Network Management Protocol (SNMP) version 2. Authentication Header (AH) is a member of the IPsec protocol suite. AH ensures connectionless integrity by using a hash function and a secret shared key in the AH algorithm. AH also guarantees the data origin by authenticating IP packets. Optionally, a sequence number can protect the IPsec packet's contents against replay attacks,[18][19] using the sliding window technique and discarding old packets.
AH operates directly on top of IP, using IP protocol number 51.[21]
The following AH packet diagram shows how an AH packet is constructed and interpreted:[12]
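The fixed part of the AH layout (per RFC 4302: Next Header, Payload Length expressed in 32-bit words minus 2, a reserved field, the SPI, a sequence number, then the variable-length Integrity Check Value) can also be expressed as a short parsing sketch; the sample values below are invented.

```python
import struct

def parse_ah(data):
    """Parse the fixed fields of an IPsec Authentication Header.
    Payload Length counts the whole AH in 32-bit words minus 2, so the
    total AH size in bytes is (plen + 2) * 4."""
    nxt, plen, _reserved, spi, seq = struct.unpack("!BBHII", data[:12])
    ah_len = (plen + 2) * 4
    return {"next_header": nxt, "spi": spi, "seq": seq,
            "icv": data[12:ah_len], "payload": data[ah_len:]}

# AH carrying TCP (protocol 6), SPI 0x1000, seq 1, 12-byte truncated ICV
hdr = struct.pack("!BBHII", 6, (24 // 4) - 2, 0, 0x1000, 1) + b"\xaa" * 12
pkt = hdr + b"tcp-segment-bytes"
ah = parse_ah(pkt)
print(hex(ah["spi"]), ah["seq"], ah["next_header"], len(ah["icv"]))
```

The Next Header field chains AH to whatever it protects (here TCP), mirroring how IP protocol number 51 chains the outer IP header to AH itself.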
The IP Encapsulating Security Payload (ESP)[22] was developed at the Naval Research Laboratory starting in 1992 as part of a DARPA-sponsored research project, and was openly published by the IETF SIPP[23] Working Group, drafted in December 1993 as a security extension for SIPP. This ESP was originally derived from the US Department of Defense SP3D protocol, rather than from the ISO Network-Layer Security Protocol (NLSP). The SP3D protocol specification was published by NIST in the late 1980s, but designed by the Secure Data Network System project of the US Department of Defense.
Encapsulating Security Payload (ESP) is a member of the IPsec protocol suite. It provides origin authenticity through source authentication, data integrity through hash functions, and confidentiality through encryption protection for IP packets. ESP also supports encryption-only and authentication-only configurations, but using encryption without authentication is strongly discouraged because it is insecure.[24][25][26]
Unlike Authentication Header (AH), ESP in transport mode does not provide integrity and authentication for the entire IP packet. However, in tunnel mode, where the entire original IP packet is encapsulated with a new packet header added, ESP protection is afforded to the whole inner IP packet (including the inner header), while the outer header (including any outer IPv4 options or IPv6 extension headers) remains unprotected.
ESP operates directly on top of IP, using IP protocol number 50.[21]
The following ESP packet diagram shows how an ESP packet is constructed and interpreted:[27]
The IPsec protocols use a security association, where the communicating parties establish shared security attributes such as algorithms and keys. As such, IPsec provides a range of options once it has been determined whether AH or ESP is used. Before exchanging data, the two hosts agree on which symmetric encryption algorithm is used to encrypt the IP packet, for example AES or ChaCha20, and which hash function is used to ensure the integrity of the data, such as BLAKE2 or SHA256. These parameters are agreed for the particular session, for which a lifetime and a session key must be agreed.[28]
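Once a hash function and session key are agreed, per-packet integrity protection of this kind can be sketched with the standard library; the key and payload below are made-up values, and this is an illustration of keyed hashing, not the actual ESP/AH transform:

```python
import hashlib
import hmac
import os

# Illustrative sketch: once two hosts have agreed on a hash function and a
# session key, each packet can carry a MAC computed over its payload.

session_key = os.urandom(32)             # stands in for the agreed session key
payload = b"example IP packet payload"   # stands in for packet data

# Sender computes an HMAC with the negotiated hash (SHA-256 here;
# hashlib.blake2b could be substituted if BLAKE2 were negotiated).
tag = hmac.new(session_key, payload, hashlib.sha256).digest()

# Receiver recomputes the MAC and compares in constant time.
ok = hmac.compare_digest(
    tag, hmac.new(session_key, payload, hashlib.sha256).digest())
```

Any modification of the payload in transit changes the recomputed MAC, so the comparison fails and the packet is rejected.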
The algorithm for authentication is also agreed before the data transfer takes place and IPsec supports a range of methods. Authentication is possible through pre-shared key, where a symmetric key is already in the possession of both hosts, and the hosts send each other hashes of the shared key to prove that they are in possession of the same key. IPsec also supports public key encryption, where each host has a public and a private key, they exchange their public keys and each host sends the other a nonce encrypted with the other host's public key. Alternatively, if both hosts hold a public key certificate from a certificate authority, this can be used for IPsec authentication.[29]
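The pre-shared-key method described above can be sketched as follows; this is a simplified illustration using an HMAC over a fresh nonce, not the actual IKE exchange, and the key value is invented:

```python
import hashlib
import hmac
import os

# Minimal sketch of pre-shared-key proof of possession: each side hashes a
# fresh nonce with the shared key, so the key itself never crosses the wire.

psk = b"shared-secret-configured-on-both-hosts"   # illustrative value

# Host A challenges host B with a random nonce.
nonce = os.urandom(16)

# Host B proves possession of the PSK by keyed-hashing the nonce.
proof = hmac.new(psk, nonce, hashlib.sha256).digest()

# Host A verifies the proof against its own copy of the key.
authenticated = hmac.compare_digest(
    proof, hmac.new(psk, nonce, hashlib.sha256).digest())
```

Using a fresh nonce per exchange prevents a captured proof from being replayed later.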
The security associations of IPsec are established using the Internet Security Association and Key Management Protocol (ISAKMP). ISAKMP is implemented by manual configuration with pre-shared secrets, Internet Key Exchange (IKE and IKEv2), Kerberized Internet Negotiation of Keys (KINK), and the use of IPSECKEY DNS records.[17][1]: §1[30] RFC 5386 defines Better-Than-Nothing Security (BTNS) as an unauthenticated mode of IPsec using an extended IKE protocol. C. Meadows, C. Cremers, and others have used formal methods to identify various anomalies which exist in IKEv1 and also in IKEv2.[31]
In order to decide what protection is to be provided for an outgoing packet, IPsec uses the Security Parameter Index (SPI), an index to the security association database (SADB), along with the destination address in a packet header, which together uniquely identify a security association for that packet. A similar procedure is performed for an incoming packet, where IPsec gathers decryption and verification keys from the security association database.
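The SPI-plus-destination lookup can be sketched as a keyed table; the SPI values and SA fields here are invented for illustration:

```python
# Sketch of a security association database (SADB) keyed by
# (SPI, destination address). Field names and values are illustrative.

sadb = {
    (0x1001, "192.0.2.10"): {"cipher": "AES", "mac": "SHA256", "key_id": 1},
    (0x1002, "192.0.2.10"): {"cipher": "ChaCha20", "mac": "BLAKE2", "key_id": 2},
}

def lookup_sa(spi, dst_addr):
    """Return the security association for a packet, or None if unknown."""
    return sadb.get((spi, dst_addr))

# The SPI carried in the packet header plus the destination address
# uniquely select one SA, and with it the keys and algorithms to apply.
sa = lookup_sa(0x1001, "192.0.2.10")
```

The same table serves both directions: outgoing packets consult it to pick protection parameters, incoming packets to find decryption and verification keys.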
For IP multicast a security association is provided for the group, and is duplicated across all authorized receivers of the group. There may be more than one security association for a group, using different SPIs, thereby allowing multiple levels and sets of security within a group. Indeed, each sender can have multiple security associations, allowing authentication, since a receiver can only know that someone knowing the keys sent the data. Note that the relevant standard does not describe how the association is chosen and duplicated across the group; it is assumed that a responsible party will have made the choice.
To ensure that the connection between two endpoints has not been interrupted, endpoints exchange keepalive messages at regular intervals, which can also be used to automatically reestablish a tunnel lost due to connection interruption.
Dead Peer Detection (DPD) is a method of detecting a dead Internet Key Exchange (IKE) peer. The method uses IPsec traffic patterns to minimize the number of messages required to confirm the availability of a peer. DPD is used to reclaim lost resources in case a peer is found dead, and it is also used to perform IKE peer failover.
UDP keepalive is an alternative to DPD.
The IPsec protocols AH and ESP can be implemented in a host-to-host transport mode, as well as in a network tunneling mode.
In transport mode, only the payload of the IP packet is usually encrypted or authenticated. The routing is intact, since the IP header is neither modified nor encrypted; however, when the authentication header is used, the IP addresses cannot be modified by network address translation, as this always invalidates the hash value. The transport and application layers are always secured by a hash, so they cannot be modified in any way, for example by translating the port numbers.
A means to encapsulate IPsec messages for NAT traversal (NAT-T) has been defined by RFC documents describing the NAT-T mechanism.
In tunnel mode, the entire IP packet is encrypted and authenticated. It is then encapsulated into a new IP packet with a new IP header. Tunnel mode is used to create virtual private networks for network-to-network communications (e.g. between routers to link sites), host-to-network communications (e.g. remote user access) and host-to-host communications (e.g. private chat).[32]
Tunnel mode supports NAT traversal.
Cryptographic algorithms defined for use with IPsec include:
Refer to RFC 8221 for details.
IPsec can be implemented in the IP stack of an operating system. This method of implementation is done for hosts and security gateways. Various IPsec-capable IP stacks are available from companies, such as HP or IBM.[33] An alternative is the so-called bump-in-the-stack (BITS) implementation, where the operating system source code does not have to be modified. Here IPsec is installed between the IP stack and the network drivers. This way operating systems can be retrofitted with IPsec. This method of implementation is also used for both hosts and gateways. However, when retrofitting IPsec the encapsulation of IP packets may cause problems for automatic path MTU discovery, where the maximum transmission unit (MTU) size on the network path between two IP hosts is established. If a host or gateway has a separate cryptoprocessor, which is common in the military and can also be found in commercial systems, a so-called bump-in-the-wire (BITW) implementation of IPsec is possible.[34]
When IPsec is implemented in the kernel, the key management and ISAKMP/IKE negotiation are carried out from user space. The NRL-developed and openly specified "PF_KEY Key Management API, Version 2" is often used to enable the application-space key management application to update the IPsec security associations stored within the kernel-space IPsec implementation.[35] Existing IPsec implementations usually include ESP, AH, and IKE version 2. Existing IPsec implementations on Unix-like operating systems, for example Solaris or Linux, usually include PF_KEY version 2.
Embedded IPsec can be used to ensure secure communication among applications running on constrained-resource systems with a small overhead.[36]
IPsec was developed in conjunction with IPv6 and was originally required to be supported by all standards-compliant implementations of IPv6 before RFC 6434 made it only a recommendation.[37] IPsec is also optional for IPv4 implementations. IPsec is most commonly used to secure IPv4 traffic.[citation needed]
IPsec protocols were originally defined in RFC 1825 through RFC 1829, which were published in 1995. In 1998, these documents were superseded by RFC 2401 and RFC 2412 with a few incompatible engineering details, although they were conceptually identical. In addition, a mutual authentication and key exchange protocol, Internet Key Exchange (IKE), was defined to create and manage security associations. In December 2005, new standards were defined in RFC 4301 and RFC 4309 which are largely a superset of the previous editions with a second version of the Internet Key Exchange standard, IKEv2. These third-generation documents standardized the abbreviation of IPsec to uppercase "IP" and lowercase "sec". "ESP" generally refers to RFC 4303, which is the most recent version of the specification.
Since mid-2008, an IPsec Maintenance and Extensions (ipsecme) working group has been active at the IETF.[38][39]
In 2013, as part of the Snowden leaks, it was revealed that the US National Security Agency had been actively working to "Insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets" as part of the Bullrun program.[40] There are allegations that IPsec was a targeted encryption system.[41]
The OpenBSD IPsec stack came later on and was also widely copied. In a letter which OpenBSD lead developer Theo de Raadt received on 11 Dec 2010 from Gregory Perry, it is alleged that Jason Wright and others, working for the FBI, inserted "a number of backdoors and side channel key leaking mechanisms" into the OpenBSD crypto code. In the forwarded email from 2010, Theo de Raadt did not at first express an official position on the validity of the claims, apart from the implicit endorsement of forwarding the email.[42] Jason Wright's response to the allegations: "Every urban legend is made more real by the inclusion of real names, dates, and times. Gregory Perry's email falls into this category. ... I will state clearly that I did not add backdoors to the OpenBSD operating system or the OpenBSD Cryptographic Framework (OCF)."[43] Some days later, de Raadt commented that "I believe that NETSEC was probably contracted to write backdoors as alleged. ... If those were written, I don't believe they made it into our tree."[44] This was published before the Snowden leaks.
An alternative explanation put forward by the authors of the Logjam attack suggests that the NSA compromised IPsec VPNs by undermining the Diffie-Hellman algorithm used in the key exchange. In their paper,[45] they allege the NSA specially built a computing cluster to precompute multiplicative subgroups for specific primes and generators, such as for the second Oakley group defined in RFC 2409. As of May 2015, 90% of addressable IPsec VPNs supported the second Oakley group as part of IKE. If an organization were to precompute this group, they could derive the keys being exchanged and decrypt traffic without inserting any software backdoors.
A second alternative explanation that was put forward was that the Equation Group used zero-day exploits against several manufacturers' VPN equipment which were validated by Kaspersky Lab as being tied to the Equation Group[46] and validated by those manufacturers as being real exploits, some of which were zero-day exploits at the time of their exposure.[47][48][49] The Cisco PIX and ASA firewalls had vulnerabilities that were used for wiretapping by the NSA[citation needed].
Furthermore, IPsec VPNs using "Aggressive Mode" settings send a hash of the PSK in the clear. This can be, and apparently is, targeted by the NSA using offline dictionary attacks.[45][50][51]
|
https://en.wikipedia.org/wiki/IPsec
|
Obfuscated TCP (ObsTCP) was a proposal for a transport layer protocol which implements opportunistic encryption over Transmission Control Protocol (TCP). It was designed to prevent mass wiretapping and malicious corruption of TCP traffic on the Internet, with lower implementation cost and complexity than Transport Layer Security (TLS). In August 2008, the IETF rejected the proposal for a TCP option, suggesting it be done on the application layer instead.[1] The project became inactive a few months later.
In June 2010, a separate proposal called tcpcrypt was submitted, which shares many of the goals of ObsTCP: being transparent to applications, opportunistic and low-overhead. It requires even less configuration (no DNS entries or HTTP headers). Unlike ObsTCP, tcpcrypt also provides primitives down to the application to implement authentication and prevent man-in-the-middle attacks (MITM).[2]
ObsTCP was created by Adam Langley. The concept of obfuscating TCP communications using opportunistic encryption evolved through several iterations. The experimental iterations of ObsTCP used TCP options in 'SYN' packets to advertise support for ObsTCP, with the server responding with a public key in the 'SYNACK'. An IETF draft protocol was first published in July 2008. Packets were encrypted with Salsa20/8,[3] and signed with MD5 checksums.[4]
The present (third) iteration uses special DNS records (or out of band methods) to advertise support and keys, without modifying the operation of the underlying TCP protocol.[5]
ObsTCP is a low-cost protocol intended to protect TCP traffic without requiring public key certificates, the services of Certificate Authorities, or a complex Public Key Infrastructure. It is intended to suppress the use of undirected surveillance to trawl unencrypted traffic, rather than to protect against man-in-the-middle attacks.
The software presently supports the Salsa20/8[3] stream cipher and the Curve25519[6] elliptic-curve Diffie–Hellman function.
A server using ObsTCP advertises a public key and a port number.
A DNS 'A record' may be used to advertise server support for ObsTCP (with a DNS 'CNAME record' providing a 'friendly' name). HTTP header records, or cached/out-of-band keyset information, may also be used instead.
A client connecting to an ObsTCP server parses the DNS entries, uses HTTP header records, or uses cached/out of band data to obtain the public key and port number, before connecting to the server and encrypting traffic.
|
https://en.wikipedia.org/wiki/Obfuscated_TCP
|
An application delivery controller (ADC) is a computer network device in a datacenter, often part of an application delivery network (ADN), that helps perform common tasks, such as those done by web accelerators to remove load from the web servers themselves. Many also provide load balancing. ADCs are often placed in the DMZ, between the outer firewall or router and a web farm.[citation needed]
An Application Delivery Controller (ADC) is a type of server that provides a variety of services designed to optimize the distribution of load being handled by backend content servers. An ADC directs web request traffic to optimal data sources in order to remove unnecessary load from web servers. To accomplish this, an ADC includes many OSI layer 3–7 services, including load balancing.
ADCs are intended to be deployed within the DMZ of a computer server cluster hosting web applications and/or services. In this sense, an ADC can be envisioned as a drop-in load balancer replacement. But that is where the similarities end. When an ADC receives a web request from an external host, it enacts the following process (assuming all features exist and are enabled):
Features commonly found in ADCs include:
In the context of Telco infrastructure, an ADC could provide access control services for a Gi-LAN area.
Starting around 2004, first-generation ADCs offered simple application acceleration and load balancing.[citation needed]
In 2006, ADCs began to mature when they began featuring advanced application services such as compression, caching, connection multiplexing, traffic shaping, application layer security, SSL offload, and content switching, combined with services like server load balancing in an integrated services framework that optimized and secured business-critical application flows.[citation needed]
By 2007, application acceleration products were available from many companies.[1]
Until leaving the market in 2012, Cisco Systems offered application delivery controllers. Market leaders like F5 Networks, Radware, and Citrix had been gaining market share from Cisco in previous years.[2]
The ADC market segment became fragmented into two general areas: 1) general network optimization; and 2) application/framework-specific optimization. Both types of devices improve performance, but the latter is usually more aware of optimization strategies that work best with a particular application framework, focusing on ASP.NET or AJAX applications, for example.[3][4]
|
https://en.wikipedia.org/wiki/Application_delivery_controller
|
Stunnel is an open-source multi-platform application used to provide a universal TLS/SSL tunneling service.
Stunnel is used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively.[4] It runs on a variety of operating systems,[5] including most Unix-like operating systems and Windows. Stunnel relies on the OpenSSL library to implement the underlying TLS or SSL protocol.
Stunnel uses public-key cryptography with X.509 digital certificates to secure the SSL connection, and clients can optionally be authenticated via a certificate.[6]
If linked against libwrap, it can be configured to act as a proxy–firewall service as well.[citation needed]
Stunnel is maintained by Polish programmer Michał Trojnara and released under the terms of the GNU General Public License (GPL) with OpenSSL exception.[7]
Stunnel can be used to provide a secure SSL connection to an existing non-SSL-aware SMTP mail server. Assuming the SMTP server expects TCP connections on port 25, stunnel would be configured to map the SSL port 465 to non-SSL port 25. A mail client connects via SSL to port 465. Network traffic from the client initially passes over SSL to the stunnel application, which transparently encrypts and decrypts traffic and forwards unsecured traffic to port 25 locally. The mail server sees a non-SSL mail client.[citation needed]
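A minimal stunnel configuration for this scenario might look like the following; the certificate paths and service name are examples, not defaults:

```ini
; Illustrative stunnel configuration for the scenario above: accept TLS
; connections on port 465 and forward plaintext to the local SMTP server
; on port 25. Certificate and key paths are made-up examples.
cert = /etc/stunnel/mail.pem
key  = /etc/stunnel/mail.key

[smtps]
accept  = 465
connect = 127.0.0.1:25
```

Each bracketed section defines one tunneled service, so a single stunnel instance can wrap several plaintext services at once.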
The stunnel process could be running on the same or a different server from the unsecured mail application; however, both machines would typically be behind a firewall on a secure internal network (so that an intruder could not make its own unsecured connection directly to port 25).[citation needed]
|
https://en.wikipedia.org/wiki/Stunnel
|
A TLS termination proxy (or SSL termination proxy,[1] or SSL offloading[2]) is a proxy server that acts as an intermediary point between client and server applications, and is used to terminate and/or establish TLS (or DTLS) tunnels by decrypting and/or encrypting communications. This is different from TLS pass-through proxies that forward encrypted (D)TLS traffic between clients and servers without terminating the tunnel.
TLS termination proxies can be used to:
TLS termination proxies can provide three connectivity patterns:[3]
Combining a TLS encrypting proxy in front of a client with a TLS offloading proxy in front of a server can allow (D)TLS encryption and authentication for protocols and applications that don't otherwise support it, with the two proxies maintaining a secure (D)TLS tunnel over untrusted network segments between client and server.
A proxy used by clients as an intermediary gateway for all outbound connections is typically called a forward proxy, while a proxy used by servers as an intermediary gateway for all inbound connections is typically called a reverse proxy. Forward TLS bridging proxies that allow an intrusion detection system to analyse all client traffic are typically marketed as "SSL Forward Proxy".[4][5][6]
TLS offloading and TLS bridging proxies typically need to authenticate themselves to clients with a digital certificate using either PKIX or DANE authentication. Usually the server operator supplies to its reverse proxy a valid certificate for use during the (D)TLS handshake with clients. A forward proxy operator, however, would need to create their own private CA, install it into the trust store of all clients, and have the proxy generate a new certificate signed by the private CA in real time for each server that a client tries to connect to.
When network traffic between client and server is routed via a proxy, it can operate in transparent mode by using the client's IP address instead of its own when connecting to the server and using the server's IP address when responding to the client. If a transparent TLS bridging proxy has a valid server certificate, neither client nor server would be able to detect the proxy's presence. An adversary that has compromised the private key of the server's digital certificate, or is able to use a compromised/coerced PKIX CA to issue a new valid certificate for the server, could perform a man-in-the-middle attack by routing TLS traffic between client and server through a transparent TLS bridging proxy, and would have the ability to copy decrypted communications, including logon credentials, and modify the content of communications on the fly without being detected.
|
https://en.wikipedia.org/wiki/TLS_offloading
|
A password, sometimes called a passcode, is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to be memorized,[1] but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical.[2] Using the terminology of the NIST Digital Identity Guidelines,[3] the secret is held by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol,[4] the verifier is able to infer the claimant's identity.
In general, a password is an arbitrary string of characters including letters, digits, or other symbols. If the permissible characters are constrained to be numeric, the corresponding secret is sometimes called a personal identification number (PIN).
Despite its name, a password does not need to be an actual word; indeed, a non-word (in the dictionary sense) may be harder to guess, which is a desirable property of passwords. A memorized secret consisting of a sequence of words or other text separated by spaces is sometimes called a passphrase. A passphrase is similar to a password in usage, but the former is generally longer for added security.[5]
Passwords have been used since ancient times. Sentries would challenge those wishing to enter an area to supply a password or watchword, and would only allow a person or group to pass if they knew the password. Polybius describes the system for the distribution of watchwords in the Roman military as follows:
The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword—that is a wooden tablet with the word inscribed on it – takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next to him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[6]
Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password—flash—which was presented as a challenge, and answered with the correct response—thunder. The challenge and response were changed every three days. American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[7]
Passwords have been used with computers since the earliest days of computing. The Compatible Time-Sharing System (CTSS), an operating system introduced at MIT in 1961, was the first computer system to implement password login.[8][9] CTSS had a LOGIN command that requested a user password. "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[10] In the early 1970s, Robert Morris developed a system of storing login passwords in a hashed form as part of the Unix operating system. The system was based on a simulated Hagelin rotor crypto machine, and first appeared in 6th Edition Unix in 1974. A later version of his algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.[11]
In modern times, user names and passwords are commonly used by people during a log-in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user has passwords for multiple purposes: logging into accounts, retrieving e-mail, accessing applications, databases, networks, web sites, and even reading the morning newspaper online.
The easier a password is for the owner to remember generally means it will be easier for an attacker to guess.[12] However, passwords that are difficult to remember may also reduce the security of a system because (a) users might need to write down or electronically store the password, (b) users will need frequent password resets and (c) users are more likely to re-use the same password across different accounts. Similarly, the more stringent the password requirements, such as "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[13] Others argue longer passwords provide more security (e.g., entropy) than shorter passwords with a wide variety of characters.[14]
In The Memorability and Security of Passwords,[15] Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords.
Combining two or more unrelated words and altering some of the letters to special characters or numbers is another good method,[16] but a single dictionary word is not. Having a personally designed algorithm for generating obscure passwords is another good method.[17]
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalises one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1', substitutions that are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.[18]
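The arithmetic behind the "128 times harder" figure can be checked directly:

```python
import math

# Back-of-the-envelope check: allowing upper and lower case in a 7-letter
# password doubles the alphabet for each position, multiplying the search
# space by 2**7 = 128 compared with all-lowercase.

lowercase_only = 26 ** 7                   # 7 letters, one case
mixed_case = 52 ** 7                       # 7 letters, two cases

factor = mixed_case // lowercase_only      # how much larger the space is
added_entropy_bits = math.log2(mixed_case) - math.log2(lowercase_only)
```

The factor is exactly 128 (2 to the 7th), i.e. only 7 extra bits of entropy; and if the user merely capitalises a single known position, the gain collapses further, as the text notes.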
In 2013, Google released a list of the most common password types, all of which are considered insecure because they are too easy to guess (especially after researching an individual on social media), which includes:[19]
Traditional advice to memorize passwords and never write them down has become a challenge because of the sheer number of passwords users of computers and the internet are expected to maintain. One survey concluded that the average user has around 100 passwords.[2] To manage the proliferation of passwords, some users employ the same password for multiple accounts, a dangerous practice since a data breach in one account could compromise the rest. Less risky alternatives include the use of password managers, single sign-on systems and simply keeping paper lists of less critical passwords.[20] Such practices can reduce the number of passwords that must be memorized, such as the password manager's master password, to a more manageable number.
The security of a password-protected system depends on several factors. The overall system must be designed for sound security, with protection against computer viruses, man-in-the-middle attacks and the like. Physical security issues are also a concern, from deterring shoulder surfing to more sophisticated physical threats such as video cameras and keyboard sniffers. Passwords should be chosen so that they are hard for an attacker to guess and hard for an attacker to discover using any of the available automatic attack schemes.[21]
Nowadays, it is a common practice for computer systems to hide passwords as they are typed. The purpose of this measure is to prevent bystanders from reading the password; however, some argue that this practice may lead to mistakes and stress, encouraging users to choose weak passwords. As an alternative, users should have the option to show or hide passwords as they type them.[21]
Effective access control provisions may force extreme measures on criminals seeking to acquire a password or biometric token.[22] Less extreme measures include extortion, rubber-hose cryptanalysis, and side-channel attacks.
Some specific password management issues that must be considered when thinking about, choosing, and handling a password follow.
The rate at which an attacker can submit guessed passwords to the system is a key factor in determining system security. Some systems impose a time-out of several seconds after a small number (e.g., three) of failed password entry attempts, also known as throttling.[3]: 63B Sec 5.2.2 In the absence of other vulnerabilities, such systems can be effectively secure with relatively simple passwords if they have been well chosen and are not easily guessed.[23]
Many systems store a cryptographic hash of the password. If an attacker gets access to the file of hashed passwords, guessing can be done offline, rapidly testing candidate passwords against the true password's hash value. In the example of a web server, an online attacker can guess only at the rate at which the server will respond, while an off-line attacker (who gains access to the file) can guess at a rate limited only by the hardware on which the attack is running and the strength of the algorithm used to create the hash.
Passwords that are used to generate cryptographic keys (e.g., for disk encryption or Wi-Fi security) can also be subjected to high-rate guessing, known as password cracking. Lists of common passwords are widely available and can make password attacks efficient. Security in such situations depends on using passwords or passphrases of adequate complexity, making such an attack computationally infeasible for the attacker. Some systems, such as PGP and Wi-Fi WPA, apply a computation-intensive hash to the password to slow such attacks, in a technique known as key stretching.
An alternative to limiting the rate at which an attacker can make guesses on a password is to limit the total number of guesses that can be made. The password can be disabled, requiring a reset, after a small number of consecutive bad guesses (say 5); and the user may be required to change the password after a larger cumulative number of bad guesses (say 30), to prevent an attacker from making an arbitrarily large number of bad guesses by interspersing them between good guesses made by the legitimate password owner.[24] Attackers may conversely use knowledge of this mitigation to implement a denial-of-service attack against the user by intentionally locking the user out of their own device; this denial of service may open other avenues for the attacker to manipulate the situation to their advantage via social engineering.
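The two thresholds described above (lock after a few consecutive bad guesses, force a password change after a larger cumulative count) can be sketched as a simple counter; the class and its default thresholds are illustrative, using the article's example numbers:

```python
# Sketch of the guess-limiting policies described above: disable the
# password after 5 consecutive bad guesses, and require a password change
# after 30 cumulative bad guesses. Names and structure are illustrative.

class GuessLimiter:
    def __init__(self, lock_after=5, change_after=30):
        self.lock_after = lock_after      # consecutive failures before lockout
        self.change_after = change_after  # cumulative failures before forced change
        self.consecutive = 0
        self.cumulative = 0
        self.locked = False

    def record_failure(self):
        self.consecutive += 1
        self.cumulative += 1
        if self.consecutive >= self.lock_after:
            self.locked = True            # disabled until an explicit reset
        return self.must_change_password()

    def record_success(self):
        self.consecutive = 0              # a good guess clears the streak

    def must_change_password(self):
        return self.cumulative >= self.change_after
```

Tracking the cumulative count separately is what defeats the interspersing attack: bad guesses still accumulate even when the legitimate owner's successful logins keep resetting the consecutive streak.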
Some computer systems store user passwords as plaintext, against which to compare user logon attempts. If an attacker gains access to such an internal password store, all passwords—and so all user accounts—will be compromised. If some users employ the same password for accounts on different systems, those will be compromised as well.
More secure systems store each password in a cryptographically protected form, so access to the actual password will still be difficult for a snooper who gains internal access to the system, while validation of user access attempts remains possible. The most secure do not store passwords at all, but a one-way derivation, such as a polynomial, modulus, or an advanced hash function.[14] Roger Needham invented the now-common approach of storing only a "hashed" form of the plaintext password.[25][26] When a user types in a password on such a system, the password handling software runs it through a cryptographic hash algorithm, and if the hash value generated from the user's entry matches the hash stored in the password database, the user is permitted access. The hash value is created by applying a cryptographic hash function to a string consisting of the submitted password and, in multiple implementations, another value known as a salt. A salt prevents attackers from easily building a list of hash values for common passwords and prevents password cracking efforts from scaling across all users.[27] MD5 and SHA1 are frequently used cryptographic hash functions, but they are not recommended for password hashing unless they are used as part of a larger construction such as in PBKDF2.[28]
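A salted, iterated hashing scheme of the kind described here can be sketched with the standard library's PBKDF2; the iteration count, salt size, and function names are illustrative choices, not recommendations from the article:

```python
import hashlib
import hmac
import os

# Sketch of salted, iterated password hashing: the verifier stores
# (salt, iteration count, digest), never the password itself.

def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)             # per-user random salt
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, stored_digest):
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)

salt, iters, stored = hash_password("correct horse battery staple")
```

The per-user salt is what stops a precomputed table from covering all users at once, and the iteration count is the key-stretching knob that slows each offline guess.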
The stored data—sometimes called the "password verifier" or the "password hash"—is often stored in Modular Crypt Format or RFC 2307 hash format, sometimes in the /etc/passwd file or the /etc/shadow file.[29]
The main storage methods for passwords are plain text, hashed, hashed and salted, and reversibly encrypted.[30] If an attacker gains access to the password file, then if it is stored as plain text, no cracking is necessary. If it is hashed but not salted then it is vulnerable to rainbow table attacks (which are more efficient than cracking). If it is reversibly encrypted, then no cracking is necessary if the attacker gets the decryption key along with the file, while cracking is not possible without the key. Thus, of the common storage formats for passwords, only when passwords have been salted and hashed is cracking both necessary and possible.[30]
If a cryptographic hash function is well designed, it is computationally infeasible to reverse the function to recover a plaintext password. An attacker can, however, use widely available tools to attempt to guess the passwords. These tools work by hashing possible passwords and comparing the result of each guess to the actual password hashes. If the attacker finds a match, they know that their guess is the actual password for the associated user. Password cracking tools can operate by brute force (i.e. trying every possible combination of characters) or by hashing every word from a list; large lists of possible passwords in many languages are widely available on the Internet.[14] The existence of password cracking tools allows attackers to easily recover poorly chosen passwords. In particular, attackers can quickly recover passwords that are short, dictionary words, simple variations on dictionary words, or that use easily guessable patterns.[31] A modified version of the DES algorithm was used as the basis for the password hashing algorithm in early Unix systems.[32] The crypt algorithm used a 12-bit salt value so that each user's hash was unique, and iterated the DES algorithm 25 times in order to make the hash function slower, both measures intended to frustrate automated guessing attacks.[32] The user's password was used as a key to encrypt a fixed value. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use more secure password hashing algorithms such as PBKDF2, bcrypt, and scrypt, which have large salts and an adjustable cost or number of iterations.[33] A poorly designed hash function can make attacks feasible even if a strong password is chosen. LM hash is a widely deployed and insecure example.[34]
Passwords are vulnerable to interception (i.e., "snooping") while being transmitted to the authenticating machine or person. If the password is carried as electrical signals on unsecured physical wiring between the user access point and the central system controlling the password database, it is subject to snooping by wiretapping methods. If it is carried as packeted data over the Internet, anyone able to watch the packets containing the logon information can snoop with a low probability of detection.
Email is sometimes used to distribute passwords, but this is generally an insecure method. Since most email is sent as plaintext, a message containing a password is readable without effort during transport by any eavesdropper. Further, the message will be stored as plaintext on at least two computers: the sender's and the recipient's. If it passes through intermediate systems during its travels, it will probably be stored there as well, at least for some time, and may be copied to backup, cache or history files on any of these systems.
Using client-side encryption will only protect transmission from the mail handling system server to the client machine. Previous or subsequent relays of the email will not be protected and the email will probably be stored on multiple computers, certainly on the originating and receiving computers, most often in clear text.
The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user of a TLS/SSL-protected exchange with a server by displaying a closed lock icon, or some other sign, when TLS is in use. There are several other techniques in use.
There is a conflict between stored hashed-passwords and hash-based challenge–response authentication; the latter requires a client to prove to a server that they know what the shared secret (i.e., password) is, and to do this, the server must be able to obtain the shared secret from its stored form. On a number of systems (including Unix-type systems) doing remote authentication, the shared secret usually becomes the hashed form and has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as a shared secret, an attacker does not need the original password to authenticate remotely; they only need the hash.
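The conflict can be made concrete with a toy HMAC-based challenge–response exchange (an assumed simple scheme for illustration, not any specific deployed protocol). Whatever value the server stores as the shared secret is exactly what a client must present, so if the stored hash is the secret, the hash alone is password-equivalent.

```python
import hashlib
import hmac
import os

# Toy hash-based challenge-response: server sends a random challenge,
# client proves knowledge of the shared secret without transmitting it.
def make_challenge():
    return os.urandom(16)

def client_response(shared_secret, challenge):
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def server_verify(stored_secret, challenge, response):
    # The server must use the secret directly to recompute the response --
    # it cannot verify against a one-way derivation of the secret.
    expected = hmac.new(stored_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

If `stored_secret` is itself the password hash, an attacker who steals the hash can compute valid responses without ever learning the original password.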
Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it.
Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server, where the server knows only a (not exactly) hashed password, and where the un-hashed password is required to gain access.
Usually, a system must provide a way to change a password, either because a user believes the current password has been (or might have been) compromised, or as a precautionary measure. If a new password is passed to the system in unencrypted form, security can be lost (e.g., via wiretapping) before the new password can even be installed in the password database, and if the new password is given to a compromised employee, little is gained. Some websites include the user-selected password in an unencrypted confirmation e-mail message, with the obvious increased vulnerability.
Identity management systems are increasingly used to automate the issuance of replacements for lost passwords, a feature called self-service password reset. The user's identity is verified by asking questions and comparing the answers to ones previously stored (i.e., when the account was opened).
Some password reset questions ask for personal information that could be found on social media, such as mother's maiden name. As a result, some security experts recommend either making up one's own questions or giving false answers.[35]
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often). Such policies usually provoke user protest and foot-dragging at best and hostility at worst.[36]There is often an increase in the number of people who note down the password and leave it where it can easily be found, as well as help desk calls to reset a forgotten password. Users may use simpler passwords or develop variation patterns on a consistent theme to keep their passwords memorable.[37]Because of these issues, there is some debate as to whether password aging is effective.[38]Changing a password will not prevent abuse in most cases, since the abuse would often be immediately noticeable. However, if someone may have had access to the password through some means, such as sharing a computer or breaching a different site, changing the password limits the window for abuse.[39]
Allotting separate passwords to each user of a system is preferable to having a single password shared by legitimate users of the system, certainly from a security viewpoint. This is partly because users are more willing to tell another person (who may not be authorized) a shared password than one exclusively for their use. Single passwords are also much less convenient to change because multiple people need to be told at the same time, and they make removal of a particular user's access more difficult, as for instance on graduation or resignation. Separate logins are also often used for accountability, for example to know who changed a piece of data.
Common techniques used to improve the security of computer systems protected by a password include:
Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.
It is common practice amongst computer users to reuse the same password on multiple sites. This presents a substantial security risk, because an attacker needs to compromise only a single site in order to gain access to other sites the victim uses. This problem is exacerbated by also reusing usernames, and by websites requiring email logins, as it makes it easier for an attacker to track a single user across multiple sites. Password reuse can be avoided or minimized by using mnemonic techniques, writing passwords down on paper, or using a password manager.[44]
It has been argued by Redmond researchers Dinei Florencio and Cormac Herley, together with Paul C. van Oorschot of Carleton University, Canada, that password reuse is inevitable, and that users should reuse passwords for low-security websites (which contain little personal data and no financial information, for example) and instead focus their efforts on remembering long, complex passwords for a few important accounts, such as bank accounts.[45] Similar arguments were made by Forbes, which advised against changing passwords as often as some "experts" recommend, due to the same limitations in human memory.[37]
Historically, multiple security experts asked people to memorize their passwords: "Never write down a password". More recently, multiple security experts such as Bruce Schneier recommend that people use passwords that are too complicated to memorize, write them down on paper, and keep them in a wallet.[46][47][48][49][50][51][52]
Password manager software can also store passwords relatively safely, in an encrypted file sealed with a single master password.
To facilitate estate administration, it is helpful for people to provide a mechanism for their passwords to be communicated to the persons who will administer their affairs in the event of their death. Should a record of accounts and passwords be prepared, care must be taken to ensure that the records are secure, to prevent theft or fraud.[53]
Multi-factor authentication schemes combine passwords (as "knowledge factors") with one or more other means of authentication, to make authentication more secure and less vulnerable to compromised passwords. For example, a simple two-factor login might send a text message, e-mail, automated phone call, or similar alert whenever a login attempt is made, possibly supplying a code that must be entered in addition to a password.[54]More sophisticated factors include such things as hardware tokens and biometric security.
Password rotation is a policy that is commonly implemented with the goal of enhancing computer security. In 2019, Microsoft stated that the practice is "ancient and obsolete".[55][56]
Most organizations specify a password policy that sets requirements for the composition and usage of passwords, typically dictating minimum length, required categories (e.g., upper and lower case, numbers, and special characters), and prohibited elements (e.g., use of one's own name, date of birth, address, telephone number). Some governments have national authentication frameworks[57] that define requirements for user authentication to government services, including requirements for passwords.
Many websites enforce standard rules such as minimum and maximum length, but also frequently include composition rules such as featuring at least one capital letter and at least one number/symbol. These latter, more specific rules were largely based on a 2003 report by the National Institute of Standards and Technology (NIST), authored by Bill Burr.[58] It originally proposed the practice of using numbers, obscure characters and capital letters and updating regularly. In a 2017 article in The Wall Street Journal, Burr reported he regrets these proposals and made a mistake when he recommended them.[59]
According to a 2017 rewrite of this NIST report, a number of websites have rules that actually have the opposite effect on the security of their users. This includes complex composition rules as well as forced password changes after certain periods of time. While these rules have long been widespread, they have also long been seen as annoying and ineffective by both users and cyber-security experts.[60] The NIST recommends people use longer phrases as passwords (and advises websites to raise the maximum password length) instead of hard-to-remember passwords with "illusory complexity" such as "pA55w+rd". A user prevented from using the password "password" may simply choose "Password1" if required to include a number and uppercase letter.[61] Combined with forced periodic password changes, this can lead to passwords that are difficult to remember but easy to crack.[58]
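The "illusory complexity" problem can be illustrated with a toy composition-rule checker. The policy here (at least 8 characters, one uppercase letter, one digit) is a hypothetical but typical example, not taken from any specific standard.

```python
import re

# Hypothetical composition policy of the kind criticized by NIST's 2017 rewrite.
def satisfies_policy(pw):
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None)
```

A weak, highly predictable choice like "Password1" satisfies the letter of such a policy, while a long lowercase passphrase with far more guessing resistance fails it.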
Paul Grassi, one of the 2017 NIST report's authors, further elaborated: "Everyone knows that an exclamation point is a 1, or an I, or the last character of a password. $ is an S or a 5. If we use these well-known tricks, we aren't fooling any adversary. We are simply fooling the database that stores passwords into thinking the user did something good."[60]
Pieris Tsokkis and Eliana Stavrou were able to identify some bad password construction strategies through their research and development of a password generator tool. They came up with eight categories of password construction strategies based on exposed password lists, password cracking tools, and online reports citing the most used passwords. These categories include user-related information, keyboard combinations and patterns, placement strategy, word processing, substitution, capitalization, append dates, and a combination of the previous categories.[62]
Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack. In a dictionary attack, all words in one or more dictionaries are tested. Lists of common passwords are also typically tested.
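The dictionary attack described above reduces to a guess-hash-compare loop. The sketch below targets an unsalted MD5 hash purely for illustration; real tools such as hashcat or John the Ripper perform the same comparison at millions of guesses per second against much larger wordlists.

```python
import hashlib

# Toy dictionary attack: hash each candidate word and compare it to the
# target hash; a match recovers the original password.
def dictionary_attack(target_hash, wordlist):
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

target = hashlib.md5(b"password1").hexdigest()
print(dictionary_attack(target, ["letmein", "qwerty", "password1"]))  # password1
```

This is also why salting matters: with a per-user salt, the attacker must repeat the whole loop for every user instead of hashing each candidate word once.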
Password strength is the likelihood that a password cannot be guessed or discovered, and varies with the attack algorithm used. Cryptologists and computer scientists often refer to the strength or 'hardness' in terms of entropy.[14]
Passwords easily discovered are termed weak or vulnerable; passwords difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain; some of which use password design vulnerabilities (as found in the Microsoft LAN Manager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.
Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically.[63] For example, Columbia University found 22% of user passwords could be recovered with little effort.[64] According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[65] He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users. (He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years—for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[66])
The multiple ways in which permanent or semi-permanent passwords can be compromised has prompted the development of other techniques. Some are inadequate in practice, and in any case few have become universally available for users seeking a more secure alternative.[74]A 2012 paper[75]examines why passwords have proved so hard to supplant (despite multiple predictions that they would soon be a thing of the past[76]); in examining thirty representative proposed replacements with respect to security, usability and deployability they conclude "none even retains the full set of benefits that legacy passwords already provide."
"The password is dead" is a recurring idea incomputer security. The reasons given often include reference to theusabilityas well as security problems of passwords. It often accompanies arguments that the replacement of passwords by a more secure means of authentication is both necessary and imminent. This claim has been made by a number of people at least since 2004.[76][87][88][89][90][91][92][93]
Alternatives to passwords include biometrics, two-factor authentication or single sign-on, Microsoft's Cardspace, the Higgins project, the Liberty Alliance, NSTIC, the FIDO Alliance and various Identity 2.0 proposals.[94][95]
However, in spite of these predictions and efforts to replace them, passwords are still the dominant form of authentication on the web. In "The Persistence of Passwords", Cormac Herley and Paul van Oorschot suggest that every effort should be made to end the "spectacularly incorrect assumption" that passwords are dead.[96] They argue that "no other single technology matches their combination of cost, immediacy and convenience" and that "passwords are themselves the best fit for many of the scenarios in which they are currently used."
Following this, Bonneau et al. systematically compared web passwords to 35 competing authentication schemes in terms of their usability, deployability, and security.[97][98] Their analysis shows that most schemes do better than passwords on security, some schemes do better and some worse with respect to usability, while every scheme does worse than passwords on deployability. The authors conclude with the following observation: "Marginal gains are often not sufficient to reach the activation energy necessary to overcome significant transition costs, which may provide the best explanation of why we are likely to live considerably longer before seeing the funeral procession for passwords arrive at the cemetery."
https://en.wikipedia.org/wiki/Password
An immobiliser or immobilizer is an electronic security device fitted to a motor vehicle that prevents the engine from being started unless the correct key (transponder or smart key) is present. This prevents the vehicle from being "hot wired" after entry has been achieved and thus reduces motor vehicle theft. Research shows that the uniform application of immobilisers reduced the rate of car theft by 40%.[1]
The electric immobiliser/alarm system was invented by St. George Evans and Edward Birkenbeuel and patented in 1919.[2] They developed a 3x3 grid of double-contact switches on a panel mounted inside the car so when the ignition switch was activated, current from the battery (or magneto) went to the spark plugs allowing the engine to start, or immobilizing the vehicle and sounding the horn.[3] The system settings could be changed each time the car was driven.[3] Modern immobiliser systems are automatic, meaning the owner does not have to remember to activate it.[4][5]
Early models used a static code in the ignition key (or key fob) which was recognised by an RFID loop (transponder) around the lock barrel and checked against the vehicle's engine control unit (ECU) for a match. If the code is unrecognised, the ECU will not allow fuel to flow and ignition to take place.
Later models use rolling codes or advanced cryptography to defeat copying of the code from the key or ECU (smart key).[citation needed] The microcircuit inside the key is activated by a small electromagnetic field which induces current to flow inside the key body, which in turn broadcasts a unique binary code, which is read by the automobile's ECU. When the ECU determines that the coded key is both current and valid, the ECU activates the fuel-injection sequence.[citation needed]
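The rolling-code idea can be sketched abstractly: the key and the ECU share a secret and a counter, and each transmission derives a fresh code, so a replayed old code no longer matches. This is a toy illustration only, not any real automotive protocol; the derivation function and names are hypothetical.

```python
import hashlib
import hmac

# Toy rolling-code derivation: each counter value yields a different code,
# so intercepting and replaying one transmission does not help an attacker.
def next_code(secret, counter):
    msg = counter.to_bytes(8, "big")
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()[:8]

secret = b"shared-immobiliser-secret"  # illustrative value
code_1 = next_code(secret, 1)
code_2 = next_code(secret, 2)
assert code_1 != code_2  # the code "rolls" forward with the counter
```

In a real system the receiver typically accepts codes within a small window of counter values, so the key and ECU can resynchronise after missed transmissions.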
In some vehicles, attempts to use an unauthorised or "non-sequenced" key cause the vehicle to activate a timed "no-start condition" and in some highly advanced systems, even use satellite or mobile phone communication to alert a security firm that an unauthorised attempt was made to code a key.[citation needed]
Coincidentally, this information is often recorded in modern automobile ECUs as part of their on-board diagnostics, which may record many other variables including speed, temperature, driver weight, geographic location, throttle position and yaw angle. This information can be used during insurance investigations, warranty claims or technical troubleshooting.[citation needed]
Immobilisers have been mandatory in all new cars sold in Germany since 1 January 1998, in the United Kingdom since 1 October 1998, in Finland since 1998, and in Australia since 2001.[citation needed]
In September 2007, a Transport Canada regulation mandated the installation of engine immobilisers in all new lightweight vehicles and trucks manufactured in Canada.[6]
Honda was the first motorcycle manufacturer to include immobilisers on its products in the 1990s.[7] Add-on immobilisers are available for older cars or vehicles that do not come equipped with factory immobilisers. The insurance approval for a self-arming immobiliser is known as "Thatcham 2" after the Motor Insurance Repair Research Centre in Thatcham, England. Approved immobilisers must intercept at least two circuits; typically the low-voltage ignition circuit and the fuel pump circuit. Some may also intercept the low-current starter motor circuit from the key switch to the relay.
Lack of immobilizers in many Kia and Hyundai U.S. models after 2010 and before mid-2021 made these cars targets for theft in the early 2020s, especially in Milwaukee County, Wisconsin and Columbus, Ohio.[8] The Kia Challenge TikTok trend was linked to a series of Hyundai/Kia vehicle thefts in 2022.
Numerous vulnerabilities have been found in the immobilisers designed to protect modern cars from theft.[9]Many vehicle immobilisers use the Megamos chip, which has been proven to be crackable.[10]The Megamos transponder is one of many different transponders found in today's immobiliser systems and also comes in many different versions. Hacking of an immobiliser in the real world would be performed on the vehicle, not on the key. It would be faster to program a new key to the vehicle than to try to clone the existing key, especially on modern vehicles.[11]
Some immobiliser systems tend to remember the last key code for so long that they may accept a non-transponder key even after the original key has been removed from the ignition for a few minutes.[12]
A 2016 study in the Economic Journal found that the immobiliser lowered the overall rate of car theft by about 40% between 1995 and 2008.[1] The benefits in terms of prevented thefts were at least three times higher than the costs of installing the device.[1]
https://en.wikipedia.org/wiki/Immobiliser
A keycard lock is a lock operated by a keycard, a flat, rectangular plastic card. The card typically, but not always, has identical dimensions to that of a credit card, that is ID-1 format. The card stores a physical or digital pattern that the door mechanism accepts before disengaging the lock.
There are several common types of keycards in use, including the mechanical holecard, barcode, magnetic stripe, Wiegand wire embedded cards, smart card (embedded with a read/write electronic microchip), RFID, and NFC proximity cards.
Keycards are frequently used in hotels as an alternative to mechanical keys.
The first commercial use of key cards was to raise and lower the gate at automated parking lots where users paid a monthly fee.[1]
Keycard systems operate by physically moving detainers in the locking mechanism with the insertion of the card, by shining LEDs through a pattern of holes in the card and detecting the result, by swiping or inserting a magnetic stripe card, or in the case of RFID or NFC cards, merely being brought into close proximity to a sensor. Keycards may also serve as ID cards, or as part of an NFC system, have the code transmitted to a mobile phone to be placed into a digital wallet system such as Apple Pay or Google Pay, negating the need for a physical keycard.
Many electronic access control locks use a Wiegand interface to connect the card swipe mechanism to the rest of the electronic entry system.
Newer keycard systems use radio-frequency identification (RFID) technology such as the TLJ infinity.[citation needed]
Mechanical keycard locks employ detainers which must be arranged in pre-selected positions by the key before the bolt will move. This was a mechanical type of lock operated by a plastic key card with a pattern of holes. There were 32 positions for possible hole locations, giving approximately 4.3 billion different keys. The key could easily be changed for each new guest by inserting a new key template in the lock that matched the new key.[2]
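The "approximately 4.3 billion" figure follows directly from the 32 hole positions: each position either has a hole or not, so the number of distinct patterns is 2 to the 32nd power.

```python
# Each of the 32 positions is a hole or no hole: 2**32 possible key patterns.
combinations = 2 ** 32
print(combinations)  # 4294967296, roughly 4.3 billion
```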
In the early 1980s, the key card lock was electrified with LEDs that detected the holes.
Since the keycode is permanently set into the card at manufacture by the positions of magnetic wires, Wiegand cards cannot be erased by magnetic fields or reprogrammed as magnetic stripe cards can.
Magnetic stripe (sometimes "strip") based keycard locks function by running the magnetic stripe over a sensor that reads the contents of the stripe. The stripe's contents are compared to those either stored locally in the lock or those of a central system. Some centralized systems operate using hardwired connections to central controllers while others use various frequencies of radio waves to communicate with the central controllers. Some have the feature of a mechanical (traditional key) bypass in case of loss of power.
RFID cards contain a small chip and induction loop which the transmitter on the keycard reader can access. The main advantage of RFID cards is that they do not need to be removed from the wallet or pass holder, as the keycard reader can usually read them from a few inches away.
In the case of the hotel room lock, there is no central system; the keycard and the lock function in the same tradition as a standard key and lock. However, if the card readers communicate with a central system, it is the system that unlocks the door, not the card reader alone.[3]This allows for more control over the locks; for example, a specific card may only work on certain days of the week or time of day. Which locks can be opened by a card can be changed at any time. Logs are often kept of which cards unlocked doors at what times.
Computerized authentication systems, such as key cards, raise privacy concerns, since they enable computer surveillance of each entry. RFID cards and key fobs are becoming increasingly popular due to their ease of use. Many modern households have installed digital locks that make use of key cards, in combination with biometric fingerprint and keypad PIN options. Offices have also slowly installed digital locks that integrate with key cards and biometric technology.[4]
https://en.wikipedia.org/wiki/Keycard
Hashcat is a password recovery tool. It had a proprietary code base until 2015, but was then released as open source software. Versions are available for Linux, macOS, and Windows. Examples of hashcat-supported hashing algorithms are LM hashes, MD4, MD5, the SHA family and Unix Crypt formats, as well as algorithms used in MySQL and Cisco PIX.
Hashcat has received publicity because it is partly based on flaws in other software discovered by its creator. An example was a flaw in 1Password's password manager hashing scheme.[2] It has also been compared to similar software in a Usenix publication[3] and been described on Ars Technica.[4]
Previously, two variants of hashcat existed:
With the release of hashcat v3.00, the GPU and CPU tools were merged into a single tool called hashcat. The CPU-only version became hashcat-legacy.[5] Both CPU and GPU now require OpenCL.
Many of the algorithms supported by hashcat-legacy (such as MD5, SHA1, and others) can be cracked in a shorter time with the GPU-based hashcat.[6] However, not all algorithms can be accelerated by GPUs. Bcrypt is an example of this. Due to factors such as data-dependent branching, serialization, and memory requirements, among others, oclHashcat/cudaHashcat were not catch-all replacements for hashcat-legacy.
hashcat-legacy is available for Linux, OSX and Windows.
hashcat is available for macOS, Windows, and Linux with GPU, CPU and generic OpenCL support which allows for FPGAs and other accelerator cards.
Hashcat offers multiple attack modes for obtaining effective and complex coverage over a hash's keyspace. These modes are:
The traditional bruteforce attack is considered outdated, and the Hashcat core team recommends the Mask-Attack as a full replacement.
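The advantage of a mask attack over naive brute force is that it constrains the candidate character set per position. The arithmetic below uses a hypothetical target: a 7-character password known to be one uppercase letter, four lowercase letters, then two digits, which in hashcat's mask syntax would be written ?u?l?l?l?l?d?d.

```python
# Keyspace for the mask ?u?l?l?l?l?d?d:
# 26 uppercase * 26^4 lowercase * 10^2 digits.
mask_keyspace = 26 * 26**4 * 10**2
print(mask_keyspace)  # 1188137600

# Naive brute force over all 95 printable ASCII characters at each of
# the 7 positions is vastly larger.
full_keyspace = 95 ** 7
print(full_keyspace > 1000 * mask_keyspace)  # True
```

Shrinking the keyspace by several orders of magnitude is what turns an infeasible exhaustive search into one a GPU rig can finish.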
Team Hashcat[9] (the official team of the Hashcat software composed of core Hashcat members) won first place in the KoreLogic "Crack Me If You Can" competitions at DEF CON in 2010,[10] 2012, 2014,[11] 2015,[12] and 2018, and at DerbyCon in 2017.
https://en.wikipedia.org/wiki/Hashcat
What3words (stylized as what3words) is a proprietary[4] geocode system designed to identify any location on the surface of Earth with a resolution of approximately 3 metres (9.8 ft). It is owned by What3words Limited, based in London, England. The system encodes geographic coordinates into three permanently fixed dictionary words. For example, the front door of 10 Downing Street in London is identified by ///slurs.this.shark.[5]
What3words differs from most location encoding systems in that it uses words rather than strings of numbers or letters, and the pattern of this mapping is not obvious; the algorithm mapping locations to words is copyrighted.[6]
What3words has been subject to a number of criticisms, both for its closed source code[7] and the significant risk of ambiguity and confusion in its three-word addresses.[8] This has led some to advise against the use of What3words in safety-critical applications.[9][10]
The company has a website, apps for iOS and Android, and an API for bidirectional conversion between What3words addresses and latitude–longitude coordinates.
Founded by Chris Sheldrick, Jack Waley-Cohen, Mohan Ganesalingam and Michael Dent, What3words was launched in July 2013.[11][12] Sheldrick and Ganesalingam conceived the idea when Sheldrick, working as an event organizer, struggled to get bands and equipment to the appropriate loading docks and entrances of large music venues.[13][14] Sheldrick tried using GPS coordinates, but decided that words were better than numbers after transposing two digits led a driver to the wrong location. He credits a mathematician friend for the idea of dividing the world into 3-metre (10 ft) squares, and the linguist Jack Waley-Cohen with using memorable words.[15] The company was incorporated in March 2013[16] and a patent application for the core technology filed in April 2013.[4] In November 2013, What3words raised US$500,000 of seed funding.[17]
What3words originally sold "OneWord" addresses, which were stored in a database for a yearly fee,[12] but this offering was discontinued[18] as the company switched to a business-to-business model.[19] In 2015, the company was targeting logistics companies, post offices, and couriers.[15]
In January 2018, Mercedes-Benz bought approximately 10% of the company and announced support for What3words in future versions of their infotainment and navigation system.[20]
In March 2021, it was announced thatITV plchad invested £2 million in What3words as the first investment in itsmedia-for-equityscheme.[21]
What3words has raised more than £50m from investors since launching.[19]In 2024, the company had a turnover of £2.2m and made a loss of £10.6m.[3]
The what3words system and app is free for anyone to use. The company states that its revenue comes from charging businesses that benefit from its products.[22]
What3words divides the world into a grid of 57 trillion 3-by-3-metre (10 ft × 10 ft) squares, each of which has a three-word address. The company says they do their best to removehomophonesand spelling variations;[23]however, at least 32 pairs of English near-homophones still remain.[24]
Wordlists are available in 50 languages,[25]each of which uses a list of 25,000 words (except for English, which uses 40,000 to cover sea as well as land).[26]Translations are not direct, as direct translations to some languages could produce more than three words. Rather, territories are localised "considering linguistic sensitivities and nuances".[27]Densely populated areas have strings of short words to aid more frequent usage; while less populated areas, such as the North Atlantic, use more complex words.[27][15]
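A quick back-of-envelope check shows why a 40,000-word English list is enough: the number of ordered three-word combinations comfortably exceeds the roughly 57 trillion squares in the grid. This is illustrative arithmetic only, not the What3words algorithm, whose mapping from squares to word triples is proprietary.

```python
# Capacity check (illustrative only): a 40,000-word list yields
# 40,000^3 ordered three-word combinations, more than the ~57 trillion
# 3 m x 3 m squares covering the Earth's surface.
WORDS = 40_000
SQUARES = 57_000_000_000_000  # ~57 trillion

combinations = WORDS ** 3
print(combinations)             # 64,000,000,000,000
print(combinations >= SQUARES)  # True: the English list alone suffices
```

The 25,000-word lists used for other languages cover land areas only, which need far fewer than 57 trillion addresses (25,000³ ≈ 15.6 trillion combinations).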
In a 2019 blog,open standardsadvocate and technology expertTerence Edenquestioned the cultural neutrality of using words rather than the numbers generated by map coordinates. "Numbers are (mostly) culturally neutral," he said. "Words are not. Is mile.crazy.shade a respectful name for a war memorial? How about tribes.hurt.stumpy for a temple?"[7]
What3words state that similar addresses are spaced as far apart as possible to avoid confusion,[28]and that similar-sounding codes have a 1 in 2.5 million chance of pointing to locations near each other.[29]
However, security researcher Andrew Tierney calculates that 75% of What3words addresses contain plural words that also exist in singular form (or the reverse).[24]Co-founder and CEO Sheldrick responded that "Whilst the overwhelming proportion of similar-sounding three-word combinations will be so far apart that an error is obvious, there will still be cases where similar sounding word combinations are nearby."[29]Further analysis by Tierney shows that in the London area, around 1 in 24 addresses will be confusable with another London address.[9]
In September 2022, theDepartment for Culture, Media and Sportused What3words to direct mourners to the end ofthe queueto view the Queen lying in state in London. Of the first five codes published, four led to the wrong place,[30]includinga suburb of Londonsome 15 miles from the real end of the queue.[31]Officials later moved to an automated system to generate the identifiers, as they realised having people involved in the process resulted in typos.[30]
A paper published in 2023[8]investigated the patented algorithm without using What3words's own wordlist. It found that usinglinear congruencefor address assignment does a poor job of randomising the wordlist. It also noted that the AutoSuggest feature did not return sufficient results to disambiguate an address. It concluded that "W3W should not be adopted as critical infrastructure without a thorough evaluation against a number of competing alternatives".[8]
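The randomisation weakness the paper identifies can be sketched in miniature. In a linear congruential assignment, the index of each cell is an affine function of its position, so adjacent cells map to indices that differ by a fixed constant; a well-randomised (cryptographic) permutation would hide any such structure. The parameters below are illustrative, not What3words's actual values.

```python
# Illustrative sketch of linear-congruential address assignment
# (parameters are invented, not What3words's): cell n maps to index
# (a*n + c) mod m, and the index is split into three word positions.
def lcg_index(n, a=48271, c=11, m=40_000**3):
    return (a * n + c) % m

def to_triple(idx, words=40_000):
    # Decompose one index into three word indices (base-40,000 digits).
    w1, rest = divmod(idx, words * words)
    w2, w3 = divmod(rest, words)
    return (w1, w2, w3)

# Adjacent cells always differ by the same constant a in their indices,
# which is exactly the kind of structure a poor randomisation leaks.
print(lcg_index(1) - lcg_index(0))  # 48271
print(lcg_index(2) - lcg_index(1))  # 48271
print(to_triple(lcg_index(0)))      # (0, 0, 11)
```

Because the difference between neighbouring indices is constant, an attacker (or an unlucky user) can predict relationships between nearby addresses, undermining the claim that similar addresses are far apart.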
According toRory Sutherlandfrom the advertising agencyOgilvyin a 2014 op-ed piece forThe Spectator, the system's advantages are memorability, accuracy, and non-ambiguity in speech.[32]
Mountain rescue services in the UK have warned against relying on the app.
Since 2019, What3words has seen adoption by police, fire and ambulance services, who can use it for free[35]and participate in media campaigns provided by What3words[36]to promote the app.[37][38][39]By September 2021, more than 85 percent of British emergency services teams used What3words, including theMetropolitan PoliceandLondon Fire Brigade.[40][7]Support has also been added to the Australian Government'sTriple ZeroEmergency Plus App.[41]
In August 2022,East of England Ambulance Servicetook "nearly 10 minutes" to identify a cycle path after being given a What3words address. When approached byCambridge News, the Ambulance Service continued to recommend the app, and did not respond to a query about why they were unable to quickly pinpoint the precise location using the system.[47]
The What3words system has been criticised for being controlled by a private business, and the software for being patented and not freely usable.[7]
The company has pursued a policy of issuing copyright claims against individuals and organisations that have hosted or published files of the What3wordsalgorithmorreverse-engineeredcode that replicates the service's functionality, such as thefreeandopen sourceimplementation WhatFreeWords; the whatfreewords.org website was taken down following aDigital Millennium Copyright Act(DMCA)take-down noticeissued by What3words.[48]This policy has extended to removing comments on social media which refer to unauthorised versions. In late April 2021, a security researcher who had offered onTwitterto share WhatFreeWords software was contacted by What3Words's law firm, requiring him to delete the tweets and the software, and implying that legal action might follow non-compliance.[49]
The site has been parodied by others who have created services including What3Emojis[50]usingemojis, What3Birds[51]usingBritish birds, What3fucks[52]usingswear words, Four King Maps[53][54]also using swear words (covering only theBritish Isles), and What3Numbers[55]usingOpenStreetMaptile identifiers.
|
https://en.wikipedia.org/wiki/What3Words
|
Asoftware license manageris asoftware management toolused byindependent software vendorsor by end-user organizations to control where and how software products are able to run. License managers protect software vendors from losses due tosoftware piracyand enable end-user organizations to comply withsoftware licenseagreements. License managers enable software vendors to offer a wide range of usage-centric software licensing models, such asproduct activation,trial licenses,subscription licenses, feature-based licenses, andfloating licensingfrom the same software package they provide to all users.
A license manager is different from asoftware asset managementtool, which end-user organizations employ to manage the software they have licensed from many software vendors. However, some software asset management tools include license manager functions. These are used to reconcile software licenses and installed software, and generally include device discovery, software inventory, license compliance, andreportingfunctions.
An additional benefit of these software management tools is that they reduce the difficulty, cost, and time required for reporting, and can increase operational transparency, helping to avoid litigation costs associated with software misuse, as set forth by theSarbanes-Oxley Act.[1][2]
License managementsolutions provided by non-vendor companies can be more valuable to end-users, since most vendors do not provide enough license usage information. A vendor's license manager provides limited information, while non-vendor license management solutions are developed for end-users to make the best use of the licenses they hold.[3]
Most license managers can cover a range ofsoftware licensing models, such as license dongles (USB keys),floating licenses,network licenses, andconcurrent licenses.
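The floating-license model mentioned above can be sketched as a pool of seats that clients check out and release; no more than the licensed number of users can hold a seat at once. This is a minimal illustration of the concept, not any vendor's API.

```python
# Minimal sketch of a floating-license pool (illustrative, not any
# vendor's implementation): at most `seats` concurrent checkouts.
class FloatingLicensePool:
    def __init__(self, product, seats):
        self.product = product
        self.seats = seats
        self.checked_out = set()

    def checkout(self, user):
        """Grant a seat if one is free; otherwise refuse."""
        if user in self.checked_out:
            return True  # already holds a seat
        if len(self.checked_out) >= self.seats:
            return False  # pool exhausted; caller must wait or fail
        self.checked_out.add(user)
        return True

    def release(self, user):
        """Return a seat to the pool."""
        self.checked_out.discard(user)

pool = FloatingLicensePool("cad-suite", seats=2)
print(pool.checkout("alice"))  # True
print(pool.checkout("bob"))    # True
print(pool.checkout("carol"))  # False -- both seats in use
pool.release("alice")
print(pool.checkout("carol"))  # True once a seat is freed
```

Real license managers add network transport, heartbeats to reclaim seats from crashed clients, and cryptographic protection of the seat count.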
|
https://en.wikipedia.org/wiki/License_manager
|
Product activationis alicensevalidation procedure required by someproprietary softwareprograms. It prevents unlimited free use of copied or replicated software: unactivated software refuses to fully function until itdetermineswhether it is authorized to do so, and activation lifts that restriction. An activation can last indefinitely, or it can have a time limit, requiring renewal or re-activation for continued use. Product activation is often based on verification of a product key, a sequence of letters and/or numbers checked by an algorithm or mathematical formula, possibly combined with verification against a database or some other method carried out over the Internet. If the check succeeds and verification confirms that the product key is genuine, the product is activated. Hashing can be used to map the key's letters to numbers before verification.
In one form, product activation refers to a method invented byRic Richardsonand patented (U.S. patent 5,490,216) byUnilocwhere a software applicationhasheshardware serial numbers and an ID number specific to the product's license (aproduct key) to generate a unique installation ID. This installation ID is sent to the manufacturer to verify the authenticity of the product key and to ensure that the product key is not being used for multipleinstallations.
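The installation-ID idea described above can be sketched as a hash over the product key and the machine's hardware serial numbers: the same key on the same machine always yields the same ID, while a different machine yields a different one, letting the vendor detect a key reused across installations. This is an illustrative sketch only; the actual Uniloc and Microsoft schemes differ in detail.

```python
import hashlib

# Illustrative sketch (not the patented scheme): derive an installation
# ID by hashing the product key together with hardware serial numbers.
def installation_id(product_key: str, hardware_serials: list) -> str:
    digest = hashlib.sha256()
    digest.update(product_key.encode())
    for serial in sorted(hardware_serials):  # order-independent
        digest.update(serial.encode())
    # Shorten for manual entry over the phone or a web form.
    return digest.hexdigest()[:16].upper()

iid = installation_id("ABCDE-12345", ["disk:WD-55X", "cpu:0F0A1"])
print(iid)  # stable for this machine+key; changes on new hardware
```

The vendor stores each (product key, installation ID) pair; a second activation of the same key with a different ID signals the key is being used on multiple machines.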
Alternatively, the software vendor sends the user a unique product serial number. When the user installs the application it requests that the user enter their product serial number, and checks it with the vendor's systems over theInternet. The application obtains the license limits that apply to that user'slicense, such as a time limit or enabling of product features, from the vendor's system and optionally also locks the license to the user's system. Once activated the license continues working on the user's machine with no further communication required with the vendor's systems. Some activation systems also support activation on user systems without Internet connections; a common approach is to exchangeencryptedfiles at an Internet terminal.
An early example of product activation was in the MS-DOS program D'Bridge Email System written by Chris Irwin, a commercial network system for BBS users and Fidonet. The program generated a unique serial number which then called the author's BBS via a dialup modem connection. Upon connection, the serial number was validated. A unique "key" was returned which allowed the program to continue for a trial period. If two D'Bridge systems communicated using the same key, the software deliberately crashed. The software has long since had the entire activation system removed and is now freeware by Nick J. Andre, Ltd.
Microsoft Product Activationwas introduced in the Brazilian version ofMicrosoft Office 97Small Business Edition[1]andMicrosoft Word 97sold in theHungarianmarket. Microsoft broadened that successful pilot with the release ofMicrosoft Publisher 98in the Brazilian market.[1]Microsoft then rolled out product activation in its flagshipMicrosoft Office 2000product. All retail copies sold inAustralia,Brazil,China,France, andNew Zealand, and some sold inCanadaand theUnited States, required the user to activate the product via the Internet.[1][2]However, no copies of Office 2000 have required activation since April 15, 2003.[3]After its success, the product activation system was extended worldwide and incorporated intoWindows XPandOffice XPand all subsequent versions ofWindowsandOffice. Although Microsoft developed its technology independently, in April 2009 a jury found it to have willfully infringed Uniloc's patent. However, in September 2009, US District Judge William Smith "vacated" the jury's verdict and ruled in favour of Microsoft.[4]This ruling was subsequently overturned in 2011.
Software that has been installed but not activated does not perform its full functions, and/or imposes limits on file size or session time. Some software allows full functionality for a limited "trial" time before requiring activation. Unactivated software typically reminds the user to activate, at program startup or at intervals, and when the imposed size or time limits are reached. (Some unactivated software has taken disruptive actions such as crashing or vandalism, but this is rare.)
Some 'unactivated' products act as atime-limited trialuntil a product key—a number encoded as a sequence of alphanumeric characters—is purchased and used to activate the software. Some products allow licenses to be transferred from one machine to another using online tools, without having to calltechnical supportto deactivate the copy on the old machine before reactivating it on the new machine.
Software verifies activation every time it starts up, and sometimes while it is running. Some software even "phones home", checking a central database (across the Internet or other means) to check whether the specific activation has been revoked. Some software might stop working or reduce functionality if it cannot connect to the central database.
|
https://en.wikipedia.org/wiki/Product_activation
|
Digital rights management(DRM) is the management of legal access todigital content. Various tools ortechnological protection measures,[1]such asaccess controltechnologies, can restrict the use ofproprietary hardwareandcopyrightedworks.[2]DRM technologies govern the use, modification and distribution of copyrighted works (e.g.software, multimedia content) and of systems that enforce these policies within devices.[3]DRM technologies[4]includelicensing agreements[5]andencryption.[6]
Laws in many countries criminalize the circumvention of DRM, communication about such circumvention, and the creation and distribution of tools used for such circumvention. Such laws are part of the United States'Digital Millennium Copyright Act(DMCA),[7]and theEuropean Union'sInformation Society Directive[8]– with the FrenchDADVSIan example of a member state of the European Union implementing that directive.[9]
Copyright holders argue that DRM technologies are necessary to protectintellectual property, just as physical locks preventpersonal propertyfrom theft.[1]For example, they can help copyright holders maintainartistic control[10]and support license modalities such as rentals.[11]Industrial users (i.e. industries) have expanded the use of DRM technologies to various hardware products, such asKeurig'scoffeemakers,[12][13]Philips'light bulbs,[14][15]mobile devicepower chargers,[16][17][18]andJohn Deere'stractors.[19]For instance, tractor companies try to prevent farmers from makingrepairsvia DRM.[20][21]
DRM is controversial. There is little evidence that DRM preventscopyright infringement, legitimate customers have complained of the inconvenience it causes, and it is suspected ofstifling innovation and competition.[22]Furthermore, works can become permanently inaccessible if the DRM scheme changes or if a required service is discontinued.[23]DRM technologies have been criticized for restricting individuals from copying or using the content legally, such as byfair useor by making backup copies. DRM is in common use by theentertainment industry(e.g., audio and video publishers).[24]Many online stores such as OverDrive use DRM technologies, as do cable and satellite service operators. Apple removed DRM technology fromiTunesaround 2009.[25]Typical DRM also prevents lending materials out through a library, or accessing works in thepublic domain.[1]
The rise ofdigital mediaand analog-to-digital conversion technologies has increased the concerns of copyright-owners, particularly within the music and video industries. Whileanalogmedia inevitably lose quality with eachcopy generationand during normal use, digital media files may be duplicated without limit with no degradation. Digital devices make it convenient for consumers to convert (rip) media originally in a physical, analog or broadcast form into a digital form for portability or later use. Combined with theInternetandfile-sharingtools, these technologies made unauthorized distribution of copyrighted content (digital piracy) much easier.
DRM became a major concern with the growth of the Internet in the 1990s, as piracy crushedCDsales and online video became popular. It peaked in the early 2000s as various countries attempted to respond with legislation and regulations, and dissipated in the 2010s associal mediaandstreaming serviceslargely replaced piracy and content providers developed next-generation business models.
In 1983, the Software Service System (SSS) devised by the Japanese engineer Ryuichi Moriya was the first example of DRM technology. It was subsequently refined under the namesuperdistribution. The SSS was based on encryption, with specialized hardware that controlled decryption and enabled payments to be sent to the copyright holder. The underlying principle was that the physical distribution of encrypted digital products should be completely unrestricted, and that users of those products would be encouraged to redistribute them.[26]
An early DRM protection method for computer andNintendo Entertainment Systemgames was when the game would pause and prompt the player to look up a certain page in a booklet or manual that came with the game; if the player lacked access to the material, they would not be able to continue.
An early example of a DRM system is theContent Scramble System(CSS) employed by theDVD ForumonDVDmovies. CSS uses anencryption algorithmto encrypt content on the DVD disc. Manufacturers of DVD players must license this technology and implement it in their devices so that they can decrypt the content. The CSS license agreement includes restrictions on how the DVD content is played, including what outputs are permitted and how such permitted outputs are made available. This keeps the encryption intact as the content is displayed.[citation needed]
In May 1998, theDigital Millennium Copyright Act(DMCA) passed as an amendment to UScopyright law. It had controversial (possibly unintended) implications. Russian programmerDmitry Sklyarovwas arrested for alleged DMCA infringement after a presentation atDEF CON. The DMCA has been cited as chilling to legitimate users;[27]such as security consultants includingNiels Ferguson, who declined to publish vulnerabilities he discovered inIntel's secure-computing scheme due to fear of arrest under DMCA; and blind or visually impaired users ofscreen readersor otherassistive technologies.[28]
In 1999,Jon Lech JohansenreleasedDeCSS, which allowed a CSS-encrypted DVD to play on a computer runningLinux, at a time when no compliant DVD player for Linux had yet been created. The legality of DeCSS is questionable: one of its authors was sued, and reproduction of the keys themselves is subject to restrictions asillegal numbers.[29]
More modern examples includeADEPT,FairPlay,Advanced Access Content System.
TheWorld Intellectual Property Organization Copyright Treaty(WCT) was passed in 1996. The USDigital Millennium Copyright Act(DMCA), was passed in 1998. The European Union enacted theInformation Society Directive. In 2006, the lower house of the French parliament adopted such legislation as part of the controversialDADVSIlaw, but added that protected DRM techniques should be made interoperable, a move which caused widespread controversy in the United States. TheTribunal de grande instance de Parisconcluded in 2006, that the complete blocking of any possibilities of making private copies was an impermissible behaviour under French copyright law.
The broadcast flag concept was developed by Fox Broadcasting in 2001, and was supported by theMPAAand the U.S.Federal Communications Commission(FCC). A ruling in May 2005 by aUnited States courts of appealsheld that the FCC lacked authority to impose it on the US TV industry. It required that all HDTVs obey a stream specification determining whether a stream can be recorded. This could block instances of fair use, such astime-shifting. It achieved more success elsewhere when it was adopted by theDigital Video Broadcasting Project(DVB), a consortium of about 250 broadcasters, manufacturers, network operators, software developers, and regulatory bodies from about 35 countries involved in attempting to develop new digital TV standards.
In January 2001, the Workshop on Digital Rights Management of theWorld Wide Web Consortiumwas held.[30]
On 22 May 2001, the European Union passed the Information Society Directive, with copyright protections.
In 2003, theEuropean Committee for Standardization/Information Society Standardization System (CEN/ISSS) DRM Report was published.[31]
In 2004, the Consultation process of the European Commission, and the DG Internal Market, on the Communication COM(2004)261 by the European Commission on "Management of Copyright and Related Rights" closed.[32]
In 2005, DRM Workshops ofDirectorate-General for Information Society and Media (European Commission), and the work of the High Level Group on DRM were held.[33]
In 2005,Sony BMGinstalled DRM software on users' computers without clearly notifying the user or requiring confirmation. Among other things, the software included arootkit, which createda security vulnerability. When the nature of the software was made public much later, Sony BMG initially minimized the significance of the vulnerabilities, but eventually recalled millions of CDs, and made several attempts to patch the software to remove the rootkit.Class action lawsuitswere filed, which were ultimately settled by agreements to provide affected consumers with a cash payout or album downloads free of DRM.[34]
Microsoft's media playerZunereleased in 2006 did not support content that used Microsoft'sPlaysForSureDRM scheme.[35]
Windows Media DRM reads instructions from media files in a rights management language that states what the user may do with the media.[36]Later versions of Windows Media DRM implemented music subscription services that make downloaded files unplayable after subscriptions are cancelled, along with the ability for a regional lockout.[37]Tools likeFairUse4WMstrip Windows Media of DRM restrictions.[38]
The Gowers Review of Intellectual Property by the British Government fromAndrew Gowerswas published in 2006 with recommendations regarding copyright terms, exceptions, orphaned works, and copyright enforcement.
DVB (DVB-CPCM) is an updated variant of the broadcast flag. The technical specification was submitted to European governments in March 2007. As with much DRM, the CPCM system is intended to control use of copyrighted material by the end-user, at the direction of the copyright holder. According to Ren Bucholz of theElectronic Frontier Foundation(EFF), "You won't even know ahead of time whether and how you will be able to record and make use of particular programs or devices".[39]The normative sections were approved for publication by the DVB Steering Board, and formalized byETSIas a formal European Standard (TS 102 825-X) where X refers to the Part number. Nobody has yet stepped forward to provide aCompliance and Robustnessregime for the standard, so it is not presently possible to fully implement a system, as no supplier of device certificates has emerged.
In December 2006, the industrial-gradeAdvanced Access Content System(AACS) forHD DVDandBlu-ray Discs, a process key was published by hackers, which enabled unrestricted access to AACS-protected content.[40][41]
In January 2007,EMIstopped publishing audio CDs with DRM, stating that "the costs of DRM do not measure up to the results."[42]In March, Musicload.de, one of Europe's largest internet music retailers, announced their position strongly against DRM. In an open letter, Musicload stated that three out of every four calls to their customer support phone service are as a result of consumer frustration with DRM.[43]
Apple Inc.made music DRM-free after April 2007[44]and labeled all music as "DRM-Free" after 2008.[45]Other works sold on iTunes such as apps, audiobooks, movies, and TV shows are protected by DRM.[46]
A notable DRM failure happened in November 2007, when videos purchased fromMajor League Baseballprior to 2006 became unplayable due to a change to the servers that validate the licenses.[47]
In 2007, the European Parliament supported the EU's direction on copyright protection.
Asusreleased a soundcard which features a function called "Analog Loopback Transformation" to bypass the restrictions of DRM. This feature allows the user to record DRM-restricted audio via the soundcard's built-in analog I/O connection.[48][49]
Digital distributorGOG.com(formerly Good Old Games) specializes inPCvideo gamesand has a strict non-DRM policy.[50]
Baen BooksandO'Reilly Mediadropped DRM prior to 2012, whenTor Books, a major publisher of science fiction and fantasy books, first sold DRM-free e-books.[51]
TheAxmedisproject, a European Commission Integrated Project of the FP6, completed in 2008. Its main goal was to automate content production,copy protection, and distribution, to reduce the related costs, and to support DRM in both B2B and B2C areas, harmonizing them.
TheINDICAREproject was a dialogue on consumer acceptability of DRM solutions in Europe that completed in 2008.
In mid-2008, theWindowsversion ofMass Effectmarked the start of a wave of titles primarily making use ofSecuROMfor DRM and requiring authentication with a server. The use of the DRM scheme in 2008'sSporeled to protests, resulting in searches for an unlicensed version. This backlash against the activation limit ledSporeto become the most pirated game in 2008, topping the top 10 list compiled byTorrentFreak.[52][53]However,Tweakguidesconcluded that DRM does not appear to increase video game piracy, noting that other games on the list, such asCall of Duty 4andAssassin's Creed, use DRM without limits or online activation. Additionally, other video games that use DRM, such asBioShock,Crysis Warhead, andMass Effect, do not appear on the list.[54]
Many mainstream publishers continued to rely ononlineDRM throughout the later half of 2008 and early 2009, includingElectronic Arts,Ubisoft,Valve, andAtari,The Sims 3being a notable exception in the case of Electronic Arts.[55]Ubisoft broke with the tendency to use online DRM in late 2008, with the release ofPrince of Persiaas an experiment to "see how truthful people really are" regarding the claim that DRM was inciting people to use illegal copies.[56]Although Ubisoft has not commented on the results of the "experiment", Tweakguides noted that twotorrentsonMininovahad over 23,000 people downloading the game within 24 hours of its release.[57]
In 2009,Amazonremotely deleted purchased copies ofGeorge Orwell'sAnimal Farm(1945) andNineteen Eighty-Four(1949) from customers'Amazon Kindlesafter refunding the purchase price.[58]Commentators described these actions asOrwellianand compared Amazon toBig BrotherfromNineteen Eighty-Four.[59][60][61][62]Amazon CEOJeff Bezosthen issued a public apology. FSF wrote that this was an example of the excessive power Amazon has to remotely censor content, and called upon Amazon to drop DRM.[63]Amazon then revealed the reason behind its deletion: the e-books in question were unauthorized reproductions of Orwell's works, which were not within thepublic domainand that the company that published and sold on Amazon's service had no right to do so.[64]
Ubisoft formally announced a return to online authentication on 9 February 2010, through itsUplayonline game platform, starting withSilent Hunter 5,The Settlers 7, andAssassin's Creed II.[65]Silent Hunter 5was first reported to have been compromised within 24 hours of release,[66]but users of the cracked version soon found out that only early parts of the game were playable.[67]The Uplay system works by having the installed game on the local PCs incomplete and then continuously downloading parts of the game code from Ubisoft's servers as the game progresses.[68]It was more than a month after the PC release in the first week of April that software was released that could bypass Ubisoft's DRM inAssassin's Creed II. The software did this by emulating a Ubisoft server for the game. Later that month, a real crack was released that was able to remove the connection requirement altogether.[69][70]
In March 2010, Uplay servers suffered a period of inaccessibility due to a large-scaleDDoS attack, causing around 5% of game owners to become locked out of playing their game.[71]The company later credited owners of the affected games with a free download, and there has been no further downtime.[72]
In 2011, comedianLouis C.K.released hisconcert filmLive at the Beacon Theateras an inexpensive (US$5), DRM-free download. The only attempt to deter unlicensed copies was a letter emphasizing the lack of corporate involvement and direct relationship between artist and viewer. The film was a commercial success, turning a profit within 12 hours of its release. The artist suggested that piracy rates were lower than normal as a result, making the release an important case study for the digital marketplace.[73][74][75]
In 2012, theEU Court of Justiceruled in favor of reselling copyrighted games.[76]
In 2012, India implemented digital rights management protection.[77][78][79]
In 2012,webcomicDiesel Sweetiesreleased a DRM-free PDF e-book.[80][81][82]Its creator, Richard Stevens, followed this with a DRM-free iBook specifically for theiPad[83]that generated more than 10,000 downloads in three days.[84]That led Stevens to launch aKickstarterproject – "ebook stravaganza 3000" – to fund the conversion of 3,000 comics, written over 12 years, into a single "humongous" e-book to be released both for free and through the iBookstore; launched 8 February 2012, with the goal of raising $3,000 in 30 days. The "payment optional" DRM-free model in this case was adopted on Stevens' view that "there is a class of webcomics reader who would prefer to read in large chunks and, even better, would be willing to spend a little money on it."[84]
In February 2012,Double Fineasked forcrowdfundingfor an upcoming video game,Double Fine Adventure, onKickstarterand offered the game DRM-free for backers. This project exceeded its original goal of $400,000 in 45 days, raising in excess of $2 million.[85]Crowdfunding acted as apre-orderor alternatively as asubscription. After the success ofDouble Fine Adventure, many games were crowd-funded and many offered a DRM-free version.[86][87][88]
Websites – such aslibrary.nu(shut down by court order on 15 February 2012), BookFi,BookFinder,Library Genesis, andSci-Hub– allowed e-book downloading by violating copyright.[89][90][91][92]
As of 2013, other developers, such as Blizzard Entertainment, have put most of the game logic on the server side, handled by the game maker's servers. Blizzard uses this strategy for its game Diablo III, and Electronic Arts used the same strategy with their reboot of SimCity, the necessity of which has been questioned.[93]
In 2014, the EU Court of Justice ruled that circumventing DRM on game devices was legal under some circumstances.[94][95]
In 2014, digital comic distributor Comixology allowed rights holders to provide the option of DRM-free downloads. Publishers that allow this include Dynamite Entertainment, Image Comics, Thrillbent, Top Shelf Productions, and Zenescope Entertainment.[96]
In February 2022, Comixology, by then owned by Amazon, ended the option of DRM-free downloads for all comics, although any comics purchased before that date retain the option to be downloaded without DRM.[97][98]
A product key, typically an alphanumeric string, can represent a license to a particular copy of software. During installation or software launch, the user is asked to enter the key; if the key passes the program's internal validity checks, it is accepted and the user can continue. Product keys can be combined with other DRM practices (such as online "activation") to prevent cracking the software to run without a product key, or using a keygen to generate acceptable keys.
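A minimal sketch of such a self-checking key scheme (the key format, function names, and secret are all hypothetical; real vendors use proprietary algorithms): the issuer binds a short checksum to the key's payload, and the validator recomputes that checksum offline.

```python
import hashlib

SECRET = b"vendor-secret"   # hypothetical; a real validator would hide or obfuscate this


def checksum(payload: str) -> str:
    """Derive a 4-character checksum bound to the payload and the vendor secret."""
    return hashlib.sha256(SECRET + payload.encode()).hexdigest()[:4].upper()


def make_key(payload: str) -> str:
    """Issue a key: a 12-character payload split into groups, plus the checksum."""
    return "-".join([payload[i:i + 4] for i in range(0, 12, 4)] + [checksum(payload)])


def key_is_valid(key: str) -> bool:
    """Recompute the checksum from the payload; reject malformed or guessed keys."""
    parts = key.split("-")
    if len(parts) != 4:
        return False
    payload, check = "".join(parts[:3]), parts[3]
    return len(payload) == 12 and checksum(payload) == check
```

Because validation happens offline, anyone who recovers the secret from the binary can write a keygen, which is why the text above notes that product keys are often paired with online activation.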
DRM can limit the number of devices on which a legal user can install content.[99] Such restrictions typically allow three to five devices, which affects users who own more devices than the limit. Some schemes allow one device to be replaced with another; without this, software and hardware upgrades may require an additional purchase.
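The device-limit mechanism can be sketched as a per-license registry held by an activation server (a simplified illustration; the class and method names are invented):

```python
class ActivationServer:
    """Toy per-license device registry; not any real vendor's service."""

    def __init__(self, max_devices: int = 3):
        self.max_devices = max_devices
        self.registry: dict[str, set[str]] = {}   # license key -> activated device IDs

    def activate(self, license_key: str, device_id: str) -> bool:
        devices = self.registry.setdefault(license_key, set())
        if device_id in devices:
            return True                    # already activated: idempotent
        if len(devices) >= self.max_devices:
            return False                   # limit reached; a slot must be freed first
        devices.add(device_id)
        return True

    def deactivate(self, license_key: str, device_id: str) -> None:
        """Free a slot, e.g. when a device is replaced."""
        self.registry.get(license_key, set()).discard(device_id)
```

The `deactivate` path is the "replace one device with another" concession described above; schemes without it force the additional purchase.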
Always-on DRM checks and rechecks authorization while the content is in use by interacting with a server operated by the copyright holder. In some cases, only part of the content is actually installed, while the rest is downloaded dynamically during use.
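The always-on pattern can be illustrated with a client that rechecks authorization on a fixed interval, with the license server reduced to a callable (all names here are hypothetical):

```python
class AlwaysOnClient:
    """Sketch of always-on DRM: playback continues only while the license
    server keeps answering; the server is simulated by a callable."""

    def __init__(self, ask_server, recheck_interval: float = 30.0):
        self.ask_server = ask_server              # returns True if still authorized
        self.recheck_interval = recheck_interval  # seconds between server checks
        self.last_ok = None                       # time of the last successful check

    def may_play(self, now: float) -> bool:
        # Recheck when the window has elapsed (or on first use).
        if self.last_ok is None or now - self.last_ok >= self.recheck_interval:
            if not self.ask_server():
                return False              # server says no, or is unreachable: stop
            self.last_ok = now
        return True
```

The same structure explains the criticism noted later in the article: an outage or server shutdown makes `ask_server` fail, and the content stops working for legitimate users.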
Encryption alters content so that it cannot be used without first being decrypted.[99] Encryption can ensure that other restriction measures cannot be bypassed by modifying software, so DRM systems typically rely on encryption in addition to other techniques.
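As an illustration of encryption as a DRM layer, the toy stream cipher below derives a keystream by hashing a key with a running counter and XORs it into the content. This is a teaching sketch only; deployed systems use vetted ciphers such as AES, and the real difficulty in DRM is keeping the key out of the attacker's hands, not the cipher itself.

```python
import hashlib


def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes by hashing the key with a counter
    (illustration only -- real systems use vetted ciphers such as AES-CTR)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])


def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```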
Microsoft PlayReady prevents illicit copying of multimedia and other files.[100]
Restrictions can be applied to electronic books and documents in order to prevent copying, printing, forwarding, and the creation of backup copies. This is common for both e-publishers and enterprise Information Rights Management. It typically integrates with content management system software.[101]
While some commentators claim that DRM complicates e-book publishing,[102] it has been used by organizations such as the British Library in its secure electronic delivery service to permit worldwide access to rare documents which, for legal reasons, were previously only available to authorized individuals actually visiting the Library's document centre.[103][104][105]
Four main e-book DRM schemes are in common use, from Adobe, Amazon, Apple, and the Marlin Trust Management Organization (MTMO).
Windows Vista contains a DRM system called Protected Media Path, which contains the Protected Video Path (PVP).[107] PVP tries to stop DRM-restricted content from playing while unsigned software is running, in order to prevent the unsigned software from accessing the content. Additionally, PVP can encrypt information during transmission to the monitor or the graphics card, which makes it more difficult to make unauthorized recordings.
Bohemia Interactive have used a form of this technology since Operation Flashpoint: Cold War Crisis, wherein if the game copy is suspected of being unauthorized, annoyances like guns losing their accuracy or the player turning into a bird are introduced.[108] Croteam's Serious Sam 3: BFE causes a special invincible foe in the game to appear and constantly attack the player until they are killed.[109][110]
Regional lockout (or region coding) prevents the use of a certain product or service, except in a specific region or territory. Lockout may be enforced through physical means, through technological means such as inspecting the user's IP address or using an identifying code, or through unintentional means introduced by devices that support only region-specific technologies (such as video formats, i.e., NTSC and PAL).
Digital watermarks can be steganographically embedded within audio or video data. They can be used for recording the copyright owner, the distribution chain, or identifying the purchaser. They are not complete DRM mechanisms in their own right, but are used as part of a system for copyright enforcement, such as helping provide evidence for legal purposes, rather than enforcing restrictions.[111]
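A toy least-significant-bit watermark over audio samples shows the embedding idea. Real watermarks use far more robust spread-spectrum or echo-hiding techniques, since LSB marks are destroyed by any re-encoding; this sketch only illustrates hiding bits in imperceptible signal changes.

```python
def embed_watermark(samples: list[int], bits: list[int]) -> list[int]:
    """Hide watermark bits in the least significant bit of successive samples
    (toy LSB scheme; each sample changes by at most 1, which is inaudible)."""
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out


def extract_watermark(samples: list[int], n_bits: int) -> list[int]:
    """Read the watermark back out of the first n_bits samples."""
    return [s & 1 for s in samples[:n_bits]]
```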
Some audio/video editing programs may distort, delete, or otherwise interfere with watermarks. Signal/modulator-carrier chromatography may separate watermarks from the recording or detect them as glitches. Additionally, comparison of two separately obtained copies of audio using basic algorithms can reveal watermarks.[citation needed]
Sometimes, metadata is included in purchased media which records information such as the purchaser's name, account information, or email address. Also included may be the file's publisher, author, creation date, download date, and various notes. This information is not embedded in the content, as a watermark is. It is kept separate from the content, but within the file or stream.
As an example, metadata is used in media purchased from iTunes for DRM-free as well as DRM-restricted content. This information is included as MPEG standard metadata.[112][113]
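Because such metadata lives in the container rather than in the media stream itself, it can be located by walking the file's structure. The sketch below parses the top-level boxes of an ISO/MPEG-4 container from a byte string (a simplified reader: it ignores 64-bit box sizes and the nested boxes under `moov` where iTunes-style metadata actually resides).

```python
import struct


def list_boxes(data: bytes):
    """Walk top-level ISO/MPEG-4 boxes, yielding (type, size) pairs.
    Each box starts with a 4-byte big-endian size and a 4-byte type code."""
    offset = 0
    while offset + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break              # 64-bit or malformed sizes are not handled in this sketch
        yield box_type.decode("ascii", "replace"), size
        offset += size
```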
US cable television set-top boxes require a specific piece of hardware to operate. The CableCard standard is used to restrict content to services to which the customer is subscribed. Content has an embedded broadcast flag that the card examines to decide whether the content can be viewed by a specific user.
In addition, platforms such as Steam may include DRM mechanisms. Most of the mechanisms above are copy protection mechanisms rather than DRM mechanisms per se.
The World Intellectual Property Organization supports the WIPO Copyright Treaty (WCT), which requires nations to enact laws against DRM circumvention. The WIPO Internet Treaties do not mandate criminal sanctions, merely requiring "effective legal remedies".[114]
Australia prohibits circumvention of "access control technical protection measures" in Section 116 of the Copyright Act. The law currently imposes penalties for circumvention of such measures[115] as well as the manufacturing[116] and distribution[117] of tools to enable it.
DRM may be legally circumvented under a few distinct circumstances which are named as exceptions in the law:
A person circumventing the access control bears the burden of proof that one of these exceptions applies.
Penalties for violation of the anti-circumvention laws include an injunction, monetary damages, and destruction of enabling devices.[118]
China's copyright law was revised in 2001 and included a prohibition on "intentionally circumventing or destroying the technological measures taken by a right holder for protecting the copyright or copyright-related rights in his work, sound recording or video recording, without the permission of the copyright owner, or the owner of the copyright-related rights". However, the Chinese government faced backlash from Nintendo over the heavy burden placed on law enforcement action against circumvention devices, with Nintendo stating that the police viewed game copiers only as infringing its trademark, not as infringing copyright. In response, Nintendo obtained copyright registration for its software in 2013 to make law enforcement against game copiers and other circumvention devices easier.[119]
The EU operates under its Information Society Directive, its implementation of the WIPO treaties. The European Parliament then directed member states to outlaw violation of international copyright for commercial purposes. Punishments range from fines to imprisonment. It excluded patent rights and copying for personal, non-commercial purposes. Copyrighted games can be resold.[76] Circumventing DRM on game devices is legal under some circumstances; protections cover only technological measures that interfere with prohibited actions.[94][95]
India acceded to the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty on July 4, 2018,[120] after a 2012 amendment to the Copyright Act criminalized the circumvention of technical protections. Fair use is not explicitly addressed, but the anti-circumvention provisions do not prohibit circumventing for non-infringing purposes.[77][78][79]
Israel is not a signatory to the WIPO Copyright Treaty. Israeli law does not expressly prohibit the circumvention of technological protection measures.[121]
Japan outlawed circumvention of technological protection measures on June 23, 1999 through an amendment of its 1970 copyright law.[122] The private copying exception does not apply if the copy has become available due to circumvention of TPMs,[123] and circumvention of a TPM is deemed copyright infringement. However, circumvention is allowed for research purposes or if it otherwise does not harm the rightsholder's interests.[124]
Pakistan is not a signatory to the WIPO Copyright Treaty or the WIPO Performances and Phonograms Treaty. Pakistani law does not criminalize the circumvention of technological protection measures.[125]
As of January 2022, Pakistan's Intellectual Property Office intended to accede to the WIPO Copyright Treaty and WIPO Performances and Phonograms Treaty. However, there has been no major progress toward accession,[126] and the timeline for enacting amendments to the Copyright Ordinance is unclear.[127] As of February 2023, Pakistan's Intellectual Property Office was finalizing draft amendments to its Copyright Ordinance.[128]
US protections are governed by the Digital Millennium Copyright Act (DMCA). It criminalizes the production and dissemination of technology that lets users circumvent copy restrictions. Reverse engineering is expressly permitted, providing a safe harbor where circumvention is necessary to interoperate with other software.
Open-source software that decrypts protected content is not prohibited per se. Decryption done for the purpose of achieving interoperability of open-source operating systems with proprietary systems is protected. Dissemination of such software for the purpose of violating or encouraging others to violate copyrights is prohibited.
The DMCA has been largely ineffective.[129] Circumvention software is widely available. However, those who wish to preserve DRM systems have attempted to use the Act to restrict the distribution and development of such software, as in the case of DeCSS. The DMCA contains an exception for research, although the exception is subject to qualifiers that have created uncertainty in that community.
Cryptanalytic research may violate the DMCA, although this is unresolved.
DRM faces widespread opposition. John Walker[130] and Richard Stallman are notable critics.[131][132] Stallman has claimed that using the word "rights" is misleading and suggests that the word "restrictions", as in "Digital Restrictions Management", replace it.[133] This terminology has been adopted by other writers and critics.[134][135][136]
Other prominent critics include Ross Anderson, who headed a British organization that opposes DRM and similar efforts in the UK and elsewhere, and Cory Doctorow.[137] The EFF and organizations such as FreeCulture.org are opposed to DRM.[22] The Foundation for a Free Information Infrastructure criticized DRM's effect as a trade barrier from a free market perspective.[138]
Bruce Schneier argues that digital copy prevention is futile: "What the entertainment industry is trying to do is to use technology to contradict that natural law. They want a practical way to make copying hard enough to save their existing business. But they are doomed to fail."[139] He described trying to make digital files uncopyable as like "trying to make water not wet".[140]
The creators of StarForce stated that "The purpose of copy protection is not making the game uncrackable – it is impossible."[141]
Bill Gates spoke about DRM at CES 2006, saying that DRM causes problems for legitimate consumers.[142]
The Norwegian consumer rights organization "Forbrukerrådet" complained to Apple in 2007 about the company's use of DRM, accusing it of unlawfully restricting users' access to their music and videos, and of using EULAs that conflict with Norwegian consumer legislation. The complaint was supported by consumers' ombudsmen in Sweden and Denmark, and was reviewed in the EU in 2014. The United States Federal Trade Commission held hearings in March 2009 to review disclosure of DRM limitations to customers' use of media products.[143]
Valve president Gabe Newell stated, "most DRM strategies are just dumb" because they only decrease the value of a game in the consumer's eyes. Newell suggested that the goal should instead be "[creating] greater value for customers through service value". Valve operates Steam, an online store for PC games, as well as a social networking service and a DRM platform.[144]
At the 2012 Game Developers Conference, the CEO of CD Projekt Red, Marcin Iwinski, announced that the company would not use DRM. Iwinski stated of DRM, "It's just over-complicating things... the game... is cracked in two hours." Iwinski added, "DRM does not protect your game. If there are examples that it does, then people maybe should consider it, but then there are complications with legit users."[145]
The Association for Computing Machinery and the Institute of Electrical and Electronics Engineers opposed DRM, naming AACS as a technology "most likely to fail" in an issue of IEEE Spectrum.[146]
The GNU General Public License version 3, as released by the Free Software Foundation, has a provision that "strips" DRM of its legal value, so people can break the DRM on GPL software without breaking laws such as the DMCA. In May 2006, the FSF launched a "Defective by Design" campaign against DRM.[147][148]
Creative Commons provides licensing options that encourage creators to work without the use of DRM.[149] Creative Commons licenses have anti-DRM clauses, making the use of DRM by a licensee a breach of the licenses' Baseline Rights.[150]
Many publishers and artists label their works "DRM-free". Major companies that have done so include Apple, GOG.com, Tor Books, and Vimeo on Demand. Comixology had DRM-free works available for sale until 2022, when its parent company, Amazon, removed the option to buy DRM-free works as part of a migration to Amazon's website, although previous purchases remained DRM-free.[151]
Many DRM systems require online authentication. Whenever the server goes down, or a territory experiences an Internet outage, people are locked out of registering or using the material.[152] This is especially true for products that require a persistent online connection, where, for example, a successful DDoS attack on the server essentially makes the material unusable.
Compact discs (CDs) with DRM schemes are not standards-compliant, and are labeled CD-ROMs. CD-ROMs cannot be played on all CD players or personal computers.[153]
Certain DRM systems have been associated with reduced performance: some games implementing Denuvo Anti-Tamper performed better without DRM.[154][155] However, in March 2018, PC Gamer tested Final Fantasy XV for the performance effects of Denuvo, and found that it caused no negative gameplay impact beyond a slight increase in loading time.[156]
DRM copy-prevention schemes can never be wholly secure since the logic needed to decrypt the content is present either in software or hardware and implicitly can be hacked. An attacker can extract this information, decrypt and copy the content, bypassing the DRM.[137]
Satellite and cable systems distribute their content widely and rely on hardware DRM systems. Such systems can be hacked by reverse engineering the protection scheme.
Audio and visual material (excluding interactive materials, e.g., video games) are subject to the analog hole: in order to view the material, the digital signal must be turned into an analog signal. Post-conversion, the material can then be copied and reconverted to a digital format.
The analog hole cannot be filled without externally imposed restrictions, such as legal regulations, because the vulnerability is inherent to all analog presentation.[157] The conversion from digital to analog and back reduces recording quality. The HDCP attempt to plug the analog hole was largely ineffective.[158][159]
DRM opponents argue that it violates private property rights and restricts a range of normal and legal user activities. A DRM component such as that found on a digital audio player restricts how it acts with regard to certain content, overriding the user's wishes (for example, preventing the user from copying a copyrighted song to CD as part of a compilation). Doctorow described this as "the right to make up your own copyright laws".[160]
Windows Vista disabled or degraded playback of content that used a Protected Media Path.[161] DRM restricts the right to make personal copies, provisions for lending copies to friends, provisions for service discontinuance, hardware agnosticism, software and operating system agnosticism,[162] lending library use, customer protections against contract amendments by the publisher, and whether content can pass to the owner's heirs.[163]
When standards and formats change, DRM-restricted content may become obsolete.
When a company undergoes business changes or bankruptcy, its previous services may become unavailable. Examples include MSN Music,[164] Yahoo! Music Store,[165] Adobe Content Server 3 for Adobe PDF,[166] and Acetrax Video on Demand.[167]
DRM laws are widely flouted: according to Australia Official Music Chart Survey, copyright infringements from all causes are practised by millions of people.[168]According to the EFF, "in an effort to attract customers, these music services try to obscure the restrictions they impose on you with clever marketing."[169]
Jeff Raikes, ex-president of the Microsoft Business Division, stated: "If they're going to pirate somebody, we want it to be us rather than somebody else".[170] An analogous argument was made in an early paper by Kathleen Conner and Richard Rummelt.[171] A subsequent study of digital rights management for e-books by Gal Oestreicher-Singer and Arun Sundararajan showed that relaxing some forms of DRM can be beneficial to rights holders because the losses from piracy are outweighed by the increase in value to legal buyers. Even if DRM were unbreakable, pirates still might not be willing to purchase, so sales might not increase.[172]
Piracy can be beneficial to some content providers by increasing consumer awareness and by spreading and popularizing content. This can also increase revenues via other media, such as live performances.
Mathematical models suggest that DRM schemes can fail to do their job on multiple levels.[173]The biggest failure is that the burden that DRM poses on a legitimate customer reduces the customer's willingness to buy. An ideal DRM would not inconvenience legal buyers. The mathematical models are strictly applicable to the music industry.
Several business models offer DRM alternatives.[174]
Streaming services have created profitable business models by signing users to monthly subscriptions in return for access to the service's library. This model has worked for music (such as Spotify, Apple Music, etc.) and video (such as Netflix, Disney+, Hulu, etc.).
Accessing a pirated copy can be illegal and inconvenient, so businesses that charge acceptable fees for legal access tend to attract customers. A business model that dissuades illegal file sharing is to make legal content downloading easy and cheap. Pirate websites often host malware which attaches itself to the files served.[175] If content is provided on legitimate sites and is reasonably priced, consumers are more likely to purchase media legally.[174]
Crowdfundinghas been used as a publishing model for digital content.[85]
Many artists give away individual tracks to create awareness for a subsequent album.[174]
The Artistic Freedom Voucher (AFV) introduced by Dean Baker is a way for consumers to support "creative and artistic work". In this system, each consumer receives a refundable tax credit of $100 to give to any artist of creative work. To restrict fraud, artists must register with the government. The voucher prohibits any artist who receives the benefits from copyrighting their material for a certain length of time. Consumers would be able to obtain music easily for a certain amount of time, and each consumer would decide which artists receive the $100. The money can be given either to one artist or to many; the distribution is up to the consumer.[176]
|
https://en.wikipedia.org/wiki/Digital_rights_management
|
Keynote is a presentation software application developed as a part of the iWork productivity suite by Apple Inc.[3] Version 14 of Keynote for Mac, the latest major update, was released in April 2024. Keynote is available for a range of Apple devices across macOS, iOS, and iPadOS.[4]
Keynote began as a computer program for Apple CEO Steve Jobs to use in creating the presentations for the Macworld Conference and Expo and other Apple keynote events.[5] Before using Keynote, Jobs had used Concurrence, from Lighthouse Design, a similar product which ran on the NeXTSTEP and OPENSTEP platforms.[6]
The program was first sold publicly as Keynote 1.0 in 2003, competing against existing presentation software, most notably Microsoft PowerPoint.[7]
In 2005, Apple began selling Keynote 2.0 in conjunction with Pages, a new word processing and page layout application, in a software package called iWork. At the Macworld Conference & Expo 2006, Apple released iWork '06 with updated versions of Keynote 3.0 and Pages 2.0. In addition to official HD compatibility, Keynote 3 added new features, including group scaling, 3D charts, multi-column text boxes, auto bullets in any text field, image adjustments, and free-form masking tools. In addition, Keynote features three-dimensional transitions, such as a rotating cube or a simple flip of the slide.
In the fall of 2007, Apple released Keynote 4.0 in iWork '08, along with Pages 3.0 and the new Numbers spreadsheet application.
On October 23, 2013, Apple redesigned Keynote with version 6.0, and made it free for anyone with a new iOS device or a recently purchased Mac.[8]
A version of Keynote for visionOS was released on February 2, 2024, alongside the launch of the Apple Vision Pro. The app is largely based upon the iPadOS version of the program, and is currently the only component of the iWork suite to offer a native visionOS app.[9]
Keynote Remote was an iOS application that controlled Keynote presentations from an iPhone, iPod Touch, or iPad over a Wi-Fi network or Bluetooth connection, and was released through the App Store.[11] With the release of Keynote for iOS, the app was integrated into the new Keynote application and the stand-alone app was withdrawn.[12]
Streamlined in-app notifications inform you when a person joins a collaborative presentation for the first time.
Preserve file format and full quality when adding HEIC photos taken on iPhone or iPad.
Hold the Command key to select non-contiguous words, sentences or paragraphs.
Improved compatibility for slide transitions when importing and exporting Microsoft PowerPoint files.
Additional stability and performance improvements.
Control where the presenter display appears when rehearsing a presentation with multiple displays connected.[32]
Improved compatibility with PowerPoint by allowing Shortcuts to now export Keynote presentations in PowerPoint format.
Improved compatibility of copied and pasted objects between Keynote and Freeform.[34]
|
https://en.wikipedia.org/wiki/Keynote_(presentation_software)
|
BugMeNot is an Internet service that provides usernames and passwords allowing Internet users to bypass mandatory free registration on websites. It was started in August 2003 by an anonymous person, later revealed to be Guy King,[1] and allowed Internet users to access websites that have registration walls (for instance, that of The New York Times) while bypassing the requirement of compulsory registration. This came in response to the increasing number of websites that request such registration, which many Internet users find to be an annoyance and a potential source of email spam.[2]
BugMeNot allows users of the service to add new accounts for sites with free registration, and encourages them to use disposable email address services to create such accounts. However, it does not allow them to add accounts for paid websites, as this could potentially lead to credit card fraud.[3] BugMeNot also claims to remove accounts for any website that requests it not provide accounts for non-registered users.
To help make access to the service easier, BugMeNot hosts a bookmarklet that can be used with any browser to automatically find a usable account from the service. It also hosts extensions for the web browsers Mozilla Firefox (but not yet for Firefox Quantum), Internet Explorer, and Google Chrome (the extensions were created by Eric Hamiter, with Dmytri Kleiner and Dean Wilson, respectively).[citation needed] There are also implementations in the form of a BugMeNot Opera widget, or UserJS scripts along with buttons, which makes it fully browser-integrated. An Android application is also available.[4]
BugMeNot provides an option for site owners to block their site from the BugMeNot database, if they match one or more of the following criteria:[5]
No option is provided for users to request removing a block if a site ceases to meet the blocking criteria or has never met them in the first place.
Site blocking can be circumvented by BugMeNot users by publishing usernames and passwords under a similar, but different, domain name to which they apply. For example, the owners of the domain abc.def.com might request a block to be put in place, but this will not prevent users uploading access information under the name of def.abc.com. Since one domain owner cannot demand that another domain be blocked, the information remains accessible, provided that BugMeNot users tacitly agree that def.abc.com in fact refers to abc.def.com.[original research?] For example, Wikipedia logins are in the database under wikipedia.net because wikipedia.com and wikipedia.org have been banned under the first criterion.[6]
Nearly a year after it was created, BugMeNot was shut down temporarily by its service provider at that time, HostGator. The site's creator claimed BugMeNot's host was pressured by websites to shut them down, though HostGator claimed that the BugMeNot site was repeatedly crashing their servers.[7]
The BugMeNot domain was transferred briefly to another hosting company, dissidenthosting.com, but before the site was set up, it began to redirect visitors to web pages belonging to racist groups, without the knowledge or consent of the site's owner. BugMeNot moved again, to NearlyFreeSpeech.NET. BugMeNot's move to this provider, which also hosts a number of highly controversial sites, prompted BugMeNot's creator to say, "Personally, I don't care if I'm sharing a server with neo-Nazis. I might not agree with what they have to say, but the whole thing about freedom of speech is that people are free to speak."[8]
Shortly after BugMeNot returned, reports surfaced that some news sites had begun to attempt to block accounts posted on BugMeNot, though the extent and effectiveness of such efforts, as well as compliance with BugMeNot's Terms of Use,[9]are not known.
The operators of BugMeNot expanded the "MeNot" network in October 2006 with the addition of RetailMeNot – a service for finding and sharing online coupon codes. Users can add coupons they have found through any method, as well as a description of the coupon and an expiration date. Users can also scan in printed coupons and upload them for others to print.
|
https://en.wikipedia.org/wiki/BugMeNot
|
In decision making and psychology, decision fatigue refers to the deteriorating quality of decisions made by an individual after a long session of decision making.[1][2] It is now understood as one of the causes of irrational trade-offs in decision making.[2] Decision fatigue may also lead consumers to make poor choices with their purchases.
There is a paradox in that "people who lack choices seem to want them and often will fight for them", yet at the same time, "people find that making many choices can be [psychologically] aversive."[3]
For example, major politicians and businessmen such as former United States President Barack Obama, Steve Jobs, and Mark Zuckerberg have been known to reduce their everyday clothing down to one or two outfits in order to limit the number of decisions they make in a day.[4]
Decision fatigue is a phrase popularised by John Tierney, and is the tendency for people's decision making to become impaired as a result of having recently made multiple decisions.[5]
Decision fatigue has been hypothesised to be a symptom, or a result, of ego depletion.[6] It differs from mental fatigue, which describes the psychobiological state that results from a prolonged duration of demanding cognitive tasks, such as multi-tasking or switching between various tasks.[7]
Some psychologists and economists use the term to describe impairments in decision making resulting specifically from a long duration of having to make decisions.[8] Others view factors such as the complexity of the decisions being made, repeated acts of self-regulation,[9] physiological fatigue, and sleep deprivation[10] as implicated in the emergence of decision fatigue.
Decision fatigue is thought to be a result of unconscious, psychobiological processes, and is a reaction to sustained cognitive, emotional, and decisional load, as opposed to a trait or deficiency.[6] Decision fatigue is an emergent construct[6] that has several possible applications in the fields of healthcare psychology,[11] behavioural economics, and healthcare policy.
Behavioural attributes of decision fatigue tend to reflect an underlying state of ego depletion and may symbolise an unconscious method whereby individuals adapt their behaviour to prevent further depletion. Individuals experiencing decision fatigue are more prone to avoidant behaviours, such as procrastination; Sjastad and Baumeister demonstrated that decision-fatigued individuals are less willing to engage in planning, and were more avoidant, compared to controls.[12] Decision fatigue may also induce passive behaviours, such as inaction and decision avoidance.[13] Furthermore, individuals experiencing decision fatigue may display less persistence when putting effort into decision making, and thus may be prone to choosing the 'default' option.[14] They may also be prone to impulsive, erratic or short-sighted behaviour.[15]
Decision fatigue may also alter cognitive functioning. Some studies suggest that decision fatigue impairs cognitive abilities, especially executive functioning and reasoning abilities. For example, Kathleen Vohs and Roy Baumeister found that the more people had made frequent and deliberate choices, the less able they were to persist on a math task, regardless of how tired they were or how long they spent on the task.[16]
There is evidence to suggest that decision fatigue may impact physiological endurance and self-control. This was demonstrated in a series of studies which showed that participants who had made a long series of choices were less able to tolerate a bad-tasting drink, and were less able to tolerate pain, compared to controls.[17] This indicates that decision fatigue impairs physiological as well as cognitive self-control.
Trade-offs, where either of two choices have positive and negative elements, are an advanced and energy-consuming form of decision making. A person who is mentally depleted becomes reluctant to make trade-offs, or makes very poor choices.[1] Jonathan Levav at Stanford University designed experiments showing how decision fatigue can leave a person vulnerable to sales and marketing strategies designed to time the sale.[18] "Decision fatigue helps explain why ordinarily sensible people...can't resist the dealer's offer to rustproof their new car."[19]
Dean Spears ofPrinceton Universityhas argued that decision fatigue caused by the constant need to make financial trade-offs is a major factor in trapping people in poverty.[20]Given that financial situations force the poor to make so many trade-offs, they are left with lessmental energyfor other activities. "If a trip to the supermarket induces more decision fatigue in the poor than in the rich – because each purchase requires more mental trade-offs – by the time they reach the cash register, they'll have less willpower left to resist the Mars bars and Skittles. Not for nothing are these items called impulse purchases."[1]
Decision fatigue can lead people to avoid decisions entirely, a phenomenon called "decision avoidance".[21][22][3] In the formal approach to decision quality management, specific techniques have been devised to help managers cope with decision fatigue.[23] Other forms of decision avoidance used to bypass trade-offs and the emotional costs of decision making can include selecting either the default or the status quo option, where these are available.[21]
Decision fatigue can influence irrational impulse purchases at supermarkets. During a trip to the supermarket, trade-off decisions regarding prices and promotions can produce decision fatigue, hence by the time the shopper reaches the cash register, less willpower remains to resist impulse purchases of candy and sugared items. Sweet snacks are usually featured at the cash register because many shoppers have decision fatigue by the time they get there. Florida State University social psychologist Roy Baumeister has also found that it is directly tied to low glucose levels, and that replenishing them restores the ability to make effective decisions. This has been offered as an explanation for why poor shoppers are more likely to eat during their trips.[1]
The "process of choosing may itself drain some of the self's precious resources, thereby leaving the executive function less capable of carrying out its other activities. Decision fatigue can therefore impair self-regulation".[3]"[S]ome degree of failure at self regulation" is at the root of "[m]ost major personal and social problems", such as debt, "underachievement at work and school" and lack of exercise.[24]
Experiments have shown the interrelationship between decision fatigue and ego depletion, whereby a person's ability for self-control against impulses decreases in the face of decision fatigue.[25]
Baumeister and Vohs have suggested that the disastrous failure of men in high office to control impulses in their private lives may at times be attributed to decision fatigue stemming from the burden of day-to-day decision making.[25] Similarly, Tierney notes that "C.F.O.s [are] prone to disastrous dalliances late in the evening", after a long day of decision-making.[19]
With regard to self-regulation in legal settings, one research study found that the decisions judges make are strongly influenced by how long it has been since their last break. "We find that the percentage of favorable rulings drops gradually from ≈65% to nearly zero within each decision session and returns abruptly to ≈65% after a break."[26]
Several studies have indicated that decision fatigue can increase reliance on mental shortcuts and biases.
A study by Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso from Columbia Business School showed that the percentage of favourable rulings by judges on parole boards in a prison dropped gradually (from around 65% to almost 0%) within each 'decision session' recorded, but would return to around 65% after a break.[27]This suggests that judicial rulings were increasingly determined by biased assumptions as decision fatigue increased.
Another demonstration of the relationship between decision fatigue and increased susceptibility to biased decision making was that of journal editors reviewing manuscripts. This study found that when the number of manuscripts discussed per meeting increased from 10–19 to over 20, the rate of rejection increased from 38% to 44%. When the number of manuscripts an editor had to read in a day increased from 1–2 to 3 or more, the number of manuscripts rejected without peer review increased by 6%.[28] This indicates that the more decision fatigue editors experienced (whether alone or working in collaboration), the greater their bias towards rejecting manuscripts.
Decision fatigue increases consumers' reliance on cognitive biases, such as anchoring and framing effects, making them more prone to quick, biased choices under conditions of mental exhaustion.
Individuals experiencing decision fatigue may feel a greater degree of decisional conflict. Decisional conflict is a state wherein an individual is uncertain about which course of action to take when the decision between various options involves regret, risk or challenge to their values.[29]Decisional conflict is likely to arise from decision fatigue because decision fatigue impairs one's ability to make decisions efficiently, makes them prone to over-reliance on heuristics and biases, reduces one's ability to make trade-offs, and can even lead to avoiding making decisions.
Decision fatigue might also increase levels of decisional regret.[30]If an individual is aware that their decision-making abilities are impaired, or if they are experiencing decisional conflict as a result of decision fatigue, they may anticipate the regret they can experience as a result of post-decisional feedback on the outcomes they didn't choose.[31]This anticipation of regret may influence decision making, and can further impair the individual's ability to make rational decisions.
This relationship between decisional fatigue, regret, and conflict was demonstrated in a recent study that aimed to find the impacts of decision fatigue on nurses working during the COVID-19 pandemic. Researchers concluded that decision fatigue could be a determinant of psychological outcomes among nurses, and clinical outcomes among patients and their family members.[32]Additionally, the decisional conflict and regret that arises from decision fatigue may impact the mental health and the decision making ability of healthcare workers, and those in occupations that demand long decision-making sessions.
Several psychologists have challenged the effects of ego depletion, such as decision fatigue, on multiple grounds.[33] One replication effort including 23 laboratories did not find an ego depletion effect to be significantly different from zero.[34] This indicates that existing evidence may not be sufficient to support the existence of an ego depletion effect. Furthermore, even when an ego depletion effect does replicate, there is substantive heterogeneity in the effect size in the literature and the average effect size is small.[35] If there is little evidence for ego depletion, the existence of decision fatigue likewise comes into question.
Stanford University Professor of PsychologyCarol Dweckfound "that while decision fatigue does occur, it primarily affects those who believe that willpower runs out quickly." She states that "people get fatigued or depleted after a taxing task only when they believe that willpower is a limited resource, but not when they believe it's not so limited". She notes that "in some cases, the people who believe that willpower is not so limited actually perform better after a taxing task."[19]
|
https://en.wikipedia.org/wiki/Decision_fatigue
|
A security question is a form of shared secret[1] used as an authenticator. It is commonly used by banks, cable companies and wireless providers as an extra security layer.
Financial institutions have used questions to authenticate customers since at least the early 20th century. In a 1906 speech at a meeting of a section of the American Bankers Association, Baltimore banker William M. Hayden described his institution's use of security questions as a supplement to customer signature records. He described the signature cards used in opening new accounts, which had spaces for the customer's birthplace, "residence," mother's maiden name, occupation and age.[2]
Hayden noted that some of these items were often left blank and that the "residence" information was used primarily to contact the customer, but the mother's maiden name was useful as a "strong test of identity." Although he observed that it was rare for someone outside the customer's family to try to withdraw money from a customer account, he said that the mother's maiden name was useful in verification because it was rarely known outside the family and that even the people opening accounts were "often unprepared for this question."[2] Similarly, under modern practice, a credit card provider could request a customer's mother's maiden name before issuing a replacement for a lost card.[1]
In the 2000s, security questions came into widespread use on the Internet.[1] As a form of self-service password reset, security questions have reduced information technology help desk costs.[1] By allowing the use of security questions online, they are rendered vulnerable to keystroke logging and brute-force guessing attacks,[3] as well as phishing.[4] In addition, whereas a human customer service representative may be able to cope with inexact security answers appropriately, computers are less adept. As such, users must remember the exact spelling and sometimes even case of the answers they provide, which poses the threat that more answers will be written down, exposing them to physical theft.
Due to the commonplace nature of social media, many of the older traditional security questions are no longer useful or secure. A security question is just another form of password mechanism. Therefore, a security question should not be shared with anyone else, or include any information readily available on social media websites, while remaining simple, memorable, difficult to guess, and constant over time. Understanding that not every question will work for everyone, RSA (a U.S. network security provider, a division of EMC Corporation) gives banks 150 questions to choose from.[1]
Many have questioned the usefulness of security questions.[5][6][7] Security specialist Bruce Schneier points out that since they are public facts about a person, they are easier for hackers to guess than passwords. Users who know this create fake answers to the questions, then forget the answers, thus defeating the purpose and creating an inconvenience not worth the investment.[8]
|
https://en.wikipedia.org/wiki/Security_question
|
A password policy is a set of rules designed to enhance computer security by encouraging users to employ strong passwords and use them properly. A password policy is often part of an organization's official regulations and may be taught as part of security awareness training. Either the password policy is merely advisory, or the computer systems force users to comply with it. Some governments have national authentication frameworks[1] that define requirements for user authentication to government services, including requirements for passwords.
The United States Department of Commerce's National Institute of Standards and Technology (NIST) has put out two standards for password policies which have been widely followed.
From 2004, NIST Special Publication 800-63, Appendix A,[2] advised people to use irregular capitalization, special characters, and at least one numeral. This was the advice most systems followed, and it was "baked into" a number of standards that businesses needed to follow.
However, in 2017 a major update changed this advice, particularly that forcing complexity and regular changes is now seen as bad practice.[3][4]: 5.1.1.2
The key points of these are:
NIST included a rationale for the new guidelines in its Appendix A.
Typical components of a password policy include:
Many policies require a minimum password length. Eight characters is typical but may not be appropriate.[6][7][8] Longer passwords are almost always more secure, but some systems impose a maximum length for compatibility with legacy systems.
Some policies suggest or impose requirements on what type of password a user can choose, such as:
Other systems create an initial password for the user but require them to change it to one of their own choosing within a short interval.
Password block lists are lists of passwords that are always blocked from use. Block lists contain passwords constructed of character combinations that otherwise meet company policy, but should no longer be used because they have been deemed insecure for one or more reasons, such as being easily guessed, following a common pattern, or public disclosure from previous data breaches. Common examples are Password1, Qwerty123, or Qaz123wsx.
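As a concrete illustration, a minimal block-list check might look like the following Python sketch; the list contents and the `check_policy` helper are invented for this example, not taken from any real product:

```python
# Hypothetical illustration of a password block-list check. The entries
# and rules are made up for this sketch, not drawn from a real policy.
BLOCK_LIST = {"password1", "qwerty123", "qaz123wsx"}

def is_blocked(candidate: str) -> bool:
    # Compare case-insensitively so "Password1" matches "password1".
    return candidate.lower() in BLOCK_LIST

def check_policy(candidate: str, min_length: int = 8) -> bool:
    # A candidate passes only if it meets the length rule and is not
    # on the block list of known-bad passwords.
    return len(candidate) >= min_length and not is_blocked(candidate)

print(check_policy("Password1"))                    # blocked despite its length
print(check_policy("correct horse battery staple"))
```

Real deployments typically source the block list from large breach corpora rather than a small hand-written set.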
Some policies require users to change passwords periodically, often every 90 or 180 days. The benefit of password expiration, however, is debatable.[9][10]Systems that implement such policies sometimes prevent users from picking a password too close to a previous selection.[11]
This policy can often backfire. Some users find it hard to devise "good" passwords that are also easy to remember, so if people are required to choose many passwords because they have to change them often, they end up using much weaker passwords; the policy also encourages users to write passwords down. Also, if the policy prevents a user from repeating a recent password, this requires that there is a database in existence of everyone's recent passwords (or theirhashes) instead of having the old ones erased from memory. Finally, users may change their password repeatedly within a few minutes, and then change back to the one they really want to use, circumventing the password change policy altogether.
The human aspects of passwords must also be considered. Unlike computers, human users cannot delete one memory and replace it with another. Consequently, frequently changing a memorized password is a strain on human memory, and most users resort to choosing a password that is relatively easy to guess (see password fatigue). Users are often advised to use mnemonic devices to remember complex passwords. However, if the password must be repeatedly changed, mnemonics are useless because the user would not remember which mnemonic to use. Furthermore, the use of mnemonics (leading to passwords such as "2BOrNot2B") makes the password easier to guess.
Administration factors can also be an issue. Users sometimes have older devices that require a password that was used before the password duration expired. In order to manage these older devices, users may have to resort to writing down all old passwords in case they need to log into an older device.
Requiring a very strong password and not requiring it be changed is often better.[12]However, this approach does have a major drawback: if an unauthorized person acquires a password and uses it without being detected, that person may have access for an indefinite period.
It is necessary to weigh these factors: the likelihood of someone guessing a password because it is weak, versus the likelihood of someone managing to steal, or otherwise acquire without guessing, a stronger password.
Bruce Schneier argues that "pretty much anything that can be remembered can be cracked", and recommends a scheme that uses passwords which will not appear in any dictionaries.[13]
Password policies may include progressive sanctions beginning with warnings and ending with possible loss of computer privileges or job termination. Where confidentiality is mandated by law, e.g. with classified information, a violation of password policy could be a criminal offense in some jurisdictions.[14] Some consider a convincing explanation of the importance of security to be more effective than threats of sanctions.
The level of password strength required depends, among other things, on how easy it is for an attacker to submit multiple guesses. Some systems limit the number of times a user can enter an incorrect password before some delay is imposed or the account is frozen. At the other extreme, some systems make available a specially hashed version of the password, so that anyone can check its validity. When this is done, an attacker can try passwords very rapidly, so much stronger passwords are necessary for reasonable security. (See password cracking and password length equation.) Stricter requirements are also appropriate for accounts with higher privileges, such as root or system administrator accounts.
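The gap between online and offline guessing can be made concrete with the usual keyspace arithmetic; the attack rate below is an assumed illustrative figure, not a benchmark of any real system:

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    # Entropy in bits of a password chosen uniformly at random:
    # log2(alphabet_size ** length) = length * log2(alphabet_size).
    return length * math.log2(alphabet_size)

# An 8-character lowercase password vs. a 12-character mixed-case+digit one.
weak = keyspace_bits(26, 8)     # ~37.6 bits
strong = keyspace_bits(62, 12)  # ~71.5 bits

# Time to exhaust each keyspace at an assumed offline rate of 1e10 guesses/s.
rate = 1e10
print(f"{weak:.1f} bits: {2**weak / rate:.0f} s to exhaust")
print(f"{strong:.1f} bits: {2**strong / rate / 3.15e7:.0f} years to exhaust")
```

With an online system limited to, say, 10 guesses before lockout, even the weak password survives; against an offline attack on a leaked hash, only the strong one does.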
Password policies are usually a tradeoff between theoretical security and the practicalities of human behavior. For example:
A 2010 examination of the password policies of 75 different websites concludes that security only partly explains more stringent policies: monopoly providers of a service, such as government sites, have more stringent policies than sites where consumers have choice (e.g. retail sites and banks). The study concludes that sites with more stringent policies "do not have greater security concerns, they are simply better insulated from the consequences from poor usability."[15]
Other approaches are available that are generally considered to be more secure than simple passwords. These include use of a security token or one-time password system, such as S/Key, or multi-factor authentication.[16] However, these systems heighten the tradeoff between security and convenience: according to Shuman Ghosemajumder, these systems all improve security, but come "at the cost of moving the burden to the end user."[17]
|
https://en.wikipedia.org/wiki/Password_policy
|
A cryptographically secure pseudorandom number generator (CSPRNG) or cryptographic pseudorandom number generator (CPRNG) is a pseudorandom number generator (PRNG) with properties that make it suitable for use in cryptography. It is also referred to as a cryptographic random number generator (CRNG).
Most cryptographic applications require random numbers, for example:
The "quality" of the randomness required for these applications varies. For example, creating a nonce in some protocols needs only uniqueness. On the other hand, the generation of a master key requires a higher quality, such as more entropy. And in the case of one-time pads, the information-theoretic guarantee of perfect secrecy only holds if the key material comes from a true random source with high entropy, and thus just any kind of pseudorandom number generator is insufficient.
Ideally, the generation of random numbers in CSPRNGs uses entropy obtained from a high-quality source, generally the operating system's randomness API. However, unexpected correlations have been found in several such ostensibly independent processes. From an information-theoretic point of view, the amount of randomness, the entropy that can be generated, is equal to the entropy provided by the system. But sometimes, in practical situations, numbers are needed with more randomness than the available entropy can provide. Also, the processes to extract randomness from a running system are slow in actual practice. In such instances, a CSPRNG can sometimes be used. A CSPRNG can "stretch" the available entropy over more bits.
The requirements of an ordinary PRNG are also satisfied by a cryptographically secure PRNG, but the reverse is not true. CSPRNG requirements fall into two groups:
For instance, if the PRNG under consideration produces output by computing bits of pi in sequence, starting from some unknown point in the binary expansion, it may well satisfy the next-bit test and thus be statistically random, as pi is conjectured to be a normal number. However, this algorithm is not cryptographically secure; an attacker who determines which bit of pi is currently in use (i.e. the state of the algorithm) will be able to calculate all preceding bits as well.
Most PRNGs are not suitable for use as CSPRNGs and will fail on both counts. First, while most PRNGs' outputs appear random to assorted statistical tests, they do not resist determined reverse engineering. Specialized statistical tests may be found specially tuned to such a PRNG that shows the random numbers not to be truly random. Second, for most PRNGs, when their state has been revealed, all past random numbers can be retrodicted, allowing an attacker to read all past messages, as well as future ones.
CSPRNGs are designed explicitly to resist this type of cryptanalysis.
In the asymptotic setting, a family of deterministic polynomial-time computable functions G_k : {0,1}^k → {0,1}^p(k) for some polynomial p is a pseudorandom number generator (PRNG, or PRG in some references) if it stretches the length of its input (p(k) > k for any k), and if its output is computationally indistinguishable from true randomness, i.e. for any probabilistic polynomial-time algorithm A, which outputs 1 or 0 as a distinguisher,

| Pr_{x ← {0,1}^k}[A(G(x)) = 1] − Pr_{r ← {0,1}^p(k)}[A(r) = 1] | ≤ μ(k)

for some negligible function μ.[4] (The notation x ← X means that x is chosen uniformly at random from the set X.)
There is an equivalent characterization: for any function family G_k : {0,1}^k → {0,1}^p(k), G is a PRNG if and only if the next output bit of G cannot be predicted by a polynomial-time algorithm.[5]
A forward-secure PRNG with block length t(k) is a PRNG G_k : {0,1}^k → {0,1}^k × {0,1}^t(k), where the input string s_i with length k is the current state at period i, and the output (s_{i+1}, y_i) consists of the next state s_{i+1} and the pseudorandom output block y_i of period i, that withstands state compromise extensions in the following sense. If the initial state s_1 is chosen uniformly at random from {0,1}^k, then for any i, the sequence (y_1, y_2, …, y_i, s_{i+1}) must be computationally indistinguishable from (r_1, r_2, …, r_i, s_{i+1}), in which the r_i are chosen uniformly at random from {0,1}^t(k).[6]
Any PRNG G : {0,1}^k → {0,1}^p(k) can be turned into a forward-secure PRNG with block length p(k) − k by splitting its output into the next state and the actual output. This is done by setting G(s) = G_0(s) ‖ G_1(s), in which |G_0(s)| = |s| = k and |G_1(s)| = p(k) − k; then G is a forward-secure PRNG with G_0 as the next state and G_1 as the pseudorandom output block of the current period.
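The splitting construction above can be sketched in a few lines of Python, using SHA-256 with domain-separation bytes as a stand-in length-doubling PRG; treating a hash this way is an idealization for illustration only, not a proven PRG:

```python
import hashlib

def G(s: bytes) -> bytes:
    # Stand-in length-doubling PRG: 32-byte state in, 64 bytes out.
    # Modeling SHA-256 as a PRG is an assumption made for this sketch.
    return (hashlib.sha256(s + b"\x00").digest()
            + hashlib.sha256(s + b"\x01").digest())

def step(state: bytes):
    # Split G's output: the first half becomes the next state (G0),
    # the second half is the pseudorandom output block (G1).
    out = G(state)
    return out[:32], out[32:]

state = bytes(32)  # in practice the seed comes from a true entropy source
for _ in range(3):
    state, block = step(state)
    print(block.hex())
```

Forward security here rests on the one-wayness of the state update: learning the current state does not let an attacker run the generator backwards to recover earlier output blocks.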
Santha and Vazirani proved that several bit streams with weak randomness can be combined to produce a higher-quality, quasi-random bit stream.[7] Even earlier, John von Neumann proved that a simple algorithm can remove a considerable amount of the bias in any bit stream,[8] which should be applied to each bit stream before using any variation of the Santha–Vazirani design.
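Von Neumann's debiasing algorithm is simple enough to sketch directly; this version assumes the input bits are independent and identically biased:

```python
def von_neumann_debias(bits):
    # Von Neumann's extractor: examine non-overlapping pairs of bits.
    # Emit 0 for the pair (0, 1), 1 for (1, 0), discard (0, 0) and (1, 1).
    # For independent flips with bias p, both kept pairs occur with
    # probability p * (1 - p), so the output bits are unbiased.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

print(von_neumann_debias([1, 1, 0, 1, 1, 0, 0, 0]))  # [0, 1]
```

The price of the correction is throughput: on average only p(1 − p) of the pairs yield an output bit, and the independence assumption must actually hold for the output to be unbiased.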
CSPRNG designs are divided into two classes:
"Practical" CSPRNG schemes not only include a CSPRNG algorithm, but also a way to initialize ("seed") it while keeping the seed secret. A number of such schemes have been defined, including:
Obviously, the technique is easily generalized to any block cipher; AES has been suggested.[18] If the key k is leaked, the entire X9.17 stream can be predicted; this weakness is cited as a reason for creating Yarrow.[19]
All these above-mentioned schemes, save for X9.17, also mix the state of a CSPRNG with an additional source of entropy. They are therefore not "pure" pseudorandom number generators, in the sense that the output is not completely determined by their initial state. This addition aims to prevent attacks even if the initial state is compromised.[a]
Several CSPRNGs have been standardized. For example:
The third PRNG in this standard, CTR DRBG, is based on a block cipher running in counter mode. It has an uncontroversial design but has been proven to be weaker, in terms of distinguishing attacks, than the security level of the underlying block cipher when the number of bits output from this PRNG is greater than two to the power of the underlying block cipher's block size in bits.[24]
When the maximum number of bits output from this PRNG is equal to 2^(block size), the resulting output delivers the mathematically expected security level that the key size would be expected to generate, but the output is shown not to be indistinguishable from a true random number generator.[24] When the maximum number of bits output is less than that, the expected security level is delivered and the output appears to be indistinguishable from a true random number generator.[24]
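The counter-mode structure can be sketched as follows. NIST's CTR_DRBG uses a block cipher such as AES plus reseeding and derivation rules omitted here; to keep the sketch dependency-free, SHA-256 is used as a stand-in PRF, so this illustrates only the counter-mode shape, not the standard itself:

```python
import hashlib

class CtrStyleDrbg:
    # Structural sketch of a counter-mode generator: each output block is
    # PRF(key, counter) for an incrementing counter. This class name and
    # the SHA-256 stand-in PRF are assumptions made for illustration.
    def __init__(self, seed: bytes):
        self.key = hashlib.sha256(seed).digest()
        self.counter = 0

    def generate(self, nbytes: int) -> bytes:
        out = b""
        while len(out) < nbytes:
            ctr = self.counter.to_bytes(16, "big")
            out += hashlib.sha256(self.key + ctr).digest()
            self.counter += 1
        return out[:nbytes]

drbg = CtrStyleDrbg(b"seed from a real entropy source")
print(drbg.generate(48).hex())
```

The distinguishing bound discussed above comes from the block-cipher case: a permutation never repeats an output block within one key, whereas truly random blocks occasionally collide, and after about 2^(block size) bits that difference becomes observable.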
It is noted in the next revision that the claimed security strength for CTR_DRBG depends on limiting the total number of generate requests and the bits provided per generate request.
The fourth and final PRNG in this standard is named Dual EC DRBG. It has been shown to not be cryptographically secure and is believed to have a kleptographic NSA backdoor.[25]
A good reference is maintained by NIST.[26]
There are also standards for statistical testing of new CSPRNG designs:
The Guardian and The New York Times reported in 2013 that the National Security Agency (NSA) inserted a backdoor into a pseudorandom number generator (PRNG) of NIST SP 800-90A, which allows the NSA to readily decrypt material that was encrypted with the aid of Dual EC DRBG. Both papers reported[28][29] that, as independent security experts long suspected,[30] the NSA had been introducing weaknesses into CSPRNG standard 800-90; this was confirmed for the first time by one of the top-secret documents leaked to The Guardian by Edward Snowden. The NSA worked covertly to get its own version of the NIST draft security standard approved for worldwide use in 2006. The leaked document states that "eventually, NSA became the sole editor". In spite of the known potential for a kleptographic backdoor and other known significant deficiencies with Dual_EC_DRBG, several companies such as RSA Security continued using Dual_EC_DRBG until the backdoor was confirmed in 2013.[31] RSA Security received a $10 million payment from the NSA to do so.[32]
On October 23, 2017, Shaanan Cohney, Matthew Green, and Nadia Heninger, cryptographers at the University of Pennsylvania and Johns Hopkins University, released details of the DUHK (Don't Use Hard-coded Keys) attack on WPA2, where hardware vendors use a hardcoded seed key for the ANSI X9.31 RNG algorithm, stating "an attacker can brute-force encrypted data to discover the rest of the encryption parameters and deduce the master encryption key used to encrypt web sessions or virtual private network (VPN) connections."[33][34]
During World War II, Japan used a cipher machine for diplomatic communications; the United States was able to crack it and read its messages, mostly because the "key values" used were insufficiently random.
|
https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator
|
In computing, a hardware random number generator (HRNG), true random number generator (TRNG), non-deterministic random bit generator (NRBG),[1] or physical random number generator[2][3] is a device that generates random numbers from a physical process capable of producing entropy, unlike a pseudorandom number generator (PRNG) that utilizes a deterministic algorithm[2] and non-physical nondeterministic random bit generators that do not include hardware dedicated to generation of entropy.[1]
Many natural phenomena generate low-level, statistically random "noise" signals, including thermal and shot noise, jitter and metastability of electronic circuits, Brownian motion, and atmospheric noise.[4] Researchers have also used the photoelectric effect, involving a beam splitter, other quantum phenomena,[5][6][7][8][9] and even nuclear decay (due to practical considerations the latter, as well as atmospheric noise, is not viable).[4] While "classical" (non-quantum) phenomena are not truly random, an unpredictable physical system is usually acceptable as a source of randomness, so the qualifiers "true" and "physical" are used interchangeably.[10]
A hardware random number generator is expected to output near-perfect random numbers ("full entropy").[1]A physical process usually does not have this property, and a practical TRNG typically includes a few blocks:[11]
Hardware random number generators generally produce only a limited number of random bits per second. In order to increase the available output data rate, they are often used to generate the "seed" for a faster PRNG. The DRBG also helps with noise source "anonymization" (whitening out the noise source's identifying characteristics) and entropy extraction. With a proper DRBG algorithm selected (a cryptographically secure pseudorandom number generator, CSPRNG), the combination can satisfy the requirements of Federal Information Processing Standards and Common Criteria standards.[12]
Hardware random number generators can be used in any application that needs randomness. However, in many scientific applications the additional cost and complexity of a TRNG (when compared with pseudorandom number generators) provide no meaningful benefits. TRNGs have additional drawbacks for data science and statistical applications: the impossibility of re-running a series of numbers unless they are stored, and a reliance on an analog physical entity that can obscure a failure of the source. TRNGs are therefore primarily used in applications where their unpredictability and the impossibility of re-running the sequence of numbers are crucial to the success of the implementation: in cryptography and gambling machines.[13]
The major use for hardware random number generators is in the field of data encryption, for example to create random cryptographic keys and nonces needed to encrypt and sign data. In addition to randomness, there are at least two additional requirements imposed by cryptographic applications:[14]
A typical way to fulfill these requirements is to use a TRNG to seed a cryptographically secure pseudorandom number generator.[15]
Physical devices were used to generate random numbers for thousands of years, primarily for gambling. Dice in particular have been known for more than 5000 years (found at sites in modern Iraq and Iran), and flipping a coin (thus producing a random bit) dates at least to the times of ancient Rome.[16]
The first documented use of a physical random number generator for scientific purposes was by Francis Galton (1890).[17] He devised a way to sample a probability distribution using a common gambling die. In addition to the top digit, Galton also looked at the face of the die closest to him, thus creating 6 × 4 = 24 outcomes (about 4.6 bits of randomness).[16]
Kendall and Babington-Smith (1938)[18]used a fast-rotating 10-sector disk that was illuminated by periodic bursts of light. The sampling was done by a human who wrote the number under the light beam onto a pad. The device was utilized to produce a 100,000-digit random number table (at the time such tables were used for statistical experiments, like PRNG nowadays).[16]
On 29 April 1947, the RAND Corporation began generating random digits with an "electronic roulette wheel", consisting of a random frequency pulse source of about 100,000 pulses per second gated once per second with a constant frequency pulse and fed into a five-bit binary counter. Douglas Aircraft built the equipment, implementing Cecil Hasting's suggestion (RAND P-113)[19] for a noise source (most likely the well-known behavior of the 6D4 miniature gas thyratron tube when placed in a magnetic field[20]). Twenty of the 32 possible counter values were mapped onto the 10 decimal digits and the other 12 counter values were discarded.[21] The results of a long run from the RAND machine, filtered and tested, were converted into a table, which originally existed only as a deck of punched cards, but was later published in 1955 as a book, 50 rows of 50 digits on each page[16] (A Million Random Digits with 100,000 Normal Deviates). The RAND table was a significant breakthrough in delivering random numbers because such a large and carefully prepared table had never before been available. It has been a useful source for simulations, modeling, and for deriving the arbitrary constants in cryptographic algorithms to demonstrate that the constants had not been selected maliciously ("nothing up my sleeve numbers").[22]
Since the early 1950s, research into TRNGs has been highly active, with thousands of research works published and about 2000 patents granted by 2017.[16]
Multiple different TRNG designs were proposed over time with a large variety of noise sources and digitization techniques ("harvesting"). However, practical considerations (size, power, cost, performance, robustness) dictate the following desirable traits:[23]
Stipčević & Koç in 2014 classified the physical phenomena used to implement TRNG into four groups:[3]
Noise-based RNGs generally follow the same outline: the source of a noise generator is fed into a comparator. If the voltage is above the threshold, the comparator output is 1, otherwise 0. The random bit value is latched using a flip-flop. Sources of noise vary and include:[24]
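The outline above can be sketched in a few lines; here simulated Gaussian samples stand in for the analog noise source, and the zero threshold is an assumption:

```python
import random

def comparator_bit(noise_voltage: float, threshold: float = 0.0) -> int:
    """Digitize one noise sample: output 1 if the voltage exceeds the
    threshold, otherwise 0 (the role of the comparator plus flip-flop)."""
    return 1 if noise_voltage > threshold else 0

# Simulated zero-mean noise; a real design samples an amplified physical source.
stream = [comparator_bit(random.gauss(0.0, 1.0)) for _ in range(16)]
```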
The drawbacks of using noise sources for an RNG design are:[25]
The idea of chaos-based noise stems from the use of a complex system that is hard to characterize by observing its behavior over time. For example, lasers can be put into (undesirable in other applications) chaos mode with chaotically fluctuating power, with power detected using a photodiode and sampled by a comparator. The design can be quite small, as all photonics elements can be integrated on-chip. Stipčević & Koč characterize this technique as "most objectionable", mostly due to the fact that chaotic behavior is usually controlled by a differential equation and no new randomness is introduced, thus there is a possibility of the chaos-based TRNG producing a limited subset of possible output strings.[27]
The TRNGs based on a free-running oscillator (FRO) typically utilize one or more ring oscillators (ROs), the outputs of which are sampled using yet another clock. Since the inverters forming the RO can be thought of as amplifiers with a very large gain, an FRO output exhibits very fast oscillations in the phase and frequency domains. FRO-based TRNGs are very popular due to their use of standard digital logic, despite issues with randomness proofs and chip-to-chip variability.[27]
Quantum random number generation technology is well established, with 8 commercial quantum random number generator (QRNG) products offered before 2017.[28]
Herrero-Collantes & Garcia-Escartin list the following stochastic processes as "quantum":
To reduce costs and increase robustness of quantum random number generators,[39]online services have been implemented.[28]
A plurality of quantum random number generators designs[40]are inherently untestable and thus can be manipulated by adversaries. Mannalath et al. call these designs "trusted" in a sense that they can only operate in a fully controlled, trusted environment.[41]
The failure of a TRNG can be quite complex and subtle, necessitating validation of not just the results (the output bit stream), but of the unpredictability of the entropy source.[10] Hardware random number generators should be constantly monitored for proper operation to protect against entropy source degradation due to natural causes and deliberate attacks. FIPS Pub 140-2 and NIST Special Publication 800-90B[42] define tests which can be used for this.
The minimal set of real-time tests mandated by the certification bodies is not large; for example, NIST in SP 800-90B requires just two continuous health tests: the Repetition Count Test and the Adaptive Proportion Test.[43]
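Of the two tests SP 800-90B mandates (the Repetition Count Test and the Adaptive Proportion Test), the first simply watches for a value repeating too many times in a row. A simplified sketch; in practice the cutoff is derived from the source's assessed entropy:

```python
def repetition_count_test(samples, cutoff: int) -> bool:
    """Simplified NIST SP 800-90B Repetition Count Test: fail if any
    sample value repeats `cutoff` or more times consecutively, which
    suggests the entropy source is stuck."""
    run_value, run_length = object(), 0   # sentinel that equals nothing
    for s in samples:
        if s == run_value:
            run_length += 1
            if run_length >= cutoff:
                raise RuntimeError("health test failed: source may be stuck")
        else:
            run_value, run_length = s, 1
    return True
```

In a real TRNG this check runs continuously on the raw samples, before any conditioning, so a stuck or degraded source is caught rather than masked.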
Just as with other components of a cryptography system, a cryptographic random number generator should be designed to resist certain attacks. Defending against these attacks is difficult without a hardware entropy source.[citation needed]
The physical processes in an HRNG introduce new attack surfaces. For example, a free-running oscillator-based TRNG can be attacked using frequency injection.[44]
There are mathematical techniques for estimating theentropyof a sequence of symbols. None are so reliable that their estimates can be fully relied upon; there are always assumptions which may be very difficult to confirm. These are useful for determining if there is enough entropy in a seed pool, for example, but they cannot, in general, distinguish between a true random source and a pseudorandom generator. This problem is avoided by the conservative use of hardware entropy sources.
|
https://en.wikipedia.org/wiki/Hardware_random_number_generator
|
Master Password is a type of algorithm first implemented by Maarten Billemont for creating unique passwords in a reproducible manner. It differs from traditional password managers in that the passwords are not stored on disk or in the cloud, but are regenerated every time from information entered by the user: their name, a master password, and a unique identifier for the service the password is intended for (usually the URL).[1]
By not storing the passwords anywhere, this approach makes it harder for attackers to steal or intercept them. It also removes the need for synchronization between devices, backups of potential password databases, and the risks of a data breach. This is sometimes called sync-less password management.
Billemont's implementation involves the following parameters:[1]
In Billemont's implementation, the master key is a global 64-byte secret key generated from the user's secret master password and salted by their full name. The salt is used to avoid attacks based on rainbow tables. The scrypt algorithm, an intentionally slow key derivation function, is used for generating the master key to make a brute-force attack infeasible.
The template seed is a site-specific secret in binary form, generated from the master key, the site name and the counter using the HMAC-SHA256 algorithm. It is later converted to a character string using the password templates. The template seed makes every password unique to the website and to the user.
The binary template seed is then converted to one of six available password types. The default type is the Maximum Security Password; others can be selected if the service's password policy does not allow passwords of that format:
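A condensed sketch of the two derivation steps described above. The scope string, the length-prefixed encoding, and the scrypt cost parameters shown here are illustrative of Billemont's published scheme, but should be checked against the reference implementation before any real use:

```python
import hashlib
import hmac
import struct

SCOPE = b"com.lyndir.masterpassword"  # scope constant; assumed from the reference scheme

def master_key(name: str, master_password: str) -> bytes:
    """Derive the global 64-byte master key with scrypt, salted by the full name."""
    salt = SCOPE + struct.pack(">I", len(name)) + name.encode()
    return hashlib.scrypt(master_password.encode(), salt=salt,
                          n=32768, r=8, p=2, maxmem=64 * 1024 * 1024, dklen=64)

def template_seed(key: bytes, site: str, counter: int = 1) -> bytes:
    """Derive the site-specific seed with HMAC-SHA256 over site name + counter."""
    msg = SCOPE + struct.pack(">I", len(site)) + site.encode() + struct.pack(">I", counter)
    return hmac.new(key, msg, hashlib.sha256).digest()
```

The resulting 32-byte seed would then be mapped through one of the password templates to yield a printable password; incrementing the counter regenerates a fresh password for the same site without changing the master key.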
Billemont also created multiple free software implementations of the Master Password algorithm, licensed under the GPLv3:[2]
|
https://en.wikipedia.org/wiki/Master_Password_(algorithm)
|
In cryptanalysis and computer security, password cracking is the process of guessing passwords[1] protecting a computer system. A common approach (brute-force attack) is to repeatedly try guesses for the password and to check them against an available cryptographic hash of the password.[2] Another type of approach is password spraying, which is often automated and occurs slowly over time in order to remain undetected, using a list of common passwords.[3]
The purpose of password cracking might be to help a user recover a forgotten password (since installing an entirely new password would involve system administration privileges), to gain unauthorized access to a system, or to act as a preventive measure whereby system administrators check for easily crackable passwords. On a file-by-file basis, password cracking is utilized to gain access to digital evidence to which a judge has allowed access, when a particular file's permissions are restricted.
The time to crack a password is related to bit strength, which is a measure of the password's entropy, and the details of how the password is stored. Most methods of password cracking require the computer to produce many candidate passwords, each of which is checked. One example is brute-force cracking, in which a computer tries every possible key or password until it succeeds. With multiple processors, this time can be optimized by searching from the last possible group of symbols and from the beginning at the same time, with other processors being placed to search through a designated selection of possible passwords.[4] More common methods of password cracking, such as dictionary attacks, pattern checking, and variations of common words, aim to optimize the number of guesses and are usually attempted before brute-force attacks. Higher password bit strength exponentially increases the number of candidate passwords that must be checked, on average, to recover the password and reduces the likelihood that the password will be found in any cracking dictionary.[5]
The ability to crack passwords using computer programs is also a function of the number of possible passwords per second which can be checked. If a hash of the target password is available to the attacker, this number can be in the billions or trillions per second, since an offline attack is possible. If not, the rate depends on whether the authentication software limits how often a password can be tried, either by time delays, CAPTCHAs, or forced lockouts after some number of failed attempts. Another situation where quick guessing is possible is when the password is used to form a cryptographic key. In such cases, an attacker can quickly check to see if a guessed password successfully decodes encrypted data.
For some kinds of password hash, ordinary desktop computers can test over a hundred million passwords per second using password cracking tools running on a general purpose CPU, and billions of passwords per second using GPU-based password cracking tools[1][6][7] (see John the Ripper benchmarks).[8] The rate of password guessing depends heavily on the cryptographic function used by the system to generate password hashes. A suitable password hashing function, such as bcrypt, is many orders of magnitude better than a naive function like simple MD5 or SHA. A user-selected eight-character password with numbers, mixed case, and symbols, with commonly selected passwords and other dictionary matches filtered out, reaches an estimated 30-bit strength, according to NIST. 2^30 is only about one billion permutations[9] and would be cracked in seconds if the hashing function were naive. When ordinary desktop computers are combined in a cracking effort, as can be done with botnets, the capabilities of password cracking are considerably extended. In 2002, distributed.net successfully found a 64-bit RC5 key in four years, in an effort which included over 300,000 different computers at various times, and which generated an average of over 12 billion keys per second.[10]
Graphics processing units can speed up password cracking by a factor of 50 to 100 over general purpose computers for specific hashing algorithms. As an example, in 2011, available commercial products claimed the ability to test up to 2,800,000,000 NTLM passwords a second on a standard desktop computer using a high-end graphics processor.[11] Such a device can crack a 10-letter single-case password in one day. The work can be distributed over many computers for an additional speedup proportional to the number of available computers with comparable GPUs. However, some algorithms run slowly, or are even specifically designed to run slowly, on GPUs. Examples are DES, Triple DES, bcrypt, scrypt, and Argon2.
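The "one day" figure follows from simple keyspace arithmetic, using the 2011 rate quoted above:

```python
keyspace = 26 ** 10            # 10-letter, single-case alphabetic passwords
rate = 2_800_000_000           # NTLM guesses per second (2011 figure)
hours = keyspace / rate / 3600
print(round(hours))            # ≈ 14 hours to exhaust the space, within a day
```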
Hardware acceleration in a GPU has enabled resources to be used to increase the efficiency and speed of a brute force attack for most hashing algorithms. In 2012, Stricture Consulting Group unveiled a 25-GPU cluster that achieved a brute force attack speed of 350 billion guesses of NTLM passwords per second, allowing them to check 95^8 password combinations in 5.5 hours, enough to crack all 8-character alpha-numeric-special-character passwords commonly used in enterprise settings. Using ocl-Hashcat Plus on a Virtual OpenCL cluster platform,[12] the Linux-based GPU cluster was used to "crack 90 percent of the 6.5 million password hashes belonging to users of LinkedIn".[13]
For some specific hashing algorithms, CPUs and GPUs are not a good match. Purpose-made hardware is required to run at high speeds. Custom hardware can be made using FPGA or ASIC technology. Development for both technologies is complex and (very) expensive. In general, FPGAs are favorable in small quantities, while ASICs are favorable in (very) large quantities, more energy efficient, and faster. In 1998, the Electronic Frontier Foundation (EFF) built a dedicated password cracker using ASICs. Their machine, Deep Crack, broke a DES 56-bit key in 56 hours, testing over 90 billion keys per second.[14] In 2017, leaked documents showed that ASICs were used for a military project that had the potential to code-break many parts of the Internet's communications with weaker encryption.[15] Since 2019, John the Ripper has supported password cracking for a limited number of hashing algorithms using FPGAs.[16] Commercial companies are now using FPGA-based setups for password cracking.[17]
Passwords that are difficult to remember will reduce the security of a system because:
Similarly, the more stringent the requirements for password strength, e.g. "have a mix of uppercase and lowercase letters and digits" or "change it monthly", the greater the degree to which users will subvert the system.[18]
In "The Memorability and Security of Passwords",[19] Jeff Yan et al. examine the effect of advice given to users about a good choice of password. They found that passwords based on thinking of a phrase and taking the first letter of each word are just as memorable as naively selected passwords, and just as hard to crack as randomly generated passwords. Combining two unrelated words is another good method, as is having a personally designed "algorithm" for generating obscure passwords.
However, asking users to remember a password consisting of a "mix of uppercase and lowercase characters" is similar to asking them to remember a sequence of bits: hard to remember, and only a little bit harder to crack (e.g. only 128 times harder to crack for 7-letter passwords, less if the user simply capitalizes one of the letters). Asking users to use "both letters and digits" will often lead to easy-to-guess substitutions such as 'E' → '3' and 'I' → '1': substitutions which are well known to attackers. Similarly, typing the password one keyboard row higher is a common trick known to attackers.
Research detailed in an April 2015 paper by several professors at Carnegie Mellon University shows that people's choices of password structure often follow several known patterns. For example, when password requirements require a long minimum length such as 16 characters, people tend to repeat characters or even entire words within their passwords.[20] As a result, passwords may be much more easily cracked than their mathematical probabilities would otherwise indicate. Passwords containing one digit, for example, disproportionately include it at the end of the password.[20]
On July 16, 1998, CERT reported an incident where an attacker had found 186,126 encrypted passwords. By the time the breach was discovered, 47,642 passwords had already been cracked.[21]
In December 2009, a major password breach of Rockyou.com occurred that led to the release of 32 million passwords. The attacker then leaked the full list of the 32 million passwords (with no other identifiable information) to the internet. Passwords were stored in cleartext in the database and were extracted through an SQL injection vulnerability. The Imperva Application Defense Center (ADC) did an analysis on the strength of the passwords.[22] Some of the key findings were:
In June 2011, NATO (North Atlantic Treaty Organization) suffered a security breach that led to the public release of the first and last names, usernames, and passwords of more than 11,000 registered users of their e-bookshop. The data were leaked as part of Operation AntiSec, a movement that includes Anonymous, LulzSec, and other hacking groups and individuals.[23]
On July 11, 2011, Booz Allen Hamilton, a large American consulting firm that does a substantial amount of work for the Pentagon, had its servers hacked by Anonymous and leaked the same day. "The leak, dubbed 'Military Meltdown Monday', includes 90,000 logins of military personnel—including personnel from USCENTCOM, SOCOM, the Marine Corps, various Air Force facilities, Homeland Security, State Department staff, and what looks like private-sector contractors."[24] These leaked passwords were found to be hashed with unsalted SHA-1, and were later analyzed by the ADC team at Imperva, revealing that even some military personnel used passwords as weak as "1234".[25]
On July 18, 2011, Microsoft Hotmail banned the password "123456".[26]
In July 2015, a group calling itself "The Impact Team" stole the user data of Ashley Madison.[27] Many passwords were hashed using both the relatively strong bcrypt algorithm and the weaker MD5 hash. Attacking the latter algorithm allowed some 11 million plaintext passwords to be recovered by the password cracking group CynoSure Prime.[28]
One method of preventing a password from being cracked is to ensure that attackers cannot get access even to the hashed password. For example, on the Unix operating system, hashed passwords were originally stored in a publicly accessible file, /etc/passwd. On modern Unix (and similar) systems, on the other hand, they are stored in the shadow password file /etc/shadow, which is accessible only to programs running with enhanced privileges (i.e., "system" privileges). This makes it harder for a malicious user to obtain the hashed passwords in the first instance; however, many collections of password hashes have been stolen despite such protection. And some common network protocols transmit passwords in cleartext or use weak challenge/response schemes.[29][30]
The use of salt, a random value unique to each password that is incorporated in the hashing, prevents multiple hashes from being attacked simultaneously and also prevents the creation of pre-computed dictionaries such as rainbow tables.
Another approach is to combine a site-specific secret key with the password hash, which prevents plaintext password recovery even if the hashed values are purloined. However, privilege escalation attacks that can steal protected hash files may also expose the site secret. A third approach is to use key derivation functions that reduce the rate at which passwords can be guessed.[31]: 5.1.1.2
Modern Unix systems have replaced the traditional DES-based password hashing function crypt() with stronger methods such as crypt-SHA, bcrypt, and scrypt.[32] Other systems have also begun to adopt these methods. For instance, Cisco IOS originally used a reversible Vigenère cipher to encrypt passwords, but now uses md5-crypt with a 24-bit salt when the "enable secret" command is used.[33] These newer methods use large salt values which prevent attackers from efficiently mounting offline attacks against multiple user accounts simultaneously. The algorithms are also much slower to execute, which drastically increases the time required to mount a successful offline attack.[34]
Many hashes used for storing passwords, such as MD5 and the SHA family, are designed for fast computation with low memory requirements and efficient implementation in hardware. Multiple instances of these algorithms can be run in parallel on graphics processing units (GPUs), speeding cracking. As a result, fast hashes are ineffective in preventing password cracking, even with salt. Some key stretching algorithms, such as PBKDF2 and crypt-SHA, iteratively calculate password hashes and can significantly reduce the rate at which passwords can be tested, if the iteration count is high enough. Other algorithms, such as scrypt, are memory-hard, meaning they require relatively large amounts of memory in addition to time-consuming computation and are thus more difficult to crack using GPUs and custom integrated circuits.
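A minimal example of salted, iterated password hashing with PBKDF2-HMAC-SHA256; the iteration count here is an illustrative modern choice, not a value from the text:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to your hardware budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest): a fresh random salt plus the stretched hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the stretched hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

The per-password random salt defeats precomputed rainbow tables, and the iteration count forces an attacker to pay the same stretching cost for every guess.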
In 2013, a long-term Password Hashing Competition was announced to choose a new, standard algorithm for password hashing,[35] with Argon2 chosen as the winner in 2015. Another algorithm, Balloon, is recommended by NIST.[36] Both algorithms are memory-hard.
Solutions like a security token give a formal proof answer[clarification needed] by constantly shifting the password. Those solutions sharply reduce the timeframe available for brute forcing (the attacker needs to break and use the password within a single shift), and they reduce the value of stolen passwords because of their short validity period.
There are many password cracking software tools, but the most popular[37] are Aircrack-ng, Cain & Abel, John the Ripper, Hashcat, Hydra, DaveGrohl, and ElcomSoft. Many litigation support software packages also include password cracking functionality. Most of these packages employ a mixture of cracking strategies; algorithms with brute-force and dictionary attacks prove to be the most productive.[38]
The increased availability of computing power and beginner-friendly automated password cracking software for a number of protection schemes has allowed the activity to be taken up by script kiddies.[39]
|
https://en.wikipedia.org/wiki/Password_length_parameter
|
A cant is the jargon or language of a group, often employed to exclude or mislead people outside the group.[1] It may also be called a cryptolect, argot, pseudo-language, anti-language or secret language. Each term differs slightly in meaning; their uses are inconsistent.
There are two main schools of thought on the origin of the wordcant:
An argot (English: /ˈɑːrɡoʊ/; from French argot [aʁɡo] 'slang') is a language used by various groups to prevent outsiders from understanding their conversations. The term argot is also used to refer to the informal specialized vocabulary from a particular field of study, occupation, or hobby, in which sense it overlaps with jargon.
In his 1862 novel Les Misérables, Victor Hugo refers to that argot as both "the language of the dark" and "the language of misery".[4]
The earliest known record of the term argot in this context was in a 1628 document. The word was probably derived from the contemporary name les argotiers, given to a group of thieves at that time.[5]
Under the strictest definition, an argot is a proper language with its own grammatical system.[6] Such complete secret languages are rare because the speakers usually have some public language in common, on which the argot is largely based. Such argots are lexically divergent forms of a particular language, with a part of its vocabulary replaced by words unknown to the larger public; argot used in this sense is synonymous with cant. For example, argot in this sense is used for systems such as verlan and louchébem, which retain French syntax and apply transformations only to individual words (and often only to a certain subset of words, such as nouns, or semantic content words).[7] Such systems are examples of argots à clef, or "coded argots".[7]
Specific words can go from argot into everyday speech or the other way. For example, modern French loufoque 'crazy', 'goofy', now common usage, originated in the louchébem transformation of French fou 'crazy'.
In the field of medicine, physicians have been said to have their own spoken argot, cant, or slang, which incorporates commonly understood abbreviations and acronyms, frequently used technical colloquialisms, and much everyday professional slang (that may or may not be institutionally or geographically localized).[8] While many of these colloquialisms may prove impenetrable to most lay people, few seem to be specifically designed to conceal meaning from patients (perhaps because standard medical terminology would usually suffice anyway).[8]
The concept of the anti-language was first defined and studied by the linguist Michael Halliday, who used the term to describe the lingua franca of an anti-society.[9] An anti-society is a small, separate community intentionally created within a larger society as an alternative to or resistance of it.[9] For example, Adam Podgórecki studied one anti-society composed of Polish prisoners; Bhaktiprasad Mallik of Sanskrit College studied another composed of criminals in Calcutta.[9]
These societies develop anti-languages as a means to prevent outsiders from understanding their communication and as a manner of establishing a subculture that meets the needs of their alternative social structure.[10] Anti-languages differ from slang and jargon in that they are used solely among ostracized social groups, including prisoners,[11] criminals, homosexuals,[10] and teenagers.[12] Anti-languages use the same basic vocabulary and grammar as their native language in an unorthodox fashion. For example, anti-languages borrow words from other languages, create unconventional compounds, or utilize new suffixes for existing words. Anti-languages may also change words using metathesis, reversal of sounds or letters (e.g., apple to elppa), or substitution of their consonants.[9] Therefore, anti-languages are distinct and unique and are not simply dialects of existing languages.
In his essay "Anti-Language", Halliday synthesized the research of Thomas Harman, Adam Podgórecki, and Bhaktiprasad Mallik to explore anti-languages and the connection between verbal communication and the maintenance of a social structure. For this reason, the study of anti-languages is both a study of sociology and linguistics. Halliday's findings can be compiled as a list of nine criteria that a language must meet to be considered an anti-language:
Examples of anti-languages include Cockney rhyming slang, CB slang, verlan, the grypsera of Polish prisons, thieves' cant,[13] Polari,[14] and Bangime.[15]
Anti-languages are sometimes created by authors and used by characters in novels. These anti-languages do not have complete lexicons, cannot be observed in use for linguistic description, and therefore cannot be studied in the same way a language spoken by an existing anti-society would. However, they are still used in the study of anti-languages. Roger Fowler's "Anti-Languages in Fiction" analyzes Anthony Burgess's A Clockwork Orange and William S. Burroughs' Naked Lunch to redefine the nature of the anti-language and to describe its ideological purpose.[16]
A Clockwork Orange is a popular example of a novel in which the main character is a teenage boy who speaks an anti-language called Nadsat. This language is often referred to as an argot, but it has been argued that it is an anti-language because of the social structure it maintains through the social class of the droogs.[12]
In parts of Connacht, in Ireland, cant mainly refers to an auction, typically on fair day ("Cantmen and Cantwomen, some from as far away as Dublin, would converge on Mohill on a Fair Day, ... set up their stalls ... and immediately start auctioning off their merchandise") and secondly means talk ("very entertaining conversation was often described as 'great cant'" or "crosstalk").[17][18]
In Scotland, two unrelated creole languages are termed cant. Scottish Cant (a mixed language, primarily Scots and Romani with Scottish Gaelic influences) is spoken by lowland Roma groups. Highland Traveller's Cant (or Beurla Reagaird) is a Gaelic-based cant of the Indigenous Highland Traveller population.[2] The cants are mutually unintelligible.
The word has also been used as a suffix to coin names for modern-day jargons such as "medicant", a term used to refer to the type of language employed by members of the medical profession that is largely unintelligible to lay people.[1]
The thieves' cant was a feature of popular pamphlets and plays, particularly between 1590 and 1615, but continued to feature in literature through the 18th century. There are questions about how genuinely the literature reflected vernacular use in the criminal underworld. A thief in 1839 claimed that the cant he had seen in print was nothing like the cant then used by "gypsies, thieves, and beggars". He also said that each of these groups used distinct vocabularies, which overlapped; the gypsies had a cant word for everything, and the beggars used a lower style than the thieves.[23]
|
https://en.wikipedia.org/wiki/Argot
|
In military terminology, a countersign is a sign, word, or any other signal previously agreed upon and required to be exchanged between a picket or guard and anybody approaching his or her post. The term usually encompasses both the sign given by the approaching party and the sentry's reply. However, in some militaries, the countersign is strictly the reply of the sentry to the password given by the person approaching.[1]
A well-known sign/countersign was used by the Allied forces on D-Day during World War II: the challenge/sign was "flash", the password "thunder", and the countersign (to challenge the person giving the first codeword) "Welcome".[2]
Some countersigns include words that are difficult for an enemy to pronounce. For instance, in the above example, the word "thunder" contains a voiceless dental fricative (/θ/),[3] which does not exist in German.[4]
The opening lines of William Shakespeare's play Hamlet, spoken between soldiers on duty, are viewed as representing a crude sign in which the line "Long live the King!" was a sign between soldiers:
|
https://en.wikipedia.org/wiki/Countersign_(military)
|
In politics, a dog whistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition. The concept is named after ultrasonic dog whistles, which are audible to dogs but not humans. Dog whistles use language that appears normal to the majority but communicates specific things to intended audiences. They are generally used to convey messages on issues likely to provoke controversy without attracting negative attention.[not verified in body]
According to William Safire, the term dog whistle in reference to politics may have been derived from its use in the field of opinion polling. Safire quotes Richard Morin, director of polling for The Washington Post, as writing in 1988:
subtle changes in question-wording sometimes produce remarkably different results ... researchers call this the "Dog Whistle Effect": Respondents hear something in the question that researchers do not.[1]
He speculates that campaign workers adapted the phrase from political pollsters.[1]
In her 2006 book Voting for Jesus: Christianity and Politics in Australia, academic[clarification needed] Amanda Lohrey writes that the goal of the dog-whistle is to appeal to the greatest possible number of electors while alienating the smallest possible number. She uses as an example politicians choosing broadly appealing words such as "family values", which have extra resonance for Christians, while avoiding overt Christian moralizing that might be a turn-off for non-Christian voters.[2]
Australian political theorist Robert E. Goodin argues that the problem with dog-whistling is that it undermines democracy: if voters have different understandings of what they were supporting during a campaign, the fact that they seemed to support the same thing is "democratically meaningless" and does not give the dog-whistler a policy mandate.[3]
The term was first picked up in Australian politics in the mid-1990s, and was frequently applied to the political campaigning of John Howard.[4] Throughout his 11 years as Australian prime minister, and particularly in his fourth term, Howard was accused of communicating messages appealing to anxious Australian voters using code words such as "un-Australian", "mainstream", and "illegals".[5][6]
One notable example was the Howard government's message on refugee arrivals. His government's tough stance on immigration was popular with voters, but it was accused of using the issue to additionally send veiled messages of support to voters with racist leanings,[7] while maintaining plausible deniability by avoiding overtly racist language.[8] Another example was the publicity of the Australian citizenship test in 2007.[8] It has been argued that the test may appear reasonable at face value, but is really intended to appeal to those opposing immigration from particular geographic regions.[9]
During the 2015 Canadian federal election, the Canadian Broadcasting Corporation (CBC) reported on a controversy involving the Conservative party leader, incumbent Prime Minister Stephen Harper, using the phrase "old-stock Canadians" in a debate, apparently to appeal to his party's base supporters. Commentators, including pollster Frank Graves and former Quebec Liberal MP Marlene Jennings, saw this as a codeword historically used against non-white immigrants.[10]
Midway through the election campaign, the Conservative Party had hired Australian political strategist Lynton Crosby as a political adviser when they fell to third place in the polls, behind the Liberal Party and the New Democratic Party.[11] On 17 September 2015, during a televised election debate, Stephen Harper, while discussing the government's controversial decision to remove certain immigrants and refugee claimants from accessing Canada's health care system, made reference to "Old Stock Canadians" as being in support of the government's position. Marlene Jennings called his words racist and divisive, as they are used to exclude Canadians of colour.[10]
Darmawan Prasodjo notes the use of the concept of "strong leadership" as a dog whistle in the context of Indonesian politics.[12]
The popular Palestinian nationalist and anti-Zionist slogan "from the river to the sea" has been called a dog whistle for the complete destruction of Israel by Charles C. W. Cooke and Seth Mandel.[13][14] Pat Fallon called its usage "a thinly veiled call for the genocide of millions of Jews in Israel," and the Anti-Defamation League notes that "It is an antisemitic charge denying the Jewish right to self-determination, including through the removal of Jews from their ancestral homeland."[15]
According to United States Congresswoman Rashida Tlaib, the sole Palestinian-American representative in Congress, the slogan is "an aspirational call for freedom, human rights and peaceful coexistence, not death, destruction, or hate."[16] According to Maha Nassar, Associate Professor in the School of Middle Eastern and North African Studies, University of Arizona, "the majority of Palestinians who use this phrase do so because they believe that, in 10 short words, it sums up their personal ties, their national rights and their vision for the land they call Palestine. And while attempts to police the slogan's use may come from a place of genuine concern, there is a risk that tarring the slogan as antisemitic – and therefore beyond the pale – taps into a longer history of attempts to silence Palestinian voices."[17] In an interview with Al Jazeera, Nimer Sultany, a lecturer in law at the School of Oriental and African Studies (SOAS) in London, said the adjective "free" expresses "the need for equality for all inhabitants of historic Palestine".[18]
From a historical perspective and the perspective of Palestinian civilians, the full slogan has had several variations.[19]
The multiplicity of political meanings behind the full chant, for some a call to freedom and for others a call to ethnic cleansing, characterizes it as a dog whistle.[citation needed]
Lynton Crosby, who had previously managed John Howard's four election campaigns in Australia, worked as a Conservative Party adviser during the 2005 UK general election, and the term was introduced to British political discussion at this time.[1] In what Goodin calls "the classic case" of dog-whistling,[3] Crosby created a campaign for the Conservatives with the slogan "Are you thinking what we're thinking?": a series of posters, billboards, TV commercials and direct mail pieces with messages like "It's not racist to impose limits on immigration" and "How would you feel if a bloke on early release attacked your daughter?",[20] focused on controversial issues like insanitary hospitals, land grabs by squatters and restraints on police behaviour.[21][22]
During the EU referendum, the Leave campaign was accused by members of the Remain campaign, such as Labour MP Yvette Cooper and then-Green MP Caroline Lucas, of stirring up racial hatred against Eastern Europeans and ethnic minorities through anti-immigration dog whistles.[23] Vote Leave distanced itself from Leave.EU and UKIP after the "Breaking Point" poster, which showed predominantly Syrian and Afghan refugees near the Croatia–Slovenia border, with the sole white person in the image obscured by text. Boris Johnson stated it was "not our campaign" and "not my politics".[24]
During the 2024 general election, Reform UK was accused of racist dog whistling when leader Nigel Farage stated that the then Prime Minister Rishi Sunak, who is of Indian descent, "doesn't understand our culture"[25] and "is not patriotic"[26] after Sunak left commemorations for the 80th anniversary of D-Day early.
The phrase "states' rights", literally referring to powers of individual state governments in the United States, was described in 2007 by journalist David Greenberg inSlateas "code words" for institutionalized segregation and racism.[27]States' rightswas the banner under which groups like theDefenders of State Sovereignty and Individual Libertiesargued in 1955 against school desegregation.[28]In 1981, formerRepublican PartystrategistLee Atwater, when giving an anonymous interview discussing formerpresidentRichard Nixon'sSouthern strategy, speculated that terms like "states' rights" were used for dog-whistling:[29][30][31]
You start out in 1954 by saying, "Nigger, nigger, nigger." By 1968, you can't say "nigger" – that hurts you. Backfires. So you say stuff like forced busing, states' rights, and all that stuff. You're getting so abstract now, you're talking about cutting taxes. And all these things you're talking about are totally economic things and a byproduct of them is [that] blacks get hurt worse than whites. And subconsciously maybe that is part of it. I'm not saying that. But I'm saying that if it is getting that abstract, and that coded, that we are doing away with the racial problem one way or the other. You follow me – because obviously sitting around saying, "We want to cut this" is much more abstract than even the busing thing, and a hell of a lot more abstract than "Nigger, nigger."[32]
Atwater was contrasting this with then-President Ronald Reagan's campaign, which he felt "was devoid of any kind of racism, any kind of reference". However, Ian Haney López, an American law professor and author of the 2014 book Dog Whistle Politics, described Reagan as "blowing a dog whistle" when the candidate told stories about "Cadillac-driving 'welfare queens' and 'strapping young bucks' buying T-bone steaks with food stamps" while campaigning for the presidency.[33][34][35] He argues that such rhetoric pushes middle-class white Americans to vote against their economic self-interest in order to punish "undeserving minorities" who, they believe, are receiving too much public assistance at their expense. According to López, conservative middle-class whites, convinced by powerful economic interests that minorities are the enemy, supported politicians who promised to curb illegal immigration and crack down on crime but inadvertently also voted for policies that favor the extremely rich, such as slashing taxes for top income brackets, giving corporations more regulatory control over industry and financial markets, union busting, cutting pensions for future public employees, reducing funding for public schools, and retrenching the social welfare state. He argues that these same voters cannot link the rising inequality that has affected their lives to the policy agendas they support, which have resulted in a massive transfer of wealth to the top 1 percent of the population since the 1980s.[36][37]
In the U.S., the phrase "international bankers" is a well-known dog whistle for Jews. Its use as such derives from the antisemitic fabrication The Protocols of the Elders of Zion. It was frequently used by the fascist-supporting radio personality Charles Coughlin on his national show; his repeated use of the term was a factor in the distributor CBS opting not to renew his contract.[38] The word "globalists" is similarly widely considered an antisemitic dog whistle.[39][40][41][42]
Journalist Craig Unger wrote that President George W. Bush and Karl Rove used coded "dog-whistle" language in political campaigning, delivering one message to the overall electorate while at the same time delivering quite a different message to a targeted evangelical Christian political base.[43] William Safire, in Safire's Political Dictionary, offered the example of Bush's criticism during the 2004 presidential campaign of the U.S. Supreme Court's 1857 Dred Scott decision denying the U.S. citizenship of any African American. To most listeners the criticism seemed innocuous, Safire wrote, but "sharp-eared observers" understood the remark to be a pointed reminder that Supreme Court decisions can be reversed, and a signal that, if re-elected, Bush might nominate to the Supreme Court a justice who would overturn Roe v. Wade.[1] This view is echoed in a 2004 Los Angeles Times article by Peter Wallsten.[44]
During Barack Obama's campaign and presidency, a number of left-wing commentators described various statements about Obama as racist dog whistles. During the 2008 Democratic primaries, writer Enid Lynette Logan criticized Hillary Clinton's campaign's reliance on code words and innuendo seemingly designed to frame Obama's race as problematic, saying Obama was characterized by the Clinton campaign and its prominent supporters as anti-white due to his association with Rev. Jeremiah Wright, as able to attract only black votes, as anti-patriotic, a drug user, possibly a drug seller, and married to an angry, ungrateful black woman.[45] A light-hearted 2008 article by Amy Chozick in The Wall Street Journal questioned whether Obama was too thin to be elected president, given the average weight of Americans; commentator Timothy Noah wrote that this was a racist dog whistle, because "When white people are invited to think about Obama's physical appearance, the principal attribute they're likely to dwell on is his dark skin."[46] In a 2010 speech, Sarah Palin criticized Obama, saying "we need a commander in chief, not a professor of law standing at the lectern". Harvard professor (and Obama ally) Charles Ogletree called this attack racist, because the true idea being communicated was "that he's not one of us".[47] MSNBC commentator Lawrence O'Donnell called a 2012 speech by Mitch McConnell, in which McConnell criticized Obama for playing too much golf, a racist dog whistle because O'Donnell felt it was meant to remind listeners of black golfer Tiger Woods, who at the time was going through an infidelity scandal.[48]
During the 2016 presidential election campaign and on a number of occasions throughout his presidency, Donald Trump was accused of using racial and antisemitic "dog whistling" techniques by politicians and major news outlets.[49][50][51][52][53] New York Times columnist Ross Douthat remarked that the Trump campaign "slogan 'Make America Great Again' can be read as a dog-whistle to some whiter and more Anglo-Saxon past".[54]
Former Fox News anchor Tucker Carlson has been reported to use dog-whistling tactics on his former commentary show Tucker Carlson Tonight.[55][56][57]
During the 2018 gubernatorial race in Florida, Ron DeSantis came under criticism for comments that were allegedly racist, saying: "The last thing we need to do is to monkey this up by trying to embrace a socialist agenda with huge tax increases and bankrupting the state. That is not going to work. That's not going to be good for Florida."[58] DeSantis was accused of using the verb "monkey" as a racist dog whistle; his opponent, Andrew Gillum, was an African American. DeSantis denied that his comment was meant to be racially charged.[59]
Terms such as "woke", "CRT", and "DEI" have been described as dog whistles against Black people.[60][61][62][63][64][65][66]
Roberto Saviano of The Guardian claimed that Italian right-wing politician Giorgia Meloni used the Mussolini-era slogan "God, homeland, family" as a dog whistle to signal her anti-immigration stance, and in 2019 she used her identity as a dog whistle, proclaiming at a rally: "I am Giorgia, I am a woman, I am a mother, I am Italian, I am a Christian."[67] Washington Post columnist Philip Bump contended that Meloni has used the term "financial speculators"[68] as a dog whistle to conceal antisemitism.
Academics disagree on whether the dog-whistle notion has conceptual validity, and furthermore on the mechanisms by which discourses identified as dog whistles function. For instance, the sociologist Barry Hindess criticized Josh Fear's and Robert E. Goodin's respective attempts to theorize dog whistles on the grounds that they did not pass the Weberian test of value neutrality: "In the case of the concept of 'dog-whistle politics,' we find that the investigator's—in this case, Fear's—disapproval enters into the definition of the object of study. Goodin avoids this problem, clearly signalling his disapproval—for example, with his 'particularly pernicious' (2008, p. 224)—but not letting it interfere with his own conceptualisation of the phenomenon. The difficulty here is that this abstinence leaves him with no real distinction between the general phenomena of coded messaging […] and dog whistling in particular, leaving us to suspect that dog whistling should be seen not so much as a novel form of rhetoric, but rather, to borrow an image from Thomas Hobbes' Leviathan, as a familiar form misliked."[69]
In effect, the philosopher Carlos Santana corroborates Hindess' criticism that the dog-whistle notion depends on the investigator's social and moral values in his own attempted definition, writing: "We don't want every instance of bi-level meaning in political discourse to count as dog whistles, because not every instance of political doublespeak is problematic in the way prototypical dog whistles like welfare queen and family values are. Some, like backhanded compliments to political rivals, aren't a major source of social ills. Some, like aspirational hypocrisy (Quill 2010) and deliberate doublespeak meant to bring diverse constituencies together (Maloyed 2011), might even be socially beneficial. Keep in mind what makes dog whistles problematic: they harm disadvantaged groups, undermine our ability to have a functioning plural society, and muddle our ability to reliably hold political figures responsible for their actions. Given our interest in addressing these harms, it makes sense to limit our definition of dog whistles to the types of bi-level meaning which engender them."[70]
For another instance of criticism, albeit from a different direction, the psychologist Steven Pinker has remarked that the concept of dog whistling allows people to "claim that anyone says anything because you can easily hear the alleged dogwhistles that aren't in the actual literal contents of what the person says".[71]
Mark Liberman has argued that it is common for speech and writing to convey messages that will only be picked up on by part of the audience, but that this does not usually mean that the speaker is deliberately conveying a double message.[72]
Finally, Robert Henderson and Elin McCready argue that plausible deniability is a key characteristic of dog whistles.[73]
https://en.wikipedia.org/wiki/Dog_whistle_(politics)
Linguistic discrimination (also called glottophobia, linguicism and languagism) is the unfair treatment of people based upon their use of language and the characteristics of their speech, such as their first language, their accent, the perceived size of their vocabulary (whether or not the speaker uses complex and varied words), their modality, and their syntax.[1] For example, an Occitan speaker in France will probably be treated differently from a French speaker.[2]
Based on a difference in the use of language, a person may automatically form judgments about another person's wealth, education, social status, character or other traits, which may lead to discrimination. This has led to public debate surrounding localisation theories, as well as the prevalence of linguistic diversity, in numerous Western nations.
Linguistic discrimination was at first considered an act of racism. In the mid-1980s, linguist Tove Skutnabb-Kangas captured the idea of language-based discrimination as linguicism, which was defined as "ideologies and structures used to legitimize, effectuate, and reproduce unequal divisions of power and resources (both material and non-material) between groups which are defined on the basis of language".[3] Although different names have been given to this form of discrimination, they all hold the same definition. Linguistic discrimination is culturally and socially determined due to preference for one use of language over others.
Scholars have analyzed the role of linguistic imperialism in linguicism, with some asserting that speakers of dominant languages gravitate toward discrimination against speakers of other, less dominant languages, while disadvantaging themselves linguistically by remaining monolingual.[4]
According to Carolyn McKinley, this phenomenon is most present in Africa, where the majority of the population speaks European languages introduced during the colonial era; African states are also noted as instituting European languages as the main medium of instruction, instead of indigenous languages.[4] UNESCO reports have noted that this has historically benefitted only the African upper class, conversely disadvantaging the majority of Africa's population, who hold varying levels of fluency in the European languages spoken across the continent.[4]
Scholars have also noted the influence of the linguistic dominance of English on academic disciplines; Anna Wierzbicka, professor of linguistics at the Australian National University, has described disciplines such as the social sciences and humanities as being "locked in a conceptual framework grounded in English", preventing academia as a whole from reaching a "more universal, culture-independent perspective".[5]
Speakers with certain accents may experience prejudice. For example, some accents hold more prestige than others depending on the cultural context. However, with so many dialects, it can be difficult to determine which is the most preferable. The best answer linguists can give, such as the authors of Do You Speak American?, is that it depends on the location and the speaker. Research has determined, however, that some sounds in languages may naturally sound less pleasant.[6] Also, certain accents tend to carry more prestige than others in some societies. For example, in the United States, speaking General American (a variety associated with the white middle class) is widely preferred in many contexts, such as television journalism. Likewise, in the United Kingdom, Received Pronunciation is associated with being of higher class and thus more likable.[7] In addition to prestige, research has shown that certain accents may also be associated with less intelligence and poorer social skills.[8] An example can be seen in the difference between Southerners and Northerners in the United States, where people from the North are typically perceived as less likable in character, and Southerners as less intelligent. As the sociolinguist Lippi-Green argues, "It has been widely observed that when histories are written, they focus on the dominant class... Generally studies of the development of language over time are very narrowly focused on the smallest portion of speakers: those with power and resources to control the distribution of information."[9]
Linguistic discrimination appeared before the term was established. During the 1980s, scholars explored the connection between racism and languages; linguistic discrimination was treated as a part of racism when it was first studied. The first case found that helped establish the term was in New Zealand, where white colonizers judged the native Māori population by their language. Linguistic discrimination may originate from fixed institutions and stereotypes of the elite class. Elites reveal strong racism through writing, speaking, and other communication methods, providing a basis for discrimination. Their way of speaking the language is considered that of the higher class, reinforcing the idea that how one speaks a language is related to social, economic, and political status.[10]
As sociolinguistics evolved, scholars recognized the need for a more nuanced framework to analyze the complex interactions between language and social identity. This led to the introduction of linguistic ideology, a critical concept that addresses the nuances of linguistic discrimination without conflating it with broader issues of racism. Linguistic ideology can be defined as the beliefs, attitudes, and assumptions that a society holds about language, including the idea that the way an individual speaks can serve as a powerful indicator of their social status and identity within a community. This perspective enables researchers to unpack how certain linguistic features—such as accents, dialects, and speech patterns—are often laden with social meanings that can perpetuate stereotypes about different groups, and how these ideologies shape perceptions and evaluations of speakers, leading to discriminatory practices based on linguistic characteristics. Linguistic discrimination can consequently be understood as a phenomenon rooted in societal beliefs and cognitive biases, at the intersection of language, identity, and power. By focusing on linguistic ideology, sociolinguistics provides a more targeted lens through which to examine the social consequences of language use and the systemic inequalities that arise from such perceptions.
It is natural for human beings to want to identify with others. One way we do this is by categorizing individuals into specific social groups. While some groups are often assumed to be readily noticeable (such as those defined by ethnicity or gender), other groups are less salient. Linguist Carmen Fought explains how an individual's use of language may allow another person to categorize them into a specific social group that may otherwise be less apparent.[11] For example, in the United States it is common to perceive Southerners as less intelligent. Belonging to a social group such as the South may be less salient than membership in other groups that are defined by ethnicity or gender. Language provides a bridge for prejudice to occur against these less salient social groups.[12]
Linguistic discrimination is a form of racism. Its impact ranges from physical violence to mental trauma to the extinction of a language. Victims of linguistic discrimination may experience physical bullying in school and reduced earnings in their jobs. In countries where a variety of languages exist, it is hard for people to obtain basic social services such as education and health care[13] when they do not understand the language. Mentally, they may feel ashamed or guilty about speaking their home language.[14]
People who speak a language other than the mainstream language may not feel socially accepted. Research shows that countries with assimilation policies see higher levels of stress.[15] Speakers are forced to accept the mainstream language and a foreign culture.[16]
According to statistics, an endangered language goes extinct every two weeks. This is because, at the country level, linguistically marginalized populations must learn the common language to obtain resources; their opportunities are very limited when they cannot communicate in a way everyone else understands.[17]
English, the most widely spoken language across the world's countries, is a frequent site of linguistic discrimination when people from different linguistic backgrounds meet. Regional differences and native languages affect how people speak it. For example, many non-native speakers cannot pronounce the "th" sound and replace it with an "s" or "z" sound, which is more common in their languages: "thank" becomes "sank", and "mother" becomes "mozer". In Russian-accented English, "Hi, where were you" may be pronounced like "Hi, veir ver you", since this is closer to Russian phonology. Such speech may be considered an inappropriate way to speak the language and be ridiculed by native speakers; research has shown that this linguistic discrimination may, in the worst cases, lead to bullying and violence. However, linguistic discrimination does not always take the form of negative bias or claims of superiority. A mixed pronunciation of different languages may provoke mixed reactions: some native speakers find these accents distinctive and appealing, while others are unfriendly toward such speakers. Nonetheless, all of these are stereotypes of certain languages and may lead to cognitive bias. Former president Donald Trump's wife, Melania Trump, was harshly mocked and insulted on the internet for her Slovenian-accented English.[18] In fact, in many countries where English is the lingua franca, accent is a part of identity.[19]
The impacts of colonization on linguistic traditions vary based on the form of colonization experienced: trader, settler or exploitation.[20] Congolese-American linguist Salikoko Mufwene describes trader colonization as one of the earliest forms of European colonization. In regions such as the western coast of Africa as well as the Americas, trade relations between European colonizers and indigenous peoples led to the development of pidgin languages.[20] Some of these languages, such as Delaware Pidgin and Mobilian Jargon, were based on Native American languages, while others, such as Nigerian Pidgin and Cameroonian Pidgin, were based on European ones.[21] As trader colonization proceeded mainly via these hybrid languages, rather than the languages of the colonizers, scholars like Mufwene contend that it posed little threat to indigenous languages.[21]
Trader colonization was often followed by settler colonization, where European colonizers settled in these colonies to build new homes.[20] Hamel, a Mexican linguist, argues that "segregation" and "integration" were the two primary ways through which settler colonists engaged with aboriginal cultures.[22] In countries such as Uruguay, Brazil, Argentina, and those in the Caribbean, segregation and genocide decimated indigenous societies.[22] Widespread death due to war and illness caused many indigenous populations to lose their indigenous languages.[20] In contrast, in countries that pursued policies of "integration", such as Mexico, Guatemala and the Andean states, indigenous cultures were lost as aboriginal tribes mixed with colonists.[22] In these countries, the establishment of new European orders led to the adoption of colonial languages in governance and industry.[20] In addition, European colonists viewed the dissolution of indigenous societies and traditions as necessary for the development of a unified nation state.[22] This led to efforts to destroy tribal languages and cultures: in Canada and the United States, for example, Native children were sent to boarding schools such as Col. Richard Pratt's Carlisle Indian Industrial School.[20][23] Today, in countries such as the United States, Canada and Australia, which were once settler colonies, indigenous languages are spoken by only a small minority of the populace.
Several postcolonial literary theorists have drawn a link between linguistic discrimination and the oppression of indigenous cultures. Prominent Kenyan author Ngugi wa Thiong'o, for example, argues in his book Decolonising the Mind that language is both a medium of communication and a carrier of culture.[25] As a result, linguistic discrimination resulting from colonization has facilitated the erasure of pre-colonial histories and identities.[25] For example, African slaves were taught English and forbidden to use their indigenous languages, severing their linguistic and thus cultural connection to Africa.[25]
In contrast to settler colonies, in exploitation colonies education in colonial tongues was accessible only to a small indigenous elite.[26] Both the British Macaulay Doctrine and the French and Portuguese systems of assimilation, for example, sought to create an "elite class of colonial auxiliaries" who could serve as intermediaries between the colonial government and the local populace.[26] As a result, fluency in colonial languages became a signifier of class in colonized lands.[citation needed]
In postcolonial states, linguistic discrimination continues to reinforce notions of class. In Haiti, for example, working-class Haitians predominantly speak Haitian Creole, while members of the local bourgeoisie are able to speak both French and Creole.[27] Members of this local elite frequently conduct business and politics in French, thereby excluding many of the working class from such activities.[27] In addition, D. L. Sheath, an advocate for the use of indigenous languages in India, writes that the Indian elite associates nationalism with a unitary identity, and in this context "uses English as a means of exclusion and an instrument of cultural hegemony".[28]
Class disparities in postcolonial nations are often reproduced through education. In countries such as Haiti, schools attended by the bourgeoisie are usually of higher quality and use colonial languages as their medium of instruction, while schools attended by the rest of the population are often taught in Haitian Creole.[27] Scholars such as Hebblethwaite argue that Creole-based education would improve learning, literacy and socioeconomic mobility in a country where 95% of the population is monolingual in Creole.[29] However, the resulting disparities in colonial-language fluency and educational quality can impede social mobility.[27]
On the other hand, areas such as French Guiana have chosen to teach colonial languages in all schools, often to the exclusion of local indigenous languages.[30] As colonial languages were viewed by many as the "civilized" tongues, being "educated" often meant being able to speak and write in them.[30] Indigenous-language education was often seen as an impediment to achieving fluency in these colonial languages, and was thus deliberately suppressed.[30]
Certain Commonwealth nations such as Uganda and Kenya have historically had a policy of teaching in indigenous languages and only introducing English in the upper grades.[31] This policy was a legacy of the "dual mandate" as conceived by Lord Lugard, a British colonial administrator in Nigeria.[31] By the post-war period, however, English was increasingly viewed as a necessary skill for accessing professional employment and better economic opportunities.[31][32] As a result, there was increasing support amongst the populace for English-based education, which Kenya's Ministry of Education adopted post-independence, and Uganda adopted following its civil war. Later, members of the Ominde Commission in Kenya expressed the need for Kiswahili in promoting a national and pan-African identity. Kenya therefore began to offer Kiswahili as a compulsory, non-examinable subject in primary school, but it remained secondary to English as the medium of instruction.[31]
While the mastery of colonial languages may provide better economic opportunities, the Convention against Discrimination in Education[33] and the UN Convention on the Rights of the Child also state that minority children have the right to "use [their] own [languages]". The suppression of indigenous languages within the education system appears to contravene these treaties.[34][35] In addition, children who speak indigenous languages can be disadvantaged when educated in foreign languages, and often have high illiteracy rates. For example, when the French arrived to "civilize" Algeria, which included imposing French on local Algerians, the literacy rate in Algeria was over 40%, higher than that in France at the time. However, when the French left in 1962, the literacy rate in Algiers was at best 10–15%.[36]
As colonial languages are used as the languages of governance and commerce in many colonial and postcolonial states,[37] locals who speak only indigenous languages can be disenfranchised. By forcing the locals to speak the colonizers' language, colonizers could assimilate the indigenous people and hold their colonies longer. For example, when representative institutions were introduced to the Algoma region in what is now modern-day Canada, the local returning officer only accepted the votes of individuals who were enfranchised, which required indigenous peoples to "read and write fluently... [their] own and another language, either English or French".[38] This caused political parties to increasingly identify with settler perspectives rather than indigenous ones.[38]
Setting language restrictions was a common approach among colonizers. In 1910, the Japanese government enacted decrees in colonial Korea aimed at eliminating existing Korean culture and language, and all schools were required to teach Japanese and Hanja. By doing so, the Japanese government made Korea more dependent on Japan and prolonged its colonization.
Even today, many postcolonial states continue to use colonial languages in their public institutions, even though these languages are not spoken by the majority of their residents.[39] For example, the South African justice system still relies primarily on English and Afrikaans, even though most South Africans, particularly Black South Africans, speak indigenous languages.[40] In these situations, the use of colonial languages can present barriers to participation in public institutions.
Linguistic discrimination is often defined in terms of language prejudice. Although there is a relationship between prejudice and discrimination, they are not always directly related.[41] Prejudice can be defined as negative attitudes towards a person based on their membership of a social group, whereas discrimination can be seen as the acts towards them. The difference between the two should be recognized because prejudice may be held against someone without being acted on.[42] The following are examples of linguistic prejudice which may result in discrimination.
While, theoretically, any speaker may be the victim of linguicism regardless of social and ethnic status, oppressed and marginalized social minorities are often its most consistent targets, because the speech varieties associated with such groups have a tendency to be stigmatized.
Canada was first colonized by French settlers. The British later took control of Canada, but the influence of French culture and language remained enormous. Historically, the Canadian government and English Canadians have discriminated against Canada's French-speaking population; during some periods in the history of Canada, they treated its members as second-class citizens and favored the members of the more powerful English-speaking population. This form of discrimination has resulted in or contributed to many developments in Canadian history, including the rise of the Quebec sovereignty movement, Quebecois nationalism, the Lower Canada Rebellion, the Red River Rebellion, a proposed Acadia province, extreme poverty and low socio-economic status among the French Canadian population, low francophone graduation rates as a result of the outlawing of francophone schools across Canada, differences in average earnings between francophones and anglophones in the same positions, and fewer chances of being hired or promoted for francophones, among other effects.
The Charter of the French Language, first established in 1977 and amended several times since, has been accused of being discriminatory by English speakers.[citation needed] The law makes French the official language of Quebec and mandates its use (with exceptions) in government offices and communiqués, schools, and commercial public relations. The law is a way of preventing linguistic discrimination against the majority francophone population of Quebec, who were for a very long time controlled by the English minority of the province. The law also seeks to protect French against the growing social and economic dominance of English. Though the English-speaking population had been shrinking since the 1960s, the decline was hastened by the law, and the 2006 census showed a net loss of 180,000 native English speakers.[43] Despite this, speaking English at work continues to be strongly correlated with higher earnings, with French-only speakers earning significantly less.[44] The law is credited with successfully raising the status of French in a predominantly English-speaking economy, and it has been influential in countries facing similar circumstances.[43] However, amendments have made it less powerful under pressure from society and thus less effective than it was in the past.[45]
The linguistic disenfranchisement rate in the EU can vary significantly across countries. For residents in two EU countries who are either native speakers of English or proficient in English as a foreign language, the disenfranchisement rate is equal to zero. In his study "Multilingual communication for whom? Language policy and fairness in the European Union", Michele Gazzola comes to the conclusion that the current multilingual policy of the EU is not, in absolute terms, the most effective way to inform Europeans about the EU; in certain countries, additional languages may be useful to minimize linguistic exclusion.[46]
In the 24 countries examined, an English-only language policy would exclude 51% to 90% of adult residents. A language regime based on English, French and German would disenfranchise 30% to 56% of residents, whereas a regime based on six languages would bring the share of the excluded population down to 9–22%. After Brexit, the rates of linguistic exclusion associated with a monolingual policy and with trilingual and hexalingual regimes are likely to increase.[46]
Here and elsewhere the terms 'standard' and 'non-standard' make analysis of linguicism difficult. These terms are used widely by linguists and non-linguists alike when discussing varieties of American English that engender strong opinions, a false dichotomy which is rarely challenged or questioned. This has been interpreted by linguists Nikolas Coupland, Rosina Lippi-Green, and Robin Queen (among others) as a discipline-internal lack of consistency which undermines progress; if linguists themselves cannot move beyond the ideological underpinnings of 'right' and 'wrong' in language, there is little hope of advancing a more nuanced understanding in the general population.[64][65]
Because some black Americans speak a particular non-standard variety of English which is often seen as substandard, they are often targets of linguicism.[66] AAVE is often perceived by members of mainstream American society as indicative of low intelligence or limited education, and as with many other non-standard dialects and especially creoles, it is usually called "lazy" or "bad" English. According to research, AAVE originated as a language that black people in America used to clearly express the life of oppression.[67] Listeners often report that it is more difficult to understand and respond to an AAVE speaker.[68]
AAVE usually contains words and phrases that have a different meaning from their original meaning in standard English. Pronunciation also differs from standard English, and some phrases require sufficient cultural background to understand. Grammatically, AAVE shows more complex structures that allow speakers to express a wider range of meanings with more specificity.[69]
The linguist John McWhorter has described this particular form of linguicism as particularly problematic in the United States, where non-standard linguistic structures are often deemed "incorrect" by teachers and potential employers, in contrast to other countries such as Morocco, Finland and Italy where diglossia (the ability to switch between two or more dialects or languages) is an accepted norm, and non-standard usage in conversation is seen as a mark of regional origin, not of intellectual capacity or achievement.
In the 1977 Ann Arbor court case, AAVE was compared against standard English to determine how much of an education barrier existed for children who had been primarily raised with AAVE. The assigned linguists determined that the differences, stemming from a history of racial segregation, were significant enough for the children to receive supplementary teaching to better understand standard English.[70]
For example, a black American who uses a typical AAVE sentence such as "He be comin' in every day and sayin' he ain't done nothing" may be judged as having a deficient command of grammar, whereas, in fact, such a sentence is constructed based on a complex grammar which is different from that of standard English, not a degenerate form of it.[71] A listener may misjudge the user of such a sentence to be unintellectual or uneducated. The speaker may be intellectually capable, educated, and proficient in standard English, but have chosen to say the sentence in AAVE for social and sociolinguistic reasons such as the intended audience of the sentence, a phenomenon known as code-switching. AAVE is now distinct and systematic enough to be regarded as a variety in its own right, one that derives from English but has developed its own complexity shaped by African American culture and history. Nonetheless, AAVE is generally reserved for informal situations, and it is not uncommon for AAVE speakers to use formal, standard English in formal situations.
Reports have shown that black workers who sound more "black" earn on average 12% less than their peers (data from 2009).[72] In education, students who speak AAVE are often told by their teachers that AAVE is not proper or correct. According to a survey, when a person speaks in AAVE, listeners tend to believe that the speaker is an African American from North America and to associate the speaker with adjectives such as poor, uneducated, and unintelligent.[73] Merely by sounding black, a person may be assumed to fit a certain image.
Furthermore, the legal system in the United States has been found to produce worse outcomes for speakers of AAVE. Court reporters are less accurate at transcribing black speakers,[74] and judges can misinterpret the meaning of black speech in cases.[75]
Another form of linguicism is evidenced by the following: in some parts of the United States, a person who has a strong Spanish accent and uses only simple English words may be thought of as poor, poorly educated, and possibly an undocumented immigrant. However, if the same person has a diluted accent or no noticeable accent at all and can use a myriad of words in complex sentences, they are likely to be perceived as more successful, better educated, and a "legitimate citizen". Perception of an accent involves two parties, the speaker and the listener. Some listeners may perceive an accent as strong because they are not used to hearing it, for instance when the emphasis falls on an unexpected syllable, while others find the same accent soft and barely perceptible. The bias and discrimination that ensue are tied to the difficulty the listener has in understanding that accent. The fact that the person uses a very broad vocabulary creates even more cognitive dissonance on the part of the listener, who will immediately think of the speaker as either undocumented, poor, uneducated, or even insulting to their intelligence.
Linguistic discrimination against Asians remains an understudied topic. One scholar recounted the experience of an Asian reporter who was asked whether she could speak English every time she met a stranger; everyone assumed that she might not understand English because she had an Asian appearance.[78] In a Pew Research study done in 2022, around 59% of Asian immigrants were found to speak fluent English.[79] The proportion is much lower for new immigrants. This low English literacy level and lack of translation services discourage many Asian immigrants from obtaining access to social services, such as health care. Asian immigrants, especially younger students, experience a language barrier and are forced to learn a new language.[80]
Chinglish is a common point of attack. It is a mixture of Chinese phrases or grammar and English that encompasses the way Chinese immigrants speak, often accompanied by a Chinese accent. An example would be "Open the light," since "open" and "turn on" are the same word ("开") in Chinese. Another example would be "Yes, I have."[81] This is a literal translation from Chinese to English, and such patterns are hard for Chinese learners of English to unlearn quickly. Speaking Chinglish may result in racial discrimination, even though it merely reflects differences between Chinese and English grammar.
Users of American Sign Language (ASL) have faced linguistic discrimination based on the perception of the legitimacy of signed languages compared to spoken languages. This attitude was explicitly expressed in the Milan Conference of 1880, which set a precedent for public opinion of manual forms of communication, including ASL, creating lasting consequences for members of the Deaf community.[82] The conference almost unanimously (save a handful of allies such as Thomas Hopkins Gallaudet) reaffirmed the use of oralism, instruction conducted exclusively in spoken language, as the preferred education method for Deaf individuals.[83] These ideas were outlined in eight resolutions which ultimately resulted in the removal of Deaf individuals from their own educational institutions, leaving generations of Deaf persons to be educated single-handedly by hearing individuals.[84]
Due to misconceptions about ASL, it was not recognized as its own, fully functioning language until recently. In the 1960s, linguist William Stokoe proved ASL to be its own language based on its unique structure and grammar, separate from that of English. Before this, ASL was thought to be merely a collection of gestures used to represent English. Because of its use of visual space, it was mistakenly believed that its users are of a lesser mental capacity. The misconception that ASL users are incapable of complex thought was prevalent, although this has decreased as further studies recognizing it as a language have taken place. For example, ASL users faced overwhelming discrimination for the supposedly "lesser" language that they use and were met with condescension especially when using their language in public.[85] Another way discrimination against ASL is evident is how, despite research conducted by linguists like Stokoe or Clayton Valli and Ceil Lucas of Gallaudet University, ASL is not always recognized as a language.[86] Its recognition is crucial both for those learning ASL as an additional language, and for prelingually deaf children who learn ASL as their first language. Linguist Sherman Wilcox concludes that, given that it has a body of literature and international scope, to single ASL out as unsuitable for a foreign language curriculum is inaccurate. Russell S. Rosen also writes about government and academic resistance to acknowledging ASL as a foreign language at the high school or college level, which Rosen believes often resulted from a lack of understanding about the language. Rosen's and Wilcox's conclusions both point to the discrimination ASL users face regarding its status as a language, which, although decreasing over time, is still present.[87]
In the medical community, there is immense bias against deafness and ASL, stemming from the belief that spoken languages are superior to sign languages.[88] Because 90% of deaf babies are born to hearing parents, who are usually unaware of the existence of the Deaf community, they often turn to the medical community for guidance.[89] Medical and audiological professionals, who are typically biased against sign languages, encourage parents to get a cochlear implant for their deaf child so that the child can use spoken language.[88] Research shows, however, that deaf children without cochlear implants acquire ASL with much greater ease than deaf children with cochlear implants acquire spoken English. In addition, medical professionals discourage parents from teaching ASL to their deaf child to avoid compromising their English,[90] although research shows that learning ASL does not interfere with a child's ability to learn English. In fact, the early acquisition of ASL proves to be useful to the child in learning English later on. When making a decision about cochlear implantation, parents are not properly educated about the benefits of ASL or the Deaf community.[89] This is seen by many members of the Deaf community as cultural and linguistic genocide.[90]
Linguicism applies to written, spoken, or signed languages. The quality of a book or article may be judged by the language in which it is written. In the scientific community, for example, those who evaluated a text in two language versions, English and the national Scandinavian language, rated the English-language version as being of higher scientific content.[116]
The Internet operates a great deal using written language. Readers of a web page, Usenet group, forum post, or chat session may be more inclined to take the author seriously if the language is written in accordance with the standard language.
In contrast to the previous examples of linguistic prejudice, linguistic discrimination involves the actual treatment of individuals based on their use of language. Examples may be clearly seen in the workplace, in marketing, and in education systems. For example, some workplaces enforce an English-only policy, which is part of an American political movement that pushes for English to be accepted as the official language. In the United States, Titles VI and VII of the Civil Rights Act of 1964 protect non-native speakers from discrimination in the workplace based on their national origin or use of dialect. There are state laws which also address the protection of non-native speakers, such as the California Fair Employment and Housing Act. However, industries often argue in response that clear, understandable English is needed in specific work settings in the U.S.[2]
Glottopolitics is a sociolinguistic concept coined by Jean-Baptiste Marcellesi and Louis Guespin.
It may be defined as any action taken by society to manage language interaction. Glottopolitics is constantly at work; it is a continuum that ranges from minuscule acts to considerable interventions, ultimately concerning language itself: promotion, prohibition, change of status, etc. There can be no social community without glottopolitics.[1] It is a social practice from which no one can escape (people "do glottopolitics without knowing it", whether they are ordinary citizens or ministers of the economy).[2]
Identification, friend or foe (IFF) is a combat identification system designed for command and control. It uses a transponder that listens for an interrogation signal and then sends a response that identifies the broadcaster. IFF systems usually use radar frequencies, but other electromagnetic frequencies, radio or infrared, may be used.[1] It enables military and civilian air traffic control interrogation systems to identify aircraft, vehicles or forces as friendly, as opposed to neutral or hostile, and to determine their bearing and range from the interrogator. IFF is used by both military and civilian aircraft. IFF was first developed during World War II, with the arrival of radar and several friendly fire incidents.
IFF can only positively identify friendly aircraft or other forces.[2][3][4][5] If an IFF interrogation receives no reply or an invalid reply, the object is not positively identified as foe; friendly forces may not properly reply to IFF for various reasons, such as equipment malfunction, and parties in the area not involved in the combat, such as civilian light general aviation aircraft, may not carry a transponder.
IFF is a tool within the broader military action of combat identification (CID), the characterization of objects detected in the field of combat sufficiently accurately to support operational decisions. The broadest characterization is that of friend, enemy, neutral, or unknown. CID not only can reduce friendly fire incidents, but also contributes to overall tactical decision-making.[6]
With the successful deployment of radar systems for air defence during World War II, combatants were immediately confronted with the difficulty of distinguishing friendly aircraft from hostile ones; by that time, aircraft were flown at high speed and altitude, making visual identification impossible, and the targets showed up as featureless blips on the radar screen. This led to incidents such as the Battle of Barking Creek over Britain,[7][8][9] and the air attack on the fortress of Koepenick over Germany.[10][11]
Even before the deployment of their Chain Home radar system (CH), the RAF had considered the problem of IFF. Robert Watson-Watt had filed patents on such systems in 1935 and 1936. By 1938, researchers at Bawdsey Manor began experiments with "reflectors" consisting of dipole antennas tuned to resonate at the primary frequency of the CH radars. When a pulse from the CH transmitter hit the aircraft, the antennas would resonate for a short time, increasing the amount of energy returned to the CH receiver. The antenna was connected to a motorized switch that periodically shorted it out, preventing it from producing a signal. This caused the return on the CH set to periodically lengthen and shorten as the antenna was turned on and off. In practice, the system was found to be too unreliable to use; the return was highly dependent on the direction the aircraft was moving relative to the CH station, and often returned little or no additional signal.[12]
It had been suspected from the start that this system would be of little use in practice. When that turned out to be the case, the RAF turned to an entirely different system that was also being planned. This consisted of a set of tracking stations using HF/DF radio direction finders. The aircraft's voice communications radios were modified to send out a 1 kHz tone for 14 seconds every minute, allowing the stations ample time to measure the aircraft's bearing. Several such stations were assigned to each "sector" of the air defence system, and sent their measurements to a plotting station at sector headquarters, which used triangulation to determine the aircraft's location. Known as "pip-squeak", the system worked, but was labour-intensive and did not display its information directly to the radar operators; the information had to be forwarded to them over the telephone. A system that worked directly with the radar was clearly desirable.[13]
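The plotting station's triangulation step can be illustrated with a small sketch. The station positions, bearings, and function name here are all hypothetical; the point is only that two bearing lines from known positions intersect in a single fix.

```python
import math

def triangulate(p1, brg1, p2, brg2):
    """Estimate a position from two DF stations' bearings.

    p1, p2: (x, y) station positions in km (x east, y north);
    brg1, brg2: bearings in degrees clockwise from north.
    """
    # Convert compass bearings to unit direction vectors.
    d1 = (math.sin(math.radians(brg1)), math.cos(math.radians(brg1)))
    d2 = (math.sin(math.radians(brg2)), math.cos(math.radians(brg2)))
    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique fix")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Station A at the origin sees the aircraft on bearing 045;
# station B 10 km east sees it on bearing 315.
print(triangulate((0, 0), 45.0, (10, 0), 315.0))  # -> approximately (5.0, 5.0)
```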
The first active IFF transponder (transmitter/responder) was the IFF Mark I, which was used experimentally in 1939. This used a regenerative receiver, which fed a small amount of the amplified output back into the input, strongly amplifying even small signals as long as they were of a single frequency (like Morse code, but unlike voice transmissions). They were tuned to the signal from the CH radar (20–30 MHz), amplifying it so strongly that it was broadcast back out of the aircraft's antenna. Since the signal was received at the same time as the original reflection of the CH signal, the result was a lengthened "blip" on the CH display which was easily identifiable. In testing, it was found that the unit would often overpower the radar or produce too little signal to be seen, and at the same time, new radars were being introduced using new frequencies.
Instead of putting the Mark I into production, a new IFF Mark II was introduced in early 1940. The Mark II had a series of separate tuners inside, tuned to different radar bands, that it stepped through using a motorized switch, while an automatic gain control solved the problem of it sending out too much signal. The Mark II was technically complete as the war began, but a lack of sets meant it was not available in quantity, and only a small number of RAF aircraft carried it by the time of the Battle of Britain. Pip-squeak was kept in operation during this period, but as the Battle ended, IFF Mark II was quickly put into full operation. Pip-squeak was still used for areas over land where CH did not cover, as well as for an emergency guidance system.[14]
Even by 1940 the complex system of the Mark II was reaching its limits, while new radars were constantly being introduced. By 1941, a number of sub-models were introduced that covered different combinations of radars: common naval ones, for instance, or those used by the RAF. But the introduction of radars based on the microwave-frequency cavity magnetron rendered this approach obsolete; there was simply no way to make a responder operating in this band using contemporary electronics.
In 1940, English engineer Freddie Williams had suggested using a single separate frequency for all IFF signals, but at the time there seemed no pressing need to change the existing system. With the introduction of the magnetron, work on this concept began at the Telecommunications Research Establishment as the IFF Mark III. This was to become the standard for the Western Allies for most of the war.
Mark III transponders were designed to respond to specific 'interrogators', rather than replying directly to received radar signals. These interrogators worked on a limited selection of frequencies, no matter what radar they were paired with. The system also allowed limited communication to be made, including the ability to transmit a coded 'Mayday' response. The IFF sets were designed and built by Ferranti in Manchester to Williams' specifications. Equivalent sets were manufactured in the US, initially as copies of British sets, so that allied aircraft would be identified upon interrogation by each other's radar.[14]
IFF sets were obviously highly classified. Thus, many of them were wired with explosives in the event the aircrew bailed out or crash landed. Jerry Proc reports:
Alongside the switch to turn on the unit was the IFF destruct switch to prevent its capture by the enemy. Many a pilot chose the wrong switch and blew up his IFF unit. The thud of a contained explosion and the acrid smell of burning insulation in the cockpit did not deter many pilots from destroying IFF units time and time again. Eventually, the self-destruct switch was secured by a thin wire to prevent its accidental use.[15]
FuG 25a Erstling (English: Firstborn, Debut) was developed in Germany in 1940. It was tuned to the low-VHF band at 125 MHz used by the Freya radar, and an adaptor was used for the low-UHF band at 550–580 MHz used by Würzburg. Before a flight, the transceiver was set up with a selected day code of ten bits which was dialed into the unit. To start the identification procedure, the ground operator switched the pulse frequency of his radar from 3,750 Hz to 5,000 Hz. The airborne receiver decoded that and started to transmit the day code. The radar operator would then see the blip lengthen and shorten in the given code. The IFF transmitter worked on 168 MHz with a power of 400 watts (PEP).
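The trigger logic described above can be sketched as follows. The function name, the threshold test, and the example day code are illustrative only, not taken from the actual equipment; the sketch merely shows the idea of replying with a pre-dialed ten-bit code once the ground radar raises its pulse rate.

```python
def fug25a_reply(pulse_rate_hz, day_code):
    """Reply with the 10-bit day code only when the ground radar has
    switched its pulse rate from 3,750 Hz up to 5,000 Hz (illustrative
    threshold); otherwise stay silent."""
    if pulse_rate_hz < 5000:
        return None  # normal radar operation, no interrogation
    # Key out the ten dialed-in bits, most significant first.
    return [int(b) for b in format(day_code & 0x3FF, "010b")]

print(fug25a_reply(3750, 0b1011001110))  # -> None (no interrogation)
print(fug25a_reply(5000, 0b1011001110))  # -> the ten bits of the day code
```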
The system included a way for ground controllers to determine whether an aircraft had the right code or not, but it did not include a way for the transponder to reject signals from other sources. British military scientists found a way of exploiting this by building their own IFF interrogator called Perfectos, which was designed to trigger a response from any FuG 25a system in the vicinity. When a FuG 25a responded on its 168 MHz frequency, the signal was received by the antenna system of an AI Mk. IV radar, which originally operated at 212 MHz. By comparing the strength of the signal on different antennas, the direction to the target could be determined. Mounted on Mosquitos, the "Perfectos" severely limited German use of the FuG 25a.
The United States Naval Research Laboratory had been working on its own IFF system since before the war. It used a single interrogation frequency, like the Mark III, but differed in that it used a separate responder frequency. Responding on a different frequency has several practical advantages, most notably that the response from one IFF cannot trigger another IFF on another aircraft. But it requires a complete transmitter for the responder side of the circuitry, in contrast to the greatly simplified regenerative system used in the British designs. This technique is now known as a cross-band transponder.
When the Mark II was revealed in 1941 during the Tizard Mission, it was decided to use it and take the time to further improve the experimental US system. The result was what became IFF Mark IV. The main difference between this and earlier models is that it worked on higher frequencies, around 600 MHz, which allowed much smaller antennas. However, this also turned out to be close to the frequencies used by the German Würzburg radar, and there were concerns that the transponder would be triggered by that radar and its responses would be picked up on that radar's display. This would immediately reveal the IFF's operational frequencies.
This led to a US–British effort to make a further improved model, the Mark V, also known as the United Nations Beacon or UNB. This moved to still higher frequencies around 1 GHz but operational testing was not complete when the war ended. By the time testing was finished in 1948, the much improved Mark X was beginning its testing and Mark V was abandoned.
By 1943, Donald Barchok had filed a patent for a radar system using the abbreviation IFF in his text with only a parenthetic explanation, indicating that this acronym had become an accepted term.[16] In 1945, Emile Labin and Edwin Turner filed patents for radar IFF systems where the outgoing radar signal and the transponder's reply signal could each be independently programmed with binary codes by setting arrays of toggle switches; this allowed the IFF code to be varied from day to day or even hour to hour.[17][18]
Mark X started as a purely experimental device operating at frequencies above 1 GHz; the name refers to "experimental", not "number 10". As development continued, it was decided to introduce an encoding system known as the "Selective Identification Feature", or SIF. SIF allowed the return signal to contain up to 12 pulses, representing four octal digits of 3 bits each. Depending on the timing of the interrogation signal, SIF would respond in several ways. Mode 1 indicated the type of aircraft or its mission (cargo or bomber, for instance), while Mode 2 returned a tail code.
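The SIF reply format just described, four octal digits carried as twelve pulses, can be sketched in a few lines. The function name is made up, and the real over-the-air interleaving of the pulse groups is omitted; this only shows the digit-to-bits mapping.

```python
def sif_reply_bits(code):
    """Encode a four-digit octal code (e.g. "1200") as 12 reply bits,
    one 3-bit group per octal digit, most significant bit first."""
    assert len(code) == 4 and all(c in "01234567" for c in code)
    bits = []
    for digit in code:
        bits += [int(b) for b in format(int(digit), "03b")]
    return bits

print(sif_reply_bits("1200"))  # -> [0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
```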
Mark X began to be introduced in the early 1950s. This was during a period of great expansion of the civilian air transport system, and it was decided to use slightly modified Mark X sets for these aircraft as well. These sets included a new military Mode 3 which was essentially identical to Mode 2, returning a four-digit code, but used a different interrogation pulse, allowing the aircraft to identify if the query was from a military or civilian radar. For civilian aircraft, this same system was known as Mode A, and because they were identical, they are generally known as Mode 3/A.
Several new modes were also introduced during this process. Civilian modes B and D were defined, but never used. Mode C responded with a 12-bit number encoded using Gillham code, representing the altitude as (that number × 100 feet) − 1,200 feet. Radar systems can easily locate an aircraft in two dimensions, but measuring altitude is a more complex problem which, especially in the 1950s, added significantly to the cost of the radar system. By placing this function on the IFF, the same information could be returned for little additional cost, essentially that of adding a digitizer to the aircraft's altimeter.
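The Mode C altitude arithmetic works out as in this short sketch. The function names are illustrative, and the bit-level Gillham (Gray-coded) decoding of the twelve pulses is deliberately omitted; only the value-to-altitude relation from the text is shown.

```python
def mode_c_altitude_ft(decoded):
    """Pressure altitude represented by a decoded Mode C value:
    altitude = value * 100 ft - 1,200 ft."""
    return decoded * 100 - 1200

def mode_c_value(altitude_ft):
    """Inverse mapping, rounding down to the 100 ft increment."""
    return (altitude_ft + 1200) // 100

print(mode_c_altitude_ft(mode_c_value(35000)))  # -> 35000
print(mode_c_altitude_ft(0))                    # -> -1200 (lowest encodable)
```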
Modern interrogators generally send out a series of challenges on Mode 3/A and then Mode C, allowing the system to combine the identity of the aircraft with its altitude and location from the radar.
The current IFF system is the Mark XII. This works on the same frequencies as Mark X, and supports all of its military and civilian modes.[citation needed]
It had long been considered a problem that IFF responses could be triggered by any properly formed interrogation, since those signals were simply two short pulses of a single frequency. This allowed enemy transmitters to trigger the response and, using triangulation, determine the location of the transponder. The British had already used this technique against the Germans during WWII, and it was used by the USAF against VPAF aircraft during the Vietnam War.
Mark XII differs from Mark X through the addition of the new military Mode 4. This works in a fashion similar to Mode 3/A, with the interrogator sending out a signal that the IFF responds to. There are two key differences, however.
One is that the interrogation pulse is followed by a 12-bit code similar to the ones sent back by Mode 3 transponders. The encoded number changes from day to day. When the number is received and decoded in the aircraft transponder, a further cryptographic encoding is applied. If the result of that operation matches the value dialled into the IFF in the aircraft, the transponder replies with a Mode 3 response as before. If the values do not match, it does not respond.
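A Mode-4-style challenge-response can be sketched as follows. The actual Mode 4 cryptography is classified, so HMAC-SHA-256 stands in here as a hypothetical keyed transform; the function name, the daily key handling, and the 12-bit truncation are illustrative assumptions, not the real algorithm:

```python
import hashlib
import hmac


def mode4_response(challenge: int, daily_key: bytes, dialled_code: int) -> bool:
    """Hypothetical sketch of a Mode-4-style reply decision.

    The transponder applies a keyed cryptographic transform to the
    12-bit challenge and replies only if the result matches the value
    dialled in by the crew. An interrogator without the daily key
    cannot construct a challenge that elicits a reply.
    """
    digest = hmac.new(daily_key, challenge.to_bytes(2, "big"), hashlib.sha256)
    expected = int.from_bytes(digest.digest()[:2], "big") & 0xFFF  # keep 12 bits
    return expected == dialled_code
```

The interrogator computes the same transform on its side; a transponder holding the wrong key, or with the wrong code dialled in, simply stays silent.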
This solves the problem of the aircraft transponder replying to false interrogations, but does not completely solve the problem of locating the aircraft through triangulation. To solve this problem, a delay is added to the response signal that varies based on the code sent from the interrogator. When received by an enemy that does not see the interrogation pulse, which is generally the case as they are often below the radar horizon, this causes a random displacement of the return signal with every pulse. Locating the aircraft within the set of returns is a difficult process.
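The keyed delay can be sketched in the same hypothetical terms: both sides derive the same jitter from the challenge and the shared key, so the interrogator can subtract it, while an eavesdropper who never saw the interrogation observes only pulse-to-pulse displacement. The hash, units, and delay range below are illustrative assumptions:

```python
import hashlib


def response_delay_us(challenge: int, daily_key: bytes) -> int:
    """Hypothetical sketch: derive a reply delay (microseconds) from the
    challenge code and the shared daily key.

    Deterministic for the two legitimate parties, but effectively
    random to an observer who cannot see the challenge, which defeats
    simple triangulation on reply timing.
    """
    h = hashlib.sha256(daily_key + challenge.to_bytes(2, "big")).digest()
    return h[0] % 32  # e.g. 0-31 microseconds of keyed jitter
```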
During the 1980s, a new civilian mode, Mode S, was added that allowed greatly increased amounts of data to be encoded in the returned signal. This was used to encode the location of the aircraft from the navigation system. This is a basic part of the traffic collision avoidance system (TCAS), which allows commercial aircraft to know the location of other aircraft in the area and avoid them without the need for ground operators.
The basic concepts from Mode S were then militarized as Mode 5, which is simply a cryptographically encoded version of the Mode S data.
The IFF of World War II and Soviet military systems (1946 to 1991) used coded radar signals (called cross-band interrogation, or CBI) to automatically trigger the transponder in an aircraft illuminated by the radar. Radar-based aircraft identification is also called secondary surveillance radar in both military and civil usage, with primary radar bouncing an RF pulse off the aircraft to determine position. George Charrier, working for RCA, filed for a patent for such an IFF device in 1941. It required the operator to perform several adjustments to the radar receiver to suppress the image of the natural echo on the radar receiver, so that visual examination of the IFF signal would be possible.[19]
The United States and other NATO countries started using a system called Mark XII in the late twentieth century; Britain had not implemented a compatible IFF system until then, but subsequently developed a program for a compatible system known as successor IFF (SIFF).[20]
Beginning around 2016, most NATO member states began upgrading their Mark XII systems to Mark XIIA Mode 5 where practicable. The transition from the legacy Mode 4 to Mode 5, however, has encountered several integration challenges, such as cryptographic key management, secure enrollment procedures, and ensuring interoperability with diverse legacy hardware. To mitigate these risks, backward compatibility with Mode 4 has been maintained longer than originally planned. According to DSCA Memorandum 18-14, this strategy permits a phased, mixed-mode operation until full Mode 5 fielding is achieved.[21][22]

DOT&E has overseen a series of test and evaluation efforts to ensure that Mode 5 meets NATO and DOD standards. In July 2014, DOT&E published the Mark XIIA Mode 5 Joint Operational Test Approach (JOTA) 2 Interoperability Assessment. This assessment evaluated the performance of various interrogators and transponders within a system-of-systems environment, based on a joint operational test conducted off the U.S. East Coast, and identified both successes and deficiencies that necessitated additional testing.[22]More recently, the 2021 DOT&E Mark XIIA Mode 5 Test Methodology outlined updated evaluation criteria, test procedures, and integration strategies aimed at resolving persistent issues and verifying that the new systems conform to the required operational and security standards.[23][citation needed]In accordance with STANAG 4570, it is anticipated that by 2030 every interrogator and transponder within NATO will be Mode 5 capable, further standardizing operations and enhancing overall security.[citation needed]
Modes 4 and 5 are designated for use by NATO forces.
In World War I, eight submarines were sunk by friendly fire, and in World War II nearly twenty were sunk this way.[26]Still, IFF was not regarded as a high concern by the US military before the 1990s, as not many other countries possessed submarines.[dubious–discuss][27]
IFF methods that are analogous to aircraft IFF have been deemed unfeasible for submarines because they would make submarines easier to detect. Thus, having friendly submarines broadcast a signal, or somehow increase the submarine's signature (based on acoustics, magnetic fluctuations, etc.), is not considered viable.[27]Instead, submarine IFF is done by carefully defining areas of operation. Each friendly submarine is assigned a patrol area, where the presence of any other submarine is deemed hostile and open to attack. Further, within these assigned areas, surface ships and aircraft refrain from any anti-submarine warfare (ASW); only the resident submarine may target other submarines in its own area. Ships and aircraft may still engage in ASW in areas that have not been assigned to any friendly submarines.[27]Navies also use databases of acoustic signatures to attempt to identify submarines, but acoustic data can be ambiguous, and several countries deploy similar classes of submarines.[28]
|
https://en.wikipedia.org/wiki/Identification_friend_or_foe
|
Jargon, or technical language, is the specialized terminology associated with a particular field or area of activity.[1]Jargon is normally employed in a particular communicative context and may not be well understood outside that context. The context is usually a particular occupation (that is, a certain trade, profession, vernacular or academic field), but any ingroup can have jargon. The key characteristic that distinguishes jargon from the rest of a language is its specialized vocabulary, which includes terms and definitions of words that are unique to the context, and terms used in a narrower and more exact sense than when used in colloquial language. This can lead outgroups to misunderstand communication attempts. Jargon is sometimes understood as a form of technical slang and then distinguished from the official terminology used in a particular field of activity.[2]
The terms jargon, slang, and argot are not consistently differentiated in the literature; different authors interpret these concepts in varying ways. According to one definition, jargon differs from slang in being secretive in nature;[3]according to another understanding, it is specifically associated with professional and technical circles.[4]Some sources, however, treat these terms as synonymous.[5][6]The use of jargon became more popular around the sixteenth century, attracting persons from different career paths. This led to printed copies becoming available on the various forms of jargon.[7]
Jargon, also referred to as "technical language", is "the technical terminology or characteristic idiom of a special activity or group".[8]Most jargon is technical terminology (technical terms), involving terms of art[9]or industry terms, with particular meaning within a specific industry. The primary driving forces in the creation of technical jargon are precision, efficiency of communication, and professionalism.[10]Terms and phrases that are considered jargon have meaningful definitions, and through frequency of use, can become catchwords.[11]
While jargon allows greater efficiency in communication among those familiar with it, jargon also raises the threshold of comprehensibility for outsiders.[12]This is usually accepted[citation needed]as an unavoidable trade-off, but it may also be used as a means of social exclusion (reinforcing ingroup–outgroup barriers) or social aspiration (when introduced as a way of demonstrating expertise). Some academics promote the use of jargon-free language, or plain language,[13]as an audience may be alienated or confused by the technical terminology, and thus lose track of a speaker or writer's broader and more important arguments.[14]
Some words with both a technical and a non-technical meaning are referred to as semi-technical vocabulary: for example, Chinh Ngan Nguyen Le and Julia Miller refer to colon as an anatomical term and also a punctuation mark;[15]and Derek Matravers refers to person and its plural form persons as technical language used in philosophy, where their meaning is more specific than "person" and "people" in their everyday use.[16]
The French word is believed to have been derived from the Latin word gaggire, meaning "to chatter", which was used to describe speech that the listener did not understand.[17]The word may also come from Old French jargon, meaning "chatter of birds".[17]Middle English also has the verb jargounen, meaning "to chatter" or "twittering", deriving from Old French.[18]
The first known use of the word in English is found within The Canterbury Tales, written by Geoffrey Chaucer between 1387 and 1400. Chaucer related "jargon" to the vocalizations of birds.[18]
In colonial history, jargon was seen as a device of communication to bridge the gap between two speakers who did not share a common tongue. Jargon was synonymous with pidgin in naming specific language usages. Jargon then began to acquire a negative connotation, associated with incoherent grammar or gibberish, as it was seen as a "broken" mixture of many different languages with no full community to call its own. In the 1980s, linguists began restricting this usage, keeping the word to its more common sense of a technical or specialized language use.[19]
In linguistics, it is used to mean "specialist language",[20]with the term also seen as closely related to slang, argot and cant.[21]Various kinds of language peculiar to ingroups can be named across a semantic field. Slang can be either culture-wide or known only within a certain group or subculture. Argot is slang or jargon purposely used to obscure meaning to outsiders. Conversely, a lingua franca is used for the opposite effect, helping communicators to overcome unintelligibility, as are pidgins and creole languages. For example, the Chinook Jargon was a pidgin.[22]Although technical jargon's primary purpose is to aid technical communication, not to exclude outsiders by serving as an argot, it can have both effects at once and can provide a technical ingroup with shibboleths. For example, medieval guilds could use this as one means of informal protectionism. On the other hand, jargon that once was obscure outside a small ingroup can become generally known over time. For example, the terms bit, byte, and hexadecimal (which are terms from computing jargon[23]) are now recognized by many people outside computer science.
The philosopher Étienne Bonnot de Condillac observed in 1782 that "every science requires a special language because every science has its own ideas". As a rationalist member of the Enlightenment, he continued: "It seems that one ought to begin by composing this language, but people begin by speaking and writing, and the language remains to be composed."[24]
An industry word is a specialized kind of technical terminology used in a certain industry. Industry words and phrases are often used in a specific area, and those in that field know and use the terminology.[25]
Precise technical terms and their definitions are formally recognized, documented, and taught by educators in the field. Other terms are more colloquial, coined and used by practitioners in the field, and are similar to slang. The boundaries between formal and slang jargon, as in general English, are quite fluid. This is especially true in the rapidly developing world of computers and networking. For instance, the term firewall (in the sense of a device used to filter network traffic) was at first technical slang. As these devices became more widespread and the term became widely understood, the word was adopted as formal terminology.[26]
Technical terminology evolves due to the need for experts in a field to communicate with precision and brevity but often has the effect of excluding those who are unfamiliar with the particular specialized language of the group.[27]This can cause difficulties, for example, when a patient is unable to follow the discussions of medical practitioners, and thus cannot understand his own condition and treatment. Differences in jargon also cause difficulties where professionals in related fields use different terms for the same phenomena.[28]
The use of jargon in the business world is a common occurrence. The use of jargon in business correspondence reached a high popularity between the late 1800s into the 1950s.[29]In this context, jargon is most frequently used in modes of communication such as emails, reports, and other forms of documentation.[30]Common phrases used in corporate jargon include:
Medicine professionals make extensive use of scientific terminology. Most patients encounter medical jargon when referring to their diagnosis or when receiving or reading their medication.[34]Some commonly used terms in medical jargon are:
At first glance, many people do not understand what these terms mean and may panic when they see these scientific names being used in reference to their health.[41]The argument as to whether medical jargon is a positive or negative attribute of a patient's experience has evidence to support both sides. On one hand, as mentioned before, these phrases can be overwhelming for some patients who may not understand the terminology. However, with the accessibility of the internet, it has been suggested that these terms can be used and easily researched for clarity.[34]
Jargon is commonly found in the field of law. These terms are often used in legal contexts such as legal documents, court proceedings, contracts, and more. Some common terms in this profession include:
There is specialized terminology within the field of education. Educators and administrators use these terms to communicate ideas specific to the education system. Common terms and acronyms considered to be jargon that are used within this profession include:
Jargon may serve the purpose of a "gatekeeper" in conversation, signaling who is allowed into certain forms of conversation. Jargon may serve this function by dictating the direction or depth a conversation about, or within the context of, a certain field or profession will take.[44]For example, a conversation between two professionals, in which one has little previous interaction with or knowledge of the other, could go one of at least two ways. If the unfamiliar professional does not use, or incorrectly uses, the jargon of their respective field, they are little regarded or remembered beyond small talk and treated as fairly insignificant in the conversation. If they do use particular jargon (showing their knowledge in the field to be legitimate, educated, or of particular significance), the other professional then opens the conversation up in an in-depth or professional manner.[44]The use of jargon can create a divide in communication, or strengthen it. Outside of conversation, jargon can become confusing in writing: readers can become confused by terms that require outside knowledge of the subject.[45]
Ethos is used to create an appeal to authority. It is one of the three pillars of persuasion identified by Aristotle for creating a logical argument. Ethos uses credibility to back up arguments. By using a field's specialized terms, a speaker can indicate to the audience that they are an insider, making an argument based on authority and credibility.[46]
Jargon can be used to convey meaningful information and discourse in a convenient way within communities. A subject expert may wish to avoid jargon when explaining something to a layperson. Jargon may help communicate contextual information optimally.[47]For example, a football coach talking to their team or a doctor working with nurses.[48]
With the rise of self-advocacy within the Disability Rights Movement, "jargonized" language has faced repeated rejection for being widely inaccessible.[49]However, jargon is largely present in everyday language, such as in newspapers, financial statements, and instruction manuals. To combat this, several advocacy organizations are working on influencing public agents to offer accessible information in different formats.[50]One accessible format that offers an alternative to jargonized language is "easy read", which consists of a combination of plain language[13]and images.
Criticism of jargon can be found in fields where professionals communicate with individuals who have no industry background. In a study analyzing 58 patients and 10 radiation therapists, professionals diagnosed and explained the treatment of a disease to patients using jargon. It was found that the jargon left patients confused about what the treatments and risks were, suggesting that jargon in the medical field is not the most effective way to communicate terminology and concepts.[51]
Many examples of jargon exist because of its use among specialists and subcultures alike. In the professional world, those in the business of filmmaking may use words like "vorkapich" to refer to a montage when talking to colleagues.[52]In rhetoric, rhetoricians use words like "arete" to refer to a person of power's character when speaking with one another.[53]
|
https://en.wikipedia.org/wiki/Jargon
|
Language analysis for the determination of origin (LADO) is an instrument used in asylum cases to determine the national or ethnic origin of the asylum seeker, through an evaluation of their language profile.[why?]To this end, an interview with the asylum seeker is recorded and analysed. The analysis consists of an examination of the dialectologically relevant features (e.g. accent, grammar, vocabulary and loanwords) in the speech of the asylum seeker. LADO is considered a type of speaker identification by forensic linguists.[1]LADO analyses are usually made at the request of government immigration/asylum bureaux attempting to verify asylum claims[how?], but may also be performed as part of the appeals process for claims which have been denied[why?]; they have frequently been the subject of appeals and litigation in several countries, e.g. Australia, the Netherlands and the UK.[why?]
A number of established linguistic approaches are considered to be valid methods of conducting LADO, including language variation and change,[2][3]forensic phonetics,[4]dialectology, and language assessment.[5]
The underlying assumption leading to government immigration and asylum bureaux's use of LADO is that a link exists between a person's nationality and the way they speak.[why?]To linguists, this assumption is flawed: instead, research supports links between the family and community in which a person learns their native language, and enduring features of their way of speaking it. The notion that linguistic socialization into a speech community lies at the heart of LADO has been argued for by linguists since 2004,[6]and is now accepted by a range of government agencies (e.g. Switzerland,[7]Norway[8]), academic researchers (e.g. Eades 2009,[9]Fraser 2011,[10]Maryns 2006,[11]and Patrick 2013[12]), as well as some commercial agencies (e.g. De Taalstudio, according to Verrips 2010[13]).
Since the mid-1990s, language analysis has been used to help determine the geographical origin of asylum seekers by the governments of a growing number of countries (Reath, 2004),[14]now including Australia, Austria, Belgium, Canada, Finland, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland and the United Kingdom.
Pilots have been conducted by the UK, Ireland, and Norway.[15]The UK legitimised the process in 2003; it has subsequently been criticised by immigration lawyers (see the response by the Immigration Law Practitioners' Association[16]and also Craig 2012[17]), social scientists (e.g. Campbell 2013[18]), and linguists (e.g. Patrick 2011[19]).
In the Netherlands, LADO is commissioned by the Dutch Immigration Service (IND).[20]Language analysis is used by the IND in cases where asylum seekers cannot produce valid identification documents and, in addition, the IND sees reason to doubt the claimed origin of the asylum seeker. The IND has a specialised unit (Bureau Land en Taal, or BLT; in English, the Office for Country Information and Language Analysis, or OCILA) that carries out these analyses. Challenges to BLT analyses are provided by De Taalstudio,[21]a private company that provides language analysis and contra-expertise in LADO cases. Claims and criticisms regarding the Dutch LADO processes are discussed by Cambier-Langeveld (2010),[22]the senior linguist for BLT/OCILA, and by Verrips (2010),[23]the founder of De Taalstudio. Zwaan (2008,[24]2010[25]) reviews the legal situation.
LADO reports are provided to governments in a number of ways: by their own regularly-employed linguists and/or freelance analysts; by independent academic experts; by commercial firms; or by a mixture of the above. In Switzerland language analysis is carried out by LINGUA, a specialized unit of the Federal Office for Migration, which both employs linguists and retains independent experts from around the world.[7]The German and Austrian bureaux commission reports primarily from experts within their own countries. The UK and a number of other countries have commercial contracts with providers such as the Swedish firms Sprakab[26]and Verified[27]both of which have carried out language analyses for UK Visas and Immigration (formerly UK Border Agency) and for the Dutch Immigration Service, as well as other countries around the world.
It is widely agreed that language analysis should be done by language experts. Two basic types of practitioners commonly involved in LADO can be distinguished: trained native speakers of the language under analysis, and professional linguists specialized in the language under analysis. Usually native speaker analysts are free-lance employees who are said to be under the supervision of a qualified linguist. When such analysts lack academic training in linguistics, it has been questioned whether they should be accorded the status of 'experts' by asylum tribunals, e.g. by Patrick (2012),[28]who refers to them instead as "non-expert native speakers (NENSs)". Eades et al. (2003) note that "people who have studied linguistics to professional levels [...] have particular knowledge which is not available to either ordinary speakers or specialists in other disciplines".[29]Likewise Dikker and Verrips (2004)[30]conclude that native speakers who lack training in linguistics are not able to formulate reliable conclusions regarding the origin of other speakers of their language. The nature of the training which commercial firms and government bureaux provide to their analysts has been questioned in academic and legal arenas, but few specifics have been provided to date; see however accounts by the Swiss agency Lingua[31]and Cambier-Langeveld of BLT/OCILA,[32]as well as responses to the latter by Fraser[33]and Verrips.[34]
Claims for and against the use of such native-speaker analysts, and their ability to conduct LADO satisfactorily vis-a-vis the ability of academically trained linguists, have only recently begun to be the subject of research (e.g. Wilson 2009),[35]and no consensus yet exists among linguists. While much linguistic research exists on the ability of people, including trained linguists and phoneticians and untrained native speakers, to correctly perceive, identify or label recorded speech that is played to them, almost none of the research has yet been framed in such a way that it can give clear answers to questions about the LADO context.
The matter of native-speaker analysts and many other issues are subjects of ongoing litigation in asylum tribunals and appeals courts in several countries. Vedsted Hansen (2010[36]) describes the Danish situation, Noll (2010[37]) comments on Sweden, and Zwaan (2010) reviews the Dutch situation.
In the UK, a 2010 Upper Tribunal (asylum) case known as 'RB',[38]supported by a 2012 Court of Appeal decision,[39]argued for giving considerable weight to LADO reports carried out by the methodology of native-speaker analyst plus supervising linguist. In contrast, a 2013 Scottish Court of Sessions decision known as M.Ab.N+K.A.S.Y.[40]found that all such reports must be weighed against the standard Practice Directions for expert reports. Lawyers in the latter case argued that "What matters is the lack of qualification",[41]and since the Scottish court has equal standing to the England and Wales Appeals Court, the UK Supreme Court was petitioned to address the issues. On 5–6 March 2014, the UK Supreme Court heard an appeal[42]brought by the Home Office concerning the nature of expert linguistic evidence provided to the Home Office in asylum cases, whether expert witnesses should be granted anonymity, the weight that should be given to reports by the Swedish firm Sprakab, and related matters.
Some methods of language analysis in asylum procedures have been heavily criticized by many linguists (e.g., Eades et al. 2003;[43]Arends, 2003). Proponents of the use of native-speaker analysts agree that "[earlier] LADO reports were not very satisfactory from a linguistic point of view... [while even] today's reports are still not likely to satisfy the average academic linguist".[44]Following an item on the Dutch public radio programme Argos, member of parliament De Wit of the Socialist Party presented a number of questions to the State Secretary of the Ministry of Justice regarding the reliability of LADO. The questions and the responses by the State Secretary can be found here.[45]
|
https://en.wikipedia.org/wiki/Language_analysis_for_the_determination_of_origin
|
Linguistics is the scientific study of language.[1][2][3]The areas of linguistic analysis are syntax (rules governing the structure of sentences), semantics (meaning), morphology (structure of words), phonetics (speech sounds and equivalent gestures in sign languages), phonology (the abstract sound system of a particular language, and analogous systems of sign languages), and pragmatics (how the context of use contributes to meaning).[4]Subdisciplines such as biolinguistics (the study of the biological variables and evolution of language) and psycholinguistics (the study of psychological factors in human language) bridge many of these divisions.[5]
Linguistics encompasses many branches and subfields that span both theoretical and practical applications.[6]Theoretical linguistics is concerned with understanding the universal and fundamental nature of language and developing a general theoretical framework for describing it. Applied linguistics seeks to utilize the scientific findings of the study of language for practical purposes, such as developing methods of improving language education and literacy.
Linguistic features may be studied through a variety of perspectives: synchronically (by describing the structure of a language at a specific point in time) or diachronically (through the historical development of a language over a period of time), in monolinguals or in multilinguals, among children or among adults, in terms of how it is being learnt or how it was acquired, as abstract objects or as cognitive structures, through written texts or through oral elicitation, and finally through mechanical data collection or practical fieldwork.[7]
Linguistics emerged from the field of philology, of which some branches are more qualitative and holistic in approach.[8]Today, philology and linguistics are variably described as related fields, subdisciplines, or separate fields of language study but, by and large, linguistics can be seen as an umbrella term.[9]Linguistics is also related to the philosophy of language, stylistics, rhetoric, semiotics, lexicography, and translation.
Historical linguistics is the study of how language changes over history, particularly with regard to a specific language or a group of languages. Western trends in historical linguistics date back to roughly the late 18th century, when the discipline grew out of philology, the study of ancient texts and oral traditions.[10]
Historical linguistics emerged as one of the first few sub-disciplines in the field, and was most widely practised during the late 19th century.[11]Despite a shift in focus in the 20th century towards formalism and generative grammar, which studies the universal properties of language, historical research today still remains a significant field of linguistic inquiry. Subfields of the discipline include language change and grammaticalization.
Historical linguistics studies language change either diachronically (through a comparison of different time periods in the past and present) or in a synchronic manner (by observing developments between different variations that exist within the current linguistic stage of a language).[12]
At first, historical linguistics was the cornerstone of comparative linguistics, which involves a study of the relationship between different languages.[13]At that time, scholars of historical linguistics were only concerned with creating different categories of language families, and reconstructing prehistoric proto-languages by using both the comparative method and the method of internal reconstruction. Internal reconstruction is the method by which an element that contains a certain meaning is re-used in different contexts or environments where there is a variation in either sound or analogy.[13][better source needed]
The initial reason for this was to describe the well-known Indo-European languages, many of which had detailed documentation and long written histories. Scholars of historical linguistics also studied Uralic languages, another European language family for which very little written material existed at the time. After that, there also followed significant work on the corpora of other languages, such as the Austronesian languages and the Native American language families.
In historical work, the uniformitarian principle is generally the underlying working hypothesis, occasionally also clearly expressed.[14]The principle was expressed early by William Dwight Whitney, who considered it imperative, a "must", of historical linguistics to "look to find the same principle operative also in the very outset of that [language] history."[15]
The above approach of comparativism in linguistics is now, however, only a small part of the much broader discipline called historical linguistics. The comparative study of specific Indo-European languages is considered a highly specialized field today, while comparative research is carried out over the subsequent internal developments in a language: in particular, over the development of modern standard varieties of languages, and over the development of a language from its standardized form to its varieties.[citation needed]
For instance, some scholars also tried to establish super-families, linking, for example, Indo-European, Uralic, and other language families to a hypothetical Nostratic language group.[16]While these attempts are still not widely accepted as credible methods, they provide necessary information to establish relatedness in language change. This is generally hard to find for events long ago, due to the occurrence of chance word resemblances and variations between language groups. A limit of around 10,000 years is often assumed for the functional purpose of conducting research.[17]It is also hard to date various proto-languages. Even though several methods are available, these languages can be dated only approximately.[18]
Modern historical linguistics examines how languages change over time, focusing on the relationships between dialects within a specific period. This includes the study of morphological, syntactic, and phonetic shifts. Connections between past and present dialects are also explored.[19]
Syntax is the study of how words and morphemes combine to form larger units such as phrases and sentences. Central concerns of syntax include word order, grammatical relations, constituency,[20] agreement, the nature of crosslinguistic variation, and the relationship between form and meaning. There are numerous approaches to syntax that differ in their central assumptions and goals.
Morphology is the study of words, including the principles by which they are formed, and how they relate to one another within a language.[21][22] Most approaches to morphology investigate the structure of words in terms of morphemes, which are the smallest units in a language with some independent meaning. Morphemes include roots that can exist as words by themselves, but also categories such as affixes that can only appear as part of a larger word. For example, in English the root catch and the suffix -ing are both morphemes; catch may appear as its own word, or it may be combined with -ing to form the new word catching. Morphology also analyzes how words behave as parts of speech, and how they may be inflected to express grammatical categories including number, tense, and aspect. Concepts such as productivity are concerned with how speakers create words in specific contexts, which evolves over the history of a language.
The discipline that deals specifically with the sound changes occurring within morphemes is morphophonology.
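The root-plus-affix analysis described above can be sketched in code. The following is an illustrative toy, not a real morphological analyzer: the suffix list is invented for the example, and real morphology must also handle spelling changes (e.g. "run" becoming "running").

```python
# Hypothetical suffix inventory for illustration only.
SUFFIXES = ["ing", "ed", "s"]

def segment(word):
    """Return (root, suffix) if a listed suffix matches, else (word, None)."""
    for suffix in SUFFIXES:
        root = word[: -len(suffix)]
        if word.endswith(suffix) and root:
            return (root, suffix)
    return (word, None)

print(segment("catching"))  # ('catch', 'ing')
print(segment("catch"))     # ('catch', None)
```

This mirrors the catch/-ing example: the free morpheme catch stands alone, while the bound morpheme -ing appears only attached to a root.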
Semantics and pragmatics are branches of linguistics concerned with meaning. These subfields have traditionally been divided according to aspects of meaning: "semantics" refers to grammatical and lexical meanings, while "pragmatics" is concerned with meaning in context. Within linguistics, the subfield of formal semantics studies the denotations of sentences and how they are composed from the meanings of their constituent expressions. Formal semantics draws heavily on philosophy of language and uses formal tools from logic and computer science. Cognitive semantics, on the other hand, explains linguistic meaning via aspects of general cognition, drawing on ideas from cognitive science such as prototype theory.
Pragmatics focuses on phenomena such as speech acts, implicature, and talk in interaction.[23] Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on the structural and linguistic knowledge (grammar, lexicon, etc.) of the speaker and listener, but also on the context of the utterance,[24] any pre-existing knowledge about those involved, the inferred intent of the speaker, and other factors.[25]
Phonetics and phonology are branches of linguistics concerned with sounds (or the equivalent aspects of sign languages). Phonetics is largely concerned with the physical aspects of sounds, such as their articulation, acoustics, production, and perception. Phonology is concerned with the linguistic abstractions and categorizations of sounds: which sounds occur in a language, how they do and can combine into words, and why certain phonetic features are important for identifying a word.[26]
Linguistic structures are pairings of meaning and form. Any particular pairing of meaning and form is a Saussurean linguistic sign. For instance, the meaning "cat" is represented worldwide with a wide variety of different sound patterns (in oral languages), movements of the hands and face (in sign languages), and written symbols (in written languages). Linguistic patterns have proven their importance for the knowledge engineering field, especially with the ever-increasing amount of available data.
Linguists focusing on structure attempt to understand the rules regarding language use that native speakers know (not always consciously). All linguistic structures can be broken down into component parts that are combined according to (sub)conscious rules, over multiple levels of analysis. For instance, consider the structure of the word "tenth" on two different levels of analysis. On the level of internal word structure (known as morphology), the word "tenth" is made up of one linguistic form indicating a number and another form indicating ordinality. The rule governing the combination of these forms ensures that the ordinality marker "th" follows the number "ten." On the level of sound structure (known as phonology), structural analysis shows that the "n" sound in "tenth" is made differently from the "n" sound in "ten" spoken alone. Although most speakers of English are consciously aware of the rules governing internal structure of the word pieces of "tenth", they are less often aware of the rule governing its sound structure. Linguists focused on structure find and analyze rules such as these, which govern how native speakers use language.
Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound[29] as well as meaning, and include componential subsets of rules, such as those pertaining to phonology (the organization of phonetic sound systems), morphology (the formation and composition of words), and syntax (the formation and composition of phrases and sentences).[4] Modern frameworks that deal with the principles of grammar include structural and functional linguistics, and generative linguistics.[30]
Sub-fields that focus on a grammatical study of language include the following:
Discourse is language as social practice (Baynham, 1995) and is a multilayered concept. As a social practice, discourse embodies different ideologies through written and spoken texts. Discourse analysis can examine or expose these ideologies. Discourse not only influences genre, which is selected on the basis of specific contexts, but also, at a micro level, shapes language as text (spoken or written) down to the phonological and lexico-grammatical levels. Grammar and discourse are linked as parts of a system.[32] A particular discourse becomes a language variety when it is used in this way for a particular purpose, and is then referred to as a register.[33] There may be certain lexical additions (new words) brought into play because of the expertise of the community of people within a certain domain of specialization. Thus, registers and discourses distinguish themselves not only through specialized vocabulary but also, in some cases, through distinct stylistic choices. People in the medical fraternity, for example, may use some medical terminology in their communication that is specialized to the field of medicine; this is often referred to as being part of the "medical discourse", and so on.
The lexicon is a catalogue of words and terms stored in a speaker's mind. It consists of words and bound morphemes, which are parts of words that cannot stand alone, like affixes. In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included. Lexicography, closely linked with the domain of semantics, is the discipline of compiling words into an encyclopedia or a dictionary. The creation and addition of new words (into the lexicon) is called coining or neologization,[34] and the new words are called neologisms.
It is often believed that a speaker's capacity for language lies in the quantity of words stored in the lexicon. However, linguists generally consider this a myth. Many linguists hold that the capacity for language use lies primarily in the domain of grammar, and is linked with competence, rather than with the growth of vocabulary. Even a very small lexicon is theoretically capable of producing an infinite number of sentences.
Vocabulary size is relevant as a measure of comprehension. There is general consensus that reading comprehension of a written text in English requires 98% coverage, meaning that the reader understands 98% of the words in the text.[35] The question of how much vocabulary is needed is therefore related to which texts or conversations need to be understood. A common estimate is 6,000–7,000 word families to understand a wide range of conversations and 8,000–9,000 word families to be able to read a wide range of written texts.[36]
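The "coverage" figure above is simple arithmetic: the proportion of running words in a text that fall inside the reader's vocabulary. A minimal sketch, with a toy vocabulary and toy sentences standing in for real word-family lists:

```python
def coverage(known_words, text):
    """Fraction of tokens in `text` that appear in `known_words`."""
    tokens = text.lower().split()
    known = sum(1 for t in tokens if t in known_words)
    return known / len(tokens)

vocab = {"the", "cat", "sat", "on", "mat"}
print(coverage(vocab, "The cat sat on the mat"))      # 1.0: full coverage
print(coverage(vocab, "The cat sat on the ottoman"))  # 5/6, below the 98% threshold
```

One unknown word in six already drops coverage to about 83%, which illustrates why the 98% threshold demands such large vocabularies.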
Stylistics also involves the study of written, signed, or spoken discourse across varying speech communities, genres, and editorial or narrative formats in the mass media.[37] It involves the study and interpretation of texts for aspects of their linguistic and tonal style. Stylistic analysis entails the description of particular dialects and registers used by speech communities. Stylistic features include rhetoric,[38] diction, stress, satire, irony, dialogue, and other forms of phonetic variation. Stylistic analysis can also include the study of language in canonical works of literature, popular fiction, news, advertisements, and other forms of communication in popular culture. It is usually seen as a variation in communication that changes from speaker to speaker and community to community. In short, stylistics is the interpretation of text.
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself.[39] Palaeography is accordingly the discipline that studies the evolution of written scripts (as signs and symbols) in language.[40] The formal study of language also led to the growth of fields like psycholinguistics, which explores the representation and function of language in the mind; neurolinguistics, which studies language processing in the brain; biolinguistics, which studies the biology and evolution of language; and language acquisition, which investigates how children and adults acquire the knowledge of one or more languages.
The fundamental principle of humanistic linguistics, especially rational and logical grammar, is that language is an invention created by people. A semiotic tradition of linguistic research considers language a sign system which arises from the interaction of meaning and form.[41] The organization of linguistic levels is considered computational.[42] Linguistics is essentially seen as relating to social and cultural studies because different languages are shaped in social interaction by the speech community.[43] Frameworks representing the humanistic view of language include structural linguistics, among others.[44]
Structural analysis means dissecting each linguistic level (phonetic, morphological, syntactic, and discourse) into its smallest units. These are collected into inventories (e.g. phonemes, morphemes, lexical classes, phrase types) to study their interconnectedness within a hierarchy of structures and layers.[45] Functional analysis adds to structural analysis the assignment of semantic and other functional roles that each unit may have. For example, a noun phrase may function as the subject or object of the sentence, or as the agent or patient.[46]
Functional linguistics, or functional grammar, is a branch of structural linguistics. In the humanistic reference, the terms structuralism and functionalism are related to their meaning in other human sciences. The difference between formal and functional structuralism lies in the way the two approaches explain why languages have the properties they have. Functional explanation entails the idea that language is a tool for communication, or that communication is the primary function of language. Linguistic forms are consequently explained by an appeal to their functional value, or usefulness. Other structuralist approaches take the perspective that form follows from the inner mechanisms of the bilateral and multilayered language system.[47]
Approaches such as cognitive linguistics and generative grammar study linguistic cognition with a view towards uncovering the biological underpinnings of language. In generative grammar, these underpinnings are understood as including innate domain-specific grammatical knowledge. Thus, one of the central concerns of the approach is to discover what aspects of linguistic knowledge are innate and which are not.[48][49]
Cognitive linguistics, in contrast, rejects the notion of innate grammar, and studies how the human mind creates linguistic constructions from event schemas,[50] and the impact of cognitive constraints and biases on human language.[51] In cognitive linguistics, language is approached via the senses.[52][53]
A closely related approach is evolutionary linguistics,[54] which includes the study of linguistic units as cultural replicators.[55][56] It is possible to study how language replicates and adapts to the mind of the individual or the speech community.[57][58] Construction grammar is a framework which applies the meme concept to the study of syntax.[59][60][61][62]
The generative and evolutionary approaches are sometimes called formalism and functionalism, respectively.[63] This usage, however, differs from the meaning of those terms in the human sciences.[64]
Modern linguistics is primarily descriptive.[65] Linguists describe and explain features of language without making subjective judgments on whether a particular feature or usage is "good" or "bad". This is analogous to practice in other sciences: a zoologist studies the animal kingdom without making subjective judgments on whether a particular species is "better" or "worse" than another.
Prescription, on the other hand, is an attempt to promote particular linguistic usages over others, often favoring a particular dialect or "acrolect". This may have the aim of establishing a linguistic standard, which can aid communication over large geographical areas. It may also, however, be an attempt by speakers of one language or dialect to exert influence over speakers of other languages or dialects (see Linguistic imperialism). An extreme version of prescriptivism can be found among censors, who attempt to eradicate words and structures that they consider destructive to society. Prescription may, however, be practised appropriately in language instruction, as in ELT, where certain fundamental grammatical rules and lexical items need to be introduced to a second-language speaker who is attempting to acquire the language.[citation needed]
Most contemporary linguists work under the assumption that spoken data and signed data are more fundamental than written data, since speech appears to be universal to human communities, while writing is a later, secondary development that many languages lack.
Nonetheless, linguists agree that the study of written language can be worthwhile and valuable. For research that relies on corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. In addition, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry.
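The convenience of written corpora for processing can be sketched with a word-frequency count, one of the most basic corpus-linguistic operations. The two-line "corpus" here is an invented toy standing in for a real text collection:

```python
from collections import Counter

corpus = [
    "linguists describe language",
    "linguists analyse language data",
]
# Tally every whitespace-separated token across the corpus.
freq = Counter(word for line in corpus for word in line.split())
print(freq.most_common(2))  # [('linguists', 2), ('language', 2)]
```

Scaling the same few lines of code from two sentences to millions of words is trivial, which is part of why written data dominates computational work.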
The study of writing systems themselves, graphemics, is, in any case, considered a branch of linguistics.
Before the 20th century, linguists analysed language on a diachronic plane, which was historical in focus. This meant that they would compare linguistic features and try to analyse language from the point of view of how it had changed over time. However, with the rise of Saussurean linguistics in the 20th century, the focus shifted to a more synchronic approach, in which study was geared towards the analysis and comparison of different language variations existing at the same given point of time.
At another level, the syntagmatic plane of linguistic analysis entails the comparison between the way words are sequenced within the syntax of a sentence. For example, the article "the" is followed by a noun, because of the syntagmatic relation between the words. The paradigmatic plane, on the other hand, focuses on an analysis based on the paradigms or concepts embedded in a given text. In this case, words of the same type or class may be replaced in the text with each other to achieve the same conceptual understanding.
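The two axes can be made concrete with a toy substitution frame (the frame and word list below are invented for illustration): the syntagmatic axis fixes the sequence "the ___ sleeps", while the paradigmatic axis swaps members of the same word class into the open slot.

```python
frame = "the {} sleeps"                # syntagmatic axis: a fixed word sequence
nouns = ["cat", "dog", "student"]      # paradigmatic axis: nouns that fit the slot
sentences = [frame.format(n) for n in nouns]
print(sentences)  # ['the cat sleeps', 'the dog sleeps', 'the student sleeps']
```

Each substitution yields a well-formed sentence because every replacement belongs to the same paradigm, while the syntagmatic ordering (article, noun, verb) never changes.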
The earliest activities in the description of language have been attributed to the 6th-century-BC Indian grammarian Pāṇini,[66][67] who wrote a formal description of the Sanskrit language in his Aṣṭādhyāyī.[68][69] Today, modern-day theories on grammar employ many of the principles that were laid down then.[70]
Before the 20th century, the term philology, first attested in 1716,[71] was commonly used to refer to the study of language, which was then predominantly historical in focus.[72][73] Since Ferdinand de Saussure's insistence on the importance of synchronic analysis, however, this focus has shifted[73] and the term philology is now generally used for the "study of a language's grammar, history, and literary tradition", especially in the United States[74] (where philology has never been very popularly considered as the "science of language").[71]
Although the term linguist in the sense of "a student of language" dates from 1641,[75] the term linguistics is first attested in 1847.[75] It is now the usual term in English for the scientific study of language,[76][77] though linguistic science is sometimes used.
Linguistics is a multi-disciplinary field of research that combines tools from natural sciences, social sciences, formal sciences, and the humanities.[78][79][80][81] Many linguists, such as David Crystal, conceptualize the field as being primarily scientific.[82] The term linguist applies to someone who studies language or is a researcher within the field, or to someone who uses the tools of the discipline to describe and analyse specific languages.[83]
An early formal study of language was in India with Pāṇini, the 6th century BC grammarian who formulated 3,959 rules of Sanskrit morphology. Pāṇini's systematic classification of the sounds of Sanskrit into consonants and vowels, and word classes, such as nouns and verbs, was the first known instance of its kind. In the Middle East, Sibawayh, a Persian, made a detailed description of Arabic in AD 760 in his monumental work, Al-kitab fii an-naħw (الكتاب في النحو, The Book on Grammar), the first known author to distinguish between sounds and phonemes (sounds as units of a linguistic system). Western interest in the study of languages began somewhat later than in the East,[84] but the grammarians of the classical languages did not use the same methods or reach the same conclusions as their contemporaries in the Indic world. Early interest in language in the West was a part of philosophy, not of grammatical description. The first insights into semantic theory were made by Plato in his Cratylus dialogue, where he argues that words denote concepts that are eternal and exist in the world of ideas. This work is the first to use the word etymology to describe the history of a word's meaning. Around 280 BC, one of Alexander the Great's successors founded a university (see Musaeum) in Alexandria, where a school of philologists studied the ancient texts in Greek, and taught Greek to speakers of other languages. While this school was the first to use the word "grammar" in its modern sense, Plato had used the word in its original meaning as "téchnē grammatikḗ" (Τέχνη Γραμματική), the "art of writing", which is also the title of one of the most important works of the Alexandrine school by Dionysius Thrax.[85] Throughout the Middle Ages, the study of language was subsumed under the topic of philology, the study of ancient languages and texts, practised by such educators as Roger Ascham, Wolfgang Ratke, and John Amos Comenius.[86]
In the 18th century, the first use of the comparative method by William Jones sparked the rise of comparative linguistics.[87] Bloomfield attributes "the first great scientific linguistic work of the world" to Jacob Grimm, who wrote Deutsche Grammatik.[88] It was soon followed by other authors writing similar comparative studies on other language groups of Europe. The study of language was broadened from Indo-European to language in general by Wilhelm von Humboldt, of whom Bloomfield asserts:[88]
This study received its foundation at the hands of the Prussian statesman and scholar Wilhelm von Humboldt (1767–1835), especially in the first volume of his work on Kavi, the literary language of Java, entitled Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluß auf die geistige Entwickelung des Menschengeschlechts (On the Variety of the Structure of Human Language and its Influence upon the Mental Development of the Human Race).
There was a shift of focus from historical and comparative linguistics to synchronic analysis in the early 20th century. Structural analysis was improved by Leonard Bloomfield, Louis Hjelmslev, and Zellig Harris, who also developed methods of discourse analysis. Functional analysis was developed by the Prague linguistic circle and André Martinet. As sound recording devices became commonplace in the 1960s, dialectal recordings were made and archived, and the audio-lingual method provided a technological solution to foreign language learning. The 1960s also saw a new rise of comparative linguistics: the study of language universals in linguistic typology. Towards the end of the century the field of linguistics became divided into further areas of interest with the advent of language technology and digitalized corpora.[89][90][91]
Sociolinguistics is the study of how language is shaped by social factors. This sub-discipline focuses on the synchronic approach of linguistics, and looks at how a language in general, or a set of languages, display variation and varieties at a given point in time. The study of language variation and the different varieties of language through dialects, registers, and idiolects can be tackled through a study of style, as well as through analysis of discourse. Sociolinguists research both style and discourse in language, as well as the theoretical factors that are at play between language and society.
Developmental linguistics is the study of the development of linguistic ability in individuals, particularly the acquisition of language in childhood. Some of the questions that developmental linguistics looks into are how children acquire different languages, how adults can acquire a second language, and what the process of language acquisition is.[92]
Neurolinguistics is the study of the structures in the human brain that underlie grammar and communication. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories using aphasiology, brain imaging, electrophysiology, and computer modelling. Among the brain structures involved in these mechanisms, the cerebellum, which contains the largest number of neurons, plays a major role in the predictive processing required to produce language.[93]
Linguists are largely concerned with finding and describing the generalities and varieties both within particular languages and among all languages. Applied linguistics takes the results of those findings and "applies" them to other areas. Linguistic research is commonly applied to areas such as language education, lexicography, translation, language planning, which involves governmental policy implementation related to language use, and natural language processing. "Applied linguistics" has been argued to be something of a misnomer.[94] Applied linguists actually focus on making sense of and engineering solutions for real-world linguistic problems, and not literally "applying" existing technical knowledge from linguistics. Moreover, they commonly apply technical knowledge from multiple sources, such as sociology (e.g., conversation analysis) and anthropology. (Constructed language fits under applied linguistics.)
Today, computers are widely used in many areas of applied linguistics. Speech synthesis and speech recognition use phonetic and phonemic knowledge to provide voice interfaces to computers. Applications of computational linguistics in machine translation, computer-assisted translation, and natural language processing are areas of applied linguistics that have come to the forefront. Their influence has had an effect on theories of syntax and semantics, as modelling syntactic and semantic theories on computers imposes constraints on those theories.
Linguistic analysis is a sub-discipline of applied linguistics used by many governments to verify the claimed nationality of people seeking asylum who do not hold the necessary documentation to prove their claim.[95] This often takes the form of an interview by personnel in an immigration department. Depending on the country, this interview is conducted either in the asylum seeker's native language through an interpreter or in an international lingua franca like English.[95] Australia uses the former method, while Germany employs the latter; the Netherlands uses either method depending on the languages involved.[95] Tape recordings of the interview then undergo language analysis, which can be done either by private contractors or within a department of the government. In this analysis, linguistic features of the asylum seeker are used by analysts to make a determination about the speaker's nationality. The reported findings of the linguistic analysis can play a critical role in the government's decision on the refugee status of the asylum seeker.[95]
Language documentation combines anthropological inquiry (into the history and culture of language) with linguistic inquiry, in order to describe languages and their grammars. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a linguistic vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education – the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education and teaching which are based on linguistic research.
Since the inception of the discipline of linguistics, linguists have been concerned with describing and analysing previously undocumented languages. Starting with Franz Boas in the early 1900s, this became the main focus of American linguistics until the rise of formal linguistics in the mid-20th century. This focus on language documentation was partly motivated by a concern to document the rapidly disappearing languages of indigenous peoples. The ethnographic dimension of the Boasian approach to language description played a role in the development of disciplines such as sociolinguistics, anthropological linguistics, and linguistic anthropology, which investigate the relations between language, culture, and society.
The emphasis on linguistic description and documentation has also gained prominence outside North America, with the documentation of rapidly dying indigenous languages becoming a focus in some university programs in linguistics. Language description is a work-intensive endeavour, usually requiring years of field work in the language concerned, so as to equip the linguist to write a sufficiently accurate reference grammar. Further, the task of documentation requires the linguist to collect a substantial corpus in the language in question, consisting of texts and recordings, both sound and video, which can be stored in an accessible format within open repositories, and used for further research.[96]
The sub-field of translation includes the translation of written and spoken texts across media, from digital to print and spoken. To translate literally means to transmute the meaning from one language into another. Translators are often employed by organizations such as travel agencies and governmental embassies to facilitate communication between two speakers who do not know each other's language. Translators are also employed to work within computational linguistics setups like Google Translate, which is an automated program to translate words and phrases between any two or more given languages. Translation is also conducted by publishing houses, which convert works of writing from one language to another in order to reach varied audiences. Cross-national and cross-cultural survey research studies employ translation to collect comparable data among multilingual populations.[97][98] Academic translators specialize in or are familiar with various other disciplines such as technology, science, law, economics, etc.
Clinical linguistics is the application of linguistic theory to the field of speech-language pathology. Speech-language pathologists work on corrective measures to treat communication and swallowing disorders.
Computational linguistics is the study of linguistic issues in a way that is "computationally responsible", i.e., taking careful account of algorithmic specification and computational complexity, so that the linguistic theories devised can be shown to exhibit certain desirable computational properties in their implementations. Computational linguists also work on computer languages and software development.
Evolutionary linguistics is a sociobiological approach to analyzing the emergence of the language faculty through human evolution, and also the application of evolutionary theory to the study of cultural evolution among different languages. It is also a study of the dispersal of various languages across the globe, through movements among ancient communities.[99]
Forensic linguistics is the application of linguistic analysis to forensics. Forensic analysis investigates the style, language, lexical use, and other linguistic and grammatical features used in the legal context to provide evidence in courts of law. Forensic linguists have also used their expertise in the framework of criminal cases.[100][101]
https://en.wikipedia.org/wiki/Linguistics
In politics, a litmus test is a question asked of a potential candidate for high office, the answer to which would determine whether the nominating official would proceed with the appointment or nomination. The expression is a metaphor based on the litmus test in chemistry, in which one is able to test the general acidity of a substance, but not its exact pH. Those who must approve a nominee may also be said to apply a litmus test to determine whether the nominee will receive their vote. In these contexts, the phrase comes up most often with respect to nominations to the judiciary.
The metaphor of a litmus test has been used in American politics since the mid-twentieth century.[1] During United States presidential election campaigns, the litmus tests a president might apply to nominees are discussed more fervently when vacancies on the U.S. Supreme Court appear likely. Advocates for various social ideas or policies often wrangle heatedly over what litmus test, if any, the president ought to apply when nominating a new candidate for a spot on the Supreme Court. Support for, or opposition to, abortion is one example of a common decisive factor in single-issue politics; another might be support of strict constructionism. Defenders of litmus tests argue that some issues are so important that they overwhelm other concerns (especially if there are other qualified candidates that pass the test).
The political litmus test is often used when appointing judges. However, this test of a nominee's political attitude is not without error. Supreme Court Chief Justice Earl Warren was appointed under the impression that he was conservative, but his tenure was marked by liberal dissents. Today, the litmus test is used along with other methods, such as past voting records, when selecting political candidates.
The Republican Liberty Caucus is opposed to litmus tests for judges, stating in their goals that they "oppose 'litmus tests' for judicial nominees who are qualified and recognize that the sole function of the courts is to interpret the Constitution. We oppose judicial amendments or the crafting of new law by any court."[2]
Professor Eugene Volokh believes that the legitimacy of such tests is a "tough question", and argues that they may undermine the fairness of the judiciary:
Imagine a justice testifies under oath before the Senate about his views on (say) abortion, and later reaches a contrary decision [after carefully examining the arguments]. "Perjury!" partisans on the relevant side will likely cry: They'll assume the statement made with an eye towards confirmation was a lie, rather than that the justice has genuinely changed his mind. Even if no calls for impeachment follow, the rancor and contempt towards the justice would be much greater than if he had simply disappointed his backers' expectations. Faced with that danger, a justice may well feel pressured into deciding the way that he testified, and rejecting attempts at persuasion. Yet that would be a violation of the judge's duty to sincerely consider the parties' arguments.[3]
|
https://en.wikipedia.org/wiki/Litmus_test_(politics)
|
Passing is the ability of a person to be regarded as a member of an identity group or category, such as racial identity, ethnicity, caste, social class, sexual orientation, gender, religion, age or disability status, that is often different from their own.[1][2][3][4] Passing may be used to increase social acceptance[1][2] and to cope with stigma by removing stigma from the presented self, and it can result in other social benefits as well. Thus, passing may serve as a form of self-preservation or self-protection when expressing one's true or prior identity may be dangerous.[4][5]
Passing may require acceptance into a community and may lead to temporary or permanent leave from another community to which an individual previously belonged. Thus, passing can result in separation from one's original self, family, friends, or previous living experiences.[6] Successful passing may contribute to economic security, safety, and stigma avoidance, but it may take an emotional toll as a result of denial of one's previous identity and may lead to depression or self-loathing.[4] When an individual deliberately attempts to "pass" as a member of an identity group, they may actively engage in performance of behaviors that they believe to be associated with membership of that group. Passing practices may also include information management, in which the passer attempts to control or conceal any stigmatizing information that may reveal disparity from their presumed identity.[7]
Etymologically, the term is simply the nominalisation of the verb pass in its phrasal use with for or as, as in a counterfeit passing for the genuine article or an impostor passing as another person. It has been in popular use since at least the late 1920s.[8][9][10][11]
Passing, as a sociological concept, was first coined by Erving Goffman as a term for one response to possessing some kind of stigma that is often less visible.[12][13][7][14] Stigma, according to Goffman's framework in his work Stigma: Notes on the Management of Spoiled Identity (1963), "refer[s] to an attribute that is deeply discrediting" or "an undesired differentness from what [was] anticipated".[7] According to Goffman, "This discrepancy, when known about or apparent, spoils his social identity; it has the effect of cutting him off from society and from himself so that he stands a discredited person facing an unaccepting world".[7]
Thus, inhabiting an identity associated with stigma may be particularly dangerous and harmful.[12] According to Link and Phelan, Roschelle and Kaufman, and Marvasti, it may lead to loss of opportunities due to status loss and discrimination, alienation and marginalization, harassment and embarrassment, and social rejection.[12][15][16][17] These can be a persistent source of psychological issues.[12]
To resist, manage, and avoid stigma and its associated consequences, individuals might choose to pass as a non-stigmatized identity. According to Nathan Shippee, "Passing communicates a seemingly 'normal' self, one that does not apparently possess the stigma."[12] According to Patrick Kermit, "To be suspected of being 'not quite human' is the essence of stigmatisation, and passing is a desperate means to the end of appearing fully human in the sense of being like most other people."[14]
When making the decision of whether to pass, stigmatized actors may consider many factors. First is the notion of visibility. How visible their stigma is may determine how much ease or difficulty they face in attempting to pass, and it may also determine the intensity and frequency of adversity they face from others as a result of their stigma. Goffman explains, "Traditionally, the question of passing has raised the issue of the 'visibility' of a particular stigma, that is, how well or how badly the stigma is adapted to provide means of communicating that the individual possesses it."[7] Other scholars further emphasize the cruciality of visibility and conclude that "whether a stigma is evident to the audience can mark the difference between being discredited or merely discreditable".[12]
Other factors may include risk, context, and intimacy. Different contexts and situations may make passing easier or more difficult, and safer or riskier. How well others know the passer may impede their abilities as well. One scholar explains, "Individuals may pass in some situations but not others, effectively creating different arenas of life (depending on whether the stigma is known or not). Goffman claimed that actors develop theories about which situations are risky for disclosure, but risk is only one factor: intimacy with the audience can lead actors to disclose, or to feel guilty for not doing so."[12] In addition to guilt, since passing can sometimes involve fabricating a false personal history to aid in concealing the stigma, passing can complicate personal relationships and cause feelings of shame at having to be dishonest about one's identity.[13][18] According to Goffman, "It can be assumed that the possession of a discreditable secret failing takes on a deeper meaning when the persons to whom the individual has not yet revealed himself are not strangers to him but friends. Discovery prejudices not only the current social situation, but established relationships as well; not only the current image others present have of him, but also the one they will have in the future; not only appearances, but also reputation."[7] Relating to this experience of passing, actors may have an ambivalent attachment to their stigma that causes them to fluctuate between acceptance and rejection of their stigmatized identity. Thus, there may be times when the stigmatized individual feels more inclined to pass and others when they feel less inclined.[7]
Despite the potentially distressing and dangerous aspects of passing, some passers have expressed a habituation to it. In one study, Shippee recounts that "participants often portrayed it as a normal or mundane event."[12] For those whose stigma invites particularly hostile responses from most of society, passing may become a regular part of everyday life that is necessary to survive in that society.
Regardless, the stigma that passers are subject to is not inherent. As Goffman explains, stigma exists not within the person but between an attribute and an audience. As a result, stigma is socially constructed and differs based on the cultural beliefs, social structures, and situational dynamics of various contexts. Thus, passing is immersed in the socially structured meaning and behavior of daily life, and passing implies familiarity with that knowledge.[12][7][17][16]
Passing has been interpreted in sociology and cultural studies through different analytical lenses, such as that of information management by Goffman and that of cultural performance by Bryant Keith Alexander.
Goffman defines passing as "the management of undisclosed discrediting information about self."[13][14][7][18][17] Similarly, other scholars add that "Passing is mostly associated with strategies of information management that the discreditable use to pass for normal [in everyday life]".[13] Whereas some individuals' stigma is immediately apparent, passers deal with a different problem: their stigma is not always so obvious. Goffman elaborates, "The issue is not that of managing tension generated during social contacts, but rather that of managing information about his failing. To display or not to display; to tell or not to tell; to let on or not to let on; to lie or not to lie; and in each case, to whom, how, when, and where."[7][18]
In Goffman's understanding, individuals possess various symbols that convey social information about them: prestige symbols convey creditable information, while stigma symbols convey discrediting information. By managing the visibility and apparentness of their stigma symbols, passers prevent others from learning of their discredited and stigmatized status and remain merely discreditable. Passing may also include the adoption of certain prestige symbols, or of a personal history or biography, that helps conceal and draw attention away from the actual stigmatized status.[7]
Goffman also briefly notes, "The concealment of creditable facts – reverse passing – of course occurs."[7] Reverse passing, related to terms like "blackfishing", has emerged as a topic of discourse as critics raise concerns over cultural appropriation and accuse non-stigmatized individuals, such as the prominent celebrities Kim Kardashian and Ariana Grande, of concealing creditable information about themselves for some social benefit.[19] Notions of cultural appropriation, racial fetishization, and reverse passing entered public debate particularly in 2015, after a former college instructor and president of the Spokane, Washington, NAACP, Rachel Dolezal, was discovered to be white with no black racial heritage after she had presented herself as black for several years.[19][20] As many point out, reverse passing crucially differs from passing in that individuals who reverse pass are not stigmatized and therefore are not subject to the harms of stigma that may force stigmatized individuals to pass.[19]
Bryant Keith Alexander, a professor of Communication, Performance and Cultural Studies at Loyola Marymount University, defines cultural performance as "a process of delineation using performative practices to mark membership and association." Using this definition, passing is reframed as a method to maintain cultural performance and choose both consciously and unconsciously to participate in other performances. Rather than through the management of symbols and the social information they convey, passers assume "the necessary and performative strategies that signal membership." Alexander reiterates, "Cultural membership is thus maintained primarily through recognizable performative practices." Hence, to successfully pass is to have your cultural performance assessed and validated by others.[21]
With that interpretation, avoiding stigma by passing necessitates an intimate understanding and awareness of the social constructions of meaning and the expected behaviors that signal membership. Shippee explains that "far from merely appraising situations to determine when concealment is required, passing encompasses active interpretations of several aspects of social life. It requires an understanding of cultural conventions, namely: what is considered "normal" and what is required to maintain it; customs of everyday interaction; and the symbolic character of the stigma itself.... Passing, then, embodies a creative mobilization of situational and cultural awareness, structural considerations, self-appraisals, and sense-making".[12] Alexander recognizes this and asserts that "passing is a product (an assessed state), a process (an active engagement), performative (ritualized repetition of communicative acts), and a reflection of one's positionality (politicized location), knowing that its existential accomplishment always resides in liminality."[21]
Historically and genealogically, the term passing has referred to mixed-race, or biracial, Americans identifying as or being perceived as belonging to a different racial group. In Passing and the Fictions of Identity, Elaine Ginsberg cites an ad for the escaped slave Edmund Kenney as an example of racial passing; Kenney, a biracial person, was able to pass as white in the United States in the 1800s.[2] In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis states that "for light-skinned African Americans during the times of slavery and the intense periods of racial resegregation that followed, passing for white was a survival tool that allowed them to gain education and employment that would have been denied them had they been recognized as 'colored' people." The term passing has since been expanded to include other ethnicities and identity categories. Discriminated groups in North America and Europe may modify their accents, word choices, manner of dress, grooming habits, and even names in an attempt to appear to be members of a majority group or of a privileged minority group.[22][23]
Nella Larsen's 1929 novella, Passing, helped to establish the term after several years of prior use. Both the novella's writer and its subject were of mixed African-American and Caucasian descent, and its protagonist passes for white. The novella was written during the Harlem Renaissance, when passing was commonly found in both reality and fiction. Since the 1960s Civil Rights Movement, racial pride has decreased the weight given to passing as an important issue for black Americans. Still, it is possible and common for biracial people to pass based on appearance or by hiding or omitting their backgrounds.[24][25]
In "Adjusting the Borders: Bisexual Passing and Queer Theory," Lingel discusses bell hooks' notion of racial passing in conjunction with a discussion of bisexual engagement in passing.[6]
Romani people have a history of passing as well, particularly in the United States, and often tell outsiders that they belong to other ethnicities such as Latino, Greek, Middle Eastern, or Native American.
Class passing, similar to racial and gender passing, is the concealment or misrepresentation of one's social class. In Class-Passing: Social Mobility in Film and Popular Culture, Gwendolyn Audrey Foster suggests that racial and gender passing is often stigmatized but that class passing is generally accepted as normative behavior.[26] Class passing is common in the United States and is linked to the notions of the American Dream and of upward class mobility.[24]
English-language novels that feature class passing include The Talented Mr. Ripley, Anne of Green Gables, and Horatio Alger novels. Films featuring class-passing characters include Catch Me If You Can, My Fair Lady, Pinky, ATL, and Andy Hardy Meets Debutante.[26] Class passing also figures into reality television programs such as Joe Millionaire, in which contestants are often immersed in displays of great material wealth or may have to conceal their class status.[26]
The perception of an individual's sexual orientation is often based on their visual identity. The term visual identity refers to the expression of personal, social, and cultural identities through dress and appearance. In Visible Lesbians and Invisible Bisexuals: Appearance and Visual Identities Among Bisexual Women,[27] it is proposed that through the expression of a visual identity, others "read" a person's appearance and make assumptions about their wider identity. Visual identity is therefore a prominent tool of non-verbal communication. The concept of passing is showcased in research by Jennifer Taub in her Bisexual Women and Beauty Norms.[28] Some participants in the study stated that they attempted to dress as what they perceived as heterosexual when partnered with a man, and others stated that they tried to dress more like a "lesbian." That exemplifies how visual identities can greatly alter people's immediate assumptions about sexuality. Presenting oneself as "heterosexual" is therefore effectively "passing."[28]
Passing by sexual orientation occurs when an individual's perceived sexual orientation or sexuality differs from the sexuality or sexual orientation with which they identify. In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis notes "the presumption of heterosexuality in most modern cultures", which in some parts of the world, such as the United States, may be effectively compulsory: "most gay men and lesbians in fact spend a great deal of their lives passing as straight even when they do not do so intentionally."[4] The phrase "in the closet" may be used to describe individuals who hide or conceal their sexual orientation.[3][4] In Passing: Identity and Interpretation in Sexuality, Race, and Religion, Maria Sanchez and Linda Schlossberg state that "the dominant social order often implores gay people to stay in the closet (to pass)."[3] Individuals may choose to remain "in the closet" or to pass as heterosexual for a variety of reasons, including a desire to maintain positive relationships with family and policies or requirements associated with employment, such as "Don't ask, don't tell", a policy that required passing as heterosexual within the military or armed forces.[3][4]
Bisexual erasure causes some bisexual individuals to feel the need to engage in passing within presumed predominantly-heterosexual circles, and even within LGBTQ circles, for fear of stigma.[6] In Adjusting the Borders: Bisexual Passing and Queer Theory, Jessica Lingel notes, "The ramifications of being denied a public sphere in which to practice a sexual identity that isn't labeled licentious or opportunistic leads some women to resort to manufacturing profiles of gayness or straightness to pledge membership within a community."[6]
Gender passing refers to individuals who are perceived as belonging to a gender identity group that differs from the gender they were assigned at birth.[2] In Passing and the Fictions of Identity, Elaine Ginsberg provides the story of Brandon Teena, who was assigned female at birth but lived as a man, as an example of gender passing in the United States. In 1993, Brandon moved to Falls City, Nebraska, where he initially passed as a man. However, community members discovered that Brandon had been assigned female at birth, and two men from the community shot and murdered him.[2] As another example of gender passing, Ginsberg cites Billy Tipton, a jazz musician who was also assigned female at birth but lived and performed as a man until his death in 1989.
Within the transgender community, passing refers to the perception or recognition of trans individuals as belonging to the gender identity to which they are transitioning rather than the sex or gender they were assigned at birth.[2][4]
Passing as a member of a different religion, or as not religious at all, is not uncommon among minority religious communities.[citation needed] In the entry "Passing" for the GLBTQ Encyclopedia Project, Tina Gianoulis states that "at times of rabid anti-Semitism in Europe and the Americas, many Jewish families also either converted to Christianity or passed as Christian" for the sake of survival.[4] Circumcised Jewish males in Germany during World War II attempted to restore their foreskins as part of passing as Gentile.[citation needed] The film Europa, Europa explores that theme.
Shia Islam has the doctrine of taqiyya, in which one is lawfully allowed to disavow Islam and profess another faith, but secretly remain a Muslim, if one's life is at risk. The concept has also been practised by various minority faiths in the Middle East, such as the Alawites and Druze.[29][30][31]
Disability passing may refer to the intentional concealment of an impairment to avoid the stigma of disability, but it may also describe the exaggeration of an ailment or impairment to receive some benefit, which may take the form of attention or care. In Disability and Passing: Blurring the Lines of Identity, Jeffrey Brune and Daniel Wilson define passing by ability or disability as "the ways that others impose, intentionally or not, a specific disability or non-disability identity on a person."[32] Similarly, in "Compulsory Able-Bodiedness and Queer/Disabled Existence," Robert McRuer argues that "the system of compulsory able-bodiedness...produces disability."[33]
People with disabilities may exaggerate their disabilities when they are evaluated for medical care or accommodations, often for fear of being denied support. "There are too many agencies out there with the ostensible purpose of helping us that still believe that as long as we technically can do something, like crab-walking our way into a subway station, we should have to do it," writes Gabe Moses, a wheelchair user who has a limited ability to walk.[34] Those pressures may lead disabled people to exaggerate symptoms or tire out their bodies before an evaluation so that they are seen on a "bad day" instead of a "good day."
In sports, some mobility-impaired individuals have been observed strategically exaggerating the extent of their disability to pass as more disabled than they are and be placed in divisions in which they may be more competitive. In quadriplegic rugby, or wheelchair rugby, some players are described as having "incomplete" quadriplegia, in which they may retain some sensation and function in their lower limbs that allows them to stand and walk in limited capacities. Based on a rule from the United States Quad Rugby Association (USQRA) stating that players need only a combination of upper- and lower-extremity impairment that precludes them from playing able-bodied sports, incomplete quads may play alongside other quadriplegics who have no sensation or function in their lower limbs. This is justified by classifications the USQRA has developed, in which certified physical therapists compare arm and muscle flexibility, trunk and torso movement, and ease of chair operation between players and rank them by injury level.
However, inconsistencies between medical diagnoses of injury and those classifications allow players to perform higher levels of impairment for the classifiers and pass as being more disabled than they are. As a result, their ranking may underestimate their capacity, and they may attain a competitive advantage over teams with players whose capacity is not equivalent. That policy has raised questions from some about the ethics and fairness of comparing disabilities, as well as about how competition, inclusion, and ability should be defined in the world of sports.[35]
Individuals with invisible disabilities, such as people with mental illness; intellectual or cognitive disabilities; or physical disabilities that are not immediately obvious to others, such as irritable bowel syndrome (IBS), Crohn's disease, or ulcerative colitis, may choose whether or not to reveal their identity or to pass as "normal." Passing as non-disabled may protect against discrimination but may also result in a lack of support or accusations of faking.
Autistic people may employ strategies known as "masking" or "camouflaging" to appear non-autistic.[36] That can involve behavior like suppressing or redirecting repetitive movements (stimming), maintaining eye contact despite discomfort, mirroring the body language and tone of others, or scripting conversations.[36][37] Masking may be done to reduce the risk of ostracism or abuse.[38] Autistic masking is often exhausting and linked to adverse mental health outcomes such as burnout, depression, and suicide.[39][40][41] However, the analogy between masking and passing has been challenged in a 2023 review of autistic masking by Valentina Petrolini, Ekaine Rodríguez-Armendariz, and Agustín Vicente, who question whether all autistic people see "being autistic" as a central aspect of their identity and whether all autistic people are capable of truly hiding their autistic status. Both conditions, they argue, would have to be fulfilled for the analogy to hold; they conclude that only a subgroup of autistic people experiences masking as passing.[36]
Individuals with visible physical impairments or disabilities, such as people with mobility impairment, including individuals who use wheelchairs or scooters, face greater challenges in concealing their disability.[32]
In a study of individuals' experiences with prosthetics, users' ability to pass as "like everybody else" with their prosthetic, based on its realistic or unrealistic appearance, was one factor in predicting whether patients would accept or reject prosthetic use. Cosmetic prosthetics that were, for example, skin-colored or had the added appearance of veins, hair, and nails were often harder to adapt to and use, but many individuals expressed a preference for them over more functional but more conspicuous prosthetics in order to maintain their personal conceptions of social and bodily identity.
One user of prosthetics characterized her device as one that could "maintain her humanness ('half way human'), which in turn prevented her, quite literally, from being seen to have an 'odd' body." Users also discussed wanting prosthetics that could help them maintain a walking gait that would attract no stares, and prosthetics that could be disguised or concealed under clothes, in efforts to pass as non-disabled.[18]
Though passing may occur on the basis of a single subordinate identity, such as race, people's intersectional locations often involve multiple marginalized identities. Intersectionality provides a framework for seeing the interconnected nature of oppressive systems and how multiple identities interact within them. Gay Asian men possess two key subordinated identities that, in combination, create unique challenges for passing. Sometimes these men must pass as straight to avoid stigma, but around other gay men they may attempt to pass as non-racialized or white to avoid the disinterest or fetishization often encountered upon revealing their Asian identities.[42] Recognizing the hidden intersection of the gendered aspects of gay and Asian male stereotypes makes these two distinct experiences more comprehensible. Gay men are often stereotyped as effeminate and thereby insufficiently masculine as men. Stereotypes characterizing Asian men as too sexual (overly masculine), too feminine (hypo-masculine), or even both also exhibit the gendered nature of racial stereotypes.[43] Thus, passing as the dominant racial or sexuality category also often means passing as gender correct.
When Black transgender men transition in the workplace from identifying as female to passing as cisgender men, gendered racial stereotypes characterizing Black men as overly masculine and violent[44] may affect how previously acceptable behaviors are interpreted. One such Black trans man discovered that he had gone from "being an obnoxious Black woman to a scary Black man" and therefore had to adapt his behavior to gendered scripts in order to pass.[45]
|
https://en.wikipedia.org/wiki/Passing_(sociology)
|
Phonology (formerly also phonemics or phonematics[1][2][3][4][a]) is the branch of linguistics that studies how languages systematically organize their phonemes or, for sign languages, their constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but now it may relate to any linguistic analysis either:
Sign languages have a phonological system equivalent to the system of sounds in spoken languages. The building blocks of signs are specifications for movement, location, and handshape.[10] At first, a separate terminology was used for the study of sign phonology ("chereme" instead of "phoneme", etc.), but the concepts are now considered to apply universally to all human languages.
The word "phonology" (as in "phonology of English") can refer either to the field of study or to the phonological system of a given language.[11] This is one of the fundamental systems that a language is considered to comprise, like its syntax, its morphology, and its lexicon. The word phonology comes from Ancient Greek φωνή, phōnḗ, 'voice, sound', and the suffix -logy (which is from Greek λόγος, lógos, 'word, speech, subject of discussion').
Phonology is typically distinguished from phonetics, which concerns the physical production, acoustic transmission, and perception of the sounds or signs of language.[12][13] Phonology describes the way they function within a given language, or across languages, to encode meaning. For many linguists, phonetics belongs to descriptive linguistics and phonology to theoretical linguistics, but in some theories establishing the phonological system of a language is necessarily an application of theoretical principles to the analysis of phonetic evidence. The distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology cross over with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.
Definitions of the field of phonology vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language," as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Ferdinand de Saussure's distinction between langue and parole).[14] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, and in narrower terms, "phonology proper is concerned with the function, behavior and organization of sounds as linguistic items."[12] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying that use.[15]
Evidence for a systematic investigation of the sounds of a language appears in the fourth-century BCE Ashtadhyayi, a Sanskrit grammar by Pāṇini. In particular, the Shiva Sutras, an auxiliary work to the Ashtadhyayi, provide an inventory of what may be construed as a list of the phonemes of Sanskrit, with a notational scheme for them that is deployed throughout the main text, which concerns itself with issues of morphology, syntax, and semantics.
Ibn Jinni of Mosul, a pioneer in phonology, wrote prolifically in the 10th century on Arabic morphology and phonology in works such as Kitāb Al-Munṣif, Kitāb Al-Muḥtasab, and Kitāb Al-Khaṣāʾiṣ[ar].[16]
The study of phonology as it exists today is defined by the formative studies of the 19th-century Polish scholar Jan Baudouin de Courtenay,[17]: 17 who (together with his students Mikołaj Kruszewski and Lev Shcherba in the Kazan School) shaped the modern usage of the term phoneme in a series of lectures in 1876–1877. The word phoneme had been coined a few years earlier, in 1873, by the French linguist A. Dufriche-Desgenettes. In a paper read at the 24 May meeting of the Société de Linguistique de Paris,[18] Dufriche-Desgenettes proposed that phoneme serve as a one-word equivalent for the German Sprachlaut.[19] Baudouin de Courtenay's subsequent work, though often unacknowledged, is considered the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology) and may have had an influence on the work of Saussure, according to E. F. K. Koerner.[20]
An influential school of phonology in the interwar period was the Prague school. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[14] published posthumously in 1939, is among the most important works in the field from that period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although the concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague school was Roman Jakobson, one of the most prominent linguists of the 20th century. Louis Hjelmslev's glossematics also contributed, with a focus on linguistic structure independent of phonetic realization or semantics.[17]: 175
In 1968, Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for generative phonology. In that view, phonological representations are sequences of segments made up of distinctive features. The features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and take the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how the underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the generativists folded morphophonology into phonology, which both solved and created problems.
Natural phonology is a theory based on the publications of its proponent David Stampe in 1969 and, more explicitly, in 1979. In this view, phonology is based on a set of universal phonological processes that interact with one another; which processes are active and which are suppressed is language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously, but the output of one process may be the input to another. The second most prominent natural phonologist is Patricia Donegan, Stampe's wife; there are many natural phonologists in Europe and a few in the US, such as Geoffrey Nathan. The principles of natural phonology were extended to morphology by Wolfgang U. Dressler, who founded natural morphology.
In 1976, John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features that reside on multiple tiers. Autosegmental phonology later evolved into feature geometry, which became the standard theory of representation for theories of the organization of phonology as different as lexical phonology and optimality theory.
Government phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations. Principles are held to be inviolable, but parameters may sometimes come into conflict. Prominent figures in this field include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, and John Harris.
In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed optimality theory, an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance; a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince and has become a dominant trend in phonology. The appeal to phonetic grounding of constraints and representational elements (e.g. features) in various approaches has been criticized by proponents of "substance-free phonology", especially by Mark Hale and Charles Reiss.[21][22]
An integrated approach to phonological theory that combines synchronic and diachronic accounts of sound patterns was initiated in recent years with Evolutionary Phonology.[23]
An important part of traditional, pre-generative schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]) while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones, which cannot give rise to minimal pairs) of the same phonological category, that is, of the phoneme /p/. (Traditionally, it would be argued that if an aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes. For example, in Thai, Bengali, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words can have different meanings, but the only difference in pronunciation is that one has an aspirated sound where the other has an unaspirated one).
Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well.
The particular contrasts which are phonemic in a language can change over time. At one time, [f] and [v], two sounds that have the same place and manner of articulation and differ in voicing only, were allophones of the same phoneme in English, but later came to belong to separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics.
The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception.
Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language.
Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.
In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, or replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, feature geometry, tone, and intonation.
Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order that can be feeding or bleeding),[24] as well as prosody, the study of suprasegmentals and topics such as stress and intonation.
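The effect of rule ordering can be illustrated with a toy example. The two rewrite rules below are invented for illustration and do not describe any real language; they simply show how applying one rule first can create ("feed") inputs for the other, so that the order of application changes the surface form.

```python
import re

# Two invented rewrite rules (not from any real language), shown only to
# illustrate ordered rule application.
def rule_a(form: str) -> str:
    """a -> e when immediately followed by n."""
    return re.sub(r"a(?=n)", "e", form)

def rule_b(form: str) -> str:
    """e -> i when immediately followed by n."""
    return re.sub(r"e(?=n)", "i", form)

# Feeding order: rule_a's output "pen" creates an input for rule_b.
print(rule_b(rule_a("pan")))  # "pan" -> "pen" -> "pin"

# Counter-feeding order: rule_b finds nothing to change at first.
print(rule_a(rule_b("pan")))  # "pan" -> "pan" -> "pen"
```

The same underlying form thus surfaces differently depending on which rule applies first, which is the sense in which one ordering "feeds" the other.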
The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sublexical units are not instantiated as speech sounds.
|
https://en.wikipedia.org/wiki/Phonology
|
In geometry, the theorem that the angles opposite the equal sides of an isosceles triangle are themselves equal is known as the pons asinorum (/ˈpɒnz ˌæsɪˈnɔːrəm/ PONZ ass-ih-NOR-əm), Latin for "bridge of asses", or more descriptively as the isosceles triangle theorem. The theorem appears as Proposition 5 of Book 1 in Euclid's Elements.[1] Its converse is also true: if two angles of a triangle are equal, then the sides opposite them are also equal.
Pons asinorum is also used metaphorically for a problem or challenge which acts as a test of critical thinking, referring to the "asses' bridge's" ability to separate capable and incapable reasoners. Its first known usage in this context was in 1645.[2]
There are two common explanations for the name pons asinorum, the simplest being that the diagram used resembles a physical bridge. But the more popular explanation is that it is the first real test in the Elements of the intelligence of the reader and functions as a "bridge" to the harder propositions that follow.[3]
Another medieval term for the isosceles triangle theorem was Elefuga which, according to Roger Bacon, comes from Greek elegia "misery" and Latin fuga "flight", that is, "flight of the wretches". Though this etymology is dubious, it is echoed in Chaucer's use of the term "flemyng of wreches" for the theorem.[4]
The name Dulcarnon was given to the 47th proposition of Book I of Euclid, better known as the Pythagorean theorem, after the Arabic Dhū 'l qarnain ذُو ٱلْقَرْنَيْن, meaning "the owner of the two horns", because diagrams of the theorem showed two smaller squares like horns at the top of the figure. That term has similarly been used as a metaphor for a dilemma.[4] The name pons asinorum has itself occasionally been applied to the Pythagorean theorem.[5]
Carl Friedrich Gauss supposedly once suggested that understanding Euler's identity might play a similar role, as a benchmark indicating whether someone could become a first-class mathematician.[6]
Euclid's statement of the pons asinorum includes a second conclusion: if the equal sides of the triangle are extended below the base, then the angles between the extensions and the base are also equal. Euclid's proof involves drawing auxiliary lines to these extensions. But, as Euclid's commentator Proclus points out, Euclid never uses the second conclusion, and his proof can be simplified somewhat by drawing the auxiliary lines to the sides of the triangle instead, the rest of the proof proceeding in more or less the same way.
There has been much speculation and debate as to why Euclid added the second conclusion to the theorem, given that it makes the proof more complicated. One plausible explanation, given by Proclus, is that the second conclusion can be used in possible objections to the proofs of later propositions where Euclid does not cover every case.[7] The proof relies heavily on what is today called side-angle-side (SAS), the previous proposition in the Elements, which says that given two triangles for which two pairs of corresponding sides and their included angles are respectively congruent, then the triangles are congruent.
Proclus' variation of Euclid's proof proceeds as follows:[8] Let △ABC be an isosceles triangle with congruent sides AB ≅ AC. Pick an arbitrary point D on side AB and then construct point E on AC to make congruent segments AD ≅ AE. Draw auxiliary line segments BE, DC, and DE. By side-angle-side, the triangles △BAE ≅ △CAD. Therefore ∠ABE ≅ ∠ACD, ∠ADC ≅ ∠AEB, and BE ≅ CD. By subtracting congruent line segments, BD ≅ CE. This sets up another pair of congruent triangles, △DBE ≅ △ECD, again by side-angle-side. Therefore ∠BDE ≅ ∠CED and ∠BED ≅ ∠CDE. By subtracting congruent angles, ∠BDC ≅ ∠CEB. Finally △BDC ≅ △CEB by a third application of side-angle-side. Therefore ∠CBD ≅ ∠BCE, which was to be proved.
Proclus gives a much shorter proof attributed to Pappus of Alexandria. This is not only simpler but requires no additional construction at all. The method of proof is to apply side-angle-side to the triangle and its mirror image. More modern authors, in imitation of the method of proof given for the previous proposition, have described this as picking up the triangle, turning it over and laying it down upon itself.[9][10] This method is lampooned by Charles Dodgson in Euclid and his Modern Rivals, which calls it an "Irish bull" because it apparently requires the triangle to be in two places at once.[11]
The proof is as follows:[12] Let ABC be an isosceles triangle with AB and AC being the equal sides. Consider the triangles ABC and ACB, where ACB is considered a second triangle with vertices A, C and B corresponding respectively to A, B and C in the original triangle. ∠A is equal to itself, AB = AC and AC = AB, so by side-angle-side, triangles ABC and ACB are congruent. In particular, ∠B = ∠C.[13]
A standard textbook method is to construct the bisector of the angle at A.[14] This is simpler than Euclid's proof, but Euclid does not present the construction of an angle bisector until Proposition 9. So the order of presentation of Euclid's propositions would have to be changed to avoid the possibility of circular reasoning.
The proof proceeds as follows:[15] As before, let the triangle be ABC with AB = AC. Construct the angle bisector of ∠BAC and extend it to meet BC at X. AB = AC and AX is equal to itself. Furthermore, ∠BAX = ∠CAX, so, applying side-angle-side, triangle BAX and triangle CAX are congruent. It follows that the angles at B and C are equal.
Legendre uses a similar construction in Éléments de géométrie, but taking X to be the midpoint of BC.[16] The proof is similar, but side-side-side must be used instead of side-angle-side, and side-side-side is not given by Euclid until later in the Elements.
In 1876, while a member of the United States Congress, future President James A. Garfield developed a proof of the Pythagorean theorem using a trapezoid, which was published in the New England Journal of Education.[17] Mathematics historian William Dunham wrote that Garfield's trapezoid work was "really a very clever proof."[18] According to the Journal, Garfield arrived at the proof "in mathematical amusements and discussions with other members of congress."[19]
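The trapezoid argument can be sketched as follows (a standard reconstruction, not quoted from the Journal): a trapezoid with parallel sides a and b and height a + b can be cut into two right triangles with legs a and b, plus an isosceles right triangle whose legs are their common hypotenuse c. Equating the trapezoid's area with the sum of the three triangle areas gives the theorem:

```latex
\frac{a+b}{2}\,(a+b) = 2\cdot\frac{ab}{2} + \frac{c^{2}}{2}
\;\Longrightarrow\;
a^{2} + 2ab + b^{2} = 2ab + c^{2}
\;\Longrightarrow\;
a^{2} + b^{2} = c^{2}.
```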
The isosceles triangle theorem holds in inner product spaces over the real or complex numbers. In such spaces, given vectors x, y, and z, the theorem says that if x + y + z = 0 and ‖x‖ = ‖y‖, then ‖x − z‖ = ‖y − z‖.
Since ‖x − z‖² = ‖x‖² − 2x⋅z + ‖z‖² and x⋅z = ‖x‖‖z‖ cos θ, where θ is the angle between the two vectors, the conclusion of this inner product space form of the theorem is equivalent to the statement about equality of angles.
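A quick numerical sanity check of the vector form of the theorem (illustrative only; the vectors are arbitrary apart from satisfying the hypotheses):

```python
import numpy as np

# Build vectors satisfying the hypotheses: ||x|| == ||y|| and x + y + z == 0.
rng = np.random.default_rng(seed=1)
x = rng.standard_normal(3)
y = rng.standard_normal(3)
y *= np.linalg.norm(x) / np.linalg.norm(y)  # rescale so ||y|| equals ||x||
z = -(x + y)                                # forces x + y + z = 0

# The theorem predicts the two "legs" from z have equal length.
print(np.isclose(np.linalg.norm(x - z), np.linalg.norm(y - z)))  # True
```

Algebraically, with z = −(x + y), both ‖x − z‖² and ‖y − z‖² expand to 4‖x‖² + 4x⋅y + ‖y‖² up to swapping ‖x‖ and ‖y‖, so equality of the norms of x and y forces equality of the two distances.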
Uses of the pons asinorum as a metaphor for a test of critical thinking appear in a range of contexts.
A persistent piece of mathematical folklore claims that an artificial intelligence program discovered an original and more elegant proof of this theorem.[22][23] In fact, Marvin Minsky recounts that he had rediscovered the Pappus proof (which he was not aware of) by simulating what a mechanical theorem prover might do.[24][10]
|
https://en.wikipedia.org/wiki/Pons_asinorum
|
"Playing the pronoun game" is the act of concealing sexual orientation in conversation by not using a gender-specific pronoun for a partner or a lover, which would reveal the sexual orientation of the person speaking. Someone may employ the pronoun game when conversing with people to whom they have not "come out". In a situation in which revealing one's sexual orientation would have adverse consequences (such as the loss of a job), playing the pronoun game is seen to be a necessary act of concealment.
The pronoun game involves avoiding reference to one's sexual orientation and allowing the listener's assumptions on the matter to prevail. It also involves not drawing the listener's attention to the fact that the sex of a pronoun's antecedent is not being specified.
|
https://en.wikipedia.org/wiki/Pronoun_game
|
Rhyming slang is a form of slang word construction in the English language. It is especially prevalent among Cockneys in England, and was first used in the early 19th century in the East End of London; hence its alternative name, Cockney rhyming slang.[2][3] In the US, especially in the criminal underworld of the West Coast between 1880 and 1920, rhyming slang has sometimes been known as Australian slang.[4][5][6]
The construction of rhyming slang involves replacing a common word with a phrase of two or more words, the last of which rhymes with the original word; then, in almost all cases, omitting, from the end of the phrase, the secondary rhyming word (which is thereafter implied),[7][page needed][8][page needed] making the origin and meaning of the phrase elusive to listeners not in the know.[9][page needed]
The form of Cockney slang is made clear with the following example. The rhyming phrase "apples and pears" is used to mean "stairs". Following the pattern of omission, "and pears" is dropped, thus the spoken phrase "I'm going up the apples" means "I'm going up the stairs".[10]
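The two-step pattern (substitute a rhyming phrase, then omit the rhyming word) can be sketched mechanically. The tiny dictionary below is purely illustrative and uses only phrases mentioned in this article:

```python
# Toy sketch of rhyming-slang construction; entries are examples from
# the article, not a real slang dictionary.
RHYMING_PHRASES = {
    "stairs": "apples and pears",
    "look": "butcher's hook",
    "head": "loaf of bread",
}

def to_slang(word: str) -> str:
    """Substitute the rhyming phrase, then omit the rhyming word."""
    words = RHYMING_PHRASES[word].split()
    words = words[:-1]                        # drop the word that rhymes
    if words and words[-1] in ("and", "of"):  # drop a stranded connective
        words = words[:-1]
    return " ".join(words)

print(to_slang("stairs"))  # apples
print(to_slang("look"))    # butcher's
print(to_slang("head"))    # loaf
```

Real usage is of course not this regular; which part of the phrase survives is fixed by convention for each expression, not by rule.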
Many further common examples of these phrases are recorded.[10][11][12]
In some examples the meaning is further obscured by adding a second iteration of rhyme and truncation to the original rhymed phrase. For example, the word "Aris" is often used to indicate the buttocks. This is the result of a double rhyme, starting with the original rough synonym "arse", which is rhymed with "bottle and glass", leading to "bottle". "Bottle" was then rhymed with "Aristotle" and truncated to "Aris". "Aris" was then rhymed with "plaster of Paris" and truncated to "plaster".[14]
Ghil'ad Zuckermann, a linguist and revivalist, has proposed a distinction between rhyming slang based on sound only, and phono-semantic rhyming slang, which includes a semantic link between the slang expression and its referent (the thing it refers to).[15]: 29 An example of rhyming slang based only on sound is the Cockney "tea leaf" (thief).[15]: 29 An example of phono-semantic rhyming slang is the Cockney "sorrowful tale" ((three months in) jail),[15]: 30 in which case the person coining the slang term sees a semantic link, sometimes jocular, between the Cockney expression and its referent.[15]: 30
The use of rhyming slang has spread beyond the purely dialectal and some examples are to be found in the mainstream British English lexicon, although many users may be unaware of the origin of those words.[10]
Most of the words changed by this process are nouns, but a few are adjectival, e.g., "bales" of cotton (rotten), or the adjectival phrase "on one's tod" for "on one's own", after Tod Sloan, a famous jockey.[2][18]
Rhyming slang is believed to have originated in the mid-19th century in the East End of London, with several sources suggesting some time in the 1840s.[19]: 12[20][21] The Flash Dictionary, of unknown authorship, published in 1821 by Smeeton (48mo), contains a few rhymes.[22]: 3 John Camden Hotten's 1859 Dictionary of Modern Slang, Cant, and Vulgar Words likewise states that it originated in the 1840s ("about twelve or fifteen years ago"), but with "chaunters" and "patterers" in the Seven Dials area of London.[20] Hotten's Dictionary included the first known "Glossary of the Rhyming Slang", which included later mainstays such as "frog and toad" (the main road) and "apples and pears" (stairs), as well as many more obscure examples, e.g. "Battle of the Nile" (a tile, a common term for a hat), "Duke of York" (take a walk), and "Top of Rome" (home).[20][23][22]
It remains a matter of speculation exactly how rhyming slang originated, for example, as a linguistic game among friends or as a cryptolect developed intentionally to confuse non-locals. If deliberate, it may also have been used to maintain a sense of community, or to allow traders to talk amongst themselves in marketplaces to facilitate collusion, without customers knowing what they were saying, or by criminals to confuse the police (see thieves' cant).[citation needed]
The academic, lexicographer and radio personality Terence Dolan has suggested that rhyming slang was invented by Irish immigrants to London "so the actual English wouldn't understand what they were talking about."[24]
Many examples of rhyming slang are based on locations in London, such as "Peckham Rye", meaning "tie",[25]: 265 which dates from the late nineteenth century; "Hampstead Heath", meaning "teeth"[25]: 264 (usually as "Hampsteads"), which was first recorded in 1887; and "barnet" (Barnet Fair), meaning "hair",[25]: 231 which dates from the 1850s.
In the 20th century, rhyming slang began to be based on the names of celebrities — Gregory Peck (neck; cheque),[25]: 74 Ruby Murray [as Ruby] (curry),[25]: 159 Alan Whicker [as "Alan Whickers"] (knickers),[25]: 3 Puff Daddy (caddy),[25]: 147 Max Miller (pillow [pronounced /ˈpilə/]),[citation needed] Meryl Streep (cheap),[25]: 119 Nat King Cole ("dole"),[25]: 221 Britney Spears (beers, tears),[25]: 27 Henry Halls (balls)[25]: 82 — and after pop culture references — Captain Kirk (work),[25]: 33 Pop Goes the Weasel (diesel),[25]: 146 Mona Lisa (pizza),[25]: 122 Mickey Mouse (Scouse),[25]: 120 Wallace and Gromit (vomit),[25]: 195 Brady Bunch (lunch),[25]: 25 Bugs Bunny (money),[25]: 29 Scooby-Doo (clue),[25]: 164 Winnie the Pooh (shoe),[25]: 199 and Schindler's List (pissed).[25]: 163–164 Some words have numerous definitions, such as dead (Father Ted, "gone to bed", brown bread),[25]: 220 door (Roger Moore, Andrea Corr, George Bernard Shaw, Rory O'Moore),[25]: 221 cocaine (Kurt Cobain; [as "Charlie"] Bob Marley, Boutros Boutros-Ghali, Gianluca Vialli, oats and barley; [as "line"] Patsy Cline; [as "powder"] Niki Lauda),[25]: 218 flares ("Lionel Blairs", "Tony Blairs", "Rupert Bears", "Dan Dares"),[25]: 225 etc.
Many examples have passed into common usage. Some substitutions have become relatively widespread in England in their contracted form. "To have a butcher's", meaning to have a look, originates from "butcher's hook", an S-shaped hook used by butchers to hang up meat, and dates from the late nineteenth century but has existed independently in general use from around the 1930s simply as "butchers".[25]: 30 Similarly, "use your loaf", meaning "use your head", derives from "loaf of bread" and also dates from the late nineteenth century but came into independent use in the 1930s.[9][page needed]
Conversely, some usages have lapsed or been usurped ("Hounslow Heath" for teeth was replaced by "Hampsteads", from the heath of the same name, starting c. 1887).[26]
In some cases, false etymologies exist. For example, the term "barney" has been used to mean an altercation or fight since the late nineteenth century, although without a clear derivation.[27] In the 2001 feature film Ocean's Eleven, the explanation given for the term is that it derives from Barney Rubble,[28] the name of a cartoon character from the Flintstones television program, which is many decades later in origin.[25]: 14[27]
Rhyming slang is used mainly in London in England but can, to some degree, be understood across the country. Some constructions, however, rely on particular regional accents for the rhymes to work. For instance, the term "Charing Cross" (a place in London), used to mean "horse" since the mid-nineteenth century,[9][page needed] does not work for a speaker without the lot–cloth split, common in London at that time but not nowadays. A similar example is "Joanna" meaning "piano", which is based on the pronunciation of "piano" as "pianna" /piˈænə/.[citation needed] Unique formations also exist in other parts of the United Kingdom, such as in the East Midlands, where the local accent has formed "Derby Road", which rhymes with "cold".[citation needed]
Outside England, rhyming slang is used in many English-speaking countries in the Commonwealth of Nations, with local variations. For example, in Australian slang, the term for an English person is "pommy", which has been proposed as a rhyme on "pomegranate", pronounced "Pummy Grant", which rhymed with "immigrant".[29][30]
Rhyming slang is continually evolving, and new phrases are introduced all the time; new personalities replace old ones — pop culture introduces new words — as in "I haven't a Scooby" (from Scooby Doo, the eponymous cartoon dog of the cartoon series) meaning "I haven't a clue".[31]
Rhyming slang is often used as a substitute for words regarded as taboo, often to the extent that the association with the taboo word becomes unknown over time. "Berk" (often used to mean "foolish person") originates from the most famous of all fox hunts, the "Berkeley Hunt" meaning "cunt"; "cobblers" (often used in the context "what you said is rubbish") originates from "cobbler's awls", meaning "balls" (as in testicles); and "hampton" (usually "'ampton") meaning "prick" (as in penis) originates from "Hampton Wick" (a place in London) – the second part "wick" also entered common usage as "he gets on my wick" (he is an annoying person).[22]: 74
Lesser taboo terms include "pony and trap" for "crap" (as in defecate, but often used to denote nonsense or low quality); to blow a raspberry (rude sound of derision) from raspberry tart for "fart"; "D'Oyly Carte" (an opera company) for "fart"; "Jimmy Riddle" (an American country musician) for "piddle" (as in urinate); "J. Arthur Rank" (a film mogul), "Sherman tank", "Jodrell Bank" or "ham shank" for "wank"; "Bristol Cities" (contracted to 'Bristols') for "titties", etc. "Taking the Mick" or "taking the Mickey" is thought to be a rhyming slang form of "taking the piss", where "Mick" came from "Mickey Bliss".[32]
In December 2004 Joe Pasquale, winner of the fourth series of ITV's I'm a Celebrity... Get Me Out of Here!, became well known for his frequent use of the term "Jacobs", for Jacob's Cream Crackers, a rhyming slang term for knackers, i.e. testicles.
Rhyming slang has been widely used in popular culture including film, television, music, literature, sport and degree classification.
In the British undergraduate degree classification system a first class honours degree is known as a "Geoff Hurst" (First) after the English 1966 World Cup footballer. An upper second class degree (a.k.a. a "2:1") is called an "Attila the Hun", and a lower second class ("2:2") a "Desmond Tutu", while a third class degree is known as a "Thora Hird" or "Douglas Hurd".[33]
Cary Grant's character teaches rhyming slang to his female companion in Mr. Lucky (1943), describing it as 'Australian rhyming slang'. Rhyming slang is also used and described in a scene of the 1967 film To Sir, with Love starring Sidney Poitier, where the English students tell their foreign teacher that the slang is a drag and something for old people.[34] The closing song of the 1969 crime caper The Italian Job ("Getta Bloomin' Move On" a.k.a. "The Self Preservation Society") contains many slang terms.
Rhyming slang has been used to lend authenticity to an East End setting. Examples include Lock, Stock and Two Smoking Barrels (1998) (wherein the slang is translated via subtitles in one scene); The Limey (1999); Sexy Beast (2000); Snatch (2000); Ocean's Eleven (2001); Austin Powers in Goldmember (2002); It's All Gone Pete Tong (2004), after BBC radio disc jockey Pete Tong, whose name is used in this context as rhyming slang for "wrong"; and Green Street Hooligans (2005). In Margin Call (2011), Will Emerson, played by London-born actor Paul Bettany, asks a friend on the telephone, "How's the trouble and strife?" ("wife").
Cockneys vs Zombies (2012) mocked the genesis of rhyming slang terms when a Cockney character calls zombies "Trafalgars", even to his Cockney fellows' puzzlement; he then explains it thus: "Trafalgar square – fox and hare – hairy Greek – five day week – weak and feeble – pins and needles – needle and stitch – Abercrombie and Fitch – Abercrombie: zombie".
The live-action Disney film Mary Poppins Returns song "Trip a Little Light Fantastic" involves Cockney rhyming slang in part of its lyrics, and is primarily spoken by the London lamplighters.
In the animated superhero film Spider-Man: Across the Spider-Verse (2023), the character Spider-Punk, a Camden native, is heard saying: "I haven't got a scooby" ("clue").[35]
Slang had a resurgence of popular interest in Britain beginning in the 1970s, resulting from its use in a number of London-based television programmes such as Steptoe and Son (1970–74) and Not On Your Nellie (1974–75), starring Hylda Baker as Nellie Pickersgill, whose title alludes to the phrase "not on your Nellie Duff", rhyming slang for "not on your puff", i.e. not on your life. Similarly, The Sweeney (1975–78) alludes to the phrase "Sweeney Todd" for "Flying Squad", a rapid response unit of London's Metropolitan Police. In The Fall and Rise of Reginald Perrin (1976–79), a comic twist was added to rhyming slang by way of spurious and fabricated examples which a young man had laboriously attempted to explain to his father (e.g. 'dustbins' meaning 'children', as in 'dustbin lids' = 'kids'; 'Teds' being 'Ted Heath' and thus 'teeth'; and even 'Chitty Chitty' being 'Chitty Chitty Bang Bang', and thus 'rhyming slang'...). It was also featured in an episode of The Good Life in the first season (1975), where Tom and Barbara purchase a wood-burning range from a junk trader called Sam, who litters his language with phony rhyming slang in hopes of convincing suburban residents that he is an authentic traditional Cockney trader. He comes up with a fake story as to the origin of Cockney rhyming slang and is caught out rather quickly. In The Jeffersons season 2 (1976) episode "The Breakup: Part 2", Mr. Bentley explains Cockney rhyming slang to George Jefferson, in that "whistle and flute" means "suit", "apples and pears" means "stairs", and "plates of meat" means "feet".
The use of rhyming slang was also prominent inMind Your Language(1977–79),Citizen Smith(1977–80),Minder[36][page needed](1979–94),Only Fools and Horses(1981–91), andEastEnders(1985–).Mindercould be quite uncompromising in its use of obscure forms without any clarification. Thus the non-Cockney viewer was obliged to deduce that, say, "iron" was "male homosexual" ('iron'='iron hoof'='poof'). One episode in Series 5 ofSteptoe and Sonwas entitled "Any Old Iron", for the same reason, when Albert thinks that Harold is 'on the turn'. Variations of rhyming slang were also used in sitcomBirds of a Feather, by main characters Sharon and Tracey, often to the confusion of character, Dorian Green, who was unfamiliar with the terms.
One early US show to regularly feature rhyming slang was the Saturday morning children's show The Bugaloos (1970–72), with the character of Harmony (Wayne Laryea) often incorporating it in his dialogue.
In popular music, Spike Jones and his City Slickers recorded "So 'Elp Me", based on rhyming slang, in 1950. The 1967 Kinks song "Harry Rag" was based on the usage of the name Harry Wragg as rhyming slang for "fag" (i.e. a cigarette). The idiom made a brief appearance in the UK-based DJ reggae music of the 1980s in the hit "Cockney Translation" by Smiley Culture of South London; this was followed a couple of years later by Domenick and Peter Metro's "Cockney and Yardie". London-based artists such as Audio Bullys and Chas & Dave (and others from elsewhere in the UK, such as The Streets, who are from Birmingham) frequently use rhyming slang in their songs.
British-born MC MF Doom released an ode entitled "Rhymin' Slang" after settling in the UK in 2010. The track was released on the 2012 JJ Doom album Key to the Kuffs.
Another contributor was Lonnie Donegan, who had a song called "My Old Man's a Dustman". In it he says his father has trouble putting on his boots: "He's got such a job to pull them up that he calls them daisy roots".[37]
In modern literature, Cockney rhyming slang is used frequently in the novels and short stories of Kim Newman, for instance in the short story collections The Man from the Diogenes Club (2006) and Secret Files of the Diogenes Club (2007), where it is explained at the end of each book.[38]
It is also parodied in Going Postal by Terry Pratchett, which features a geriatric Junior Postman by the name of Tolliver Groat, a speaker of 'Dimwell Arrhythmic Rhyming Slang', the only rhyming slang on the Disc which does not actually rhyme. Thus, a wig is a 'prunes', from 'syrup of prunes', an obvious parody of the Cockney 'syrup', from 'syrup of figs' = 'wig'. There are numerous other parodies, though it has been pointed out that the result is even more impenetrable than conventional rhyming slang and so may not be quite so illogical as it seems, given the assumed purpose of rhyming slang as a means of communicating in a manner unintelligible to all but the initiated.
In the book Goodbye to All That by Robert Graves, a beer is a "broken square": Welch Fusiliers officers walk into a pub and order broken squares when they see men from the Black Watch, which had a minor blemish on its record of otherwise unbroken squares. Fistfights ensued.
In Dashiell Hammett's The Dain Curse, the protagonist exhibits familiarity with Cockney rhyming slang, referring to gambling at dice with the phrase "rats and mice".
Cockney rhyming slang is one of the main influences for the dialect spoken in A Clockwork Orange (1962).[39] The author of the novel, Anthony Burgess, believed the phrase "as queer as a clockwork orange" was Cockney slang, having heard it in a London pub in 1945, and subsequently used it in the title of his book.[40]
In Scottish football, a number of clubs have nicknames taken from rhyming slang. Partick Thistle are known as the "Harry Rags", taken from the rhyming slang of their 'official' nickname "the Jags". Rangers are known as the "Teddy Bears", which comes from the rhyming slang for "the Gers" (a shortened version of Ran-gers). Heart of Midlothian are known as the "Jambos", from "Jam Tarts", the rhyming slang for "Hearts", the common abbreviation of the club's name. Hibernian are also referred to as "The Cabbage", from "Cabbage and Ribs" being the rhyming slang for Hibs. The phrase "Hampden Roar" (originally describing the loud crowd noise emanating from the national stadium) is employed as "What's the Hampden?"[41] ("What's the score?", an idiom for "What's happening / what's going on?").[41][42]
In rugby league, "meat pie" is used for try.[43]
https://en.wikipedia.org/wiki/Rhyming_slang
Shibboleth is a single sign-on log-in system for computer networks and the Internet. It allows people to sign in using just one identity to various systems run by federations of different organizations or institutions. The federations are often universities or public service organizations.
The Shibboleth Internet2 middleware initiative created an architecture and open-source implementation for identity management and federated identity-based authentication and authorization (or access control) infrastructure based on Security Assertion Markup Language (SAML). Federated identity allows the sharing of information about users from one security domain to other organizations in a federation. This allows cross-domain single sign-on and removes the need for content providers to maintain usernames and passwords. Identity providers (IdPs) supply user information, while service providers (SPs) consume this information and give access to secure content.
The Shibboleth project grew out of Internet2. Today, the project is managed by the Shibboleth Consortium. Two of the most popular software components managed by the Shibboleth Consortium are the Shibboleth Identity Provider and the Shibboleth Service Provider, both of which are implementations of SAML.
The project was named after an identifying passphrase used in the Bible (Judges 12:4–6), because Ephraimites were not able to pronounce "sh".
The Shibboleth project was started in 2000 to facilitate the sharing of resources between organizations with incompatible authentication and authorization infrastructures. Architectural work was performed for over a year prior to any software development. After development and testing, Shibboleth IdP 1.0 was released in July 2003.[1] This was followed by the release of Shibboleth IdP 1.3 in August 2005.
Version 2.0 of the Shibboleth software was a major upgrade released in March 2008.[2] It included both IdP and SP components but, more importantly, Shibboleth 2.0 supported SAML 2.0.
The Shibboleth and SAML protocols were developed during the same timeframe. From the beginning, Shibboleth was based on SAML, but where SAML was found lacking, Shibboleth improvised: its developers implemented features that compensated for missing features in SAML 1.1. Some of these features were later incorporated into SAML 2.0, and in that sense Shibboleth contributed to the evolution of the SAML protocol.
Perhaps the most important contributed feature was the legacy Shibboleth AuthnRequest protocol. Since the SAML 1.1 protocol was inherently an IdP-first protocol, Shibboleth invented a simple HTTP-based authentication request protocol that turned SAML 1.1 into an SP-first protocol. This protocol was first implemented in Shibboleth IdP 1.0 and later refined in Shibboleth IdP 1.3.
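The mechanism is simple enough to sketch: the SP redirects the browser to the IdP with a handful of query parameters identifying the SP, its assertion consumer endpoint, and the originally requested resource. The sketch below is illustrative only; the parameter names (`providerId`, `shire`, `target`, `time`) follow the Shibboleth 1.x request format as commonly documented, but should be treated as assumptions rather than a normative reference.

```python
import time
from urllib.parse import urlencode

def build_sp_first_request(idp_sso_url, provider_id, shire_url, target):
    """Build the redirect URL an SP sends the browser to, turning the
    IdP-first SAML 1.1 exchange into an SP-first flow (Shibboleth 1.x style)."""
    params = {
        "providerId": provider_id,  # the SP's unique identifier
        "shire": shire_url,         # SP's assertion consumer service URL
        "target": target,           # resource the user originally requested
        "time": int(time.time()),   # request timestamp
    }
    return idp_sso_url + "?" + urlencode(params)
```

The IdP, after authenticating the user, posts a SAML 1.1 assertion back to the `shire` URL and relays `target` so the SP can resume the original request.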
Building on that early work, the Liberty Alliance introduced a fully expanded AuthnRequest protocol into the Liberty Identity Federation Framework. Eventually, Liberty ID-FF 1.2 was contributed to OASIS, where it formed the basis of the OASIS SAML 2.0 standard.
Shibboleth is a web-based technology that implements the HTTP/POST artifact and attribute push profiles of SAML, including both Identity Provider (IdP) and Service Provider (SP) components. Shibboleth 1.3 has its own technical overview,[3] architectural document,[4] and conformance document[5] that build on top of the SAML 1.1 specifications.
In the canonical use case:
1. A user first attempts to access a protected resource hosted by a web server (the SP).
2. The SP crafts an authentication request and redirects the user's browser to the IdP.
3. The user authenticates to the IdP, which issues a SAML assertion and posts it back to the SP via the browser.
4. The SP consumes the assertion, obtains the user's attributes, makes an access-control decision, and releases the originally requested resource.
Shibboleth supports a number of variations on this base case, including portal-style flows, whereby the IdP mints an unsolicited assertion to be delivered in the initial access to the SP, and lazy session initiation, which allows an application to trigger content protection through a method of its choice, as required.
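Lazy session initiation can be illustrated with a minimal sketch: the SP only bounces the browser to the IdP when the application actually requests a protected resource and no session exists yet. Everything here (the function, its arguments, the `target` query parameter) is a hypothetical simplification, not Shibboleth's actual API.

```python
def handle_request(path, session, protected_paths, sso_url):
    """Minimal sketch of SP-side lazy session initiation: the SSO flow is
    triggered only for paths the application has marked as protected."""
    if path in protected_paths and session is None:
        # No session yet: redirect the browser to the IdP, remembering
        # the originally requested resource as the relay target.
        return ("redirect", f"{sso_url}?target={path}")
    # Public content, or an already-established session: serve directly.
    return ("serve", path)
```

Public paths are served immediately, so unauthenticated users never pay the cost of the SSO round trip unless they touch protected content.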
Shibboleth 1.3 and earlier do not provide a built-in authentication mechanism, but any web-based authentication mechanism can be used to supply user data for Shibboleth to use. Common systems for this purpose include CAS or Pubcookie. The authentication and single sign-on features of the Java container in which the IdP runs (Tomcat, for example) can also be used.
Shibboleth 2.0 builds on SAML 2.0 standards. The IdP in Shibboleth 2.0 has to do additional processing in order to support passive and forced authentication requests in SAML 2.0. The SP can request a specific method of authentication from the IdP. Shibboleth 2.0 also supports additional encryption capabilities.
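The passive and forced authentication options correspond to the `IsPassive` and `ForceAuthn` attributes on the SAML 2.0 `<AuthnRequest>` element (these attribute names come from the SAML 2.0 core specification; the builder function itself is just an illustrative sketch, not Shibboleth code).

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"

def build_authn_request(request_id, issue_instant, force_authn=False, is_passive=False):
    """Sketch of a SAML 2.0 <AuthnRequest> carrying the ForceAuthn and
    IsPassive flags an SP can set and a conforming IdP must honour."""
    req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {
        "ID": request_id,
        "Version": "2.0",
        "IssueInstant": issue_instant,
        # ForceAuthn: re-authenticate even if a valid session exists.
        "ForceAuthn": "true" if force_authn else "false",
        # IsPassive: never visibly interact with the user.
        "IsPassive": "true" if is_passive else "false",
    })
    return ET.tostring(req, encoding="unicode")
```

Setting `ForceAuthn` asks the IdP to re-challenge the user even with a live session, while `IsPassive` forbids any visible interaction, so the IdP must either reuse an existing session or return an error.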
Shibboleth's access control is performed by matching attributes supplied by IdPs against rules defined by SPs. An attribute is any piece of information about a user, such as "member of this community", "Alice Smith", or "licensed under contract A". User identity is considered an attribute and is only passed when explicitly required, which preserves user privacy. Attributes can be written in Java or pulled from directories and databases. Standard X.520 attributes are most commonly used, but new attributes can be arbitrarily defined as long as they are understood and interpreted similarly by the IdP and SP in a transaction.
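The attribute-matching idea can be sketched as a simple rule check: the SP grants access only if every attribute its policy requires is present with an acceptable value. The attribute names below are illustrative, not a fixed Shibboleth schema, and note how identity never needs to appear in the rules at all.

```python
def authorize(user_attributes, rules):
    """Sketch of SP-side access control: every rule names a required
    attribute and the set of values that satisfy it. The user's name or
    identity need not be among the attributes checked."""
    for attr, allowed_values in rules.items():
        if user_attributes.get(attr) not in allowed_values:
            return False  # missing attribute, or value not acceptable
    return True
```

For example, a policy like `{"affiliation": {"member", "staff"}}` admits anyone the IdP asserts as a member or staff, without the SP ever learning who the user is.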
Trust between domains is implemented using public key cryptography (often simply TLS server certificates) and metadata that describes providers. The use of the information passed is controlled through agreements. Federations are often used to simplify these relationships by aggregating large numbers of providers that agree to use common rules and contracts.
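The role of metadata in this trust model can be sketched as a lookup: a signed message is trusted only when the signing key matches one registered for that entity in the federation metadata both parties share. This is a toy model with a plain dictionary standing in for real SAML metadata; actual deployments compare certificates, not opaque strings.

```python
def trusted_key(metadata, entity_id, presented_key):
    """Sketch of metadata-driven trust: the presented signing key must be
    one of the keys the federation metadata registers for that entity."""
    entry = metadata.get(entity_id)
    if entry is None:
        return False  # unknown provider: not part of the federation
    return presented_key in entry["signing_keys"]
```

Because every member validates against the same aggregated metadata, adding a new provider to the federation automatically makes it verifiable by all the others without any pairwise key exchange.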
Shibboleth is open-source software, provided under the Apache License 2.0. Many extensions have been contributed by other groups.
https://en.wikipedia.org/wiki/Shibboleth_Single_Sign-on_architecture