In mathematics, a tubular neighborhood of a submanifold of a smooth manifold is an open set around it resembling the normal bundle.

The idea behind a tubular neighborhood can be explained in a simple example. Consider a smooth curve in the plane without self-intersections. On each point of the curve draw a line perpendicular to the curve. Unless the curve is straight, these lines will intersect among themselves in a rather complicated fashion. However, if one looks only in a narrow band around the curve, the portions of the lines in that band will not intersect, and will cover the entire band without gaps. This band is a tubular neighborhood.

In general, let S be a submanifold of a manifold M, and let N be the normal bundle of S in M. Here S plays the role of the curve and M the role of the plane containing the curve. Consider the natural map which establishes a bijective correspondence between the zero section N_0 of N and the submanifold S of M. An extension j of this map to the entire normal bundle N with values in M, such that j(N) is an open set in M and j is a homeomorphism between N and j(N), is called a tubular neighbourhood. Often one calls the open set T = j(N), rather than j itself, a tubular neighbourhood of S; it is implicitly assumed that the homeomorphism j mapping N to T exists.

A normal tube to a smooth curve is a manifold defined as the union of all discs of a fixed small radius centred on the curve and lying in planes normal to it.

Let S ⊆ M be smooth manifolds.
A tubular neighborhood of S in M is a vector bundle π : E → S together with a smooth map J : E → M satisfying suitable conditions (in particular, J restricts to the natural identification of the zero section with S, and J is a diffeomorphism onto an open subset of M). The normal bundle is a tubular neighborhood, and because of the diffeomorphism condition, all tubular neighborhoods have the same dimension, namely (the dimension of the vector bundle considered as a manifold) that of M.

Generalizations of smooth manifolds yield generalizations of tubular neighborhoods, such as regular neighborhoods, or spherical fibrations for Poincaré spaces. These generalizations are used to produce analogs of the normal bundle, or rather of the stable normal bundle, which are replacements for the tangent bundle (which does not admit a direct description for these spaces).
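As a concrete illustration of the definition (an example added for this text, with the specific map chosen purely for convenience), consider the circle embedded in the plane:

```latex
% Example: S = S^1 inside M = R^2.  The normal bundle is trivial,
% N \cong S^1 \times \mathbb{R}.  Squash each fibre homeomorphically
% into (-1/2, 1/2) and push out radially:
\[
  j\colon S^{1}\times\mathbb{R}\longrightarrow\mathbb{R}^{2},
  \qquad
  j(\theta,t)=\left(1+\frac{t}{1+2\lvert t\rvert}\right)(\cos\theta,\sin\theta).
\]
% j restricts to the inclusion of the circle on the zero section (t = 0)
% and is a homeomorphism onto the open annulus 1/2 < |x| < 3/2, so
% T = j(N) is a tubular neighbourhood of the circle.
```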
https://en.wikipedia.org/wiki/Tubular_neighborhood
The following is a list of named topologies or topological spaces, many of which are counterexamples in topology and related branches of mathematics. This is not a list of properties that a topology or topological space might possess; for that, see List of general topology topics and Topological property. The topologies listed are a known source of counterexamples for point-set topology. Further groups of named topologies include natural topologies, compactifications, and topologies of uniform convergence.
https://en.wikipedia.org/wiki/List_of_topologies
In topology and related fields of mathematics, a topological space X is called a regular space if every closed subset C of X and a point p not contained in C have non-overlapping open neighborhoods.[1] Thus p and C can be separated by neighborhoods. This condition is known as Axiom T3. The term "T3 space" usually means "a regular Hausdorff space". These conditions are examples of separation axioms.

A topological space X is a regular space if, given any closed set F and any point x that does not belong to F, there exists a neighbourhood U of x and a neighbourhood V of F that are disjoint. Concisely put, it must be possible to separate x and F with disjoint neighborhoods.

A T3 space or regular Hausdorff space is a topological space that is both regular and a Hausdorff space. (A Hausdorff space or T2 space is a topological space in which any two distinct points are separated by neighbourhoods.) It turns out that a space is T3 if and only if it is both regular and T0. (A T0 or Kolmogorov space is a topological space in which any two distinct points are topologically distinguishable, i.e., for every pair of distinct points, at least one of them has an open neighborhood not containing the other.) Indeed, if a space is Hausdorff then it is T0, and each T0 regular space is Hausdorff: given two distinct points, at least one of them misses the closure of the other one, so (by regularity) there exist disjoint neighborhoods separating one point from (the closure of) the other.

Although the definitions presented here for "regular" and "T3" are not uncommon, there is significant variation in the literature: some authors switch the definitions of "regular" and "T3" as they are used here, or use both terms interchangeably. This article uses the term "regular" freely, but will usually say "regular Hausdorff", which is unambiguous, instead of the less precise "T3". For more on this issue, see History of the separation axioms.

A locally regular space is a topological space where every point has an open neighbourhood that is regular.
Every regular space is locally regular, but the converse is not true. A classical example of a locally regular space that is not regular is the bug-eyed line.

A regular space is necessarily also preregular, i.e., any two topologically distinguishable points can be separated by neighbourhoods. Since a Hausdorff space is the same as a preregular T0 space, a regular space which is also T0 must be Hausdorff (and thus T3). In fact, a regular Hausdorff space satisfies the slightly stronger condition T2½. (However, such a space need not be completely Hausdorff.) Thus, the definition of T3 may cite T0, T1, or T2½ instead of T2 (Hausdorffness); all are equivalent in the context of regular spaces.

Speaking more theoretically, the conditions of regularity and T3-ness are related by Kolmogorov quotients. A space is regular if and only if its Kolmogorov quotient is T3; and, as mentioned, a space is T3 if and only if it's both regular and T0. Thus a regular space encountered in practice can usually be assumed to be T3, by replacing the space with its Kolmogorov quotient.

There are many results for topological spaces that hold for both regular and Hausdorff spaces. Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later. On the other hand, those results that are truly about regularity generally don't also apply to nonregular Hausdorff spaces.

There are many situations where another condition of topological spaces (such as normality, pseudonormality, paracompactness, or local compactness) will imply regularity if some weaker separation axiom, such as preregularity, is satisfied.[2] Such conditions often come in two versions: a regular version and a Hausdorff version. Although Hausdorff spaces aren't generally regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular.
Thus from a certain point of view, regularity is not really the issue here, and we could impose a weaker condition instead to get the same result. However, definitions are usually still phrased in terms of regularity, since this condition is better known than any weaker one.

Most topological spaces studied in mathematical analysis are regular; in fact, they are usually completely regular, which is a stronger condition. Regular spaces should also be contrasted with normal spaces.

A zero-dimensional space with respect to the small inductive dimension has a base consisting of clopen sets. Every such space is regular.

As described above, any completely regular space is regular, and any T0 space that is not Hausdorff (and hence not preregular) cannot be regular. Most examples of regular and nonregular spaces studied in mathematics may be found in those two articles. On the other hand, spaces that are regular but not completely regular, or preregular but not regular, are usually constructed only to provide counterexamples to conjectures, showing the boundaries of possible theorems. Of course, one can easily find regular spaces that are not T0, and thus not Hausdorff, such as an indiscrete space, but these examples provide more insight on the T0 axiom than on regularity. An example of a regular space that is not completely regular is the Tychonoff corkscrew.

Most interesting spaces in mathematics that are regular also satisfy some stronger condition. Thus, regular spaces are usually studied to find properties and theorems, such as the ones below, that are actually applied to completely regular spaces, typically in analysis.

There exist Hausdorff spaces that are not regular. An example is the K-topology on the set R of real numbers.
More generally, if C is a fixed nonclosed subset of R with empty interior with respect to the usual Euclidean topology, one can construct a finer topology on R by taking as a base the collection of all sets U and U ∖ C for U open in the usual topology. That topology will be Hausdorff, but not regular.

Suppose that X is a regular space. Then, given any point x and neighbourhood G of x, there is a closed neighbourhood E of x that is a subset of G. In fancier terms, the closed neighbourhoods of x form a local base at x. In fact, this property characterises regular spaces: if the closed neighbourhoods of each point in a topological space form a local base at that point, then the space must be regular.

Taking the interiors of these closed neighbourhoods, we see that the regular open sets form a base for the open sets of the regular space X. This property is actually weaker than regularity; a topological space whose regular open sets form a base is semiregular.
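The local-base characterisation in the preceding paragraph can be written symbolically (a restatement of the text, not an addition to the theory):

```latex
\[
  X \text{ is regular}
  \iff
  \forall x\in X,\ \forall\, G \text{ open with } x\in G,\
  \exists\, E \text{ closed}:\;
  x\in\operatorname{int}(E)\subseteq E\subseteq G .
\]
```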
https://en.wikipedia.org/wiki/Regular_space
A semiregular space is a topological space whose regular open sets (sets that equal the interiors of their closures) form a base for the topology.[1]

Every regular space is semiregular, and every topological space may be embedded into a semiregular space.[1]

The space X = R² ∪ {0*} with the double origin topology[2] and the Arens square[3] are examples of spaces that are Hausdorff and semiregular, but not regular.
https://en.wikipedia.org/wiki/Semiregular_space
In topology and related fields of mathematics, there are several restrictions that one often makes on the kinds of topological spaces that one wishes to consider. Some of these restrictions are given by the separation axioms. These are sometimes called Tychonoff separation axioms, after Andrey Tychonoff.

The separation axioms are not fundamental axioms like those of set theory, but rather defining properties which may be specified to distinguish certain types of topological spaces. The separation axioms are denoted with the letter "T" after the German Trennungsaxiom ("separation axiom"), and increasing numerical subscripts denote stronger and stronger properties.

The precise definitions of the separation axioms have varied over time. Especially in older literature, different authors might have different definitions of each condition.

Before we define the separation axioms themselves, we give concrete meaning to the concept of separated sets (and points) in topological spaces. (Separated sets are not the same as separated spaces, defined in the next section.)

The separation axioms are about the use of topological means to distinguish disjoint sets and distinct points. It's not enough for elements of a topological space to be distinct (that is, unequal); we may want them to be topologically distinguishable. Similarly, it's not enough for subsets of a topological space to be disjoint; we may want them to be separated (in any of various ways). The separation axioms all say, in one way or another, that points or sets that are distinguishable or separated in some weak sense must also be distinguishable or separated in some stronger sense.

Let X be a topological space. Then two points x and y in X are topologically distinguishable if they do not have exactly the same neighbourhoods (or equivalently the same open neighbourhoods); that is, at least one of them has a neighbourhood that is not a neighbourhood of the other (or equivalently there is an open set that one point belongs to but the other point does not).
That is, at least one of the points does not belong to the other's closure.

Two points x and y are separated if each of them has a neighbourhood that is not a neighbourhood of the other; that is, neither belongs to the other's closure. More generally, two subsets A and B of X are separated if each is disjoint from the other's closure, though the closures themselves do not have to be disjoint. Equivalently, each subset is included in an open set disjoint from the other subset.

All of the remaining conditions for separation of sets may also be applied to points (or to a point and a set) by using singleton sets. Points x and y will be considered separated (by neighbourhoods, by closed neighbourhoods, by a continuous function, or precisely by a function) if and only if their singleton sets {x} and {y} are separated according to the corresponding criterion.

Subsets A and B are separated by neighbourhoods if they have disjoint neighbourhoods. They are separated by closed neighbourhoods if they have disjoint closed neighbourhoods. They are separated by a continuous function if there exists a continuous function f from the space X to the real line R such that A is a subset of the preimage f⁻¹({0}) and B is a subset of the preimage f⁻¹({1}). Finally, they are precisely separated by a continuous function if there exists a continuous function f from X to R such that A equals the preimage f⁻¹({0}) and B equals f⁻¹({1}).

These conditions are given in order of increasing strength: any two topologically distinguishable points must be distinct, and any two separated points must be topologically distinguishable. Any two separated sets must be disjoint, any two sets separated by neighbourhoods must be separated, and so on.

These definitions all use essentially the preliminary definitions above. Many of these names have alternative meanings in some of the mathematical literature; for example, the meanings of "normal" and "T4" are sometimes interchanged, similarly "regular" and "T3", etc.
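The "order of increasing strength" described above, applied to two subsets A and B, can be summarised as a chain of implications (a restatement of the text, not new material):

```latex
\begin{align*}
  &A,B \text{ precisely separated by a function}
   \;\Longrightarrow\; A,B \text{ separated by a function}\\
  &\Longrightarrow\; A,B \text{ separated by closed neighbourhoods}
   \;\Longrightarrow\; A,B \text{ separated by neighbourhoods}\\
  &\Longrightarrow\; A,B \text{ separated}
   \;\Longrightarrow\; A,B \text{ disjoint}.
\end{align*}
```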
Many of the concepts also have several names; however, the one listed first is always least likely to be ambiguous.

Most of these axioms have alternative definitions with the same meaning; the definitions given here fall into a consistent pattern that relates the various notions of separation defined in the previous section. Other possible definitions can be found in the individual articles. In all of the following definitions, X is again a topological space.

The following table summarizes the separation axioms as well as the implications between them: cells which are merged represent equivalent properties, each axiom implies the ones in the cells to its left, and if we assume the T1 axiom, then each axiom also implies the ones in the cells above it (for example, all normal T1 spaces are also completely regular).

The T0 axiom is special in that it can not only be added to a property (so that completely regular plus T0 is Tychonoff) but also be subtracted from a property (so that Hausdorff minus T0 is R1), in a fairly precise sense; see Kolmogorov quotient for more information. When applied to the separation axioms, this leads to the relationships in the table to the left below. In this table, one goes from the right side to the left side by adding the requirement of T0, and one goes from the left side to the right side by removing that requirement, using the Kolmogorov quotient operation. (The names in parentheses given on the left side of this table are generally ambiguous or at least less well known, but they are used in the diagram below.)

Other than the inclusion or exclusion of T0, the relationships between the separation axioms are indicated in the diagram to the right. In this diagram, the non-T0 version of a condition is on the left side of the slash, and the T0 version is on the right side. Letters are used for abbreviation as follows: "P" = "perfectly", "C" = "completely", "N" = "normal", and "R" (without a subscript) = "regular".
A bullet indicates that there is no special name for a space at that spot. The dash at the bottom indicates no condition.

Two properties may be combined using this diagram by following the diagram upwards until both branches meet. For example, if a space is both completely normal ("CN") and completely Hausdorff ("CT2"), then following both branches up, one finds the spot "•/T5". Since completely Hausdorff spaces are T0 (even though completely normal spaces may not be), one takes the T0 side of the slash, so a completely normal completely Hausdorff space is the same as a T5 space (less ambiguously known as a completely normal Hausdorff space, as can be seen in the table above).

As can be seen from the diagram, normal and R0 together imply a host of other properties, since combining the two properties leads through the many nodes on the right-side branch. Since regularity is the most well known of these, spaces that are both normal and R0 are typically called "normal regular spaces". In a somewhat similar fashion, spaces that are both normal and T1 are often called "normal Hausdorff spaces" by people who wish to avoid the ambiguous "T" notation. These conventions can be generalised to other regular spaces and Hausdorff spaces. (Note that the diagram does not reflect the fact that perfectly normal spaces are always regular.)

There are some other conditions on topological spaces that are sometimes classified with the separation axioms, but these don't fit in with the usual separation axioms as completely. Other than their definitions, they aren't discussed here; see their individual articles.
https://en.wikipedia.org/wiki/Separation_axiom
In mathematics, localization of a category consists of adding to a category inverse morphisms for some collection of morphisms, constraining them to become isomorphisms. This is formally similar to the process of localization of a ring; in general it makes objects isomorphic that were not so before. In homotopy theory, for example, there are many examples of mappings that are invertible up to homotopy, and so large classes of homotopy equivalent spaces. Calculus of fractions is another name for working in a localized category.

A category C consists of objects and morphisms between these objects. The morphisms reflect relations between the objects. In many situations, it is meaningful to replace C by another category C′ in which certain morphisms are forced to be isomorphisms. This process is called localization.

For example, in the category of R-modules (for some fixed commutative ring R), the multiplication by a fixed element r of R is typically (i.e., unless r is a unit) not an isomorphism. The category that is most closely related to R-modules, but where this map is an isomorphism, turns out to be the category of R[S⁻¹]-modules. Here R[S⁻¹] is the localization of R with respect to the (multiplicatively closed) subset S consisting of all powers of r, S = {1, r, r², r³, …}. The expression "most closely related" is formalized by two conditions: first, there is a functor φ sending any R-module to its localization with respect to S. Moreover, given any category C and any functor F sending the multiplication map by r on any R-module (see above) to an isomorphism of C, there is a unique functor G such that F = G ∘ φ.

The above example of localization of R-modules is abstracted in the following definition. In this shape, it applies in many more examples, some of which are sketched below.
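The ring-level picture behind this example can be sketched numerically. The snippet below is an illustration written for this text (the helper name `in_localization` is invented, not a library API): it checks membership in Z[1/r] = Z[S⁻¹] and shows that multiplication by r, while not invertible on Z, becomes invertible after localizing.

```python
import math
from fractions import Fraction

def in_localization(x: Fraction, r: int) -> bool:
    """True iff x lies in Z[1/r]: in lowest terms, the denominator
    must divide a power of r (illustrative helper, not a library API)."""
    d = x.denominator
    while (g := math.gcd(d, r)) > 1:   # strip factors shared with r
        d //= g
    return d == 1

r = 2
x = Fraction(3, 8)                     # 3/8 = 3 * 2^(-3) lies in Z[1/2]
assert in_localization(x, r)
# Multiplication by r = 2 has an inverse (division by 2) inside Z[1/2]:
assert in_localization(r * x, r) and in_localization(x / r, r)
# But in Z itself, 1 has no preimage under multiplication by 2:
assert Fraction(1, 2).denominator != 1
```

Fractions whose denominator involves a prime other than r (such as 1/3 when r = 2) fall outside the localization, mirroring the fact that only the powers of r are inverted.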
Given a category C and some class W of morphisms in C, the localization C[W⁻¹] is another category which is obtained by inverting all the morphisms in W. More formally, it is characterized by a universal property: there is a natural localization functor C → C[W⁻¹], and given another category D, a functor F : C → D factors uniquely over C[W⁻¹] if and only if F sends all arrows in W to isomorphisms. Thus, the localization of the category is unique up to unique isomorphism of categories, provided that it exists.

One construction of the localization is done by declaring that its objects are the same as those in C, but the morphisms are enhanced by adding a formal inverse for each morphism in W. Under suitable hypotheses on W,[1] the morphisms from an object X to an object Y are given by roofs X ← X′ → Y (where X′ is an arbitrary object of C and the map f : X′ → X is in the given class W of morphisms), modulo certain equivalence relations. These relations turn the map going in the "wrong" direction into an inverse of f. This "calculus of fractions" can be seen as a generalization of the construction of rational numbers as equivalence classes of pairs of integers.

This procedure, however, in general yields a proper class of morphisms between X and Y. Typically, the morphisms in a category are only allowed to form a set. Some authors simply ignore such set-theoretic issues.

A rigorous construction of localization of categories, avoiding these set-theoretic issues, was one of the initial reasons for the development of the theory of model categories: a model category M is a category in which there are three classes of maps; one of these classes is the class of weak equivalences. The homotopy category Ho(M) is then the localization with respect to the weak equivalences. The axioms of a model category ensure that this localization can be defined without set-theoretical difficulties.

Some authors also define a localization of a category C to be an idempotent and coaugmented functor.
A coaugmented functor is a pair (L, l) where L : C → C is an endofunctor and l : Id → L is a natural transformation from the identity functor to L (called the coaugmentation). A coaugmented functor is idempotent if, for every X, both maps L(l_X), l_{L(X)} : L(X) → LL(X) are isomorphisms. It can be proven that in this case, both maps are equal.[2]

This definition is related to the one given above as follows: applying the first definition, there is, in many situations, not only a canonical functor C → C[W⁻¹], but also a functor in the opposite direction. For example, modules over the localization R[S⁻¹] of a ring are also modules over R itself, giving a functor in the opposite direction. In this case, the composition is a localization of C in the sense of an idempotent and coaugmented functor.

Serre introduced the idea of working in homotopy theory modulo some class C of abelian groups. This meant that groups A and B were treated as isomorphic if, for example, A/B lay in C.

In the theory of modules over a commutative ring R, when R has Krull dimension ≥ 2, it can be useful to treat modules M and N as pseudo-isomorphic if M/N has support of codimension at least two. This idea is much used in Iwasawa theory.

The derived category of an abelian category is much used in homological algebra. It is the localization of the category of chain complexes (up to homotopy) with respect to the quasi-isomorphisms.

Given an abelian category A and a Serre subcategory B, one can define the quotient category A/B, which is an abelian category equipped with an exact functor from A to A/B that is essentially surjective and has kernel B. This quotient category can be constructed as a localization of A by the class of morphisms whose kernel and cokernel are both in B.

An isogeny from an abelian variety A to another one B is a surjective morphism with finite kernel. Some theorems on abelian varieties require the idea of abelian variety up to isogeny for their convenient statement.
For example, given an abelian subvariety A1 of A, there is another subvariety A2 of A such that A1 × A2 is isogenous to A (Poincaré's reducibility theorem: see for example Abelian Varieties by David Mumford). To call this a direct sum decomposition, we should work in the category of abelian varieties up to isogeny.

The localization of a topological space, introduced by Dennis Sullivan, produces another topological space whose homology is a localization of the homology of the original space.

A much more general concept from homotopical algebra, including as special cases both the localization of spaces and of categories, is the Bousfield localization of a model category. Bousfield localization forces certain maps to become weak equivalences, which is in general weaker than forcing them to become isomorphisms.[3]
https://en.wikipedia.org/wiki/Localization_of_a_category
In mathematics, well-behaved topological spaces can be localized at primes, in a similar way to the localization of a ring at a prime. This construction was described by Dennis Sullivan in 1970 lecture notes that were finally published in (Sullivan 2005). The reason for doing this was in line with an idea of making topology, more precisely algebraic topology, more geometric. Localization of a space X is a geometric form of the algebraic device of choosing 'coefficients' in order to simplify the algebra in a given problem. Instead of that, the localization can be applied to the space X directly, giving a second space Y.

Let A be a subring of the rational numbers, and let X be a simply connected CW complex. Then there is a simply connected CW complex Y together with a map from X to Y with the following property: the map induces isomorphisms from the A-localizations of the homology and homotopy groups of X to the homology and homotopy groups of Y. This space Y is unique up to homotopy equivalence, and is called the localization of X at A. If A is the localization of Z at a prime p, then the space Y is called the localization of X at p.
https://en.wikipedia.org/wiki/Localization_of_a_topological_space
In cryptography, Treyfer is a block cipher/MAC designed in 1997 by Gideon Yuval. Aimed at smart card applications, the algorithm is extremely simple and compact; it can be implemented in just 29 bytes of 8051 machine code.[1]

Treyfer has a rather small key size and block size of 64 bits each. All operations are byte-oriented, and there is a single 8×8-bit S-box. The S-box is left undefined; the implementation can simply use whatever data is available in memory. In each round, each byte has added to it the S-box value of the sum of a key byte and the previous data byte, then it is rotated left one bit. The design attempts to compensate for the simplicity of this round transformation by using 32 rounds.

Due to the simplicity of its key schedule, using the same eight key bytes in each round, Treyfer was one of the first ciphers shown to be susceptible to a slide attack. This cryptanalysis, which is independent of the number of rounds and the choice of S-box, requires 2^32 known plaintexts and 2^44 computation time.

A simple implementation of Treyfer can be done as follows.[2]
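The referenced implementation is not reproduced in this text; the following Python sketch follows the round description above (add a key byte, apply the S-box, XOR with the next data byte, rotate left one bit; 32 rounds over all 8 bytes). The S-box contents here are an arbitrary placeholder, since the cipher leaves them undefined, and the decryption routine is included only to check the round trip, not because the source specifies one.

```python
NUM_ROUNDS = 32
# The S-box is deliberately left undefined by the cipher; any 256-byte
# table works.  This particular table is a placeholder (assumption).
SBOX = bytes((i * 197 + 37) % 256 for i in range(256))

def treyfer_encrypt(block: bytes, key: bytes) -> bytes:
    """Encrypt one 8-byte block under an 8-byte key."""
    text = bytearray(block)
    t = text[0]
    for i in range(8 * NUM_ROUNDS):
        t = (t + key[i % 8]) & 0xFF            # add a key byte
        t = SBOX[t] ^ text[(i + 1) % 8]        # S-box, mix with next byte
        t = ((t << 1) | (t >> 7)) & 0xFF       # rotate left one bit
        text[(i + 1) % 8] = t
    return bytes(text)

def treyfer_decrypt(block: bytes, key: bytes) -> bytes:
    """Undo the rounds in reverse order."""
    text = bytearray(block)
    for i in range(8 * NUM_ROUNDS - 1, -1, -1):
        t = text[(i + 1) % 8]
        t = ((t >> 1) | (t << 7)) & 0xFF       # undo the left rotation
        prev = text[i % 8]                     # running value entering step i
        text[(i + 1) % 8] = SBOX[(prev + key[i % 8]) & 0xFF] ^ t
    return bytes(text)
```

Note that decryption does not require the S-box to be invertible: each round's S-box input can be recomputed from the key byte and the previous data byte, which is consistent with the slide attack above being independent of the S-box choice.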
https://en.wikipedia.org/wiki/Treyfer
Ascon is a family of lightweight authenticated ciphers that was selected by the US National Institute of Standards and Technology (NIST) for future standardization of lightweight cryptography.[2]

Ascon was developed in 2014 by a team of researchers from Graz University of Technology, Infineon Technologies, Lamarr Security Research, and Radboud University.[3] The cipher family was chosen as a finalist of the CAESAR Competition[3] in February 2019. NIST announced its decision on February 7, 2023,[3] with intermediate steps that would lead to the eventual standardization.[2]

The design is based on a sponge construction along the lines of SpongeWrap and MonkeyDuplex. This design makes it easy to reuse Ascon in multiple ways (as a cipher, hash, or a MAC).[4] As of February 2023, the Ascon suite contained seven ciphers,[3] and its main components have been borrowed from other designs.[4]

The ciphers are parameterizable by the key length k (up to 128 bits), the "rate" (block size) r, and two numbers of rounds a, b. All algorithms support authenticated encryption with plaintext P and additional authenticated data A (which remains unencrypted). The encryption input also includes a public nonce N; the output includes an authentication tag T, and the size of the ciphertext C is the same as that of P. The decryption uses N, A, C, and T as inputs and produces either P or signals verification failure if the message has been altered. Nonce and tag have the same size as the key K (k bits).[6]

In the CAESAR submission, two sets of parameters were recommended.[6]

The data in both A and P is padded with a single bit with the value of 1 and a number of zeros to the nearest multiple of r bits.
As an exception, if A is an empty string, there is no padding at all.[7]

The state consists of 320 bits, so the capacity c = 320 − r.[8] The state is initialized by an initialization vector IV (constant for each cipher type, e.g., hex 80400c0600000000 for Ascon-128) concatenated with K and N.[9]

The initial state is transformed by applying the transformation function p a times (p^a). On encryption, each word of A || P is XORed into the state and then p is applied b times (p^b). The ciphertext C is contained in the first r bits of the result of the XOR. Decryption is near-identical to encryption.[8] The final stage that produces the tag T consists of another application of p^a; special values are XORed into the last c bits after the initialization, at the end of A, and before the finalization.[7] The transformation p consists of three layers.

Hash values of an empty string (i.e., a zero-length input text) are defined for both the XOF and non-XOF variants.[10] Even a small change in the message will (with overwhelming probability) result in a different hash, due to the avalanche effect.
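The padding rule just described can be sketched as follows. This is a hypothetical helper written for this text, not code from the Ascon reference implementation; it takes the rate in bytes rather than bits and assumes the padding bit sits at the top of a new byte.

```python
def ascon_pad(data: bytes, rate_bytes: int) -> bytes:
    """Append a single 1 bit (0x80 as the top bit of a new byte) and
    then zeros, up to the next multiple of the rate."""
    padded = data + b"\x80"
    padded += b"\x00" * (-len(padded) % rate_bytes)
    return padded

def pad_associated_data(a: bytes, rate_bytes: int) -> bytes:
    # The exception noted above: empty associated data gets no padding.
    return ascon_pad(a, rate_bytes) if a else b""

# With a 64-bit (8-byte) rate:
assert ascon_pad(b"hi", 8) == b"hi\x80" + b"\x00" * 5
assert len(ascon_pad(b"8bytes!!", 8)) == 16   # already full: a whole extra block
assert pad_associated_data(b"", 8) == b""
```

Padding even an already-full block keeps the construction unambiguous: the receiver can always strip padding by removing trailing zeros and the final 1 bit.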
https://en.wikipedia.org/wiki/Ascon_(cipher)
The National Institute of Standards and Technology (NIST) is an agency of the United States Department of Commerce whose mission is to promote American innovation and industrial competitiveness. NIST's activities are organized into physical science laboratory programs that include nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement. From 1901 to 1988, the agency was named the National Bureau of Standards.[4]

The Articles of Confederation, ratified by the colonies in 1781, provided: "The United States in Congress assembled shall also have the sole and exclusive right and power of regulating the alloy and value of coin struck by their own authority, or by that of the respective states—fixing the standards of weights and measures throughout the United States."[5] Article 1, Section 8, of the Constitution of the United States, ratified in 1789, granted these powers to the new Congress: "The Congress shall have power ... To coin money, regulate the value thereof, and of foreign coin, and fix the standard of weights and measures."[6]

In January 1790, President George Washington, in his first annual message to Congress, said, "Uniformity in the currency, weights, and measures of the United States is an object of great importance, and will, I am persuaded, be duly attended to."[7] On October 25, 1791, Washington again appealed to Congress: "A uniformity of the weights and measures of the country is among the important objects submitted to you by the Constitution and if it can be derived from a standard at once invariable and universal, must be no less honorable to the public council than conducive to the public convenience."[8]

In 1821, President John Quincy Adams declared, "Weights and measures may be ranked among the necessities of life to every individual of human society."[9] Nevertheless, it was not until 1838 that the United States government adopted a uniform set of standards.[6]

From 1830 until 1901, the role
of overseeing weights and measures was carried out by the Office of Standard Weights and Measures, which was part of the Survey of the Coast—renamed the United States Coast Survey in 1836 and the United States Coast and Geodetic Survey in 1878—in the United States Department of the Treasury.[10][11][12]

In 1901, in response to a bill proposed by Congressman James H. Southard (R, Ohio), the Bureau of Standards was founded with the mandate to provide standard weights and measures, and to serve as the national physical laboratory for the United States. Southard had previously sponsored a bill for metric conversion of the United States.[13]

President Theodore Roosevelt appointed Samuel W. Stratton as the first director. The budget for the first year of operation was $40,000. The Bureau took custody of the copies of the kilogram and meter bars that were the standards for US measures, and set up a program to provide metrology services for United States scientific and commercial users. A laboratory site was constructed in Washington, DC, and instruments were acquired from the national physical laboratories of Europe. In addition to weights and measures, the Bureau developed instruments for electrical units and for measurement of light. In 1905 a meeting was called that would be the first "National Conference on Weights and Measures".

Initially conceived as purely a metrology agency, the Bureau of Standards was directed by Herbert Hoover to set up divisions to develop commercial standards for materials and products.[13] Some of these standards were for products intended for government use, but product standards also affected private-sector consumption. Quality standards were developed for products including some types of clothing, automobile brake systems and headlamps, antifreeze, and electrical safety. During World War I, the Bureau worked on multiple problems related to war production, even operating its own facility to produce optical glass when European supplies were cut off.
Between the wars, Harry Diamond of the Bureau developed a blind-approach radio aircraft landing system. During World War II, military research and development was carried out, including development of radio propagation forecast methods, the proximity fuze and the standardized airframe used originally for Project Pigeon, and shortly afterwards the autonomously radar-guided Bat anti-ship guided bomb and the Kingfisher family of torpedo-carrying missiles.

In 1948, financed by the United States Air Force, the Bureau began design and construction of SEAC, the Standards Eastern Automatic Computer. The computer went into operation in May 1950 using a combination of vacuum tubes and solid-state diode logic. About the same time the Standards Western Automatic Computer was built at the Los Angeles office of the NBS by Harry Huskey and used for research there. A mobile version, DYSEAC, was built for the Signal Corps in 1954.

Due to a changing mission, the "National Bureau of Standards" became the "National Institute of Standards and Technology" in 1988.[10] Following the September 11, 2001 attacks, under the National Construction Safety Team Act (NCST), NIST conducted the official investigation into the collapse of the World Trade Center buildings. Following the 2021 Surfside condominium building collapse, NIST sent engineers to the site to investigate the cause of the collapse.[14]

In 2019, NIST launched a program named NIST on a Chip to decrease the size of instruments from lab machines to chip size. Applications include aircraft testing, communication with satellites for navigation purposes, and temperature and pressure measurement.[15] In 2023, the Biden administration began plans to create a U.S. AI Safety Institute within NIST to coordinate AI safety matters.
According to The Washington Post, NIST is considered "notoriously underfunded and understaffed", which could present an obstacle to these efforts.[16]

NIST, known between 1901 and 1988 as the National Bureau of Standards (NBS), is a measurement standards laboratory, also known as a national metrology institute (NMI), and a non-regulatory agency of the United States Department of Commerce. The institute's official mission is to:[17] Promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.

NIST had an operating budget for fiscal year 2007 (October 1, 2006 – September 30, 2007) of about $843.3 million. NIST's 2009 budget was $992 million, and it also received $610 million as part of the American Recovery and Reinvestment Act.[18] NIST employs about 2,900 scientists, engineers, technicians, and support and administrative personnel. About 1,800 NIST associates (guest researchers and engineers from American companies and foreign countries) complement the staff. In addition, NIST partners with 1,400 manufacturing specialists and staff at nearly 350 affiliated centers around the country. NIST publishes Handbook 44, which provides the "Specifications, tolerances, and other technical requirements for weighing and measuring devices".
The Congress of 1866 made the use of the metric system in commerce a legally protected activity through the passage of the Metric Act of 1866.[19] On May 20, 1875, 17 out of 20 countries signed a document known as the Metric Convention or the Treaty of the Meter, which established the International Bureau of Weights and Measures under the control of an international committee elected by the General Conference on Weights and Measures.[20]

NIST is headquartered in Gaithersburg, Maryland, and operates a facility in Boulder, Colorado, which was dedicated by President Eisenhower in 1954.[21][22][23] NIST's activities are organized into laboratory programs and extramural programs. Effective October 1, 2010, NIST was realigned by reducing the number of NIST laboratory units from ten to six.[24] NIST Laboratories include:[25] Extramural programs include:

NIST's Boulder laboratories are best known for NIST-F1, which houses an atomic clock. NIST-F1 serves as the source of the nation's official time. From its measurement of the natural resonance frequency of cesium—which defines the second—NIST broadcasts time signals via longwave radio station WWVB near Fort Collins, Colorado, and shortwave radio stations WWV and WWVH, located near Fort Collins and Kekaha, Hawaii, respectively.[33]

NIST also operates a neutron science user facility: the NIST Center for Neutron Research (NCNR). The NCNR provides scientists access to a variety of neutron scattering instruments, which they use in many research fields (materials science, fuel cells, biotechnology, etc.).

The SURF III Synchrotron Ultraviolet Radiation Facility is a source of synchrotron radiation, in continuous operation since 1961. SURF III now serves as the US national standard for source-based radiometry throughout the generalized optical spectrum. All NASA-borne extreme-ultraviolet observation instruments have been calibrated at SURF since the 1970s, and SURF is used for the measurement and characterization of systems for extreme ultraviolet lithography.
The Center for Nanoscale Science and Technology (CNST) performs research in nanotechnology, both through internal research efforts and by running a user-accessible cleanroom nanomanufacturing facility. This "NanoFab" is equipped with tools for lithographic patterning and imaging (e.g., electron microscopes and atomic force microscopes).

NIST has seven standing committees:

As part of its mission, NIST supplies industry, academia, government, and other users with over 1,300 Standard Reference Materials (SRMs). These artifacts are certified as having specific characteristics or component content, and are used as calibration standards for measuring equipment and procedures, quality-control benchmarks for industrial processes, and experimental control samples.

NIST publishes Handbook 44 each year after the annual meeting of the National Conference on Weights and Measures (NCWM). Each edition is developed through cooperation of the Committee on Specifications and Tolerances of the NCWM and the Weights and Measures Division (WMD) of NIST. The purpose of the book is a partial fulfillment of the statutory responsibility for "cooperation with the states in securing uniformity of weights and measures laws and methods of inspection". NIST has been publishing various forms of what is now Handbook 44 since 1918 and began publication under the current name in 1949. The 2010 edition conforms to the concept of the primary use of the SI (metric) measurements recommended by the Omnibus Foreign Trade and Competitiveness Act of 1988.[34][35]

NIST is developing government-wide identity document standards for federal employees and contractors to prevent unauthorized persons from gaining access to government buildings and computer systems.[36]

In 2002, the National Construction Safety Team Act mandated that NIST conduct an investigation into the collapse of the World Trade Center buildings 1 and 2 and the 47-story 7 World Trade Center.
The "World Trade Center Collapse Investigation", directed by lead investigator Shyam Sunder,[37] covered three aspects, including a technical building and fire safety investigation to study the factors contributing to the probable cause of the collapses of the WTC Towers (WTC 1 and 2) and WTC 7. NIST also established a research and development program to provide the technical basis for improved building and fire codes, standards, and practices, and a dissemination and technical assistance program to engage leaders of the construction and building community in implementing proposed changes to practices, standards, and codes. NIST also provides practical guidance and tools to better prepare facility owners, contractors, architects, engineers, emergency responders, and regulatory authorities to respond to future disasters. The investigation portion of the response plan was completed with the release of the final report on 7 World Trade Center on November 20, 2008. The final report on the WTC Towers—including 30 recommendations for improving building and occupant safety—was released on October 26, 2005.[38]

NIST works in conjunction with the Technical Guidelines Development Committee of the Election Assistance Commission to develop the Voluntary Voting System Guidelines for voting machines and other election technology.

In February 2014, NIST published the NIST Cybersecurity Framework, which serves as voluntary guidance for organizations to manage and reduce cybersecurity risk.[39] It was later amended, and Version 1.1 was published in April 2018.[40] Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, made the Framework mandatory for U.S.
federal government agencies.[39] An extension to the NIST Cybersecurity Framework is the Cybersecurity Maturity Model Certification (CMMC), which was introduced in 2019 (though the origin of CMMC began with Executive Order 13556).[41] It emphasizes the importance of implementing zero-trust architecture (ZTA), which focuses on protecting resources rather than the network perimeter. ZTA uses zero-trust principles, which include "never trust, always verify", "assume breach" and "least privileged access", to safeguard users, assets, and resources. Since ZTA holds no implicit trust for users within the network perimeter, authentication and authorization are performed at every stage of a digital transaction. This reduces the risk of unauthorized access to resources.[42]

NIST released a draft of the CSF 2.0 for public comment through November 4, 2023. NIST decided to update the framework to make it more applicable to the small and medium-sized enterprises that use it, as well as to accommodate the constantly changing nature of cybersecurity.[43]

In August 2024, NIST released a final set of encryption tools designed to withstand attack by a quantum computer. These post-quantum encryption standards secure a wide range of electronic information, from confidential email messages to e-commerce transactions that propel the modern economy.[44]

Four scientific researchers at NIST have been awarded Nobel Prizes for work in physics: William Daniel Phillips in 1997, Eric Allin Cornell in 2001, John Lewis Hall in 2005 and David Jeffrey Wineland in 2012, which is the largest number for any US government laboratory. All four were recognized for their work related to laser cooling of atoms, which is directly related to the development and advancement of the atomic clock. In 2011, Dan Shechtman was awarded the Nobel Prize in chemistry for his work on quasicrystals in the Metallurgy Division from 1982 to 1984.
In addition, John Werner Cahn was awarded the 2011 Kyoto Prize for Materials Science, and the National Medal of Science has been awarded to NIST researchers Cahn (1998) and Wineland (2007). Other notable people who have worked at NBS or NIST include:

Since 1989, the director of NIST has been a Presidential appointee and is confirmed by the United States Senate,[45] and since that year the average tenure of NIST directors has fallen from 11 years to 2 years in duration. Since the 2011 reorganization of NIST, the director also holds the title of Under Secretary of Commerce for Standards and Technology. Seventeen individuals have officially held the position (in addition to seven acting directors who have served on a temporary basis).

NIST holds patents on behalf of the Federal government of the United States,[46] with at least one of them being custodial to protect public domain use, such as one for a chip-scale atomic clock, developed by a NIST team as part of a DARPA competition.[47]

In September 2013, both The Guardian and The New York Times reported that NIST allowed the National Security Agency (NSA) to insert a cryptographically secure pseudorandom number generator called Dual EC DRBG into NIST standard SP 800-90 that had a kleptographic backdoor that the NSA can use to covertly predict the future outputs of this pseudorandom number generator, thereby allowing the surreptitious decryption of data.[48] Both papers report[49][50] that the NSA worked covertly to get its own version of SP 800-90 approved for worldwide use in 2006. The whistle-blowing document states that "eventually, NSA became the sole editor".
The reports confirm suspicions and technical grounds publicly raised by cryptographers in 2007 that the EC-DRBG could contain a kleptographic backdoor (perhaps placed in the standard by the NSA).[51]

NIST responded to the allegations, stating that "NIST works to publish the strongest cryptographic standards possible" and that it uses "a transparent, public process to rigorously vet our recommended standards".[52] The agency stated that "there has been some confusion about the standards development process and the role of different organizations in it ... The National Security Agency (NSA) participates in the NIST cryptography process because of its recognized expertise. NIST is also required by statute to consult with the NSA."[53] Recognizing the concerns expressed, the agency reopened the public comment period for the SP 800-90 publications, promising that "if vulnerabilities are found in these or any other NIST standards, we will work with the cryptographic community to address them as quickly as possible".[54] Due to public concern over this cryptovirology attack, NIST rescinded the EC-DRBG algorithm from the NIST SP 800-90 standard.[55]

In addition to these journals, NIST (and the National Bureau of Standards before it) has a robust technical-reports publishing arm. NIST technical reports are published in several dozen series, which cover a wide range of topics, from computer technology to construction to aspects of standardization, including weights, measures and reference data.[56] In addition to technical reports, NIST scientists publish many journal and conference papers each year; a database of these, along with more recent technical reports, can be found on the NIST website.[57]
https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
The Berkeley Open Infrastructure for Network Computing[2] (BOINC, pronounced /bɔɪŋk/ – rhymes with "oink"[3]) is an open-source middleware system for volunteer computing (a type of distributed computing).[4] Developed originally to support SETI@home,[5] it became the platform for many other applications in areas as diverse as medicine, molecular biology, mathematics, linguistics, climatology, environmental science, and astrophysics, among others.[6] The purpose of BOINC is to enable researchers to utilize the processing resources of personal computers and other devices around the world.

BOINC development began with a group based at the Space Sciences Laboratory (SSL) at the University of California, Berkeley, and led by David P. Anderson, who also led SETI@home. As a high-performance volunteer computing platform, BOINC brings together 34,236 active participants employing 136,341 active computers (hosts) worldwide, processing daily on average 20.164 petaFLOPS as of 16 November 2021[update][7] (it would be the 21st largest processing capability in the world compared with an individual supercomputer).[8] The National Science Foundation (NSF) funds BOINC through awards SCI/0221529,[9] SCI/0438443[10] and SCI/0721124.[11] Guinness World Records ranks BOINC as the largest computing grid in the world.[12]

BOINC code runs on various operating systems, including Microsoft Windows, macOS, Android,[13] Linux, and FreeBSD.[14] BOINC is free software released under the terms of the GNU Lesser General Public License (LGPL).

BOINC was originally developed to manage the SETI@home project. David P.
Anderson has said that he chose its name because he wanted something that was not "imposing", but rather "light, catchy, and maybe - like 'Unix' - a little risqué", so he "played around with various acronyms and settled on 'BOINC'".[15]

The original SETI client was non-BOINC software exclusively for SETI@home. It was one of the first volunteer computing projects, and not designed with a high level of security. As a result, some participants in the project attempted to cheat the project to gain "credits", while others submitted entirely falsified work. BOINC was designed, in part, to combat these security breaches.[16]

The BOINC project started in February 2002, and its first version was released on April 10, 2002. The first BOINC-based project was Predictor@home, launched on June 9, 2004. In 2009, AQUA@home deployed multi-threaded CPU applications for the first time,[17] followed by the first OpenCL application in 2010. As of 15 August 2022, there are 33 projects on the official list.[18] There are also, however, BOINC projects not included on the official list. Each year, an international BOINC Workshop is hosted to increase collaboration among project administrators. In 2021, the workshop was hosted virtually.[19]

While not affiliated with BOINC officially, there have been several independent projects that reward BOINC users for their participation, including Charity Engine (sweepstakes based on processing power with prizes funded by private entities who purchase computational time of CE users), Bitcoin Utopia (now defunct), and Gridcoin (a blockchain which mints coins based on processing power).

BOINC is software that can exploit the unused CPU and GPU cycles on computer hardware to perform scientific computing. In 2008, BOINC's website announced that Nvidia had developed a language called CUDA that uses GPUs for scientific computing. With Nvidia's assistance, several BOINC-based projects (e.g., MilkyWay@home, SETI@home) developed applications that run on Nvidia GPUs using CUDA.
BOINC added support for the ATI/AMD family of GPUs in October 2009. The GPU applications run from 2 to 10 times faster than the former CPU-only versions. GPU support (via OpenCL) was added for computers using macOS with AMD Radeon graphics cards, with the current BOINC client supporting OpenCL on Windows, Linux, and macOS. GPU support is also provided for Intel GPUs.[20]

BOINC consists of a server system and client software that communicate to process and distribute work units and return results.

A BOINC app also exists for Android, allowing every person owning an Android device – smartphone, tablet and/or Kindle – to share their unused computing power. The user is allowed to select the research projects they want to support, if a project is in the app's available project list. By default, the application will allow computing only when the device is connected to a Wi-Fi network, is being charged, and the battery has a charge of at least 90%.[21] Some of these settings can be changed to suit users' needs. Not all BOINC projects are available,[22] and some of the projects are not compatible with all versions of the Android operating system, or availability of work is intermittent. Currently available projects[22] are Asteroids@home, Einstein@Home, LHC@home, Moo! Wrapper, Rosetta@home, World Community Grid and Yoyo@home. As of September 2021, the most recent version of the mobile application can only be downloaded from the BOINC website or the F-Droid repository, as the official Google Play store does not allow downloading and running executables not signed by the app developer, and each BOINC project has its own executable files.

BOINC can be controlled remotely by remote procedure calls (RPC), from the command line, and from the BOINC Manager. BOINC Manager currently has two "views": the Advanced View and the Simplified GUI. The Grid View was removed in the 6.6.x clients as it was redundant. The appearance (skin) of the Simplified GUI is user-customizable, in that users can create their own designs.
A BOINC Account Manager is an application that manages multiple BOINC project accounts across multiple computers (CPUs) and operating systems. Account managers were designed for people who are new to BOINC or have several computers participating in several projects. The account manager concept was conceived and developed jointly by GridRepublic and BOINC. Current and past account managers include:

BOINC is used by many groups and individuals. Some BOINC projects are based at universities and research labs, while others are independent areas of research or interest.[24]
https://en.wikipedia.org/wiki/Berkeley_Open_Infrastructure_for_Network_Computing
cksum is a command in Unix and Unix-like operating systems that generates a checksum value for a file or stream of data.

The cksum command reads each file given in its arguments, or standard input if no arguments are provided, and outputs the file's 32-bit cyclic redundancy check (CRC) checksum and byte count.[1] The CRC output by cksum is different from the CRC-32 used in zip, PNG and zlib.[2]

The cksum command can be used to verify that files transferred by unreliable means arrived intact.[1] However, the CRC checksum calculated by the cksum command is not cryptographically secure: while it guards against accidental corruption (it is unlikely that the corrupted data will have the same checksum as the intended data), it is not difficult for an attacker to deliberately corrupt the file in a specific way such that its checksum is unchanged. Unix-like systems typically include other commands for cryptographically secure checksums, such as sha256sum.

The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.[3] The latest GNU Coreutils cksum provides additional checksum algorithms via its -a option, as an extension beyond POSIX.[1]

The standard cksum command, as found on most Unix and Unix-like operating systems (including Linux, *BSD,[4][5][6] macOS, and Solaris[7]), uses a CRC algorithm based on the Ethernet standard frame check[8] and is therefore interoperable between implementations. This is in contrast to the sum command, which is not as interoperable and not compatible with the CRC-32 calculation. On Tru64 operating systems, the cksum command returns a different CRC value, unless the environment variable CMD_ENV is set to xpg4.[citation needed]

cksum uses the generator polynomial 0x04C11DB7 and appends to the message its length in little-endian representation. That length has null bytes trimmed on the right end.[8] For example:

$ cksum test.txt
4038471504 75 test.txt

where 4038471504 represents the checksum value and 75 represents the file size of test.txt.
https://en.wikipedia.org/wiki/Cksum
sha1sum is a computer program that calculates and verifies SHA-1 hashes. It is commonly used to verify the integrity of files. It (or a variant) is installed by default on most Linux distributions. Typically distributed alongside sha1sum are sha224sum, sha256sum, sha384sum and sha512sum, which use a specific SHA-2 hash function, and b2sum,[1] which uses the BLAKE2 cryptographic hash function.

The SHA-1 variants are proven vulnerable to collision attacks, and users should instead use, for example, a SHA-2 variant such as sha256sum or the BLAKE2 variant b2sum to prevent tampering by an adversary.[2][3]

It is included in GNU Core Utilities,[4] BusyBox (excluding b2sum),[5] and Toybox (excluding b2sum).[6] Ports to a wide variety of systems are available, including Microsoft Windows.

To create a file with a SHA-1 hash in it, if one is not provided: If distributing one file, the .sha1 file extension may be appended to the filename, e.g.:

The output contains one line per file of the form "{hash} SPACE (ASTERISK|SPACE) [{directory} SLASH] {filename}". (Note well: if the hash digest creation is performed in text mode instead of binary mode, then there will be two space characters instead of a single space character and an asterisk.) For example:

To verify that a file was downloaded correctly or that it has not been tampered with:

sha1sum can only create checksums of one or multiple files inside a directory, but not of a directory tree, i.e. of subdirectories, sub-subdirectories, etc. and the files they contain. This is possible by using sha1sum in combination with the find command with the -exec option, or by piping the output from find into xargs. sha1deep can create checksums of a directory tree.

To use sha1sum with find: Likewise, piping the output from find into xargs yields the same output:
https://en.wikipedia.org/wiki/Sha1sum
Hashcash is a proof-of-work system used to limit email spam and denial-of-service attacks. Hashcash was proposed in 1997 by Adam Back[1] and described more formally in Back's 2002 paper "Hashcash – A Denial of Service Counter-Measure".[2] In Hashcash the client has to concatenate a random number with a string several times and hash this new string. It then has to do so over and over until a hash beginning with a certain number of zeros is found.[3] The idea "... to require a user to compute a moderately hard, but not intractable function ..." was proposed by Cynthia Dwork and Moni Naor in their 1992 paper "Pricing via Processing or Combatting Junk Mail".[4]

Hashcash is a cryptographic hash-based proof-of-work algorithm that requires a selectable amount of work to compute, but the proof can be verified efficiently. For email uses, a textual encoding of a hashcash stamp is added to the header of an email to prove the sender has expended a modest amount of CPU time calculating the stamp prior to sending the email. In other words, as the sender has taken a certain amount of time to generate the stamp and send the email, it is unlikely that they are a spammer. The receiver can, at negligible computational cost, verify that the stamp is valid. However, the only known way to find a header with the necessary properties is brute force, trying random values until the answer is found; though testing an individual string is easy, satisfactory answers are rare enough that it will require a substantial number of tries to find the answer.

The hypothesis is that spammers, whose business model relies on their ability to send large numbers of emails with very little cost per message, will cease to be profitable if there is even a small cost for each spam they send. Receivers can verify whether a sender made such an investment and use the results to help filter email.
The header line looks something like this:[5] The header contains: the recipient's email address, the date of the message, and information proving that the required computation has been performed. The presence of the recipient's email address requires that a different header be computed for each recipient. The date allows the recipient to record headers received recently and to ensure that the header is unique to the email message.

The sender prepares a header and appends a counter value initialized to a random number. It then computes the 160-bit SHA-1 hash of the header. If the first 20 bits (i.e. the 5 most significant hex digits) of the hash are all zeros, then this is an acceptable header. If not, then the sender increments the counter and tries the hash again. Out of 2^160 possible hash values, there are 2^140 hash values that satisfy this criterion. Thus the chance of randomly selecting a header that will have 20 zeros at the beginning of the hash is 1 in 2^20 (approx. 10^6, or about one in a million). The number of times that the sender needs to try to get a valid hash value is modeled by a geometric distribution. Hence the sender will on average have to try 2^20 values to find a valid header. Given reasonable estimates of the time needed to compute the hash, this would take about one second to find. No more efficient method than this brute-force approach is known to find a valid header.

A normal user on a desktop PC would not be significantly inconvenienced by the processing time required to generate the Hashcash string. However, spammers would suffer significantly due to the large number of spam messages sent by them.

Technically the system is implemented with the following steps: If the hash string passes all of these tests, it is considered a valid hash string. All of these tests take far less time and disk space than receiving the body content of the e-mail.
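The mint-and-verify loop described above can be sketched in Python. The stamp layout follows the version-1 hashcash field order (ver:bits:date:resource:ext:rand:counter), but the field contents here are illustrative, and the difficulty is made adjustable so the sketch can run with fewer than the 20 bits used in practice:

```python
import hashlib
import secrets
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    # Count zero bits from the most significant end of the hash.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        while not byte & 0x80:
            bits += 1
            byte <<= 1
        break
    return bits

def mint(resource: str, bits: int = 20, date: str = "250101") -> str:
    # Try successive counter values until the SHA-1 of the stamp has
    # at least `bits` leading zero bits; on average this takes 2**bits tries.
    rand = secrets.token_hex(8)
    for counter in count():
        stamp = f"1:{bits}:{date}:{resource}::{rand}:{counter}"
        if leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= bits:
            return stamp

def valid(stamp: str) -> bool:
    # Verification costs a single hash, whatever the difficulty.
    claimed_bits = int(stamp.split(":")[1])
    return leading_zero_bits(hashlib.sha1(stamp.encode()).digest()) >= claimed_bits
```

Minting with `bits=12` completes in a few thousand hashes; a real 20-bit stamp takes roughly a million, which is the asymmetry the scheme relies on.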
The time needed to compute such a hash collision is exponential in the number of zero bits. So additional zero bits can be added (doubling the amount of time needed to compute a hash with each additional zero bit) until it is too expensive for spammers to generate valid header lines. Confirming that the header is valid is much faster and always takes the same amount of time, no matter how many zero bits are required for a valid header, since this requires only a single hashing operation.

The Hashcash system has the advantage over micropayment proposals applying to legitimate e-mail that no real money is involved. Neither the sender nor recipient need to pay; thus the administrative issues involved with any micropayment system and moral issues related to charging for e-mail are entirely avoided. On the other hand, as Hashcash requires potentially significant computational resources to be expended on each e-mail being sent, it is somewhat difficult to tune the ideal amount of average time one wishes clients to expend computing a valid header. This can mean sacrificing accessibility from low-end embedded systems or else running the risk of hostile hosts not being challenged enough to provide an effective filter from spam. Hashcash is also fairly simple to implement in mail user agents and spam filters. No central server is needed. Hashcash can be incrementally deployed—the extra Hashcash header is ignored when it is received by mail clients that do not understand it.

One plausible analysis[6] concluded that only one of the following cases is likely: either non-spam e-mail will get stuck due to lack of processing power of the sender, or spam e-mail is bound to still get through. Examples of each include, respectively, a centralized e-mail topology (like a mailing list), in which some server is to send an enormous number of legitimate e-mails, and botnets or cluster farms with which spammers can increase their processing power enormously. Most of these issues may be addressed.
E.g., botnets may expire faster because users notice the high CPU load and take counter-measures, and mailing-list servers can be registered in white lists on the subscribers' hosts and thus be relieved from the hashcash challenges. Another projected problem is that computers continue to get faster according to Moore's law, so the difficulty of the calculations required must be increased over time. However, developing countries can be expected to use older hardware, which means that they will find it increasingly difficult to participate in the e-mail system. This also applies to lower-income individuals in developed countries who cannot afford the latest hardware.

Like hashcash, cryptocurrencies use a hash function as their proof-of-work system. The rise of cryptocurrency has created a demand for ASIC-based mining machines. Although most cryptocurrencies use the SHA-256 hash function, the same ASIC technology could be used to create hashcash solvers that are three orders of magnitude faster than a consumer CPU, reducing the computational hurdle for spammers.

In contrast to hashcash in mail applications, which relies on recipients to set manually an amount of work intended to deter malicious senders, the Bitcoin cryptocurrency network employs a different hash-based proof-of-work challenge to enable competitive Bitcoin mining. A Bitcoin miner runs a computer program that collects unconfirmed transactions from users on the network. Together, these can form a "block" and earn a payment to the miner, but a block is only accepted by the network if its hash meets the network's difficulty target. Thus, as in hashcash, miners must discover by brute force the "nonce" that, when included in the block, results in an acceptable hash. Unlike hashcash, Bitcoin's difficulty target does not specify a minimum number of leading zeros in the hash. Instead, the hash is interpreted as a (very large) integer, and this integer must be less than the target integer.
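The integer comparison just described can be sketched in Python. This is a simplification for illustration only: real block headers are 80-byte binary structures, Bitcoin's byte-ordering conventions are omitted, and the toy target here demands only about 8 bits of work rather than the network's far smaller target:

```python
import hashlib
from itertools import count

def block_hash(header: bytes) -> int:
    # Bitcoin hashes the block header twice with SHA-256 and treats
    # the result as a 256-bit integer (byte-order details simplified).
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "big")

# Toy target: any hash below 2**248 passes, i.e. about 8 leading zero
# bits of work. The real network target is vastly smaller.
TARGET = 1 << 248

def mine(header_prefix: bytes) -> int:
    # Brute-force the nonce until the hash falls below the target;
    # the target can be tuned continuously, not just doubled or halved.
    for nonce in count():
        if block_hash(header_prefix + nonce.to_bytes(8, "little")) < TARGET:
            return nonce
```

Because the target is an arbitrary integer, the difficulty adjustment can be as fine-grained as needed, which is exactly the property the leading-zeros rule of hashcash lacks.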
This is necessary because the Bitcoin network must periodically adjust its difficulty level to maintain an average time of 10 minutes between successive blocks. If only leading zeros were considered, then the difficulty could only be doubled or halved, causing the adjustment to greatly overshoot or undershoot in response to small changes in the average block time. Still, the number of leading zeros in the target serves as a good approximation of the current difficulty. In January 2020, block #614525 had 74 leading zeros.

Hashcash was used as a potential solution for false positives with automated spam-filtering systems, as legitimate users will rarely be inconvenienced by the extra time it takes to mine a stamp.[7] SpamAssassin was able to check for Hashcash stamps from version 2.70 until version 3.4.2, assigning a negative score (i.e. less likely to be spam) for valid, unspent Hashcash stamps. However, although the hashcash plugin was on by default, it still needed to be configured with a list of address patterns that must match against the Hashcash resource field before it would be used.[8] Support was removed from SpamAssassin's trunk on 2019-06-26, affecting version 3.4.3 and beyond.[9]

The Penny Post software project[10] on SourceForge implements Hashcash in the Mozilla Thunderbird email client.[11] The project is named for the historical availability of conventional mailing services that cost the sender just one penny; see Penny Post for information about such mailing services in history.

Microsoft also designed and implemented a now-deprecated[12] open specification called "Email Postmark". It is similar to Hashcash.[13] This was part of Microsoft's Coordinated Spam Reduction Initiative (CSRI).[14] The Microsoft email postmark variant of Hashcash is implemented in the Microsoft mail infrastructure components Exchange, Outlook, and Hotmail.
The format differences between Hashcash and Microsoft's email postmark are that postmark hashes the body in addition to the recipient, uses a modified SHA-1 as the hash function, and uses multiple sub-puzzles to reduce proof-of-work variance.

Like e-mail, blogs often fall victim to comment spam. Some blog owners have used hashcash scripts written in the JavaScript language to slow down comment spammers.[15] Some scripts (such as wp-hashcash) claim to implement hashcash but instead depend on JavaScript obfuscation to force the client to generate a matching key; while this does require some processing power, it does not use the hashcash algorithm or hashcash stamps.

In a digital marketplace, service providers can use hashcash to build reputation to attract clients. To build reputation, a service provider first selects a public key as its ID, and then discovers by brute force a nonce that, when concatenated to the ID, results in a hash digest with several leading zeros. The more zeros, the higher the reputation.[16]

Hashcash is not patented, and the reference implementation[17] and most of the other implementations are free software. Hashcash is included in or available for many Linux distributions.

RSA has made IPR statements to the IETF about client-puzzles[18] in the context of an RFC[19] that described client-puzzles (not hashcash). The RFC included hashcash in the title and referenced hashcash, but the mechanism described in it is a known-solution interactive challenge, which is more akin to client-puzzles; hashcash is non-interactive and therefore does not have a known solution. In any case, RSA's IPR statement cannot apply to hashcash because hashcash predates[1] (March 1997) the client-puzzles publication[20] (February 1999) and the client-puzzles patent filing US7197639[21] (February 2000).
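The reputation scheme described above can be sketched as follows. The choice of SHA-256, counting zero hex digits rather than zero bits, and the key string are all illustrative simplifications, not details from the cited scheme:

```python
import hashlib
from itertools import count

def mine_reputation(public_key_id: str, zeros: int) -> int:
    """Brute-force a nonce so that SHA-256(ID || nonce) starts with the
    requested number of zero hex digits. More zeros means exponentially
    more work, hence a costlier and more credible reputation claim."""
    prefix = "0" * zeros
    for nonce in count():
        digest = hashlib.sha256(f"{public_key_id}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce

# A client can verify the claim with a single hash computation:
nonce = mine_reputation("example-public-key", 4)  # ~16**4 tries expected
digest = hashlib.sha256(f"example-public-key{nonce}".encode()).hexdigest()
print(digest[:4])  # prints "0000": cheap to check, expensive to produce
```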
https://en.wikipedia.org/wiki/Hashcash
Trusted timestamping is the process of securely keeping track of the creation and modification time of a document. Security here means that no one—not even the owner of the document—should be able to change it once it has been recorded, provided that the timestamper's integrity is never compromised. The administrative aspect involves setting up a publicly available, trusted timestamp management infrastructure to collect, process and renew timestamps.

The idea of timestamping information is centuries old. For example, when Robert Hooke discovered Hooke's law in 1660, he did not want to publish it yet, but wanted to be able to claim priority. So he published the anagram ceiiinosssttuv and later published the translation ut tensio sic vis (Latin for "as is the extension, so is the force"). Similarly, Galileo first published his discovery of the phases of Venus in anagram form. Sir Isaac Newton, in responding to questions from Leibniz in a letter in 1677, concealed the details of his "fluxional technique" with an anagram.

Trusted digital timestamping was first discussed in the literature by Stuart Haber and W. Scott Stornetta.[1]

There are many timestamping schemes with different security goals, with varying coverage in standards. For systematic classification and evaluation of timestamping schemes, see the works by Masashi Une.[2]

According to the RFC 3161 standard, a trusted timestamp is a timestamp issued by a Trusted Third Party (TTP) acting as a Time Stamping Authority (TSA). It is used to prove the existence of certain data before a certain point (e.g. contracts, research data, medical records) without the possibility that the owner can backdate the timestamps. Multiple TSAs can be used to increase reliability and reduce vulnerability.

The newer ANSI ASC X9.95 Standard for trusted timestamps augments the RFC 3161 standard with data-level security requirements to ensure data integrity against a reliable time source that is provable to any third party.
This standard has been applied to authenticating digitally signed data for regulatory compliance, financial transactions, and legal evidence.

The technique is based on digital signatures and hash functions. First a hash is calculated from the data. A hash is a sort of digital fingerprint of the original data: a string of bits that is practically impossible to duplicate with any other set of data. If the original data is changed, this will result in a completely different hash. This hash is sent to the TSA. The TSA concatenates a timestamp to the hash and calculates the hash of this concatenation. This hash is in turn digitally signed with the private key of the TSA. The signed hash together with the timestamp is sent back to the requester of the timestamp, who stores these with the original data (see diagram).

Since the original data cannot be calculated from the hash (because the hash function is a one-way function), the TSA never gets to see the original data, which allows the use of this method for confidential data.

Anyone trusting the timestamper can then verify that the document was not created after the date that the timestamper vouches for. It can also no longer be repudiated that the requester of the timestamp was in possession of the original data at the time given by the timestamp. To prove this (see diagram), the hash of the original data is calculated, the timestamp given by the TSA is appended to it, and the hash of the result of this concatenation is calculated; call this hash A. Then the digital signature of the TSA needs to be validated. This is done by decrypting the digital signature using the public key of the TSA, producing hash B. Hash A is then compared with hash B from the signed TSA message to confirm that they are equal, proving that the timestamp and message are unaltered and were issued by the TSA. If not, then either the timestamp was altered or the timestamp was not issued by the TSA.
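The hashing steps of the verification just described can be sketched as follows. The signature check itself is left out, since it depends on the TSA's key material, and the plain byte concatenation is a simplification of what RFC 3161 actually encodes in ASN.1; the document and timestamp values are made up:

```python
import hashlib

def request_hash(data: bytes) -> bytes:
    """Step 1: the requester hashes the data; the TSA never sees the data."""
    return hashlib.sha256(data).digest()

def tsa_stamp(doc_hash: bytes, timestamp: bytes) -> bytes:
    """The TSA concatenates a timestamp to the hash and hashes the result.
    In the real protocol this value would then be signed with the TSA's
    private key before being returned."""
    return hashlib.sha256(doc_hash + timestamp).digest()

def verify(data: bytes, timestamp: bytes, hash_b: bytes) -> bool:
    """Recompute 'hash A' from the original data and the timestamp, and
    compare it with 'hash B' recovered from the TSA's signature (the
    signature validation is assumed to have already succeeded)."""
    hash_a = hashlib.sha256(request_hash(data) + timestamp).digest()
    return hash_a == hash_b

doc = b"contract text"
ts = b"2020-01-01T00:00:00Z"
stamp = tsa_stamp(request_hash(doc), ts)
print(verify(doc, ts, stamp))          # True: document unchanged
print(verify(b"altered text", ts, stamp))  # False: any change breaks the chain
```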
With the advent of cryptocurrencies like bitcoin, it has become possible to get some level of secure timestamp accuracy in a decentralized and tamper-proof manner. Digital data can be hashed, and the hash can be incorporated into a transaction stored in the blockchain, which serves as evidence of the time at which that data existed.[3][4] For proof of work blockchains, the security derives from the tremendous amount of computational effort performed after the hash was submitted to the blockchain. Tampering with the timestamp would require more computational resources than the rest of the network combined, and cannot be done unnoticed in an actively defended blockchain. However, the design and implementation of Bitcoin in particular make its timestamps vulnerable to some degree of manipulation, allowing timestamps up to two hours in the future, and accepting new blocks with timestamps earlier than the previous block.[5]

The decentralized timestamping approach using the blockchain has also found applications in other areas, such as in dashboard cameras, to secure the integrity of video files at the time of their recording,[6] or to prove priority for creative content and ideas shared on social media platforms.[7]
https://en.wikipedia.org/wiki/Trusted_timestamping
The All Species Foundation (stylized as ALL Species Foundation) was an organization aiming to catalog all species on Earth by 2025 through its All Species Inventory initiative.[1] The project was launched in 2000 by Kevin Kelly, Stewart Brand and Ryan Phelan.[2][3] Along with other similar efforts, the All Species Foundation was promoted as an important step forward in expanding, modernizing and digitizing the field of taxonomy.[4] The Foundation started with a large grant from the Schlinger Foundation but had difficulty finding continued funding.[5] In 2007 the project ceased activity and "[handed] off [its] mission to the Encyclopedia of Life".[2]

The All Species Foundation received some criticism for its approach to defining and identifying species. An open letter expressed concern over the species problem, a fundamental issue in taxonomy of what exactly defines a species. The letter argued that failing to acknowledge and account for this fundamental issue could undermine the use of the database for conservation and biodiversity preservation.[6]
https://en.wikipedia.org/wiki/All_Species_Foundation
Comparative linguistics is a branch of historical linguistics that is concerned with comparing languages to establish their historical relatedness.

Genetic relatedness implies a common origin or proto-language, and comparative linguistics aims to construct language families, to reconstruct proto-languages, and to specify the changes that have resulted in the documented languages. To maintain a clear distinction between attested and reconstructed forms, comparative linguists prefix an asterisk to any form that is not found in surviving texts. A number of methods for carrying out language classification have been developed, ranging from simple inspection to computerised hypothesis testing. Such methods have gone through a long process of development.

The fundamental technique of comparative linguistics is to compare phonological systems, morphological systems, syntax and the lexicon of two or more languages using techniques such as the comparative method. In principle, every difference between two related languages should be explicable to a high degree of plausibility; systematic changes, for example in phonological or morphological systems, are expected to be highly regular (consistent). In practice, the comparison may be more restricted, e.g. just to the lexicon. In some methods it may be possible to reconstruct an earlier proto-language. Although the proto-languages reconstructed by the comparative method are hypothetical, a reconstruction may have predictive power. The most notable example of this is Ferdinand de Saussure's proposal that the Indo-European consonant system contained laryngeals, a type of consonant attested in no Indo-European language known at the time. The hypothesis was vindicated with the discovery of Hittite, which proved to have exactly the consonants Saussure had hypothesized in the environments he had predicted.
Where languages are derived from a very distant ancestor, and are thus more distantly related, the comparative method becomes less practicable.[1] In particular, attempting to relate two reconstructed proto-languages by the comparative method has not generally produced results that have met with wide acceptance.[citation needed] The method has also not been very good at unambiguously identifying sub-families; thus, different scholars[who?] have produced conflicting results, for example in Indo-European.[citation needed] A number of methods based on statistical analysis of vocabulary have been developed to try to overcome this limitation, such as lexicostatistics and mass comparison. The former uses lexical cognates like the comparative method, while the latter uses only lexical similarity. The theoretical basis of such methods is that vocabulary items can be matched without a detailed language reconstruction and that comparing enough vocabulary items will negate individual inaccuracies; thus, they can be used to determine relatedness but not to determine the proto-language.

The earliest method of this type was the comparative method, which was developed over many years, culminating in the nineteenth century. This uses a long word list and detailed study. However, it has been criticized, for example, as subjective, informal, and lacking testability.[2] The comparative method uses information from two or more languages and allows reconstruction of the ancestral language. The method of internal reconstruction uses only a single language, with comparison of word variants, to perform the same function. Internal reconstruction is more resistant to interference but usually has a limited available base of utilizable words and is able to reconstruct only certain changes (those that have left traces as morphophonological variations).

In the twentieth century an alternative method, lexicostatistics, was developed, which is mainly associated with Morris Swadesh but is based on earlier work.
This uses a short word list of basic vocabulary in the various languages for comparisons. Swadesh used 100 (earlier 200) items that are assumed to be cognate (on the basis of phonetic similarity) in the languages being compared, though other lists have also been used. Distance measures are derived by examination of language pairs, but such methods reduce the information. An outgrowth of lexicostatistics is glottochronology, initially developed in the 1950s, which proposed a mathematical formula for establishing the date when two languages separated, based on the percentage of a core vocabulary of culturally independent words. In its simplest form a constant rate of change is assumed, though later versions allow variance but still fail to achieve reliability. Glottochronology has met with mounting scepticism, and is seldom applied today. Dating estimates can now be generated by computerised methods that have fewer restrictions, calculating rates from the data. However, no mathematical means of producing proto-language split-times on the basis of lexical retention has been proven reliable.

Another controversial method, developed by Joseph Greenberg, is mass comparison.[3] The method, which disavows any ability to date developments, aims simply to show which languages are more and less close to each other. Greenberg suggested that the method is useful for preliminary grouping of languages known to be related as a first step toward more in-depth comparative analysis.[4] However, since mass comparison eschews the establishment of regular changes, it is flatly rejected by the majority of historical linguists.[5]

Recently, computerised statistical hypothesis-testing methods have been developed which are related to both the comparative method and lexicostatistics. Character-based methods are similar to the former, and distance-based methods are similar to the latter (see Quantitative comparative linguistics).
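In its simplest constant-rate form, the glottochronological separation time is t = ln(c) / (2 ln(r)), where c is the proportion of shared cognates on the list and r is the assumed retention rate per millennium (commonly taken as about 0.86 for the 100-word list). A toy calculation follows, with the caveat from the text that such estimates have not proven reliable:

```python
import math

def glottochronology_split(cognate_fraction: float, retention_rate: float = 0.86) -> float:
    """Estimated millennia since two languages separated, under the
    (disputed) assumption of a constant rate of basic-vocabulary loss."""
    return math.log(cognate_fraction) / (2 * math.log(retention_rate))

# If two languages share 60 of 100 basic-vocabulary items:
print(round(glottochronology_split(0.60), 2))  # roughly 1.69 millennia
```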
The characters used can be morphological or grammatical as well as lexical.[6] Since the mid-1990s these more sophisticated tree- and network-based phylogenetic methods have been used to investigate the relationships between languages and to determine approximate dates for proto-languages. These are considered by many to show promise but are not wholly accepted by traditionalists.[7] However, they are not intended to replace older methods but to supplement them.[8] Such statistical methods cannot be used to derive the features of a proto-language, apart from the fact of the existence of shared items of the compared vocabulary. These approaches have been challenged for their methodological problems, since without a reconstruction or at least a detailed list of phonological correspondences there can be no demonstration that two words in different languages are cognate.[citation needed]

There are other branches of linguistics that involve comparing languages, which are not, however, part of comparative linguistics.

Comparative linguistics includes the study of the historical relationships of languages using the comparative method to search for regular (i.e., recurring) correspondences between the languages' phonology, grammar, and core vocabulary, and through hypothesis testing, which involves examining specific patterns of similarity and difference across languages. Some persons with little or no specialization in the field sometimes attempt to establish historical associations between languages by noting similarities between them, in a way that is considered pseudoscientific by specialists (e.g. spurious comparisons between Ancient Egyptian and languages like Wolof, as proposed by Diop in the 1960s[9]). The most common method applied in pseudoscientific language comparisons is to search two or more languages for words that seem similar in their sound and meaning.
While similarities of this kind often seem convincing to laypersons, linguistic scientists consider this kind of comparison to be unreliable for two primary reasons. First, the method applied is not well-defined: the criterion of similarity is subjective and thus not subject to verification or falsification, which is contrary to the principles of the scientific method. Second, the large size of all languages' vocabulary and the relatively limited inventory of articulated sounds used by most languages make it easy to find coincidentally similar words between languages.[citation needed][10]

There are sometimes political or religious reasons for associating languages in ways that some linguists would dispute. For example, it has been suggested that the Turanian or Ural–Altaic language group, which relates Sami and other languages to the Mongolian language, was used to justify racism towards the Sami in particular.[11] There are also strong, albeit areal, not genetic, similarities between the Uralic and Altaic languages, which provided an innocent basis for this theory. In 1930s Turkey, some promoted the Sun Language Theory, which held that Turkic languages were close to the original language. Some believers in Abrahamic religions try to derive their native languages from Classical Hebrew, such as Herbert W. Armstrong, a proponent of British Israelism, who said that the word British comes from Hebrew brit meaning 'covenant' and ish meaning 'man', supposedly proving that the British people are the 'covenant people' of God.
The Lithuanian-American archaeologist Marija Gimbutas argued during the mid-1900s that Basque is clearly related to the extinct Pictish and Etruscan languages, in an attempt to show that Basque was a remnant of an "Old European culture".[12] In the Dissertatio de origine gentium Americanarum (1625), the Dutch lawyer Hugo Grotius "proves" that the American Indians (Mohawks) speak a language (lingua Maquaasiorum) derived from Scandinavian languages (Grotius was on Sweden's payroll), supporting Swedish colonial pretensions in America. The Dutch doctor Johannes Goropius Becanus, in his Origines Antverpiana (1580), admits Quis est enim qui non amet patrium sermonem ("Who does not love his fathers' language?"), whilst asserting that Hebrew is derived from Dutch. The Frenchman Éloi Johanneau claimed in 1818 (Mélanges d'origines étymologiques et de questions grammaticales) that the Celtic language is the oldest and the mother of all others. In 1759, Joseph de Guignes theorized (Mémoire dans lequel on prouve que les Chinois sont une colonie égyptienne) that the Chinese and Egyptians were related, the former being a colony of the latter. In 1885, Edward Tregear (The Aryan Maori) compared the Maori and "Aryan" languages. Jean Prat, in his 1941 Les langues nitales, claimed that the Bantu languages of Africa are descended from Latin, coining the French linguistic term nitale in doing so. Becanus likewise asserted, in his Hieroglyphica, that Egyptian is related to Brabantic, still using comparative methods. The first practitioners of comparative linguistics were not universally acclaimed: upon reading Becanus' book, Scaliger wrote, "never did I read greater nonsense", and Leibniz coined the term goropism (from Goropius) to designate a far-sought, ridiculous etymology.
There have also been assertions that humans are descended from non-primate animals, with the use of the voice being the primary basis for comparison. Jean-Pierre Brisset (in La Grande Nouvelle, around 1900) believed and claimed that humans evolved from frogs through linguistic connections, arguing that the croaking of frogs resembles spoken French. He suggested that the French word logement, meaning 'dwelling', originated from the word l'eau, which means 'water'.[13]
https://en.wikipedia.org/wiki/Comparative_linguistics
In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction, in which the internal development of a single language is inferred by the analysis of features within that language.[1] Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems; and to confirm or to refute hypothesised relationships between languages.

The comparative method emerged in the early 19th century with the birth of Indo-European studies, then took a definite scientific approach with the works of the Neogrammarians in the late 19th and early 20th century.[2] Key contributions were made by the Danish scholars Rasmus Rask (1787–1832) and Karl Verner (1846–1896), and the German scholar Jacob Grimm (1785–1863). The first linguist to offer reconstructed forms from a proto-language was August Schleicher (1821–1868) in his Compendium der vergleichenden Grammatik der indogermanischen Sprachen, originally published in 1861.[3] Here is Schleicher's explanation of why he offered reconstructed forms:[4]

In the present work an attempt is made to set forth the inferred Indo-European original language side by side with its really existent derived languages.
Besides the advantages offered by such a plan, in setting immediately before the eyes of the student the final results of the investigation in a more concrete form, and thereby rendering easier his insight into the nature of particular Indo-European languages, there is, I think, another of no less importance gained by it, namely that it shows the baselessness of the assumption that the non-Indian Indo-European languages were derived from Old-Indian (Sanskrit).

The aim of the comparative method is to highlight and interpret systematic phonological and semantic correspondences between two or more attested languages. If those correspondences cannot be rationally explained as the result of linguistic universals or language contact (borrowings, areal influence, etc.), and if they are sufficiently numerous, regular, and systematic that they cannot be dismissed as chance similarities, then it must be assumed that they descend from a single parent language called the 'proto-language'.[5][6]

A sequence of regular sound changes (along with their underlying sound laws) can then be postulated to explain the correspondences between the attested forms, which eventually allows for the reconstruction of a proto-language by the methodical comparison of "linguistic facts" within a generalized system of correspondences.[7]

Every linguistic fact is part of a whole in which everything is connected to everything else. One detail must not be linked to another detail, but one linguistic system to another.

Relation is considered to be "established beyond a reasonable doubt" if a reconstruction of the common ancestor is feasible.[8]

The ultimate proof of genetic relationship, and to many linguists' minds the only real proof, lies in a successful reconstruction of the ancestral forms from which the semantically corresponding cognates can be derived.
In some cases, this reconstruction can only be partial, generally because the compared languages are too scarcely attested, the temporal distance between them and their proto-language is too great, or their internal evolution renders many of the sound laws obscure to researchers. In such cases, a relation is considered plausible but uncertain.[9]

Descent is defined as transmission across the generations: children learn a language from the parents' generation and, after being influenced by their peers, transmit it to the next generation, and so on. For example, a continuous chain of speakers across the centuries links Vulgar Latin to all of its modern descendants. Two languages are genetically related if they descended from the same ancestor language.[10] For example, Italian and French both come from Latin and therefore belong to the same family, the Romance languages.[11] Having a large component of vocabulary from a certain origin is not sufficient to establish relatedness; for example, heavy borrowing from Arabic into Persian has caused more of the vocabulary of Modern Persian to be from Arabic than from the direct ancestor of Persian, Proto-Indo-Iranian, but Persian remains a member of the Indo-Iranian family and is not considered "related" to Arabic.[12]

However, it is possible for languages to have different degrees of relatedness. English, for example, is related to both German and Russian but is more closely related to the former than to the latter. Although all three languages share a common ancestor, Proto-Indo-European, English and German also share a more recent common ancestor, Proto-Germanic, but Russian does not. Therefore, English and German are considered to belong to a subgroup of Indo-European that Russian does not belong to, the Germanic languages.[13]

The division of related languages into subgroups is accomplished by finding shared linguistic innovations that differentiate them from the parent language.
For instance, English and German both exhibit the effects of a collection of sound changes known as Grimm's Law, which Russian was not affected by. The fact that English and German share this innovation is seen as evidence of English and German's more recent common ancestor, since the innovation actually took place within that common ancestor, before English and German diverged into separate languages. On the other hand, shared retentions from the parent language are not sufficient evidence of a sub-group. For example, German and Russian both retain from Proto-Indo-European a contrast between the dative case and the accusative case, which English has lost. However, that similarity between German and Russian is not evidence that German is more closely related to Russian than to English; it means only that the innovation in question, the loss of the accusative/dative distinction, happened more recently in English than the divergence of English from German.

In classical antiquity, Romans were aware of the similarities between Greek and Latin, but did not study them systematically. They sometimes explained them mythologically, as the result of Rome being a Greek colony speaking a debased dialect.[14] Even though grammarians of antiquity had access to other languages around them (Oscan, Umbrian, Etruscan, Gaulish, Egyptian, Parthian...), they showed little interest in comparing, studying, or just documenting them. Comparison between languages really began after classical antiquity.
In the 9th or 10th century AD, Yehuda Ibn Quraysh compared the phonology and morphology of Hebrew, Aramaic and Arabic but attributed the resemblance to the Biblical story of Babel, with Abraham, Isaac and Joseph retaining Adam's language, and with other languages at various removes becoming more altered from the original Hebrew.[15]

In publications of 1647 and 1654, Marcus Zuerius van Boxhorn first described a rigorous methodology for historical linguistic comparisons[16] and proposed the existence of an Indo-European proto-language, which he called "Scythian", unrelated to Hebrew but ancestral to Germanic, Greek, Romance, Persian, Sanskrit, Slavic, Celtic and Baltic languages. The Scythian theory was further developed by Andreas Jäger (1686) and William Wotton (1713), who made early forays to reconstruct the primitive common language. In 1710 and 1723, Lambert ten Kate first formulated the regularity of sound laws, introducing among others the term root vowel.[16]

Another early systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made by the Hungarian János Sajnovics in 1770, when he attempted to demonstrate the relationship between Sami and Hungarian. That work was later extended to all Finno-Ugric languages in 1799 by his countryman Samuel Gyarmathi.[17] However, the origin of modern historical linguistics is often traced back to Sir William Jones, an English philologist living in India, who in 1786 made his famous observation:[18]

The Sanscrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists.
There is a similar reason, though not quite so forcible, for supposing that both the Gothick and the Celtick, though blended with a very different idiom, had the same origin with the Sanscrit; and the old Persian might be added to the same family.

The comparative method developed out of attempts to reconstruct the proto-language mentioned by Jones, which he did not name but which subsequent linguists have labelled Proto-Indo-European (PIE). The first professional comparison between the Indo-European languages that were then known was made by the German linguist Franz Bopp in 1816. He did not attempt a reconstruction but demonstrated that Greek, Latin and Sanskrit shared a common structure and a common lexicon.[19] In 1808, Friedrich Schlegel first stated the importance of using the oldest possible form of a language when trying to prove its relationships;[20] in 1818, Rasmus Christian Rask developed the principle of regular sound changes to explain his observations of similarities between individual words in the Germanic languages and their cognates in Greek and Latin.[21] Jacob Grimm, better known for his Fairy Tales, used the comparative method in Deutsche Grammatik (published 1819–1837 in four volumes), which attempted to show the development of the Germanic languages from a common origin; it was the first systematic study of diachronic language change.[22]

Both Rask and Grimm were unable to explain apparent exceptions to the sound laws that they had discovered. Although Hermann Grassmann explained one of the anomalies with the publication of Grassmann's law in 1862,[23] Karl Verner made a methodological breakthrough in 1875, when he identified a pattern now known as Verner's law, the first sound law based on comparative evidence showing that a phonological change in one phoneme could depend on other factors within the same word (such as neighbouring phonemes and the position of the accent[24]), which are now called conditioning environments.
Similar discoveries made by the Junggrammatiker (usually translated as "Neogrammarians") at the University of Leipzig in the late 19th century led them to conclude that all sound changes were ultimately regular, resulting in the famous statement by Karl Brugmann and Hermann Osthoff in 1878 that "sound laws have no exceptions".[2] That idea is fundamental to the modern comparative method, since it necessarily assumes regular correspondences between sounds in related languages and thus regular sound changes from the proto-language. The Neogrammarian hypothesis led to the application of the comparative method to reconstruct Proto-Indo-European, since Indo-European was then by far the most well-studied language family. Linguists working with other families soon followed suit, and the comparative method quickly became the established method for uncovering linguistic relationships.[17]

There is no fixed set of steps to be followed in the application of the comparative method, but some steps are suggested by Lyle Campbell[25] and Terry Crowley,[26] who are both authors of introductory texts in historical linguistics. This abbreviated summary is based on their concepts of how to proceed.

The first step involves making lists of words that are likely cognates among the languages being compared.
If there is a regularly recurring match between the phonetic structure of basic words with similar meanings, a genetic kinship can probably then be established.[27] For example, linguists looking at the Polynesian family might come up with a list similar to the following (their actual list would be much longer):[28]

Borrowings or false cognates can skew or obscure the correct data.[29] For example, English taboo ([tæbu]) is like the six Polynesian forms because of borrowing from Tongan into English, not because of a genetic similarity.[30] That problem can usually be overcome by using basic vocabulary, such as kinship terms, numbers, body parts and pronouns.[31] Nonetheless, even basic vocabulary can sometimes be borrowed. Finnish, for example, borrowed the word for "mother", äiti, from Proto-Germanic *aiþį̄ (compare Gothic aiþei).[32] English borrowed the pronouns "they", "them", and "their(s)" from Norse.[33] Thai and various other East Asian languages borrowed their numbers from Chinese. An extreme case is represented by Pirahã, a Muran language of South America, which has been controversially[34] claimed to have borrowed all of its pronouns from Nheengatu.[35][36]

The next step involves determining the regular sound correspondences exhibited by the lists of potential cognates. For example, in the Polynesian data above, it is apparent that words that contain t in most of the languages listed have cognates in Hawaiian with k in the same position. That is visible in multiple cognate sets: the words glossed as 'one', 'three', 'man' and 'taboo' all show the relationship. The situation is called a "regular correspondence" between k in Hawaiian and t in the other Polynesian languages. Similarly, a regular correspondence can be seen between Hawaiian and Rapanui h, Tongan and Samoan f, Maori ɸ, and Rarotongan ʔ.
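The tallying of correspondence sets can be sketched mechanically. The word forms below are simplified, pre-aligned illustrations in the spirit of the Hawaiian k : Tongan t example, not attested data, and a real analysis would need proper phonemic segmentation and alignment:

```python
from collections import Counter

def correspondences(pairs):
    """Count position-wise sound correspondences between two languages,
    given already-aligned cognate pairs of equal length. Identical sounds
    are skipped; what remains are the candidate correspondence sets."""
    counts = Counter()
    for word_a, word_b in pairs:
        for sound_a, sound_b in zip(word_a, word_b):
            if sound_a != sound_b:
                counts[(sound_a, sound_b)] += 1
    return counts

# Toy aligned cognates (Tongan-like : Hawaiian-like forms):
pairs = [("taha", "kaha"), ("tolu", "kolu"), ("tapu", "kapu")]
print(correspondences(pairs))  # the t : k correspondence recurs in every pair
```

A correspondence that recurs across many independent cognate sets, as t : k does here, is what the method treats as evidence; a one-off match would carry no weight.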
Mere phonetic similarity, as between English day and Latin dies (both with the same meaning), has no probative value.[37] English initial d- does not regularly match Latin d-[38] since a large set of English and Latin non-borrowed cognates cannot be assembled such that English d repeatedly and consistently corresponds to Latin d at the beginning of a word, and whatever sporadic matches can be observed are due either to chance (as in the above example) or to borrowing (for example, Latin diabolus and English devil, both ultimately of Greek origin[39]). However, English and Latin exhibit a regular correspondence of t- : d-[38] (in which "A : B" means "A corresponds to B"), as in the following examples:[40] If there are many regular correspondence sets of this kind (the more, the better), a common origin becomes a virtual certainty, particularly if some of the correspondences are non-trivial or unusual.[27] During the late 18th to late 19th century, two major developments improved the method's effectiveness. First, it was found that many sound changes are conditioned by a specific context. For example, in both Greek and Sanskrit, an aspirated stop evolved into an unaspirated one, but only if a second aspirate occurred later in the same word;[41] this is Grassmann's law, first described for Sanskrit by the Sanskrit grammarian Pāṇini[42] and promulgated by Hermann Grassmann in 1863. Second, it was found that sometimes sound changes occurred in contexts that were later lost.
For instance, in Sanskrit, velars (k-like sounds) were replaced by palatals (ch-like sounds) whenever the following vowel was *i or *e.[43] Subsequent to this change, all instances of *e were replaced by a.[44] The situation could be reconstructed only because the original distribution of e and a could be recovered from the evidence of other Indo-European languages.[45] For instance, the Latin suffix que, "and", preserves the original *e vowel that caused the consonant shift in Sanskrit: Verner's Law, discovered by Karl Verner c. 1875, provides a similar case: the voicing of consonants in Germanic languages underwent a change that was determined by the position of the old Indo-European accent. Following the change, the accent shifted to initial position.[46] Verner solved the puzzle by comparing the Germanic voicing pattern with Greek and Sanskrit accent patterns. This stage of the comparative method, therefore, involves examining the correspondence sets discovered in step 2 and seeing which of them apply only in certain contexts. If two (or more) sets apply in complementary distribution, they can be assumed to reflect a single original phoneme: "some sound changes, particularly conditioned sound changes, can result in a proto-sound being associated with more than one correspondence set".[47] For example, the following potential cognate list can be established for Romance languages, which descend from Latin: They evidence two correspondence sets, k : k and k : ʃ: Since French ʃ occurs only before a where the other languages also have a, and French k occurs elsewhere, the difference is caused by different environments (being before a conditions the change), and the sets are complementary. They can, therefore, be assumed to reflect a single proto-phoneme (in this case *k, spelled ⟨c⟩ in Latin).[48] The original Latin words are corpus, crudus, catena and captiare, all with an initial k.
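The complementary-distribution test described above is mechanical enough to sketch in code. The environment sets below are illustrative stand-ins keyed to the Romance example in the text (French ʃ only before a, k elsewhere), not a real extracted dataset.

```python
def complementary(correspondence_envs):
    """True if the environments in which each correspondence set occurs
    are pairwise disjoint, the hallmark of complementary distribution,
    which suggests a single proto-phoneme behind all the sets."""
    envs = list(correspondence_envs.values())
    return all(not (envs[i] & envs[j])
               for i in range(len(envs))
               for j in range(i + 1, len(envs)))

# Following segments observed for the two Romance correspondence sets
# (illustrative values): French k survives before o, r; ʃ only before a.
romance = {
    "k : k": {"o", "r"},   # corpus/corps, crudus/cru
    "k : ʃ": {"a"},        # catena/chaîne, captiare/chasser
}

print(complementary(romance))  # disjoint environments: candidates for one *k
print(complementary({"s1": {"a"}, "s2": {"a", "i"}}))  # overlap: distinct proto-sounds
```

When the environments overlap, as in the second call, the sets cannot be collapsed and separate proto-phonemes must be reconstructed, which is exactly the reasoning Bloomfield applies to the Algonquian clusters below.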
If more evidence along those lines were given, one might conclude that an alteration of the original k took place because of a different environment. A more complex case involves consonant clusters in Proto-Algonquian. The Algonquianist Leonard Bloomfield used the reflexes of the clusters in four of the daughter languages to reconstruct the following correspondence sets:[49] Although all five correspondence sets overlap with one another in various places, they are not in complementary distribution, and so Bloomfield recognised that a different cluster must be reconstructed for each set. His reconstructions were, respectively, *hk, *xk, *čk (= [t͡ʃk]), *šk (= [ʃk]), and *çk (in which 'x' and 'ç' are arbitrary symbols, rather than attempts to guess the phonetic value of the proto-phonemes).[50] Typology assists in deciding what reconstruction best fits the data. For example, the voicing of voiceless stops between vowels is common, but the devoicing of voiced stops in that environment is rare. If a correspondence -t- : -d- between vowels is found in two languages, the proto-phoneme is more likely to be *-t-, with a development to the voiced form in the second language. The opposite reconstruction would represent a rare type. However, unusual sound changes occur. The Proto-Indo-European word for two, for example, is reconstructed as *dwō, which is reflected in Classical Armenian as erku. Several other cognates demonstrate a regular change *dw- → erk- in Armenian.[51] Similarly, in Bearlake, a dialect of the Athabaskan language Slavey, there has been a sound change of Proto-Athabaskan *ts → Bearlake kʷ.[52] It is very unlikely that *dw- changed directly into erk- and *ts into kʷ; they probably instead went through several intermediate steps before arriving at the later forms.
It is not phonetic similarity that matters for the comparative method but rather regular sound correspondences.[37] By the principle of economy, the reconstruction of a proto-phoneme should require as few sound changes as possible to arrive at the modern reflexes in the daughter languages. For example, Algonquian languages exhibit the following correspondence set:[53][54] The simplest reconstruction for this set would be either *m or *b. Both *m → b and *b → m are likely. Because m occurs in five of the languages and b in only one of them, if *b is reconstructed, it is necessary to assume five separate changes of *b → m, but if *m is reconstructed, it is necessary to assume only one change of *m → b, and so *m would be most economical. That argument assumes the languages other than Arapaho to be at least partly independent of one another. If they all formed a common subgroup, the development *b → m would have to be assumed to have occurred only once. In the final step, the linguist checks to see how the proto-phonemes fit the known typological constraints. For example, a hypothetical system has only one voiced stop, *b, and although it has an alveolar and a velar nasal, *n and *ŋ, there is no corresponding labial nasal. However, languages generally maintain symmetry in their phonemic inventories.[55] In this case, a linguist might attempt to investigate the possibilities that either what was earlier reconstructed as *b is in fact *m or that the *n and *ŋ are in fact *d and *g. Even a symmetrical system can be typologically suspicious. For example, here is the traditional Proto-Indo-European stop inventory:[56] An earlier voiceless aspirated row was removed on grounds of insufficient evidence. Since the mid-20th century, a number of linguists have argued that this phonology is implausible[57] and that it is extremely unlikely for a language to have a voiced aspirated (breathy voice) series without a corresponding voiceless aspirated series.
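The economy argument for Algonquian *m above reduces to a change count. The sketch below assumes, as the text does, that the daughter languages are independent; the particular language sample is illustrative (Arapaho b versus m elsewhere is as described, the other names are real Algonquian languages used here only for labels).

```python
def changes_required(proto, reflexes):
    """Count the separate sound changes a candidate proto-sound demands,
    assuming independent daughter languages: one change per language
    whose reflex differs from the reconstruction."""
    return sum(1 for reflex in reflexes.values() if reflex != proto)

# Reflexes of the correspondence set discussed in the text:
reflexes = {"Fox": "m", "Ojibwe": "m", "Menominee": "m",
            "Plains Cree": "m", "Shawnee": "m", "Arapaho": "b"}

for candidate in ("m", "b"):
    print(f"*{candidate}: {changes_required(candidate, reflexes)} change(s)")
```

Reconstructing *m demands one change (*m → b in Arapaho) while *b demands five, so *m wins on economy; if the five m-languages formed a single subgroup, each candidate would cost one change and the argument would collapse, as the text notes.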
Thomas Gamkrelidze and Vyacheslav Ivanov provided a potential solution and argued that the series that are traditionally reconstructed as plain voiced should be reconstructed as glottalized: either implosive (ɓ, ɗ, ɠ) or ejective (pʼ, tʼ, kʼ). The plain voiceless and voiced aspirated series would thus be replaced by just voiceless and voiced, with aspiration being a non-distinctive quality of both.[58] That example of the application of linguistic typology to linguistic reconstruction has become known as the glottalic theory. It has a large number of proponents but is not generally accepted.[59] The reconstruction of proto-sounds logically precedes the reconstruction of grammatical morphemes (word-forming affixes and inflectional endings), patterns of declension and conjugation, and so on. The full reconstruction of an unrecorded proto-language is an open-ended task. The limitations of the comparative method were recognized by the very linguists who developed it,[60] but it is still seen as a valuable tool. In the case of Indo-European, the method seemed at least a partial validation of the centuries-old search for an Ursprache, the original language. The other languages were presumed to be ordered in a family tree, which was the tree model of the Neogrammarians. Archaeologists followed suit and attempted to find archaeological evidence of a culture or cultures that could be presumed to have spoken a proto-language, such as Vere Gordon Childe's The Aryans: a study of Indo-European origins, 1926. Childe was a philologist turned archaeologist. Those views culminated in the Siedlungsarchäologie, or "settlement archaeology", of Gustaf Kossinna, becoming known as "Kossinna's Law". Kossinna asserted that cultures represent ethnic groups, including their languages, but his law was rejected after World War II. The fall of Kossinna's Law removed the temporal and spatial framework previously applied to many proto-languages.
Fox concludes:[61] The Comparative Method as such is not, in fact, historical; it provides evidence of linguistic relationships to which we may give a historical interpretation.... [Our increased knowledge about the historical processes involved] has probably made historical linguists less prone to equate the idealizations required by the method with historical reality.... Provided we keep [the interpretation of the results and the method itself] apart, the Comparative Method can continue to be used in the reconstruction of earlier stages of languages. Proto-languages can be verified in many historical instances, such as Latin.[62][63] Although no longer a law, settlement archaeology is known to be essentially valid for some cultures that straddle history and prehistory, such as the Celtic Iron Age (mainly Celtic) and Mycenaean civilization (mainly Greek). None of those models can be or has been completely rejected, but none is sufficient alone. The foundation of the comparative method, and of comparative linguistics in general, is the Neogrammarians' fundamental assumption that "sound laws have no exceptions". When it was initially proposed, critics of the Neogrammarians proposed an alternative position summarised by the maxim "each word has its own history".[64] Several types of change actually alter words in irregular ways. Unless identified, they may hide or distort laws and cause false perceptions of relationship. All languages borrow words from other languages in various contexts. Loanwords imitate the form of the donor language, as in Finnic kuningas, from Proto-Germanic *kuningaz ('king'), with possible adaptations to the local phonology, as in Japanese sakkā, from English soccer. At first sight, borrowed words may mislead the investigator into seeing a genetic relationship, although they can more easily be identified with information on the historical stages of both the donor and receiver languages.
Inherently, words that were borrowed from a common source (such as English coffee and Basque kafe, ultimately from Arabic qahwah) do share a genetic relationship, although limited to the history of this word. Borrowing on a larger scale occurs in areal diffusion, when features are adopted by contiguous languages over a geographical area. The borrowing may be phonological, morphological or lexical. A false proto-language over the area may be reconstructed for them or may be taken to be a third language serving as a source of diffused features.[65] Several areal features and other influences may converge to form a Sprachbund, a wider region sharing features that appear to be related but are diffusional. For instance, the Mainland Southeast Asia linguistic area, before it was recognised, suggested several false classifications of such languages as Chinese, Thai and Vietnamese. Sporadic changes, such as irregular inflections, compounding and abbreviation, do not follow any laws. For example, the Spanish words palabra ('word'), peligro ('danger') and milagro ('miracle') would have been parabla, periglo and miraglo by regular sound changes from the Latin parabŏla, perīcŭlum and mīrācŭlum, but the r and l changed places by sporadic metathesis.[66] Analogy is the sporadic change of a feature to be like another feature in the same or a different language. It may affect a single word or be generalized to an entire class of features, such as a verb paradigm. An example is the Russian word for nine. The word, by regular sound changes from Proto-Slavic, should have been /nʲevʲatʲ/, but it is in fact /dʲevʲatʲ/.
It is believed that the initial nʲ- changed to dʲ- under the influence of the word for "ten" in Russian, /dʲesʲatʲ/.[67] Those who study contemporary language changes, such as William Labov, acknowledge that even a systematic sound change is applied at first inconsistently, with the percentage of its occurrence in a person's speech dependent on various social factors.[68] The sound change seems to spread gradually in a process known as lexical diffusion. While it does not invalidate the Neogrammarians' axiom that "sound laws have no exceptions", the gradual application of the very sound laws shows that they do not always apply to all lexical items at the same time. Hock notes,[69] "While it probably is true in the long run every word has its own history, it is not justified to conclude as some linguists have, that therefore the Neogrammarian position on the nature of linguistic change is falsified". The comparative method cannot recover aspects of a language that were not inherited in its daughter idioms. For instance, the Latin declension pattern was lost in Romance languages, making it impossible to fully reconstruct such a feature via systematic comparison.[70] The comparative method is used to construct a tree model (German Stammbaum) of language evolution,[71] in which daughter languages are seen as branching from the proto-language, gradually growing more distant from it through accumulated phonological, morpho-syntactic, and lexical changes. The tree model features nodes that are presumed to be distinct proto-languages existing independently in distinct regions during distinct historical times. The reconstruction of unattested proto-languages lends itself to that illusion since they cannot be verified, and the linguist is free to select whatever definite times and places seem best.
Right from the outset of Indo-European studies, however, Thomas Young said:[74] It is not, however, very easy to say what the definition should be that should constitute a separate language, but it seems most natural to call those languages distinct, of which the one cannot be understood by common persons in the habit of speaking the other.... Still, however, it may remain doubtfull whether the Danes and the Swedes could not, in general, understand each other tolerably well... nor is it possible to say if the twenty ways of pronouncing the sounds, belonging to the Chinese characters, ought or ought not to be considered as so many languages or dialects.... But,... the languages so nearly allied must stand next to each other in a systematic order… The assumption of uniformity in a proto-language, implicit in the comparative method, is problematic. Even small language communities always have differences in dialect, whether they are based on area, gender, class or other factors. The Pirahã language of Brazil is spoken by only several hundred people but has at least two different dialects, one spoken by men and one by women.[75] Campbell points out:[76] It is not so much that the comparative method 'assumes' no variation; rather, it is just that there is nothing built into the comparative method which would allow it to address variation directly.... This assumption of uniformity is a reasonable idealization; it does no more damage to the understanding of the language than, say, modern reference grammars do which concentrate on a language's general structure, typically leaving out consideration of regional or social variation. Different dialects, as they evolve into separate languages, remain in contact with and influence one another. Even after they are considered distinct, languages near one another continue to influence one another and often share grammatical, phonological, and lexical innovations.
A change in one language of a family may spread to neighboring languages, and multiple waves of change are communicated like waves across language and dialect boundaries, each with its own randomly delimited range.[77] If a language is divided into an inventory of features, each with its own time and range (isoglosses), they do not all coincide. History and prehistory may not offer a time and place for a distinct coincidence, as may be the case for Proto-Italic, for which the proto-language is only a concept. However, Hock[78] observes: The discovery in the late nineteenth century that isoglosses can cut across well-established linguistic boundaries at first created considerable attention and controversy. And it became fashionable to oppose a wave theory to a tree theory.... Today, however, it is quite evident that the phenomena referred to by these two terms are complementary aspects of linguistic change.... The reconstruction of unknown proto-languages is inherently subjective. In the Proto-Algonquian example above, the choice of *m as the parent phoneme is only likely, not certain. It is conceivable that a Proto-Algonquian language with *b in those positions split into two branches, one that preserved *b and one that changed it to *m instead, and while the first branch developed only into Arapaho, the second spread out more widely and developed into all the other Algonquian languages. It is also possible that the nearest common ancestor of the Algonquian languages used some other sound instead, such as *p, which eventually mutated to *b in one branch and to *m in the other.
Examples of strikingly complicated and even circular developments are indeed known to have occurred (such as Proto-Indo-European *t > Pre-Proto-Germanic *þ > Proto-Germanic *ð > Proto-West-Germanic *d > Old High German t in fater > Modern German Vater), but in the absence of any evidence or other reason to postulate a more complicated development, the preference for a simpler explanation is justified by the principle of parsimony, also known as Occam's razor. Since reconstruction involves many such choices, some linguists[who?] prefer to view the reconstructed features as abstract representations of sound correspondences rather than as objects with a historical time and place.[citation needed] The existence of proto-languages and the validity of the comparative method are verifiable if the reconstruction can be matched to a known language, which may be known only as a shadow in the loanwords of another language. For example, Finnic languages such as Finnish have borrowed many words from an early stage of Germanic, and the shape of the loans matches the forms that have been reconstructed for Proto-Germanic. Finnish kuningas 'king' and kaunis 'beautiful' match the Germanic reconstructions *kuningaz and *skauniz (> German König 'king', schön 'beautiful').[79] The wave model was developed in the 1870s as an alternative to the tree model to represent the historical patterns of language diversification. Both the tree-based and the wave-based representations are compatible with the comparative method.[80] By contrast, some approaches are incompatible with the comparative method, including contentious glottochronology and the even more controversial mass lexical comparison, which is considered by most historical linguists to be flawed and unreliable.[81]
https://en.wikipedia.org/wiki/Comparative_method
An endangered language or moribund language is a language that is at risk of disappearing as its speakers die out or shift to speaking other languages.[1] Language loss occurs when the language has no more native speakers and becomes a "dead language". If no one can speak the language at all, it becomes an "extinct language". A dead language may still be studied through recordings or writings, but it is still dead or extinct unless there are fluent speakers left.[2] Although languages have always become extinct throughout human history, endangered languages are currently dying at an accelerated rate because of globalization, mass migration, cultural replacement, imperialism, neocolonialism[3] and linguicide (language killing).[4][better source needed] Language shift most commonly occurs when speakers switch to a language associated with social or economic power or one spoken more widely, leading to the gradual decline and eventual death of the endangered language. The process of language shift is often influenced by factors such as globalisation, economic authorities, and the perceived prestige of certain languages. The ultimate result is the loss of linguistic diversity and cultural heritage within affected communities. The general consensus is that there are between 6,000[5] and 7,000 languages currently spoken. Some linguists estimate that between 50% and 90% of them will be severely endangered or dead by the year 2100.[3] The 20 most common languages, each with more than 50 million speakers, are spoken by 50% of the world's population, but most languages are spoken by fewer than 10,000 people.[3] The first step towards language death is potential endangerment. This is when a language faces strong external pressure, but there are still communities of speakers who pass the language to their children. The second stage is endangerment. Once a language has reached the endangerment stage, there are only a few speakers left and children are, for the most part, not learning the language.
The third stage of language extinction is seriously endangered. During this stage, a language is unlikely to survive another generation and will soon be extinct. The fourth stage is moribund, followed by the fifth stage, extinction. Many projects are under way aimed at preventing or slowing language loss by revitalizing endangered languages and promoting education and literacy in minority languages, often involving joint projects between language communities and linguists.[6] Across the world, many countries have enacted specific legislation aimed at protecting and stabilizing the language of indigenous speech communities. Recognizing that most of the world's endangered languages are unlikely to be revitalized, many linguists are also working on documenting the thousands of languages of the world about which little or nothing is known. Some widely spoken languages have endangered regional dialects, such as the varieties of English spoken on the American east coast, including Eastern New England English. The total number of contemporary languages in the world is not known, and it is not well defined what constitutes a separate language as opposed to a dialect. Estimates vary depending on the extent and means of the research undertaken, the definition of a distinct language, and the current state of knowledge of remote and isolated language communities. The number of known languages varies over time as some of them become extinct and others are newly discovered. An accurate number of languages in the world was not known until the use of universal, systematic surveys in the latter half of the twentieth century.[7] The majority of linguists in the early twentieth century refrained from making estimates.
Before then, estimates were frequently the product of guesswork and very low.[8] One of the most active research agencies is SIL International, which maintains a database, Ethnologue, kept up to date by the contributions of linguists globally.[9] Ethnologue's 2005 count of languages in its database, excluding duplicates in different countries, was 6,912, of which 32.8% (2,269) were in Asia and 30.3% (2,092) in Africa.[10] This contemporary tally must be regarded as a variable number within a range. Areas with a particularly large number of languages that are nearing extinction include: Eastern Siberia,[citation needed] Central Siberia, Northern Australia, Central America, and the Northwest Pacific Plateau. Other hotspots are Oklahoma and the Southern Cone of South America. Almost all of the study of language endangerment has been with spoken languages. A UNESCO study of endangered languages does not mention sign languages.[11] However, some sign languages are also endangered, such as Alipur Village Sign Language (AVSL) of India,[12] Adamorobe Sign Language of Ghana, Ban Khor Sign Language of Thailand, and Plains Indian Sign Language.[13][14] Many sign languages are used by small communities; small changes in their environment (such as contact with a larger sign language or dispersal of the deaf community) can lead to the endangerment and loss of their traditional sign language. Methods are being developed to assess the vitality of sign languages.[15] While there is no definite threshold for identifying a language as endangered, UNESCO's 2003 document entitled Language vitality and endangerment[16] outlines nine factors for determining language vitality: Many languages, for example some in Indonesia, have tens of thousands of speakers but are endangered because children are no longer learning them, and speakers are shifting to using the national language (e.g. Indonesian) in place of local languages.
In contrast, a language with only 500 speakers might be considered very much alive if it is the primary language of a community and is the first (or only) spoken language of all children in that community.[citation needed] Asserting that "Language diversity is essential to the human heritage", UNESCO's Ad Hoc Expert Group on Endangered Languages offers this definition of an endangered language: "... when its speakers cease to use it, use it in an increasingly reduced number of communicative domains, and cease to pass it on from one generation to the next. That is, there are no new speakers, adults or children."[16] UNESCO operates with four levels of language endangerment between "safe" (not endangered) and "extinct" (no living speakers), based on intergenerational transfer: "vulnerable" (not spoken by children outside the home), "definitely endangered" (children not speaking), "severely endangered" (only spoken by the oldest generations), and "critically endangered" (spoken by few members of the oldest generation, often semi-speakers).[5] UNESCO's Atlas of the World's Languages in Danger categorises 2,473 languages by level of endangerment.[17] Using an alternative scheme of classification, the linguist Michael E. Krauss defines languages as "safe" if it is considered that children will probably be speaking them in 100 years; "endangered" if children will probably not be speaking them in 100 years (approximately 60–80% of languages fall into this category); and "moribund" if children are not speaking them now.[18] Many scholars have devised techniques for determining whether languages are endangered. One of the earliest is GIDS (Graded Intergenerational Disruption Scale), proposed by Joshua Fishman in 1991.[19] In 2011 an entire issue of the Journal of Multilingual and Multicultural Development (Vol. 32.2) was devoted to the study of ethnolinguistic vitality, with several authors presenting their own tools for measuring language vitality.
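The UNESCO scale described above can be sketched as a toy lookup keyed to the youngest generation still speaking a language. This single criterion is a deliberate simplification (an assumption of this sketch): the actual framework weighs nine vitality factors, and the generation labels used here are illustrative.

```python
def unesco_level(youngest_speakers, home_domain_only=False):
    """Map the youngest generation still speaking a language to the
    UNESCO endangerment level, per the intergenerational-transfer
    criterion; a simplification of the full nine-factor framework."""
    if youngest_speakers == "none":
        return "extinct"
    if youngest_speakers == "children":
        # Children speaking, but only at home, marks "vulnerable".
        return "vulnerable" if home_domain_only else "safe"
    return {
        "parents": "definitely endangered",
        "grandparents": "severely endangered",
        "few elders": "critically endangered",
    }[youngest_speakers]

print(unesco_level("children", home_domain_only=True))
print(unesco_level("grandparents"))
```

As the text notes, speaker counts alone do not drive the classification: a 500-speaker language whose children all acquire it would still map to "safe" here.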
A number of other works on measuring language vitality have been published, prepared by authors with varying situations and applications in mind.[20][21][22][23][24][25] According to the Cambridge Handbook of Endangered Languages,[3] there are four main types of causes of language endangerment: causes that put the populations that speak the languages in physical danger, and causes that prevent or discourage speakers from using a language. Often several of these causes act at the same time. Poverty, disease and disasters often affect minority groups disproportionately, for example causing the dispersal of speaker populations and decreased survival rates for those who stay behind. Among the causes of language endangerment, cultural, political and economic marginalization accounts for most of the world's cases. Scholars distinguish between several types of marginalization: Economic dominance negatively affects minority languages when poverty leads people to migrate towards the cities or to other countries, thus dispersing the speakers. Cultural dominance occurs when literature and higher education are accessible only in the majority language. Political dominance occurs when education and political activity are carried out exclusively in a majority language. Historically, in colonies and elsewhere where speakers of different languages have come into contact, some languages have been considered superior to others: often one language has attained a dominant position in a country.
Speakers of endangered languages may themselves come to associate their language with negative values such as poverty, illiteracy and social stigma, causing them to wish to adopt the dominant language that is associated with social and economic progress and modernity.[3] Immigrants moving into an area may lead to the endangerment of the autochthonous language.[32] Dialects and accents have seen similar levels of endangerment during the 21st century, for similar reasons.[33][34][35][36] Language endangerment affects both the languages themselves and the people who speak them. It also affects the essence of a culture. As communities lose their language, they often lose parts of the cultural traditions that are tied to that language. Examples include songs, myths, poetry, local remedies, ecological and geological knowledge, as well as language behaviors that are not easily translated.[37] Furthermore, the social structure of one's community is often reflected through speech and language behavior. This pattern is even more prominent in dialects. This may in turn affect the sense of identity of the individual and the community as a whole, producing a weakened social cohesion as their values and traditions are replaced with new ones. This is sometimes characterized as anomie. Losing a language may also have political consequences, as some countries confer different political statuses or privileges on minority ethnic groups, often defining ethnicity in terms of language. In turn, communities that lose their language may also lose political legitimacy as a community with special collective rights.[citation needed] Language can also be considered scientific knowledge in topics such as medicine, philosophy, botany, and more. It reflects a community's practices when dealing with the environment and each other.
When a language is lost, this knowledge is often lost as well.[38] In contrast, language revitalization is correlated with better health outcomes in indigenous communities.[39] During language loss (sometimes referred to as obsolescence in the linguistic literature), the language that is being lost generally undergoes changes as speakers make their language more similar to the language that they are shifting to, for example by gradually losing grammatical or phonological complexities that are not found in the dominant language.[40][41] Generally, the accelerated pace of language endangerment is considered to be a problem by linguists and by the speakers. However, some linguists, such as the phonetician Peter Ladefoged, have argued that language death is a natural part of the process of human cultural development, and that languages die because communities stop speaking them for their own reasons. Ladefoged argued that linguists should simply document and describe languages scientifically but not seek to interfere with the processes of language loss.[42] A similar view has been argued at length by the linguist Salikoko Mufwene, who sees the cycles of language death and the emergence of new languages through creolization as a continuous ongoing process.[43][44][45] A majority of linguists do consider language loss an ethical problem, as they consider that most communities would prefer to maintain their languages if given a real choice.
They also consider it a scientific problem, because language loss on the scale currently taking place will mean that future linguists will have access to only a fraction of the world's linguistic diversity, and their picture of what human language is—and can be—will therefore be limited.[46][47][48][49][50] Some linguists consider linguistic diversity to be analogous to biological diversity and compare language endangerment to wildlife endangerment.[51] Linguists, members of endangered language communities, governments, nongovernmental organizations, and international organizations such as UNESCO and the European Union are actively working to save and stabilize endangered languages.[3] Once a language is determined to be endangered, there are three steps that can be taken in order to stabilize or rescue it. The first is language documentation, the second is language revitalization and the third is language maintenance.[3] Language documentation is the documentation, in writing and audio-visual recording, of the grammar, vocabulary, and oral traditions (e.g. stories, songs, religious texts) of endangered languages.
It entails producing descriptive grammars, collections of texts, and dictionaries of the languages, and it requires the establishment of a secure archive where the material can be stored once it is produced, so that it can be accessed by future generations of speakers or scientists.[3]

Language revitalization is the process by which a language community, through political, community, and educational means, attempts to increase the number of active speakers of the endangered language.[3] This process is also sometimes referred to as language revival or reversing language shift.[3] For case studies of this process, see Anderson (2014).[52] Applied linguistics and education are helpful in revitalizing endangered languages.[53] Vocabulary and courses are available online for a number of endangered languages.[54]

Language maintenance refers to the support given to languages that, for their survival, need to be protected from outsiders who can ultimately affect the number of speakers of a language.[3] UNESCO seeks to prevent language extinction by promoting and supporting the language in education, culture, communication and information, and science.[55] Another option is "post-vernacular maintenance": the teaching of some words and concepts of the lost language, rather than revival proper.[56]

As of June 2012, the United States has a J-1 specialist visa, which allows indigenous language experts who do not have academic training to enter the U.S. as experts aiming to share their knowledge and expand their skills.[57]
https://en.wikipedia.org/wiki/Endangered_language
Ethnologue: Languages of the World is an annual reference publication in print and online that provides statistics and other information on the living languages of the world. It is the world's most comprehensive catalogue of languages.[2] It was first issued in 1951, and is now published by SIL International, an American evangelical Christian non-profit organization.

Ethnologue has been published by SIL Global (formerly known as the Summer Institute of Linguistics), a Christian linguistic service organization with an international office in Dallas, Texas. The organization studies numerous minority languages to facilitate language development, and works with speakers of such language communities to translate portions of the Bible into their languages.[3] Despite the Christian orientation of its publisher, Ethnologue is not ideologically or theologically biased.[4]

Ethnologue includes alternative names and autonyms, the number of L1 and L2 speakers, language prestige, domains of use, literacy rates, locations, dialects, language classification, linguistic affiliations, typology, language maps, country maps, publication and use in media, availability of the Bible in each language and dialect described, religious affiliations of speakers, a cursory description of revitalization efforts where reported, intelligibility and lexical similarity with other dialects and languages, writing scripts, an estimate of language viability using the Expanded Graded Intergenerational Disruption Scale (EGIDS), and bibliographic resources.[5][6][7][8][9] Coverage varies depending on the language.[5][6] For instance, as of 2008, information on word order was present for 15% of entries, while religious affiliations were mentioned for 38% of languages.[5] According to Lyle Campbell, "language maps are highly valuable" and most country maps are of high quality and user-friendly.[5]

Ethnologue gathers information from SIL's thousands of field linguists,[1] surveys done by linguists and literacy specialists, observations of Bible
translators, and crowdsourced contributions.[6][10] SIL's field linguists use an online collaborative research system to review current data, update it, or request its removal.[11] SIL has a team of editors, organized by geographical area, who prepare reports for Ethnologue's general editor. These reports combine opinions from SIL area experts and feedback solicited from non-SIL linguists. Editors have to find compromises when opinions differ.[12] Most of SIL's linguists have taken three to four semesters of graduate linguistics courses, and half of them have a master's degree. They are trained by the 300 PhD linguists in SIL.[13]

The determination of what characteristics define a single language depends upon sociolinguistic evaluation by various scholars; as the preface to Ethnologue states, "Not all scholars share the same set of criteria for what constitutes a 'language' and what features define a 'dialect'."[5] The criteria used by Ethnologue are mutual intelligibility and the existence or absence of a common literature or ethnolinguistic identity.[5][12][14] The number of languages identified has been steadily increasing, from 5,445 in the 10th edition (in 1984) to 6,909 in the 16th (in 2009), partly due to governments granting designation as languages to mutually intelligible varieties, and partly due to SIL establishing new Bible translation teams.[15] Ethnologue codes were used as the base to create the new ISO 639-3 international standard. Since 2007, Ethnologue relies only on this standard, administered by SIL International,[16] to determine what is listed as a language.[5]

In addition to choosing a primary name for a language, Ethnologue provides listings of other names for the language, and of any dialect names, that are used by its speakers, government, foreigners, and neighbors. Also included are any names that have been commonly referenced historically, regardless of whether a name is considered official, politically correct, or offensive; this allows more complete historic research to be done.
These lists of names are not necessarily complete.

Ethnologue was founded in 1951 by Richard S. Pittman and was initially focused on minority languages, to share information on Bible translation needs.[17][18] The first edition included information on 46 languages.[18][17] Hand-drawn maps were introduced in the fourth edition (1953).[18] The seventh edition (1969) listed 4,493 languages.[18][17] In 1971, Ethnologue expanded its coverage to all known languages of the world.[18][17]

The Ethnologue database was created in 1971 at the University of Oklahoma under a grant from the National Science Foundation.[18] In 1974 the database was moved to Cornell University.[18][17] Since 2000, the database has been maintained by SIL International at its Dallas headquarters.[18][17] In 1997 (13th edition), the website became the primary means of access.[18][17]

In 1984, Ethnologue released a three-letter coding system, called an 'SIL code', to identify each language that it described. This set of codes significantly exceeded the scope of other existing standards, e.g. ISO 639-1 and ISO 639-2.[19][18][17] The 14th edition, published in 2000, included 7,148 language codes.

In 2002, Ethnologue was asked to work with the International Organization for Standardization (ISO) to integrate its codes into a draft international standard. Ethnologue codes were then adopted by ISO as the international standard, ISO 639-3.[12][5] The 15th edition of Ethnologue was the first edition to use this standard. This standard is now administered separately from Ethnologue. SIL International is the registration authority for language names and codes,[5] according to rules established by ISO.[16] Since then, Ethnologue has relied on the standard to determine what is listed as a language.[17] In only one case do Ethnologue and the ISO standard treat languages slightly differently.
ISO 639-3 considers Akan to be a macrolanguage consisting of two distinct languages, Twi and Fante, whereas Ethnologue considers Twi and Fante to be dialects of a single language (Akan), since they are mutually intelligible. This anomaly resulted because the ISO 639-2 standard has separate codes for Twi and Fante, which have separate literary traditions, and all 639-2 codes for individual languages are automatically part of 639-3, even though 639-3 would not normally assign them separate codes.

In 2014, with the 17th edition, Ethnologue introduced a numerical code for language status using a framework called EGIDS (Expanded Graded Intergenerational Disruption Scale), an elaboration of Fishman's GIDS (Graded Intergenerational Disruption Scale). It ranks a language from 0 for an international language to 10 for an extinct language, i.e. a language with which no one retains a sense of ethnic identity.[20]

In 2015, SIL's funds decreased, and in December 2015 Ethnologue launched a metered paywall to cover its costs and make the publication financially self-sustaining.[1] Users in high-income countries who wanted to refer to more than seven pages of data per month had to buy a paid subscription.[21][1] The 18th edition, released that year, included a new section on language policy country by country.[22][23] In 2016, Ethnologue added data about language planning agencies to the 19th edition.[24]

As of 2017, Ethnologue's 20th edition described 237 language families, including 86 language isolates, and six typological categories, namely sign languages, creoles, pidgins, mixed languages, constructed languages, and as yet unclassified languages.[25] The early focus of Ethnologue was on native (L1) use, but coverage was gradually expanded to L2 use as well.[26]

In 2019, Ethnologue disabled trial views and introduced a hard paywall to cover its nearly $1 million in annual operating costs (website maintenance, security, researchers, and SIL's 5,000 field linguists).[1][27] Subscriptions start at $480 per person per year,[1] while full access
costs $2,400 per person per year.[9] Users in low- and middle-income countries as defined by the World Bank are eligible for free access, and there are discounts for libraries and independent researchers.[9] Subscribers are mostly institutions: 40% of the world's top 50 universities subscribe to Ethnologue,[6] and it is also sold to business intelligence firms and Fortune 500 companies.[1] The introduction of the paywall was harshly criticized by the community of linguists who rely on Ethnologue to do their work and cannot afford the subscription.[1]

The same year, Ethnologue launched its contributor program to fill gaps and improve accuracy,[28][27] allowing contributors to submit corrections and additions and to get complimentary access to the website.[29] Ethnologue's editors gradually review crowdsourced contributions before publication.[30][6] As 2019 was the International Year of Indigenous Languages, this edition focused on language loss: it added the date when the last fluent speaker of a language died, standardized the age range of language users, and improved the EGIDS estimates.[31]

In 2020, the 23rd edition listed 7,117 living languages, an increase of 6 living languages over the 22nd edition. In this edition, Ethnologue expanded its coverage of immigrant languages: previous editions only had full entries for languages considered to be "established" within a country. From this edition, Ethnologue includes data about the first and second languages of refugees, temporary foreign workers, and immigrants.[32][6]

In 2021, the 24th edition had 7,139 modern languages, an increase of 22 living languages over the 23rd edition. Editors especially improved data about language shift in this edition.[33] In 2022, the 25th edition listed a total of 7,151 living languages, an increase of 12 living languages over the 24th edition.
This edition specifically improved coverage of the use of languages in education.[34] In 2023, the 26th edition listed a total of 7,168 living languages, an increase of 17 living languages over the 25th edition. In 2024, the 27th edition listed a total of 7,164 living languages, a decrease of 4 living languages from the 26th edition.[35]

In 1986, William Bright, then editor of the journal Language, wrote of Ethnologue that it "is indispensable for any reference shelf on the languages of the world".[36] The 2003 International Encyclopedia of Linguistics described Ethnologue as "a comprehensive listing of the world's languages, with genetic classification",[37] and follows Ethnologue's classification.[12] In 2005, the linguists Lindsay J. Whaley and Lenore Grenoble considered that Ethnologue "continues to provide the most comprehensive and reliable count of numbers of speakers of the world's languages", though they recognized that "individual language surveys may have far more accurate counts for a specific language, but The Ethnologue is unique in bringing together speaker statistics on a global scale".[38] In 2006, the computational linguists John C. Paolillo and Anupam Das conducted a systematic evaluation of available information on language populations for the UNESCO Institute for Statistics. They reported that Ethnologue and Linguasphere were the only comprehensive sources of information about language populations and that Ethnologue had more specific information. They concluded that "the language statistics available today in the form of the Ethnologue population counts are already good enough to be useful".[39] According to the linguist William Poser, Ethnologue was, as of 2006, the "best single source of information" on language classification.[40] In 2008, the linguists Lyle Campbell and Verónica Grondona highly commended Ethnologue in Language. They described it as a highly valuable catalogue of the world's languages that "has become the standard reference" and whose "usefulness is hard to overestimate".
They concluded that Ethnologue was "truly excellent, highly valuable, and the very best book of its sort available."[5]

In a review of Ethnologue's 2009 edition in Ethnopolitics, Richard O. Collin, a professor of politics, noted that "Ethnologue has become a standard resource for scholars in the other social sciences: anthropologists, economists, sociologists and, obviously, sociolinguists". According to Collin, Ethnologue is "stronger in languages spoken by indigenous peoples in economically less-developed portions of the world", and "when recent in-depth country-studies have been conducted, information can be very good; unfortunately [...] data are sometimes old".[4]

In 2012, the linguist Asya Pereltsvaig described Ethnologue as "a reasonably good source of thorough and reliable geographical and demographic information about the world's languages".[41] She added in 2021 that its maps "are generally fairly accurate although they often depict the linguistic situation as it once was or as someone might imagine it to be but not as it actually is".[42] The linguist George Tucker Childs wrote in 2012 that "Ethnologue is the most widely referenced source for information on languages of the world", but he added that, regarding African languages, "when evaluated against recent field experience [Ethnologue] seems at least out of date".[43] In 2014, Ethnologue admitted that some of its data was out of date and switched from a four-year publication cycle (in print and online) to yearly online updates.[44]

In 2017, Robert Phillipson and Tove Skutnabb-Kangas described Ethnologue as "the most comprehensive global source list for (mostly oral) languages".[45] According to the 2018 Oxford Research Encyclopedia of Linguistics, Ethnologue is a "comprehensive, frequently updated [database] on languages and language families".[46] According to the quantitative linguist Simon Greenhill, Ethnologue offers, as of 2018, "sufficiently accurate reflections of speaker population size".[47] The linguists Lyle Campbell and Kenneth Lee Rehg
wrote in 2018 that Ethnologue was "the best source that list the non-endangered languages of the world".[48] Lyle Campbell and Russell Barlow also noted that the 2017 edition of Ethnologue "improved [its] classification markedly". They note that Ethnologue's genealogy is similar to that of the World Atlas of Language Structures (WALS) but different from those of the Catalogue of Endangered Languages (ELCat) and Glottolog.[49] The linguist Lisa Matthewson commented in 2020 that Ethnologue offers "accurate information about speaker numbers".[50] In a 2021 review of Ethnologue and Glottolog, the linguist Shobhana Chelliah noted that "For better or worse, the impact of the site is indeed considerable. [...] Clearly, the site has influence on the field of linguistics and beyond." She added that she, among other linguists, integrated Ethnologue into her linguistics classes.[6]

The Encyclopedia of Language and Linguistics uses Ethnologue as its primary source for its list of languages and language maps.[51] According to the linguist Suzanne Romaine, Ethnologue is also the leading source for research on language diversity.[52] According to The Oxford Handbook of Language and Society, Ethnologue is "the standard reference source for the listing and enumeration of Endangered Languages, and for all known and "living" languages of the world".[53] Similarly, the linguist David Bradley describes Ethnologue as "the most comprehensive effort to document the level of endangerment in languages around the world."[54] The US National Science Foundation uses Ethnologue to determine which languages are endangered.[6] According to Hammarström et al., Ethnologue is, as of 2022, one of the three global databases documenting language endangerment, along with the Atlas of the World's Languages in Danger and the Catalogue of Endangered Languages (ELCat).[55] The University of Hawaii's Kaipuleohone language archive uses Ethnologue's metadata as well.[6] The World Atlas of Language Structures uses Ethnologue's genealogical classification.[56] The Rosetta Project uses Ethnologue's
language metadata.[57]

In 2005, the linguist Harald Hammarström wrote that Ethnologue was consistent with specialist views most of the time and was a catalog "of very high absolute value and by far the best of its kind".[58][12] In 2011, Hammarström created Glottolog in response to the lack of a comprehensive language bibliography, especially in Ethnologue.[59][60][61] In 2015, Hammarström reviewed the 16th, 17th, and 18th editions of Ethnologue and described the frequent lack of citations as its only "serious fault" from a scientific perspective. He concluded: "Ethnologue is at present still better than any other nonderivative work of the same scope. [It] is an impressively comprehensive catalogue of world languages, and it is far superior to anything else produced prior to 2009. In particular, it is superior by virtue of being explicit."[62] According to Hammarström, as of 2016, Ethnologue and Glottolog are the only global-scale, continually maintained inventories of the world's languages. The main difference is that Ethnologue includes additional information (such as speaker numbers or vitality) but lacks systematic sources for the information given.
In contrast, Glottolog provides no language context information but points to primary sources for further data.[63][64] Unlike Ethnologue, Glottolog does not run its own surveys,[1] but it uses Ethnologue as one of its primary sources.[1][65] As of 2019, Hammarström uses Ethnologue in his articles, noting that it "has (unsourced, but) detailed information associated with each speech variety, such as speaker numbers and map location".[66]

In response to feedback about the lack of references, Ethnologue added in 2013 a link on each language entry to language resources from the Open Language Archives Community (OLAC).[67] Ethnologue acknowledges that it rarely quotes any source verbatim, but cites sources wherever specific statements are directly attributed to them, and corrects missing attributions upon notification.[68] The website provides a list of all of the references cited.[69][70] In her 2021 review, Shobhana Chelliah noted that Glottolog aims to improve on Ethnologue in language classification and in genetic and areal relationships by using linguists' original sources.[6]

Starting with the 17th edition, Ethnologue has been published every year,[23] on February 21, which is International Mother Language Day.[32]
https://en.wikipedia.org/wiki/Ethnologue
In linguistics, language death occurs when a language loses its last native speaker. By extension, language extinction occurs when the language is no longer known by anyone, including second-language speakers, at which point it becomes known as an extinct language. A related term is linguicide,[1] the death of a language from natural or political causes. The disappearance of a minor language as a result of absorption or replacement by a major language is sometimes called "glottophagy".[2]

Language death is a process in which the level of a speech community's linguistic competence in their language variety decreases, eventually resulting in no native or fluent speakers of the variety. Language death can affect any language form, including dialects. Language death should not be confused with language attrition (also called language loss), which describes the loss of proficiency in a first language by an individual.[3]

In the modern period (c. 1500 CE–present; following the rise of colonialism), language death has typically resulted from the process of cultural assimilation leading to language shift and the gradual abandonment of a native language in favour of a foreign lingua franca, largely those of European countries.[4][5][6]

As of the 2000s, a total of roughly 7,000 natively spoken languages existed worldwide.
Most of these are minor languages in danger of extinction; one estimate published in 2004 expected that some 90% of the currently spoken languages will have become extinct by 2050.[7][8] Ethnologue recorded 7,358 known living languages in 2001,[9] but on 20 May 2015, Ethnologue reported only 7,102 known living languages, and on 23 February 2016, only 7,097.[10]

Language death is typically the outcome of language shift and may manifest itself in one of the following ways: The most common process leading to language death is one in which a community of speakers of one language becomes bilingual with another language, and gradually shifts allegiance to the second language until they cease to use their original, heritage language. This is a process of assimilation which may be voluntary or may be forced upon a population. Speakers of some languages, particularly regional or minority languages, may decide to abandon them because of economic or utilitarian reasons, in favor of languages regarded as having greater utility or prestige. Languages with a small, geographically isolated population of speakers can also die when their speakers are wiped out by genocide, disease, or natural disaster.

A language is often declared to be dead even before the last native speaker of the language has died. If there are only a few elderly speakers of a language remaining, and they no longer use that language for communication, then the language is effectively dead.
A language that has reached such a reduced stage of use is generally considered moribund.[3] Half of the spoken languages of the world are not being taught to new generations of children.[3] Once a language is no longer a native language—that is, if no children are being socialized into it as their primary language—the process of transmission is ended and the language itself will not survive past the current generations.[16]

Language death is rarely a sudden event, but a slow process of each generation learning less and less of the language, until its use is relegated to the domain of traditional use, such as in poetry and song. Typically the transmission of the language from adults to children becomes more and more restricted, until finally adults who speak the language raise children who never acquire fluency. One example of this process reaching its conclusion is that of the Dalmatian language.

During language loss—sometimes referred to as obsolescence in the linguistic literature—the language that is being lost generally undergoes changes as speakers make their language more similar to the language to which they are shifting. This process of change has been described by Appel (1983) in two categories, though they are not mutually exclusive. Often speakers replace elements of their own language with something from the language they are shifting toward. Also, if their heritage language has an element that the new language does not, speakers may drop it.

Within Indigenous communities, the death of language has consequences for individuals and the communities as a whole. Links have been made between their health (both physical and mental) and the death of their traditional language.
Language is an important part of their identity and as such is linked to their well-being.[19] One study of Aboriginal youth suicide rates in Canada found that Indigenous communities in which a majority of members speak the traditional language exhibit low suicide rates, while suicide rates were six times higher in groups where less than half of the members communicate in their ancestral language.[20] Another study, of Aboriginal peoples in Alberta, Canada, found a link between traditional language knowledge and the prevalence of diabetes: the greater a community's knowledge of its traditional language, the lower the prevalence of diabetes within it.[21]

Language revitalization is an attempt to slow or reverse language death.[22] Revitalization programs are ongoing in many languages, and have had varying degrees of success. The revival of the Hebrew language in Israel is the only example of a language acquiring new first language speakers after it became extinct in everyday use for an extended period, being used only as a liturgical language.[23] Even in the case of Hebrew, there is a theory that argues that "the Hebrew revivalists who wished to speak pure Hebrew failed. The result is a fascinating and multifaceted Israeli language, which is not only multi-layered but also multi-sourced. The revival of a clinically dead language is unlikely without cross-fertilization from the revivalists' mother tongue(s)."[24] Other cases of language revitalization that have seen some degree of success are Welsh, Basque, Hawaiian, and Navajo.[25]

Reasons for language revitalization vary: they can include physical danger affecting those whose language is dying, economic danger such as the exploitation of natural resources, political danger such as genocide, or cultural danger such as assimilation.[26] During the past century, it is estimated that more than 2,000 languages have already become extinct.
The United Nations (UN) estimates that more than half of the languages spoken today have fewer than 10,000 speakers, that a quarter have fewer than 1,000 speakers, and that, unless there are efforts to maintain them, over the next hundred years most of these will become extinct.[27] These figures are often cited as reasons why language revitalization is necessary to preserve linguistic diversity. Culture and identity are also frequently cited reasons for language revitalization, when a language is perceived as a unique "cultural treasure".[28] A community often sees language as a unique part of their culture, connecting them with their ancestors or with the land, making up an essential part of their history and self-image.[29]

According to Ghil'ad Zuckermann, "language reclamation will become increasingly relevant as people seek to recover their cultural autonomy, empower their spiritual and intellectual sovereignty, and improve wellbeing. There are various ethical, aesthetic, and utilitarian benefits of language revival—for example, historical justice, diversity, and employability, respectively."[1]

Google launched the Endangered Languages Project, aimed at helping preserve languages that are at risk of extinction. Its goal is to compile up-to-date information about endangered languages and share the latest research about them. The anthropologist Akira Yamamoto has identified nine factors that he believes will help prevent language death.[16]

Linguists distinguish between language "death" and the process where a language becomes a "dead language" through normal language change, a linguistic phenomenon analogous to pseudoextinction. This happens when a language in the course of its normal development gradually morphs into something that is then recognized as a separate, different language, leaving the old form with no native speakers.
Thus, for example, Old English may be regarded as a "dead language", although it changed and developed into Middle English, Early Modern English, and Modern English. Dialects of a language can also die, contributing to overall language death. For example, the Ainu language is slowly dying: "The UNESCO Atlas of the World's Languages in Danger lists Hokkaido Ainu as critically endangered with 15 speakers ... and both Sakhalin and Kuril Ainu as extinct."[30] The vitality of Ainu has weakened because Japanese became the favoured language of education from the end of the nineteenth century. Education in Japanese heavily contributed to the decline in use of the Ainu language through forced linguistic assimilation.[31][32]

The process of language change may also involve the splitting up of a language into a family of several daughter languages, leaving the common parent language "dead". This happened to Latin, which (through Vulgar Latin) eventually developed into the Romance languages, and to Sanskrit, which (through Prakrit) developed into the New Indo-Aryan languages. Such a process is normally not described as "language death", because it involves an unbroken chain of normal transmission of the language from one generation to the next, with only minute changes at every single point in the chain. Thus with regard to Latin, for example, there is no point at which Latin "died"; it evolved in different ways in different geographic areas, and its modern forms are now identified by a plethora of different names such as French, Portuguese, Spanish, Italian, etc.

Language shift can be used to understand the evolution of Latin into its various modern forms. Language shift, which can lead to language death, occurs because of a shift in language behaviour within a speech community.
Contact with other languages and cultures causes changes in behaviour toward the original language, which creates language shift.[12] Except in cases of linguicide, languages do not suddenly become extinct; they become moribund as the community of speakers gradually shifts to using other languages. As speakers shift, there are discernible, if subtle, changes in language behavior. These changes in behavior lead to a change of linguistic vitality in the community.

There are a variety of systems that have been proposed for measuring the vitality of a language in a community. One of the earliest is the GIDS (Graded Intergenerational Disruption Scale) proposed by Joshua Fishman in 1991.[33] A noteworthy publishing milestone in measuring language vitality is an entire issue of the Journal of Multilingual and Multicultural Development devoted to the study of ethnolinguistic vitality (Vol. 32.2, 2011), with several authors presenting their own tools for measuring language vitality. A number of other published works on measuring language vitality have appeared, prepared by authors with varying situations and applications in mind. These include works by Arienne Dwyer,[34] Martin Ehala,[35] M. Lynne Landwehr,[36] Mark Karan,[37] András Kornai,[38] and Paul Lewis and Gary Simons.[39]
https://en.wikipedia.org/wiki/Language_death
Language revitalization, also referred to as language revival or reversing language shift, is an attempt to halt or reverse the decline of a language or to revive an extinct one.[1][2] Those involved can include linguists, cultural or community groups, or governments. Some argue for a distinction between language revival (the resurrection of an extinct language with no existing native speakers) and language revitalization (the rescue of a "dying" language). There has only been one successful instance of a complete language revival: that of the Hebrew language.[3]

Languages targeted for language revitalization include those whose use and prominence is severely limited. Sometimes various tactics of language revitalization can even be used to try to revive extinct languages. Though the goals of language revitalization vary greatly from case to case, they typically involve attempting to expand the number of speakers and the use of a language, or trying to maintain the current level of use to protect the language from extinction or language death.

Reasons for revitalization vary: they can include physical danger affecting those whose language is dying, economic danger such as the exploitation of indigenous natural resources, political danger such as genocide, or cultural danger and assimilation.[4] In recent times alone, it is estimated that more than 2,000 languages have already become extinct. The UN estimates that more than half of the languages spoken today have fewer than 10,000 speakers, that a quarter have fewer than 1,000 speakers, and that, unless there are efforts to maintain them, over the next hundred years most of these will become extinct.[5] These figures are often cited as reasons why language revitalization is necessary to preserve linguistic diversity.
Culture and identity are also frequently cited reasons for language revitalization, when a language is perceived as a unique "cultural treasure."[6] A community often sees language as a unique part of its culture, connecting it with its ancestors or with the land, and making up an essential part of its history and self-image.[7] Language revitalization is also closely tied to the linguistic field of language documentation. In this field, linguists try to create a complete record of a language's grammar, vocabulary, and linguistic features. This practice can often lead to greater concern for the revitalization of the language under study. Furthermore, the task of documentation is often taken on with the goal of revitalization in mind.[8] One commonly used six-point scale is as follows:[9] Another scale for identifying degrees of language endangerment is used in a 2003 paper ("Language Vitality and Endangerment") commissioned by UNESCO from an international group of linguists. The linguists, among other goals and priorities, create a scale with six degrees of language vitality and endangerment.[10] They also propose nine factors or criteria (six of which use the six-degree scale) to "characterize a language's overall sociolinguistic situation".[10] The nine factors with their respective scales are: One of the most important preliminary steps in language revitalization/recovery involves establishing the degree to which a particular language has been "dislocated." This helps the parties involved find the best way to assist or revive the language.[11] There are many different theories or models that attempt to lay out a plan for language revitalization. One of these is provided by the celebrated linguist Joshua Fishman. Fishman's model for reviving threatened (or sleeping) languages, or for making them sustainable,[12][13] consists of an eight-stage process. Efforts should be concentrated on the earlier stages of restoration until they have been consolidated, before proceeding to the later stages.
The eight stages are: This model of language revival is intended to direct efforts to where they are most effective and to avoid wasting energy trying to achieve the later stages of recovery when the earlier stages have not been achieved. For instance, it is probably wasteful to campaign for the use of a language on television or in government services if hardly any families are in the habit of using the language. Additionally, Tasaku Tsunoda describes a range of different techniques or methods that speakers can use to try to revitalize a language, including techniques to revive extinct languages and maintain weak ones. The techniques he lists are often limited by the current vitality of the language. He claims that the immersion method cannot be used to revitalize an extinct or moribund language. In contrast, the master-apprentice method of one-on-one transmission of language proficiency can be used with moribund languages. Several other methods of revitalization, including those that rely on technology such as recordings or media, can be used for languages in any state of viability.[14] David Crystal, in his book Language Death, proposes that language revitalization is more likely to be successful if its speakers: In her book Endangered Languages: An Introduction, Sarah Thomason notes the success of revival efforts for modern Hebrew and the relative success of revitalizing Maori in New Zealand (see Specific Examples below). One notable factor these two examples share is that the children were raised in fully immersive environments.[16] In the case of Hebrew, this was on early collective communities called kibbutzim.[17] For the Maori language in New Zealand, this was done through a language nest.[18] Ghil'ad Zuckermann proposes "Revival Linguistics" as a new linguistic discipline and paradigm. Zuckermann's term 'Revival Linguistics' is modelled upon 'Contact Linguistics'.
Revival linguistics inter alia explores the universal constraints and mechanisms involved in language reclamation, renewal and revitalization. It draws perspicacious comparative insights from one revival attempt to another, thus acting as an epistemological bridge between parallel discourses in various local attempts to revive sleeping tongues all over the globe.[19] According to Zuckermann, "revival linguistics combines scientific studies of native language acquisition and foreign language learning. After all, language reclamation is the most extreme case of second-language learning. Revival linguistics complements the established area of documentary linguistics, which records endangered languages before they fall asleep."[20] Zuckermann proposes that "revival linguistics changes the field of historical linguistics by, for instance, weakening the family tree model, which implies that a language has only one parent."[20] There are disagreements in the field of language revitalization as to the degree to which revival should concentrate on maintaining the traditional language, versus allowing simplification or widespread borrowing from the majority language. Zuckermann acknowledges the presence of "local peculiarities and idiosyncrasies"[20] but suggests that "there are linguistic constraints applicable to all revival attempts. Mastering them would help revivalists and first nations' leaders to work more efficiently. For example, it is easier to resurrect basic vocabulary and verbal conjugations than sounds and word order.
Revivalists should be realistic and abandon discouraging, counter-productive slogans such as "Give us authenticity or give us death!"[20] Nancy Dorian has pointed out that conservative attitudes toward loanwords and grammatical changes often hamper efforts to revitalize endangered languages (as with Tiwi in Australia), and that a division can exist between educated revitalizers, interested in historicity, and remaining speakers interested in locally authentic idiom (as has sometimes occurred with Irish). Some have argued that structural compromise may, in fact, enhance the prospects of survival, as may have been the case with English in the post-Norman period.[21] Other linguists have argued that when language revitalization borrows heavily from the majority language, the result is a new language, perhaps a creole or pidgin.[22] For example, the existence of "Neo-Hawaiian" as a separate language from "Traditional Hawaiian" has been proposed, due to the heavy influence of English on every aspect of the revived Hawaiian language.[23] This has also been proposed for Irish, with a sharp division between "Urban Irish" (spoken by second-language speakers) and traditional Irish (spoken as a first language in Gaeltacht areas). Ó Béarra stated: "[to] follow the syntax and idiomatic conventions of English, [would be] producing what amounts to little more than English in Irish drag."[24] With regard to the then-moribund Manx language, the scholar T. F.
O'Rahilly stated, "When a language surrenders itself to foreign idiom, and when all its speakers become bilingual, the penalty is death."[25] Neil McRae has stated that the uses of Scottish Gaelic are becoming increasingly tokenistic, and native Gaelic idiom is being lost in favor of artificial terms created by second-language speakers.[26] The total revival of a dead language (in the sense of having no native speakers) to become the shared means of communication of a self-sustaining community of several million first-language speakers has happened only once, in the case of Hebrew, resulting in Modern Hebrew – now the national language of Israel. In this case, there was a unique set of historical and cultural characteristics that facilitated the revival. (See Revival of the Hebrew language.) Hebrew, once largely a liturgical language, was re-established as a means of everyday communication by Jews, some of whom had lived in what is now the State of Israel, starting in the nineteenth century. It is the world's most famous and successful example of language revitalization. In a related development, literary languages without native speakers enjoyed great prestige and practical utility as lingua francas, often counting millions of fluent speakers at a time. In many such cases, a decline in the use of the literary language, sometimes precipitous, was later accompanied by a strong renewal. This happened, for example, in the revival of Classical Latin in the Renaissance, and the revival of Sanskrit in the early centuries AD. An analogous phenomenon in contemporary Arabic-speaking areas is the expanded use of the literary language (Modern Standard Arabic, a form of the Classical Arabic of the 6th century AD). This is taught to all educated speakers and is used in radio broadcasts, formal discussions, etc.[27] In addition, literary languages have sometimes risen to the level of becoming first languages of very large language communities.
An example is standard Italian, which originated as a literary language based on the language of 13th-century Florence, especially as used by such important Florentine writers as Dante, Petrarch and Boccaccio. This language existed for several centuries primarily as a literary vehicle, with few native speakers; even as late as 1861, on the eve of Italian unification, the language counted only about 500,000 speakers (many non-native), out of a total population of c. 22,000,000. The subsequent success of the language has been through conscious development, where speakers of any of the numerous Italian languages were taught standard Italian as a second language and subsequently imparted it to their children, who learned it as a first language. This came, of course, at the expense of local Italian languages, most of which are now endangered. Success was enjoyed in similar circumstances by High German, standard Czech, Castilian Spanish and other languages. The Coptic language began its decline when Arabic became the predominant language in Egypt. Pope Shenouda III established the Coptic Language Institute in December 1976 in Saint Mark's Coptic Orthodox Cathedral in Cairo for the purpose of reviving the Coptic language.[28][29] In recent years, a growing number of Native American tribes have been trying to revitalize their languages.[30][31] For example, there are apps (including phrases, word lists and dictionaries) in many Native languages, including Cree, Cherokee, Chickasaw, Lakota, Ojibwe, Oneida, Massachusett, Navajo, Halq'emeylem, Gwych'in, and Lushootseed. Wampanoag, a language spoken by the people of the same name in Massachusetts, underwent a language revival project led by Jessie Little Doe Baird, a trained linguist. Members of the tribe use the extensive written records that exist in their language, including a translation of the Bible and legal documents, in order to learn and teach Wampanoag.
The project has seen children speaking the language fluently for the first time in over 100 years.[32][33] In addition, there are currently attempts at reviving the Chochenyo language of California, which had become extinct. Efforts are being made by the Confederated Tribes of the Grand Ronde Community and others to keep Chinook Jargon, also known as Chinuk Wawa, alive. This is helped by the corpus of songs and stories collected from Victoria Howard and published by Melville Jacobs.[34][35] The open-source platform FirstVoices hosts community-managed websites for 85 language revitalization projects, covering multiple varieties of 33 Indigenous languages in British Columbia as well as over a dozen languages from "elsewhere in Canada and around the globe", along with 17 dictionary apps.[36] Similar to other indigenous languages, Tlingit is critically endangered.[37] Fewer than 100 fluent Elders existed as of 2017.[37] From 2013 to 2014, the language activist, author, and teacher Sʔímlaʔxw Michele K. Johnson, from the Syilx Nation, attempted to teach two hopeful learners of Tlingit in the Yukon.[37] Her methods included textbook creation, a sequenced immersion curriculum, and film assessment.[37] The aim was to assist in the creation of adult speakers of parent age, so that they too can begin teaching the language. In 2020, X̱ʼunei Lance Twitchell led a Tlingit online class with Outer Coast College. Dozens of students participated.[38] He is an associate professor of Alaska Native Languages in the School of Arts and Sciences at the University of Alaska Southeast, which offers a minor in Tlingit language and an emphasis on Alaska Native Languages and Studies within a Bachelorʼs degree in Liberal Arts.[39] Kichwa is the variety of the Quechua language spoken in Ecuador and is one of the most widely spoken indigenous languages in South America. Despite this fact, Kichwa is a threatened language, mainly because of the expansion of Spanish in South America.
One community of original Kichwa speakers, Lagunas, was one of the first indigenous communities to switch to the Spanish language.[40] According to King, this was because of the increase of trade and business with the large Spanish-speaking town nearby. The Lagunas people assert that it was not for cultural assimilation purposes, as they value their cultural identity highly.[40] However, once this contact was made, the language of the Lagunas people shifted over generations, first to Kichwa–Spanish bilingualism and now to essentially Spanish monolingualism. The feelings of the Lagunas people present a dichotomy in language use, as most Lagunas members speak Spanish exclusively and know only a few words in Kichwa. The prospects for Kichwa language revitalization are not promising, as parents depend on schooling for this purpose, which is not nearly as effective as continual language exposure in the home.[41] Schooling in the Lagunas community, although it has a conscious focus on teaching Kichwa, consists mainly of passive interaction, reading, and writing in Kichwa.[42] In addition to grassroots efforts, national language revitalization organizations, like CONAIE, focus attention on non-Spanish-speaking indigenous children, who represent a large minority in the country. Another national initiative, the Bilingual Intercultural Education Project (PEBI), was ineffective in language revitalization because instruction was given in Kichwa and Spanish was taught as a second language to children who were almost exclusively Spanish monolinguals. Although some techniques seem ineffective, Kendall A.
King provides several suggestions: Specific suggestions include imparting an elevated perception of the language in schools, focusing on grassroots efforts both in school and the home, and maintaining national and regional attention.[41] The revival of the Hebrew language is the only successful example of a revived dead language.[3] The Hebrew language survived into the medieval period as the language of Jewish liturgy and rabbinic literature. With the rise of Zionism in the 19th century, it was revived as a spoken and literary language, becoming primarily a spoken lingua franca among the early Jewish immigrants to Ottoman Palestine. It received official status in the 1922 constitution of the British Mandate for Palestine and subsequently in the State of Israel.[43] There have been recent attempts at reviving Sanskrit in India.[44][45][46] However, despite these attempts, there are no first-language speakers of Sanskrit in India.[47][48][49] In each of India's recent decennial censuses, several thousand citizens[a] have reported Sanskrit to be their mother tongue. However, these reports are thought to signify a wish to be aligned with the prestige of the language, rather than being genuinely indicative of the presence of thousands of L1 Sanskrit speakers in India. There has also been a rise of so-called "Sanskrit villages",[46][50] but experts have cast doubt on the extent to which Sanskrit is really spoken in such villages.[47][51] The Soyot language of the small-numbered Soyots in Buryatia, Russia, one of the Siberian Turkic languages, has been reconstructed, and a Soyot–Buryat–Russian dictionary was published in 2002. The language is currently taught in some elementary schools.[52] The Ainu language of the indigenous Ainu people of northern Japan is currently moribund, but efforts are underway to revive it.
A 2006 survey of the Hokkaido Ainu indicated that only 4.6% of Ainu surveyed were able to converse in or "speak a little" Ainu.[53] As of 2001, Ainu was not taught in any elementary or secondary schools in Japan, but was offered at numerous language centres and universities in Hokkaido, as well as at Tokyo's Chiba University.[54] In China, the Manchu language is one of the most endangered languages, with speakers remaining in only three small areas of Manchuria.[55] Some enthusiasts are trying to revive the language of their ancestors using available dictionaries and textbooks, and even occasional visits to Qapqal Xibe Autonomous County in Xinjiang, where the related Xibe language is still spoken natively.[56] In the Philippines, a local variety of Spanish primarily based on Mexican Spanish had been the lingua franca of the country since Spanish colonization in 1565, and it was an official language alongside Filipino (standardized Tagalog) and English until 1987, when, following the ratification of a new constitution, it was re-designated as a voluntary language.
As a result of its loss of official status and years of marginalization at the official level during and after American colonization, the use of Spanish among the overall populace decreased dramatically and the language became moribund, with the remaining native speakers being mostly elderly people.[57][58][59] The language has seen a gradual revival, however, due to official promotion under the administration of former President Gloria Macapagal Arroyo.[60][61] Schools were encouraged to offer Spanish, French, and Japanese as foreign-language electives.[62] Results were immediate, as the job demand for Spanish speakers had increased since 2008.[63] As of 2010, the Instituto Cervantes in Manila reported the number of Spanish speakers in the country with native or non-native knowledge at approximately 3 million, a figure that includes those who speak the Spanish-based creole Chavacano.[64] Complementing government efforts is a notable surge of exposure through the mainstream media and, more recently, music-streaming services.[65][66] The Western Armenian language has been classified as a definitely endangered language in the Atlas of the World's Languages in Danger (2010),[67] as most speakers of the dialect remain in diasporic communities away from their homeland in Anatolia, following the Armenian genocide. In spite of this, there have been various efforts[68] to revitalize the language, especially within the Los Angeles community, where the majority of Western Armenians reside.
In her dissertation, Shushan Karapetian discusses at length the decline of the Armenian language in the United States and new means for keeping and reviving Western Armenian, such as the creation of the Saroyan Committee or the Armenian Language Preservation Committee, launched in 2013.[69] Other attempts at language revitalization can be seen at the University of California, Irvine.[70] Armenian is also one of the languages in which Los Angeles County is required to provide voting information.[71] The DPSS (California Department of Social Services) also identifies Armenian as one of its "threshold languages".[72] In Thailand, there exists a Chong language revitalization project, headed by Suwilai Premsrirat.[73] In Europe, in the 19th and early 20th centuries, the use of both local and learned languages declined as the central governments of the different states imposed their vernacular language as the standard throughout education and official use (this was the case in the United Kingdom, France, Spain, Italy and Greece, and to some extent, in Germany and Austria-Hungary). In the last few decades, local nationalism and human rights movements have made a more multicultural policy standard in European states; sharp condemnation of the earlier practices of suppressing regional languages was expressed in the use of such terms as "linguicide". In Francoist Spain, Basque language use was discouraged by the government's repressive policies.
In the Basque Country, "Francoist repression was not only political, but also linguistic and cultural."[74] Franco's regime suppressed Basque from official discourse, education, and publishing,[75] making it illegal to register newborn babies under Basque names,[76] and even requiring tombstone engravings in Basque to be removed.[77] In some provinces the public use of Basque was suppressed, with people fined for speaking it.[78] Public use of Basque was frowned upon by supporters of the regime, often regarded as a sign of anti-Francoism or separatism,[79] a situation that persisted into the late 1960s. Since 1968, Basque has been immersed in a revitalisation process, facing formidable obstacles. However, significant progress has been made in numerous areas. Six main factors have been identified to explain its relative success: While those six factors influenced the revitalisation process, the extensive development and use of language technologies is also considered a significant additional factor.[81] Overall, in the 1960s and later, the trend reversed and education and publishing in Basque began to flourish.[82] A sociolinguistic survey shows that there has been a steady increase in Basque speakers since the 1990s, and the percentage of young speakers exceeds that of the old.[83] One of the best known European attempts at language revitalization concerns the Irish language. While English is dominant through most of Ireland, Irish, a Celtic language, is still spoken in certain areas called Gaeltachtaí,[84] but there it is in serious decline.[85] The challenges faced by the language over the last few centuries have included exclusion from important domains, social denigration, the death or emigration of many Irish speakers during the Irish famine of the 1840s, and continued emigration since.
Efforts to revitalise Irish were being made, however, from the mid-1800s, and were associated with a desire for Irish political independence.[84] Contemporary Irish language revitalization has chiefly involved teaching Irish as a compulsory language in mainstream English-speaking schools. But the failure to teach it in an effective and engaging way means (as linguist Andrew Carnie notes) that students do not acquire the fluency needed for the lasting viability of the language, and this leads to boredom and resentment. Carnie also noted a lack of media in Irish (2006),[84] though this is no longer the case. The decline of the Gaeltachtaí and the failure of state-directed revitalisation have been countered by an urban revival movement. This is largely based on an independent community-based school system, known generally as Gaelscoileanna. These schools teach entirely through Irish and their number is growing, with over thirty such schools in Dublin alone.[86] They are an important element in the creation of a network of urban Irish speakers (known as Gaeilgeoirí), who tend to be young, well-educated and middle-class. It is now likely that this group has acquired critical mass, a fact reflected in the expansion of Irish-language media.[87] Irish-language television has enjoyed particular success.[88] It has been argued that Gaeilgeoirí tend to be better educated than monolingual English speakers and enjoy higher social status.[89] They represent the transition of Irish to a modern urban world, with an accompanying rise in prestige. There are also current attempts to revive the related language of Scottish Gaelic, which was suppressed following the formation of the United Kingdom, and entered further decline due to the Highland clearances. Currently, Gaelic is only spoken widely in the Western Isles and some relatively small areas of the Highlands and Islands.
The decline in fluent Gaelic speakers has slowed; however, the population center has shifted to L2 speakers in urban areas, especially Glasgow.[90][91] Another Celtic language, Manx, lost its last native speaker in 1974 and was declared extinct by UNESCO in 2009, but it never completely fell out of use.[92] The language is now taught in primary and secondary schools, including as a teaching medium at the Bunscoill Ghaelgagh, is used in some public events, and is spoken as a second language by approximately 1,800 people.[93] Revitalization efforts include radio shows in Manx Gaelic as well as social media and online resources. The Manx government has also been involved in the effort by creating organizations such as the Manx Heritage Foundation (Culture Vannin) and the position of Manx Language Officer.[94] The government has released an official Manx Language Strategy for 2017–2021.[95] There have been a number of attempts to revive the Cornish language, both privately and some under the Cornish Language Partnership. Some of the activities have included translation of the Christian scriptures,[96] a guild of bards,[97] and the promotion of Cornish literature in modern Cornish, including novels and poetry. The Romani arriving in the Iberian Peninsula developed an Iberian Romani dialect. As time passed, Romani ceased to be a full language and became Caló, a cant mixing Iberian Romance grammar and Romani vocabulary. With sedentarization and obligatory instruction in the official languages, Caló is used less and less. As Iberian Romani proper is extinct and Caló is endangered, some people are trying to revitalise the language. The Spanish politician Juan de Dios Ramírez Heredia promotes Romanò-Kalò, a variant of International Romani enriched by Caló words.[98] His goal is to reunify the Caló and Romani roots.
The Livonian language, a Finnic language once spoken on about a third of modern-day Latvian territory,[99] died in the 21st century with the death of the last native speaker, Grizelda Kristiņa, on 2 June 2013.[100] Today there are about 210 people, mainly living in Latvia, who identify themselves as Livonian and speak the language at the A1–A2 level according to the Common European Framework of Reference for Languages, and between 20 and 40 people who speak the language at level B1 and up.[101] Today all speakers learn Livonian as a second language. There are different programs educating Latvians about the cultural and linguistic heritage of the Livonians and the fact that most Latvians have common Livonian descent.[102] Programs worth mentioning include: The Livonian linguistic and cultural heritage is included in the Latvian cultural canon,[109] and the protection, revitalization and development of Livonian as an indigenous language is guaranteed by Latvian law.[110] A few linguists and philologists are involved in reviving a reconstructed form of the extinct Old Prussian language from Luther's catechisms, the Elbing Vocabulary, place names, and Prussian loanwords in the Low Prussian dialect of Low German. Several dozen people use the language in Lithuania, Kaliningrad, and Poland, including a few children who are natively bilingual.[111] The Prusaspirā Society has published its translation of Antoine de Saint-Exupéry's The Little Prince. The book was translated by Piotr Szatkowski (Pīteris Šātkis) and released in 2015.[112] Other efforts of Baltic Prussian societies include the development of online dictionaries, learning apps and games.
There have also been several attempts to produce music with lyrics written in the revived Baltic Prussian language, most notably in the Kaliningrad Oblast by Romowe Rikoito,[113] Kellan and Āustras Laīwan, but also in Lithuania by Kūlgrinda in their 2005 album Prūsų Giesmės (Prussian Hymns),[114] and in Latvia by Rasa Ensemble in 1988[115] and Valdis Muktupāvels in his 2005 oratorio "Pārcēlātājs Pontifex", featuring several parts sung in Prussian.[116] Important in this revival was Vytautas Mažiulis, who died on 11 April 2009, and his pupil Letas Palmaitis, leader of the experiment and author of the website Prussian Reconstructions.[117] Two late contributors were Prāncis Arellis (Pranciškus Erelis) in Lithuania and Dailūns Russinis (Dailonis Rusiņš) in Latvia. After them, Twankstas Glabbis from Kaliningrad oblast and Nērtiks Pamedīns from East Prussia (now Polish Warmia-Masuria) actively joined. The Yola language revival movement has developed in Wexford in recent years, and the "Gabble Ing Yola" resource center for Yola materials claims there are around 140 speakers of the Yola language today.[118] The European colonization of Australia, and the consequent damage sustained by Aboriginal communities, had a catastrophic effect on indigenous languages, especially in the southeast and south of the country, leaving some with no living traditional native speakers. A number of Aboriginal communities in Victoria and elsewhere are now trying to revive some of the Aboriginal Australian languages. The work is typically directed by a group of Aboriginal elders and other knowledgeable people, with community language workers doing most of the research and teaching. They analyze the data, develop spelling systems and vocabulary, and prepare resources. Decisions are made in collaboration. Some communities employ linguists, and there are also linguists who have worked independently,[119] such as Luise Hercus and Peter K. Austin.
One of the best cases of relative success in language revitalization is that of Maori, also known as te reo Māori. It is the ancestral tongue of the indigenous Maori people of New Zealand and a vehicle for prose narrative, sung poetry, and genealogical recital.[127] The history of the Maori people is taught in Maori in sacred learning houses through oral transmission. Even after Maori became a written language, the oral tradition was preserved.[127] Once European colonization began, many laws were enacted to promote the use of English over Maori among indigenous people.[127] The Education Ordinance Act of 1847 mandated school instruction in English and established boarding schools to speed up the assimilation of Maori youths into European culture. The Native School Act of 1858 forbade Māori from being spoken in schools. During the 1970s, a group of young Maori people, the Ngā Tamatoa, successfully campaigned for Maori to be taught in schools.[127] Also, Kōhanga Reo, Māori-language preschools called language nests, were established.[128] The emphasis was on teaching children the language at a young age, a very effective strategy for language learning. The Maori Language Commission was formed in 1987, leading to a number of national reforms aimed at revitalizing Maori.[127] These include media programmes broadcast in Maori, undergraduate college programmes taught in Maori, and an annual Maori language week. Each iwi (tribe) created a language planning programme catering to its specific circumstances. These efforts have resulted in a steady increase in children being taught in Maori in schools since 1996.[127] On six of the seven inhabited islands of Hawaii, Hawaiian was displaced by English and is no longer used as the daily language of communication. The one exception is Niʻihau, where Hawaiian has never been displaced, has never been endangered, and is still used almost exclusively. Efforts to revive the language have increased in recent decades.
Hawaiian language immersion schools are now open to children whose families want to retain (or introduce) the Hawaiian language in the next generation. The local National Public Radio station features a short segment titled "Hawaiian word of the day". Additionally, the Sunday editions of the Honolulu Star-Bulletin and its successor, the Honolulu Star-Advertiser, feature a brief article called Kauakūkalahale, written entirely in Hawaiian by a student.[129] Language revitalization efforts are ongoing around the world. Revitalization teams are utilizing modern technologies to increase contact with indigenous languages and to record traditional knowledge. In Mexico, the Mixtec people's language heavily revolves around the interaction between climate, nature, and what it means for their livelihood. UNESCO's LINKS (Local and Indigenous Knowledge) program recently undertook a project to create a glossary of Mixtec terms and phrases related to climate. UNESCO believes that the traditional knowledge of the Mixtec people, via their deep connection with weather phenomena, can provide insight on ways to address climate change. Its intention in creating the glossary is to "facilitate discussions between experts and the holders of traditional knowledge".[130] In Canada, the Wapikoni Mobile project travels to indigenous communities and provides lessons in film making. Program leaders travel across Canada with mobile audiovisual production units, aiming to provide indigenous youth with a way to connect with their culture through a film topic of their choosing. The Wapikoni project submits its films to events around the world as an attempt to spread knowledge of indigenous culture and language.[131] Of the youth in Rapa Nui (Easter Island), ten percent learn their mother tongue. The rest of the community has adopted Spanish in order to communicate with the outside world and support its tourism industry.
Through a collaboration between UNESCO and the Chilean Corporación Nacional de Desarrollo Indígena, the Department of Rapa Nui Language and Culture at the Lorenzo Baeza Vega School was created. Since 1990, the department has created primary-education texts in the Rapa Nui language. In 2017, Nid Rapa Nui, a non-governmental organization, was also created with the goal of establishing a school that teaches courses entirely in Rapa Nui.[132] Language revitalisation has been linked to improved health outcomes for Indigenous communities involved in reclaiming their traditional language. Benefits range from improved mental health for community members to increased connectedness to culture, identity, and a sense of wholeness. Indigenous languages are a core element in the formation of identity, providing pathways for cultural expression, agency, and spiritual and ancestral connection.[133] Connection to culture is considered to play an important role in childhood development,[134] and is a UN convention right.[135] Colonisation, and the subsequent linguicide carried out through policies such as those that created Australia's Stolen Generations, have damaged this connection. It has been proposed that language revitalization may play an important role in countering the intergenerational trauma that has been caused.[136] Researchers at the University of Adelaide and the South Australian Health and Medical Research Institute have found that revitalisation of Aboriginal languages is linked to better mental health.[137] One study in the Barngarla community in South Australia has looked holistically at the positive benefits of language reclamation, healing mental and emotional scars, and building connections to community and country that underpin wellness and wholeness.
The study identified the Barngarla people's connection to their language as a strong component of developing a strong cultural and personal identity; the people are as connected to language as they are to culture, and culture is key to their identity.[133] Some proponents claim that language reclamation is a form of empowerment and builds strong connections with community and wholeness.[138] John McWhorter has argued that programs to revive indigenous languages will almost never be very effective because of the practical difficulties involved. He also argues that the death of a language does not necessarily mean the death of a culture. Indigenous expression is still possible even when the original language has disappeared, as with Native American groups and as evidenced by the vitality of black American culture in the United States, among people who speak not Yoruba but English. He argues that language death is, ironically, a sign of hitherto isolated peoples migrating and sharing space: "To maintain distinct languages across generations happens only amidst unusually tenacious self-isolation—such as that of the Amish—or brutal segregation".[139] Kenan Malik has also argued that it is "irrational" to try to preserve all the world's languages, as language death is natural and in many cases inevitable, even with intervention. He proposes that language death improves communication by ensuring more people speak the same language. This may benefit the economy and reduce conflict.[140][141] The protection of minority languages from extinction is often not a concern for speakers of the dominant language. There is often prejudice against, and deliberate persecution of, minority languages, in order to appropriate the cultural and economic capital of minority groups.[142] At other times, governments deem the cost of revitalization programs and of creating linguistically diverse materials too great to take on.[143]
https://en.wikipedia.org/wiki/Language_revitalization
Lexibank is a linguistics database managed by the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany.[1] The database consists of over 100 standardized wordlists (datasets) that are independently curated.[2][3] Lexibank datasets are presented in the Cross-Linguistic Data Format (CLDF).[4] Phonological and lexical features are automatically computed in Lexibank.[2] The datasets are publicly accessible, are archived at Zenodo,[5] and are also publicly available on GitHub.[6] Lexibank is also part of the Cross-Linguistic Linked Data project. All of the datasets are released under the CC BY 4.0 license. Applications of the database include historical linguistics and comparative phonology. The following is a list of Lexibank (version 0.2) datasets as of 17 June 2022.[7]
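Because Lexibank datasets follow the CLDF wordlist layout, their forms table can be read with standard tools. A minimal sketch, assuming a `forms.csv` with the standard CLDF `Language_ID`, `Parameter_ID` (the concept), and `Form` columns; the sample rows below are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Toy stand-in for a CLDF forms.csv; real Lexibank files have more columns.
SAMPLE = """ID,Language_ID,Parameter_ID,Form
1,deu,HAND,hant
2,eng,HAND,hænd
3,deu,WATER,vasɐ
4,eng,WATER,wɔːtər
"""

def forms_by_concept(fileobj):
    """Group forms as concept -> {language: form} for cross-linguistic lookup."""
    table = defaultdict(dict)
    for row in csv.DictReader(fileobj):
        table[row["Parameter_ID"]][row["Language_ID"]] = row["Form"]
    return table

table = forms_by_concept(io.StringIO(SAMPLE))
# table["HAND"] == {"deu": "hant", "eng": "hænd"}
```

In practice one would open the dataset's actual `forms.csv` (or use a dedicated CLDF reader) rather than an inline string; the grouping step is the same.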
https://en.wikipedia.org/wiki/Lexibank
Mass comparison is a method developed by Joseph Greenberg to determine the level of genetic relatedness between languages. It is now usually called multilateral comparison. Mass comparison has been referred to as a "methodological deception" and is rejected by most linguists; its continued use is primarily restricted to fringe linguistics.[1][2] Some of the top-level relationships Greenberg named are now generally accepted thanks to analysis with other, more widely accepted linguistic techniques, though they had already been posited by others (e.g. Afro-Asiatic and Niger–Congo). Others are accepted by many though disputed by some prominent specialists (e.g. Nilo-Saharan), while others are almost universally rejected (e.g. Eurasiatic, Khoisan and Amerind). The idea of the mass comparison method is that a group of languages is related when they show numerous resemblances in vocabulary, including pronouns and morphemes, forming an interlocking pattern common to the group. Unlike the comparative method, mass comparison does not require any regular or systematic correspondences between the languages compared; all that is required is an impressionistic feeling of similarity. Greenberg does not establish a clear standard for determining relatedness: he does not set a standard for what he considers a "resemblance" or for how many resemblances are needed to prove relationship.[3] Mass comparison is done by setting up a table of basic vocabulary items and their forms in the languages to be compared for resemblances. The table can also include common morphemes. The following table[4] was used to illustrate the technique. It shows the forms of six items of basic vocabulary in nine different languages, identified by letters. According to Greenberg, basic relationships can be determined without any experience in the case of languages that are fairly closely related, though knowledge of probable paths of sound change acquired through typology allows one to go farther faster.
For instance, the path p > f is extremely frequent, but the path f > p is much less so, enabling one to hypothesize that fi : pi and fik : pix are indeed related and go back to the protoforms *pi and *pik/x. Similarly, the fact that k > x is extremely frequent while x > k is much less so enables one to choose *pik over *pix. Thus, according to Greenberg (2005:318), phonological considerations come into play from the very beginning, even though mass comparison does not attempt to produce reconstructions of protolanguages, as these belong to a later phase of study. The tables used in actual mass comparison involve much larger numbers of items and languages. The items included may be either lexical, such as 'hand', 'sky', and 'go', or morphological, such as PLURAL and MASCULINE.[5] For Greenberg, the results achieved through mass comparison approached certainty:[6] "The presence of fundamental vocabulary resemblances and resemblances in items with grammatical function, particularly if recurrent through a number of languages, is a sure indication of genetic relationship." As a tool for identifying genetic relationships between languages, mass comparison is an alternative to the comparative method. Proponents of mass comparison, such as Greenberg, claim that the comparative method is unnecessary to identify genetic relationships; furthermore, they claim that it can only be used once relationships are identified using mass comparison, making mass comparison the "first step" in determining relationships (1957:44). This contrasts with mainstream comparative linguistics, which relies on the comparative method to aid in identifying genetic relationships; specifically, it involves comparing data from two or more languages.
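The directional reasoning above (common changes like p > f and k > x are far more likely than their reverses) can be sketched as a toy scorer for candidate protoforms. The change inventory and forms below are just the ones from the fi : pi / fik : pix example, not a real typological database:

```python
# Illustrative inventory of frequent sound changes (proto segment, reflex).
# The reverses (f > p, x > k) are deliberately absent because they are rare.
COMMON_CHANGES = {("p", "f"), ("k", "x")}

def derivable(proto: str, attested: str) -> bool:
    """True if each attested segment equals the proto segment or is
    reachable from it by one common change (toy, segment-by-segment)."""
    if len(proto) != len(attested):
        return False
    return all(p == a or (p, a) in COMMON_CHANGES
               for p, a in zip(proto, attested))

def score(proto: str, forms: list[str]) -> int:
    """Number of attested forms derivable from the candidate protoform."""
    return sum(derivable(proto, f) for f in forms)

forms = ["fik", "pix"]
score("pik", forms)  # 2: *pik yields fik via p > f and pix via k > x
score("pix", forms)  # 1: *pix cannot yield fik, since x > k is not common
```

This reproduces the choice of *pik over *pix described in the text, though real reconstruction conditions changes on environment rather than scoring segments independently.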
If sets of recurrent sound correspondences are found, the languages are most likely related; if further investigation confirms the potential relationship, reconstructed ancestral forms can be set up using the collated sound correspondences.[3] However, Greenberg did not entirely disavow the comparative method; he stated that "once we have a well-established stock I go about comparing and reconstructing just like anyone else, as can be seen in my various contributions to historical linguistics" (1990, quoted in Ruhlen 1994:285) and accused mainstream linguists of spreading "the strange and widely disseminated notion that I seek to replace the comparative method with a new and strange invention of my own" (2002:2). Earlier in his career, before he fully developed mass comparison, he even stated that his methodology did not "conflict in any fashion with the traditional comparative method" (1957:44). However, Greenberg sees the comparative method as playing no role in determining relationships, significantly reducing its importance compared to traditional methods of linguistic comparison. In effect, his approach of mass comparison sidelined the comparative method with a "new and strange invention of his own".[3] Reflecting the methodological empiricism also present in his typological work, he viewed facts as of greater weight than their interpretations, stating (1957:45): The presence of frequent errors in Greenberg's data has been pointed out by linguists such as Lyle Campbell and Alexander Vovin, who see it as fatally undermining Greenberg's attempt to demonstrate the reliability of mass comparison. Campbell notes in his discussion of Greenberg's Amerind proposal that "nearly every specialist finds extensive distortions and inaccuracies in Greenberg's data"; for example, Willem Adelaar, a specialist in Andean languages, has stated that "the number of erroneous forms [in Greenberg's data] probably exceeds that of the correct forms".
Some forms in Greenberg's data even appear to be attributed to the wrong language. Greenberg also neglects known sound changes that the languages have undergone; once these are taken into account, many of the resemblances he points out vanish. Greenberg's data also contain errors of a more systematic sort: for instance, he groups unrelated languages together based on outdated classifications or because they have similar names.[3][7][8] Greenberg also arbitrarily deems certain portions of a word to be affixes when affixes of the requisite phonological shape are unknown, to make words cohere better with his data. Conversely, Greenberg frequently employs affixed forms in his data, failing to recognise actual morphemic boundaries; when the affixes are removed, the words often no longer bear any resemblance to his "Amerind" reconstructions.[7][9] Greenberg has responded to this criticism by claiming that "the method of multilateral comparison is so powerful that it will give reliable results even with the poorest data. Incorrect material should merely have a randomizing effect". This has hardly reassured critics of the method, who are far from convinced of the method's "power".[9] A prominent criticism of mass comparison is that it cannot distinguish borrowed forms from inherited ones, unlike comparative reconstruction, which is able to do so through regular sound correspondences. Undetected borrowings within Greenberg's data support this claim; for instance, he lists "cognates" of Uwa baxita "machete", even though it is a borrowing from Spanish machete.[7][10] Greenberg admits that "in particular and infrequent instances the question of borrowing may be doubtful" when using mass comparison, but claims that basic vocabulary is unlikely to be borrowed compared to cultural vocabulary, stating that "where a mass of resemblances is due to borrowing, they will tend to appear in cultural vocabulary and to cluster in certain semantic areas which reflect the cultural nature of the contact."
Mainstream linguists accept this premise, but claim that it does not suffice for distinguishing borrowings from inherited vocabulary.[7] According to Greenberg, any type of linguistic item may be borrowed "on occasion", but "fundamental vocabulary is proof against mass borrowing". However, languages can and do borrow basic vocabulary. For instance, in the words of Campbell, Finnish has borrowed "from its Baltic and Germanic neighbors various terms for basic kinship and body parts, including 'mother', 'daughter', 'sister', 'tooth', 'navel', 'neck', 'thigh', and 'fur'". Greenberg continues by stating that "[D]erivational, inflectional, and pronominal morphemes and morph alternations are the least subject of all to borrowing"; he does incorporate morphological and pronominal correlations when performing mass comparison, but they are peripheral and few in number compared to his lexical comparisons. Greenberg himself acknowledges the peripheral role they play in his data by saying that they are "not really necessary". Furthermore, the correlations he lists are neither exclusive to nor universally found within the languages which he compares. Greenberg is correct in pointing out that borrowing of pronouns or morphology is rare, but it cannot be ruled out without recourse to a method more sophisticated than mass comparison.[3][7][11] Greenberg continues by claiming that "[R]ecurrent sound correspondences" do not suffice to detect borrowing, since "where loans are numerous, they often show such correspondences".[12] However, Greenberg misrepresents the practices of mainstream comparative linguistics here; few linguists advocate using sound correspondences to the exclusion of all other kinds of evidence.
This additional evidence often helps separate borrowings from inherited vocabulary; for instance, Campbell mentions how "[c]ertain sorts of patterned grammatical evidence (that which resists explanation from borrowing, accident, or typology and universals) can be important testimony, independent of the issue of sound correspondences".[11] It may not always be possible to separate borrowed and inherited material, but any method has its limits; in the vast majority of cases, the difference can be discerned.[3] Cross-linguistically, chance resemblances between unrelated lexical items are common, due to the large number of lexemes present across the world's languages; for instance, English much and Spanish mucho are demonstrably unrelated, despite their similar phonological shape. This means that many of the resemblances found through mass comparison are likely to be coincidental. Greenberg worsens this issue by reconstructing a common ancestor when only a small proportion of the languages he compares actually display a match for any given lexical item, effectively allowing him to cherry-pick similar-looking lexical items from a wide array of languages.[9] Though they are less susceptible to borrowing, pronouns and morphology also typically display a restricted subset of a language's phonemic inventory, making cross-linguistic chance resemblances more likely.[3] Greenberg also allows for a wide semantic latitude when comparing items; while widely accepted linguistic comparisons do allow for a degree of semantic latitude, what he allows for is incommensurably greater; for instance, one of his comparisons involves words for "night", "excrement", and "grass".[9] Proponents of mass comparison often neglect to exclude classes of words that are usually considered unreliable for proving linguistic relationships. For instance, Greenberg made no attempt to exclude onomatopoeic words from his data.
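The combinatorial point about chance resemblances made above can be illustrated with a back-of-the-envelope calculation: even a small per-pair probability of a lookalike yields many spurious matches once hundreds of languages and concepts are in play. The match probability used here is an arbitrary illustrative assumption, not an empirical estimate:

```python
from math import comb

def expected_chance_matches(n_langs: int, n_concepts: int, p_pair: float) -> float:
    """Expected number of concept-level lookalikes across all language pairs,
    assuming independent chance matches with probability p_pair per pair
    per concept (a deliberately crude model)."""
    n_pairs = comb(n_langs, 2)          # number of language pairs
    return n_pairs * n_concepts * p_pair

# e.g. 100 languages, a 100-concept list, a 0.5% chance match per pair/concept:
expected_chance_matches(100, 100, 0.005)  # 4950 pairs * 100 * 0.005 = 2475.0
```

Under these toy assumptions, thousands of lookalikes arise by chance alone, which is why cherry-picking matches from a wide array of languages is so problematic.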
Onomatopoeic words are often excluded from linguistic comparison, as similar-sounding onomatopoeic words can easily evolve in parallel. Though it is impossible to make a definite judgement as to whether a word is onomatopoeic, certain semantic fields, such as "blow" and "suck", show a cross-linguistic tendency to be onomatopoeic; making such a judgement may require deep analysis of a type that mass comparison makes difficult. Similarly, Greenberg neglected to exclude from his data items affected by sound symbolism, which often distorts the original shape of lexical items. Finally, "nursery words", such as "mama" and "papa", lack evidential value in linguistic comparison, as they are usually thought to derive from the sounds infants make when beginning to acquire language. Advocates of mass comparison often fail to take sufficient care to exclude nursery words; one, Merritt Ruhlen, has even attempted to downplay the problems inherent in using them in linguistic comparison.[3][7] The fact that many of the indigenous languages of the Americas have pronouns that begin with nasal stops, which Greenberg sees as evidence of common ancestry, may ultimately also be linked to early speech development; the Algonquian specialist Ives Goddard notes that "A gesture equivalent to that used to articulate the sound n is the single most important voluntary muscular activity of a nursing infant".[13] Since the development of comparative linguistics in the 19th century, a linguist who claims that two languages are related, whether or not there exists historical evidence, is expected to back up that claim by presenting general rules that describe the differences between their lexicons, morphologies, and grammars. The procedure is described in detail in the comparative method article.
For instance, one could demonstrate that Spanish is related to Italian by showing that many words of the former can be mapped to corresponding words of the latter by a relatively small set of replacement rules—such as the correspondence of initial es- and s-, final -os and -i, etc. Many similar correspondences exist between the grammars of the two languages. Since those systematic correspondences are extremely unlikely to be random coincidences, the most likely explanation by far is that the two languages have evolved from a single ancestral tongue (Latin, in this case). All pre-historical language groupings that are widely accepted today—such as the Indo-European, Uralic, Algonquian, and Bantu families—have been established this way. The actual development of the comparative method was a more gradual process than Greenberg's detractors suppose. It had three decisive moments. The first was Rasmus Rask's observation in 1818 of a possible regular sound change in Germanic consonants. The second was Jacob Grimm's extension of this observation into a general principle (Grimm's law) in 1822. The third was Karl Verner's resolution of an irregularity in this sound change (Verner's law) in 1875. Only in 1861 did August Schleicher, for the first time, present systematic reconstructions of Indo-European proto-forms (Lehmann 1993:26). Schleicher, however, viewed these reconstructions as extremely tentative (1874:8). He never claimed that they proved the existence of the Indo-European family, which he accepted as a given from previous research—primarily that of Franz Bopp, his great predecessor in Indo-European studies.
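The kind of rule-mapping described above can be sketched mechanically. The two rules below are the ones mentioned in the text; real correspondences are phonologically conditioned and far more numerous, so this is only an illustration:

```python
# Toy replacement rules mapping Spanish word shapes toward Italian:
# (position, Spanish pattern, Italian pattern)
RULES = [
    ("initial", "es", "s"),  # Spanish es- : Italian s-   (escuela : scuola)
    ("final", "os", "i"),    # Spanish -os : Italian -i   (libros : libri)
]

def apply_rules(word: str) -> str:
    """Apply each positional replacement rule once, in order."""
    for position, src, dst in RULES:
        if position == "initial" and word.startswith(src):
            word = dst + word[len(src):]
        elif position == "final" and word.endswith(src):
            word = word[:-len(src)] + dst
    return word

apply_rules("libros")  # -> "libri" (the attested Italian form)
apply_rules("estado")  # -> "stado" (actual Italian "stato": other rules apply too)
```

The point of the comparative method is precisely that such rules apply systematically across many words at once, which chance resemblance cannot explain.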
Karl Brugmann, who succeeded Schleicher as the leading authority on Indo-European, and the other Neogrammarians of the late 19th century distilled the work of these scholars into the famous (if often disputed) principle that "every sound change, insofar as it occurs automatically, takes place according to laws that admit of no exception" (Brugmann 1878).[14] The Neogrammarians did not, however, regard regular sound correspondences or comparative reconstructions as relevant to the proof of genetic relationship between languages. In fact, they made almost no statements on how languages are to be classified (Greenberg 2005:158). The only Neogrammarian to deal with this question was Berthold Delbrück, Brugmann's collaborator on the Grundriß der vergleichenden Grammatik der indogermanischen Sprachen (Greenberg 2005:158-159, 288). According to Delbrück (1904:121-122, quoted in Greenberg 2005:159), Bopp had claimed to prove the existence of Indo-European in the following way: Furthermore, Delbrück took the position later enunciated by Greenberg on the priority of etymologies to sound laws (1884:47, quoted in Greenberg 2005:288): "obvious etymologies are the material from which sound laws are drawn." The opinion that sound correspondences or, in another version of the opinion, reconstruction of a proto-language are necessary to show relationship between languages thus dates from the 20th, not the 19th, century, and was never a position of the Neogrammarians. Indo-European was recognized by scholars such as William Jones (1786) and Franz Bopp (1816) long before the development of the comparative method. Furthermore, Indo-European was not the first language family to be recognized by students of language. Semitic had been recognized by European scholars in the 17th century, and Finno-Ugric in the 18th. Dravidian was recognized in the mid-19th century by Robert Caldwell (1856), well before the publication of Schleicher's comparative reconstructions.
Finally, the supposition that all of the language families generally accepted by linguists today have been established by the comparative method is untrue. Some families were accepted for decades before comparative reconstructions of them were put forward, for example Afro-Asiatic and Sino-Tibetan. Many languages are generally accepted as belonging to a language family even though no comparative reconstruction exists, often because the languages are attested only in fragmentary form, such as the Anatolian language Lydian (Greenberg 2005:161). Conversely, detailed comparative reconstructions exist for some language families that nonetheless remain controversial, such as Altaic. Detractors of Altaic point out that the data collected to demonstrate the family's existence by comparative means are scarce, erroneous, and insufficient. Establishing regular phonological correspondences requires preparing and comparing extensive lexical lists, and such lists are lacking for many of the families proposed through mass comparison. Furthermore, other specific problems affect the "comparative" lists of both kinds of proposal, such as the late attestation of the Altaic languages or the comparison of uncertain proto-forms.[15][16] Greenberg claimed that he was at bottom merely continuing the simple but effective method of language classification that had resulted in the discovery of numerous language families prior to the elaboration of the comparative method (1955:1-2, 2005:75), and that had continued to do so thereafter, as in the classification of Hittite as Indo-European in 1917 (Greenberg 2005:160-161). This method consists essentially of two things: resemblances in basic vocabulary and resemblances in inflectional morphemes.
If mass comparison differs from it in any obvious way, it would seem to be in the theorization of an approach that had previously been applied in a relatively ad hoc manner, and in the following additions: The positions of Greenberg and his critics therefore appear to provide a starkly contrasted alternative. Besides systematic changes, languages are also subject to sporadic mutations (such as borrowings from other languages, irregular inflections, compounding, and abbreviation) that affect one word at a time, or small subsets of words. For example, Spanish perro (dog), which does not come from Latin, cannot be rule-mapped to its Italian equivalent cane (the Spanish word can is the Latin-derived equivalent but is much less used in everyday conversation, being reserved for more formal purposes). As those sporadic changes accumulate, they will increasingly obscure the systematic ones—just as enough dirt and scratches on a photograph will eventually make the face unrecognisable.[9] In spite of the apparently intractable nature of the conflict between Greenberg and his critics, a few linguists have begun to argue for its resolution. Edward Vajda, noted for his recent proposal of Dené–Yeniseian, attempts to stake out a position that is sympathetic both to Greenberg's approach and to that of his critics, such as Lyle Campbell and Johanna Nichols.[17]
https://en.wikipedia.org/wiki/Mass_comparison
The Rosetta Project is a global collaboration of language specialists and native speakers working to develop a contemporary version of the historic Rosetta Stone. Run by the Long Now Foundation, the project aims to create a survey and near-permanent archive of 1,500 languages that can enable comparative linguistic research and education and might help recover or revitalize lost languages in the future.[1][2] The project works through an open-contribution, open-review process similar to the one that created the Oxford English Dictionary. The archive will be publicly available in three media: an HD-Rosetta micro-etched nickel-alloy disc three inches (7.62 cm) across with a 2,000-year life expectancy; a single-volume monumental reference book; and a growing online archive. Half to 90 percent of the world's languages are predicted to disappear in the next century, many with little or no significant documentation.[citation needed] Some of these languages have fewer than one thousand speakers left. Others are considered to be dying out because language policy based on an official language is increasing the prevalence of major languages that are used as the medium of instruction in public schools and national media. (For example, Tok Pisin is "slowly crowding out" other languages of Papua New Guinea.)[3] Much linguistic description, especially the description of languages with few speakers, remains hidden in personal research files or poorly preserved in under-funded archives. As part of the effort to secure this critical legacy of linguistic diversity, the Long Now Foundation plans a broad online survey and near-permanent physical archive of 1,500 of the approximately 7,000 human languages. The project has three overlapping goals: The 1,500-language corpus expands on the parallel-text structure of the original Rosetta Stone by archiving ten descriptive components for each of the 1,500 selected languages.
The goal is an open-source "Linux of Linguistics"—an effort of collaborative online scholarship drawing on the expertise and contributions of thousands of academic specialists and native speakers around the world. The project is also organising formal archive research groups at Stanford, Yale, Berkeley, the American Library of Congress, and the American Summer Institute of Linguistics (and its offices in Dallas). The resulting Rosetta archives are publicly available in three different media: In a direct analogy to its namesake, the Rosetta spacecraft carried a micro-etched pure-nickel prototype of the Rosetta disc donated by the Long Now Foundation. The disc was inscribed with 6,500 pages of language translations. The Rosetta spacecraft, carrying the Rosetta disc, launched on 2 March 2004. On 6 August 2014, the spacecraft reached the comet 67P/Churyumov–Gerasimenko. On 30 September 2016, the Rosetta spacecraft ended its mission by hard-landing on the comet in its Ma'at region. The Rosetta disc, Rosetta spacecraft, and comet are now all on a 6.44-year orbit around the Sun. A "Version 1.0" of the HD-Rosetta disc was completed on 3 November 2008.[5] The disc contains over 13,000 pages of information in over 1,500 languages,[6] which can be read under a microscope at 650× magnification. In early 2017, the Rosetta Wearable Disk was released.[7] It was developed using a manufacturing process similar to that of the first edition of the Rosetta Disk, the main difference being that the final archive is about 2 cm (0.79 in) in diameter, enabling it to be worn as an ornament on the body. One side has instructions in eight different languages and scripts (Bahasa Indonesia, English, Hindi, Mandarin, Modern Standard Arabic, Spanish, Swahili, and Russian), and the other an archive of over 1,000 human languages assembled in 2016. By November 2017, the initial run of 100 disks had all been sold, but new releases are planned.[6]
https://en.wikipedia.org/wiki/Rosetta_Project
A Swadesh list (/ˈswɑːdɛʃ/) is a compilation of tentatively universal concepts for the purposes of lexicostatistics. That is, a Swadesh list is a list of forms and concepts which all languages, without exception, have terms for, such as star, hand, water, kill, sleep, and so forth. The number of such terms is small – a few hundred at most, or possibly fewer than a hundred. The inclusion or exclusion of many terms is subject to debate among linguists; thus, there are several different lists, and some authors may refer to "Swadesh lists". The Swadesh list is named after the linguist Morris Swadesh. Translations of a Swadesh list into a set of languages allow researchers to quantify the interrelatedness of those languages. Swadesh lists are used in lexicostatistics (the quantitative assessment of the genealogical relatedness of languages) and glottochronology (the dating of language divergence). For instance, the terms on a Swadesh list can be compared between two languages (since both languages will have them) to see if they are related and how closely, thus giving useful information that can be further applied to comparison of the languages. (Actual lexicostatistics is quite complicated, and usually sets of languages are compared.) Morris Swadesh created several versions of his list. He started[1] with a list of 215 meanings (mistakenly introduced as a list of 225 meanings in the paper due to a spelling error[2]), which he reduced to 165 words for the Salish-Spokane-Kalispel language. In 1952, he published a list of 215 meanings,[3] of which he suggested the removal of 16 for being unclear or not universal, with one added to arrive at 200 words. In 1955,[4] he wrote, "The only solution appears to be a drastic weeding out of the list, in the realization that quality is at least as important as quantity. Even the new list has defects, but they are relatively mild and few in number." After minor corrections, the final 100-word list was published posthumously in 1971[5] and 1972.
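The pairwise comparison described above can be sketched as a simple cognate-percentage computation. The cognacy judgments below are toy examples invented for illustration; in practice such judgments require expert analysis:

```python
# A minimal lexicostatistical sketch: the proportion of concepts on a test
# list judged cognate between two languages. Judgments are assumed given.
def shared_cognate_rate(judgments: dict[str, bool]) -> float:
    """Fraction of compared concepts judged cognate between two languages."""
    return sum(judgments.values()) / len(judgments)

# Hypothetical judgments for five Swadesh concepts (e.g. English vs. German)
judgments = {"hand": True, "water": True, "dog": False, "two": True, "tree": False}
shared_cognate_rate(judgments)  # -> 0.6
```

Real studies compare whole sets of languages at once and feed rates like this into subgrouping and dating calculations, but the core quantity is this percentage.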
Other versions of lexicostatistical test lists were published e.g. by Robert Lees (1953), John A. Rea (1958:145f), Dell Hymes (1960:6), E. Cross (1964, with 241 concepts), W. J. Samarin (1967:220f), D. Wilson (1969, with 57 meanings), Lionel Bender (1969), R. L. Oswald (1971), Winfred P. Lehmann (1984:35f), D. Ringe (1992, passim, different versions), Sergei Starostin (1984, passim, different versions), William S-Y. Wang (1994), M. Lohr (2000, 128 meanings in 18 languages), B. Kessler (2002), and many others. The Concepticon,[6] a project hosted at the Cross-Linguistic Linked Data (CLLD) project, collects various concept lists (including classical Swadesh lists) across different linguistic areas and times, currently listing 240 different concept lists.[7] Frequently used, and widely available on the internet, is the version by Isidore Dyen (1992, 200 meanings in 95 language variants). Since 2010, a team around Michael Dunn has tried to update and enhance that list.[8] In origin, the words in the Swadesh lists were chosen for their universal, culturally independent availability in as many languages as possible, regardless of their stability (how prone a word is to changing, as all words do over time to a greater or lesser extent, which can include borrowing from another language). However, stability may be important. The stability of terms on a Swadesh list under language change, and the potential use of this fact for purposes of glottochronology (the study of how languages develop and branch apart over time), have been analyzed by numerous authors, including Marisa Lohr (1999, 2000).[9] The Swadesh list was put together by Morris Swadesh on the basis of his intuition. Similar more recent lists, such as the Dolgopolsky list (1964) or the Leipzig–Jakarta list (2009), are based on systematic data from many different languages, but they are not yet as widely known or as widely used as the Swadesh list.
Lexicostatistical test lists are used in lexicostatistics to define subgroupings of languages, and in glottochronology to "provide dates for branching points in the tree".[10] The task of defining (and counting the number of) cognate words in the list is far from trivial, and often is subject to dispute, because cognates do not necessarily look similar, and recognition of cognates presupposes knowledge of the sound laws of the respective languages. Swadesh's final list, published in 1971,[5] contains 100 terms. Explanations of the terms can be found in Swadesh 1952[3] or, where noted by a dagger (†), in Swadesh 1955. Note that only the original sequence clarifies the intended meaning, which is lost in an alphabetical ordering, e.g. in the case of "27. bark" (originally given without the specification added here). ^ "Claw" was only added in 1955, but was again replaced by many well-known specialists with (finger)nail, because expressions for "claw" are not available in many old, extinct, or lesser-known languages. The 110-item Global Lexicostatistical Database list uses the original 100-item Swadesh list, in addition to 10 other words from the Swadesh–Yakhontov list.[11] The most used list nowadays is the Swadesh 207-word list, adapted from Swadesh 1952.[3] In Wiktionary ("Swadesh lists by language"), Panlex[12][13] and in Palisto's "Swadesh Word List of Indo-European languages",[14] hundreds of Swadesh lists in this form can be found. The Swadesh–Yakhontov list is a 35-word subset of the Swadesh list posited as especially stable by Russian linguist Sergei Yakhontov around the 1960s, although the list was only officially published in 1991.[15] It has been used in lexicostatistics by linguists such as Sergei Starostin. With their Swadesh numbers, they are:[16] Holman et al. (2008) found that in identifying the relationships between Chinese dialects the Swadesh–Yakhontov list was less accurate than the original Swadesh-100 list.
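Classical glottochronology turns a shared-cognate fraction into a divergence date via Swadesh's formula t = ln(c) / (2 ln(r)), where c is the fraction of the list still shared and r is an assumed per-millennium retention rate. A minimal sketch follows; the formula and the retention constant of about 0.86 for the 100-word list are the commonly cited textbook values, not taken from this article, and the assumption of a constant rate is exactly what later critics of glottochronology dispute.

```python
import math

def divergence_time(shared_fraction, retention=0.86):
    """Millennia since two languages split, per classic glottochronology.

    t = ln(c) / (2 * ln(r)). retention=0.86 is the commonly assumed
    per-millennium retention rate for the 100-word list; it is an
    assumption of the method, not an empirical constant.
    """
    return math.log(shared_fraction) / (2 * math.log(retention))

# If two languages each retained 86% per millennium independently, they
# still share about 0.86**2 ≈ 74% of the list after one millennium:
print(divergence_time(0.86 ** 2))  # ≈ 1.0 (millennia)
```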
Further, they found that a different (40-word) list (also known as the ASJP list) was just as accurate as the Swadesh-100 list. However, they calculated the relative stability of the words by comparing retentions between languages in established language families. They found no statistically significant difference in the correlations in the families of the Old versus the New World. The ranked Swadesh-100 list, with Swadesh numbers and relative stability, is as follows (Holman et al., Appendix; asterisked words appear on the 40-word list): In studying the sign languages of Vietnam and Thailand, linguist James Woodward noted that the traditional Swadesh list applied to spoken languages was unsuited for sign languages. The Swadesh list results in overestimation of the relationships between sign languages, due to indexical signs such as pronouns and parts of the body. The modified list is as follows, in mostly alphabetical order:[17]
https://en.wikipedia.org/wiki/Swadesh_list
The World Atlas of Language Structures (WALS) is a database of structural (phonological, grammatical, lexical) properties of languages gathered from descriptive materials.[1] It was first published by Oxford University Press as a book with CD-ROM in 2005, and was released as the second edition on the Internet in April 2008. It is maintained by the Max Planck Institute for Evolutionary Anthropology and by the Max Planck Digital Library. The editors are Martin Haspelmath, Matthew S. Dryer, David Gil and Bernard Comrie.[1] The atlas provides information on the location, linguistic affiliation and basic typological features of a great number of the world's languages. It interacts with OpenStreetMap maps. The information of the atlas is published under the Creative Commons Attribution 4.0 International license. It is part of the Cross-Linguistic Linked Data project hosted by the Max Planck Institute for the Science of Human History.[2]
https://en.wikipedia.org/wiki/World_Atlas_of_Language_Structures
In cryptography, PBKDF1 and PBKDF2 (Password-Based Key Derivation Function 1 and 2) are key derivation functions with a sliding computational cost, used to reduce vulnerability to brute-force attacks.[1] PBKDF2 is part of RSA Laboratories' Public-Key Cryptography Standards (PKCS) series, specifically PKCS #5 v2.0, also published as Internet Engineering Task Force RFC 2898. It supersedes PBKDF1, which could only produce derived keys up to 160 bits long.[2] RFC 8018 (PKCS #5 v2.1), published in 2017, recommends PBKDF2 for password hashing.[3] PBKDF2 applies a pseudorandom function, such as a hash-based message authentication code (HMAC), to the input password or passphrase along with a salt value, and repeats the process many times to produce a derived key, which can then be used as a cryptographic key in subsequent operations. The added computational work makes password cracking much more difficult, and is known as key stretching. When the standard was written in the year 2000, the recommended minimum number of iterations was 1,000, but the parameter is intended to be increased over time as CPU speeds increase. A Kerberos standard in 2005 recommended 4,096 iterations;[1] Apple reportedly used 2,000 for iOS 3 and 10,000 for iOS 4;[4] while LastPass in 2011 used 5,000 iterations for JavaScript clients and 100,000 iterations for server-side hashing.[5] In 2023, OWASP recommended using 600,000 iterations for PBKDF2-HMAC-SHA256 and 210,000 for PBKDF2-HMAC-SHA512.[6] Having a salt added to the password reduces the ability to use precomputed hashes (rainbow tables) for attacks, and means that multiple passwords have to be tested individually, not all at once.
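PBKDF2 is exposed directly in Python's standard library; a minimal sketch of the derivation described above, using the 2023 OWASP iteration count (the password and salt below are placeholders):

```python
import hashlib
import os

password = b"correct horse battery staple"  # placeholder password
salt = os.urandom(16)                       # random per-password salt

derived_key = hashlib.pbkdf2_hmac(
    "sha256",   # underlying hash for the HMAC pseudorandom function
    password,
    salt,
    600_000,    # iteration count: the "sliding" cost parameter
    dklen=32,   # length of the derived key in bytes
)
print(derived_key.hex())
```

Raising the iteration count slows each guess proportionally for an attacker, while the salt forces each password to be attacked individually.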
The public-key cryptography standard recommends a salt length of at least 64 bits.[7] The US National Institute of Standards and Technology recommends a salt length of at least 128 bits.[8] PBKDF2 has five input parameters:[9] where: Each hLen-bit block Ti of the derived key DK is computed as follows (with + marking string concatenation): The function F is the xor (^) of c iterations of chained PRFs. The first iteration of the PRF uses Password as the PRF key and Salt concatenated with i encoded as a big-endian 32-bit integer as the input. (Note that i is a 1-based index.) Subsequent iterations of the PRF use Password as the PRF key and the output of the previous PRF computation as the input: where: For example, WPA2 uses: PBKDF1 had a simpler process: the initial U (called T in this version) is created by PRF(Password + Salt), and the following ones are simply PRF(Uprevious). The key is extracted as the first dkLen bits of the final hash, which is why there is a size limit.[9] PBKDF2 has an interesting property when using HMAC as its pseudorandom function. It is possible to trivially construct any number of different password pairs with collisions within each pair.[10] If a supplied password is longer than the block size of the underlying HMAC hash function, the password is first pre-hashed into a digest, and that digest is instead used as the password. For example, the following password is too long: therefore, when using HMAC-SHA1, it is pre-hashed using SHA-1 into: Which can be represented in ASCII as: This means that regardless of the salt or iterations, PBKDF2-HMAC-SHA1 will generate the same key bytes for the passwords: For example, using: The following two function calls: will generate the same derived key bytes (17EB4014C8C461C300E9B61518B9A18B).
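The HMAC pre-hashing collision described above can be reproduced with the standard library. The long password here is an arbitrary placeholder, not the specific example elided from the text: any password longer than HMAC-SHA1's 64-byte block size collides with its own SHA-1 digest.

```python
import hashlib

# A password longer than the 64-byte HMAC-SHA1 block size; HMAC will
# replace it internally with its 20-byte SHA-1 digest.
long_password = b"x" * 65
equivalent = hashlib.sha1(long_password).digest()  # the colliding password

salt, iterations = b"salt", 1000
dk1 = hashlib.pbkdf2_hmac("sha1", long_password, salt, iterations)
dk2 = hashlib.pbkdf2_hmac("sha1", equivalent, salt, iterations)

assert dk1 == dk2  # same derived key, regardless of salt or iterations
```

As the article notes, this is not a practical vulnerability, since producing the colliding partner already requires knowing the original password.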
These derived-key collisions do not represent a security vulnerability, as one still must know the original password in order to generate the hash of the password.[11] One weakness of PBKDF2 is that while its number of iterations can be adjusted to make it take an arbitrarily large amount of computing time, it can be implemented with a small circuit and very little RAM, which makes brute-force attacks using application-specific integrated circuits or graphics processing units relatively cheap.[12] The bcrypt password hashing function requires a larger amount of RAM (but still not tunable separately, i.e. fixed for a given amount of CPU time) and is significantly stronger against such attacks,[13] while the more modern scrypt key derivation function can use arbitrarily large amounts of memory and is therefore more resistant to ASIC and GPU attacks.[12] In 2013, the Password Hashing Competition (PHC) was held to develop a more resistant approach. On 20 July 2015, Argon2 was selected as the final PHC winner, with special recognition given to four other password hashing schemes: Catena, Lyra2, yescrypt and Makwa.[14] Another alternative is Balloon hashing, which is recommended in NIST password guidelines.[15] To limit a brute-force attack, it is possible to make each password attempt require an online interaction, without harming the confidentiality of the password. This can be done using an oblivious pseudorandom function to perform password hardening.[16] This can be done as an alternative to, or as an additional step in, a PBKDF.
https://en.wikipedia.org/wiki/PBKDF2
bcrypt is a password-hashing function designed by Niels Provos and David Mazières. It is based on the Blowfish cipher and was presented at USENIX in 1999.[1] Besides incorporating a salt to protect against rainbow table attacks, bcrypt is an adaptive function: over time, the iteration count can be increased to make it slower, so it remains resistant to brute-force search attacks even with increasing computation power. The bcrypt function is the default password hash algorithm for OpenBSD,[2][non-primary source needed] and was the default for some Linux distributions such as SUSE Linux.[3] There are implementations of bcrypt in C, C++, C#, Embarcadero Delphi, Elixir,[4] Go,[5] Java,[6][7] JavaScript,[8] Perl, PHP, Ruby, Python, Rust,[9] V (Vlang),[10] Zig[11] and other languages. Blowfish is notable among block ciphers for its expensive key setup phase. It starts off with subkeys in a standard state, then uses this state to perform a block encryption using part of the key, and uses the result of that encryption (really a hashing) to replace some of the subkeys. Then it uses this modified state to encrypt another part of the key, and uses the result to replace more of the subkeys. It proceeds in this fashion, using a progressively modified state to hash the key and replace bits of state, until all subkeys have been set. Provos and Mazières took advantage of this, and took it further. They developed a new key setup algorithm for Blowfish, dubbing the resulting cipher "Eksblowfish" ("expensive key schedule Blowfish"). The key setup begins with a modified form of the standard Blowfish key setup, in which both the salt and password are used to set all subkeys. There are then a number of rounds in which the standard Blowfish keying algorithm is applied, using alternately the salt and the password as the key, each round starting with the subkey state from the previous round.
In theory, this is no stronger than the standard Blowfish key schedule, but the number of rekeying rounds is configurable; this process can therefore be made arbitrarily slow, which helps deter brute-force attacks upon the hash or salt. The input to the bcrypt function is the password string (up to 72 bytes), a numeric cost, and a 16-byte (128-bit) salt value. The salt is typically a random value. The bcrypt function uses these inputs to compute a 24-byte (192-bit) hash. The final output of the bcrypt function is a string of the form: For example, with input password abc123xyz, cost 12, and a random salt, the output of bcrypt is the string: Where: The base-64 encoding in bcrypt uses the table ./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789,[12] which differs from RFC 4648 Base64 encoding. $2$ (1999): The original bcrypt specification defined a prefix of $2$. This follows the Modular Crypt Format[13] used when storing passwords in the OpenBSD password file. $2a$: The original specification did not define how to handle non-ASCII characters, nor how to handle a null terminator. The specification was revised to specify that when hashing strings: With this change, the version was changed to $2a$.[14] $2x$, $2y$ (June 2011): In June 2011, a bug was discovered in crypt_blowfish, a PHP implementation of bcrypt. It was mis-handling characters with the 8th bit set.[15] They suggested that system administrators update their existing password databases, replacing $2a$ with $2x$, to indicate that those hashes are bad (and need to use the old broken algorithm). They also suggested the idea of having crypt_blowfish emit $2y$ for hashes generated by the fixed algorithm. Nobody else, including Canonical and OpenBSD, adopted the idea of 2x/2y. This version-marker change was limited to crypt_blowfish. $2b$ (February 2014): A bug was discovered in the OpenBSD implementation of bcrypt.
It was using an unsigned 8-bit value to hold the length of the password.[14][16][17] For passwords longer than 255 bytes, instead of being truncated at 72 bytes, the password would be truncated at the lesser of 72 or the length modulo 256. For example, a 260-byte password would be truncated at 4 bytes rather than at 72 bytes. bcrypt was created for OpenBSD. When they had a bug in their library, they decided to bump the version number.[clarification needed] The bcrypt function below encrypts the text "OrpheanBeholderScryDoubt" 64 times using Blowfish. In bcrypt the usual Blowfish key setup function is replaced with an expensive key setup (EksBlowfishSetup) function: The bcrypt algorithm depends heavily on its "Eksblowfish" key setup algorithm, which runs as follows: InitialState works as in the original Blowfish algorithm, populating the P-array and S-box entries with the fractional part of π in hexadecimal. The ExpandKey function does the following: Hence, ExpandKey(state, 0, key) is the same as the regular Blowfish key schedule, since all XORs with the all-zero salt value are ineffectual. ExpandKey(state, 0, salt) is similar, but uses the salt as a 128-bit key. Many implementations of bcrypt truncate the password to the first 72 bytes, following the OpenBSD implementation. The mathematical algorithm itself requires initialization with 18 32-bit subkeys (equivalent to 72 octets/bytes). The original specification of bcrypt does not mandate any one particular method for mapping text-based passwords from userland into numeric values for the algorithm.
One brief comment in the text mentions, but does not mandate, the possibility of simply using the ASCII-encoded value of a character string: "Finally, the key argument is a secret encryption key, which can be a user-chosen password of up to 56 bytes (including a terminating zero byte when the key is an ASCII string)."[1] Note that the quote above mentions passwords "up to 56 bytes" even though the algorithm itself makes use of a 72-byte initial value. Although Provos and Mazières do not state the reason for the shorter restriction, they may have been motivated by the following statement from Bruce Schneier's original specification of Blowfish: "The 448 [bit] limit on the key size ensures that the [sic] every bit of every subkey depends on every bit of the key."[18] Implementations have varied in their approach of converting passwords into initial numeric values, including sometimes reducing the strength of passwords containing non-ASCII characters.[19] Note that bcrypt is not a key derivation function (KDF); for example, bcrypt cannot be used to derive a 512-bit key from a password. By contrast, algorithms like PBKDF2, scrypt, and Argon2 are password-based key derivation functions, where the output can also be used for password hashing rather than just key derivation. Password hashing generally needs to complete in under 1000 ms; in this scenario, bcrypt is stronger than PBKDF2, scrypt, and Argon2. bcrypt has a maximum password length of 72 bytes. This maximum comes from the first operation of the ExpandKey function, which XORs the 18 4-byte subkeys (P) with the password: The password (which is UTF-8 encoded) is repeated until it is 72 bytes long. For example, a password of: is repeated until it matches the 72 bytes of the 18 per-round P subkeys: In the worst case a password is limited to 18 characters, when every character requires 4 bytes of UTF-8 encoding.
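The 72-byte limit described above can be sketched in a few lines of Python. This is an illustration, not the reference implementation: real bcrypt cycles the password (with a trailing NUL byte in most variants) and performs the XOR inside the Eksblowfish key schedule, but the consequence shown here is the same – bytes past position 72 never influence the hash.

```python
import itertools

P_SUBKEYS = 18  # Blowfish P-array: 18 subkeys * 4 bytes = 72 bytes

def password_block(password: str) -> bytes:
    """Cycle the UTF-8 password to fill the 72 bytes XORed into the P-array.

    Illustrative sketch only; follows the common implementation choice of
    truncating the password to its first 72 bytes.
    """
    data = password.encode("utf-8")[:72]
    cycled = itertools.cycle(data)
    return bytes(next(cycled) for _ in range(4 * P_SUBKEYS))

# Passwords that agree on their first 72 bytes yield the same key material:
a = password_block("x" * 72 + "this tail is ignored")
b = password_block("x" * 72 + "so is this different one")
assert a == b
```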
For example: In 2024 a single-sign-on service by Okta, Inc. announced a vulnerability due to the password being concatenated after the username and the pair hashed with bcrypt, resulting in the password being ignored for logins with a long-enough username.[27] The bcrypt algorithm involves repeatedly encrypting the 24-byte text: This generates 24 bytes of ciphertext, e.g.: The canonical OpenBSD implementation truncates this to 23 bytes:[28] It is unclear why the canonical implementation deletes 8 bits from the resulting password hash.[citation needed] These 23 bytes become 31 characters when base-64 encoded: The encoding used by the canonical OpenBSD implementation uses the same Base64 alphabet as crypt, which is ./ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789.[12] This means the encoding is not compatible with the more common RFC 4648.[citation needed]
https://en.wikipedia.org/wiki/Bcrypt
In cryptography, scrypt (pronounced "ess crypt"[1]) is a password-based key derivation function created by Colin Percival in March 2009, originally for the Tarsnap online backup service.[2][3] The algorithm was specifically designed to make it costly to perform large-scale custom hardware attacks by requiring large amounts of memory. In 2016, the scrypt algorithm was published by the IETF as RFC 7914.[4] A simplified version of scrypt is used as a proof-of-work scheme by a number of cryptocurrencies, first implemented by an anonymous programmer called ArtForz in Tenebrix and followed by Fairbrix and Litecoin soon after.[5] A password-based key derivation function (password-based KDF) is generally designed to be computationally intensive, so that it takes a relatively long time to compute (say on the order of several hundred milliseconds). Legitimate users only need to perform the function once per operation (e.g., authentication), and so the time required is negligible. However, a brute-force attack would likely need to perform the operation billions of times, at which point the time requirements become significant and, ideally, prohibitive. Previous password-based KDFs (such as the popular PBKDF2 from RSA Laboratories) have relatively low resource demands, meaning they do not require elaborate hardware or very much memory to perform. They are therefore easily and cheaply implemented in hardware (for instance on an ASIC or even an FPGA). This allows an attacker with sufficient resources to launch a large-scale parallel attack by building hundreds or even thousands of implementations of the algorithm in hardware and having each search a different subset of the key space. This divides the amount of time needed to complete a brute-force attack by the number of implementations available, very possibly bringing it down to a reasonable time frame. The scrypt function is designed to hinder such attempts by raising the resource demands of the algorithm.
Specifically, the algorithm is designed to use a large amount of memory compared to other password-based KDFs,[6] making the size and the cost of a hardware implementation much more expensive, and therefore limiting the amount of parallelism an attacker can use for a given amount of financial resources. The large memory requirements of scrypt come from a large vector of pseudorandom bit strings that are generated as part of the algorithm. Once the vector is generated, its elements are accessed in a pseudorandom order and combined to produce the derived key. A straightforward implementation would need to keep the entire vector in RAM so that it can be accessed as needed. Because the elements of the vector are generated algorithmically, each element could be generated on the fly as needed, storing only one element in memory at a time and therefore cutting the memory requirements significantly. However, the generation of each element is intended to be computationally expensive, and the elements are expected to be accessed many times throughout the execution of the function. Thus there is a significant trade-off in speed to get rid of the large memory requirements. This sort of time–memory trade-off often exists in computer algorithms: speed can be increased at the cost of using more memory, or memory requirements decreased at the cost of performing more operations and taking longer. The idea behind scrypt is to deliberately make this trade-off costly in either direction. Thus an attacker could use an implementation that doesn't require many resources (and can therefore be massively parallelized with limited expense) but runs very slowly, or use an implementation that runs more quickly but has very large memory requirements and is therefore more expensive to parallelize. The PBKDF2(P, S, c, dkLen) notation is defined in RFC 2898, where c is an iteration count. This notation is used by RFC 7914 for specifying a usage of PBKDF2 with c = 1.
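Python's hashlib exposes scrypt with the cost parameters discussed above; a minimal sketch (the password and salt are placeholders, and the parameter values are illustrative, roughly the interactive-login settings suggested in RFC 7914):

```python
import hashlib
import os

password = b"correct horse battery staple"  # placeholder
salt = os.urandom(16)

key = hashlib.scrypt(
    password,
    salt=salt,
    n=2**14,        # CPU/memory cost: length of the pseudorandom vector
    r=8,            # block size parameter
    p=1,            # parallelization parameter
    maxmem=2**26,   # allow the ~16 MiB this (n, r) choice requires
    dklen=32,
)
print(key.hex())
```

Raising n grows the vector that must be held in RAM (roughly 128 · r · n bytes), which is exactly the memory-hardness knob the design relies on.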
RFC 7914 defines Integerify(X) as the result of interpreting the last 64 bytes of X as a little-endian integer A1. Since Iterations equals 2 to the power of N, only the first Ceiling(N / 8) bytes among the last 64 bytes of X, interpreted as a little-endian integer A2, are actually needed to compute Integerify(X) mod Iterations = A1 mod Iterations = A2 mod Iterations. Salsa20/8 is the 8-round version of Salsa20. Scrypt is used in many cryptocurrencies as a proof-of-work algorithm (more precisely, as the hash function in the Hashcash proof-of-work algorithm). It was first implemented for Tenebrix (released in September 2011) and served as the basis for Litecoin and Dogecoin, which also adopted its scrypt algorithm.[7][8] Mining of cryptocurrencies that use scrypt is often performed on graphics processing units (GPUs), since GPUs tend to have significantly more processing power (for some algorithms) compared to the CPU.[9] This led to shortages of high-end GPUs due to the rising price of these currencies in the months of November and December 2013.[10] The scrypt utility was written in May 2009 by Colin Percival as a demonstration of the scrypt key derivation function.[2][3] It is available in most Linux and BSD distributions.
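The equivalence of A1 and A2 modulo the iteration count is easy to check directly, since any byte beyond the first Ceiling(N / 8) contributes a multiple of 2^N. A sketch (the input bytes are arbitrary stand-ins for a scrypt block):

```python
import os

def integerify(X: bytes) -> int:
    # A1: the last 64 bytes of X as a little-endian integer.
    return int.from_bytes(X[-64:], "little")

def integerify_truncated(X: bytes, N: int) -> int:
    # A2: only the first Ceiling(N / 8) of those last 64 bytes.
    needed = -(-N // 8)  # ceiling division
    return int.from_bytes(X[-64:][:needed], "little")

N = 14                    # Iterations = 2**N
iterations = 2 ** N
X = os.urandom(128)       # arbitrary stand-in for a scrypt block

assert integerify(X) % iterations == integerify_truncated(X, N) % iterations
```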
https://en.wikipedia.org/wiki/Scrypt
Argon2 is a key derivation function that was selected as the winner of the 2015 Password Hashing Competition.[1][2] It was designed by Alex Biryukov, Daniel Dinu, and Dmitry Khovratovich from the University of Luxembourg.[3] The reference implementation of Argon2 is released under a Creative Commons CC0 license (i.e. public domain) or the Apache License 2.0, and provides three related versions: All three modes allow specification by three parameters that control: While there is no public cryptanalysis applicable to Argon2d, there are two published attacks on the Argon2i function. The first attack is applicable only to the old version of Argon2i, while the second has been extended to the latest version (1.3).[5] The first attack shows that it is possible to compute a single-pass Argon2i function using between a quarter and a fifth of the desired space with no time penalty, and compute a multiple-pass Argon2i using only N/e (≈ N/2.72) space with no time penalty.[6] According to the Argon2 authors, this attack vector was fixed in version 1.3.[7] The second attack shows that Argon2i can be computed by an algorithm which has complexity O(n^(7/4) log(n)) for all choices of parameters σ (space cost), τ (time cost), and thread-count such that n = σ·τ.[8] The Argon2 authors claim that this attack is not efficient if Argon2i is used with three or more passes.[7] However, Joël Alwen and Jeremiah Blocki improved the attack and showed that in order for the attack to fail, Argon2i v1.3 needs more than 10 passes over memory.[5] To address these concerns, RFC 9106 recommends using Argon2id to largely mitigate such attacks.[9] Source:[4] Argon2 makes use of a hash function capable of producing digests up to 2^32 bytes long. This hash function is internally built upon BLAKE2.
As of May 2023, OWASP's Password Storage Cheat Sheet recommends that people "use Argon2id with a minimum configuration of 19 MiB of memory, an iteration count of 2, and 1 degree of parallelism."[10] OWASP recommends that Argon2id should be preferred over Argon2d and Argon2i because it provides a balanced resistance to both GPU-based attacks and side-channel attacks.[10] OWASP further notes that the following Argon2id options provide equivalent cryptographic strength and simply trade off memory usage for compute workload:[10]
https://en.wikipedia.org/wiki/Argon2
An anti-keylogger (or anti–keystroke logger) is a type of software specifically designed for the detection of keystroke logger software; often, such software will also incorporate the ability to delete or at least immobilize hidden keystroke logger software on a computer. In comparison to most anti-virus or anti-spyware software, the primary difference is that an anti-keylogger does not make a distinction between a legitimate keystroke-logging program and an illegitimate keystroke-logging program (such as malware); all keystroke-logging programs are flagged and optionally removed, whether they appear to be legitimate keystroke-logging software or not. The anti-keylogger is efficient in managing malicious users: it can detect keyloggers and terminate them from the system.[1] Keyloggers are sometimes part of malware packages downloaded onto computers without the owners' knowledge. Detecting the presence of a keylogger on a computer can be difficult. So-called anti-keylogging programs have been developed to thwart keylogging systems, and these are often effective when used properly. Anti-keyloggers are used both by large organizations and by individuals in order to scan for and remove (or in some cases simply immobilize) keystroke-logging software on a computer. Software developers generally advise that anti-keylogging scans be run on a regular basis in order to reduce the amount of time during which a keylogger may record keystrokes. For example, if a system is scanned once every three days, there is a maximum of only three days during which a keylogger could be hidden on the system and recording keystrokes.
Public computers are extremely susceptible to the installation of keystroke-logging software and hardware, and there are documented instances of this occurring.[2] Public computers are particularly susceptible to keyloggers because any number of people can gain access to the machine and install both a hardware keylogger and a software keylogger, either or both of which can be secretly installed in a matter of minutes.[3] Anti-keyloggers are often used on a daily basis to ensure that public computers are not infected with keyloggers, and are safe for public use. Keyloggers have been prevalent in the online gaming industry, being used to secretly record a gamer's access credentials (user name and password) when logging into an account; this information is sent back to the hacker, who can sign in to the account later and change the password, thus stealing it. World of Warcraft has been of particular importance to game hackers and has been the target of numerous keylogging viruses. Anti-keyloggers are used by many World of Warcraft and other gaming community members in order to try to keep their gaming accounts secure. Financial institutions have become the target of keyloggers,[4] particularly those institutions which do not use advanced security features such as PIN pads or screen keyboards.[5] Anti-keyloggers are used to run regular scans of any computer on which banking or client information is accessed, protecting passwords, banking information, and credit card numbers from identity thieves. The most common use of an anti-keylogger is by individuals wishing to protect their privacy while using their computer; uses range from protecting financial information used in online banking, any passwords, and personal communication, to virtually any other information which may be typed into a computer.
Keyloggers are often installed by people known to the computer's owner, and many times have been installed by an ex-partner hoping to spy on their ex-partner's activities, particularly chat.[6] Signature-based anti-keyloggers maintain a signature base, that is, strategic information that helps to uniquely identify a keylogger; the list contains as many known keyloggers as possible. Some vendors make an effort to keep an up-to-date listing available for download by customers. Each time a 'System Scan' is run, this software compares the contents of the hard disk drive, item by item, against the list, looking for any matches. This type of software is rather widespread, but it has its own drawbacks. The biggest drawback of signature-based anti-keyloggers is that one can only be protected from the keyloggers found on the signature-base list, thus staying vulnerable to unknown or unrecognized keyloggers. A criminal can download one of many well-known keyloggers, change it just enough, and the anti-keylogger won't recognize it. Heuristic-based anti-keyloggers don't use signature bases; they use a checklist of known features, attributes, and methods that keyloggers are known to use. Such software analyzes the methods of work of all the modules in a PC, thus blocking the activity of any module that is similar to the work of keyloggers. Though this method gives better keylogging protection than signature-based anti-keyloggers, it has its own drawbacks. One of them is that this type of software blocks non-keyloggers also. Several 'non-harmful' software modules, either part of the operating system or part of legitimate apps, use processes which keyloggers also use, which can trigger a false positive. Usually all non-signature-based anti-keyloggers give the user the option to unblock selected modules, but this can cause difficulties for inexperienced users who are unable to discern good modules from bad modules when manually choosing to block or unblock.
https://en.wikipedia.org/wiki/Anti-keylogger
In cryptography, black-bag cryptanalysis is a euphemism for the acquisition of cryptographic secrets via burglary or other covert means, rather than mathematical or technical cryptanalytic attack. The term refers to the black bag of equipment that a burglar would carry, or a black bag operation. As with rubber-hose cryptanalysis, this is technically not a form of cryptanalysis; the term is used sardonically. However, given the free availability of very high strength cryptographic systems, this type of attack is a much more serious threat to most users than mathematical attacks, because it is often much easier to attempt to circumvent cryptographic systems (e.g. steal the password) than to attack them directly. Regardless of the technique used, such methods are intended to capture highly sensitive information, e.g. cryptographic keys, key-rings, passwords or unencrypted plaintext. The required information is usually copied without removing or destroying it, so capture often takes place without the victim realizing it has occurred. In addition to burglary, the covert means might include the installation of keystroke logging[1] or trojan horse software or hardware installed on (or near to) target computers or ancillary devices. It is even possible to monitor the electromagnetic emissions of computer displays or keyboards[2][3] from a distance of 20 metres (or more), and thereby decode what has been typed. This could be done by surveillance technicians, or via some form of bug concealed somewhere in the room.[4] Although sophisticated technology is often used, black-bag cryptanalysis can also be as simple as copying a password which someone has unwisely written down on a piece of paper and left inside their desk drawer. The case of United States v. Scarfo highlighted one instance in which FBI agents using a sneak and peek warrant placed a keystroke logger on the computer of an alleged criminal gang leader.[5]
https://en.wikipedia.org/wiki/Black-bag_cryptanalysis
Computer and network surveillance is the monitoring of computer activity and data stored locally on a computer, or of data being transferred over computer networks such as the Internet. This monitoring is often carried out covertly and may be conducted by governments, corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent government agency. Computer and network surveillance programs are widespread today, and almost all Internet traffic can be monitored.[1] Surveillance allows governments and other agencies to maintain social control, recognize and monitor threats or any suspicious or abnormal activity,[2] and prevent and investigate criminal activities. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.[3] Many civil rights and privacy groups, such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union, have expressed concern that increasing surveillance of citizens will result in a mass surveillance society, with limited political and/or personal freedoms. Such fears have led to numerous lawsuits, such as Hepting v. AT&T.[3][4] The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".[5][6] The vast majority of computer surveillance involves the monitoring of personal data and traffic on the Internet.[7] For example, in the United States, the Communications Assistance For Law Enforcement Act mandates that all phone calls and broadband internet traffic (emails, web traffic, instant messaging, etc.)
be available for unimpeded, real-time monitoring by federal law enforcement agencies.[8][9][10] Packet capture (also known as "packet sniffing") is the monitoring of data traffic on a network.[11] Data sent between computers over the Internet or between any networks takes the form of small chunks called packets, which are routed to their destination and assembled back into a complete message. A packet capture appliance intercepts these packets so that they may be examined and analyzed. Computer technology is needed to perform traffic analysis and sift through intercepted data to look for important or useful information. Under the Communications Assistance For Law Enforcement Act, all U.S. telecommunications providers are required to install such packet capture technology so that federal law enforcement and intelligence agencies are able to intercept all of their customers' broadband Internet and voice over Internet protocol (VoIP) traffic. These technologies can be used both by intelligence agencies and for illegal activities.[12] There is far too much data gathered by these packet sniffers for human investigators to search through manually. Thus, automated Internet surveillance computers sift through the vast amount of intercepted Internet traffic, filtering out and reporting to investigators those bits of information which are "interesting", for example the use of certain words or phrases, visits to certain types of web sites, or communication via email or chat with a certain individual or group.[13] Billions of dollars per year are spent by agencies such as the Information Awareness Office, the NSA, and the FBI for the development, purchase, implementation, and operation of systems which intercept and analyze this data, extracting only the information that is useful to law enforcement and intelligence agencies.[14] Similar systems are now used by Iranian security services to more easily distinguish between peaceful citizens and terrorists.
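To make the packet-capture idea above concrete, here is a minimal, illustrative sketch of the parsing step such an appliance performs: decoding the fixed 20-byte IPv4 header of a captured packet into its source, destination, and protocol fields. The example is self-contained (the "captured" packet is hand-built in the script) and is not taken from any real capture tool.

```python
import socket
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of a captured packet."""
    (version_ihl, _tos, total_len, _ident, _flags_frag,
     ttl, proto, _checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,  # IHL field counts 32-bit words
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-built header for demonstration: a TCP packet from 192.168.0.2 to 10.0.0.1
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("192.168.0.2"), socket.inet_aton("10.0.0.1"))
info = parse_ipv4_header(hdr)
print(info["src"], "->", info["dst"], "protocol", info["protocol"])
```

A real capture system would obtain the raw bytes from a network tap or raw socket (which requires elevated privileges) and go on to parse the TCP/UDP layer; the header decoding itself proceeds exactly as above.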
All of this technology has allegedly been installed by the German Siemens AG and the Finnish Nokia.[15] With its rapid development, the Internet has become a primary form of communication, and more people are potentially subject to Internet surveillance. Network monitoring has both advantages and disadvantages. For instance, systems described as "Web 2.0"[16] have greatly impacted modern society. Tim O'Reilly, who first explained the concept of "Web 2.0",[16] stated that Web 2.0 provides communication platforms that are "user generated", with self-produced content, motivating more people to communicate with friends online.[17] However, Internet surveillance also has a disadvantage. One researcher from Uppsala University said, "Web 2.0 surveillance is directed at large user groups who help to hegemonically produce and reproduce surveillance by providing user-generated (self-produced) content. We can characterize Web 2.0 surveillance as mass self-surveillance".[18] Surveillance companies monitor people while they are focused on work or entertainment. Employers themselves also monitor their employees, in order to protect the company's assets, to control public communications and, most importantly, to make sure that their employees are actively working and being productive.[19] Such monitoring can affect people emotionally, for example by provoking jealousy. One research group states "...we set out to test the prediction that feelings of jealousy lead to 'creeping' on a partner through Facebook, and that women are particularly likely to engage in partner monitoring in response to jealousy".[20] The study shows that women can become jealous of other people when they are in an online group. Virtual assistants have become socially integrated into many people's lives.
Currently, virtual assistants such as Amazon's Alexa or Apple's Siri cannot call 911 or local services.[21] They are constantly listening for commands and recording parts of conversations that will help improve their algorithms. If law enforcement could be called using a virtual assistant, it would then be able to access all the information saved on the device.[21] Because the device is connected to the home's internet, law enforcement would also know the exact location of the individual calling.[21] While virtual assistant devices are popular, many debate the lack of privacy: the devices listen to every conversation the owner is having. Even if the owner is not talking to the virtual assistant, the device is still listening to the conversation in hopes that the owner will need assistance, as well as to gather data.[22] Corporate surveillance of computer activity is very common. The data collected is most often used for marketing purposes or sold to other corporations, but it is also regularly shared with government agencies. It can be used as a form of business intelligence, which enables the corporation to better tailor its products and/or services to its customers. The data can also be sold to other corporations for the same purpose, or it can be used for direct marketing, such as targeted advertisements, where ads are targeted to the user of a search engine by analyzing their search history and emails[23] (if they use free webmail services), which are kept in a database.[24] This type of surveillance is also used to establish business purposes for monitoring. The second component of prevention is determining the ownership of technology resources: the ownership of the firm's networks, servers, computers, files, and e-mail should be explicitly stated.
There should be a distinction between an employee's personal electronic devices, which should be limited and proscribed, and those owned by the firm. For instance, Google Search stores identifying information for each web search: an IP address and the search phrase used are stored in a database for up to 18 months.[25] Google also scans the content of emails of users of its Gmail webmail service in order to create targeted advertising based on what people are talking about in their personal email correspondence.[26] Google is by far the largest Internet advertising agency; millions of sites place Google's advertising banners and links on their websites in order to earn money from visitors who click on the ads. Each page containing Google advertisements adds, reads, and modifies "cookies" on each visitor's computer.[27] These cookies track the user across all of these sites and gather information about their web surfing habits, keeping track of which sites they visit and what they do on those sites. This information, along with information from their email accounts and search engine histories, is stored by Google and used to build a profile of the user to deliver better-targeted advertising.[26] The United States government often gains access to these databases, either by producing a warrant for them or simply by asking. The Department of Homeland Security has openly stated that it uses data collected from consumer credit and direct marketing agencies to augment the profiles of individuals it is monitoring.[24] In addition to monitoring information sent over a computer network, there is also a way to examine data stored on a computer's hard drive, and to monitor the activities of a person using the computer.
A surveillance program installed on a computer can search the contents of the hard drive for suspicious data, monitor computer use, collect passwords, and/or report activities in real time to its operator through the Internet connection.[28] A keylogger is an example of this type of program. Normal keylogging programs store their data on the local hard drive, but some are programmed to automatically transmit data over the network to a remote computer or web server. There are multiple ways of installing such software. The most common is remote installation, using a backdoor created by a computer virus or trojan. This tactic has the advantage of potentially subjecting multiple computers to surveillance. Viruses often spread to thousands or millions of computers and leave "backdoors" which are accessible over a network connection and enable an intruder to remotely install software and execute commands. These viruses and trojans are sometimes developed by government agencies, as with CIPAV and Magic Lantern. More often, however, viruses created by other people or spyware installed by marketing agencies can be used to gain access through the security breaches that they create.[29] Another method is "cracking" into the computer to gain access over a network. An attacker can then install surveillance software remotely. Servers and computers with permanent broadband connections are most vulnerable to this type of attack.[30] Another source of security cracking is employees giving out information, or attackers using brute-force tactics to guess a user's password.[31] One can also physically place surveillance software on a computer by gaining entry to the place where the computer is stored and installing it from a compact disc, floppy disk, or thumb drive.
This method shares a disadvantage with hardware devices in that it requires physical access to the computer.[32] One well-known worm that uses this method of spreading itself is Stuxnet.[33] One common form of surveillance is to create maps of social networks based on data from social networking sites as well as from traffic analysis information from phone call records such as those in the NSA call database,[34] and internet traffic data gathered under CALEA. These social network "maps" are then data mined to extract useful information such as personal interests, friendships and affiliations, wants, beliefs, thoughts, and activities.[35][36][37] Many U.S. government agencies such as the Defense Advanced Research Projects Agency (DARPA), the National Security Agency (NSA), and the Department of Homeland Security (DHS) are currently investing heavily in research involving social network analysis.[38][39] The intelligence community believes that the biggest threat to the U.S. comes from decentralized, leaderless, geographically dispersed groups. These types of threats are most easily countered by finding important nodes in the network and removing them. To do this requires a detailed map of the network.[37][40] Jason Ethier of Northeastern University, in his study of modern social network analysis, said the following of the Scalable Social Network Analysis Program developed by the Information Awareness Office: The purpose of the SSNA algorithms program is to extend techniques of social network analysis to assist with distinguishing potential terrorist cells from legitimate groups of people ... In order to be successful SSNA will require information on the social interactions of the majority of people around the globe. Since the Defense Department cannot easily distinguish between peaceful citizens and terrorists, it will be necessary for them to gather data on innocent civilians as well as on potential terrorists.
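The idea of "finding important nodes" in a communication graph can be illustrated with the simplest centrality measure: degree centrality, which ranks each node by its number of direct connections. This is only a toy sketch of one technique in the social-network-analysis family described above; the node names and call records are invented for the example.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Rank nodes by how many direct connections they have.

    In a communication graph, high-degree nodes are candidates for the
    'important nodes' that network analysts look for.
    """
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical call-record edges: (caller, callee) pairs
calls = [("alice", "bob"), ("alice", "carol"), ("alice", "dave"),
         ("bob", "carol"), ("eve", "dave")]
ranking = degree_centrality(calls)
print(ranking[0])  # alice, with three direct connections
```

Real systems use richer measures (betweenness, eigenvector centrality) over far larger graphs, but the underlying question, "which node holds the network together?", is the same.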
With only commercially available equipment, it has been shown that it is possible to monitor computers from a distance by detecting the radiation emitted by the CRT monitor. This form of computer surveillance, known as TEMPEST, involves reading electromagnetic emanations from computing devices in order to extract data from them at distances of hundreds of meters.[41][42][43] IBM researchers have also found that, for most computer keyboards, each key emits a slightly different noise when pressed. The differences are individually identifiable under some conditions, so it is possible to log keystrokes without actually requiring logging software to run on the associated computer.[44][45] In 2015, lawmakers in California passed a law prohibiting any investigative personnel in the state from forcing businesses to hand over digital communications without a warrant, known as the Electronic Communications Privacy Act.[46] At the same time, California state senator Jerry Hill introduced a bill requiring law enforcement agencies to disclose more information on their usage of, and the information obtained from, the Stingray phone tracker device.[46] When the law took effect in January 2016, it required cities to operate under new guidelines on how and when law enforcement use this device.[46] Some legislators and public officials had disagreed with this technology because of the warrantless tracking, but now, if a city wants to use the device, it must first be considered at a public hearing.[46] Some jurisdictions, such as Santa Clara County, have pulled out of using the StingRay.
It has also been shown, by Adi Shamir et al., that even the high-frequency noise emitted by a CPU includes information about the instructions being executed.[47] In German-speaking countries, spyware used or made by the government is sometimes called govware.[48] Some countries, like Switzerland and Germany, have a legal framework governing the use of such software.[49][50] Known examples include the Swiss MiniPanzer and MegaPanzer and the German R2D2 (trojan). Policeware is software designed to police citizens by monitoring their discussions and interactions.[51] Within the U.S., Carnivore was the first incarnation of secretly installed e-mail monitoring software, installed in Internet service providers' networks to log computer communication, including transmitted e-mails.[52] Magic Lantern is another such application, this time running on a targeted computer in a trojan style and performing keystroke logging. CIPAV, deployed by the FBI, is a multi-purpose spyware/trojan. The Clipper Chip, formerly known as MYK-78, is a small hardware chip, designed in the 1990s, that the government can install into phones. It was intended to secure private communication and data by reading encoded voice messages and decoding them. The Clipper Chip was designed during the Clinton administration to "...protect personal safety and national security against a developing information anarchy that fosters criminals, terrorists and foreign foes."[53] The government portrayed it as the solution to the secret codes or cryptographic keys that the age of technology created. This raised public controversy, because the Clipper Chip was thought to be the next "Big Brother" tool. The proposal ultimately failed, even though there were many attempts to push the agenda.[54] The "Consumer Broadband and Digital Television Promotion Act" (CBDTPA) was a bill proposed in the United States Congress.
CBDTPA was known as the "Security Systems and Standards Certification Act" (SSSCA) while in draft form and was killed in committee in 2002. Had CBDTPA become law, it would have prohibited technology that could be used to read digital content under copyright (such as music, video, and e-books) without digital rights management (DRM) preventing access to this material without the permission of the copyright holder.[55] Surveillance and censorship are different. Surveillance can be performed without censorship, but it is harder to engage in censorship without some form of surveillance.[56] Even when surveillance does not lead directly to censorship, the widespread knowledge or belief that a person, their computer, or their use of the Internet is under surveillance can lead to self-censorship.[57] In March 2013, Reporters Without Borders issued a Special report on Internet surveillance that examines the use of technology that monitors online activity and intercepts electronic communication in order to arrest journalists, citizen-journalists, and dissidents. The report includes a list of "State Enemies of the Internet": Bahrain, China, Iran, Syria, and Vietnam, countries whose governments are involved in active, intrusive surveillance of news providers, resulting in grave violations of freedom of information and human rights. Computer and network surveillance is on the increase in these countries. The report also includes a second list of "Corporate Enemies of the Internet", including Amesys (France), Blue Coat Systems (U.S.), Gamma (UK and Germany), Hacking Team (Italy), and Trovicor (Germany), companies that sell products that are liable to be used by governments to violate human rights and freedom of information. Neither list is exhaustive, and both are likely to be expanded in the future.[58] Protection of sources is no longer just a matter of journalistic ethics.
Journalists should equip themselves with a "digital survival kit" if they are exchanging sensitive information online or storing it on a computer hard drive or mobile phone.[58][59] Individuals associated with high-profile rights organizations, dissident groups, protest groups, or reform groups are urged to take extra precautions to protect their online identities.[60] Countermeasures against surveillance vary based on the type of eavesdropping targeted. Electromagnetic eavesdropping, such as TEMPEST and its derivatives, often requires hardware shielding, such as Faraday cages, to block unintended emissions. To prevent interception of data in transit, encryption is a key defense. When properly implemented with end-to-end encryption, or while using tools such as Tor, and provided the device remains uncompromised and free from direct monitoring via electromagnetic analysis, audio recording, or similar methods, the content of communication is generally considered secure. For a number of years, numerous government initiatives have sought to weaken encryption or introduce backdoors for law enforcement access.[61] Privacy advocates and the broader technology industry strongly oppose these measures,[62] arguing that any backdoor would inevitably be discovered and exploited by malicious actors. Such vulnerabilities would endanger everyone's private data[63] while failing to hinder criminals, who could switch to alternative platforms or create their own encrypted systems. Surveillance remains effective even when encryption is correctly employed, by exploiting metadata that is often accessible to packet sniffers unless countermeasures are applied.[64] This includes DNS queries, IP addresses, phone numbers, URLs, timestamps, and communication durations, which can reveal significant information about user activity and interactions or associations with a person of interest.
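The metadata point can be made concrete with a small sketch: even when a message's payload is strongly encrypted, a passive observer on the network path can still record who talked to whom, when, and how much data changed hands. The field names and addresses below are invented for illustration; the ciphertext is a stand-in opaque byte string, not real encryption.

```python
import time
from dataclasses import dataclass

@dataclass
class ObservedFlow:
    """What a passive observer can still record when the payload is encrypted."""
    src_ip: str
    dst_ip: str
    dst_port: int
    timestamp: float
    payload_size: int  # ciphertext length roughly tracks plaintext length

def observe(src_ip: str, dst_ip: str, dst_port: int, ciphertext: bytes) -> ObservedFlow:
    # The observer never decrypts anything; it only notes the envelope.
    return ObservedFlow(src_ip, dst_ip, dst_port, time.time(), len(ciphertext))

# An opaque 1500-byte encrypted payload sent to an HTTPS port
flow = observe("203.0.113.5", "198.51.100.7", 443, b"\x8a" * 1500)
print(flow.src_ip, "->", f"{flow.dst_ip}:{flow.dst_port}", flow.payload_size, "bytes")
```

This is why metadata-minimizing tools such as Tor route traffic through relays: they obscure exactly the envelope fields this sketch records, not just the payload.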
https://en.wikipedia.org/wiki/Computer_surveillance
Digital footprint or digital shadow refers to one's unique set of traceable digital activities, actions, contributions, and communications manifested on the Internet or on digital devices.[1][2][3][4] Digital footprints can be classified as either passive or active. Passive footprints consist of a user's web-browsing activity and information stored as cookies. Active footprints are intentionally created by users to share information on websites or social media.[5] While the term usually applies to a person, a digital footprint can also refer to a business, organization, or corporation.[6] The use of a digital footprint has both positive and negative consequences. On one side, it is the subject of many privacy issues.[7] For example, without an individual's authorization, strangers can piece together information about that individual using only search engines. Social inequalities are exacerbated by the limited access afforded to marginalized communities.[8] Corporations are also able to produce customized ads based on browsing history. On the other hand, others can reap the benefits of their digital footprint by profiting from it as social media influencers. Furthermore, employers use a candidate's digital footprint for online vetting and assessing fit due to its reduced cost and accessibility.[citation needed] Between two equal candidates, a candidate with a positive digital footprint may have an advantage. As technology usage becomes more widespread, even children generate larger digital footprints, with potential positive and negative consequences such as for college admissions. Media and information literacy frameworks and educational efforts promote awareness of digital footprints as part of a citizen's digital privacy.[9] Since it is hard not to have a digital footprint, it is in one's best interest to create a positive one. Passive digital footprints are a data trail that an individual involuntarily leaves online.[10][11] They can be stored in various ways depending on the situation.
A footprint may be stored in an online database as a "hit" in an online environment. The footprint may track the user's IP address, when it was created, and where it came from, with the footprint later being analyzed. In an offline environment, administrators can access and view the machine's actions without seeing who performed them. Examples of passive digital footprints are apps that use geolocation, websites that download cookies onto your device, or browser history. Although passive digital footprints are inevitable, they can be lessened by deleting old accounts, using privacy settings (public or private accounts), and occasionally searching for yourself online to see the information left behind.[12] Active digital footprints are deliberate, as they consist of information posted or shared willingly. They can also be stored in a variety of ways depending on the situation. A digital footprint can be stored when a user logs into a site and makes a post or change; the registered name is connected to the edit in an online environment. Examples of active digital footprints include social media posts, video or image uploads, or changes to various websites.[11] Digital footprints are not a digital identity or passport, but the content and metadata collected impact internet privacy, trust, security, digital reputation, and recommendation. As the digital world expands and integrates with more aspects of life, ownership and rights concerning data become increasingly important. Digital footprints are controversial in that privacy and openness compete.[13] Scott McNealy, CEO of Sun Microsystems, said of privacy on the Internet in 1999, "Get over it".[14] The quote later became a commonly used phrase in discussing private data and what companies do with it.[15] Digital footprints are a privacy concern because they are a set of traceable actions, contributions, and ideas shared by users.
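The passive "hit" described above can be sketched very simply: each request to a web server leaves a log entry recording the visitor's IP address, a timestamp, and the page requested, without the visitor doing anything deliberate. The log line and regular expression below assume a common combined-log-style format and are illustrative, not taken from any particular server.

```python
import re

# Matches the leading fields of a typical web-server access-log line
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) \S+"'
)

def passive_footprint(log_line: str) -> dict:
    """Extract the passive trail (IP, time, page) from one access-log 'hit'."""
    m = LOG_PATTERN.match(log_line)
    return m.groupdict() if m else {}

line = '192.0.2.17 - - [10/Oct/2023:13:55:36 +0000] "GET /pricing HTTP/1.1" 200 512'
hit = passive_footprint(line)
print(hit["ip"], hit["when"], hit["path"])
```

Aggregated over time, such involuntary hits are exactly the kind of data trail an administrator or analyst can later mine to reconstruct a visitor's browsing behavior.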
They can be tracked and can allow other internet users to learn about a person's actions.[16] Interested parties use Internet footprints for several reasons, including cyber-vetting,[17] where interviewers research applicants based on their online activities. Internet footprints are also used by law enforcement agencies to provide information that would otherwise be unavailable due to a lack of probable cause.[18] Digital footprints are also used by marketers to find what products a user is interested in, or to inspire one's interest in a particular product based on similar interests.[19] Social networking systems may record the activities of individuals, with the data becoming a life stream. Such social media usage and roaming services allow digital tracing data to include individual interests, social groups, behaviors, and location. Such data is gathered from sensors within devices, and collected and analyzed without user awareness.[20] Many users who choose to share personal information about themselves through social media platforms, including places they visited, timelines, and their connections, are unaware of the privacy-setting choices and the security consequences associated with them.[21] Many social media sites, like Facebook, collect an extensive amount of information that can be used to piece together a user's personality. Information gathered from social media, such as the number of friends a user has, can predict whether the user has an introverted or extroverted personality.
Moreover, a survey of SNS users revealed that 87% identified their work or education level, 84% identified their full date of birth, 78% identified their location, and 23% listed their phone numbers.[21] While one's digital footprint may, without the individual's knowledge, allow inference of personal information such as demographic traits, sexual orientation, race, religious and political views, personality, or intelligence,[22] it also exposes the individual's private psychological sphere to the social sphere.[23] Lifelogging is an example of an indiscriminate collection of information concerning an individual's life and behavior.[24] There are actions one can take to make a digital footprint difficult to track.[25] Examples of the use or interpretation of data trails include Facebook-influenced creditworthiness ratings,[26] the judicial investigations around German social scientist Andrej Holm,[27] advertisement junk mail from the American company OfficeMax,[28] and the border incident involving Canadian citizen Ellen Richardson.[29] An increasing number of employers evaluate applicants during the hiring process by their digital footprint through their interaction on social media, due to its reduced cost and easy accessibility.[30] By using such resources, employers can gain more insight into candidates beyond their well-scripted interview responses and perfected resumes.[31] Candidates who display poor communication skills, use inappropriate language, or use drugs or alcohol are rated lower.[32] Conversely, a candidate with a professional or family-oriented social media presence receives higher ratings.[33] Employers also assess a candidate through their digital footprint to determine whether the candidate is a good cultural fit[34] for their organization.[35] If a candidate upholds an organization's values or shows existing passion for its mission, the candidate is more likely to integrate within the organization and could accomplish more than the average person.
Although these assessments are known not to be accurate predictors of performance or turnover rates,[36] employers still use digital footprints to evaluate their applicants. Thus, job seekers prefer to create a social media presence that would be viewed positively from a professional point of view. In some professions, maintaining a digital footprint is essential. People will search the internet for specific doctors and their reviews, and half of the search results for a particular physician link to third-party rating websites.[37] For this reason, prospective patients may unknowingly choose their physicians based on their digital footprint, in addition to online reviews. Furthermore, a generation now relies on social media for its livelihood, as influencers who make use of their digital footprint. These influencers have dedicated fan bases that may be eager to follow recommendations. As a result, marketers pay influencers to promote their products among their followers, since this medium may yield better returns than traditional advertising.[38][39] Consequently, one's career may be reliant on their digital footprint. Generation Alpha will not be the first generation born into the internet world. As such, a child's digital footprint is becoming more significant than ever before, and its consequences may be unclear. Out of parenting enthusiasm, an increasing number of parents create social media accounts for their children at a young age, sometimes even before they are born.[40] Parents may post up to 13,000 photos of a child's everyday life and birthday celebrations on social media before the child's teen years.[41] Furthermore, these children are predicted to post 70,000 times online on their own by age 18.[41] The advent of posting on social media creates many opportunities to gather data from minors.
Since an identity's basic components comprise a name, birth date, and address, these children are susceptible to identity theft.[42] While parents may assume that privacy settings prevent children's photos and data from being exposed, they also have to trust that their followers will not be compromised. Outsiders may take the images to pose as these children's parents or post the content publicly.[43] For example, during the Facebook-Cambridge Analytica data scandal, friends of friends leaked data to data miners. Due to a child's presence on social media, their privacy may be at risk. Some professionals argue that young people entering the workforce should consider the effect of their digital footprint on their marketability and professionalism.[44] Having a digital footprint may be very good for students, as college admissions staff and potential employers may decide to research prospective students' and employees' online profiles, with an enormous impact on the students' futures.[44] Teens will be set up for more success if they consider the kind of impression they are making and how it can affect their future. By contrast, someone who acts apathetic towards the impression they make online may struggle if they one day choose to attend college or enter the workforce.[45] Teens who plan to receive a higher education will have their digital footprint reviewed and assessed as part of the application process.[46] Moreover, teens who intend to pursue higher education with financial aid and scholarships should consider that their digital footprint will also be evaluated in the scholarship application process.[47] Digital footprints may reinforce existing social inequalities.
In a conceptual overview of this topic, researchers argue that both actively and passively generated digital footprints represent a new dimension of digital inequality, with marginalized groups systematically disadvantaged in terms of online visibility and opportunity.[48] Corporations and governments increasingly rely on algorithms that use digital footprints to automate decisions across areas like employment, credit, and public services, amplifying existing social inequalities.[48] Because marginalized groups often have less extensive or lower-quality digital footprints, they are at greater risk of being misrepresented, excluded, or disadvantaged by these algorithmic processes.[48] Examples of low-quality digital footprints include a lack of data in online databases that track credit scores, legal history, or medical history.[48] People from higher socio-economic backgrounds are more likely to leave favorable or carefully curated digital footprints that enable accelerated access to critical services, financial assistance, and jobs.[48] An example of digital inequality is access to essential e-government services.
In the United Kingdom, individuals lacking a sufficient digital footprint face challenges in verifying their identities.[49] This creates new barriers to services such as public housing and healthcare, producing a "double disadvantage".[49] The double disadvantage compounds existing problems of digital access: people excluded from digital life lack both access and the digital reputation required to navigate public systems.[49] Communities with private or open access to technology and digital education from an early age will have greater access to government e-services.[49]

The United Nations International Children's Emergency Fund's (UNICEF) State of the World's Children 2017 report highlights how digital footprints are linked to broader issues of equity, inclusion, and safety, emphasizing that marginalized communities experience greater risks in digital environments.[50]

Media and information literacy (MIL) encompasses the knowledge and skills necessary to access, evaluate, and create information across different media platforms.[51] Understanding and managing one's digital footprint is increasingly recognized as a core component of MIL. Scholars suggest that digital footprint literacy falls under privacy literacy, which refers to the ability to critically manage and protect personal information in online environments.[52] Studies indicate that disparities in MIL access across countries and socio-demographic groups contribute to uneven abilities to manage digital footprints safely.[51]

Organizations like UNESCO and UNICEF advocate for integrating MIL frameworks into formal education systems as a way to mitigate digital inequalities.[51][53] However, there remains a notable lack of standardized MIL curricula globally, particularly concerning privacy literacy and digital footprint management.
In response to these gaps, researchers in 2022 developed the "5Ds of Privacy Literacy" educational framework, which emphasizes teaching students to "define, describe, discern, determine, and decide" appropriate information flows based on context.[9] Grounded in sociocultural learning theory, the 5Ds encourage students to make privacy decisions thoughtfully, rather than simply adhering to universal rules.[9] Under sociocultural learning theory, students learn privacy skills not just by memorizing rules, but by actively engaging with real-world social situations, discussing them with others, and practicing decisions in authentic, contextualized settings.

This framework highlights that part of digital footprint literacy is awareness of how our behaviors are tracked online. Companies can infer demographic attributes such as age, gender, and political orientation without explicit disclosure.[54] This is often done without users' awareness.[54] Educating students about these practices aims to promote critical thinking about personal data trails. Another part of digital footprint literacy is the ability to critically assess one's own digital footprint. Initiatives like Australia's "Best Footprint Forward" program have implemented digital footprint education using real-world examples to teach critical self-assessment of online presence.[55] Similarly, the Connecticut State Department of Education recommends incorporating digital citizenship, internet safety, and media literacy into K–12 education standards.[56]
https://en.wikipedia.org/wiki/Digital_footprint
Hardware keyloggers are used for keystroke logging, a method of capturing and recording computer users' keystrokes, including sensitive passwords.[1] They can be implemented via BIOS-level firmware, or alternatively, via a device plugged inline between a computer keyboard and a computer. They log all keyboard activity to their internal memory.

Hardware keyloggers have an advantage over software keyloggers in that they can begin logging from the moment a computer is turned on (and are therefore able to intercept passwords for the BIOS or disk encryption software). All hardware keylogger devices have to have the following:

Generally, recorded data is retrieved by typing a special password into a computer text editor. The hardware keylogger, plugged in between the keyboard and the computer, detects that the password has been typed and then presents the computer with "typed" data to produce a menu. Beyond the text menu, some keyloggers offer a high-speed download to speed up retrieval of stored data; this can be via USB mass-storage enumeration or via a USB or serial download adapter. Typically, the memory capacity of a hardware keylogger ranges from a few kilobytes to several gigabytes, with each recorded keystroke typically consuming a byte of memory.

Denial or monitoring of physical access to sensitive computers, e.g. by closed-circuit video surveillance and access control, is the most effective means of preventing hardware keylogger installation. Visual inspection is the easiest way of detecting hardware keyloggers, but there are also software techniques that can detect most hardware keyloggers on the market. In cases in which the computer case is hidden from view (e.g. at some public access kiosks where the case is in a locked box and only a monitor, keyboard, and mouse are exposed to view) and the user has no possibility to run software checks, a user might thwart a keylogger by typing part of a password, using the mouse to move to a text editor or other window, typing some garbage text, mousing back to the password window, typing the next part of the password, and so on, so that the keylogger records an unintelligible mix of garbage and password text.[4] General keystroke logging countermeasures are also options against hardware keyloggers.

The main risk associated with keylogger use is that physical access is needed twice: initially to install the keylogger, and secondly to retrieve it. Thus, if the victim discovers the keylogger, they can then set up a sting operation to catch the person in the act of retrieving it. This could include camera surveillance or the review of access card swipe records to determine who gained physical access to the area during the time period in which the keylogger was removed.
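The retrieval mechanism described above (an inline device watching the passing keystroke stream for its special password before presenting its menu) can be modeled as a rolling-window match. The sketch below is a minimal illustration only; the trigger string and function name are hypothetical and not taken from any real product.

```python
from collections import deque

RETRIEVAL_PASSWORD = "kbdlog"  # hypothetical unlock sequence for the example

def watch_keystrokes(stream, trigger=RETRIEVAL_PASSWORD):
    """Return the index at which the trigger finished being typed, else -1.

    Models how an inline keylogger watches the keystroke stream passing
    through it for its retrieval password.
    """
    window = deque(maxlen=len(trigger))  # last len(trigger) keys seen
    for i, key in enumerate(stream):
        window.append(key)
        if "".join(window) == trigger:
            return i
    return -1

# The trigger can appear anywhere in normal typing:
hit = watch_keystrokes("hello kbdlog now")   # matched at index 11
miss = watch_keystrokes("no trigger here")   # -1
```

A real device does this in firmware on the raw scan codes, but the logic is the same fixed-size sliding comparison.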
https://en.wikipedia.org/wiki/Hardware_keylogger
A reverse connection is usually used to bypass firewall restrictions on open ports.[1] A firewall usually blocks incoming connections on closed ports, but does not block outgoing traffic. In a normal forward connection, a client connects to a server through the server's open port, but in the case of a reverse connection, the client opens the port that the server connects to.[2] The most common use of a reverse connection is to bypass firewall and router security restrictions.[3]

For example, a backdoor running on a computer behind a firewall that blocks incoming connections can easily open an outbound connection to a remote host on the Internet. Once the connection is established, the remote host can send commands to the backdoor. Remote administration tools (RATs) that use a reverse connection usually send SYN packets to the client's IP address. The client listens for these SYN packets and accepts the desired connections.

If a computer is sending SYN packets or is connected to the client's computer, the connections can be discovered by using the netstat command or a common port listener like "Active Ports". If the Internet connection is closed down and an application still tries to connect to remote hosts, it may be infected with malware. Keyloggers and other malicious programs are harder to detect once installed, because they connect only once per session. Note that SYN packets by themselves are not necessarily a cause for alarm, as they are a standard part of all TCP connections.

There are legitimate uses for reverse connections, for example to allow hosts behind a NAT firewall to be administered remotely. These hosts do not normally have public IP addresses, and so must either have ports forwarded at the firewall, or open reverse connections to a central administration server.
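The outbound "call home" pattern described above can be demonstrated with plain sockets. The sketch below is a minimal local illustration, assuming nothing beyond the Python standard library: the administration server listens, while the managed host makes only an outgoing connection, over which commands then flow back.

```python
import socket
import threading

# Administration server: listens and waits for the managed host to call home.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # any free port; a real server uses a fixed one
srv.listen(1)
port = srv.getsockname()[1]

def managed_host():
    # Host behind a NAT/firewall: it only ever makes an *outgoing* connection,
    # which a firewall that blocks inbound traffic will normally allow.
    sock = socket.create_connection(("127.0.0.1", port))
    if sock.recv(1024) == b"STATUS":   # command arrives over the link we opened
        sock.sendall(b"OK")
    sock.close()

t = threading.Thread(target=managed_host)
t.start()
conn, _ = srv.accept()       # the reverse connection comes in from the host
conn.sendall(b"STATUS")      # commands now flow over the host-initiated link
reply = conn.recv(1024)      # b"OK"
t.join()
conn.close()
srv.close()
```

The same topology serves both the backdoor case and the legitimate remote-administration case; only the intent differs.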
https://en.wikipedia.org/wiki/Reverse_connection
Session replay is the ability to replay a visitor's journey on a web site or within a mobile application or web application. Replay can include the user's view (browser or screen output), user input (keyboard and mouse inputs), and logs of network events or console logs. Session replay is supposed to help improve customer experience[1] and help identify obstacles in conversion processes on websites. However, it can also be used to study a website's usability, customer behavior, and the handling of customer service questions, as the customer journey, with all interactions, can be replayed. Some organizations also use this capability to analyse fraudulent behavior on websites. Some solutions augment session replay with advanced analytics that can identify segments of customers that are struggling to use the website.[2] This allows the replay capability to be used much more efficiently and reduces the need to replay other customer sessions unnecessarily.

There are generally two ways to capture and replay visitor sessions: client side and tag-free server side.

There are many tag-based solutions that offer video-like replay of a visitor's session. While replay is analogous to video, it is more accurately a reproduction of a specific user's experience detailing mouse movements, clicks, taps, and scrolls. The underlying data for the session recordings is captured by tagging pages. Some advanced tools are able to access the Document Object Model (DOM) directly and can play back most interactions within the DOM, including all mutations, with a high degree of accuracy. There are a number of tools that provide similar functions, with the advantage of being able to replay the entire client experience in a movie-like format. This approach can also deal with modern single-page applications.
The disadvantage is that the tracking script can easily be detected and blocked by any ad blocker, the use of which has become commonplace (615 million devices had active ad blocking in 2017).[3]

Server-side solutions capture all website traffic and replay every visitor interaction from every device, including all mobile users from any location. Sessions are replayed step-by-step, providing the ability to search, locate, and analyze aspects of a visitor's session, including clicks and form entry. Server-side solutions require hardware and software to be installed "on premises". An advantage of server-side recording is that the solution cannot be blocked. However, one will not be able to see a video-like replay of client-side activities such as scrolling and mouse movements, and this approach handles modern single-page applications poorly.

A hybrid approach combines the advantages without the weaknesses. The hybrid approach ensures that every session is recorded by server-side capturing (important for compliance) and enriched with client-side tracking data of mouse movements, clicks, scrolling, keystrokes, and user behavior (driven by customer experience insights). This approach works very well with modern single-page applications, providing a movie-like replay together with fully compliant capture. It can be deployed either "on premises" or as Software as a service (SaaS). All of the tools listed below are available as Software as a service (SaaS) solutions.
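The capture-and-replay cycle described above can be sketched as a toy event log. All class and field names below are illustrative, not any vendor's API: interactions are recorded as timestamped events, serialized (as real tools do when shipping batches to a collection endpoint), and stepped through again on replay.

```python
import json
import time

class SessionRecorder:
    """Toy model of client-side session capture: each user interaction is
    stored as a timestamped event that a replay tool can step through."""

    def __init__(self):
        self.events = []
        self.start = time.monotonic()

    def capture(self, kind, **detail):
        self.events.append({
            "t": round(time.monotonic() - self.start, 3),
            "kind": kind,            # e.g. "click", "scroll", "input"
            "detail": detail,
        })

    def export(self):
        # Real tools ship batches like this to a collection endpoint.
        return json.dumps(self.events)

def replay(serialized):
    """Step through a recorded session, event by event, in order."""
    return [(e["kind"], e["detail"]) for e in json.loads(serialized)]

rec = SessionRecorder()
rec.capture("click", selector="#buy-button")
rec.capture("scroll", y=480)
steps = replay(rec.export())
```

A real recorder additionally captures DOM snapshots and mutations so the replay can render what the visitor actually saw, not just the raw input events.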
https://en.wikipedia.org/wiki/Session_replay
Spyware (a portmanteau for spying software) is any malware that aims to gather information about a person or organization and send it to another entity in a way that harms the user by violating their privacy, endangering their device's security, or other means. This behavior may be present in other malware and in legitimate software. Websites may engage in spyware behaviors like web tracking. Hardware devices may also be affected.[1] Spyware is frequently associated with advertising and involves many of the same issues. Because these behaviors are so common, and can have non-harmful uses, providing a precise definition of spyware is a difficult task.[2]

The first recorded use of the term spyware occurred on October 16, 1995, in a Usenet post that poked fun at Microsoft's business model.[4] Spyware at first denoted software meant for espionage purposes. However, in early 2000 the founder of Zone Labs, Gregor Freund, used the term in a press release for the ZoneAlarm Personal Firewall.[5] Later in 2000, a parent using ZoneAlarm was alerted to the fact that Reader Rabbit, educational software marketed to children by the Mattel toy company, was surreptitiously sending data back to Mattel.[6] Since then, "spyware" has taken on its present sense.

According to a 2005 study by AOL and the National Cyber-Security Alliance, 61 percent of surveyed users' computers were infected with some form of spyware. 92 percent of surveyed users with spyware reported that they did not know of its presence, and 91 percent reported that they had not given permission for its installation.[7] As of 2006, spyware had become one of the preeminent security threats to computer systems running Microsoft Windows operating systems.
Computers on which Internet Explorer (IE) was the primary browser are particularly vulnerable to such attacks, not only because IE was the most widely used browser,[8] but also because its tight integration with Windows allows spyware access to crucial parts of the operating system.[8][9]

Before Internet Explorer 6 SP2 was released as part of Windows XP Service Pack 2, the browser would automatically display an installation window for any ActiveX component that a website wanted to install. The combination of user ignorance about these changes, and the assumption by Internet Explorer that all ActiveX components are benign, helped to spread spyware significantly. Many spyware components would also make use of exploits in JavaScript, Internet Explorer, and Windows to install without user knowledge or permission.

The Windows Registry contains multiple sections where modification of key values allows software to be executed automatically when the operating system boots. Spyware can exploit this design to circumvent attempts at removal. The spyware typically links itself to each location in the registry that allows execution. Once running, the spyware will periodically check whether any of these links have been removed; if so, they will be automatically restored. This ensures that the spyware will execute when the operating system is booted, even if some (or most) of the registry links are removed.

Spyware is mostly classified into four types: adware, system monitors, tracking (including web tracking), and trojans;[10] examples of other notorious types include digital rights management capabilities that "phone home", keyloggers, rootkits, and web beacons.
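The check-and-restore persistence trick described above can be modeled in a few lines. This is a conceptual simulation only: a plain dictionary stands in for the Windows Registry, and the autorun key paths are real locations but the payload name and path are made up for the example.

```python
# Conceptual model of registry-based persistence: the spyware registers
# itself under several autorun locations and periodically restores any
# entry a removal tool has deleted. A dict stands in for the registry.

AUTORUN_KEYS = [
    r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run",
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run",
]
PAYLOAD = ("UpdaterSvc", r"C:\spy\payload.exe")  # hypothetical entry

def restore_missing(registry):
    """Re-add the payload under every autorun key it was deleted from."""
    restored = 0
    for key in AUTORUN_KEYS:
        values = registry.setdefault(key, {})
        if PAYLOAD[0] not in values:
            values[PAYLOAD[0]] = PAYLOAD[1]
            restored += 1
    return restored

# Initially present under both keys:
registry = {k: {PAYLOAD[0]: PAYLOAD[1]} for k in AUTORUN_KEYS}
del registry[AUTORUN_KEYS[0]][PAYLOAD[0]]   # a removal tool deletes one copy
count = restore_missing(registry)           # the watchdog puts it back
```

This redundancy is why removal tools must terminate the watchdog process first (or run from safe mode) before cleaning the registry, as discussed later in the article.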
These four categories are not mutually exclusive, and they use similar tactics in attacking networks and devices.[11] The main goals are to install, hack into the network, avoid being detected, and safely remove themselves from the network.[11]

Spyware is mostly used to steal information and record Internet users' movements on the Web, and to serve up pop-up ads to Internet users.[12] Whenever spyware is used for malicious purposes, its presence is typically hidden from the user and can be difficult to detect. Some spyware, such as keyloggers, may be installed intentionally by the owner of a shared, corporate, or public computer in order to monitor users.

While the term spyware suggests software that monitors a user's computer, the functions of spyware can extend beyond simple monitoring. Spyware can collect almost any type of data, including personal information like internet surfing habits, user logins, and bank or credit account information. Spyware can also interfere with a user's control of a computer by installing additional software or redirecting web browsers.[13] Some spyware can change computer settings, which can result in slow Internet connection speeds, unauthorized changes in browser settings, or changes to software settings. Sometimes, spyware is included along with genuine software, and may come from a malicious website or may have been added to the intentional functionality of genuine software (see the paragraph about Facebook, below).

In response to the emergence of spyware, a small industry has sprung up dealing in anti-spyware software. Running anti-spyware software has become a widely recognized element of computer security practices, especially for computers running Microsoft Windows. A number of jurisdictions have passed anti-spyware laws, which usually target any software that is surreptitiously installed to control a user's computer.
In German-speaking countries, spyware used or made by the government is called govware by computer experts (in common parlance: Regierungstrojaner, literally "Government Trojan"). Govware is typically trojan horse software used to intercept communications from the target computer. Some countries, like Switzerland and Germany, have a legal framework governing the use of such software.[14][15] In the US, the term "policeware" has been used for similar purposes.[16]

Use of the term "spyware" has eventually declined as the practice of tracking users has been pushed ever further into the mainstream by major websites and data mining companies; these generally break no known laws and compel users to be tracked not by fraudulent practices per se, but by the default settings created for users and the language of terms-of-service agreements. In one documented example, CBS/CNET News reported on March 7, 2011, that an analysis in The Wall Street Journal had revealed the practice of Facebook and other websites of tracking users' browsing activity, linked to their identity, far beyond users' visits and activity on the Facebook site itself. The report stated: "Here's how it works. You go to Facebook, you log in, you spend some time there, and then ... you move on without logging out. Let's say the next site you go to is The New York Times. Those buttons, without you clicking on them, have just reported back to Facebook and Twitter that you went there and also your identity within those accounts. Let's say you moved on to something like a site about depression. This one also has a tweet button, a Google widget, and those, too, can report back who you are and that you went there." The Wall Street Journal analysis was researched by Brian Kennish, founder of Disconnect, Inc.[17]

Spyware does not necessarily spread in the same way as a virus or worm, because infected systems generally do not attempt to transmit or copy the software to other computers.
Instead, spyware installs itself on a system by deceiving the user or by exploiting software vulnerabilities. Most spyware is installed without the user's knowledge, or by using deceptive tactics. Spyware may try to deceive users by bundling itself with desirable software. Other common tactics include using a Trojan horse, or spy gadgets that look like normal devices but turn out to be something else, such as a USB keylogger. These devices are connected to the computer as memory units but are capable of recording each stroke made on the keyboard. Some spyware authors infect a system through security holes in the Web browser or in other software. When the user navigates to a Web page controlled by the spyware author, the page contains code which attacks the browser and forces the download and installation of spyware.

The installation of spyware frequently involves Internet Explorer. Its popularity and history of security issues have made it a frequent target. Its deep integration with the Windows environment makes it a conduit for attacks against the Windows operating system. Internet Explorer also serves as a point of attachment for spyware in the form of Browser Helper Objects, which modify the browser's behaviour.

A spyware program rarely operates alone on a computer; an affected machine usually has multiple infections. Users frequently notice unwanted behavior and degradation of system performance. A spyware infestation can create significant unwanted CPU activity, disk usage, and network traffic. Stability issues, such as applications freezing, failure to boot, and system-wide crashes, are also common. Usually this effect is intentional, but it may also arise simply because the malware requires large amounts of computing power, disk space, or network usage. Spyware that interferes with networking software commonly causes difficulty connecting to the Internet. In some infections, the spyware is not even evident.
Users assume in those situations that the performance issues relate to faulty hardware, Windows installation problems, or another malware infection. Some owners of badly infected systems resort to contacting technical support experts, or even buying a new computer because the existing system "has become too slow". Badly infected systems may require a clean reinstallation of all their software in order to return to full functionality.

Moreover, some types of spyware disable software firewalls and antivirus software, and/or reduce browser security settings, which opens the system to further opportunistic infections. Some spyware disables or even removes competing spyware programs, on the grounds that more spyware-related annoyances increase the likelihood that users will take action to remove the programs.[18]

Keyloggers are sometimes part of malware packages downloaded onto computers without the owners' knowledge. Some keylogger software is freely available on the internet, while other keyloggers are commercial or private applications. Most keyloggers not only capture keyboard keystrokes but are often also capable of collecting screen captures from the computer.

A typical Windows user has administrative privileges, mostly for convenience. Because of this, any program the user runs has unrestricted access to the system. As with other operating systems, Windows users are able to follow the principle of least privilege and use non-administrator accounts. Alternatively, they can reduce the privileges of specific vulnerable Internet-facing processes, such as Internet Explorer. Since Windows Vista, a computer administrator account runs everything under limited user privileges by default; when a program requires administrative privileges, a User Account Control pop-up prompts the user to allow or deny the action. This improves on the design used by previous versions of Windows. Spyware is also known as tracking software.
As the spyware threat has evolved, a number of techniques have emerged to counteract it. These include programs designed to remove or block spyware, as well as various user practices which reduce the chance of getting spyware on a system. Nonetheless, spyware remains a costly problem. When a large number of pieces of spyware have infected a Windows computer, the only remedy may involve backing up user data and fully reinstalling the operating system. For instance, some spyware cannot be completely removed with tools from Symantec, Microsoft, or PC Tools.

Many programmers and some commercial firms have released products designed to remove or block spyware. Programs such as PC Tools' Spyware Doctor, Lavasoft's Ad-Aware SE, and Patrick Kolla's Spybot - Search & Destroy rapidly gained popularity as tools to remove, and in some cases intercept, spyware programs. In December 2004, Microsoft acquired the GIANT AntiSpyware software,[19] re-branding it as Microsoft AntiSpyware (Beta 1) and releasing it as a free download for Genuine Windows XP and Windows 2003 users. In November 2005, it was renamed Windows Defender.[20][21]

Major anti-virus firms such as Symantec, PC Tools, McAfee, and Sophos have also added anti-spyware features to their existing anti-virus products. Early on, anti-virus firms expressed reluctance to add anti-spyware functions, citing lawsuits brought by spyware authors against the authors of web sites and programs which described their products as "spyware". However, recent versions of these major firms' home and business anti-virus products do include anti-spyware functions, albeit treated differently from viruses. Symantec Anti-Virus, for instance, categorizes spyware programs as "extended threats" and now offers real-time protection against these threats.
Other anti-spyware tools include FlexiSPY, Mobilespy, mSPY, TheWiSPY, and UMobix.[22]

Anti-spyware programs can combat spyware in two ways: by detecting and removing spyware already present on a system, or by providing real-time protection against its installation. Such programs inspect the contents of the Windows registry, operating system files, and installed programs, and remove files and entries which match a list of known spyware. Real-time protection from spyware works identically to real-time anti-virus protection: the software scans disk files at download time and blocks the activity of components known to represent spyware. In some cases, it may also intercept attempts to install start-up items or to modify browser settings. Earlier versions of anti-spyware programs focused chiefly on detection and removal. Javacool Software's SpywareBlaster, one of the first to offer real-time protection, blocked the installation of ActiveX-based spyware.

Like most anti-virus software, many anti-spyware/adware tools require a frequently updated database of threats. As new spyware programs are released, anti-spyware developers discover and evaluate them, adding them to the list of known spyware, which allows the software to detect and remove new spyware. As a result, anti-spyware software is of limited usefulness without regular updates. Updates may be installed automatically or manually.

A popular generic spyware removal tool used by those with a certain degree of expertise is HijackThis, which scans certain areas of the Windows OS where spyware often resides and presents a list of items to delete manually. As most of the items are legitimate Windows files/registry entries, it is advised that those who are less knowledgeable on the subject post a HijackThis log on one of the numerous antispyware sites and let the experts decide what to delete.

If a spyware program is not blocked and manages to get itself installed, it may resist attempts to terminate or uninstall it.
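The detection-and-removal mode described above boils down to matching system contents against a database of known spyware. The sketch below is deliberately simplified: real scanners match file hashes and byte signatures rather than bare file names, and the file names in this example database are invented (though Gator and CoolWebSearch are real spyware families mentioned elsewhere in this article).

```python
KNOWN_SPYWARE = {                      # illustrative signature database
    "gator.exe": "Gator",
    "cwsearch_payload.dll": "CoolWebSearch",
}

def scan(paths):
    """Split a list of file paths into (detections, clean) by matching
    file names against the known-spyware list, the way a definition-based
    scanner flags known components for removal."""
    detections, clean = [], []
    for path in paths:
        name = path.rsplit("/", 1)[-1].lower()
        if name in KNOWN_SPYWARE:
            detections.append((path, KNOWN_SPYWARE[name]))
        else:
            clean.append(path)
    return detections, clean

found, ok = scan(["C:/Windows/gator.exe", "C:/Windows/notepad.exe"])
```

This structure also shows why regular definition updates matter: a sample absent from `KNOWN_SPYWARE` passes through as clean.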
Some programs work in pairs: when an anti-spyware scanner (or the user) terminates one running process, the other one respawns the killed program. Likewise, some spyware will detect attempts to remove registry keys and immediately add them again. Usually, booting the infected computer in safe mode gives an anti-spyware program a better chance of removing persistent spyware. Killing the process tree may also work.

To counter spyware, computer users have found several practices useful in addition to installing anti-spyware programs. Many users have installed a web browser other than Internet Explorer, such as Mozilla Firefox or Google Chrome. Though no browser is completely safe, Internet Explorer was once at a greater risk for spyware infection due to its large user base as well as vulnerabilities such as ActiveX, but these three major browsers are now close to equivalent when it comes to security.[23][24]

Some ISPs—particularly colleges and universities—have taken a different approach to blocking spyware: they use their network firewalls and web proxies to block access to Web sites known to install spyware. On March 31, 2005, Cornell University's Information Technology department released a report detailing the behavior of one particular piece of proxy-based spyware, Marketscore, and the steps the university took to intercept it.[25] Many other educational institutions have taken similar steps.

Individual users can also install firewalls from a variety of companies. These monitor the flow of information going to and from a networked computer and provide protection against spyware and malware. Some users install a large hosts file which prevents the user's computer from connecting to known spyware-related web addresses. Spyware may get installed via certain shareware programs offered for download.
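The hosts-file technique mentioned above works by resolving known spyware-related domains to an unroutable address so that connections to them simply fail. A minimal sketch, with hypothetical domain names:

```python
# Hypothetical blocklist; real community-maintained hosts files contain
# tens of thousands of tracker and spyware domains.
BLOCKED_DOMAINS = ["tracker.example.com", "ads.spyware-host.example"]

def hosts_block_entries(domains):
    """Render hosts-file lines that sinkhole each domain by pointing it
    at 0.0.0.0, an address no connection can actually reach."""
    return [f"0.0.0.0 {d}" for d in domains]

def is_blocked(domain, entries):
    """Check whether a hosts file built this way would sinkhole a domain."""
    return any(line.split()[1] == domain for line in entries)

entries = hosts_block_entries(BLOCKED_DOMAINS)
```

On a real system these lines would be appended to `/etc/hosts` (or `C:\Windows\System32\drivers\etc\hosts`), where the resolver consults them before DNS.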
Downloading programs only from reputable sources can provide some protection from this source of attack.[26] Individual users can also use a cellphone or computer with a physical (electrical) switch, or an isolated electronic switch, that disconnects the microphone and camera without bypass, keeping them disconnected when not in use; this limits the information that spyware can collect. (This policy is recommended by the NIST Guidelines for Managing the Security of Mobile Devices, 2013.)

A few spyware vendors, notably 180 Solutions, have written what the New York Times has dubbed "stealware", and what spyware researcher Ben Edelman terms affiliate fraud, a form of click fraud. Stealware diverts the payment of affiliate marketing revenues from the legitimate affiliate to the spyware vendor. Spyware which attacks affiliate networks places the spyware operator's affiliate tag on the user's activity, replacing any other tag, if there is one. The spyware operator is the only party that gains from this. The user has their choices thwarted, a legitimate affiliate loses revenue, networks' reputations are injured, and vendors are harmed by having to pay out affiliate revenues to an "affiliate" who is not party to a contract.[27] Affiliate fraud is a violation of the terms of service of most affiliate marketing networks. Mobile devices can also be vulnerable to chargeware, which manipulates users into illegitimate mobile charges.

In one case, spyware has been closely associated with identity theft.[28] In August 2005, researchers from security software firm Sunbelt Software suspected the creators of the common CoolWebSearch spyware had used it to transmit "chat sessions, user names, passwords, bank information, etc.";[29] however, it turned out that "it actually (was) its own sophisticated criminal little trojan that's independent of CWS."[30] This case was investigated by the FBI.
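The affiliate-tag substitution that defines stealware can be illustrated with a small URL rewrite. The parameter name `aff_id` and the tags below are hypothetical, not any real network's scheme; the point is only the mechanism: whatever tag is present on the outgoing link is overwritten with the spyware operator's.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def replace_affiliate_tag(url, operator_tag, param="aff_id"):
    """Model of the stealware behaviour described above: the affiliate
    parameter on a link is overwritten (or inserted) so that the spyware
    operator, not the legitimate affiliate, is credited for the sale."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [operator_tag]          # clobber any existing tag
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

hijacked = replace_affiliate_tag(
    "https://shop.example/item?aff_id=honest-affiliate", "spyware-operator")
```

The rewritten URL is indistinguishable to the merchant from a legitimately tagged referral, which is why this fraud is detected through traffic analysis rather than at the point of sale.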
The Federal Trade Commission estimates that 27.3 million Americans have been victims of identity theft, and that financial losses from identity theft totaled nearly $48 billion for businesses and financial institutions and at least $5 billion in out-of-pocket expenses for individuals.[31]

Some copy-protection technologies have borrowed from spyware. In 2005, Sony BMG Music Entertainment was found to be using rootkits in its XCP digital rights management technology.[32] Like spyware, not only was it difficult to detect and uninstall, it was so poorly written that most efforts to remove it could have rendered computers unable to function. Texas Attorney General Greg Abbott filed suit,[33] and three separate class-action suits were filed.[34] Sony BMG later provided a workaround on its website to help users remove it.[35]

Beginning on April 25, 2006, Microsoft's Windows Genuine Advantage Notifications application[36] was installed on most Windows PCs as a "critical security update". While the main purpose of this deliberately uninstallable application is to ensure the copy of Windows on the machine was lawfully purchased and installed, it also installs software that has been accused of "phoning home" on a daily basis, like spyware.[37][38] It can be removed with the RemoveWGA tool.

Stalkerware is spyware that has been used to monitor electronic activities of partners in intimate relationships. At least one software package, Loverspy, was specifically marketed for this purpose. Depending on local laws regarding communal/marital property, observing a partner's online activity without their consent may be illegal; the author of Loverspy and several users of the product were indicted in California in 2005 on charges of wiretapping and various computer crimes.[39]

Anti-spyware programs often report Web advertisers' HTTP cookies, the small text files that track browsing activity, as spyware.
While HTTP cookies are not always inherently malicious, many users object to third parties using space on their personal computers for their business purposes, and many anti-spyware programs offer to remove them.[40]

Shameware or "accountability software" is a type of spyware that is not hidden from the user, but operates with their knowledge, if not necessarily their consent. Parents, religious leaders, or other authority figures may require their children or congregation members to install such software, which is intended to detect the viewing of pornography or other content deemed inappropriate, and to report it to the authority figure, who may then confront the user about it.[41]

These common spyware programs illustrate the diversity of behaviors found in these attacks. Note that, as with computer viruses, researchers give names to spyware programs which may not be used by their creators. Programs may be grouped into "families" based not on shared program code, but on common behaviors, or by "following the money" of apparent financial or business connections. For instance, a number of the spyware programs distributed by Claria are collectively known as "Gator". Likewise, programs that are frequently installed together may be described as parts of the same spyware package, even if they function separately.

Spyware vendors include NSO Group, which in the 2010s sold spyware to governments for spying on human rights activists and journalists.[42][43][44] NSO Group was investigated by Citizen Lab.[42][44]

Malicious programmers have released a large number of rogue (fake) anti-spyware programs, and widely distributed Web banner ads can warn users that their computers have been infected with spyware, directing them to purchase programs which do not actually remove spyware, or that may even add more spyware of their own.[45][46]

The proliferation of fake or spoofed antivirus products that bill themselves as antispyware can be troublesome.
Users may receive popups prompting them to install such programs to protect their computer, when the programs will in fact add spyware. It is recommended that users not install any freeware claiming to be anti-spyware unless it is verified to be legitimate. Some known offenders include: Fake antivirus products constitute 15 percent of all malware.[48] On January 26, 2006, Microsoft and the Washington state attorney general filed suit against Secure Computer for its Spyware Cleaner product.[49] Unauthorized access to a computer is illegal under computer crime laws, such as the U.S. Computer Fraud and Abuse Act, the U.K.'s Computer Misuse Act, and similar laws in other countries. Since owners of computers infected with spyware generally claim that they never authorized the installation, a prima facie reading would suggest that the promulgation of spyware would count as a criminal act. Law enforcement has often pursued the authors of other malware, particularly viruses. However, few spyware developers have been prosecuted, and many operate openly as strictly legitimate businesses, though some have faced lawsuits.[50][51] Spyware producers argue that, contrary to the users' claims, users do in fact give consent to installations. Spyware that comes bundled with shareware applications may be described in the legalese text of an end-user license agreement (EULA). Many users habitually ignore these purported contracts, but spyware companies such as Claria say these demonstrate that users have consented. Despite the ubiquity of EULA agreements, under which a single click can be taken as consent to the entire text, relatively little case law has resulted from their use. It has been established in most common law jurisdictions that this type of agreement can be a binding contract in certain circumstances.[52] This does not, however, mean that every such agreement is a contract, or that every term in one is enforceable. Some jurisdictions, including the U.S.
states of Iowa[53] and Washington,[54] have passed laws criminalizing some forms of spyware. Such laws make it illegal for anyone other than the owner or operator of a computer to install software that alters Web-browser settings, monitors keystrokes, or disables computer-security software. In the United States, lawmakers introduced a bill in 2005 entitled the Internet Spyware Prevention Act, which would imprison creators of spyware.[55] Additionally, several diplomatic efforts have been made to curb the growing use of spyware. Launched by France and the UK in early 2024, the Pall Mall Process[56] aims to address the proliferation and irresponsible use of commercial cyber intrusion capabilities. The US Federal Trade Commission has sued Internet marketing organizations under the "unfairness doctrine"[57] to make them stop infecting consumers' PCs with spyware. In one case, that against Seismic Entertainment Productions, the FTC accused the defendants of developing a program that seized control of PCs nationwide, infected them with spyware and other malicious software, bombarded them with a barrage of pop-up advertising for Seismic's clients, exposed the PCs to security risks, and caused them to malfunction. Seismic then offered to sell the victims an "antispyware" program to fix the computers, and stop the popups and other problems that Seismic had caused. On November 21, 2006, a settlement was entered in federal court under which a $1.75 million judgment was imposed in one case and $1.86 million in another, but the defendants were insolvent.[58] In a second case, brought against CyberSpy Software LLC, the FTC charged that CyberSpy marketed and sold "RemoteSpy" keylogger spyware to clients who would then secretly monitor unsuspecting consumers' computers. According to the FTC, CyberSpy touted RemoteSpy as a "100% undetectable" way to "Spy on Anyone. From Anywhere."
The FTC obtained a temporary order prohibiting the defendants from selling the software and disconnecting from the Internet any of their servers that collect, store, or provide access to information that this software has gathered. The case is still in its preliminary stages. A complaint filed by the Electronic Privacy Information Center (EPIC) brought the RemoteSpy software to the FTC's attention.[59] An administrative fine, the first of its kind in Europe, was issued by the Independent Authority of Posts and Telecommunications (OPTA) of the Netherlands. It applied fines totaling €1,000,000 for the infection of 22 million computers. The spyware concerned is called DollarRevenue. The law articles that were violated are art. 4.1 of the Decision on universal service providers and on the interests of end users; the fines were issued based on art. 15.4 taken together with art. 15.10 of the Dutch telecommunications law.[60] Former New York State Attorney General and former Governor of New York Eliot Spitzer has pursued spyware companies for fraudulent installation of software.[61] In a suit brought in 2005 by Spitzer, the California firm Intermix Media, Inc. ended up settling, agreeing to pay US$7.5 million and to stop distributing spyware.[62] The hijacking of Web advertisements has also led to litigation. In June 2002, a number of large Web publishers sued Claria for replacing advertisements, but settled out of court. Courts have not yet had to decide whether advertisers can be held liable for spyware that displays their ads. In many cases, the companies whose advertisements appear in spyware pop-ups do not directly do business with the spyware firm. Rather, they have contracted with an advertising agency, which in turn contracts with an online subcontractor who is paid by the number of "impressions" or appearances of the advertisement.
Some major firms such as Dell Computer and Mercedes-Benz have sacked advertising agencies that have run their ads in spyware.[63] Litigation has gone both ways. Since "spyware" has become a common pejorative, some makers have filed libel and defamation actions when their products have been so described. In 2003, Gator (now known as Claria) filed suit against the website PC Pitstop for describing its program as "spyware".[64] PC Pitstop settled, agreeing not to use the word "spyware", but continues to describe harm caused by the Gator/Claria software.[65] As a result, other anti-spyware and anti-virus companies have also used other terms such as "potentially unwanted programs" or greyware to denote these products. In the 2010 WebcamGate case, plaintiffs charged that two suburban Philadelphia high schools secretly spied on students by surreptitiously and remotely activating webcams embedded in school-issued laptops the students were using at home, and thereby infringed on their privacy rights. The schools loaded each student's computer with LANrev's remote activation tracking software. This included the now-discontinued "TheftTrack". While TheftTrack was not enabled by default on the software, the program allowed the school district to elect to activate it, and to choose which of the TheftTrack surveillance options the school wanted to enable.[66] TheftTrack allowed school district employees to secretly and remotely activate the webcam embedded in the student's laptop, above the laptop's screen. That allowed school officials to secretly take photos through the webcam of whatever was in front of it and in its line of sight, and send the photos to the school's server. The LANrev software disabled the webcams for all other uses (e.g., students were unable to use Photo Booth or video chat), so most students mistakenly believed their webcams did not work at all. On top of the webcam surveillance, TheftTrack allowed school officials to take screenshots and send them to the school's server.
School officials were also granted the ability to take snapshots of instant messages, web browsing, music playlists, and written compositions. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms.[66][67][68]
https://en.wikipedia.org/wiki/Spyware
In computing, a trojan horse (or simply trojan;[1] often capitalized,[2] but see below) is a kind of malware that misleads users as to its true intent by disguising itself as a normal program. Trojans are generally spread by some form of social engineering. For example, a user may be duped into executing an email attachment disguised to appear innocuous (e.g., a routine form to be filled in), or into clicking on a fake advertisement on the Internet. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller who can then gain unauthorized access to the affected device.[3] Ransomware attacks are often carried out using a trojan. Unlike computer viruses and worms, trojans generally do not attempt to inject themselves into other files or otherwise propagate themselves.[4] The term is derived from the ancient Greek story of the deceptive Trojan Horse that led to the fall of the city of Troy.[2] It is unclear where and when the computing concept, and this term for it, originated; but by 1971 the first Unix manual assumed its readers knew both.[5] Another early reference is in a US Air Force report in 1974 on the analysis of vulnerability in the Multics computer systems.[6] The term "Trojan horse" was popularized by Ken Thompson in his 1983 Turing Award acceptance lecture "Reflections on Trusting Trust",[7] subtitled: "To what extent should one trust a statement that a program is free of Trojan horses? Perhaps it is more important to trust the people who wrote the software." He mentioned that he knew about the possible existence of trojans from a report on the security of Multics.[8][9] The computer term "Trojan horse" is derived from the legendary Trojan Horse of the ancient city of Troy. For this reason "Trojan" is often capitalized, especially in older sources. However, many modern style guides[10] and dictionaries[1] suggest a lower-case "trojan" for this technical use. Once installed, trojans may perform a range of malicious actions.
Many tend to contact one or more Command and Control (C2) servers across the Internet and await instruction. Since individual trojans typically use a specific set of ports for this communication, it can be relatively simple to detect them. Moreover, other malware could potentially "take over" the trojan, using it as a proxy for malicious action.[11] In German-speaking countries, spyware used or made by the government is sometimes called govware. Govware is typically used to intercept communications from the target device. Some countries, such as Switzerland and Germany, have a legal framework governing the use of such software.[12][13] Examples of govware trojans include the Swiss MiniPanzer and MegaPanzer[14] and the German "state trojan" nicknamed R2D2.[12] German govware works by exploiting security gaps unknown to the general public and accessing smartphone data before it is encrypted by other applications.[15] Due to the popularity of botnets among hackers and the availability of advertising services that permit authors to violate their users' privacy, trojans are becoming more common. According to a survey conducted by BitDefender from January to June 2009, "Trojan-type malware is on the rise, accounting for 83% of the global malware detected in the world." Trojans have a relationship with worms, as they spread with the help given by worms and travel across the internet with them.[16] BitDefender has stated that approximately 15% of computers are members of a botnet, usually recruited by a trojan infection.[17] Recent investigations have revealed that the trojan-horse method has been used as an attack on cloud computing systems. A trojan attack on cloud systems tries to insert an application or service into the system that can impact the cloud services by changing or stopping their functionality.
When the cloud system identifies the attack as legitimate, the service or application is executed, which can damage and infect the cloud system.[18] A trojan horse is a program that purports to perform some legitimate function, yet upon execution it compromises the user's security.[19] One simple example[20] is the following malicious version of the Linux ls command. An attacker would place this executable script in a publicly writable and "high-traffic" location (e.g., /tmp/ls). Then, any victim who tried to run ls from that directory, if and only if the victim's executable search PATH unwisely[20] included the current directory, would execute /tmp/ls instead of /usr/bin/ls, and have their home directory deleted. Similar scripts could hijack other common commands; for example, a script purporting to be the sudo command (which prompts for the user's password) could instead mail that password to the attacker.[19] In these examples, the malicious program imitates the name of a well-known useful program, rather than pretending to be a novel and unfamiliar (but harmless) program. As such, these examples also resemble typosquatting and supply chain attacks.
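The PATH-ordering flaw exploited by the trojan ls described above can be demonstrated harmlessly. The sketch below (file and variable names are illustrative, and the impostor only prints a message rather than deleting anything) shows how command lookup resolves to whichever matching executable appears first on the search path:

```python
# Harmless demonstration of the PATH-ordering flaw behind the trojan "ls".
import os
import shutil
import stat
import tempfile

# Create a stand-in for a publicly writable directory containing an impostor "ls".
trap_dir = tempfile.mkdtemp()
impostor = os.path.join(trap_dir, "ls")
with open(impostor, "w") as f:
    # A real trojan would do something destructive here (e.g. delete the
    # victim's home directory); this stand-in only prints a message.
    f.write("#!/bin/sh\necho 'impostor ls executed'\n")
os.chmod(impostor, os.stat(impostor).st_mode | stat.S_IEXEC)

# A PATH that unwisely puts the attacker-writable directory first.
unsafe_path = trap_dir + os.pathsep + "/usr/bin" + os.pathsep + "/bin"
safe_path = "/usr/bin" + os.pathsep + "/bin"

# shutil.which resolves a command the same way a shell does: first match wins.
print(shutil.which("ls", path=unsafe_path))  # resolves to the impostor
print(shutil.which("ls", path=safe_path))    # resolves to the system binary
```

The mitigation follows directly: keep the current directory and other writable locations out of PATH, or list them only after the system directories.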
https://en.wikipedia.org/wiki/Trojan_horse_(computing)
A virtual keyboard is a software component that allows the input of characters without the need for physical keys.[1] Interaction with a virtual keyboard happens mostly via a touchscreen interface, but can also take place in a different form in virtual or augmented reality. On a desktop computer, a virtual keyboard might provide an alternative input mechanism for users with disabilities who cannot use a conventional keyboard, for multilingual users who switch frequently between different character sets or alphabets (which may become confusing over time), or for users who lack a traditional keyboard. Virtual keyboards may utilize the following: Various JavaScript virtual keyboards have been created on web browsers, allowing users to type their own languages on foreign keyboards. Multitouch screens allow the creation of virtual chorded keyboards for tablet computers,[7] touchscreens, touchpads, and wired gloves.[8][9] Virtual keyboards are commonly used as an on-screen input method in devices with no physical keyboard, where there is no room for one, such as a pocket computer, personal digital assistant (PDA), tablet computer, or touchscreen-equipped mobile phone. Text is commonly input either by tapping a virtual keyboard or by finger-tracing.[10] Virtual keyboards are also featured in emulation software for systems that have fewer buttons than a computer keyboard would have. The four main approaches to entering text into a PDA were: virtual keyboards operated by a stylus, external USB keyboards, handwriting recognition, and stroke recognition. Microsoft's mobile operating system approach was to simulate a completely functional keyboard, resulting in an overloaded layout.[11] Without support for multi-touch technology, PDA virtual keyboards had usability constraints. When Apple presented the iPhone in 2007, not including a physical keyboard was seen as a detriment.[12] However, Apple brought multi-touch technology to the device, overcoming the usability problems of PDAs.
The most common mobile operating systems, Android and iOS, give the developer community the ability to develop custom virtual keyboards. The Android SDK provides an "InputMethodService".[13] This service provides a standard implementation of an input method, enabling the Android development community to implement their own keyboard layouts. The InputMethodService ships with a KeyboardView.[14] While the InputMethodService can be used to customize key and gesture inputs, the Keyboard class loads an XML description of a keyboard and stores the attributes of the keys.[15] As a result, it is possible to install different keyboard versions on an Android device, where each keyboard is simply an application (Gboard and SwiftKey being the most frequently downloaded among them); a simple activation via the Android settings menu is possible.[16] Apple's iOS operating system allows the development of custom keyboards; however, no access is given to the dictionary or general keyboard settings. iOS automatically switches between system and custom keyboards if the user enters text into the text input field.[17][18] The UIInputViewController is the primary view controller for a custom keyboard app extension. This controller provides different methods for the implementation of a custom keyboard, such as a user interface for a custom keyboard, obtaining a supplementary lexicon, or changing the primary language of a custom keyboard.[19] Microsoft Windows provides the virtual keyboard through the Common Text Framework service. Diverse scientific papers at the beginning of the 2000s showed, even before the invention of smartphones, that predicting words based on what the user is typing assisted in increasing typing speed.[20][21] At the beginning of development of this keyboard feature, prediction was mainly based on static dictionaries. Google implemented the predicting method in 2013 in Android 4.4.
This development was mainly driven by third-party keyboard providers, such as SwiftKey and Swype.[22] In 2014 Apple presented iOS 8,[23] which includes a new predictive typing feature called QuickType, which displays word predictions above the keyboard as the user types. Haptic feedback provides tactile confirmation that a key has been successfully triggered, i.e., the user hears and feels a "click" as a key is pressed. Utilizing hysteresis, the feel of a physical key can be emulated to an even greater degree. In this case, there is an initial "click" that is heard and felt as the virtual key is pressed down, but then, as finger pressure is reduced once the key is triggered, there is a further "unclick" sound and sensation, as if a physical key were springing back to its original unclicked state. This behaviour is explained in Aleks Oniszczak and Scott MacKenzie's 2004 paper "A Comparison of Two Input Methods for Keypads on Mobile Devices", which first introduced haptic feedback with hysteresis on a virtual keyboard.[24] Keyboards are needed in different digital areas. Smartphones and devices that create virtual worlds, for example virtual reality or augmented reality glasses, need to provide text input possibilities. An optical virtual keyboard was invented and patented by IBM engineers in 1992.[25] It optically detects and analyses human hand and finger motions and interprets them as operations on a physically non-existent input device, such as a surface with painted keys. This allows it to emulate unlimited types of manually operated input devices, including a mouse or keyboard. All mechanical input units can be replaced by such virtual devices, optimized for the current application and for the user's physiology, maintaining the speed, simplicity, and unambiguity of manual data input. One example of this technology is "SelfieType", a keyboard technology for mobile phones made by Samsung Electronics.
It was intended to use the front-facing camera (the selfie camera) to track the user's fingers, enabling the user to type on an "invisible keyboard" on a table or another surface in front of the phone.[26][27] It was introduced at the Consumer Electronics Show 2020[28][29][30] and was expected to launch in the same year, but never did. The basic idea of a virtual keyboard in an augmented reality environment is to give the user a text input possibility. A common approach is to render a flat keyboard into augmented reality, e.g. using the Unity TouchScreenKeyboard. The Microsoft HoloLens enables the user to point at letters on the keyboard by moving their head.[31] Another approach was researched by the Korean KJIST U-VR Lab in 2003. Their suggestion was to use wearables to track finger motion, replacing a physical keyboard with virtual ones. They also tried to give audiovisual feedback to the user when a key was hit. The basic idea was to give the user a more natural way to enter text, based on what they are used to.[32] The Magic Leap 1 from Magic Leap implements a virtual keyboard with augmented reality.[33] The challenge, as in augmented reality, is to give the user the possibility to enter text in a completely virtual environment. Most virtual reality systems don't track the hands of the user, so many available systems provide the possibility to point at letters.[34] In September 2016, Google released a virtual keyboard app for their Daydream[35] virtual reality headset. To enter text, the user points at letters using the controller.[36] In February 2017, Logitech presented an experimental approach to bring their keyboards into the virtual environment. The Vive Tracker and the Logitech G gaming keyboard track finger movement without the user wearing a glove.
Fifty kits were sent to exclusive developers, enabling them, in combination with Logitech's BRIDGE developer kit, to test and experiment with the new technology.[37][38] Virtual keyboards may be used in some cases to reduce the risk of keystroke logging.[39] For example, Westpac's online banking service uses a virtual keyboard for password entry, as does TreasuryDirect. It is more difficult for malware to monitor the display and mouse to obtain the data entered via the virtual keyboard than it is to monitor real keystrokes. However, it is possible, for example by recording screenshots at regular intervals or upon each mouse click.[40][41] Virtual keyboards may prevent keystroke inference attacks, but can also act as keyloggers through telemetry and can leak sensitive information through text suggestions.[42] The use of an on-screen keyboard on which the user "types" with mouse clicks can increase the risk of password disclosure by shoulder surfing, because:
https://en.wikipedia.org/wiki/Virtual_keyboard
Web tracking is the practice by which operators of websites and third parties collect, store and share information about visitors' activities on the World Wide Web. Analysis of a user's behaviour may be used to provide content that enables the operator to infer their preferences, and may be of interest to various parties, such as advertisers.[1][2] Web tracking can be part of visitor management.[3] The uses of web tracking include the following: Every device connected to the Internet is assigned a unique IP address, which is needed to enable devices to communicate with each other. With appropriate software on the host website, the IP address of visitors to the site can be logged and can also be used to determine the visitor's geographical location.[8][9] Logging the IP address can, for example, be used to monitor whether a person voted more than once, as well as their viewing pattern. Knowing the visitor's location indicates, besides other things, the visitor's country. This may, for example, result in prices being quoted in the local currency, affect the price or the range of goods that are available, mean that special conditions apply, and in some cases result in requests from or responses to a certain country being blocked entirely. Internet users may circumvent censorship and geo-blocking, and protect personal identity and location to stay anonymous on the internet, by using a VPN connection. An HTTP cookie is code and information embedded onto a user's device by a website when the user visits the website.[10] The website might then retrieve the information on the cookie on subsequent visits by the user. Cookies can be used to customise the user's browsing experience and to deliver targeted ads.[11] Some browsing activities that cookies can store are: A first-party cookie is created by the website the user is visiting. These cookies are considered "good" since they help the user rather than spy on them.
The main goal of first-party cookies is to recognize the user and their preferences so that their desired settings can be applied.[12] A third-party cookie is created by websites other than the one a user visits. They insert additional tracking code that can record a user's online activity. On-site analytics refers to data collection on the current site. It is used to measure many aspects of user interactions, including the number of times a user visits.[13] Restrictions on third-party cookies introduced by web browsers are bypassed by some tracking companies using a technique called CNAME cloaking, where a third-party tracking service is assigned a DNS record in the first-party origin domain (usually a CNAME) so that it is masqueraded as first-party, even though it is a separate entity in legal and organizational terms. This technique is blocked by some browsers and ad blockers using block lists of known trackers.[14][15] ETags can be used to track unique users,[16] as HTTP cookies are increasingly being deleted by privacy-aware users. In July 2011, Ashkan Soltani and a team of researchers at UC Berkeley reported that a number of websites, including Hulu, were using ETags for tracking purposes.[17] Hulu and KISSmetrics both ceased "respawning" as of 29 July 2011,[18] as KISSmetrics and over 20 of its clients were facing a class-action lawsuit over the use of "undeletable" tracking cookies, partially involving the use of ETags.[19] Because ETags are cached by the browser and returned with subsequent requests for the same resource, a tracking server can simply repeat any ETag received from the browser to ensure that an assigned ETag persists indefinitely (in a similar way to persistent cookies). Additional caching headers can also enhance the preservation of ETag data.[20] Web browsing is linked to a user's personal information. Location, interests, purchases, and more can be revealed just by what page a user visits.
This allows trackers to draw conclusions about a user and analyze patterns of activity.[33] Use of web tracking can be controversial when applied in the context of a private individual, and to varying degrees is subject to legislation such as the EU's eCommerce Directive and the UK's Data Protection Act. When it is done without the knowledge of the user, it may be considered a breach of browser security. In a business-to-business context, understanding a visitor's behavior in order to identify buying intentions is seen by many commercial organizations as an effective way to target marketing activities.[34] Visiting companies can be approached, both online and offline, with marketing and sales propositions which are relevant to their current requirements. From the point of view of a sales organization, engaging with a potential customer when they are actively looking to buy can produce savings in otherwise wasted marketing funds. The most advanced protection tools are or include Firefox's tracking protection and the browser add-ons uBlock Origin and Privacy Badger.[32][35][36] Moreover, they may include the browser add-on NoScript, the use of an alternative search engine like DuckDuckGo, and the use of a VPN. However, VPNs cost money, and as of 2023 NoScript may "make general web browsing a pain".[36] On mobile, the most advanced method may be the use of the mobile browser Firefox Focus, which mitigates web tracking on mobile to a large extent, including Total Cookie Protection, similar to the private mode in the conventional Firefox browser.[37][38][39] Users can also control third-party web tracking to some extent by other means. Opt-out cookies let users block websites from installing future cookies. Websites may be blocked from installing third-party advertisers or cookies on a browser, which will prevent tracking on the user's page.[40] Do Not Track is a web browser setting that can request that a web application disable its tracking of a user.
Enabling this feature will send a request to the website users are on to voluntarily disable its cross-site user tracking. Contrary to popular belief, browser privacy mode does not prevent all tracking attempts, because it usually only blocks the storage of information on the visitor's device (cookies). It does not help against the various fingerprinting methods, and such fingerprints can be de-anonymized.[41] When using a privacy mode, one may not stay logged into a website, and preferences may be lost, because the cookies storing those preferences are deleted by the browser automatically. Some web browsers use "tracking protection" or "tracking prevention" features to block web trackers.[42] The teams behind the NoScript and uBlock add-ons have assisted with developing Firefox's SmartBlock capabilities.[43] To safeguard user data from tracking by search engines, various privacy-focused search engines have been developed as viable alternatives. Examples of such search engines include DuckDuckGo, MetaGer, and Swisscows, which prioritize preventing the storage and tracking of user activity. While these alternatives offer enhanced privacy, some may not guarantee complete anonymity, and a few might be less user-friendly compared to mainstream search engines such as Google and Microsoft Bing.[44]
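The ETag persistence trick described earlier can be simulated without a real browser: the server mints an identifier and hands it out in the ETag header, and a caching client dutifully echoes it back in If-None-Match on every revisit, so the server can re-identify the visitor with no cookie at all. This is a minimal sketch; the class and method names are illustrative, not taken from any real tracking product.

```python
import uuid

class TrackingServer:
    """Re-identifies visitors purely via the ETag / If-None-Match headers."""
    def __init__(self):
        self.seen = set()

    def respond(self, request_headers):
        etag = request_headers.get("If-None-Match")
        if etag is None:
            etag = uuid.uuid4().hex  # first visit: mint a unique identifier
        self.seen.add(etag)
        # Echoing the ETag back tells the browser its cached copy is fresh,
        # so the identifier persists indefinitely, like a persistent cookie.
        return {"ETag": etag}

class CachingClient:
    """Mimics a browser that caches an ETag and sends it on revalidation."""
    def __init__(self):
        self.cached_etag = None

    def visit(self, server):
        headers = {}
        if self.cached_etag is not None:
            headers["If-None-Match"] = self.cached_etag
        self.cached_etag = server.respond(headers)["ETag"]

server = TrackingServer()
alice, bob = CachingClient(), CachingClient()
for _ in range(3):
    alice.visit(server)
bob.visit(server)
# Despite four requests, the server distinguishes exactly two visitors.
print(len(server.seen))  # 2
```

Clearing the browser cache (not just cookies) is what actually discards the identifier, which is why cache-conscious "respawning" trackers were hard to evade.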
https://en.wikipedia.org/wiki/Web_tracking
Password-based cryptography is the study of password-based key encryption, decryption, and authorization. It generally refers to two distinct classes of methods: Some systems attempt to derive a cryptographic key directly from a password. However, such practice is generally ill-advised when there is a threat of brute-force attack. Techniques to mitigate such attacks include passphrases and iterated (deliberately slow) password-based key derivation functions such as PBKDF2 (RFC 2898). Password-authenticated key agreement systems allow two or more parties that agree on a password (or password-related data) to derive shared keys without exposing the password or keys to network attack.[1] Earlier generations of challenge–response authentication systems have also been used with passwords, but these have generally been subject to eavesdropping and/or brute-force attacks on the password.
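An iterated key derivation of the kind described above is available directly in Python's standard library via hashlib.pbkdf2_hmac. A minimal sketch (the password, salt size, and iteration count are illustrative choices, not normative recommendations):

```python
import hashlib
import os

# Derive a 32-byte key from a password with PBKDF2 (RFC 2898).
password = b"correct horse battery staple"
salt = os.urandom(16)   # a random per-password salt defeats precomputed tables
iterations = 600_000    # deliberately slow, to hinder brute-force attacks

key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# The derivation is deterministic for the same password, salt, and count...
same = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
# ...but a different salt yields an unrelated key.
other = hashlib.pbkdf2_hmac("sha256", password, os.urandom(16), iterations, dklen=32)
print(len(key), key == same, key == other)  # 32 True False
```

The iteration count is the tunable cost factor: raising it slows every brute-force guess by the same factor while costing the legitimate user only one derivation per login.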
https://en.wikipedia.org/wiki/Password-based_cryptography
Automatic Warning System (AWS) is a railway safety system invented and predominantly used in the United Kingdom. It provides a train driver with an audible indication of whether the next signal they are approaching is clear or at caution.[1] Depending on the upcoming signal state, the AWS will either produce a 'horn' sound (as a warning indication) or a 'bell' sound (as a clear indication). If the train driver fails to acknowledge a warning indication, an emergency brake application is initiated by the AWS; if the driver correctly acknowledges the warning indication, by pressing an acknowledgement button, then a visual 'sunflower' is displayed to the driver as a reminder of the warning. AWS is based on trains detecting magnetic fields. These magnetic fields are created by permanent magnets and electromagnets installed on the track. The polarity and sequence of the magnetic fields detected by a train determine the type of indication given to the train driver. A magnet, known as an AWS magnet, is installed on the track center line. The magnetic field of the magnet is set based on the next signal aspect.[1] The train detects the polarity of the magnetic field via an AWS receiver, permanently mounted under the train.[1] An AWS magnet is made up of one permanent magnet and an optional electromagnet. The permanent magnet is uncontrollable and always produces a constant magnetic field of unchanging polarity. A train running over the permanent magnet alone will deliver an AWS warning indication to the train driver. The optional electromagnet can be used to provide the train driver with an AWS clear indication: if the train's AWS detects a second magnetic field of a certain polarity after the first, permanent magnet, then the AWS displays a clear indication instead of a warning indication. The train detects the electromagnet's polarity after the permanent magnet's polarity.
This is because the optional electromagnet is always installed after the permanent magnet (in the direction of travel). The electromagnet is connected to the green signal aspect, so the driver will only receive an AWS clear indication if the signal is clear (green). The permanent magnet always produces a south pole. If the electromagnet is energized to produce a north pole, the AWS will give the driver an AWS clear indication. Multiple unit trains have an AWS receiver at each end. Vehicles that can operate singly (single-car DMUs and locomotives) have only one; this could be either at the front or rear, depending on the direction the vehicle is traveling in. The equipment on a train consists of: The polarities in this example are those used in the UK, where the permanent magnet produces a south pole; other countries may use a permanent magnet that produces a north pole. The key operational principle is that the electromagnet produces the opposite pole to the permanent magnet. Example 1: a train is driving towards a signal that shows clear (green). The train runs over the AWS magnet (which is two magnets: first a permanent magnet, then an electromagnet). The electromagnet is energized. The AWS receiver detects magnetic fields in the sequence south, north: the south pole comes from the permanent magnet, and the north pole comes from the electromagnet. This south-then-north sequence gives an AWS clear indication to the driver. Example 2: a train is driving towards a signal that shows caution (yellow). The train runs over the AWS magnet. This time the electromagnet is de-energized (i.e. it is not powered), so the AWS receiver detects only one magnetic field: south. Because the electromagnet was not energized, it is invisible to the AWS receiver, and this south pole by itself results in an AWS warning indication to the driver.
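The polarity-sequence logic of the two scenarios above can be captured in a few lines. This is only an illustrative sketch (the function name and the "S"/"N" encoding are mine, not from any real AWS implementation); it maps the pole sequence seen by the receiver to the indication described in the text, using the UK convention that the permanent magnet gives a south pole:

```python
def aws_indication(detected_poles):
    """Map the pole sequence seen by the AWS receiver to a cab indication.

    UK convention: the permanent magnet always gives a south pole ("S");
    an energized electromagnet then gives a north pole ("N").
    """
    if detected_poles == ["S", "N"]:
        return "clear"    # bell / 'ping'; sunflower stays black
    if detected_poles == ["S"]:
        return "warning"  # horn; the driver must acknowledge
    return "none"         # no AWS magnet encountered

print(aws_indication(["S", "N"]))  # clear   (signal green, electromagnet energized)
print(aws_indication(["S"]))       # warning (electromagnet de-energized)
```

Note how the design is fail-safe: a power failure in the electromagnet can only downgrade a clear indication to a warning, never the reverse.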
As the train approaches a signal, it passes over an AWS magnet and the AWS visual indicator ('sunflower') in the driver's cab changes to all black. If the signal being approached is displaying a 'clear' aspect, AWS sounds a bell tone (modern trains have an electronic sounder that makes a distinctive 'ping') and leaves the 'sunflower' black. This AWS clear indication tells the driver that the next signal is showing 'clear' and that the AWS system is working.

If the next signal is displaying a restrictive aspect (e.g. caution or stop), the AWS audible indicator sounds a continuous alarm. The driver then has approximately 2 seconds to press and release the AWS acknowledgement button (if the driver keeps the button held down, the AWS will not be cancelled).[1] Once the acknowledgement button is pressed, the audible indicator is silenced and the visual indicator changes to a pattern of black and yellow spokes. This yellow-spoke pattern persists until the train reaches the next AWS magnet and serves as a reminder to the driver of the restrictive signal aspect they passed.

As a fail-safe mechanism, if the driver fails to press the AWS acknowledgement button in sufficient time after a warning indication, the emergency brakes apply automatically, bringing the train to a stop. After stopping, the driver can press the acknowledgement button, and the brakes release automatically once a safety time-out period has elapsed.

For speed restrictions, AWS works in the same way as for signals, except that a fixed magnet is located at the service braking distance before the speed reduction and no electromagnet is provided (or needed). A single fixed magnet always causes a warning indication, which the driver must acknowledge to prevent an emergency brake application. A trackside warning board also advises the driver of the speed requirement ahead.
This list of limitations is not exhaustive:

Early devices used a mechanical connection between the signal and the locomotive. In 1840, the locomotive engineer Edward Bury experimented with a system whereby a lever at track level, connected to the signal, sounded the locomotive's whistle and turned a cab-mounted red lamp. Ten years later, Colonel William Yolland of the Railway Inspectorate was calling for a system that not only alerted the driver but also automatically applied the brakes when signals were passed at danger, but no satisfactory method of bringing this about was found.[2]

In 1873, United Kingdom Patent No. 3286 was granted to Charles Davidson and Charles Duffy Williams for a system in which, if a signal were passed at danger, a trackside lever operated the locomotive's whistle, applied the brake, shut off steam and alerted the guard.[3] Numerous similar patents followed, but they all bore the same disadvantage – that they could not be used at higher speeds for risk of damage to the mechanism – and they came to nothing.

In Germany, the Kofler system used arms projecting from signal posts to engage with a pair of levers, one representing caution and the other stop, mounted on the locomotive cab roof. To address the problem of operation at speed, the sprung mounting for the levers was connected directly to the locomotive's axle box to ensure correct alignment.[4] When Berlin's S-Bahn was electrified in 1929, a development of this system, with the contact levers moved from the roofs to the sides of the trains, was installed at the same time.[citation needed]

The first useful device was invented by Vincent Raven of the North Eastern Railway in 1895, patent number 23384. Although this provided an audible warning only, it did indicate to the driver when points ahead were set for a diverging route. By 1909, the company had installed it on about 100 miles of track.
In 1907 Frank Wyatt Prentice patented a radio signalling system using a continuous cable laid between the rails, energized by a spark generator, to relay "Hertzian waves" to the locomotive. When the electrical waves were active, they caused metal filings in a coherer on the locomotive to clump together and allow a current from a battery to pass. The signal was turned off if the block were not "clear": no current passed through the coherer, and a relay turned a white or green light in the cab to red and applied the brakes.[5] The London & South Western Railway installed the system on its Hampton Court branch line in 1911, but removed it shortly afterwards when the line was electrified.[6]

The first system to be put into wide use was developed in 1905 by the Great Western Railway (GWR) and protected by UK patents 12661 and 25955. Its benefits over previous systems were that it could be used at high speed and that it sounded a confirmation in the cab when a signal was passed at clear.

In the final version of the GWR system, the locomotives were fitted with a solenoid-operated valve into the vacuum train pipe, maintained in the closed position by a battery. At each distant signal, a long ramp was placed between the rails. This ramp consisted of a straight metal blade set edge-on, almost parallel to the direction of travel (the blade was slightly offset from parallel so that in its fixed position it would not wear a groove into the locomotives' contact shoes), mounted on a wooden support. As the locomotive passed over the ramp, a sprung contact shoe beneath the locomotive was lifted and the battery circuit holding the brake valve closed was broken. In the case of a clear signal, current from a lineside battery energising the ramp (but at opposite polarity) passed to the locomotive through the contact and maintained the brake valve in the closed position, with the reversed-polarity current ringing a bell in the cab.
To ensure that the mechanism had time to act when the locomotive was travelling at high speed, and the external current was therefore supplied only for an instant, a "slow releasing relay" both extended the period of operation and supplemented the power from the external supply with current from the locomotive battery. Each distant signal had its own battery, operating at 12.5 V or more; the resistance if the power came directly from the controlling signal box was thought too great (the locomotive equipment required 500 mA). Instead, a 3 V circuit from a switch in the signal box operated a relay in the battery box.

When the signal was at 'caution' or 'danger', the ramp battery was disconnected and so could not replace the locomotive's battery current: the brake valve solenoid was then released, admitting air to the vacuum train pipe via a siren, which provided an audible warning as well as slowly applying the train brakes. The driver was then expected to cancel the warning (restoring the system to its normal state) and apply the brakes under his own control; if he did not, the brake valve solenoid would remain open, causing all vacuum to be lost and the brakes to be fully applied after about 15 seconds.

The warning was cancelled by the driver depressing a spring-loaded toggle lever on the ATC apparatus in the cab. The lever and circuitry were arranged so that it was the lever returning to its normal position after being depressed, not the depressing of the lever itself, that reset the system; this was to prevent the system being overridden by drivers jamming the lever in the downward position, or by the lever accidentally becoming stuck in such a position.
In normal use the locomotive battery was subject to constant drain holding the valve in the vacuum train pipe closed, so to keep this to a minimum an automatic cut-off switch was incorporated, which disconnected the battery when the locomotive was not in use and the vacuum in the train pipe had dropped away.[7]

It was possible for specially equipped GWR locomotives to operate over shared lines electrified on the third-rail principle (Smithfield Market, Paddington Suburban and Addison Road). At the entrance to the electrified sections, a particular high-profile contact ramp (4+1⁄2 in [110 mm] instead of the usual 2+1⁄2 in [64 mm]) raised the locomotive's contact shoe until it engaged with a ratchet on the frame; a corresponding raised ramp at the end of the electrified section released the ratchet. It was found, however, that the heavy traction current could interfere with the reliable operation of the on-board equipment when traversing these routes, and it was for this reason that, in 1949, the otherwise "well proven" GWR system was not selected as the national standard (see below).[7][8]

Notwithstanding the heavy commitment of maintaining the lineside and locomotive batteries, the GWR installed the equipment on all its main lines. For many years, Western Region (successor to the GWR) locomotives were dual-fitted with both the GWR ATC and BR AWS systems.

By the 1930s, other railway companies, under pressure from the Ministry of Transport, were considering systems of their own. A non-contact method based on magnetic induction was preferred, to eliminate the problems caused by snowfall and by day-to-day wear of the contacts which had been discovered in existing systems. The Strowger-Hudd system of Alfred Ernest Hudd (c. 1883–1958) used a pair of magnets, one a permanent magnet and one an electromagnet, acting in sequence as the train passed over them.
Hudd patented his invention and offered it for development to the Automatic Telephone Manufacturing Company of Liverpool (a subsidiary of the Strowger Automatic Telephone Exchange Company of Chicago, Illinois).[9][10] It was tested by the Southern Railway, the London & North Eastern Railway and the London, Midland & Scottish Railway, but these trials came to nothing. In 1948 Hudd, by then working for the LMS, equipped the London, Tilbury and Southend line, a division of the LMS, with his system. It was successful, and British Railways developed the mechanism further by providing a visual indication in the cab of the aspect of the last signal passed. In 1956, the Ministry of Transport evaluated the GWR, LTS and BR systems and selected the one developed by BR as the standard for Britain's railways, in response to the Harrow & Wealdstone accident of 1952.[8]

AWS was later extended to give warnings for:[11]

AWS was based on a 1930 system developed by Alfred Ernest Hudd[9] and marketed as the "Strowger-Hudd" system. An earlier contact system, installed on the Great Western Railway since 1906 and known as automatic train control (ATC), was gradually supplanted by AWS within the Western Region of British Railways.

Network Rail (NR) AWS consists of:

The system works on a set/reset principle. When the signal is at 'clear' or green ("off"), the electromagnet is energised. As the train passes, the permanent magnet sets the system; a short time later, as the train moves forward, the electromagnet resets it. Once reset, a bell is sounded (a chime on newer stock) and the indicator is set to all black if it is not already so. No acknowledgement is required from the driver. The system must be reset within one second of being set, otherwise it behaves as for a warning indication.
An additional safeguard is included in the distant-signal control wiring to ensure the AWS "clear" indication is given only when the distant is proved "off": mechanical semaphore distants have a contact in the electromagnet coil circuit that closes only when the arm is raised or lowered by at least 27.5 degrees. Colour-light signals have a current-sensing relay in the lamp lighting circuit to prove the signal alight; this is used in combination with the relay controlling the green aspect to energise the AWS electromagnet. In a Solid State Interlocking, the signal module has a "green proved" output from its driver electronics that is used to energise the electromagnet.

When the distant signal is at 'caution' or yellow (on), the electromagnet is de-energised. As the train passes, the permanent magnet sets the system but, since the electromagnet is de-energised, the system is not reset. After the one-second delay within which the system can be reset, a horn warning is given until the driver acknowledges it by pressing a plunger. If the driver fails to acknowledge the warning within 2.75 seconds, the brakes are automatically applied. If the driver does acknowledge the warning, the indicator disc changes to yellow and black, as a reminder that a warning has been acknowledged; the yellow and black indication persists until the next signal and serves as a reminder between signals that the driver is proceeding under caution.

The one-second delay before the horn sounds allows the system to operate correctly down to speeds as low as 1+3⁄4 mph (2.8 km/h). Below this speed, the caution horn warning will always be given, but it will be cancelled automatically when the electromagnet resets the system if the driver has not already done so; the display will indicate all black once the system resets itself. The system is fail-safe since, in the event of a loss of power, only the electromagnet is affected and therefore all passing trains will receive a warning.
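The set/reset timing above can be sketched as a small decision function. This is an illustrative model only, not the real receiver logic; the function name and outcome strings are invented, while the one-second reset window and 2.75-second acknowledgement window come from the text:

```python
def aws_receiver(reset_delay, ack_delay):
    """Model the BR AWS set/reset outcome for one magnet passage.

    reset_delay: seconds from the permanent magnet 'setting' the system
                 until the electromagnet 'resets' it (None if never reset).
    ack_delay:   seconds until the driver presses the plunger (None if never).
    """
    if reset_delay is not None and reset_delay <= 1.0:
        # Reset within one second: clear indication, no acknowledgement needed.
        return "bell, indicator all black"
    if ack_delay is not None and ack_delay <= 2.75:
        # Horn sounded and acknowledged in time.
        return "horn acknowledged, indicator yellow/black"
    # Fail-safe outcome: warning not acknowledged in time.
    return "brakes applied"

print(aws_receiver(reset_delay=0.2, ack_delay=None))   # bell, indicator all black
print(aws_receiver(reset_delay=None, ack_delay=1.5))   # horn acknowledged, indicator yellow/black
print(aws_receiver(reset_delay=None, ack_delay=None))  # brakes applied
```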
The system suffers one drawback: on single-track lines, the track equipment will set the AWS system on a train travelling in the opposite direction from that for which the equipment is intended, but not reset it, as the electromagnet is encountered before the permanent magnet. To overcome this, a suppressor magnet may be installed in place of an ordinary permanent magnet. When energised, its suppressing coil diverts the magnetic flux from the permanent magnet so that no warning is received on the train. The suppressor magnet is fail-safe, since loss of power causes it to act like an ordinary permanent magnet. A cheaper alternative is a lineside sign that instructs the driver to cancel and ignore the warning: a blue square board with a white St Andrew's cross on it (or a yellow board with a black cross, if provided in conjunction with a temporary speed restriction).

With mechanical signalling, the AWS system was installed only at distant signals but, with multi-aspect signalling, it is fitted at all main line signals. All signal aspects except green cause the horn to sound and the indicator disc to change to yellow on black.

AWS equipment without electromagnets is fitted at locations where a caution signal is invariably required or where a temporary caution is needed (for example, a temporary speed restriction). This is a secondary advantage of the system, because temporary AWS equipment need only contain a permanent magnet; no electrical connection or supply is needed. In this case, the warning indication in the cab will persist until the next green signal is encountered.

To verify that the on-train equipment is functioning correctly, motive power depot exit lines are fitted with a 'Shed Test Inductor' that produces a warning indication for vehicles entering service. Because of the low speeds used on such lines, the track equipment is smaller than that found on the operational network.
'Standard strength' magnets are used everywhere except in DC third-rail electrification areas and are painted yellow. The minimum field strength to operate the on-train equipment is 2 milliteslas (measured 125 mm [5 in] above the track equipment casing). Typical track equipment produces a field of 5 mT, and Shed Test Inductors typically produce 2.5 mT (measured under the same conditions). Where DC third-rail electrification is installed, 'extra strength' magnets are fitted and are painted green; this is because the current in the third rail produces a magnetic field of its own which would swamp the standard-strength magnets.

AWS is provided at most main aspect signals on running lines, though there are some exceptions:[1]

Because the permanent magnet is located in the centre of the track, it operates in both directions. The permanent magnet can be suppressed by an electric coil of suitable strength. Where signals applying to opposing directions of travel on the same line are suitably positioned relative to each other (i.e. facing each other and about 400 yd apart), common track equipment may be used, comprising an unsuppressed permanent magnet sandwiched between both signals' electromagnets.

The BR AWS system is also used in:
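The field-strength figures above amount to a simple threshold test. A minimal sketch (the function name is illustrative; the 2 mT threshold and the example field values come from the text):

```python
MIN_FIELD_MT = 2.0  # minimum flux density to operate the receiver, in mT
                    # (measured 125 mm above the track equipment casing)

def magnet_detected(field_mt):
    """True if the measured field is strong enough to operate the
    on-train AWS receiver."""
    return field_mt >= MIN_FIELD_MT

print(magnet_detected(5.0))  # typical track equipment -> True
print(magnet_detected(2.5))  # Shed Test Inductor      -> True
print(magnet_detected(1.0))  # too weak                -> False
```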
https://en.wikipedia.org/wiki/Automatic_Warning_System
Cab signalling is a railway safety system that communicates track status and condition information to the cab, crew compartment or driver's compartment of a locomotive, railcar or multiple unit. The information is continually updated, giving an easy-to-read display to the train driver or engine driver. The simplest systems display the trackside signal, while more sophisticated systems also display allowable speed, the location of nearby trains, and dynamic information about the track ahead. Cab signals can also be part of a more comprehensive train protection system that can automatically apply the brakes and stop the train if the operator does not respond appropriately to a dangerous condition.[1]

The main purpose of a signal system is to enforce a safe separation between trains and to stop or slow trains in advance of a restrictive situation. The cab signal system is an improvement over the wayside signal system, where visual signals beside or above the right-of-way govern the movement of trains, as it provides the train operator with a continuous reminder of the last wayside signal or a continuous indication of the state of the track ahead.

The first such systems were installed on an experimental basis in the 1910s in the United Kingdom, in the 1920s in the United States, and in the Netherlands in the 1940s. Modern high-speed rail systems such as those in Japan, France, and Germany were all designed from the start to use in-cab signalling, owing to the impracticality of sighting wayside signals at the new, higher train speeds. Worldwide, legacy rail lines continue to see limited adoption of cab signalling outside high-density or suburban rail districts, and in many cases it is precluded by the use of older intermittent Automatic Train Stop technology.

In North America, the coded track circuit system developed by the Pennsylvania Railroad (PRR) and Union Switch & Signal (US&S) became the de facto national standard.
Variations of this system are also in use on many rapid transit systems and form the basis for several international cab signalling systems such as CAWS in Ireland, BACC in Italy, ALSN in Russia and the first-generation Shinkansen signalling developed by Japan National Railways (JNR). In Europe and elsewhere in the world, cab signalling standards were developed on a country-by-country basis with limited interoperability; newer technologies like the European Rail Traffic Management System (ERTMS) aim to improve interoperability. The train-control component of ERTMS, termed the European Train Control System (ETCS), is a functional specification that incorporates some of the former national standards and allows them to be fully interoperable with a few modifications.

All cab signalling systems must have a continuous in-cab indication to inform the driver of track conditions ahead; however, they fall into two main categories. Intermittent cab signals are updated at discrete points along the rail line; between these points the display reflects information from the last update. The German Indusi and Dutch ATB-NG fall into this category. These and other such systems provide constant reminders to drivers of track conditions ahead but are only updated at discrete points, which can lead to situations where the information displayed to the driver has become out of date. Intermittent cab signalling systems have functional overlap with many other train protection systems, such as trip stops, but the distinction is that a driver or automatic operating system makes continuous reference to the last received update.

Continuous cab signals receive a continuous flow of information about the state of the track ahead, and the cab indication can change at any time to reflect updates. The majority of cab signalling systems, including those that use coded track circuits, are continuous.
Continuous systems have the added benefit of fail-safe behaviour in the event that a train stops receiving the continuous transmission the cab signalling system relies upon. Early systems used the rails or loop conductors laid along the track to provide continuous communication between wayside signal systems and the train.[2] These systems provided for the transmission of more information than was typically possible with contemporary intermittent systems, and are what enabled the display of a miniature signal to the driver; hence the term "cab signalling". Continuous systems are also more easily paired with Automatic Train Control technology, which can enforce speed restrictions based on information received through the signalling system: because continuous cab signals can change at any time to be more or less restrictive, they provide for more efficient operation than intermittent ATC systems.

Cab signals require a means of transmitting information from wayside to train; there are a few main methods of accomplishing this transfer.

Simple magnetic or electrical contact transmission was popular for early intermittent systems, which used the presence of a magnetic field or electric current to designate a hazardous condition.[3] The British Rail Automatic Warning System (AWS) is an example of a two-indication cab signal system transmitting information using a magnetic field.

Inductive systems are non-contact systems that rely on more than the simple presence or absence of a magnetic field to transmit a message. Inductive systems typically require a beacon or an induction loop to be installed at every signal and at other intermediate locations. The inductive coil uses a changing magnetic field to transmit messages to the train; typically, different pulse frequencies in the inductive coil are assigned different meanings. Continuous inductive systems can be made by using the running rails as one long tuned inductive loop. Examples of intermittent inductive systems include the German Indusi system.
Continuous inductive systems include the two-aspect General Railway Signal Company "Automatic Train Control" installed on the Chicago and North Western Railroad, among others.

A coded track circuit based system is essentially an inductive system that uses the running rails as the information transmitter. The coded track circuits serve a dual purpose: they perform the train detection and rail continuity detection functions of a standard track circuit, and they continuously transmit signal indications to the train, eliminating the need for specialized beacons. Examples of coded track circuit systems include the Pennsylvania Railroad standard system, a variation of which was used on the London Underground Victoria line.[4] Later, audio frequency (AF) track circuit systems came to replace "power" frequency systems in rapid transit applications, as higher-frequency signals self-attenuate, reducing the need for insulated rail joints. Some of the first users of AF cab signal systems include the Washington Metro and Bay Area Rapid Transit. More recently, digital systems have become preferred, transmitting speed information to trains using datagrams instead of simple codes. The French TVM makes use of the running rails to transmit the digital signalling information, while the German LZB system makes use of auxiliary wires strung down the centre of the track to continually transmit the signalling information.

Transponder-based systems make use of fixed antenna loops or beacons (called balises) that transmit datagrams or other information to a train as it passes overhead. While similar to intermittent inductive systems, transponder-based cab signalling transmits more information and can also receive information from the train to aid traffic management. The low cost of loops and beacons allows for a larger number of information points than may have been possible with older systems, as well as finer-grained signalling information.
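A pulse-code system of the kind described above can be pictured as a table from code rate to cab aspect. The rates and aspect names below are commonly quoted for PRR-derived practice but should be treated as assumptions for illustration, not a specification:

```python
# Illustrative pulse rates (pulses per minute) -> cab aspect.
# These values are often cited for PRR-derived systems, but the exact
# rates and aspect names vary by railroad; treat them as an assumption.
PULSE_CODES = {
    180: "Clear",
    120: "Approach Medium",
    75:  "Approach",
}

def cab_aspect(pulses_per_minute):
    # Absence of a recognised code fails safe to the most restrictive aspect.
    return PULSE_CODES.get(pulses_per_minute, "Restricting")

print(cab_aspect(180))  # Clear
print(cab_aspect(0))    # Restricting
```

The fail-safe default illustrates why loss of code is treated as the most restrictive condition rather than an error state.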
The British Automatic Train Protection system was one example of this technology, along with the more recent Dutch ATB-NG.

Wireless cab signalling systems dispense with all track-based communications infrastructure and instead rely on fixed wireless transmitters to send trains signalling information. This method is most closely associated with communications-based train control. ETCS levels 2 and 3 make use of this approach, as do a number of other cab signalling systems under development.

The cab display unit (CDU), also called a driver machine interface (DMI) in the ERTMS standard, is the interface between the train operator and the cab signalling system. Early CDUs displayed simple warning indications or representations of wayside railway signals. Later, many railways and rapid transit systems dispensed with miniature in-cab signals in favour of an indication of the speed at which the operator was permitted to travel, typically in conjunction with some form of Automatic Train Control speed enforcement system, under which it becomes more important for operators to run their trains at specific speeds than to use their judgement based on signal indications. One common innovation was to integrate the speedometer and cab signal display, superimposing or juxtaposing the allowed speed with the current speed.

Digital cab signalling systems that make use of datagrams with "distance to target" information can use simple displays that inform the driver when they are approaching or have triggered a speed penalty, or more complex ones that show a moving graph of the minimum braking curves permitted to reach the speed target. CDUs also inform the operator which mode, if any, the system is in, and whether it is active at all. CDUs can also be integrated into the alertness system, providing count-downs to the alertness penalty or a means by which to cancel the alarm.
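A "distance to target" display rests on a simple braking-curve calculation. The sketch below assumes constant deceleration with no reaction time or gradient correction, and the 0.7 m/s² service-braking figure is an assumed placeholder, not a value from any real system:

```python
import math

def permitted_speed(target_speed_ms, distance_m, decel_ms2=0.7):
    """Highest current speed (m/s) from which a train braking at a
    constant decel_ms2 can slow to target_speed_ms within distance_m:
        v = sqrt(v_target^2 + 2 * b * d)
    """
    return math.sqrt(target_speed_ms**2 + 2 * decel_ms2 * distance_m)

# E.g. a stop target 1,000 m ahead:
print(round(permitted_speed(0.0, 1000.0), 1))  # ~37.4 m/s (about 135 km/h)
```

A real braking-curve supervisor layers reaction-time, gradient and brake-build-up allowances on top of this basic relation.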
Cab signalling in the United States was driven by a 1922 ruling by the Interstate Commerce Commission (ICC) that required 49 railways to install some form of automatic train control in one full passenger division by 1925.[5] While several large railways, including the Santa Fe and New York Central, fulfilled the requirement by installing intermittent inductive train stop devices, the PRR saw an opportunity to improve operational efficiency and installed the first continuous cab signal systems, eventually settling on pulse code cab signalling technology supplied by Union Switch and Signal.

In response to the PRR's lead, the ICC mandated that some of the nation's other large railways equip at least one division with continuous cab signal technology as a test to compare technologies and operating practices. The affected railroads were less than enthusiastic, and many chose to equip one of their more isolated or less-trafficked routes to minimize the number of locomotives to be fitted with the apparatus. Several railways chose the inductive loop system rejected by the PRR; these included the Central Railroad of New Jersey (installed on its Southern Division), the Reading Railroad (installed on its Atlantic City Railroad main line), the New York Central, and the Florida East Coast.[6]

Both the Chicago and North Western and the Illinois Central employed a two-aspect system on select suburban lines near Chicago; the cab signals would display "Clear" or "Restricting" aspects. The CNW went further and eliminated the wayside intermediate signals on the stretch of track between Elmhurst and West Chicago, requiring trains to proceed solely on the basis of the two-aspect cab signals. The Chicago, Milwaukee, St. Paul and Pacific Railroad had a three-aspect system operating by 1935 between Portage, Wisconsin and Minneapolis, Minnesota.[7]

As the Pennsylvania Railroad system was the only one adopted on a large scale, it became a de facto national standard, and most installations of cab signals in the current era have been of this type. Recently, there have been several new types of cab signalling which use communications-based technology to reduce the cost of wayside equipment, or which supplement existing signal technologies to enforce speed restrictions and absolute stops and to respond to grade crossing malfunctions or incursions.

The first of these was the Speed Enforcement System (SES) employed by New Jersey Transit on its low-density Pascack Valley Line as a pilot program using a dedicated fleet of 13 GP40PH-2 locomotives. SES used a system of transponder beacons attached to wayside block signals to enforce signal speed. SES was disliked by engine crews because of its habit of causing immediate penalty brake applications without first sounding an overspeed alarm and giving the engineer a chance to decelerate. SES is in the process of being removed from this line and is being replaced with CSS.

Amtrak uses the Advanced Civil Speed Enforcement System (ACSES) for its Acela Express high-speed rail service on the NEC.[8] ACSES was an overlay to the existing PRR-type CSS and uses the same SES transponder technology to enforce both permanent and temporary speed restrictions at curves and other geographic features. The on-board cab signal unit processes both the pulse-code "signal speed" and the ACSES "civil speed", then enforces the lower of the two. ACSES also provides for a positive stop at absolute signals, which could originally be released by a code provided by the dispatcher and transmitted from the stopped locomotive via a data radio; later this was amended to a simpler "stop release" button on the cab signal display.
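The ACSES rule of enforcing the lower of the two speeds reduces to a one-line comparison. A minimal sketch (the function name and the mph example values are illustrative):

```python
def enforced_speed(signal_speed, civil_speed):
    """ACSES overlays the PRR-type cab signal system: the on-board unit
    enforces the lower of the pulse-code 'signal speed' and the
    transponder-derived 'civil speed' (values here in mph, illustrative)."""
    return min(signal_speed, civil_speed)

print(enforced_speed(80, 60))  # 60: the civil (curve) speed governs
print(enforced_speed(45, 60))  # 45: the signal indication governs
```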
https://en.wikipedia.org/wiki/Cab_signalling
EBICab is a trademark registered by Alstom (formerly Bombardier) for the on-board train equipment used as part of an Automatic Train Control system. Three different families exist, which are technically unrelated.

EBICab 500 is Bombardier's implementation of the German PZB, the train protection system widely used in Germany, Austria and other countries, allowing operation up to 160 km/h. EBICab 600 is Bombardier's implementation of the German PZB and LZB as a combined STM; LZB is used on high-speed tracks in Germany, up to 300 km/h.

EBICab 700 was originally derived from Ericsson's SLR system in Sweden. Most trains in Sweden and Norway use a similar on-board system, Ansaldo L10000 (better known as ATC-2), from Bombardier's competitor Ansaldo STS (now Hitachi Rail STS).[1] ATC-2 was also developed in Sweden.[2]

These on-board systems use pairs of balises mounted on the sleepers. The pairs of balises distinguish signals in one direction from those in the other, with semi-continuous speed supervision, using intermittent wayside-to-train transmission via wayside transponders.[3]

EBICab comes in two versions: EBICab 700, used in Sweden, Norway, Portugal and Bulgaria, and EBICab 900, installed in the Spanish Mediterranean Corridor (vmax = 220 km/h) and in Finland (Finnish: Junakulunvalvonta) under the name ATP-VR/RHK. In Portugal it is known as Convel (a contraction of Controlo de Velocidade, meaning speed control).

The EBICab 900 system uses wayside balises with signal encoders, or serial communications with an electronic lookup table, together with on-board equipment on the train. Data is transmitted between the passive wayside balises (between two and four per signal) and the antenna installed under the train, which powers the balises as it passes over them; the coupling between the balise and the on-board antenna is inductive. In comparison with ASFA, a system which can transmit only a limited amount of data per frequency, EBICab's use of an electronic lookup table allows a much larger amount of data to be transmitted.
Adif/Renfe, in Spain, sometimes use the term ATP to refer to EBICab 900, which was the first system on its network to provide automatic train protection. The Manila MRT Line 3 in the Philippines also uses the term ATP to refer to EBICab 900.[4]

The most important difference from EBICab 900 is that EBICab 700 can only transmit packets with 12 useful bits out of a total of 32 bits, and allows up to 5 transponders per signal.

The EBICab 2000 is Bombardier's implementation of ETCS, operated in several European countries (Germany, Switzerland, United Kingdom, Belgium, Netherlands, Spain, Sweden, Poland). It can read Eurobalises and can communicate by Euroradio with an RBC.
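The "12 useful bits out of 32" figure can be pictured with a bit-mask sketch. The field position is purely illustrative, since the real telegram layout is not described here; only the 12-of-32 proportion comes from the text:

```python
def payload(telegram):
    """Extract a hypothetical 12-bit payload from a 32-bit balise telegram.
    Assumes, purely for illustration, that the useful bits occupy the
    low-order 12 bits; the remaining 20 bits would carry framing/checks."""
    return telegram & 0xFFF  # keep the 12 least significant bits

print(hex(payload(0xDEADBEEF)))  # 0xeef
```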
https://en.wikipedia.org/wiki/EBICAB
Linienzugbeeinflussung (or LZB) is a cab signalling and train protection system used on selected German and Austrian railway lines as well as on the AVE and some commuter rail lines in Spain. The system was mandatory where trains were allowed to exceed speeds of 160 km/h (99 mph) in Germany and 220 km/h (140 mph) in Spain. It is also used on some slower railway and urban rapid transit lines to increase capacity. The German Linienzugbeeinflussung translates to continuous train control, literally: linear train influencing. It is also called linienförmige Zugbeeinflussung.

LZB is deprecated and will be replaced with the European Train Control System (ETCS) between 2023 and 2030. It is referenced by the European Union Agency for Railways (ERA) as a Class B train protection system in National Train Control (NTC).[1] Driving cars mostly have to have their classical control logic replaced by ETCS Onboard Units (OBU) with a common Driver Machine Interface (DMI).[2] Because high-performance trains are often not scrapped or reassigned to secondary lines, special Specific Transmission Modules (STM) for LZB were developed to continue supporting LZB installations.[3]

In Germany the standard distance from a distant signal to its home signal is 1,000 metres (3,300 ft). On a train with strong brakes, this is the braking distance from 160 km/h. In the 1960s Germany evaluated various options to increase speeds, including increasing the distance between distant and home signals, and cab signalling. Increasing the distance between the home and distant signals would decrease capacity. Adding another aspect would make the signals harder to recognize. In either case, changes to the conventional signals wouldn't solve the problem of the difficulty of seeing and reacting to the signals at higher speeds. To overcome these problems, Germany chose to develop continuous cab signalling. The LZB cab signalling system was first demonstrated in 1965, enabling daily trains at the International Transport Exhibition in Munich to run at 200 km/h.
The system was further developed throughout the 1970s, then released on various lines in Germany in the early 1980s and on German, Spanish, and Austrian high-speed lines in the 1990s with trains running up to 300 km/h (190 mph). Meanwhile, additional capabilities were built into the system.

LZB consists of equipment on the line as well as on the trains. A 30–40 km segment of track is controlled by an LZB control centre.[4] The control centre computer receives information about occupied blocks from track circuits or axle counters, and about locked routes from interlockings. It is programmed with the track configuration, including the location of points, turnouts, gradients, and curve speed limits. With this, it has sufficient information to calculate how far each train may proceed and at what speed.

The control centre communicates with the train using two conductor cables that run between the tracks and are crossed every 100 m. The control centre sends data packets, known as telegrams, to the vehicle which give it its movement authority (how far it can proceed and at what speed), and the vehicle sends back data packets indicating its configuration, braking capabilities, speed, and position.

The train's on-board computer processes the packets and displays the relevant information to the driver. If there is a long free distance in front of the train, the driver will see the target speed and permitted speed equal to the maximum line speed, with the distance showing the maximum distance, between 4 km and 13.2 km depending on the unit, train, and line. As the train approaches a speed restriction, such as one for a curve or turnout, LZB will sound a buzzer and display the distance to and speed of the restriction. As the train continues, the target distance will decrease. As the train nears the speed restriction, the permitted speed will start to decrease, ending up at the target speed at the restriction. At that point the display will change to the next target.
The LZB system treats a red signal or the beginning of a block containing a train as a speed restriction with a speed of zero. The driver will see the same sequence as when approaching a speed restriction, except that the target speed is 0.

LZB includes Automatic Train Protection. If the driver exceeds the permitted speed plus a margin, LZB will activate the buzzer and an overspeed light. If the driver fails to slow the train, the LZB system can apply the brakes itself, bringing the train to a halt if necessary. LZB also includes an Automatic Train Operation system known as AFB (Automatische Fahr- und Bremssteuerung, automatic driving and braking control), which enables the driver to let the computer drive the train on auto-pilot, automatically driving at the maximum speed currently allowed by the LZB. In this mode, the driver only monitors the train and watches for unexpected obstacles on the tracks. Finally, the LZB vehicle system includes the conventional Indusi (or PZB) train protection system for use on lines not equipped with LZB.

In the 1960s, the German railways wanted to increase the speeds of some of their railway lines. One issue in doing so is signalling. German signals are placed too close together to allow high-speed trains to stop between them, and signals may be difficult for train drivers to see at high speeds. Germany uses distant signals placed 1,000 m (3,300 ft) before the main signal. Trains with conventional brakes, decelerating at 0.76 m/s² (2.5 ft/s²), can stop from 140 km/h (87 mph) in that distance. Trains with strong brakes, usually including electromagnetic track brakes, decelerating at 1 m/s² (3.3 ft/s²), can stop from 160 km/h (99 mph) and are allowed to travel at that speed. However, even with strong brakes and the same deceleration, a train travelling at 200 km/h (120 mph) would require 1,543 m (5,062 ft) to stop, exceeding the signalling distance.
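The stopping distances quoted above follow directly from the constant-deceleration formula d = v²/(2a); a quick sketch reproduces them:

```python
def stopping_distance(speed_kmh: float, decel_ms2: float) -> float:
    """Braking distance d = v^2 / (2a) for constant deceleration."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v ** 2 / (2 * decel_ms2)

# Conventional brakes from 140 km/h at 0.76 m/s^2: ~995 m, fits the
# 1,000 m distant-to-home signal spacing.
d_conventional = stopping_distance(140, 0.76)

# Strong brakes from 160 km/h at 1 m/s^2: ~988 m, also fits.
d_strong = stopping_distance(160, 1.0)

# The same strong brakes from 200 km/h: ~1543 m, exceeding the
# signalling distance -- the motivation for continuous cab signalling.
d_highspeed = stopping_distance(200, 1.0)
```

This matches the figures in the text: 1,543 m from 200 km/h is well beyond the 1,000 m available between distant and home signals.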
Furthermore, as the energy dissipated at a given deceleration increases with speed, higher speeds may require lower decelerations to avoid overheating the brakes, further increasing the distance.

One possibility to increase speed would be to increase the distance between the main and distant signal. But this would require longer blocks, which would decrease line capacity for slower trains. Another solution would be to introduce multiple-aspect signalling. A train travelling at 200 km/h (120 mph) would see a "slow to 160" signal in the first block and then a stop signal in the second block. Introducing multi-aspect signalling would require substantial reworking of the existing lines, as additional distant signals would need to be added onto long blocks and the signals reworked on shorter ones. In addition, it wouldn't solve the other problem with high-speed operation, the difficulty of seeing signals as a train rushes past, especially in marginal conditions such as rain, snow, and fog.

Cab signalling solves these problems. For existing lines it can be added on top of the existing signalling system with little, if any, modification to that system. Bringing the signals inside the cab makes it easy for the driver to see them. On top of these, the LZB cab signalling system has other advantages. Given all of these advantages, in the 1960s the German railways chose to go with LZB cab signalling instead of increasing the signal spacing or adding aspects.

The first prototype system was developed by German Federal Railways in conjunction with Siemens and tested in 1963. It was installed in Class 103 locomotives and presented in 1965 with 200 km/h (120 mph) runs on trains to the International Exhibition in Munich. From this, Siemens developed the LZB 100 system and introduced it on the Munich–Augsburg–Donauwörth and Hanover–Celle–Uelzen lines, all in Class 103 locomotives.[5] The system was overlaid on the existing signal system.
All trains would obey the standard signals, but LZB-equipped trains could run faster than normal as long as the track was clear ahead for a sufficient distance. LZB 100 could display up to 5 km (3.1 mi) in advance.

The original installations were all hard-wired logic. However, as the 1970s progressed, Standard Elektrik Lorenz (SEL) developed the computer-based LZB L72 central controllers and equipped other lines with them. By the late 1970s, with the development of microprocessors, 2-out-of-3 computers could be applied to the on-board equipment. Siemens and SEL jointly developed the LZB 80 on-board system and equipped all locomotives and trains that travel over 160 km/h (99 mph), plus some heavy-haul locomotives. By 1991, Germany had replaced all LZB 100 equipment with LZB 80/L 72.[4][5]

When Germany built its high-speed lines, beginning with the Fulda–Würzburg segment that started operation in 1988, it incorporated LZB into the lines. The lines were divided into blocks about 1.5 to 2.5 km (0.93 to 1.55 mi) long, but instead of having a signal for every block, there are fixed signals only at switches and stations, with approximately 7 km (4.3 mi) between them. If there was no train for the entire distance, the entry signal would be green. If the first block was occupied, it would be red as usual. Otherwise, if the first block was free and an LZB train approached, the signal would be dark and the train would proceed on LZB indications alone.

The system has spread to other countries. The Spanish equipped their first high-speed line, operating at 300 km/h (190 mph), with LZB. It opened in 1992 and connects Madrid, Cordoba, and Seville. In 1987 the Austrian railways introduced LZB into their systems, and with the 23 May 1993 timetable change introduced EuroCity trains running 200 km/h (120 mph) on a 25 km (16 mi) section of the Westbahn between Linz and Wels. Siemens continued to develop the system, introducing "Computer Integrated Railroading", or "CIR ELKE", lineside equipment in 1999.
This permitted shorter blocks and allowed speed restrictions for switches to start at the switch instead of at a block boundary. See CIR ELKE below for details.

The LZB control centre communicates with the train using conductor cable loops. Loops can be as short as 50 metres, as used at the entrance and exit of LZB-controlled track, or as long as 12.7 km (7.9 mi). Where the loops are longer than 100 m (328 ft) they are crossed every 100 m (328 ft). At the crossing, the signal's phase angle is changed by 180°, reducing electrical interference between the track and the train as well as long-distance radiation of the signal. The train detects this crossing and uses it to help determine its position. Longer loops are generally fed from the middle rather than from an end.

One disadvantage of very long loops is that any break in the cable will disable LZB transmission for the entire section, up to 12.7 km (7.9 mi). Thus, newer LZB installations, including all high-speed lines, break the cable loops into 300 m (984 ft) physical cables. Each cable is fed from a repeater, and all of the cables in a section transmit the same information.

The core of the LZB route centre, or central controller, is a 2-out-of-3 computer system with two computers connected to the outputs and an extra one on standby. Each computer has its own power supply and is housed in its own frame.[5] All three computers receive and process inputs and interchange their outputs and important intermediate results. If one disagrees, it is disabled and the standby computer takes its place. The computers are programmed with fixed information about the route, such as speed limits, gradients, and the location of block boundaries, switches, and signals. They are linked by LAN or cables to the interlocking system, from which they receive indications of switch positions, signal indications, and track circuit or axle counter occupancy.
Finally, the route centre's computers communicate with controlled trains via the cable loops previously described.

The vehicle equipment in newer trains is similar to the original LZB 80 design,[5] although the details may vary. For example, some vehicles use radar rather than accelerometers to aid in their odometry. The number of antennas may vary by vehicle. Finally, some newer vehicles use a full-screen computer-generated "Man-machine interface" (MMI) display rather than the separate dials of the "Modular cab display" (MFA).

LZB operates by exchanging telegrams between the central controller and the trains. The central controller transmits a "call telegram" using frequency-shift keying (FSK) signalling at 1200 bits per second on a 36 kHz ± 0.4 kHz carrier. The train replies with a "response telegram" at 600 bits per second on a 56 kHz ± 0.2 kHz carrier.[7]

Call telegrams are 83.5 bits long. One might note that there is no "train identification" field in the telegram. Instead, a train is identified by position. See Zones and Addressing for more details.

There are 4 types of response telegrams, each 41 bits long. The exact type of telegram a train sends depends on the "Group identity" in the call telegram. The most common type is type 1, which is used to signal a train's position and speed to the central controller. The other telegrams are used primarily when a train enters the LZB-controlled section. They all start with the same synchronization and start sequence and a "group identity" to identify the telegram type, and end with the CRC; their data fields vary by type.

Before entering an LZB-controlled section, the driver must enable the train by entering the required information on the Driver Input Unit and enabling LZB. When enabled, the train will light a "B" light. A controlled section of track is divided into up to 127 zones, each 100 m (328 ft) long.
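The bit lengths and data rates above fix how long each telegram takes on the air; a small calculation shows both telegram types last roughly 70 ms, consistent with the controller polling each train one to five times per second:

```python
# Transmission timing for LZB telegrams, using the figures from the
# text: call telegrams are 83.5 bits at 1200 bit/s (FSK on a 36 kHz
# carrier), response telegrams are 41 bits at 600 bit/s (56 kHz).

def telegram_duration_ms(bits: float, rate_bps: float) -> float:
    """Air time of one telegram in milliseconds."""
    return 1000 * bits / rate_bps

call_ms = telegram_duration_ms(83.5, 1200)  # ~69.6 ms per call telegram
resp_ms = telegram_duration_ms(41, 600)     # ~68.3 ms per response telegram
```

At roughly 140 ms for one call/response exchange, a single loop can comfortably poll several trains per second, matching the one-to-five-times-per-second rate given later in the text.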
The zones are consecutively numbered, counting up from 1 in one direction and down from 255 in the opposite direction. When a train enters an LZB-controlled section of track, it will normally pass over a fixed loop that transmits a "change of section identification" (BKW) telegram. This telegram indicates to the train the section identification number as well as the starting zone, either 1 or 255. The train sends back an acknowledgement telegram. At that time the LZB indications are switched on, including the "Ü" light to indicate that LZB is running.

From that point on, the train's location is used to identify it. When a train enters a new zone, it sends a response telegram with the "vehicle location acknowledgement" field indicating that it has advanced into a new zone. The central controller will then use the new zone when addressing the train in the future. Thus a train's address will gradually increase or decrease, depending on its direction, as it travels along the track. A train identifies that it has entered a new zone either by detecting the transposition point in the cable or by having travelled 100 metres (328 ft).[5] A train can miss detecting up to 3 transposition points and still remain under LZB control.

The procedure for entering LZB-controlled track is repeated when a train transitions from one controlled section to another. The train receives a new "change of section identification" telegram and gets a new address. Until the train knows its address it will ignore any telegrams received. Thus, if a train doesn't properly enter into a controlled section, it won't be under LZB control until the next section.

The main task of LZB is signalling to the train the speed and distance it is allowed to travel. It does this by transmitting periodic call telegrams to each train, one to five times per second depending on the number of trains present.
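The zone-addressing scheme described above can be sketched in a few lines. This is an illustrative model of the counting rule only (100 m zones, numbered up from 1 in one direction and down from 255 in the other), not an implementation of the real telegram protocol:

```python
# Sketch of LZB zone addressing: a controlled section is divided into
# up to 127 zones of 100 m each.  A train's address is the zone it
# currently occupies, counting up from 1 in one direction or down from
# 255 in the other.

def zone_address(distance_m: float, direction_up: bool) -> int:
    """Zone number for a train `distance_m` into the section."""
    zone_index = int(distance_m // 100)  # completed 100 m zones
    if zone_index > 126:
        raise ValueError("a section holds at most 127 zones")
    return 1 + zone_index if direction_up else 255 - zone_index
```

As the train advances, its address increments (or decrements) once per 100 m zone boundary, which is exactly the event reported back to the controller in the "vehicle location acknowledgement" field.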
Four fields in the call telegram are particularly relevant. The target speed and location are used to display the target speed and distance to the driver. The train's permitted speed is calculated using the train's braking curve, which can vary by train type, and the XG location, which is the distance from the start of the 100 m (328 ft) zone used to address the train. If the train is approaching a red signal or the beginning of an occupied block, the location will match the location of the signal or block boundary. The on-board equipment calculates the permitted speed at any point so that the train, decelerating at the rate indicated by its braking curve, will stop by the stopping point. A train will have a parabolic braking curve of the form

    v(d) = √(v_target² + 2 · a · d)

where v(d) is the permitted speed at distance d before the target point, v_target is the target speed, and a is the braking-curve deceleration.

Where a train is approaching a speed restriction, the control centre will transmit a packet with an XG location set to a point behind the speed restriction such that a train, decelerating according to its braking curve, will arrive at the correct speed at the start of the restriction. This, as well as deceleration to zero speed, is illustrated by the green line in the "Permitted and supervised speed calculation" figure. The red line in the figure shows the "monitoring speed": if this speed is exceeded, the train will automatically apply the emergency brakes. When running at constant speed this is 8.75 km/h (5.44 mph) above the permitted speed for transient emergency braking (released once the speed is reduced) or 13.75 km/h (8.54 mph) above the permitted speed for continuous emergency braking. When approaching a stopping point, the monitoring speed follows a braking curve similar to the permitted speed, but with a higher deceleration, that brings it to zero at the stopping point. When approaching a speed restriction, the monitoring-speed braking curve intersects the speed restriction point at 8.75 km/h (5.44 mph) above the restricted speed.
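The parabolic braking curve and the fixed monitoring margin can be sketched directly; the deceleration value used in the example is the typical 0.5 m/s² passenger-train figure quoted later in the text:

```python
import math

# Permitted speed along the parabolic LZB braking curve:
#     v_perm = sqrt(v_target^2 + 2 * a * d)
# where d is the remaining distance to the target point and a is the
# train's braking-curve deceleration.  The monitoring speed adds the
# fixed margins quoted in the text.

def permitted_speed_kmh(v_target_kmh: float, distance_m: float,
                        decel_ms2: float) -> float:
    """Permitted speed (km/h) at `distance_m` before the target."""
    v_target = v_target_kmh / 3.6
    v = math.sqrt(v_target ** 2 + 2 * decel_ms2 * distance_m)
    return v * 3.6

# A passenger train 1 km before a stopping point, braking at 0.5 m/s^2,
# may still be doing about 114 km/h:
v_perm = permitted_speed_kmh(0, 1000, 0.5)

# Constant-speed monitoring margin for continuous emergency braking:
v_monitor = v_perm + 13.75
```

Note that the monitoring curve itself uses a steeper deceleration than the permitted-speed curve, so the two converge at the stopping point rather than staying a fixed offset apart.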
Deceleration rates are more conservative with LZB than with conventional German signalling. A typical passenger train braking curve might have a "permitted speed" deceleration of 0.5 m/s² (1.6 ft/s²) and a "monitoring speed" deceleration of 0.71 m/s² (2.3 ft/s²), 42% higher than the deceleration for the permitted speed, but lower than the 0.76 m/s² (2.5 ft/s²) required to stop from 140 km/h (87 mph) in 1,000 m (3,281 ft) used in conventional signalling. The ICE3, which has a full-service braking deceleration of 1.1 m/s² (3.6 ft/s²) below 160 km/h (99 mph), dropping to 0.65 m/s² (2.1 ft/s²) by 300 km/h (190 mph), has an LZB target-speed deceleration of only 0.68 m/s² (2.2 ft/s²) up to 120 km/h (75 mph), 0.55 m/s² (1.8 ft/s²) between 120 and 170 km/h (75 and 106 mph), and 0.5 m/s² (1.6 ft/s²) at higher speeds.[8]

In between the permitted speed and the monitoring speed is a warning speed, normally 5 km/h (3.1 mph) above the permitted speed. If the train exceeds that speed, LZB will flash the "G" light on the train's display and sound a horn.

About 1,700 m (5,577 ft) before the end of the LZB-controlled section, the central controller will send a telegram to announce the end of LZB control. The train will flash the "ENDE" light, which the driver must acknowledge within 10 seconds. The display will normally give the distance and target speed at the end of the controlled section, which will depend on the signal at that point. When the train reaches the end of LZB control, the "Ü" and "ENDE" lights go off and the conventional Indusi (or PZB) system takes over automatic train protection.

Special conditions not covered by the full LZB system, or failures, can put LZB into one of several special operating modes. As a train approaches a crossover to a track normally used for the opposite direction, the display will flash the "E/40" light. The driver confirms the indication, and the permitted speed drops following the braking curve to 40 km/h (25 mph).
When the crossover section is reached, the displays are switched off and the driver can proceed through the crossover at 40 km/h (25 mph).

German signalling systems have a "drive by sight" signal that consists of 3 white lights forming a triangle with one light at the top. This signal, labelled "Zs 101", is placed with a fixed lineside signal and, when lit, permits the driver to pass a fixed red or defective signal and drive by sight to the end of the interlocking at no more than 40 km/h (25 mph). When approaching such a signal in LZB territory, the "E/40" light will be lit until 250 m (820 ft) before the signal, then the "E/40" will go dark and "V40" will flash. The "V40" signal indicates the ability to drive by sight.

If data exchange is interrupted, the train's distance measurement system fails, or the train fails to detect 4 or more cable transposition points, the LZB system will go into a failure state. It will light the "Stör" indicator and then flash "Ü". The driver must acknowledge the indications within 10 seconds and slow the train to 85 km/h (53 mph) or lower; the exact speed depends on the backup signalling system in place.

CIR-ELKE is an improvement on the basic LZB system. It uses the same physical interface and packets as standard LZB but upgrades its software, adding capabilities and modifying some procedures. It is designed to increase line capacity by up to 40% and to further shorten travel times. The name is an abbreviation of the English/German project title Computer Integrated Railroading - Erhöhung der Leistungsfähigkeit im Kernnetz der Eisenbahn (Computer Integrated Railroading - increasing capacity in the core railway network). Being an extension of LZB, it is also called LZB-CIR-ELKE, further abbreviated to LZB-CE.

CIR-ELKE includes a number of improvements over the original LZB system, which was designed for permitted speeds up to 280 km/h (170 mph) and gradients up to 1.25%.
The Cologne–Frankfurt high-speed rail line was designed for 300 km/h (190 mph) operation and has 4% gradients; thus it needed a new version of LZB, and CIR ELKE-II was developed for this line.

The LZB system has been quite safe and reliable; so much so that there have been no collisions on LZB-equipped lines caused by a failure of the LZB system. However, there have been some malfunctions that could potentially have resulted in accidents.

Several lines of Deutsche Bahn are equipped with LZB, allowing speeds in excess of 160 km/h (subject to the general suitability of the track); the physical locations of LZB control centres are conventionally indicated in italics. The West railway (Vienna–Salzburg) is equipped with LZB in three sections. A modified version of LZB is installed on the Chiltern Main Line as Chiltern ATP.[9]

In addition to mainline railways, versions of the LZB system are also used in suburban (S-Bahn) railways and subways. Tunnels in the Düsseldorf and Duisburg Stadtbahn (light rail) systems, and some tunnels of the Essen Stadtbahn around the Mülheim an der Ruhr area, are equipped with LZB. With the exception of line U6, the entire Vienna U-Bahn has been equipped with LZB since it was first built, including the capability of automatic driving with the operator monitoring the train.

The Munich U-Bahn was built with LZB control. During regular daytime operations the trains are driven automatically, with the operator simply starting the train. Stationary signals remain dark during that time. In the evenings from 9:00 p.m. until the end of service, and on Sundays, the operators drive the trains manually according to the stationary signals in order to remain in practice. There are plans to automate the placement and reversal of empty trains. The Munich S-Bahn uses LZB on its core mainline tunnel section (Stammstrecke).

The Nuremberg U-Bahn U3 line uses LZB for fully automatic (driverless) operation.
The system was jointly developed by Siemens and VAG Nuremberg and is the first system in which driverless trains and conventional trains share a section of line. The existing, conventionally driven U2 line trains share a segment with the automatic U3 line trains. Currently, an employee still accompanies the automatically driven trains, but later the trains will travel unaccompanied.

After several years of delays, the final three-month test run was successfully completed on 20 April 2008, and the operating licence was granted on 30 April 2008. A few days later the driverless trains started operating with passengers, first on Sundays and public holidays, then on weekdays at peak hours, and finally after the morning rush hour, which has a tight sequence of U2 trains. The official opening ceremony for the U3 line was held on 14 June 2008 in the presence of the Bavarian Prime Minister and the Federal Minister of Transport; regular operation began with the schedule change on 15 June 2008. The Nuremberg U-Bahn plans to convert U2 to automatic operation in about a year.

The Docklands Light Railway in east London uses SelTrac technology, which was derived from LZB, to run automated trains. The trains are accompanied by an employee who closes the doors and signals the train to start, but is then mainly dedicated to customer service and ticket control. In case of failure, the train can be driven manually by the on-train staff.
https://en.wikipedia.org/wiki/Linienzugbeeinflussung
Positive train control (PTC) is a family of automatic train protection systems deployed in the United States.[1] Most of the United States' national rail network mileage has a form of PTC. These systems are generally designed to check that trains are moving safely and to stop them when they are not.[2]

Positive train control restricts train movement to an explicit allowance; movement is halted if that allowance is invalidated. A train operating under PTC receives a movement authority containing information about its location and where it is allowed to travel safely. PTC was installed and operational on 100% of the statutorily required trackage by December 29, 2020.[3]

The American Railway Engineering and Maintenance-of-Way Association (AREMA) describes positive train control systems in terms of several primary functions. In the late 1980s, interest in train protection solutions heightened after a period of stagnant investment and decline following World War II. Starting in 1990, the United States National Transportation Safety Board (NTSB) counted PTC (then known as positive train separation) among its "Most Wanted List of Transportation Safety Improvements."[5][6][7] At the time, the vast majority of rail lines in the US relied upon crew members to comply with all safety rules, and a significant fraction of accidents were attributable to human error, as evidenced in several years of official reports from the Federal Railroad Administration (FRA).[8]

In September 2008, Congress considered a new law that set a deadline of December 15, 2015 for implementation of PTC technology across most of the US rail network. The bill, ushered through the legislative process by the Senate Commerce Committee and the House Transportation and Infrastructure Committee, was drafted in response to the collision of a Metrolink passenger train and a Union Pacific Railroad freight train on September 12, 2008, in Los Angeles, which resulted in 25 deaths and injuries to more than 135 passengers.
As the bill neared final passage by Congress, the Association of American Railroads (AAR) issued a statement in support of it.[9] President George W. Bush signed the 315-page Rail Safety Improvement Act of 2008 into law on October 16, 2008.[10] Among its provisions, the law provides funding to help pay for the development of PTC technology, limits the number of hours freight rail crews can work each month, and requires the Department of Transportation to determine work-hour limits for passenger train crews. To implement the law, the FRA published initial regulations for PTC systems on January 15, 2010.[11] The agency published amended regulations on August 22, 2014.[12]

In December 2010, the Government Accountability Office (GAO) reported that Amtrak and the major Class I railroads had taken steps to install PTC systems under the law, but commuter rail operators were not on track for the 2015 deadline.[13] As of June 2015, only seven commuter systems (29 percent of those represented by APTA) were expecting to make the deadline. Several factors delayed implementation, including the need to obtain funding (which was not provided by Congress); the time it took to design, test, make interoperable, and manufacture the technology; and the need to obtain radio spectrum along the entire rail network, which involves FCC permission and in some cases negotiating with an existing owner for purchase or lease.[14]

The Metrolink commuter rail system in Southern California planned to be the first US passenger carrier to install the technology on its entire system. After some delays,[15] demonstration PTC in revenue service began in February 2014; the system was expected to be completed in late summer 2015.[16] In the Chicago metropolitan area, the Metra system expected that it would not be fully compliant with the PTC mandate until 2019.[14]

In October 2015, Congress passed a bill extending the compliance deadline by three years, to December 31, 2018.
President Barack Obama signed the bill on October 29, 2015.[17][18] Only four railroads met the December 2018 deadline; the other 37 got extensions to December 2020, which was allowed under the law for railroads that demonstrated implementation progress.[19] On December 29, 2020, it was reported that the safeguards had been installed on all required railroads, two days ahead of the deadline.[20]

There is some controversy as to whether PTC makes sense in the form mandated by Congress. Not only is the cost of nationwide PTC installation expected to be as much as US$6–22 billion, almost all of it borne by U.S. freight railroads,[21] there are questions as to the reliability and maturity of the technology for all forms of mainline freight trains and high-density environments.[22] The PTC requirement could also impose startup barriers to new passenger rail or freight services that would trigger millions of dollars in additional PTC costs. The unfunded mandate also ties the hands of the FRA in adopting a more nuanced or flexible approach to the adoption of PTC technology where it makes the most sense or where it is technically most feasible.[21]

While the FRA Rail Safety Advisory Committee identified several thousand "PPAs" (PTC-preventable accidents) on US railroads over a 12-year period, a 2004 cost analysis determined that the accumulated savings to be realized from all of those accidents was not sufficient to cover the cost of PTC across the Class I railroads at that time.[23] The FRA concurred with this cost assessment in its 2009 PTC rulemaking document.
The cost of implementing PTC on up to 25 commuter rail services in the United States has been estimated at over $2 billion, and because of these costs several services are having to cancel or reduce repairs, capital improvements, and service.[citation needed] Other services simply do not have the funds available for PTC and have deferred action, assuming some change from Congress.[citation needed] Railroads that operate lines equipped with cab signalling and existing Automatic Train Control systems have argued that their proven track record of safety, which goes back decades, is being discounted because ATC is not as aggressive as PTC in all cases.[24]

The number of PTC-preventable crashes has increased in recent years. In 2013, a Metro-North derailment in the Bronx killed four people and injured 61. This crash was caused by excess speed, something PTC is capable of guarding against. In 2015, an Amtrak derailment in Philadelphia killed eight and injured 185.[25] In this incident, the train accelerated beyond safe speed due to the actions of a distracted engineer of the Amtrak train. According to the NTSB, this crash could have been prevented by a PTC system that would have enforced the 50 mile per hour (80 km/h) speed limit and prevented the overspeed and subsequent crash of the train.[26] In 2017, another Amtrak derailment, near DuPont, Washington, killed three and injured 62. The engineer mistook the train's current location and therefore was not following the proper restrictions for the stretch of track the train was on, failing to notice signage that should have indicated the speed restrictions in the area. The NTSB learned that safety improvements had recently been made to the track, except for the PTC portion of the improvements.
It was ultimately concluded that the engineer's error in determining the train's location led to the crash. Sound Transit, the owner of the section of railway where the accident occurred, was in the process of installing PTC, but it was not operational at the time of the accident.[26] In 2018, another collision occurred when an Amtrak train rammed a stationary freight train in Cayce, South Carolina, killing two crew members and injuring 116 others. The chairman of the NTSB, Robert Sumwalt, stated that "an operational PTC is designed to prevent this type of incident."[27] A typical PTC system involves two basic components; optionally, three additional components may exist. There are two main PTC implementation methods currently being developed. The first makes use of fixed signaling infrastructure such as coded track circuits and wireless transponders to communicate with the onboard speed control unit. The other makes use of wireless data radios spread out along the line to transmit the dynamic information. The wireless implementation also allows the train to transmit its location to the signaling system, which could enable the use of moving or "virtual" blocks. The wireless implementation is generally cheaper in terms of equipment costs, but is considered to be much less reliable than using "harder" communications channels. As of 2007[update], for example, the wireless ITCS system on Amtrak's Michigan Line was still not functioning reliably after 13 years of development,[28] while the fixed ACSES system has been in daily service on the Northeast Corridor since 2002 (see Amtrak, below). The fixed infrastructure method is proving popular on high-density passenger lines where pulse code cab signaling has already been installed.
In some cases, the lack of reliance on wireless communications is touted as a benefit.[29] The wireless method has proven most successful on low-density, unsignaled dark territory normally controlled via track warrants, where speeds are already low and interruptions in the wireless connection to the train do not tend to compromise safety or train operations. Some systems, like Amtrak's ACSES, operate with a hybrid technology that uses wireless links to update temporary speed restrictions or pass certain signals, with neither of these functions being critical for train operations. The equipment on board the locomotive must continually calculate the train's current speed relative to a speed target some distance away, governed by a braking curve. If the train risks not being able to slow to the speed target given the braking curve, the brakes are automatically applied and the train is immediately slowed. The speed targets are updated by information regarding fixed and dynamic speed limits determined by the track profile and signaling system. Most current PTC implementations also use the speed control unit to store a database of track profiles attached to some sort of navigation system. The unit keeps track of the train's position along the rail line and automatically enforces any speed restrictions as well as the maximum authorized speed. Temporary speed restrictions can be updated before the train departs its terminal or via wireless data links. The track data can also be used to calculate braking curves based on the grade profile. The navigation system can use fixed track beacons or differential GPS stations combined with wheel rotation to accurately determine the train's location on the line within a few feet. While some PTC systems interface directly with the existing signal system, others may maintain a set of vital computer systems at a central location that can keep track of trains and issue movement authorities to them directly via a wireless data network.
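The braking-curve supervision described above can be sketched in a few lines. This is an illustrative simplification under an assumed constant deceleration; the function names, deceleration rate, and warning margin are invented for the example, and real PTC braking models also account for grade, train consist, and brake propagation delays:

```python
import math

def max_safe_speed(target_speed: float, distance: float, decel: float) -> float:
    """Highest current speed (m/s) from which the train can still slow to
    `target_speed` within `distance` metres, assuming constant deceleration
    `decel` (m/s^2): from v^2 = v_target^2 + 2*a*d."""
    return math.sqrt(target_speed ** 2 + 2.0 * decel * distance)

def supervise(current_speed: float, target_speed: float, distance: float,
              decel: float = 0.4, warn_margin: float = 1.0) -> str:
    """One supervision cycle: compare current speed against the braking curve."""
    limit = max_safe_speed(target_speed, distance, decel)
    if current_speed > limit:
        return "penalty_brake"   # train can no longer make the speed target
    if current_speed > limit - warn_margin:
        return "warn"            # closing in on the braking curve
    return "ok"

# A train at 30 m/s (~108 km/h), 800 m short of a 15 m/s restriction,
# has already overrun the assumed 0.4 m/s^2 braking curve:
print(supervise(30.0, 15.0, 800.0))
```

Each supervision cycle re-evaluates the curve as the distance to the target shrinks, which is why the check must run continuously rather than once per restriction.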
This is often considered a form of Communications Based Train Control and is not a necessary part of PTC. The train may be able to detect the status of (and sometimes control) wayside devices, for example switch positions. This information is sent to the control center to further define the train's safe movements. Text messages and alarm conditions may also be exchanged, automatically or manually, between the train and the control center. Another capability would allow the employee-in-charge (EIC) to give trains permission to pass through their work zones via a wireless device instead of verbal communications. Even where safety systems such as cab signaling have been present for many decades, the freight railroad industry has been reluctant to fit speed control devices because the often heavy-handed nature of such devices can have an adverse effect on otherwise safe train operation. The advanced processor-based speed control algorithms found in PTC systems claim to be able to properly regulate the speed of freight trains over 5,000 feet (1,500 m) in length and weighing over 10,000 short tons (9,100 t), but concerns remain about taking the final decision out of the hands of skilled railroad engineers. Improper use of the air brake can lead to a train running away, derailing, or unexpectedly separating.[citation needed] Furthermore, an overly conservative PTC system runs the risk of slowing trains below the level at which they had previously been safely operated by human engineers. Railway speeds are calculated with a safety factor such that slight excesses in speed will not result in an accident. If a PTC system applies its own safety margin, then the end result will be an inefficient double safety factor.
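The double-safety-factor concern can be illustrated numerically. All figures below are hypothetical, chosen only to show how two stacked margins compound:

```python
# Hypothetical figures for illustration only; real margins vary by railroad.
engineering_limit = 60.0   # mph: speed the curve/track could actually tolerate
civil_factor = 0.85        # the posted civil speed already carries a margin
posted_limit = engineering_limit * civil_factor        # 51.0 mph

ptc_factor = 0.92          # an overly conservative PTC adds its own margin
enforced_limit = posted_limit * ptc_factor             # 46.92 mph

# Two independent margins compound into one larger effective margin:
combined_factor = civil_factor * ptc_factor            # ~0.78
print(round(enforced_limit, 2), round(combined_factor, 3))
```

With these assumed numbers, trains would be held roughly 13 mph below the speed the track can tolerate, even though each margin on its own was intended to be the only one.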
Moreover, a PTC system might be unable to account for variations in weather conditions or train handling, and might have to assume a worst-case scenario, further decreasing performance.[30] In its 2009 regulatory filing, the FRA stated that PTC was in fact likely to decrease the capacity of freight railroads on many main lines.[31] The European LOCOPROL/LOCOLOC project had shown that EGNOS-enhanced satellite navigation alone was unable to meet the SIL 4 safety integrity level required for train signaling.[32] From a purely technical standpoint, PTC will not prevent certain low-speed collisions caused by permissive block operation, accidents caused by "shoving" (reversing with inadequate observation), derailments caused by track or train defects, grade crossing collisions, or collisions with previously derailed trains. Where PTC is installed in the absence of track circuit blocks, it will not detect broken rails, flooded tracks, or dangerous debris fouling the line. The wireless infrastructure planned for use by all US Class I freights, most small freight railroads, and many commuter railroads is based on data radios operating in a single frequency band near 220 MHz. A consortium created by two freight railroads, called PTC 220 LLC, has purchased significant spectrum around 220 MHz from previous licensees for use in deploying PTC. Some of this spectrum is in the form of nationwide licenses and some is not. The consortium plans to make this spectrum available for use by the US freights, but indicated as recently as 2011 that it was unsure whether it had enough spectrum to meet its needs.
Several commuter railroads have begun purchasing 220 MHz spectrum in their geographic areas, but there is widespread concern that acquiring enough 220 MHz spectrum may be difficult because of a lack of availability, difficulties in negotiating complex multi-party deals to gain enough adjacent spectrum, and because the financial cost of the acquisitions may put the task out of reach for some state agencies. However, research suggests that dynamic spectrum allocation can solve the spectrum allocation problem in the 220 MHz band.[33][34] Many of the railroads have requested that the FCC reallocate parts of the 220 MHz spectrum to them. They argue that they must have 220 MHz spectrum to be interoperable with each other. The FCC has stated that there is no reallocation forthcoming, that the railroads are not justified in requesting spectrum reallocation because they have not quantified how much spectrum they need, and that the railroads should seek spectrum in the secondary 220 MHz markets or in other bands.[35] There are no regulatory or technical requirements that demand that 220 MHz be used to implement PTC (if a PTC implementation is to use wireless components at all). If wireless data transmission is necessary, there are a few advantages to the 220 MHz spectrum, provided it can be acquired at a reasonable cost. The first reason to consider using 220 MHz spectrum is PTC interoperability for freights and for some, but not all, commuter rail operations. Freight operations in the US often include the sharing of railroad tracks, where one railroad's rail vehicles operate as a guest on another railroad's host tracks. Implementing PTC in such an environment is most easily achieved by using the same PTC equipment, and this includes radios and the associated radio spectrum.
When a commuter railroad must operate on a freight railroad's territory, the commuter will most likely be required to install PTC equipment (including a radio) on its rail vehicles that is compliant with the freight railroad's PTC system, and this generally means the use of 220 MHz radios and spectrum. If the commuter uses the same PTC equipment, radios, and spectrum on its own property, it will be able to use them when its vehicles travel onto a freight's territory. From a practical standpoint, if the commuter instead elects to use another type of PTC on its own property, it will need to install a second set of onboard equipment so it can operate PTC on its own property while also operating PTC on a freight's property. If a multi-band radio (such as the current generation of software-defined radios) is not available, then separate radios and separate antennas will be necessary. With the complexity of track geometries, PTC requires a variable amount of spectrum in a time-critical manner. One way to achieve this is to extend the PTC software-defined radios so that they have the intelligence to allocate spectrum dynamically. Adding this intelligence to the radio also helps to improve the security of the PTC communication medium.[36] If a small freight or commuter railroad does not operate on another railroad's territory, then there is no interoperability-based reason that obligates it to use 220 MHz spectrum to implement PTC. Likewise, if a small freight or commuter railroad only operates on its own territory and hosts other guest railroads (freight or other passenger rail), there is still no interoperability-based reason the host is obliged to use 220 MHz spectrum to implement PTC.
Such a railroad could implement PTC by picking any radio spectrum and requiring the guest railroads either to install compliant PTC equipment (including radios) on board their trains or to provide wayside equipment for their guest PTC implementation to be installed on the host railroad's property. An interesting case that highlights some of these issues is the Northeast Corridor. Amtrak operates services on two commuter rail properties it does not own: Metro-North Railroad (owned by New York and Connecticut) and the Massachusetts Bay Transportation Authority (MBTA) (owned by Massachusetts). In theory, Amtrak could have found itself installing its own PTC system on these host properties (about 15 percent of the corridor), or worse, in the impractical position of trying to install three different PTC systems on each Amtrak train to traverse the commuter properties. This was not the case. Amtrak had a significant head start over the commuter rail agencies on the corridor in implementing PTC. It spent a considerable amount of time in research and development and won early FRA approvals for its ACSES system on the Northeast Corridor. Amtrak chose first to use 900 MHz and later moved to 220 MHz, in part because of a perceived improvement in radio-system performance and in part because Amtrak was already using 220 MHz in Michigan for its ITCS implementation.[37] When the commuter agencies on the corridor looked at options for implementing PTC, many of them chose to take advantage of the advance work Amtrak had done and implement the ACSES solution using 220 MHz. Amtrak's early work paid off: it would be traversing commuter properties that installed the same protocol at the same frequency, making them all interoperable.
(In fact, most of the Northeast Corridor is owned and operated by Amtrak, not the commuter properties, including the tracks from Washington, D.C. to New York Penn Station and the tracks from Philadelphia to Harrisburg, Pennsylvania. The State of Massachusetts owns the tracks from the Rhode Island state line to the New Hampshire state line, but Amtrak "operates" these lines. Only the line between New York City and New Haven, Connecticut is actually owned and operated by a commuter line.) One other perceived reason to consider 220 MHz for PTC may be the availability of PTC-compatible radio equipment. Radio equipment specifically targeted toward PTC is currently only available from a limited number of vendors, and they are focused only on 220 MHz. One radio vendor in particular, Meteorcomm LLC, is able to support the I-ETMS PTC protocol with a 220 MHz radio. Meteorcomm is jointly owned by several of the Class I freights, and some in the industry have indicated that using its 220 MHz radio and associated equipment will be done on a per-site licensing basis. Recurring fees may be associated with this process too. There is further concern that the 'buy-in' and licensing fees will be significant, and this has led some to speculate that the owners of Meteorcomm (the freights) may have legal exposure to antitrust violations.[citation needed] For many railroads, there is no practical option to meet the federal mandate other than to install PTC at 220 MHz using I-ETMS with the Meteorcomm radios. On the Northeast Corridor, another radio vendor, GE MDS, is able to support the Amtrak ACSES protocol with a 220 MHz radio. It should be stressed that the main concern among the freights regarding the PTC deadline is the availability of PTC equipment.[38] With an eye to antitrust issues and ready radio availability, Meteorcomm radio designs have been second-sourced to CalAmp radios.
This all may mean that there is not enough 220 MHz PTC radio equipment available for all of the railroads that must implement PTC.[citation needed] There are also issues with the use of these frequencies outside the US; in Canada, 220 MHz remains part of the amateur radio 1.25-metre band.[39][40] Other bands besides 220 MHz will support PTC, and have been used to win approvals from the FRA for PTC. When Amtrak received its initial approval, it planned to use 900 MHz frequencies for ACSES. BNSF Railway won its first PTC approvals from the FRA for an early version of ETMS using a multi-band radio that included 45 MHz, 160 MHz, and 900 MHz frequencies as well as WiFi. A small freight or commuter railroad that selects one or more of these bands, or another such as 450 MHz, might find it easier to acquire spectrum. It will need to research spectrum issues, radio equipment, antennas, and protocol compatibility issues to successfully deploy PTC.[citation needed] There is no single defined standard for "interoperable PTC systems". Several examples of interoperable systems illustrate this point. First, the UP and BNSF are interoperable across their systems. They are both implementing I-ETMS and will use different radio frequencies in different locations.[citation needed] In the second example, Amtrak is interoperable with Norfolk Southern in Michigan. Amtrak uses ITCS, while Norfolk Southern uses I-ETMS. To interoperate, two 220 MHz radios are installed at each wayside location, and both interface with a common PTC system through an interface device (similar to a network gateway or protocol converter) at each wayside location. One radio talks to freight trains using I-ETMS and one radio talks to passenger trains using ITCS. In this case interoperability stops at the wayside and does not include the wireless segment out to the rail vehicles or the onboard systems.
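The wayside interface device described above acts much like a protocol gateway. A minimal sketch of the idea follows; the message formats here are entirely invented for illustration (real I-ETMS and ITCS message encodings are proprietary and far richer):

```python
# Abstract sketch of a wayside protocol gateway. The message formats are
# invented for illustration and do not reflect real I-ETMS/ITCS messages.

def parse_ietms(raw: str) -> dict:
    """Pretend I-ETMS position report: 'IETMS|train_id|milepost|speed'."""
    _, train_id, mp, speed = raw.split("|")
    return {"train": train_id, "milepost": float(mp), "speed": float(speed)}

def parse_itcs(raw: str) -> dict:
    """Pretend ITCS position report: 'ITCS;train_id;speed;milepost'
    (note the different delimiter and field order)."""
    _, train_id, speed, mp = raw.split(";")
    return {"train": train_id, "milepost": float(mp), "speed": float(speed)}

def gateway(raw: str) -> dict:
    """Normalize a raw radio message into one internal form, routing by
    its protocol tag -- the role the wayside interface device plays."""
    if raw.startswith("IETMS|"):
        return parse_ietms(raw)
    if raw.startswith("ITCS;"):
        return parse_itcs(raw)
    raise ValueError("unknown protocol")

print(gateway("IETMS|Z123|101.4|28.0"))
```

The key point the sketch captures is that translation happens once, at the wayside, so the common PTC system behind the gateway never needs to know which radio protocol a given train spoke.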
In the third example, similar to the first, Metrolink, the commuter rail agency in Los Angeles, is implementing I-ETMS and will use the same PTC equipment as both the UP and BNSF. Metrolink is procuring its own 220 MHz spectrum so that trains on Metrolink territory (commuter and freight) will use channels other than those used by the UP and BNSF. Interoperability is achieved by directing the onboard radio to change channels depending on location.[citation needed] For SEPTA, the commuter operation in and around Philadelphia, Ansaldo is implementing ACSES, the Amtrak Northeast Corridor PTC protocol. For CSX, all the ACSES PTC transactions will be handed to CSX at the SEPTA back office, and CSX will be responsible for deploying the I-ETMS infrastructure that it will use to communicate with its freight trains. The SEPTA interoperability model is very similar to that of the public safety radio community, wherein different radio systems that use different frequencies and protocols are cross-connected only in the back office to support system-to-system communications.[citation needed] For the major freight railroads and Amtrak, the answer seems to be that one frequency band is sufficient. These rail operations measure on-time performance on a much coarser scale than commuter operations do, so their tolerance for delay is greater and has less impact on train schedules.[citation needed] In addition, the PTC implementations deployed by commuter operations will be running much closer to the performance envelope than those of either Amtrak or the freights. For commuters in particular, there is therefore some concern that implementing PTC with a single frequency band may not be sufficient. The single frequency-band approach to supporting real-time train control has a history of being difficult to use for such applications.[citation needed] This difficulty is not unique to train control.
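The location-based channel switching used in the Metrolink example can be sketched as a simple territory lookup. All territory names, mileposts, and channel numbers below are hypothetical, chosen only to show the mechanism:

```python
# Illustrative sketch of location-based PTC radio channel selection.
# Territory names, milepost boundaries, and channel numbers are hypothetical.

TERRITORY_CHANNELS = [
    # (territory, start_milepost, end_milepost, channel)
    ("host_freight_a", 0.0, 42.5, 3),
    ("commuter",       42.5, 78.0, 7),
    ("host_freight_b", 78.0, 120.0, 5),
]

def select_channel(milepost: float) -> int:
    """Pick the radio channel for the territory containing this milepost."""
    for _territory, start, end, channel in TERRITORY_CHANNELS:
        if start <= milepost < end:
            return channel
    raise ValueError(f"milepost {milepost} is outside known territories")

# As the train crosses milepost 42.5 the onboard radio retunes
# from the host freight's channel to the commuter agency's channel:
print(select_channel(40.0), select_channel(50.0))
```

Because the onboard navigation system already tracks the train's position for speed enforcement, the same position feed can drive the channel lookup with no extra hardware.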
Interference, both man-made and natural, can at times affect the operation of any wireless system that relies on one frequency band. When such wireless systems are employed for real-time control networks, it is very difficult to ensure that network performance will never be impacted. CSX encountered this problem when it experienced propagation ducting in its 900 MHz Advanced Train Control System (ATCS) network in the 1990s.[41] The ATCS protocol, which the AAR had recommended the FCC consider as PTC in 2000 (when the AAR sought a nationwide 900 MHz "ribbon" license),[42] can support train control operation at both 900 MHz and 160 MHz.[43] The latter frequency band is only used for ATCS on a few subdivisions and shortlines. More recently, the industry had been moving toward a more robust multi-band radio solution for data applications such as PTC. In 2007, BNSF first won FRA approval for its original ETMS PTC system using a multi-frequency-band radio.[44] In addition, in mid-2008, an FRA-sponsored effort by the AAR to develop a Higher Performance Data Radio (HPDR) for use at 160 MHz resulted in a contract being awarded to Meteorcomm for a four-band radio to be used for voice and data.[45] These multi-band radio efforts were shelved in late 2008, after the Rail Safety Improvement Act became law, and the freights decided to pursue PTC using 220 MHz alone, in a single frequency-band configuration. Amtrak and most commuter operations quickly followed suit, selecting 220 MHz.[citation needed] Soon after the Rail Safety Improvement Act was passed, many commuter railroads chose not to develop their own PTC protocol and instead decided to save time and money by using a protocol developed for either freight or long-haul passenger (Amtrak) operations. Deploying such a protocol for urban commuter operation, where it will be necessary to support numerous, small, fast-moving trains, will be a challenge.
It remains to be seen whether the performance envelope of PTC protocols developed and optimized for less numerous, slower, and/or larger trains can support a more complex operational scenario, such as that of a commuter rail operation, without impacting on-time performance. Detailed and exhaustive protocol simulation testing can reduce the risk of problems; however, there are too many variables, especially when the wireless component is considered, to guarantee beforehand that train operations will not be impacted under certain worst-case operational profiles in certain locations. In fact, during system acceptance testing, such worst-case operational profiles may not even be tested because of the effort involved. One need only consider what it would take to identify the PTC protocol's train capacity limitations at each interlocking of a large commuter rail operation when a train is broken down at the interlocking and 10–20 other trains are within communications range of a single wayside location. Such a what-if scenario may be tested at a few interlockings, but not at the 30 or more interlockings on a large commuter property.[citation needed] A large group of industry experts from the federal government,[which?] manufacturers, railroads, and consultants are participating in a study group sponsored by the IEEE 802.15 working group to look at using lessons learned in protocol development in the IEEE 802 suite to propose a comprehensive solution to the wireless component of PTC.
While this effort may not significantly change the United States PTC efforts already underway, an open standard could provide a way forward for all of the railroads to eventually deploy a more interoperable, robust, reliable, future-proof, and scalable solution for the wireless component of PTC.[citation needed] The railroad industry, like the process industry and the power utility industry, has always demanded that the return on investment for large capital investments associated with infrastructure improvements be fully realized before the asset is decommissioned and replaced. This paradigm will be applied to PTC as well. It is highly unlikely that there will be any major upgrades to initial PTC deployments within even the first 10 years. The calculation of return on investment is not a simple one, and some railroads may determine, for instance after five years, that an upgrade of certain components of PTC is justified. An example could be the radio component of PTC. If an open standard creates a less expensive radio product that is backward compatible with existing systems, that perhaps improves PTC system performance, and that includes improvements that save on operational costs, then a railroad would be prudent to consider a plan for replacing its PTC radios.[46] Wabtec is working with the Alaska Railroad to develop a vital, collision-avoidance PTC system for use on its locomotives. The system is designed to prevent train-to-train collisions, enforce speed limits, and protect roadway workers and equipment. Wabtec's Electronic Train Management System (ETMS) is also designed to work with the Wabtec TMDS dispatching system to provide train control and dispatching operations from Anchorage.[47] Data between locomotive and dispatcher is transmitted over a digital radio system provided by Meteor Communications Corp (Meteorcomm).
An onboard computer alerts crews to approaching restrictions and can stop the train if needed.[48] Alstom's and PHW's Advanced Civil Speed Enforcement System (ACSES) is installed on parts of Amtrak's Northeast Corridor between Washington and Boston. ACSES enhances the cab signaling systems provided by PHW Inc. It uses passive transponders to enforce permanent civil speed restrictions. The system is designed to prevent train-to-train collisions, protect against overspeed, and protect work crews with temporary speed restrictions.[49][50] GE Transportation Systems' Incremental Train Control System (ITCS) is installed on Amtrak's Michigan line, allowing trains to travel at 110 mph (180 km/h).[51] The 2015 Philadelphia train derailment could have been prevented had positive train control been implemented correctly on the section of track where the train was travelling. The overspeed warning/penalty commands were not set up on that particular section of track, although they were set up elsewhere.[52] Wabtec's Electronic Train Management System (ETMS) is installed on a segment of the BNSF Railway. It is an overlay technology that augments existing train control methods. ETMS uses GPS for positioning and a digital radio system to monitor train location and speed. It is designed to prevent certain types of accidents, including train collisions. The system includes an in-cab display screen that warns of a problem and then automatically stops the train if appropriate action is not taken.[53] CSX Transportation is developing a Communications-Based Train Management (CBTM) system to improve the safety of its rail operations. CBTM is the predecessor to ETMS.[54] Wabtec's ETMS will provide PTC solutions in conjunction with Wabtec's Train Management and Dispatch System (TMDS), which has served as KCS's dispatch solution since 2007, for all U.S.-based rail operations along the KCS line.
In January 2015, KCS began training personnel on PTC at its TEaM Training Center in Shreveport, Louisiana, with an initial class of 160 people.[55] Most MBTA Commuter Rail locomotives and cab cars, except for the 1625–1652 series Bombardier control cars and the (now retired) 1000–1017 series F40PH locomotives, are equipped with the PTC-compliant ACSES technology that is installed on the Amtrak Northeast Corridor. All MBTA trains traveling on any segment of the Northeast Corridor must be equipped with functioning ACSES onboard apparatus, which affects trains on Providence/Stoughton Line, Franklin/Foxboro Line, and Needham Line routings. The MBTA shut down some lines on weekends in 2017 and 2018 to meet a December 2020 federal deadline for full-system PTC.[56] In November 2013, the New York Metropolitan Transportation Authority awarded a $428 million contract to install positive train control on the Long Island Rail Road and the Metro-North Railroad to a consortium of Bombardier Transportation Rail Control Solutions and Siemens Rail Automation.[57][58] The LIRR and Metro-North installations include modifications and upgrades of the existing signal systems and the addition of ACSES II[49] equipment. Siemens stated that the PTC installation would be completed by December 2015, but missed that deadline,[59] and did not complete the installation until the end of 2020.[60] Ansaldo STS USA Inc's Advanced Speed Enforcement System (ASES) is being installed on New Jersey Transit commuter lines. It is coordinated with Alstom's ACSES so that trains can operate on the Northeast Corridor.[29] Norfolk Southern Railway began work on the system in 2008 with Wabtec Railway Electronics to start developing a plan to implement Positive Train Control on NS rails. NS has already implemented PTC on 6,310 miles of track, with plans to reach 8,000 miles.
NS has requested an extension of the time to have PTC active on its miles of track, due to the need for more work on areas with no track signals, as well as to make provisions for smaller railroads that the company does business with to be PTC capable. NS keeps experiencing issues with the system and wants to take the proper time to fix it to ensure the safety of its employees and all others using its tracks. NS has been adding and updating its locomotives with PTC-capable computers to allow those locomotives to be used on mainlines. About 2,900 of the almost 4,000 locomotives the company owns have been fitted with the PTC-capable computers. NS plans to put at least 500 locomotives into storage. NS has been updating its trackside equipment, such as radio towers and control point lighting, to assist in PTC operations on the railroad. The new computers allow the locomotives to interact with each other and with trackside systems. Norfolk Southern's General Electric Transportation locomotives are equipped with GPS to aid in the use of PTC. All of NS's locomotives are equipped with Energy Management, a computer system that provides real-time data on the locomotive. The system can also control train speed and brake systems on board. The EM system allows the locomotives to use less fuel and be more efficient. NS's final goal is completely autonomous operation of its trains. The EM system will be used alongside Auto-Router, which routes train movements with little to no human interaction. With these two systems integrated with PTC, they allow for more precise movement and train control across the railroad. NS, Union Pacific, CSXT, BNSF, and Virginia Railway Express have been testing interoperation to make sure each company's PTC system works with the others' to ensure safe railroad travel. For this, an NS train on CSXT tracks has to act as a CSXT train would, and vice versa.
That requires the railroads to use the same communications and radio frequencies for everything to operate smoothly. Nearly 3,000 locomotives have been fitted with the PTC-capable computers.[61][62][63][64] Caltrain's Communications Based Overlay Signal System (CBOSS) was installed but not fully tested along the Peninsula Corridor between San Francisco, San Jose, and Gilroy, California.[65] Caltrain had selected Parsons Transportation Group (PTG), which had been working on a similar system for Metrolink in Southern California, to implement, install, and test CBOSS in November 2011.[66] In February 2017, Caltrain's board canceled the contract with PTG for failure to meet the scheduled 2015 deadline.[67] PTG and Caltrain went on to file lawsuits for breach of contract.[65][68] At its board of directors meeting on 1 March 2018, Caltrain announced that it would award a contract to Wabtec for implementation of I-ETMS.[69] The completed Caltrain PTC system was certified by the FRA in December 2020.[70] Positive train control and vehicle monitoring system technologies have been developed for the Denver metro area's new commuter train lines, which began opening in 2016.[71] After the University of Colorado A Line opened on 22 April 2016 between Denver Union Station and Denver International Airport, it experienced a series of issues related to adjusting the length of unpowered gaps between different overhead power sections, direct lightning strikes, snagging wires, and crossing signals behaving unexpectedly.[72] In response to the crossing issues, Denver Transit Partners, the contractor building and operating the A Line, stationed crossing guards at each place where the A Line crosses local streets at grade, while it continued to explore software revisions and other fixes to address the underlying issues.[73] The FRA required frequent progress reports, but allowed RTD to open its B Line as originally scheduled on 25 July 2016,[74] because the B Line only has one
at-grade crossing along its current route.[73] However, the FRA halted testing on the longer G Line to Wheat Ridge – originally scheduled to open in fall 2016 – until more progress could be shown in resolving the A Line crossing issues.[75] G Line testing resumed in January 2018, though the A Line continued to operate under a waiver.[76] The G Line opened to passenger service on 26 April 2019.[77] Positive train control has been implemented at Sonoma–Marin Area Rail Transit's 63 crossings for the length of the initial 43-mile (69 km) passenger corridor, which began regular service on 25 August 2017 after the FRA gave its final approval for SMART's PTC system.[78] SMART uses the E-ATC system for its PTC implementation.[79] SEPTA received approval from the FRA on 28 February 2016 to launch PTC on its Regional Rail lines.[80] On 18 April 2016, SEPTA launched PTC on the Warminster Line, the first line to use the system.[80][81] Over the course of 2016 and into 2017, PTC was rolled out onto different Regional Rail lines.
On 1 May 2017, the Paoli/Thorndale Line, Trenton Line, and Wilmington/Newark Line (all of which run on Amtrak tracks) received PTC, the last of the Regional Rail lines to receive the system.[82]

Metrolink, the Southern California commuter rail system involved in the 2008 Chatsworth train collision that provided the impetus for the Rail Safety Improvement Act of 2008, was the first passenger rail system to fully implement positive train control.[83] In October 2010, Metrolink awarded a $120 million contract to PTG to design, procure, and install PTC.[84] PTG designed a PTC system that uses GPS technology to report position to on-board train computers, which communicate wirelessly with wayside signals and a central office.[85] Metrolink anticipated placing PTC in revenue service by summer 2013.[85] However, Parsons announced that the FRA had authorized Metrolink to operate PTC in revenue service demonstration using Wabtec's I-ETMS on the San Bernardino Line in March 2015.[86] Metrolink announced that PTC had been installed on all owned right-of-way miles by June 2015, and that it was working to install the system on tracks shared with Amtrak, freight, and other passenger rail partners.[87]

In the 1990s, Union Pacific Railroad (UP) had a partnership project with General Electric to implement a similar system known as "Precision Train Control." This system would have involved moving block operation, which adjusts a "safe zone" around a train based on its speed and location. The similar abbreviations have sometimes caused confusion over the definition of the technology. GE later abandoned the Precision Train Control platform.[88] In 2008, a team of Lockheed Martin, Wabtec, and Ansaldo STS USA Inc installed an ITCS subsystem on a 120-mile segment of UP track between Chicago and St. Louis.
Other major software companies, such as PK Global and Tech Mahindra, are also among the strategic IT partners in the development of PTC systems.[89]

As of 31 December 2017[update], Union Pacific had installed PTC signal hardware on 99 percent, or more than 17,000 miles, of its total route miles. Union Pacific had partially installed PTC hardware on about 98 percent of the 5,515 locomotives earmarked for the technology, and had equipped and commissioned 4,220 locomotives with PTC hardware and software. Union Pacific had also installed 100 percent of the wayside antennas needed to support PTC along the company's right of way.[90]

Union Pacific also equipped steam locomotive 4014 with its own PTC system in July 2024, mounting the necessary hardware in a new compartment of the tender.[91] This allows the steam locomotive to participate in the PTC network while operating unassisted, whereas from 2021 it had relied on PTC equipment carried in an assisting diesel locomotive.
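Overlay PTC systems such as I-ETMS work by continuously comparing a train's position and speed against a braking curve toward the next enforcement target (the end of a movement authority, a speed restriction, or a misaligned switch); if the train can no longer stop short of the target, a penalty brake is initiated automatically. The following is only a minimal illustrative sketch of that idea using constant deceleration; the deceleration value and safety margin are assumed placeholders, not actual I-ETMS parameters:

```python
def stopping_distance_m(speed_mps: float, decel_mps2: float) -> float:
    """Distance needed to stop from speed_mps at constant deceleration: v^2 / (2a)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def penalty_brake_required(speed_mps: float, dist_to_target_m: float,
                           decel_mps2: float = 0.7, margin_m: float = 100.0) -> bool:
    """True once the train can no longer stop short of the enforcement target
    with a safety margin. decel_mps2 and margin_m are illustrative values only."""
    return stopping_distance_m(speed_mps, decel_mps2) + margin_m >= dist_to_target_m

# A train at 30 m/s (~108 km/h) needs about 643 m to stop at 0.7 m/s^2, so with
# a 100 m margin the system must intervene once the target is within ~743 m.
```

A real implementation also has to account for gradient, train consist and braking characteristics, and positioning uncertainty, which is why PTC requires the on-board computer, wayside units and back office to share a common data picture.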
https://en.wikipedia.org/wiki/Positive_Train_Control
The Train Protection & Warning System (TPWS) is a train protection system used throughout the British passenger main-line railway network, and in Victoria, Australia.[1]

According to the UK Rail Safety and Standards Board,[2] the purpose of TPWS is to stop a train by automatically initiating a brake demand, where TPWS track equipment is fitted, if the train has: passed a signal at danger without authority; approached a signal at danger too fast; approached a reduction in permissible speed too fast; approached buffer stops too fast. TPWS is not designed to prevent signals passed at danger (SPADs) but to mitigate the consequences of a SPAD, by preventing a train that has had a SPAD from reaching a conflict point after the signal.

A standard installation consists of an on-track transmitter adjacent to a signal, activated when the signal is at danger. A train that passes the signal will have its emergency brake activated. If the train is travelling at speed, this may be too late to stop it before the point of collision, so a second transmitter may be placed on the approach to the signal that applies the brakes on trains going too quickly to stop at the signal, positioned to stop trains approaching at up to 75 mph (120 km/h). At around 400 high-risk locations, TPWS+ is installed with a third transmitter further in rear of the signal, increasing the effectiveness to 100 mph (160 km/h). When installed in conjunction with signal controls such as 'double blocking' (i.e. two red signal aspects in succession), TPWS can be fully effective at any realistic speed.[3]

TPWS is not the same as train stops, which accomplish a similar task using electro-mechanical technology. Buffer stop protection using train stops is known as 'Moorgate protection' or 'Moorgate control'.
TPWS was developed by British Rail and its successor Railtrack, following a determination in 1994 that British Rail's Automatic Train Protection system was not economical: it would have cost £600 million (equivalent to £979 million in 2019) to implement, against a value of £3–4 million (£4.9–6.5 million in 2019) per life saved, with an estimated 2.9 lives saved per year.[4][5] Trial installations of track-side and train-mounted equipment were made in 1997, with trials and development continuing over the next two years.[6]

The rollout of TPWS accelerated when the Railway Safety Regulations 1999 came into force in 2003, requiring the installation of train stops at a number of types of location.[6] However, in March 2001 the Joint Inquiry Into Train Protection Systems report found that TPWS had a number of limitations, and that while it provided a relatively cheap stop-gap prior to the widescale introduction of ATP and ERTMS,[6] nothing should impede the installation of the much more capable European Train Control System.[7]

A pair of electronic loops are placed 50–450 metres on the approach side of the signal, energised when it is at danger. The distance between the loops determines the minimum speed at which the on-board equipment will apply the train's emergency brake. When the train's TPWS receiver passes over the first loop, a timer begins to count down. If the second loop is passed before the timer has reached zero, the TPWS will activate. The greater the line speed, the more widely spaced the two loops will be. There is another pair of loops at the signal, also energised when the signal is at danger. These are end to end, and thus will initiate a brake application on a train about to pass a signal at danger regardless of speed.

In a standard installation there are two pairs of loops, colloquially referred to as "grids" or "toast racks". Both pairs consist of an 'arming' and a 'trigger' loop. If the signal is at danger, the loops will be energised.
If the signal is clear, the loops will be de-energised. The first pair, the Overspeed Sensor System (OSS), is sited at a position determined by line speed and gradient. The loops are separated by a distance that should not be traversed in less than a pre-determined period of about one second if the train is approaching the signal at danger at a safe speed. The exact timings, determined by equipment on the train, are 974 milliseconds for passenger trains and 1218 milliseconds for freight trains. Freight trains use the 1.25-times-longer timing because of their different braking characteristics.[8] The first, 'arming', loop emits a frequency of 64.25 kHz. The second, 'trigger', loop has a frequency of 65.25 kHz.

The other pair of loops is back to back at the signal, and is called a Train Stop System (TSS). The 'arming' and 'trigger' loops work at 66.25 kHz and 65.25 kHz respectively. The brakes will be applied if the on-train equipment detects both frequencies together after having detected the arming frequency alone. Thus, an energised TSS is effective at any speed, but only if a train passes it in the right direction. Since a train may be required to pass a signal at danger during a failure etc., the driver has the option to override a TSS, but not an OSS. When a subsidiary signal associated with a main aspect signal is cleared for a shunting movement, the TSS loops are de-energised, but the OSS loops remain active.

Where trains are signalled in opposite directions on an individual line, it could be possible for an unwarranted TPWS intervention to occur as a train travelled between an OSS arming loop and a trigger loop that was in fact associated with a different signal. To cater for this situation, one signal would be nominated the 'normal direction' and fitted with 'ND' equipment. The other signal would be nominated the 'opposite direction' and fitted with 'OD' equipment.
Opposite direction TPWS transmission frequencies are slightly different, working at 64.75 kHz (OSS arming), 66.75 kHz (TSS arming), and 65.75 kHz (common trigger).

At the lineside there are two modules associated with each set of loops: a Signal Interface Module (SIM) and an OSS or TSS module. These generate the frequencies for the loops, prove that the loops are intact, and interface with the signalling system. The modules are colour coded:

SIM modules – red
ND TSS modules – green
OD TSS modules – brown
ND OSS modules – yellow
OD OSS modules – blue

Every traction unit is fitted with a:[8]

If the loops are energised, an aerial on the underside of the train picks up the radio-frequency signal and passes it to the receiver. A timer measures how long the train takes to pass between the arming and trigger loops. This time is used to check the speed, and if it is higher than the TPWS 'set speed', an emergency brake application is initiated. If the train is travelling slower than the TPWS set speed, but then passes the signal at danger, the aerial will receive the signal from the energised Train Stop System loops, and the brake will be applied to stop the train within the overlap.

Multiple unit trains have an aerial at each end. Vehicles that can operate singly (single-car DMUs and locomotives) have only one aerial, either at the front or the rear depending on the direction in which the vehicle is moving. Every driving cab has a TPWS control panel, located where the driver can see it from their desk. There are two types of panel: the original 'standard' type, and a more recent 'enhanced' version, which gives separate indications for a brake demand caused by a SPAD, overspeed or AWS.[9] The standard type consists of two circular indicator lamps and a square push button. The push switch marked "Train Stop Override" is used to pass a signal at danger with authority.
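The OSS speed check described above reduces to a simple time-over-distance test: the set speed is just the loop spacing divided by the on-board timeout. The sketch below is an illustrative model, not real on-board software; the 974 ms and 1218 ms timeouts are the figures quoted above, and the 20 m spacing in the example is an arbitrary assumption:

```python
# Illustrative model of the TPWS Overspeed Sensor System (OSS) check.
# The on-board timer starts at the arming loop; if the trigger loop is
# reached before the timer expires, the train is over the set speed.

PASSENGER_TIMEOUT_S = 0.974   # quoted passenger-train timing
FREIGHT_TIMEOUT_S = 1.218     # quoted freight-train timing (1.25x longer)

def set_speed_mps(loop_spacing_m: float, timeout_s: float) -> float:
    """Minimum speed (m/s) at which the OSS will trip, for a given
    spacing between the arming and trigger loops."""
    return loop_spacing_m / timeout_s

def oss_trips(speed_mps: float, loop_spacing_m: float,
              timeout_s: float = PASSENGER_TIMEOUT_S) -> bool:
    """True if a train at speed_mps crosses both loops inside the timeout,
    i.e. an emergency brake application is initiated."""
    time_between_loops = loop_spacing_m / speed_mps
    return time_between_loops < timeout_s

# With loops 20 m apart, the passenger set speed is about 20.5 m/s (~46 mph):
# a train at 25 m/s trips the OSS, while one at 15 m/s passes unaffected.
```

The longer freight timeout gives freight trains a lower set speed over the same loops, matching their poorer braking performance.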
It ignores the TPWS TSS loops for approximately 20 seconds (generally for passenger trains) or 60 seconds (generally for slower-accelerating freight trains), or until the loops have been passed, whichever is sooner. The AWS system and the TPWS system are inter-linked, and if either of these has initiated a brake application, the "Brake Demand" indicator lamp will flash. The "Temporary Isolation/Fault" indicator lamp will flash if there is a TPWS system fault, or will show a steady illumination if the "Temporary Isolation Switch" has been activated.

There is also a separate TPWS Temporary Isolation Switch located out of reach of the driver's desk. This is operated by the driver when the train is being worked in degraded conditions, such as Temporary Block Working, where multiple signals need to be passed at danger with the signaller's authority. Temporarily isolating the TPWS does not affect the AWS. The driver must reinstate the TPWS immediately at the point where normal working is resumed. As a safety feature, if they forget to do this, the TPWS will be reinstated on the next occasion that the driver's desk is shut down and then opened up again.

An alternative to using derailers in Depot Personnel Protection Systems is to equip the system with TPWS. This equipment safeguards staff from unauthorised movements by using the TPWS equipment. Any unplanned movement will cause the train to come to a stand automatically when it has passed the relevant signal set at danger. This has the added benefit of preventing the damage to the infrastructure and to traction and rolling stock that a derailer system can cause.
The first known installation of such a system is at Ilford Depot.[citation needed] TPWS-equipped depot protection systems are suitable only for locations where vehicles are driven in and out of the maintenance building from a leading driving cab. They are not suitable for use with loose coaching stock or wagon maintenance, where vehicle movements are undertaken by a propelling shunting loco (in this case the lead vehicles would not be equipped with the relevant TPWS safety equipment), nor will they prevent a run-away vehicle from entering a protected work area.

Certain signals may have multiple OSSes fitted. Alternatively, usually due to low line speeds, an OSS may not be fitted; an example of this is a terminal station platform starting signal. An OSS on its own may be used to protect a permanent speed restriction or buffer stop. Although loops are standard, buffer stops may be fitted with 'mini loops', due to the very low approach speed, usually 10 mph. When buffer stops were originally fitted with TPWS using standard loops, there were many instances of false applications, causing delays whilst the equipment reset, with trains potentially blocking the station throat, plus the risk of passengers standing to alight being thrown over by the sudden braking. This problem arose when a train passed over the arming loop so slowly that it was still detected by the train's receiver after the on-board timer had completed its cycle. The timer would reset and begin timing again, and the trigger loop then being detected within this second timing cycle would lead to a false intervention. As a temporary solution, drivers were instructed to pass the buffer stop OSSs at 5 mph, eliminating the problem, but meaning that trains no longer had the momentum to roll to the normal stopping point, requiring drivers to apply power beyond the OSS, just a short distance from the buffers, arguably making a buffer stop collision more likely than before TPWS was fitted.
The redesigned 'mini loops', roughly a third the length of the standard ones, eliminate this problem, although due to the low speed and low margin, buffer stop OSSs are still a major cause of TPWS trips.[citation needed]

Recent applications in the UK have, in conjunction with advanced SPAD protection techniques, used TPWS with outer home signals that protect converging junctions with a higher-than-average risk, by controlling the speed of an approaching train an extra signal section in rear of the junction. If this fails, the resultant TPWS brake application will stop the train before the point of conflict is reached. This system is referred to as TPWS OS (Outer Signal).

Standard TPWS installations can only bring a train to a stop prior to the conflict point if it passes the red signal at up to 74 miles per hour (119 km/h). In 2001, it was observed that roughly one-third of the UK railway allows for a speed above 75 miles per hour (121 km/h). Further, this assumes the train's brakes are capable of providing a brake force of 12% g.[10][a] A number of train types, most notably the HSTs, were not capable of achieving this, despite having a top speed of 125 miles per hour (201 km/h). TPWS+ is capable of stopping a train from up to 100 miles per hour (160 km/h).

TPWS has no ability to regulate speed after a train passes a signal at danger with authority. However, on those occasions there are strict rules governing the actions of drivers, train speed, and the use of TPWS. There are many reasons why a driver might be required to pass a signal at danger with authority. The signaller will advise the driver to pass the signal at danger, proceed with caution, be prepared to stop short of any obstruction, and then obey all other signals. Immediately before moving, the driver will press the "Train Stop Override" button on the TPWS panel, so that the train can pass the signal without triggering the TPWS to apply the brakes.
The driver must then proceed at a speed which enables them to stop within the distance that they can see to be clear. Even if it appears that the section is clear to the next signal, they must still exercise caution.[11]

TPWS failed to prevent the 2021 Salisbury rail crash: although the train went to full emergency braking, the slick conditions produced wheel slide and the train was therefore not brought to a stop prior to the collision point. (ATP would not have prevented this circumstance either.)[12]

Critics, such as those representing victims of the Ladbroke Grove and Southall rail crashes, and the ASLEF and RMT rail unions, pushed for the abandonment of TPWS in the late 1990s in favour of continuing with British Rail's ATP project.[13] A 2000 study, Automatic Train Protection for the rail network in Britain, remarked that "in terms of avoiding 'ATP preventable accidents' [TPWS] is about 70% effective", highlighting the speed limitation.[14] That study still concluded that TPWS was a good solution for the short term of 10–15 years, but stressed that the European Train Control System was the long-term solution.[14]

Notably, the combination of TPWS and AWS is least effective in accidents like the one at Purley, where a driver repeatedly cancelled the AWS warning without applying the brakes, passing the danger signal at high speed. Purley was one of several high-profile SPAD crashes in the late 1980s that led to the initial plan in the 1990s for the mass rollout of ATP, which was subsequently cancelled in 1994 and replaced by TPWS. Supporters of TPWS claim that even where it could not prevent accidents due to SPADs, it would likely reduce the impact, and reduce or eliminate fatalities, by at least slowing the train down.
However, it is likely that in those cases the driver would have applied the emergency brakes well before the overspeed sensor.[7]

It has been noted that there have been very few fatalities since the fitting of TPWS that would have been prevented had ATP been fitted instead. This overlooks the fact that, during the delay between the decision to cancel ATP and replace it with TPWS and the actual rollout of TPWS, the Ladbroke Grove and Southall rail crashes both occurred: accidents that were ATP-preventable and that happened on the Great Western line, which had been outfitted with ATP as part of the pilot studies in the early 1990s.[15][16]

The TPWS system is used in:

Since 1996, an older variant of TPWS, called the Auxiliary Warning System, has been used by the Mumbai Suburban Railway in India, on the Western Line and Central Line.
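The speed limitation discussed above follows directly from the kinematics: at the 12% g brake force assumed in the TPWS design figures, stopping distance grows with the square of speed, so an OSS positioned to stop a 75 mph train cannot protect much faster approaches. A quick worked check, assuming constant deceleration (an idealisation; real braking curves also depend on gradient and adhesion):

```python
G = 9.81            # m/s^2
MPH_TO_MPS = 0.44704

def stopping_distance_m(speed_mph: float, brake_g: float = 0.12) -> float:
    """Constant-deceleration stopping distance, d = v^2 / (2a), using the
    12% g brake force assumed in the TPWS design figures."""
    v = speed_mph * MPH_TO_MPS
    return v ** 2 / (2 * brake_g * G)

# From 75 mph, a 12% g stop needs roughly 480 m; from 125 mph (an HST's top
# speed) it needs roughly 1,330 m, nearly three times as far, so a faster
# train overruns the distance the OSS siting was designed to protect.
```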
https://en.wikipedia.org/wiki/Train_Protection_%26_Warning_System
An academic mobility network is an informal association of universities and government programs that encourages the international exchange of higher education students (academic mobility).[1][2]

Students choosing to study abroad (international students) aim to improve their own social and economic status by choosing to study in a nation with a better education system than their own. This creates movement of students, usually from South to North and from East to West.[3] It is predicted that citizens of Asian nations, particularly India and China, will represent an increasing portion of the global international student population.[4] The total number of students enrolled in tertiary education abroad (international students) increased from 1.3 million in 1990 to 2 million in 2000, to more than 3 million in 2010, and to 4.3 million in 2011.[5][6] The financial crisis of 2007–2008 did not decrease these figures.[5]

The formation of academic mobility networks can be explained by changes in systems of education. The governments of some countries allocated funds to improve tertiary education for international students. For some countries, the presence of international students represents an indicator of the quality of their education system. International students also contribute to the economy of their chosen country of study. In 2011, OECD countries were hosting seventy percent of international students. Within the OECD, almost half of international students were enrolled in one of the top five destinations for tertiary studies: the United States (17 percent), the United Kingdom (13 percent), Australia (6 percent), Germany (6 percent) and France (6 percent). International students prefer to study in English-speaking countries. Popular fields of study are the social sciences, business and law.
Thirty percent of international students studied in these fields in 2011.[5]

Academic mobility networks aim to assist students by providing cultural and social diversity, encouraging adaptability and independent thinking, allowing them to improve their knowledge of a foreign language and expand their professional network. By bringing in international students, the network can provide educational institutions with a source of revenue and contribute to the nation's economy. For example, in Canada, international student expenditure on tuition, accommodation and living expenses contributed more than 8 billion Canadian dollars (CAD) to the economy in 2010.[5] International students also have a long-term economic effect: their stay after graduation increases the domestic skilled labor market. In the 2008–2009 year, the rate of staying in OECD countries was 25 percent. In Australia, Canada, the Czech Republic, and France, the rate was greater than 30 percent.[5] In 2005, 27 percent of international students from a European Union member state were employed in the UK six months after graduation. In Norway, 18 percent of students from outside the European Economic Area (EEA) who were studying between 1991 and 2005 stayed in the country; the corresponding figure for EEA students was eight percent.[7]

In the United States, educational exchange programs are generally managed by the Bureau of Educational and Cultural Affairs. Education in the United States consists of thousands of colleges and universities, and this diversity in schools and subjects provides choice to international students.[8] After the terrorist attacks of September 2001, international student enrolment in the United States declined for the first time in 30 years: it was more difficult to obtain visas, other countries competed for international student enrolments, and anti-American sentiment increased.[7][9]

The Bologna Process is a European initiative to promote international student mobility.
Quality is a core element of the European Higher Education Area, with an emphasis on multi-linguistic skills. The Erasmus Programme has supported European student exchanges since 1987. In that first year, around 3,000 students received grants to study for a period of 6 to 12 months at a host university in another of the twelve European member states. In 2012, the budget for the Erasmus Programme was 129.1 billion euros.[10]
https://en.wikipedia.org/wiki/Academic_mobility_network
A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths, spacings and sizes of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called 2D barcodes or matrix codes, although they do not use bars as such. Both can be read using purpose-built 2D optical scanners, which exist in a few different forms. Matrix codes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the code. A mobile device with a built-in camera, such as a smartphone, can function as the latter type of barcode reader using specialized application software, and is suitable for both 1D and 2D codes.

The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952.[1] The invention was based on Morse code[2] extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. The UK magazine Modern Railways (December 1962, pages 387–389) records how British Railways had already perfected a barcode-reading system capable of correctly reading rolling stock travelling at 100 mph (160 km/h) with no mistakes. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock.
Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment, and identification number.[3] The plates were read by a trackside scanner located, for instance, at the entrance to a classification yard, while the car was moving past.[4] The project was abandoned after about ten years because the system proved unreliable after long-term use.[3]

Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver.[5] Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first successful system using barcodes was in the UK supermarket group Sainsbury's in 1972, using shelf-mounted barcodes which were developed by Plessey.[6][7] In June 1974, Marsh supermarket in Troy, Ohio used a scanner made by Photographic Sciences Corporation to scan the Universal Product Code (UPC) barcode on a pack of Wrigley's chewing gum.[8][5] QR codes, a specific type of 2D barcode, rose in popularity in the second decade of the 2000s due to the growth in smartphone ownership.[9]

Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes have limited the role of these other systems, particularly before technologies such as radio-frequency identification (RFID) became widely available.
In 1948, Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US, overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout.[10] Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive.[11]

Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida, and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them."[11] To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA 935 photomultiplier tube (from a movie projector) on the far side. He later decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction.

On 20 October 1949, Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994.[1] In 1951, Woodland moved to IBM and continually tried to interest IBM in developing the system. The company eventually commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future.
IBM offered to buy the patent, but the offer was not accepted. Philco purchased the patent in 1962 and then sold it to RCA sometime later.[11]

During his time as an undergraduate, David Jarrett Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. Immediately after receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem. He developed a system called KarTrak using blue, white and red reflective stripes attached to the side of the cars, encoding a four-digit company identifier and a six-digit car number.[11] Light reflected off the colored stripes was read by photomultiplier vacuum tubes.[12]

The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads (AAR) selected it as a standard, automatic car identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s greatly slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be easily fooled by dirt in certain applications, which greatly affected accuracy. The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags.[13]

The railway project had failed, but a toll bridge in New Jersey requested a similar system so that it could quickly scan for cars that had purchased a monthly pass. Then the US Post Office requested a system to track trucks entering and leaving their facilities. These applications required special retroreflector labels. Finally, Kal Kan asked the Sylvania team for a simpler (and cheaper) version which they could put on cases of pet food for inventory control.
In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough and they saw no need to branch out so quickly. Collins then quit Sylvania and formed the Computer Identics Corporation.[11] As its first innovations, Computer Identics replaced the incandescent light bulbs in its systems with helium–neon lasers, and incorporated a mirror as well, making it capable of locating a barcode up to a meter (3 feet) in front of the scanner. This made the entire process much simpler and more reliable, and typically enabled these devices to deal with damaged labels as well, by recognizing and reading the intact portions.

Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan.[11] The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey, to direct shipments to the proper loading bay.

In 1966, the National Association of Food Chains (NAFC) held a meeting on the idea of automated checkout systems. RCA, which had purchased the rights to the original Woodland patent, attended the meeting and initiated an internal project to develop a system based on the bullseye code. The Kroger grocery chain volunteered to test it. In mid-1970, the NAFC established the Ad-Hoc Committee for U.S. Supermarkets on a Uniform Grocery-Product Code to set guidelines for barcode development. In addition, it created a symbol-selection subcommittee to help standardize the approach. In cooperation with the consulting firm McKinsey & Co., they developed a standardized 11-digit code for identifying products.
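The 11-digit product code was later paired with a twelfth check digit in the UPC-A symbology that emerged from this process. The standard UPC check-digit rule (digits in odd positions weighted by three, then the total rounded up to the next multiple of ten) can be sketched as:

```python
def upc_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit for an 11-digit product code:
    sum the digits in odd positions (1st, 3rd, ...) times 3, add the
    digits in even positions, and return the value that brings the
    total up to a multiple of 10."""
    assert len(digits11) == 11 and digits11.isdigit()
    odd = sum(int(d) for d in digits11[0::2])   # positions 1, 3, 5, ...
    even = sum(int(d) for d in digits11[1::2])  # positions 2, 4, 6, ...
    return (10 - (3 * odd + even) % 10) % 10

# The Wrigley's gum pack scanned at Marsh's in 1974 carried the UPC
# 0-36000-29145-2; the check digit computes accordingly:
print(upc_check_digit("03600029145"))  # → 2
```

The check digit lets a scanner reject most single-digit misreads and adjacent transpositions without contacting any database.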
The committee then sent out a contract tender to develop a barcode system to print and read the code. The request went to Singer, National Cash Register (NCR), Litton Industries, RCA, Pitney-Bowes, IBM and many others.[14] A wide variety of barcode approaches was studied, including linear codes, RCA's bullseye concentric circle code, starburst patterns and others. In the spring of 1971 RCA demonstrated their bullseye code at another industry meeting. IBM executives at the meeting noticed the crowds at the RCA booth and immediately developed their own system. IBM marketing specialist Alec Jablonover remembered that the company still employed Woodland, and he established a new facility in Research Triangle Park to lead development. In July 1972 RCA began an 18-month test in a Kroger store in Cincinnati. Barcodes were printed on small pieces of adhesive paper and attached by hand by store employees when they were adding price tags. The code proved to have a serious problem: the printers would sometimes smear ink, rendering the code unreadable in most orientations. However, a linear code, like the one being developed by Woodland at IBM, was printed in the direction of the stripes, so extra ink would simply make the code "taller" while remaining readable. So on 3 April 1973 the IBM UPC was selected as the NAFC standard. IBM had designed five versions of UPC symbology for future industry requirements: UPC A, B, C, D, and E.[15] NCR installed a testbed system at Marsh's Supermarket in Troy, Ohio, near the factory that was producing the equipment. On 26 June 1974, a 10-pack of Wrigley's Juicy Fruit gum was scanned, registering the first commercial use of the UPC.[16] In 1971 an IBM team was assembled for an intensive planning session, threshing out, 12 to 18 hours a day, how the technology would be deployed and operate cohesively across the system, and scheduling a roll-out plan.
By 1973, the team was meeting with grocery manufacturers to introduce the symbol that would need to be printed on the packaging or labels of all of their products. There were no cost savings for a grocery to use it unless at least 70% of the grocery's products had the barcode printed on the product by the manufacturer. IBM projected that 75% would be needed in 1975. Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time-frame, and some predicted the demise of barcode scanning. The usefulness of the barcode required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted barcode labels. Neither wanted to move first, and results were not promising for the first couple of years, with Business Week proclaiming "The Supermarket Scanner That Failed" in a 1976 article.[16][17] Sims Supermarkets were the first location in Australia to use barcodes, starting in 1979.[18] A barcode system is a network of hardware and software, consisting primarily of mobile computers, printers, handheld scanners, infrastructure, and supporting software. Barcode systems are used to automate data collection where hand recording is neither timely nor cost-effective. Despite often being provided by the same company, barcode systems are not radio-frequency identification (RFID) systems. Many companies use both technologies as part of larger resource management systems. A typical barcode system consists of some infrastructure, either wired or wireless, that connects a number of mobile computers, handheld scanners, and printers to one or more databases that store and analyze the data collected by the system. At some level there must be some software to manage the system.
The software may be as simple as code that manages the connection between the hardware and the database or as complex as an ERP, MRP, or some other inventory management software. A wide range of hardware is manufactured for use in barcode systems by such manufacturers as Datalogic, Intermec, HHP (Hand Held Products), Microscan Systems, Unitech, Metrologic, PSC, and PANMOBIL, with the best-known brand of handheld scanners and mobile computers being produced by Symbol,[citation needed] a division of Motorola. Some ERP, MRP, and other inventory management software have built-in support for barcode reading. Alternatively, custom interfaces can be created using a language such as C++, C#, Java, Visual Basic .NET, and many others. In addition, software development kits are produced to aid the process. In 1981 the United States Department of Defense adopted the use of Code 39 for marking all products sold to the United States military. This system, Logistics Applications of Automated Marking and Reading Symbols (LOGMARS), is still used by DoD and is widely viewed as the catalyst for widespread adoption of barcoding in industrial uses.[19] Barcodes are widely used around the world in many contexts. In stores, UPC barcodes are pre-printed on most items other than fresh produce from a grocery store. This speeds up processing at check-outs, helps track items, and reduces instances of shoplifting involving price-tag swapping, although shoplifters can now print their own barcodes.[20] Barcodes that encode a book's ISBN are also widely pre-printed on books, journals and other printed materials. In addition, retail chain membership cards use barcodes to identify customers, allowing for customized marketing and greater understanding of individual consumer shopping patterns. At the point of sale, shoppers can get product discounts or special marketing offers through the address or e-mail address provided at registration.
Barcodes are widely used in healthcare and hospital settings, ranging from patient identification (to access patient data, including medical history, drug allergies, etc.) to creating SOAP notes[21] with barcodes to medication management. They are also used to facilitate the separation and indexing of documents that have been imaged in batch scanning applications, track the organization of species in biology,[22] and integrate with in-motion checkweighers to identify the item being weighed in a conveyor line for data collection. They can also be used to keep track of objects and people; they are used to keep track of rental cars, airline luggage, nuclear waste, express mail, and parcels. Barcoded tickets (which may be printed by the customer on their home printer, or stored on their mobile device) allow the holder to enter sports arenas, cinemas, theatres, fairgrounds, and transportation, and are used to record the arrival and departure of vehicles from rental facilities etc. This can allow proprietors to identify duplicate or fraudulent tickets more easily. Barcodes are widely used in shop floor control application software where employees can scan work orders and track the time spent on a job. Barcodes are also used in some kinds of non-contact 1D and 2D position sensors. A series of barcodes is used in some kinds of absolute 1D linear encoders. The barcodes are packed close enough together that the reader always has one or two barcodes in its field of view. As a kind of fiducial marker, the relative position of the barcode in the field of view of the reader gives incremental precise positioning, in some cases with sub-pixel resolution. The data decoded from the barcode gives the absolute coarse position.
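The coarse-plus-fine positioning scheme described above can be sketched in a few lines of Python. The function and all of its parameter names (code pitch, pixel scale) are hypothetical, for illustration only; the sign of the fine correction in a real encoder depends on how the reader is mounted relative to the strip.

```python
def absolute_position_mm(decoded_index: int, code_center_px: float,
                         fov_center_px: float, pitch_mm: float,
                         mm_per_px: float) -> float:
    """Absolute position along a barcode-strip linear encoder (sketch).

    decoded_index  -- value decoded from the barcode (coarse position)
    code_center_px -- where the barcode's centre sits in the camera image
    The decoded value locates the reader to within one code pitch; the
    barcode's pixel offset from the image centre refines that estimate.
    """
    coarse = decoded_index * pitch_mm
    # Sign convention of the fine term depends on the mounting orientation.
    fine = (code_center_px - fov_center_px) * mm_per_px
    return coarse + fine
```

With sub-pixel estimation of `code_center_px`, the fine term is what gives such encoders resolution far below the printed module size.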
An "address carpet", used in digital paper such as Howell's binary pattern and the Anoto dot pattern, is a 2D barcode designed so that a reader, even though only a tiny portion of the complete carpet is in its field of view, can find its absolute X, Y position and rotation in the carpet.[23][24] Matrix codes can embed a hyperlink to a web page. A mobile device with a built-in camera might be used to read the pattern and browse the linked website, which can help a shopper find the best price for an item in the vicinity. Since 2005, airlines use an IATA-standard 2D barcode on boarding passes (Bar Coded Boarding Pass (BCBP)), and since 2008 2D barcodes sent to mobile phones enable electronic boarding passes.[25] Some applications for barcodes have fallen out of use. In the 1970s and 1980s, software source code was occasionally encoded in a barcode and printed on paper (Cauzin Softstrip and Paperbyte[26] are barcode symbologies specifically designed for this application), and the 1991 Barcode Battler computer game system used any standard barcode to generate combat statistics. Artists have used barcodes in art, such as Scott Blake's Barcode Jesus, as part of the post-modernism movement. The mapping between messages and barcodes is called a symbology. The specification of a symbology includes the encoding of the message into bars and spaces, any required start and stop markers, the size of the quiet zone required before and after the barcode, and the computation of a checksum. Linear symbologies can be classified mainly by two properties: Some symbologies use interleaving. The first character is encoded using black bars of varying width. The second character is then encoded by varying the width of the white spaces between these bars. Thus, characters are encoded in pairs over the same section of the barcode. Interleaved 2 of 5 is an example of this. Stacked symbologies repeat a given linear symbology vertically.
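The checksum computation that a symbology specifies can be illustrated with UPC-A, whose check digit uses the standard modulo-10 scheme: the eleven data digits are weighted 3, 1, 3, 1, … from the left, summed, and the check digit is whatever brings the total to a multiple of 10. A minimal sketch in Python (an illustrative helper, not code from any barcode library):

```python
def upca_check_digit(digits11: str) -> int:
    """Check digit for an 11-digit UPC-A code.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, those in
    even positions are weighted 1; the check digit is the amount needed
    to bring the weighted sum up to the next multiple of 10.
    """
    if len(digits11) != 11 or not digits11.isdigit():
        raise ValueError("expected exactly 11 digits")
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(digits11))
    return (10 - total % 10) % 10
```

For example, `upca_check_digit("03600029145")` returns `2`, giving the complete symbol 036000291452, the Juicy Fruit barcode mentioned above.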
The most common among the many 2D symbologies are matrix codes, which feature square or dot-shaped modules arranged on a grid pattern. 2D symbologies also come in circular and other patterns and may employ steganography, hiding modules within an image (for example, DataGlyphs). Linear symbologies are optimized for laser scanners, which sweep a light beam across the barcode in a straight line, reading a slice of the barcode's light-dark patterns. Scanning at an angle makes the modules appear wider, but does not change the width ratios. Stacked symbologies are also optimized for laser scanning, with the laser making multiple passes across the barcode. In the 1990s, development of charge-coupled device (CCD) imagers to read barcodes was pioneered by Welch Allyn. Imaging does not require moving parts, as a laser scanner does. By 2007, linear imaging had begun to supplant laser scanning as the preferred scan engine for its performance and durability. 2D symbologies cannot be read by a laser, as there is typically no sweep pattern that can encompass the entire symbol. They must be scanned by an image-based scanner employing a CCD or other digital camera sensor technology. The earliest, and still[when?] the cheapest, barcode scanners are built from a fixed light and a single photosensor that is manually moved across the barcode. Barcode scanners can be classified into three categories based on their connection to the computer. The oldest type is the RS-232 barcode scanner, which requires special programming for transferring the input data to the application program. Keyboard interface scanners connect to a computer using a PS/2 or AT keyboard–compatible adaptor cable (a "keyboard wedge"). The barcode's data is sent to the computer as if it had been typed on the keyboard. Like the keyboard interface scanner, USB scanners do not need custom code for transferring input data to the application program.
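Because a keyboard-wedge or USB scanner simply "types" each code followed by a terminator (usually a configurable Enter suffix), an application can consume scans as ordinary line input. A minimal sketch, assuming the scanner is configured to append Enter after each code:

```python
def read_scans(stream):
    """Yield one barcode per line from a keyboard-wedge style scanner.

    The scanner appears to the operating system as a keyboard, so each
    scan arrives as a line of text on whatever input the application is
    reading. Blank lines (e.g. a stray Enter) are skipped.
    """
    for line in stream:
        code = line.strip()
        if code:
            yield code

# Typical use: for code in read_scans(sys.stdin): handle(code)
```

This is why wedge and USB-HID scanners need no driver-level integration: any program that reads text input can accept scans.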
On PCs running Windows, the human interface device emulates the data-merging action of a hardware "keyboard wedge", and the scanner automatically behaves like an additional keyboard. Most modern smartphones are able to decode barcodes using their built-in camera. Google's mobile Android operating system can use its own Google Lens application to scan QR codes, or third-party apps like Barcode Scanner to read both one-dimensional barcodes and QR codes. Google's Pixel devices can natively read QR codes inside the default Pixel Camera app. Nokia's Symbian operating system featured a barcode scanner,[27] while mbarcode[28] is a QR code reader for the Maemo operating system. In Apple iOS 11, the native camera app can decode QR codes and can link to URLs, join wireless networks, or perform other operations depending on the QR code contents.[29] Other paid and free apps are available with scanning capabilities for other symbologies or for earlier iOS versions.[30] With BlackBerry devices, the App World application can natively scan barcodes and load any recognized Web URLs on the device's Web browser. Windows Phone 7.5 is able to scan barcodes through the Bing search app. However, these devices are not designed specifically for the capturing of barcodes. As a result, they do not decode nearly as quickly or accurately as a dedicated barcode scanner or portable data terminal.[citation needed] It is common for producers and users of bar codes to have a quality management system which includes verification and validation of bar codes.[31] Barcode verification examines scannability and the quality of the barcode in comparison to industry standards and specifications.[32] Barcode verifiers are primarily used by businesses that print and use barcodes. Any trading partner in the supply chain can test barcode quality. It is important to verify a barcode to ensure that any reader in the supply chain can successfully interpret it with a low error rate. Retailers levy large penalties for non-compliant barcodes.
These chargebacks can reduce a manufacturer's revenue by 2% to 10%.[33] A barcode verifier works the way a reader does, but instead of simply decoding a barcode, a verifier performs a series of tests. For linear barcodes these tests are: For 2D matrix symbols, the tests examine the following parameters: Depending on the parameter, each ANSI test is graded from 0.0 to 4.0 (F to A), or given a pass or fail mark. Each grade is determined by analyzing the scan reflectance profile (SRP), an analog graph of a single scan line across the entire symbol. The lowest of the eight grades is the scan grade, and the overall ISO symbol grade is the average of the individual scan grades. For most applications a 2.5 (C) is the minimal acceptable symbol grade.[36] Compared with a reader, a verifier measures a barcode's optical characteristics against international and industry standards. The measurement must be repeatable and consistent, which requires constant conditions such as distance, illumination angle, sensor angle and verifier aperture. Based on the verification results, the production process can be adjusted to print higher-quality barcodes that will scan down the supply chain. Barcode validation may include evaluations after use (and abuse) testing such as sunlight, abrasion, impact, moisture, etc.[37] Barcode verifier standards are defined by the International Organization for Standardization (ISO), in ISO/IEC 15426-1 (linear) or ISO/IEC 15426-2 (2D).[citation needed] The current international barcode quality specification is ISO/IEC 15416 (linear) and ISO/IEC 15415 (2D).[citation needed] The European Standard EN 1635 has been withdrawn and replaced by ISO/IEC 15416. The original U.S. barcode quality specification was ANSI X3.182.
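The grading arithmetic described above is simple: each parameter of a scan receives a grade, the scan grade is the lowest of those, and the symbol grade is the average of the scan grades. A sketch in Python, with the letter bands assumed to be the usual thresholds (A ≥ 3.5, B ≥ 2.5, C ≥ 1.5, D ≥ 0.5, else F):

```python
def scan_grade(parameter_grades):
    """A scan's grade is the lowest of its individual parameter grades."""
    return min(parameter_grades)

def symbol_grade(scan_grades):
    """The overall symbol grade is the average of the scan grades."""
    return sum(scan_grades) / len(scan_grades)

def letter(grade):
    """Map a numeric grade to its letter band (thresholds assumed, not
    quoted from the standard: >=3.5 A, >=2.5 B, >=1.5 C, >=0.5 D)."""
    for cutoff, name in ((3.5, "A"), (2.5, "B"), (1.5, "C"), (0.5, "D")):
        if grade >= cutoff:
            return name
    return "F"
```

So a symbol whose scans graded 2.6 and 3.0 would average 2.8, a C, just above the 2.5 floor that most applications accept.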
(UPCs used in the US – ANSI/UCC5).[citation needed] As of 2011 the ISO workgroup JTC1 SC31 was developing a Direct Part Marking (DPM) quality standard: ISO/IEC TR 29158.[38] In point-of-sale management, barcode systems can provide detailed, up-to-date information on the business, allowing decisions to be made faster and with more confidence. For example: Besides sales and inventory tracking, barcodes are very useful in logistics and supply chain management. Barcode scanners are relatively low-cost and extremely accurate compared to key entry, with only about 1 substitution error in 15,000 to 36 trillion characters entered.[39][unreliable source?] The exact error rate depends on the type of barcode. A first-generation, "one-dimensional" barcode is made up of lines and spaces of various widths or sizes that create specific patterns. 2D barcodes consist of bars, but use both dimensions for encoding. A matrix code, or simply a 2D code, is a two-dimensional way to represent information; it can represent more data per unit area. Apart from dots, various other patterns can be used. DataGlyphs, a patented symbology,[61] can be embedded into a half-tone image or background shading pattern in a way that is almost perceptually invisible, similar to steganography.[62][63] In architecture, a building in Lingang New City by German architects Gerkan, Marg and Partners incorporates a barcode design,[87] as does a shopping mall called Shtrikh-kod (Russian for barcode) in Narodnaya ulitsa ("People's Street") in the Nevskiy district of St.
Petersburg, Russia.[88] In media, in 2011, the National Film Board of Canada and ARTE France launched a web documentary entitled Barcode.tv, which allows users to view films about everyday objects by scanning the product's barcode with their iPhone camera.[89][90] In professional wrestling, the WWE stable D-Generation X incorporated a barcode into their entrance video, as well as on a T-shirt.[91][92] In video games, the protagonist of the Hitman video game series has a barcode tattoo on the back of his head; QR codes can also be scanned in a side mission in Watch Dogs. The 2018 video game Judgment features QR codes that protagonist Takayuki Yagami can photograph with his phone camera. These are mostly used to unlock parts for Yagami's drone.[93] Interactive textbooks, used to expand education technology, were first published by Harcourt College Publishers.[94] Some companies integrate custom designs into barcodes on their consumer products without impairing their readability. Some have regarded barcodes as an intrusive surveillance technology. Some Christians, influenced by the 1982 book The New Money System 666 by Mary Stewart Relfe, believe the codes hide the number 666, representing the "Number of the beast".[95] Old Believers, a group that separated from the Russian Orthodox Church, believe barcodes are the stamp of the Antichrist.[96] Television host Phil Donahue described barcodes as a "corporate plot against consumers".[97]
https://en.wikipedia.org/wiki/Barcode
In computing, a printer is a peripheral machine which makes a durable representation of graphics or text, usually on paper.[1] While most output is human-readable, barcode printers are an example of an expanded use for printers.[2] Different types of printers include 3D printers, inkjet printers, laser printers, and thermal printers.[3] The first computer printer designed was a mechanically driven apparatus by Charles Babbage for his difference engine in the 19th century; however, his mechanical printer design was not built until 2000.[4] He also had plans for a curve plotter, which would have been the first computer graphics printer if it had been built.[5] The first patented printing mechanism for applying a marking medium to a recording medium, or more particularly an electrostatic inking apparatus and a method for electrostatically depositing ink on controlled areas of a receiving medium, was in 1962 by C. R. Winston, Teletype Corporation, using continuous inkjet printing. The ink was a red stamp-pad ink manufactured by Phillips Process Company of Rochester, NY, under the name Clear Print. This patent (US3060429) led to the Teletype Inktronic Printer product delivered to customers in late 1966.[6] The first compact, lightweight digital printer was the EP-101, invented by Japanese company Epson and released in 1968, according to Epson.[7][8][9] The first commercial printers generally used mechanisms from electric typewriters and Teletype machines. The demand for higher speed led to the development of new systems specifically for computer use. In the 1980s there were daisy wheel systems similar to typewriters, line printers that produced similar output but at much higher speed, and dot-matrix systems that could mix text and graphics but produced relatively low-quality output. The plotter was used for those requiring high-quality line art like blueprints.
The introduction of the low-cost laser printer in 1984, with the first HP LaserJet,[10] and the addition of PostScript in the next year's Apple LaserWriter set off a revolution in printing known as desktop publishing.[11] Laser printers using PostScript mixed text and graphics, like dot-matrix printers, but at quality levels formerly available only from commercial typesetting systems. By 1990, most simple printing tasks like fliers and brochures were created on personal computers and then laser printed; expensive offset printing systems were being dumped as scrap. The HP Deskjet of 1988 offered the same advantages as a laser printer in terms of flexibility, but produced somewhat lower-quality output (depending on the paper) from much less-expensive mechanisms. Inkjet systems rapidly displaced dot-matrix and daisy-wheel printers from the market. By the 2000s, high-quality printers of this sort had fallen under the $100 price point and became commonplace. The rapid improvement of internet email through the 1990s and into the 2000s has largely displaced the need for printing as a means of moving documents, and a wide variety of reliable storage systems means that a "physical backup" is of little benefit today. Starting around 2010, 3D printing became an area of intense interest, allowing the creation of physical objects with the same sort of effort as an early laser printer required to produce a brochure. As of the 2020s, 3D printing has become a widespread hobby due to the abundance of cheap 3D printer kits, with the most common process being fused deposition modeling. Personal printers are mainly designed to support individual users, and may be connected to only a single computer. These printers are designed for low-volume, short-turnaround print jobs, requiring minimal setup time to produce a hard copy of a given document. They are generally slow devices, ranging from 6 to around 25 pages per minute (ppm), and the cost per page is relatively high.
However, this is offset by the on-demand convenience. Some printers can print documents stored on memory cards or from digital cameras and scanners. Networked or shared printers are "designed for high-volume, high-speed printing". They are usually shared by many users on a network and can print at speeds of 45 to around 100 ppm. The Xerox 9700 could achieve 120 ppm. An ID card printer is used for printing plastic ID cards. These can now be customised with important features such as holographic overlays, HoloKotes and watermarks.[citation needed] This is either a direct-to-card printer (the more feasible option) or a retransfer printer.[citation needed] A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected with a physical computer printer. A virtual printer can be used to create a file which is an image of the data which would be printed, for archival purposes or as input to another program, for example to create a PDF or to transmit to another system or user. A barcode printer is a computer peripheral for printing barcode labels or tags that can be attached to, or printed directly on, physical objects. Barcode printers are commonly used to label cartons before shipment, or to label retail items with UPCs or EANs. A 3D printer is a device for making a three-dimensional object from a 3D model or other electronic data source through additive processes in which successive layers of material (including plastics, metals, food, cement, wood, and other materials) are laid down under computer control. It is called a printer by analogy with an inkjet printer, which produces a two-dimensional document by a similar process of depositing a layer of ink on paper. A card printer is an electronic desktop printer with single-card feeders which prints and personalizes plastic cards. In this respect they differ from, for example, label printers, which have a continuous supply feed.
Card dimensions are usually 85.60 × 53.98 mm, standardized under ISO/IEC 7810 as ID-1. This format is also used in EC-cards, telephone cards, credit cards, driver's licenses and health insurance cards, and is commonly known as the bank card format. Card printers are controlled by corresponding printer drivers or by means of a specific programming language. Generally card printers are designed with laminating, striping, and punching functions, and use desktop or web-based software. The hardware features of a card printer differentiate it from more traditional printers, as ID cards are usually made of PVC plastic and require laminating and punching. Different card printers can accept different card thicknesses and dimensions. The principle is the same for practically all card printers: the plastic card is passed through a thermal print head at the same time as a color ribbon. The color from the ribbon is transferred onto the card through the heat given out from the print head. The standard performance for card printing is 300 dpi (300 dots per inch, equivalent to 11.8 dots per mm). There are different printing processes, which vary in their detail: Broadly speaking, there are three main types of card printers, differing mainly by the method used to print onto the card. They are: Different ID card printers use different encoding techniques to facilitate disparate business environments and to support security initiatives. Known encoding techniques are: There are basically two categories of card printer software: desktop-based and web-based (online). The biggest difference between the two is whether or not a customer has a printer on their network that is capable of printing identification cards. If a business already owns an ID card printer, then a desktop-based badge maker is probably suitable for their needs. Typically, large organizations with high employee turnover will have their own printer.
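At the standard 300 dpi, the ID-1 card face of 85.60 × 53.98 mm works out to roughly 1011 × 638 printable dots; the conversion is a simple change of units (25.4 mm per inch), sketched here as an illustrative helper:

```python
MM_PER_INCH = 25.4

def mm_to_dots(mm: float, dpi: int = 300) -> int:
    """Convert a physical length in millimetres to printer dots
    at the given resolution (dots per inch)."""
    return round(mm / MM_PER_INCH * dpi)
```

For example, `mm_to_dots(85.60)` gives 1011 and `mm_to_dots(53.98)` gives 638, the full-bleed raster size a 300 dpi card printer must fill.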
A desktop-based badge maker is also required if a company needs its IDs made instantly; an example is a private construction site with restricted access. However, if a company does not already have a local (or network) printer with the features it needs, then the web-based option is perhaps a more affordable solution. The web-based solution is good for small businesses that do not anticipate a lot of rapid growth, or for organizations that either cannot afford a card printer or do not have the resources to learn how to set up and use one. Generally speaking, desktop-based solutions involve software and a database (or spreadsheet) and can be installed on a single computer or network. Alongside the basic function of printing cards, card printers can also read and encode magnetic stripes as well as contact and contact-free RFID chip cards (smart cards). Thus card printers enable the encoding of plastic cards both visually and logically. Plastic cards can also be laminated after printing, which achieves a considerable increase in durability and a greater degree of counterfeit prevention. Some card printers come with an option to print both sides at the same time, which cuts down the printing time and reduces the margin of error. In such printers, one side of the ID card is printed, the card is then flipped in the flip station, and the other side is printed. Alongside the traditional uses in time attendance and access control (in particular with photo personalization), countless other applications have been found for plastic cards, e.g. personalized customer and members' cards, sports ticketing, season tickets in local public transport systems, school and college identity cards, and national ID cards.
The choice of print technology has a great effect on the cost of the printer and cost of operation, speed, quality and permanence of documents, and noise. Some printer technologies do not work with certain types of physical media, such as carbon paper or transparencies. A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface. Cheques can be printed with liquid ink or on special cheque paper with toner anchorage so that alterations may be detected.[13] The machine-readable lower portion of a cheque must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly. The following printing technologies are routinely found in modern printers: A laser printer rapidly produces high-quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process, but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer's photoreceptor. Another toner-based printer is the LED printer, which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum. Inkjet printers operate by propelling variably sized droplets of liquid ink onto almost any sized page. They are the most common type of computer printer used by consumers. Solid ink printers, also known as phase-change ink or hot-melt ink printers, are a type of thermal transfer printer, graphics sheet printer or 3D printer. They use solid sticks, crayons, pearls or granular ink materials.
Common inks are CMYK-colored ink, similar in consistency to candle wax, which are melted and fed into a piezo-crystal-operated print head. A thermal transfer print head jets the liquid ink onto a rotating, oil-coated drum. The paper then passes over the print drum, at which time the image is immediately transferred, or transfixed, to the page. Solid ink printers are most commonly used as color office printers and are excellent at printing on transparencies and other non-porous media. Solid ink is also called phase-change or hot-melt ink and was first used by Data Products and Howtek, Inc., in 1984.[14] Solid ink printers can produce excellent results with text and images. Some solid ink printers have evolved to print 3D models; for example, Visual Impact Corporation[15] of Windham, NH, was started by retired Howtek employee Richard Helinski, whose 3D patents US4721635 and then US5136515 were licensed to Sanders Prototype, Inc., later named Solidscape, Inc. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel ink from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. This type of thermal transfer printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tektronix sold the printing business to Xerox in 2001. A dye-sublimation printer (or dye-sub printer) is a printer that employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper, or canvas. The process is usually to lay one color at a time using a ribbon that has color panels.
Dye-sub printers are intended primarily for high-quality color applications, including color photography, and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers. Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colors can be achieved with special papers and different temperatures and heating rates for different colors; these colored sheets are not required in black-and-white output. One example is Zink (a portmanteau of "zero ink"). The following technologies are either obsolete, or limited to special applications, though most were, at one time, in widespread use. Impact printers rely on a forcible impact to transfer ink to the media. The impact printer uses a print head that either hits the surface of the ink ribbon, pressing the ink ribbon against the paper (similar to the action of a typewriter), or, less commonly, hits the back of the paper, pressing the paper against the ink ribbon (the IBM 1403, for example). All but the dot matrix printer rely on the use of fully formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome, or sometimes two-color, printing in a single typeface at one time, although bolding and underlining of text could be done by "overstriking", that is, printing two or more impressions either in the same character position or slightly offset. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisywheel printers, dot matrix printers, and line printers.
Dot-matrix printers remain in common use[16] in businesses where multi-part forms are printed. An overview of impact printing[17] contains a detailed description of many of the technologies used. Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric-based printers were the most common examples. The Flexowriter printed with a conventional typebar mechanism, while the Selectric used IBM's well-known "golf ball" printing mechanism. In either case, the letter form struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second. The common teleprinter could easily be interfaced with the computer and became very popular except for those computers manufactured by IBM. Some models used a "typebox" that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way as the Selectric typewriters used their type ball. In either case, the letter form struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second, although a few achieved 15 CPS. Daisy wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the "daisy wheel", each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because they could produce text which was as clear and crisp as a typewriter. The fastest letter-quality printers printed at 30 characters per second.
The term dot matrix printer is used for impact printers that use a matrix of small pins to transfer ink to the page.[18] The advantage of dot matrix over other impact printers is that they can produce graphical images in addition to text; however, the text is generally of poorer quality than that of impact printers that use letterforms (type). Dot matrix printers can be either character-based or line-based (that is, using a single horizontal series of pixels across the page), referring to the configuration of the print head. In the 1970s and '80s, dot matrix printers were one of the more common types of printers used for general use, such as for home and small office use. Such printers normally had either 9 or 24 pins on the print head (early 7-pin printers also existed, which did not print descenders). There was a period during the early home computer era when a range of printers were manufactured under many brands, such as the Commodore VIC-1525 using the Seikosha Uni-Hammer system. This used a single solenoid with an oblique striker that would be actuated 7 times for each column of 7 vertical pixels while the head was moving at a constant speed. The angle of the striker would align the dots vertically even though the head had moved one dot spacing in the time. The vertical dot position was controlled by a synchronized, longitudinally ribbed platen behind the paper that rotated rapidly, with a rib moving vertically seven dot spacings in the time it took to print one pixel column.[19] 24-pin print heads were able to print at a higher quality, started to offer additional type styles, and were marketed as Near Letter Quality by some vendors. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use. Some dot matrix printers, such as the NEC P6300, can be upgraded to print in color.
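The pin-matrix principle described above can be sketched in a few lines of Python. This is an illustration only, not any real printer's firmware: a glyph is stored as one byte per print-head column, where bit r set means "fire pin r" (row r) as the head sweeps past. The 5x7 pattern for 'A' is a common hobbyist font pattern, used here purely for illustration.

```python
# Hypothetical sketch of dot-matrix printing: one byte per column,
# bit r = fire pin r. GLYPH_A is an illustrative 5x7 'A' pattern.
GLYPH_A = [0x7E, 0x11, 0x11, 0x11, 0x7E]  # 5 columns, 7 pins

def render(columns, pins=7):
    """Turn per-column pin patterns into rows of printed ('X') and blank ('.') dots."""
    return [
        "".join("X" if col & (1 << row) else "." for col in columns)
        for row in range(pins)
    ]

for line in render(GLYPH_A):
    print(line)
```

Run top to bottom, the seven output rows form a recognisable letter 'A'; a real 9- or 24-pin head works the same way with more pins and finer horizontal stepping.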
This is achieved through the use of a four-color ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism after installation) that raises and lowers the ribbons as needed. Color graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, color graphics can take up to four times longer to print than standard monochrome graphics, or up to 8–16 times as long in high resolution mode. Dot matrix printers are still commonly used in low-cost, low-quality applications such as cash registers, or in demanding, very high volume applications like invoice printing. Impact printing, unlike laser printing, allows the pressure of the print head to be applied to a stack of two or more forms to print multi-part documents such as sales invoices and credit card receipts using continuous stationery with carbonless copy paper. It also has security advantages, as ink impressed into a paper matrix by force is harder to erase invisibly. Dot-matrix printers were being superseded even as receipt printers after the end of the twentieth century. Line printers print an entire line of text at a time. Four principal designs exist. In each case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon, which then presses against the character form, and the impression of the character form is printed onto the paper. Each system could have slight timing issues, which could cause minor misalignment of the resulting printed characters. For drum or typebar printers, this appeared as vertical misalignment, with characters being printed slightly above or below the rest of the line. In chain or bar printers, the misalignment was horizontal, with printed characters being crowded closer together or farther apart.
This was much less noticeable to human vision than vertical misalignment, where characters seemed to bounce up and down in the line, so chain and bar printers were considered to produce higher-quality print. Line printers are the fastest of all impact printers and are used for bulk printing in large computer centres. A line printer can print at 1,100 lines per minute or faster, frequently printing pages more rapidly than many current laser printers. On the other hand, the mechanical components of line printers operate with tight tolerances and require regular preventive maintenance (PM) to produce top-quality print. They are virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many operating systems, which use the abbreviations "lp", "lpr", or "LPT" to refer to printers. Liquid ink electrostatic printers use a chemically coated paper, which is charged by the print head according to the image of the document.[24] The paper is passed near a pool of liquid ink with the opposite charge. The charged areas of the paper attract the ink and thus form the image. This process was developed from the process of electrostatic copying.[25] Color reproduction is very accurate, and because there is no heating, the scale distortion is less than ±0.1%. (All laser printers have an accuracy of ±1%.) Worldwide, most survey offices used this printer before color inkjet plotters became popular. Liquid ink electrostatic printers were mostly available in 36 to 54 inches (910 to 1,370 mm) widths, and also in 6-color printing. These were also used to print large billboards. The technology was first introduced by Versatec, which was later bought by Xerox. 3M also used to make these printers.[26] Pen-based plotters were an alternate printing technology once common in engineering and architectural firms.
Pen-based plotters rely on contact with the paper (but not impact, per se) and special-purpose pens that are mechanically run over the paper to create text and images. Since the pens output continuous lines, they were able to produce technical drawings of higher resolution than was achievable with dot-matrix technology.[27] Some plotters used roll-fed paper, and therefore had minimal restriction on the size of the output in one dimension. These plotters were capable of producing quite sizable drawings. A number of other sorts of printers are important for historical reasons, or for special-purpose uses. Printers can be connected to computers in many ways: directly by a dedicated data cable such as USB, through a short-range radio link like Bluetooth, over a local area network using cables (such as Ethernet) or radio (such as WiFi), or on a standalone basis without a computer, using a memory card or other portable data storage device. Most printers other than line printers accept control characters or unique character sequences to control various printer functions. These may range from shifting from lower to upper case or from black to red ribbon on typewriter printers, to switching fonts and changing character sizes and colors on raster printers. Early printer controls were not standardized, with each manufacturer's equipment having its own set. The IBM Personal Printer Data Stream (PPDS) became a commonly used command set for dot-matrix printers. Today, most printers accept one or more page description languages (PDLs). Laser printers with greater processing power frequently offer support for variants of Hewlett-Packard's Printer Command Language (PCL), PostScript or XML Paper Specification. Most inkjet devices support manufacturer-proprietary PDLs such as ESC/P. The diversity in mobile platforms has led to various standardization efforts around device PDLs, such as the Printer Working Group's (PWG) PWG Raster.
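To make the idea of escape-sequence printer control concrete, the sketch below builds a small ESC/P job as a byte string. ESC @ (reset) and ESC E / ESC F (emphasized on/off) are documented commands from Epson's ESC/P command set; everything else here, including the function name and the sample text, is illustrative, and sending the bytes to a real device (over USB, a parallel port, or the network) is omitted.

```python
# Sketch of composing an ESC/P control stream. Assumes the classic
# ESC/P commands ESC @ (initialize), ESC E (bold on), ESC F (bold off).
ESC = b"\x1b"

def escp_job(text: str) -> bytes:
    job = ESC + b"@"           # initialize printer
    job += ESC + b"E"          # emphasized (bold) on
    job += text.encode("ascii")
    job += ESC + b"F"          # emphasized off
    job += b"\r\n"             # carriage return + line feed
    return job

print(escp_job("INVOICE").hex(" "))
```

A PDL such as PostScript works at a much higher level than this, describing whole pages rather than per-character attributes, but the delivery model is the same: a stream of bytes the printer's controller interprets.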
The speed of early printers was measured in units of characters per minute (cpm) for character printers, or lines per minute (lpm) for line printers. Modern printers are measured in pages per minute (ppm). These measures are used primarily as a marketing tool and are not as well standardised as toner yields. Usually, pages per minute refers to sparse monochrome office documents rather than dense pictures, which usually print much more slowly, especially color images. Speeds in ppm usually apply to A4 paper in most countries in the world, and to letter paper size, about 6% shorter, in North America. The data received by a printer may be: Some printers can process all four types of data; others cannot. Today it is possible to print everything (even plain text) by sending ready bitmapped images to the printer. This allows better control over formatting, especially among machines from different vendors. Many printer drivers do not use the text mode at all, even if the printer is capable of it.[7] A monochrome printer can only produce monochrome images, with only shades of a single color. Most printers can produce only two colors: black (ink) and white (no ink). With halftoning techniques, however, such a printer can produce acceptable grey-scale images too. A color printer can produce images of multiple colors. A photo printer is a color printer that can produce images that mimic the color range (gamut) and resolution of prints made from photographic film. The page yield is the number of pages that can be printed from a toner cartridge or ink cartridge before the cartridge needs to be refilled or replaced.
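The halftoning mentioned above can be illustrated with an ordered (Bayer) dither, one common way a two-state ink printer approximates grey levels. The sketch below uses the classic 2x2 Bayer matrix; real printer drivers typically use larger matrices or error-diffusion methods, so this is a minimal demonstration of the principle rather than any vendor's algorithm.

```python
# Ordered dithering with a 2x2 Bayer matrix: maps grey levels (0-255)
# onto ink (1) / no ink (0). Darker input -> more dots.
BAYER2 = [[0, 2],
          [3, 1]]

def dither(grey):
    """grey: 2-D list of 0-255 values -> 2-D list of 0/1 dot decisions."""
    out = []
    for y, row in enumerate(grey):
        out_row = []
        for x, g in enumerate(row):
            threshold = (BAYER2[y % 2][x % 2] + 0.5) / 4 * 255
            out_row.append(1 if g < threshold else 0)
        out.append(out_row)
    return out
```

Pure black input produces ink everywhere, pure white produces none, and a mid grey produces a checker-like pattern with roughly half the dots inked, which the eye averages into grey at printing resolution.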
The actual number of pages yielded by a specific cartridge depends on a number of factors.[28] For a fair comparison, many laser printer manufacturers use the ISO/IEC 19752 process to measure toner cartridge yield.[29][30] In order to fairly compare the operating expenses of printers with a relatively small ink cartridge to printers with a larger, more expensive toner cartridge that typically holds more toner and so prints more pages before the cartridge needs to be replaced, many people prefer to estimate operating expenses in terms of cost per page (CPP).[29] Retailers often apply the "razor and blades" model: a company may sell a printer at cost and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it. Other manufacturers, in reaction to the challenges of this business model, choose to make more money on printers and less on ink, promoting the latter through their advertising campaigns. This generates two clearly different propositions: "cheap printer – expensive ink" or "expensive printer – cheap ink". Ultimately, the consumer decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer. Printer steganography is a type of steganography – "hiding data within data"[31] – produced by color printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox[32] brand color laser printers, where tiny yellow dots are added to each page.
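The cost-per-page comparison described above is simple arithmetic: cartridge price divided by page yield. The sketch below shows the calculation; all prices and yields are made-up illustrative figures, not real product data.

```python
# Back-of-envelope cost-per-page (CPP) comparison.
# Prices and yields below are invented for illustration only.
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    """CPP = cartridge price / pages printed per cartridge."""
    return cartridge_price / page_yield

inkjet_cpp = cost_per_page(30.0, 300)    # small, cheap ink cartridge
laser_cpp = cost_per_page(80.0, 2000)    # larger, pricier toner cartridge

print(f"inkjet: {inkjet_cpp:.3f} per page, laser: {laser_cpp:.3f} per page")
```

With these figures the cheaper cartridge is the more expensive one per page, which is exactly the effect the CPP metric is meant to expose when comparing a "cheap printer – expensive ink" offer against an "expensive printer – cheap ink" one.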
The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.[33] As of 2020–2021, the largest worldwide vendor of printers is Hewlett-Packard, followed by Canon, Brother, Seiko Epson and Kyocera.[34] Other known vendors include NEC, Ricoh, Xerox, Lexmark,[35] OKI, Sharp, Konica Minolta, Samsung, Kodak, Dell, Toshiba, Star Micronics, Citizen and Panasonic.
The Erasmus+ Programme is the programme combining all the EU's current schemes for education, training, youth and sport, the most recent programme covering the years from 2021 to 2027. Erasmus was created as the European Community Action Scheme for the Mobility of University Students,[1] a European Union (EU) student exchange programme established in 1987.[2][3] Since then, a range of actions in areas relating to education, training, youth and sports have been progressively integrated into the programme. Erasmus+ is the EU's programme to support education, training, youth and sport in Europe. The programme involves the 27 EU Member States and 6 non-EU associated countries (North Macedonia, Serbia, Iceland, Liechtenstein, Norway and Turkey),[4] with 55 National Agencies responsible for the decentralised management of most of the programme's actions. Other countries across the world may also participate in certain parts of the programme. The overall responsibility for the programme's management, direction and evaluation lies with the European Commission (Directorate-General for Education, Youth, Sport and Culture), assisted by its Education, Audio-visual and Culture Executive Agency (EACEA). The objective of Erasmus+ is to promote transnational learning mobility and cooperation as a means of improving quality and excellence, supporting inclusion and equity, and boosting innovation in the fields of education, youth and sport. In all these sectors, the aim is to provide support, through lifelong learning, for the educational, professional and personal development of participants in Europe and beyond. The programme's objective is pursued through three key actions. Other activities include "Jean Monnet" actions, which support teaching, learning, research and debates on European integration matters, e.g. on the EU's future challenges and opportunities.
Full information on the current profile and the activities for which funding within the programme is available can be found in the Erasmus+ Programme Guide.[5] Launched in 1987, the Erasmus programme was originally established to promote closer cooperation between universities and higher education institutions across Europe. Over time, the programme has expanded and is now referred to as Erasmus+, or Erasmus Plus, combining the EU's different schemes for transnational cooperation and mobility in education, training, youth and sport in Europe and beyond. The Erasmus+ programme concluded its first funding cycle from 2014 to 2020 and is now in its second cycle, spanning from 2021 to 2027. Noted for participation among staff, students, young people, and learners across age groups, the programme had engaged over 13 million participants as of 2021. Its name refers to Erasmus of Rotterdam, a leading scholar and inspiring lecturer during the Renaissance period who travelled extensively in Europe to teach and study at a number of universities. At the same time, the word "Erasmus" is also an acronym for "European Community Action Scheme for the Mobility of University Students". In 1989, the Erasmus Bureau invited 32 former Erasmus students for an evaluation meeting in Ghent, Belgium. The lack of peer-to-peer support was singled out as a major issue, and it became a driving force behind the creation of the Erasmus Student Network. The organization supports students from the Erasmus programme and other bilateral agreements and cooperates with national agencies in order to help international students. As of 23 July 2020, the Erasmus Student Network consists of 534 local associations ("sections") in 42 countries and has more than 15,000 volunteers across Europe. As of 2014, 27 years after its creation, the programme had promoted the mobility of more than 3.3 million students within the European community.
More than 5,000 higher education institutions from 38 countries are participating in the project.[6] The Erasmus Programme, along with several other independent programmes, was incorporated into the Socrates programme established by the European Commission in 1994. The Socrates programme ended on 31 December 1999 and was replaced with the Socrates II programme on 24 January 2000, which in turn was replaced by the Lifelong Learning Programme 2007–2013 on 1 January 2007. Beside the more popular student mobility (SMS), the Erasmus+ programme promotes teacher mobility (STA), by which university teachers can spend a short period, for a minimum of 2 teaching days and a maximum of 2 months, teaching at least 8 hours at a foreign partner university. The average and suggested stay is 5 teaching days.[7] The programme is named after the Dutch philosopher, theologian, Renaissance humanist, monk, and devout Roman Catholic, Desiderius Erasmus of Rotterdam, called "the crowning glory of the Christian humanists".[1] Erasmus, along with his good friend Thomas More, became one of the main figures of European intellectual life during the Renaissance. Known for his satire, Erasmus urged internal reform of the Catholic Church. He encouraged a recovery of the Catholic Patristic tradition against what he considered to be contemporary abuses of the Sacraments and certain excessive devotional practices. He famously clashed with the Protestant reformer Martin Luther on the subject of free will. ERASMUS is a backronym meaning European Community Action Scheme for the Mobility of University Students.[1] Erasmus travelled widely across Europe and was a pioneer of the European Republic of Letters.
He was one of the first intellectuals to use a path-breaking technology, namely movable type, as a vehicle for the diffusion of his ideas, and he spent a lot of his time inside printing workshops.[8] The programme's origins can be traced through a series of significant events documented on the official European Commission programme page. In 1969, the concept of promoting cultural, social, and academic exchanges among European students took root, thanks to the efforts of Italian educator and scientific consultant Sofia Corradi for the permanent conference of Italian university rectors. Her role allowed her to raise awareness about this idea and make it known in academic and institutional spheres.[10] The project was born after an initiative of the EGEE student association (now AEGEE) founded by Franck Biancheri (who later became president of the trans-European movement Newropeans), which in 1986–1987 convinced French president François Mitterrand to support the creation of the Erasmus programme. This active collaboration between AEGEE and the European Commission, and especially Domenico Lenarduzzi of the Ministry of Public Education, allowed the approval of the Erasmus programme in 1987. It became an integral part of the Socrates I (1994–1999) and Socrates II (2000–2006) programmes. From 2007 it became one of the elements of the Lifelong Learning Programme (2007–2013). In June 1984, the European Council decided in Fontainebleau to establish an ad-hoc European citizens' committee with the mission to make proposals to improve the image of the European Union. Each council member would select a member, and together they were to present a set of proposals to be approved at a future European Council.
Under the chairmanship of Pietro Adonnino, the committee presented two successive reports[11] that were approved at the Council session in Milan on 28–29 June 1985.[12] Among the proposals advanced in these reports was a suggestion (to be found in the second report, number 5.6: University Cooperation) addressed to the ministers for education and to universities and higher-education establishments. These suggestions were advanced by the Belgian member Prosper Thuysbaert and were discussed and approved by the committee.[citation needed] By the time the Erasmus Programme was adopted in June 1987, the European Commission had been supporting pilot student exchanges for six years. It proposed the original Erasmus Programme in early 1986, but reaction from the then member states varied: those with substantial exchange programmes of their own (essentially France, Germany and the United Kingdom) were broadly hostile[citation needed]; the remaining countries were broadly in favour. Exchanges between the member states and the European Commission deteriorated, and the latter withdrew the proposal in early 1987 to protest against the inadequacy of the triennial budget proposed by some member states.[1] This method of voting, a simple majority, was not accepted by some of the opposing member states, who challenged the adoption of the decision before the European Court of Justice. Although the court held that the adoption was procedurally flawed, it maintained the substance of the decision; a further decision, adapted in the light of the jurisprudence, was rapidly adopted by the Council of Ministers.[citation needed] The programme built on the 1981–1986 pilot student exchanges, and although it was formally adopted only shortly before the beginning of the academic year 1987–1988, it was still possible for 3,244 students to participate in Erasmus in its first year. In 2006, over 150,000 students, or almost 1% of the European student population, took part.
The proportion is higher among university teachers, where Erasmus teacher mobility is 1.9% of the teacher population in Europe, or 20,877 people.[citation needed] From 1987 to 2006, over two million students benefited from Erasmus grants.[13] In 2004, the Erasmus Programme was awarded the Princess of Asturias Award for International Cooperation. After 2007, the Lifelong Learning Programme 2007–2013 replaced the Socrates programme as the overall umbrella under which the Erasmus (and other) programmes operated. The Erasmus Mundus programme is a parallel programme oriented towards globalising European education and is open to non-Europeans, with Europeans being exceptional cases. In May 2012,[14] Fraternité 2020 was registered as Europe's first European Citizens' Initiative, with a goal to increase the budget for EU exchange programmes like Erasmus or the European Voluntary Service starting in 2014. It ultimately collected only 71,057 signatures from citizens across the EU, out of the 1 million signatures needed by 1 November 2013.[15] Erasmus+, also called Erasmus Plus, was the new 14.7 billion euro catch-all framework programme for education, training, youth and sport from 2014 to 2020.[16] The Erasmus+ programme combined all the EU's schemes for education, training, youth and sport, including the Lifelong Learning Programme (Erasmus, Leonardo da Vinci, Comenius, Grundtvig), Youth in Action and five international co-operation programmes (Erasmus Mundus, Tempus, Alfa, Edulink and the programme for co-operation with industrialised countries). The Erasmus+ regulation[17] was signed on 11 December 2013.[18] Erasmus+ provided grants for a wide range of actions, including the opportunity for students to undertake work placements abroad and for teachers and education staff to attend training courses. Projects are divided into two parts – formal and non-formal education – each of them with three key actions.
Erasmus+ key action one provides an opportunity for teachers, headmasters, trainers and other staff of education institutions to participate in international training courses in different European countries.[19] The staff member's home institution must apply to receive the grant to send its staff members abroad for training.[20] Erasmus+ conducted projects in Central Asia's Kazakhstan, funding 40 projects involving 47 Kazakh universities with more than 35.5 million euros.[21] With Brexit, the UK government's decision not to participate in Erasmus+ meant UK students (9,993 in 2018) lost access to the Erasmus programme and EU students (29,797 in 2018) lost access to UK universities,[22] despite some Conservatives such as Suella Braverman having benefited from it and promises made by then-Prime Minister Boris Johnson that "There is no threat to the ERASMUS scheme."[23][24] The UK decided to use the funds that would have been paid into Erasmus+ to create the Turing scheme, allowing 40,000 students per year, from age 4 to university age, to gain experience overseas without being limited to European destinations.[25] On 30 May 2018, the European Commission adopted its proposal for the next Erasmus programme, with a doubling of the budget to 30 billion euros for the period 2021–2027.[26] Further negotiations were expected to take place during the 2019–2024 European parliamentary term between the European Parliament and the European Council before the final programme was adopted.[27] The agreement between the European Parliament and the European Council was adopted, and the new regulation 2021/817 establishing the new Erasmus+ programme was published on 28 May 2021.[28] For the second phase of the programme, the EU has made the commitment to expand Erasmus+ further and to enrich it by introducing a new 'greening' dimension as well as a strong new digital education component.
The new greening dimension is designed to contribute to combating climate change and addressing other global challenges, including health, while the digital education strand seeks in particular to improve the quality of online education in Europe, which has grown considerably in the aftermath of the COVID-19 pandemic. Further transversal priorities for the programme are the commitment to social inclusion and diversity, and to promoting stronger participation in democratic life, common values and civic engagement. Nearly 14 million people have participated in the Erasmus programme since its creation. The number of young participants has increased significantly since 1987: nearly 300,000 a year[when?] compared with only 3,244 in 1987. Spain is the country that has sent the most people on Erasmus, with more than 40,000 per year, slightly ahead of France, Germany and Italy. The countries receiving the most Erasmus students are Spain, with more than 47,000 students, and Germany, with 32,800.[29] Erasmus has positively impacted higher education, bringing educational, social, cultural, and economic benefits to institutions. It professionalizes international cooperation, strengthens academic ties, fosters research collaborations, and forms informal networks, creating friendships across borders. The programme has become a valued source of 'soft power' and diplomatic value for participating countries.[41] There are currently more than 4,000 higher education institutions participating in Erasmus across the 37 countries. In 2012–13, 270,000 took part, the most popular destinations being Spain, Germany, Italy and France.[30] Erasmus students represented 5 percent of European graduates as of 2012.[31][32] EU Member States and third countries associated to the Programme can fully take part in all the Actions of Erasmus+.
The third countries associated to the Programme are North Macedonia, Serbia, Iceland, Liechtenstein, Norway and Turkey.[4] Studies have discussed issues related to selection into the programme and the representativeness of the participants. Some studies have raised doubts about the inclusiveness of the programme, whether by socio-economic background, level of study, or academic performance. One study analysed the financial situation and family background of Erasmus students, showing that despite the fact that access to the programme has been moderately widened, there are still important socio-economic barriers to participation.[33] Another study uncovered what seems to be an adverse self-selection of Erasmus students based on their prior academic performance, with higher-performing students less likely to participate than lower-performing ones; however, this finding was based on about four hundred graduates at a single Spanish university.[34] Conversely, one study looking in detail at French and Italian students found that the primary predictor of participation in Erasmus was students' prior academic record, not the occupation of their parents.[35] The Erasmus Programme had previously been restricted to applicants who had completed at least one year of tertiary-level study, but it is now also available to secondary school students. Indeed, non-formal education programmes for adults can expand opportunities for their students and staff through the development of international partnerships following the priorities of the Programme, such as inclusion and diversity, digital transformation, environment and climate change, or participation in democratic life.[36] Students who join the Erasmus Programme study for at least three months, or do an internship for a period of at least 2 months up to an academic year, in another European country.
The former case is called a Student Mobility for Studies (SMS), while the latter is called a Student Mobility for Placement (SMP).[37][38] The Erasmus Programme guarantees that the period spent abroad is recognised by the student's university when they come back, as long as they abide by terms previously agreed. Switzerland has been suspended as a participant in the Erasmus programme since 2015, following the popular vote to limit the immigration of EU citizens into Switzerland. As a consequence, Swiss students cannot apply for the programme and European students cannot spend time at a Swiss university under that programme.[39] A main part of the programme is that students do not pay extra tuition fees to the university that they visit. Students can also apply for an Erasmus grant to help cover the additional expense of living abroad. Students with disabilities can apply for an additional grant to cover extraordinary expenses. To reduce expenses and increase mobility, many students also use the European Commission-supported accommodation network CasaSwap, or FlatClub, Erasmusinn, Eurasmus,[40] Erasmate or Student Mundial – free websites where students and young people can rent, sublet, offer and swap accommodation on a national and international basis. A derived benefit is that students can share knowledge and exchange tips and hints with each other before and after going abroad.
External evaluations and a multitude of personal stories from successive generations of participants confirm the extent to which their "Erasmus experience" provided them not only with enhanced knowledge and competence in their respective fields, but also with a transformative, life-enhancing dimension, both in personal terms and for their careers. This is true of all parts of the programme forming the present Erasmus+, though most data is available for higher education. For many higher education students, the Erasmus Programme is their first time living and studying in another country. Alongside academic study or practical training, the programme fosters learning and understanding of the host country, and the Erasmus experience is considered both a time for learning and a chance to socialize, experience a different culture, and discover oneself. In this respect, and thanks to its popularity among students, it has emerged as a contemporary cultural phenomenon, inspiring books and films that explore the Erasmus experience. Most of the characters in the French-Spanish film L'Auberge espagnole (2002) are enrolled in the programme, which plays a central role in the film. Various documentary films have been produced around the programme, notably the Italian documentary Erasmus 24 7[42] and Erasmus, notre plus belle année (2017). The self-finding experiences of Erasmus students in France form the basis for Davide Faraldi's first novel Generazione Erasmus (2008). In David Mitchell's 2015 novel Slade House, an Erasmus party is the scene of an important experience for fresher student Sally Timms.
Pakistani novelist Nimra Ahmed's novel Jannat K Patte (Leaves of Heaven) is based on the Erasmus programme: the protagonist Haya goes to Sabancı University in Turkey through Erasmus Mundus, which marks a turning point in her life.[43] In the novel Normal People (2018) by Irish author Sally Rooney, and its subsequent adaptation for television, Marianne goes to Sweden via the Erasmus programme. The online public forum Cafébabel was founded in 2001 by Erasmus exchange programme students and is headquartered in Paris. The forum is based on the principle of participatory journalism. As of June 2020 it had over 15,000 contributors as well as a team of professional editors and journalists in Paris, Brussels, Rome, Madrid and Berlin.[44] Volunteer contributors simultaneously translate the forum into six languages – French, English, German, Italian, Spanish and Polish.[45] Some academics have speculated that former Erasmus students will prove to be a powerful force in creating a pan-European identity. In 2005, the political scientist Stefan Wolff, for example, argued that "Give it 15, 20 or 25 years, and Europe will be run by leaders with a completely different socialisation from those of today", referring to the so-called 'Erasmus generation'.[46] This term describes young Europeans who participate in the Erasmus programme and are assumed to support European integration more actively than their elders.[47] The assumption is that young Europeans, who enjoyed the benefits of European integration, think of themselves as European citizens, and therefore create a base of support for further European integration.
However, questions have been raised about whether there is a positive correlation between the programme and support for European integration.[48] According to the former European Commissioner for Education, Culture, Youth and Sport, Tibor Navracsics, the Erasmus programme is a soft power tool that reflects the political motivation behind its creation,[49] including the task of legitimising the European institutions. This conception was already present in the project of Sofia Corradi, the Italian educationalist who created the Erasmus Programme. She gave particular attention to the need for exchanges between young people from all over Europe to contribute to the strengthening of its unity and integrity.[50] One issue discussed is whether participation in the Erasmus programme helps generate more European solidarity. A study carried out by the European Commission in 2010 shows that participating in Erasmus strengthens tolerance. Another issue is whether Erasmus enables the mixing of Europeans.[51] For example, more than a quarter of Erasmus participants meet their life partner through it, and participation in Erasmus encourages mobility between European countries.[52] Umberto Eco called it sexual integration.[53] The European Commission estimates that the programme has resulted directly in the births of over 1 million children, sometimes called "Erasmus babies".[54] As to whether Erasmus fosters a sense of European identity, the results are mixed. Some research indicates that those participating in Erasmus exchanges are already significantly predisposed to be pro-European,[55][56] leading some scholars to conclude that such exchange programmes are 'preaching to the converted'.[57] However, more recent research has criticised the earlier findings as methodologically flawed, for relying primarily on the experience of British students and on relatively small samples.
Relying on a larger-scale survey of some 1,700 students in six countries, Mitchell found that 'participation in an Erasmus exchange is significantly and positively related to changes in both identification as European and identification with Europe'.[58] In addition, it has been submitted that the earlier literature confused cause and effect, since the existence of such programmes constitutes a tangible benefit provided by the EU to prospective students interested in going abroad, which may cause them to view the EU positively even prior to participation.[59]
https://en.wikipedia.org/wiki/Erasmus_Programme
An identity document (abbreviated as ID) is a document proving a person's identity. If the identity document is a plastic card it is called an identity card (abbreviated as IC or ID card). When the identity document incorporates a photographic portrait, it is called a photo ID.[1] In some countries, identity documents may be compulsory to have. The identity document is used to connect a person to information about the person, often in a database. The connection between the identity document and database is based on personal information present on the document, such as the bearer's full name, birth date, address, an identification number, card number, gender, citizenship and more. A unique national identification number is the most secure way, but some countries lack such numbers or do not show them on identity documents. In the absence of an explicit identity document, other documents such as a driver's license may be accepted in many countries for identity verification. Some countries do not accept driver's licenses for identification, often because in those countries they do not expire as documents and can be old or easily forged. Most countries accept passports as a form of identification. Some countries require all people to have an identity document available at all times. Many countries require all foreigners to have a passport or occasionally a national identity card from their home country available at any time if they do not have a residence permit in the country. A version of the passport considered to be the earliest identity document inscribed into law was introduced by King Henry V of England with the Safe Conducts Act 1414.[2] For the next 500 years, up to the onset of the First World War, most people did not have or need an identity document.
Photographic identification appeared in 1876[3] but did not become widely used until the early 20th century, when photographs became part of passports and other ID documents, all of which came to be referred to as "photo IDs" in the late 20th century. Both Australia and Great Britain, for example, introduced the requirement for a photographic passport in 1915 after the so-called Lody spy scandal.[4] The shape and size of identity cards were standardized in 1985 by ISO/IEC 7810. Some modern identity documents are smart cards that include a difficult-to-forge embedded integrated circuit, standardized in 1988 by ISO/IEC 7816. New technologies allow identity cards to contain biometric information, such as a photograph; face, hand, or iris measurements; or fingerprints. Many countries issue electronic identity cards. Law enforcement officials claim that identity cards make surveillance and the search for criminals easier and therefore support the universal adoption of identity cards. In countries that do not have a national identity card, there is concern about the projected costs and potential abuse of high-tech smartcards. In many countries – especially English-speaking countries such as Australia, Canada, Ireland, New Zealand, the United Kingdom, and the United States – there are no government-issued compulsory identity cards for all citizens. Ireland's Public Services Card is not considered a national identity card by the Department of Employment Affairs and Social Protection (DEASP),[5] but many say it is in fact becoming one, without public debate or even a legislative foundation.[6] There is debate in these countries about whether such cards and their centralised databases constitute an infringement of privacy and civil liberties. Most criticism is directed towards the possibility of abuse of centralised databases storing sensitive data.
A 2006 survey of UK Open University students concluded that the planned compulsory identity card under the Identity Cards Act 2006, coupled with a central government database, generated the most negative response among several options. None of the countries listed above mandate identity documents, but they have de facto equivalents, since these countries still require proof of identity in many situations. For example, all vehicle drivers must have a driving licence, and young people may need to use specially issued "proof of age cards" when purchasing alcohol. Arguments have been made for identity documents as such, for national identity documents specifically, against identity documents as such, against national identity documents, and against the overuse or abuse of identity documents. According to Privacy International, as of 1996, possession of identity cards was compulsory in about 100 countries, though what constitutes "compulsory" varies. In some countries, it is compulsory to have an identity card when a person reaches a prescribed age. The penalty for non-possession is usually a fine, but in some cases it may result in detention until identity is established. For people suspected of crimes such as shoplifting or fare evasion, non-possession might result in such detention, even in countries not formally requiring identity cards. In practice, random checks are rare, except in certain situations. A handful of countries do not issue identity cards. These include Andorra,[14] Australia, the Bahamas,[15] Canada, Nauru, New Zealand, Samoa, Tuvalu and the United Kingdom.[16] Other identity documents such as passports or driver's licenses are then used as identity documents when needed.
However, the governments of the Bahamas and Samoa are planning to introduce new national identity cards in the near future.[17][18] Some countries, like Denmark, have simpler official identity cards, which do not match the security and level of acceptance of a national identity card, and which are used by people without driver's licenses. A number of countries have voluntary identity card schemes. These include Austria, Belize, Finland, France (see France section), Hungary (however, all citizens of Hungary must have at least one of: valid passport, photo-based driving licence, or the National ID card), Iceland, Ireland, Norway, Saint Lucia, Sweden, Switzerland and the United States. The United Kingdom's scheme was scrapped in January 2011 and the database was destroyed. In the United States, the federal government issues optional, non-obligatory identity cards known as "Passport Cards" (which include important information such as the nationality). States, on the other hand, issue optional identity cards for people who do not hold a driver's license as an alternate means of identification. These cards are issued by the same organisation responsible for driver's licenses, usually called the Department of Motor Vehicles. Passport Cards hold limited travel status or provision, usually for domestic travel. For the Sahrawi people of Western Sahara, pre-1975 Spanish identity cards are the main proof that they were Saharawi citizens as opposed to recent Moroccan settlers. They would thus be allowed to vote in an eventual self-determination referendum. Companies and government departments may issue ID cards for security purposes, proof of identity, or as proof of a qualification (without proving identity). For example, all taxicab drivers in the UK carry ID cards. Managers, supervisors, and operatives in construction in the UK can get a photographic ID card,[19] the CSCS (Construction Skills Certification Scheme) card, indicating training and skills including safety training.
The card is not an identity card or a legal requirement, but enables holders to prove competence without having to provide all the pertinent documents. Those working on UK railway lands near working lines must carry a photographic ID card to indicate training in track safety (PTS and other cards), possession of which is dependent on periodic and random alcohol and drug screening. In Queensland and Western Australia, anyone working with children has to take a background check and be issued a Blue Card or Working with Children Card, respectively. Cartão Nacional de Identificação (CNI) is the national identity card of Cape Verde. It is compulsory for all Egyptian citizens aged 16 or older to possess an ID card[20] (Arabic: بطاقة تحقيق شخصية Biṭāqat taḥqīq shakhṣiyya, literally, "Personal Verification Card").[citation needed] In daily colloquial speech, it is generally simply called "el-biṭāqa" ("the card"). Egyptian ID card numbers consist of 14 digits, the national identity number, and the cards expire 7 years after the date of issue. Some feel that Egyptian ID cards are problematic, due to the general poor quality of card holders' photographs and the compulsory requirements for ID card holders to identify their religion and for married women to include their husband's name on their cards.[citation needed] All Gambian citizens over 18 years of age are required to hold a Gambian National Identity Card.[citation needed] In July 2009, a new biometric identity card was introduced.[citation needed] The biometric card is one of the acceptable documents required to apply for a Gambian Driving Licence.[citation needed] Ghana began issuing a national identity card for Ghanaian citizens in 1973.[21] However, the project was discontinued three years later due to problems with logistics and lack of financial support.
This was the first time the idea of a national identification system, in the form of the Ghana Card, arose in the country.[21] Full implementation of the Ghana Card began in 2006.[22] According to the National Identification Authority, over 15 million Ghanaians had been registered for the Ghana Card by September 2020.[23] Liberia has begun the issuance process of its national biometric identification card, which citizens and foreign residents will use to open bank accounts and participate in other government services on a daily basis. More than 4.5 million people are expected to register and obtain ID cards of citizenship or residence in Liberia. The project has already started, with the NIR (National Identification Registry) issuing Citizen National ID Cards. The centralized National Biometric Identification System (NBIS) will be integrated with other government ministries. Resident ID Cards and ECOWAS ID Cards will also be issued.[24] Mauritius requires all citizens who have reached the age of 18 to apply for a National Identity Card. The National Identity Card is one of the few accepted forms of identification, along with passports. A National Identity Card is needed to apply for a passport for all adults, and all minors must take with them the National Identity Card of a parent when applying for a passport.[25] Bilhete de identidade (BI) is the national ID card of Mozambique. Nigeria first introduced a national identity card in 2005, but its adoption then was limited and not widespread. The country is now in the process of introducing a new biometric ID card complete with a SmartCard and other security features. The National Identity Management Commission (NIMC)[26] is the federal government agency responsible for the issuance of these new cards, as well as the management of the new National Identity Database.
The Federal Government of Nigeria announced in April 2013[27] that after the next general election in 2015, all subsequent elections will require citizens to possess a NIMC-issued identity card in order to be eligible to stand for office or vote. The Central Bank of Nigeria is also looking into instructing banks to request a National Identity Number (NIN) from any citizen maintaining an account with any of the banks operating in Nigeria. The proposed kick-off date is yet to be determined. South African citizens aged 15 years and 6 months or older are eligible for an ID card. The South African identity document is not valid as a travel document or for use outside South Africa. Although carrying the document is not required in daily life, it is necessary to show the document or a certified copy as proof of identity in certain situations. The South African identity document used to also contain driving and firearms licences; however, these documents are now issued separately in card format. In mid-2013 a smart card ID was launched to replace the ID book. The cards were launched on July 18, 2013, when a number of dignitaries received the first cards at a ceremony in Pretoria.[28] The government plans to have the ID books phased out over a six- to eight-year period.[29] The South African government is looking into possibly using this smart card not just as an identification card but also for licences, National Health Insurance, and social grants.[30] Every citizen of Tunisia is expected to apply for an ID card by the age of 18; however, with parental approval, a Tunisian citizen may apply for, and receive, an ID card before their eighteenth birthday.[citation needed] In 2016, the government introduced a new bill to parliament to issue new biometric ID documents.
The bill has created controversy among civil society organizations.[31] Zimbabweans are required to apply for National Registration at the age of 16.[citation needed] Zimbabwean citizens are issued a plastic card which contains a photograph and their particulars. Before the introduction of the plastic card, the Zimbabwean ID card used to be printed on anodised aluminium. Along with driving licences, the National Registration Card (including the old metal type) is universally accepted as proof of identity in Zimbabwe. Zimbabweans are required by law to carry identification on them at all times, and visitors to Zimbabwe are expected to carry their passport with them at all times.[citation needed] Afghan citizens over the age of 18 are required to carry a national ID document called a Tazkira. Bahraini citizens must have both an ID card, called a "smart card", which is recognized as an official document and can be used within the Gulf Cooperation Council, and a passport, which is recognized worldwide.[citation needed] Biometric identification has existed in Bangladesh since 2008. All Bangladeshis who are 18 years of age and older are included in a central biometric database, which is used by the Bangladesh Election Commission to oversee the electoral procedure in Bangladesh. All Bangladeshis are issued an NID Card which can be used to obtain a passport, driving licence, or credit card, and to register land ownership. The Bhutanese national identity card (called the Bhutanese Citizenship Card) is an electronic ID card, compulsory for all Bhutanese nationals, and costs 100 Bhutanese ngultrum. The People's Republic of China requires each of its citizens aged 16 and over to carry an identity card. The card is the only acceptable legal document to obtain employment, a residence permit, driving licence or passport, and to open bank accounts or apply for entry to tertiary education and technical colleges.
The Hong Kong Identity Card (or HKID) is an official identity document issued by the Immigration Department of Hong Kong to all people who hold the right of abode, right to land or other forms of limited stay longer than 180 days in Hong Kong. According to the Basic Law of Hong Kong, all permanent residents are eligible to obtain the Hong Kong Permanent Identity Card, which states that the holder has the right of abode in Hong Kong. All persons aged 16 and above must carry a valid legal government identification document in public and must be able to produce it when requested by legal authorities; otherwise, they may be held in detention while their identity and legal right to be in Hong Kong are investigated. While there is no mandatory identity card in India, the Aadhaar card, a multi-purpose national identity card carrying 16 personal details and a unique identification number, has been available to all citizens since 2009.[32] The card contains a photograph, full name, date of birth, and a unique, randomly generated 12-digit National Identification Number. However, the card itself is rarely required as proof; the number or a copy of the card is sufficient. The card has a SCOSTA QR code embedded on it, through which all the details on the card are accessible.[33] In addition to Aadhaar, PAN cards, ration cards, voter cards and driving licences are also used. These may be issued by either the government of India or the government of any state and are valid throughout the nation. The Indian passport may also be used. In Indonesia, residents over 17 are required to hold a KTP (Kartu Tanda Penduduk) identity card. The card identifies whether the holder is an Indonesian citizen or foreign national. In 2011, the Indonesian government started a two-year ID issuance campaign that utilizes smartcard technology and biometric duplication of fingerprint and iris recognition.
This card, called the Electronic KTP (e-KTP), will replace the conventional ID card beginning in 2013. By 2013, it is estimated that approximately 172 million Indonesian nationals will have an e-KTP issued to them. Every citizen of Iran has an identification document called a Shenasnameh (Iranian identity booklet) in Persian (شناسنامه). This is a booklet based on the citizen's birth certificate which features their Shenasnameh National ID number, given name, surname, birth date, birthplace, and the names, birth dates and National ID numbers of their legal ascendants. On other pages of the Shenasnameh, their marriage status, names of spouse(s), names of children, date of every vote cast and eventually their death are recorded.[34] Every Iranian permanent resident above the age of 15 must hold a valid National Identity Card (Persian: کارت ملی) or at least obtain their unique National Number from any of the local Vital Records branches of the Iranian Ministry of Interior.[35] In order to apply for an NID card, the applicant must be at least 15 years old and have a photograph attached to their Birth Certificate, which is undertaken by the Vital Records branch. Since June 21, 2008, NID cards have been compulsory for many transactions in Iran and at Iranian missions abroad (e.g., obtaining a passport, driver's license, any banking procedure, etc.).[36] Every Iraqi citizen must have a National Card (البطاقة الوطنية). Israeli law requires every permanent resident above the age of 16, whether a citizen or not, to carry an identification card called te'udat zehut (Hebrew: תעודת זהות) in Hebrew or biţāqat huwīya (بطاقة هوية) in Arabic. The card is designed in a bilingual form, printed in Hebrew and Arabic; however, the personal data is presented in Hebrew by default and may also be presented in Arabic if the owner so decides.
The card must be presented to an official on duty (e.g., a police officer) upon request; if the resident is unable to do so, they may contact the relevant authority within five days to avoid a penalty. Until the mid-1990s, the identification card was considered the only legally reliable document for many actions such as voting or opening a bank account. Since then, the new Israeli driver's licenses, which include photos and extra personal information, are considered equally reliable for most of these transactions. In other situations, any government-issued photo ID, such as a passport or a military ID, may suffice. Japanese citizens are not required to have identification documents with them within the territory of Japan. When necessary, official documents, such as one's Japanese driver's license, individual number card, basic resident registration card,[37] radio operator license,[38] social insurance card, health insurance card or passport, are generally used and accepted. On the other hand, mid- to long-term foreign residents are required to carry their Zairyū cards,[39] while short-term visitors and tourists (those with a Temporary Visitor status sticker in their passport) are required to carry their passports. Since 1994, Kazakhstan has issued a compulsory identity card (Kazakh: Jeke kuälık), with a validity of 10 years, to all its citizens over the age of 16.[40] In order to receive an ID card, a Kazakh citizen must apply to the NJSC State Corporation "Government for Citizens" at their permanent or temporary place of residence.[41] Currently, there is no legislation requiring persons in Kazakhstan to carry their ID cards in public.[42] In addition, ID card documents can be stored digitally on mobile phones via an eGov app launched in November 2019.[43] The Kuwaiti identity card is issued to Kuwaiti citizens. It can be used as a travel document when visiting countries in the Gulf Cooperation Council.
The first post-Soviet Kyrgyz identity document was regulated by government resolution No. 775 of October 17, 1994, "on the approval of the regulations on the passport system of the Kyrgyz Republic", which included a sample passport and its description.[44] According to the Resolution of the Government of the Kyrgyz Republic dated November 18, 2016 No. 598, 1994 passports with a mark extending the validity period "indefinitely" completely lost their legal force from April 1, 2017 and were recognized as invalid.[45] The Macau Resident Identity Card is an official identity document issued by the Identification Department to permanent residents and non-permanent residents. In Malaysia, the MyKad is the compulsory identity document for Malaysian citizens aged 12 and above. Introduced by the National Registration Department of Malaysia on September 5, 2001, as one of four MSC Malaysia flagship applications[46] and a replacement for the High Quality Identity Card (Kad Pengenalan Bermutu Tinggi), Malaysia became the first country in the world to use an identification card that incorporates both photo identification and fingerprint biometric data on an in-built computer chip embedded in a piece of plastic.[47] Myanmar citizens are required to obtain a National Registration Card (NRC), while non-citizens are given a Foreign Registration Card (FRC). In Nepal, new biometric cards rolled out in 2018, with information displayed in both English and Nepali.[48][49] In Pakistan, all adult citizens must register for the Computerized National Identity Card (CNIC), with a unique number, at age 18. The CNIC serves as an identification document to authenticate an individual's identity as a citizen of Pakistan. Earlier, National Identity Cards (NICs) were issued to citizens of Pakistan. The government has since shifted all its existing records of National Identity Cards (NICs) to the central computerized database managed by NADRA.
New CNICs are machine-readable and have security features such as facial and fingerprint information. At the end of 2013, smart national identity cards (SNICs) were also made available. The Palestinian Authority issues identification cards following agreements with Israel. Since 1995, in accordance with the Oslo Accords, the data is forwarded to Israeli databases and verified.[citation needed] In February 2014, a presidential decision issued by Palestinian president Mahmoud Abbas to abolish the religion field was announced.[50] Israel has objected to abolishing religion on Palestinian IDs because it controls their official records, IDs and passports, and the PA does not have the right to make amendments to this effect without the prior approval of Israel. The Palestinian Authority in Ramallah said that abolishing religion on the ID has been at the center of negotiations with Israel since 1995. The decision was criticized by Hamas officials in the Gaza Strip, who said it is unconstitutional and will not be implemented in Gaza because it undermines the Palestinian cause.[51] A new Philippine identity card, known as the Philippine Identification System (PhilSys) ID card, began to be issued in August 2018 to Filipino citizens and foreign residents aged 18 and above. This national ID card is non-compulsory but should harmonize existing government-initiated identification cards that have been issued – including the Unified Multi-Purpose ID issued to members of the Social Security System, Government Service Insurance System, Philippine Health Insurance Corporation and the Home Development Mutual Fund (Pag-IBIG Fund). In Singapore, every citizen and permanent resident (PR) must register at the age of 15 for an Identity Card (IC). The card is necessary not only for procedures of state but also for day-to-day transactions such as registering for a mobile phone line, obtaining certain discounts at stores, and logging on to certain websites on the internet.
Schools frequently use it to identify students, both online and in exams.[52] Every citizen of South Korea over the age of 17 is issued an ID card called Jumindeungrokjeung (주민등록증). It has had several changes in its history, the most recent form being a plastic card meeting the ISO 7810 standard. The card has the holder's photo and a 15-digit ID number calculated from the holder's birthday and birthplace. A hologram is applied to hamper forgery. This card has no additional features used to identify the holder, save the photo. Other than this card, the South Korean government accepts a Korean driver's license card, an Alien Registration Card, a passport and a public officer ID card as official ID. The E-National Identity Card (abbreviated E-NIC) is the identity document in use in Sri Lanka. It is compulsory for all Sri Lankan citizens who are sixteen years of age and older to have a NIC. NICs are issued by the Department for Registration of Persons. The Registration of Persons Act No. 32 of 1968, as amended by Act Nos. 28 and 37 of 1971 and Act No. 11 of 1981, legislates the issuance and usage of NICs. Sri Lanka is in the process of developing a Smart Card based RFID NIC card which will replace the obsolete 'laminated type' cards, storing the holder's information on a chip that can be read by banks, offices, etc., thereby reducing the need to hold physical documentation of these data by storing them in the cloud. The NIC number is used for unique personal identification, similar to the social security number in the US. In Sri Lanka, all citizens over the age of 16 need to apply for a National Identity Card (NIC). Each NIC has a unique 10-character number, in the format 000000000A (where 0 is a digit and A is a letter). The first two digits of the number are the holder's year of birth (e.g., 93xxxxxxxx for someone born in 1993). The final letter is generally a 'V' or 'X'.
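As a minimal sketch of the NIC number format described above (nine digits followed by a letter, generally 'V' or 'X', with the first two digits giving the year of birth), the following hypothetical Python parser illustrates the structure. The century inference (19xx) is an assumption taken from the article's example of 1993; the real scheme is not specified here.

```python
import re

# Format from the article: 9 digits then a letter ('V' or 'X'),
# first two digits = two-digit year of birth.
NIC_RE = re.compile(r"^(\d{2})(\d{7})([VXvx])$")

def parse_old_nic(nic: str):
    """Return (birth_year, suffix_letter), or None if the string
    does not match the 000000000A format described above."""
    m = NIC_RE.match(nic.strip())
    if m is None:
        return None
    yy, _, letter = m.groups()
    # Assumption: 19xx century, matching the article's example
    # ("93xxxxxxxx for someone born in 1993").
    return 1900 + int(yy), letter.upper()

print(parse_old_nic("931234567V"))  # (1993, 'V')
```

This only checks the surface format; it does not validate the middle digits, whose meaning the article does not describe.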
An NIC number is required to apply for a passport (over 16), driving license (over 18) and to vote (over 18). In addition, all citizens are required to carry their NIC on them at all times as proof of identity, given the security situation in the country.[citation needed] NICs are not issued to non-citizens, who are still required to carry a form of photo identification (such as a photocopy of their passport or foreign driving license) at all times. At times the Postal ID card may also be used. The "National Identification Card" (Chinese: 國民身分證) is issued to all nationals of the Republic of China (the official name of Taiwan) aged 14 and older who have household registration in the Taiwan area. The Identification Card is used for virtually all activities that require identity verification within Taiwan, such as opening bank accounts, renting apartments, employment applications and voting. The Identification Card contains the holder's photo, ID number, Chinese name, and (Minguo calendar) date of birth. The back of the card also contains the person's registered address, where official correspondence is sent, place of birth, and the names of legal ascendants and spouse (if any). If residents move, they must re-register at a municipal office (Chinese: 戶政事務所). ROC nationals with household registration in Taiwan are known as "registered nationals". ROC nationals who do not have household registration in Taiwan (known as "unregistered nationals") do not qualify for the Identification Card and its associated privileges (e.g., the right to vote and the right of abode in Taiwan), but qualify for the Republic of China passport, which, unlike the Identification Card, is not indicative of residency rights in Taiwan. If such "unregistered nationals" are residents of Taiwan, they will hold a Taiwan Area Resident Certificate as an identity document, which is nearly identical to the Alien Resident Certificate issued to foreign nationals residing in Taiwan.
In 1994, the first post-Soviet Tajik internal passport appeared, which was filled in manually. Neither the number nor the series of the document was printed on the inner pages, and the owner's photo was easy to re-stick. This document turned out to be the most convenient for falsification, hence such passports were in great demand among citizens of neighbouring countries hiding from justice.[45]

In Thailand, the Thai National ID Card (Thai: บัตรประจำตัวประชาชน; RTGS: bat pracham tua pracha chon) is an official identity document issued only to Thai nationals. The card proves the holder's identity for receiving government services and other entitlements.

Following the dissolution of the Soviet Union and the establishment of independent Turkmenistan, blank passports of citizens of the USSR of the 1974 model and foreign passports of citizens of the USSR were used in Turkmenistan both as internal identity documents and passports, in which the stamp "Citizen of Turkmenistan" was placed. The unified national passport system was introduced in Turkmenistan on October 25, 1996 by the Decree of the President "On Approval of the Regulations on the Passport System in Turkmenistan".[53] According to the approved regulations, the exchange and issuance of national passports of a citizen of Turkmenistan was to be carried out in the period from October 25, 1996 to December 31, 2001. The new document kept its dual-purpose role as internal identity document and passport. Following the introduction of a Turkmen biometric passport in July 2008 to be used as a travel document, a separate internal passport was issued.

The Federal Authority for Identity and Citizenship is the government agency responsible for issuing National Identity Cards to citizens (UAE nationals), GCC (Gulf Cooperation Council) nationals and residents in the country. All individuals are mandated to apply for the ID card at all ages.
For individuals 15 years of age and above, fingerprint biometrics (10 fingerprints, palm, and writer) are captured in the registration process. Each person has a unique 15-digit identification number (IDN) that they hold throughout their life. The Identity Card is a smart card with state-of-the-art technology in the smart card field and very high security features that make it difficult to duplicate. It is a 144 KB combi smart card, whose electronic chip includes personal information, two fingerprints, a 4-digit PIN code, a digital signature, and certificates (digital and encryption). The personal photo, IDN, name, date of birth, signature, nationality, and the ID card expiry date are fields visible on the physical card. In the UAE it is used as an official identification document for all individuals to benefit from services in the government, some of the non-government, and private entities in the UAE. This supports the UAE's vision of smart government, as the ID card is used to securely access e-services in the country. The ID card can also be used by citizens as an official travel document between GCC countries instead of a passport. The implementation of the national ID program in the UAE enhanced the security of individuals by protecting their identities and preventing identity theft.[54]

Following the dissolution of the Soviet Union, the Uzbek passport was also used as an internal identity document. In September 2020, President of Uzbekistan Shavkat Mirziyoyev signed a decree "On measures to introduce ID cards in the Republic of Uzbekistan". According to the document, from January 1, 2021, a unified personal identification system would be introduced in the country, providing for a gradual replacement of biometric passports with ID cards carrying an electronic data carrier (chip) by 2030. This will also allow citizens to use government services. It is expected that the document processing period will be one day.[55]
Until December 31, 2022, ID cards were issued voluntarily to persons who had reached the age of 16, as well as in case of loss of a passport, a desire to change one's full name or nationality, and for other reasons specified in the legislation. From January 1, 2023 to December 31, 2030, the exchange of biometric passports for ID cards is mandatory as they expire.

In Vietnam, all citizens above 14 years old must possess an Identification Card provided by the local authority, which must be reissued when the holder reaches 25, 40 and 60 years of age. Children from 6 to under 14 years old can request one if needed. Formerly a people's ID document was used.[56]

National identity cards issued to the citizens of the European Union and European Free Trade Association (Iceland, Liechtenstein, Norway, and Switzerland) that state the bearer's citizenship as belonging to an EU/EFTA member can be used as identity documents within the home country, and as travel documents to exercise the right of free movement in the EU or EFTA.[57][58][59] During the UK Presidency of the EU in 2005 a decision was made to: "Agree common standards for security features and secure issuing procedures for ID cards (December 2005), with detailed standards agreed as soon as possible thereafter. In this respect, the UK Presidency put forward a proposal for the EU-wide use of biometrics in national identity cards".[60] From August 2, 2021, the European identity card[61][62] is intended to replace and standardize the various identity card styles currently in use.[a][64][65]

The Austrian identity card is issued to Austrian citizens. It can be used as a travel document when visiting countries in the EU/EFTA, Albania, Andorra, Bosnia and Herzegovina, Georgia, Kosovo, Moldova, Monaco, Montenegro, North Macedonia, San Marino, Serbia, Vatican City, the French overseas territories and the British Crown Possessions, as well as on organized tours to Jordan (through Aqaba airport) and Tunisia.
Only around 10% of the citizens of Austria had this card in 2012, as they can use Austrian driver's licenses or other identity cards domestically and the more widely accepted Austrian passport abroad.

In Belgium, everyone above the age of 12 is issued an identity card (carte d'identité in French, identiteitskaart in Dutch and Personalausweis in German), and from the age of 15 carrying this card at all times is mandatory. For foreigners residing in Belgium, similar cards (foreigner's cards, vreemdelingenkaart in Dutch, carte pour étrangers in French) are issued, although they may also carry a passport, a work permit, or a (temporary) residence permit. Since 2000, all newly issued Belgian identity cards have had a chip (eID card), and the roll-out of these cards was expected to be complete in the course of 2009. Since 2008, the aforementioned foreigner's card has also been replaced by an eID card containing a similar chip. The eID cards can be used both in the public and private sector for identification and for the creation of legally binding electronic signatures. Until the end of 2010, Belgian consulates issued old-style ID cards (105 × 75 mm) to Belgian citizens who were permanently residing in their jurisdiction and who chose to be registered at the consulate (which is strongly advised). Since 2011, Belgian consulates issue electronic ID cards, on which the electronic chip is, however, not activated.

In Bulgaria, it is obligatory to possess an identity card (Bulgarian: лична карта, lichna karta) at the age of 14 and above. Any person above 14 being checked by the police without carrying at least some form of identification is liable to a fine of 50 Bulgarian levs (about €25).

All Croatian citizens may request an Identity Card, called Osobna iskaznica (literally "personal card"). All persons over the age of 18 must have an Identity Card and carry it at all times.
Refusal to carry or produce an Identity Card to a police officer can lead to a fine of 100 kuna or more[needs update] and detention until the individual's identity can be verified by fingerprints. The Croatian ID card is valid in the entire European Union, and can also be used to travel throughout the non-EU countries of the Balkans. The 2013 design of the Croatian ID card is prepared for the future installation of an electronic identity card chip, which was set for implementation in 2014.[66]

The acquisition and possession of a Civil Identity Card is compulsory for any eligible person who has reached twelve years of age. On January 29, 2015, it was announced that all future IDs to be issued will be biometric.[67] They can be applied for at Citizen Service Centres (KEP) or at consulates with biometric data capturing facilities. An ID card costs €30 for adults and €20 for children, with 10 and 5 years' validity respectively. It is a valid travel document for the entire European Union.

In the Czech Republic, the ID card is called Občanský průkaz; an identity card with a photo is issued to all citizens at the age of 15. It is officially recognised by all member states of the European Union for intra-EU travel. Travelling outside the EU mostly requires the Czech passport.

Denmark is the only EU/EEA country that does not issue EU-standard national identity cards or travel documents in a card format. The most common identity documents in Denmark are driving licences and passports, containing both the personal identification number and a photo. Identity documents are not mandatory in Denmark. For those who do not have a passport or driving licence, Danish identification cards (Danish: legitimationskort) are issued by municipalities. Each municipality has its own design, and they are not accepted as valid travel documents outside Denmark.
They were launched in 2017, replacing the previous 'Youth Cards'.[68] Since 2018, information about the nationality of the cardholder has been included, which briefly allowed the card to be used for travel to Sweden.[69] However, in September 2019, Swedish authorities explicitly banned Danish municipal identity cards from being used for entry, for security reasons.[70] In 2021, the Danish Ministry of Interior came to the conclusion that more secure ID cards were not on the agenda due to prohibitive costs.[71] Previously, personal identification number certificates (Danish: Personnummerbevis) were optionally issued in Denmark, but they have been largely replaced by the National Health Insurance Card (Danish: Sundhedskortet), which contains the same information plus health insurance information. The National Health Insurance Card is issued to all health-insured residents in Denmark. It was commonly used as a de facto identity document despite the fact that it has no photo of the holder. Until 2004, the national debit card Dankort contained a photo of the holder and was widely accepted as identification, until Danish banks lobbied successfully to have pictures removed from debit cards. Between 2004 and 2016, municipalities issued a "photo identity card" or "youth card" (Danish: billedlegitimationskort), but it was limited to proof-of-age verification.

The Estonian identity card (Estonian: ID-kaart) is a chipped picture ID in the Republic of Estonia. An Estonian identity card is officially recognised by all member states of the European Union for intra-EU travel. For travelling outside the EU, Estonian citizens may also require a passport. The card's chip stores a key pair, allowing users to cryptographically sign digital documents based on principles of public-key cryptography using DigiDoc. Under Estonian law, since December 15, 2000 the cryptographic signature is legally equivalent to a handwritten signature. The Estonian identity card is also used for authentication in Estonia's ambitious Internet-based voting programme.
In February 2007, Estonia was the first country in the world to institute electronic voting for parliamentary elections. Over 30,000 voters participated in the country's first e-election. By the 2014 European Parliament elections, the number of e-voters had increased to more than 100,000, comprising 31% of the total votes cast.[72]

In Finland, any citizen can get an identification card (henkilökortti/identitetskort). This, along with the passport, is one of two official identity documents. It is available as an electronic ID card (sähköinen henkilökortti/elektroniskt identitetskort), which enables logging into certain government services on the Internet. Driving licences and KELA (social security) cards with a photo are also widely used for general identification purposes, even though they are not officially recognized as such. However, KELA has ended the practice of issuing social security cards with the photograph of the bearer, while it has become possible to embed the social security information onto the national ID card. For most purposes when identification is required, the only valid documents are the ID card, passport or driving licence. However, a citizen is not required to carry any of these.

France has had a national ID card for all citizens since the beginning of World War II in 1940. Compulsory identity documents had been created earlier: for workers from 1803 to 1890, for nomads (gens du voyage) in 1912, and for foreigners in 1917 during World War I. National identity cards were first issued as the carte d'identité française under the law of October 27, 1940, and were compulsory for everyone over the age of 16. Identity cards were valid for 10 years, had to be updated within a year in case of change of residence, and their renewal required paying a fee.
Under the Vichy regime, in addition to the face photograph, the family name, first names, and date and place of birth, the card included the national identity number managed by the national statistics institute INSEE. This number is also used as the national service registration number, as the Social Security account number for health and retirement benefits, for access to court files and for tax purposes. Under decree 55-1397 of October 22, 1955,[73][74] a revised non-compulsory card, the carte nationale d'identité (CNI), was introduced.

The law (Art. 78–1 to 78–6 of the French code of criminal procedure, Code de procédure pénale)[75] mentions only that during an ID check performed by a police, gendarmerie or customs officer, one can prove one's identity "by any means", the validity of which is left to the judgment of the law enforcement official. Though not stated explicitly in the law, an ID card, a driving licence, a passport, a visa, a Carte de Séjour, or a voting card is sufficient according to jurisprudence. The decision to accept other documents, with or without the bearer's photograph, like a Social Security card, a travel card or a bank card, is left to the discretion of the law enforcement officer. According to Art. 78-2 of the French Code of Criminal Procedure, ID checks are only possible:[76] The last case allows the police to check the ID of passers-by, especially in neighbourhoods with a higher crime rate, which are often the poorest, on the condition, according to the Cour de cassation, that the policeman does not refer only to "general and abstract conditions" but to "particular circumstances able to characterise a risk of breach of public order and in particular an offence against the safety of persons or property" (Cass. crim. December 5, 1999, n°99-81153, Bull., n°95).
In case of a necessity to establish one's identity, not being able to prove it "by any means" (for example, the legality of a road-traffic procès-verbal depends on it) may lead to a temporary arrest (vérification d'identité) of up to 4 hours, for the time strictly required for ascertaining one's identity, according to Art. 78-3 of the French Code of criminal procedure (Code de procédure pénale).[75] For financial transactions, ID cards and passports are almost always accepted as proof of identity. Due to possible forgery, driver's licenses are sometimes refused. For transactions by cheque involving a larger sum, two different ID documents are frequently requested by merchants. The current identification cards are now issued free of charge and are optional, and are valid for ten years for minors and fifteen for adults.[80] The current government has proposed a compulsory biometric card system, which has been opposed by human rights groups and by the national authority and regulator on computing systems and databases, the Commission nationale de l'informatique et des libertés (CNIL). Another non-compulsory project is being discussed.

It is compulsory for all German citizens aged 16 or older to possess either a Personalausweis (identity card) or a passport, but not to carry one. Police officers and other officials have a right to demand to see one of those documents (obligation of identification); however, the law does not state that one is obliged to submit the document at that very moment. But as driver's licences, although sometimes accepted, are not legally accepted forms of identification in Germany, people usually choose to carry their Personalausweis with them. Since November 2010, German ID cards have been issued in the ID-1 format and contain an integrated digital signature. The cards have a photograph and a chip with biometric data, including two now-mandatory fingerprints. Until October 2010, German ID cards were issued in the ISO/IEC 7810 ID-2 format.
On November 1, 2019, German ID cards underwent minor textual adjustments concerning the information field on the surname and surname at birth. On August 2, 2021, German ID cards were adapted to Regulation (EU) 2019/1157. The changes include the country code "DE" being shown in white in the blue European flag on the front and two fingerprints (as an encrypted image file) becoming mandatory. In addition, the version number was added to the machine-readable zone. On May 2, 2024, the doctoral title was moved to the back side of the identity card.

A compulsory, universal ID system based on personal ID cards has been in place in Greece since World War II. ID cards are issued by the police on behalf of the ministry responsible for the Headquarters of the Hellenic Police (the Ministry of Public Order, Ministry of Citizen Protection or Ministry of the Interior at times) and display the holder's signature, standardized face photograph, name and surname, legal ascendants' names and surnames, date and place of birth, height, municipality, and the issuing police precinct. There are also two optional fields designed to facilitate emergency medical care: ABO and Rhesus factor blood typing. Fields included in previous ID card formats, such as vocation or profession, religious denomination, domiciliary address, name and surname of spouse, fingerprint, eye and hair color, citizenship and ethnicity, were removed permanently as being intrusive of personal data and/or superfluous for the sole purpose of personal identification. Since 2000, name fields have been filled in both Greek and Latin characters. According to the Signpost Service of the European Commission [reply to Enquiry 36581], old-type Greek ID cards "are as valid as the new type according to Greek law and thus they constitute valid travel documents that all other EU Member States are obliged to accept".
In addition to being equivalent to passports within the EU and EFTA, Greek ID cards are the principal means of identification of voters during elections. Since 2005, the procedure to issue an ID card has been automated, and now all citizens over 12 years of age must have an ID card, which is issued within one work day.[citation needed] Prior to that date, the age of compulsory issue was 14 and the whole procedure could last several months. In Greece, an ID card is a citizen's most important state document[original research?]. For instance, it is required to perform banking transactions if the teller personnel is unfamiliar with the apparent account holder, to interact with the Citizen Service Bureaus (KEP),[81] to receive parcels or registered mail, etc. Citizens are also required to produce their ID card at the request of law enforcement personnel. All the above functions can also be fulfilled with a valid Greek passport (e.g., for people who have lost their ID card and have not yet applied for a new one, people who happen to carry their passport instead of their ID card, or Greeks who reside abroad and do not have an identity card, which can be issued only in Greece, in contrast to passports, which are also issued by consular authorities abroad).

Currently, there are three types of valid ID documents (Személyazonosító igazolvány, formerly Személyi igazolvány, abbr. Sz.ig.) in Hungary: the oldest valid ones are hard-covered, multi-page booklets issued before 1989 by the People's Republic of Hungary; the second type is a soft-cover, multi-page booklet issued after the change of regime. These two have one original photo of the owner embedded, with the original signatures of the owner and the local police's representative. The third type is a plastic card with the photo and the signature of the holder digitally reproduced. These are generally called Personal Identity Cards.
The plastic card shows the owner's full name, maiden name if applicable, birth date and place, mother's maiden name, the cardholder's gender, the ID's validity period and the local state authority which issued the card. The card has a unique ID of six digits plus two letters, and a separate machine-readable zone on the back for identity document scanning devices. It does not have any information about the owner's residential address, nor their personal identity number – this sensitive information is contained on a separate card, called a Residency Card (Lakcímkártya). Personal identity numbers have been issued since 1975; they have the following numeric format: gender (1 digit) – birth date (6 digits) – unique ID (4 digits). They are no longer used as a personal identification number, but as a statistical signature. Other valid documents are the passport (blue colored, or red colored with an RFID chip) and the driver's license; an individual is required to have at least one of them on hand at all times. The Personal Identity Card is mandatory to vote in state elections or open a bank account in the country. ID cards are issued to permanent residents of Hungary; the card has a different color for foreign citizens.[citation needed]

Icelandic state-issued identity cards are called "Nafnskírteini" (lit. 'name certificate').[82] The ID cards are voluntary, conform to biometric ICAO and EU standards, and can be used as a travel document in the EU/EFTA and the Nordic countries.[83] Identity documents are not mandatory to carry or own by law (unless driving a car), but can be needed for bank services, age verification and other situations. Most people (91%) have driving licences for day-to-day use.[84]

Ireland does not issue mandatory national identity cards as such.
Except for a brief period during the Second World War, when the Irish Department of External Affairs issued identity cards to those wishing to travel to the United Kingdom,[85] Ireland has never issued national identity cards as such. Identity documentation is optional for Irish and British citizens. Nevertheless, identification is mandatory to obtain certain services such as air travel, banking, interactions regarding welfare and public services, age verification, and additional situations. "Non-nationals" aged 16 years and over must produce identification on demand to any immigration officer or a member of the Garda Síochána (police).[86] Passport booklets, passport cards, driving licences, GNIB Registration Certificates[87] and other forms of identity cards can be used for identification. Ireland has issued optional passport cards since October 2015.[88] The cards are the size of a credit card, have all the information from the biographical page of an Irish passport booklet, and can be used explicitly for travel in the EU and EFTA. Ireland issues a "Public Services Card", which is useful when identification is needed for contacts regarding welfare and public services. The cards have photographs but not birth dates, and are therefore not accepted by banks. The card is also not considered an identity card by the Department of Employment Affairs and Social Protection (DEASP). In an Oireachtas (parliament) committee hearing held on February 22, 2018, Tim Duggan of that department stated "A national ID card is an entirely different idea.
People are generally compelled to carry (such a card)."[5]

Anyone who is legally resident in Italy, whether a citizen or not, is entitled to request an identity card at the local municipality.[89] However, only Italian citizens can use it as a travel document in lieu of a passport, and obtain it at a consulate or embassy.[90] It is valid throughout Europe (except in Belarus, Russia, Ukraine and the UK) and for travel to Turkey, Georgia, Egypt and Tunisia.[91] An Italian citizen is not legally required to carry an identification document, as they have the right to identify themselves verbally. However, if they are asked to present it by law enforcement and have it with them at that moment, they must show it to avoid committing an offense.[92][93] If public-security officers are not convinced of the claimed identity, as may be the case for a verbally provided identity claim, they may keep the claimant in custody until their identity is ascertained;[94] such an arrest is limited to the time necessary for identification and has no legal consequence. By contrast, all foreigners in Italy are required by law to have an ID with them at all times.[95] Citizens of EU member countries must always be ready to display an identity document that is legally government-issued in their country. Non-EU residents must have their passport with a customs entrance stamp or a residence permit issued by Italian authorities; while all resident/immigrant aliens must have a residence permit (they are otherwise illegal and face deportation), foreigners from certain non-EU countries staying in Italy for a limited amount of time (typically for tourism) may only be required to have their passport with a proper customs stamp.
The current Italian identity document is a contactless electronic card made of polycarbonate in the ID-1 format, with many security features, containing the following items printed by laser engraving:[96] Moreover, the embedded electronic microprocessor chip stores the holder's picture, name, surname, place and date of birth, residency and (only if aged 12 or over) two fingerprints.[97] The card is integrated into the Italian SSO infrastructure, SPID, and permits the holder to use the NFC chip of the card as a login for that service. The card is issued by the Ministry of the Interior in collaboration with the IPZS in Rome and sent to the applicant within 6 business days.[98] The validity is 10 years for adults, 5 years for minors aged 3–18 and 3 years for children aged 0–3,[89] and it is extended or shortened so that it always expires on the holder's birthday.[99] However, the old classic Italian ID card is still valid and has been in the process of being replaced by the new eID card since 4 July 2016,[100] because the lack of a machine-readable zone, the odd size, and the fact that it is made of paper and thus easy to forge often cause delays at border controls; furthermore, foreign countries outside the EU sometimes refuse to accept it as a valid document. These common criticisms were considered in the development of the new Italian electronic identity card, which is in the more common credit-card format and now has many of the latest security features available.

The Latvian "Personal certificate" is issued to Latvian citizens and is valid for travel within Europe (except Belarus, Russia, Ukraine and the UK), Georgia, the French overseas territories and Montserrat (max. 14 days).

The Principality of Liechtenstein has a voluntary ID card system for citizens, the Identitätskarte.
Liechtenstein citizens are entitled to use a valid national identity card to exercise their right of free movement in the EU and EFTA.[58][57][101]

The Lithuanian Personal Identity Card can be used as primary evidence of Lithuanian citizenship, just like a passport, and can also be used as proof of identity both inside and outside Lithuania. It is valid for travel within most European nations.

The Luxembourgish identity card is issued to Luxembourgish citizens. It serves as proof of identity and nationality and can also be used for travel within the European Union and a number of other European countries.

Maltese identity cards are issued to Maltese citizens and other lawful residents of Malta. They can be used as a travel document when visiting countries in the European Union and the European Free Trade Association.

Dutch citizens from the age of 14 are required to be able to show a valid identity document upon request by a police officer or similar official. Furthermore, identity documents are required when opening bank accounts and upon starting work for a new employer. Official identity documents for residents in the Netherlands are: For the purpose of identification in public (but not for other purposes), a Dutch driving licence may often also serve as an identity document. In the Caribbean Netherlands, Dutch and other EU/EFTA identity cards are not valid, and the Identity card BES is an obligatory document for all residents.

In Norway there is no law penalising non-possession of an identity document, but there are rules requiring one for services like banking, air travel and voting (where personal recognition or other identification methods have not been possible). The following documents are generally considered valid (varying a little, since no law lists them):[102] a Nordic driving licence, a passport (often only from the EU and EFTA), a national ID card from the EU, a Norwegian ID card from banks, and some more. Bank ID cards are printed on the reverse of Norwegian debit cards.
To get a bank ID card, either a Nordic passport or another passport together with a Norwegian residence and work permit is needed. The Norwegian identity card was introduced on November 30, 2020.[103][104] Two versions of the card exist: one that states Norwegian citizenship and is usable for exercising freedom of movement within the EU and EFTA,[58][57][101] and the other for general identification.[105] The plan started in 2007 and was delayed several times.[106] Banks were campaigning to be freed from the task of issuing ID cards, stating that it was supposed to be the responsibility of state authorities.[107] Some banks ceased issuing ID cards, so people had to carry their passport for credit card purchases or buying prescribed medication if not in possession of a driving licence.[108][109] Foreign citizens resident in Norway are not allowed to get the Norwegian identity card. When banks stopped issuing the cards and it was suggested that citizens get a national identity card, foreign citizens who did not have a driving licence or a homeland passport were left outside the system. Therefore, as of 2022 there are plans to issue a version of the Norwegian identity card for foreign citizens.[110] As of 2020, a digital ID document was introduced in Norway.[111] It requires a phone app and is useful for age checks and the pick-up of postal packages, as well as other tasks. To activate it, a passport or national ID card is needed. Many young people targeted for alcohol age checks or without a driver's license tend to use it, but so do some older citizens.[112]

Every Polish citizen 18 years of age or older residing permanently in Poland must have an Identity Card (dokument tożsamości) issued by the local Office of Civic Affairs. Polish citizens living permanently abroad are entitled, but not required, to have one.

All Portuguese citizens are required by law to obtain an Identity Card when they turn 6 years of age.
They are not required to carry it at all times but are obliged to present it to the authorities if requested. The old format of the cards (a yellow laminated paper document) featured a portrait of the bearer, their fingerprint, and the names of parent(s), among other information. They are currently being replaced by grey plastic cards with a chip, called Cartão de Cidadão (Citizen's Card), which now incorporate the NIF (tax number), Cartão de Utente (health card) and Social Security number, all of which are protected by a PIN obtained when the card is issued. The new Citizen's Card is technologically more advanced than the former Identity Card and has the following characteristics:

Every citizen of Romania must register for an ID card (Carte de identitate, abbreviated CI) at the age of 14. The CI offers proof of the identity, address, sex and other data of the possessor. It has to be renewed every 10 years. It can be used instead of a passport for travel inside the European Union and several other countries outside the EU. Another ID card is the Provisional ID Card (Cartea de Identitate Provizorie), issued temporarily when an individual cannot get a normal ID card. Its validity extends for up to 1 year. It cannot be used to travel within the EU, unlike the normal ID card. Other forms of officially accepted identification include the driver's license and the birth certificate. However, these are accepted only in limited circumstances and cannot take the place of the ID card in most cases. The ID card is mandatory for dealing with government institutions, banks or currency exchange shops. A valid passport may also be accepted, but usually only for foreigners. In addition, citizens can be expected to provide the personal identification number (CNP) in many circumstances; purposes range from simple unique identification and internal book-keeping (for example when drawing up the papers for the warranty of purchased goods) to being asked for identification by the police.
The CNP is 13 characters long, with the format S-YY-MM-DD-RR-XXX-Y, where S encodes the sex, YY the year of birth, MM the month of birth, DD the day of birth, RR a regional ID, XXX a unique serial number, and Y a control digit. Presenting the ID card is preferred but not mandatory when asked by police officers; in such cases, however, people are expected to provide a CNP or an alternate means of identification that can be checked on the spot (via radio if needed). The owner is required to keep the information on the ID card up to date, the current address of domicile in particular. Failing to do so can expose the citizen to fines or to denial of service by institutions that require a valid, up-to-date card. In spite of this, it is common for people to let the information lapse or to carry expired ID cards. The Slovak ID card (Slovak: Občiansky preukaz) is a picture ID in Slovakia. It is issued to citizens of the Slovak Republic who are 15 or older. A Slovak ID card is officially recognised by all member states of the EU/EFTA for travel. For travel outside the EU, Slovak citizens may also require the Slovak passport, which is a legally accepted form of picture ID as well. Police officers and certain other officials have the right to demand to see one of those documents, and the law states that one is obliged to submit such a document at that very moment. If one fails to comply, law enforcement officers are allowed to insist on personal identification at the police station. Every Slovenian citizen regardless of age has the right to acquire an identity card (Slovene: osebna izkaznica), and every citizen of the Republic of Slovenia of 18 years of age or older is obliged by law to acquire one and carry it at all times (or any other identity document with a picture, e.g., the Slovene passport or a driver's licence). The card is a valid identity document within all member states of the European Union for travel within the EU.
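Returning briefly to the Romanian CNP described earlier: the final digit is a control digit over the first 12 digits. The article does not spell out the algorithm; the sketch below assumes the commonly documented rule (each of the first 12 digits is weighted by the key "279146358279", the products are summed, and the sum is reduced mod 11, with a remainder of 10 mapping to 1).

```python
# Minimal sketch of CNP control-digit validation. The weighting key and the
# mod-11 rule are the publicly documented convention, not stated in the text.
CNP_KEY = "279146358279"

def cnp_control_digit(first12: str) -> int:
    """Control digit for the first 12 digits of a CNP (assumed algorithm)."""
    total = sum(int(d) * int(w) for d, w in zip(first12, CNP_KEY))
    r = total % 11
    return 1 if r == 10 else r  # remainder 10 maps to control digit 1

def is_valid_cnp(cnp: str) -> bool:
    """Check length, digits-only, and the trailing control digit."""
    return (len(cnp) == 13 and cnp.isdigit()
            and int(cnp[12]) == cnp_control_digit(cnp[:12]))
```

The helper names and the sample digits used with them are illustrative only; real CNPs additionally constrain the date and regional fields.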
With the exception of the Faroe Islands and Greenland, it may be used to travel outside of the EU to Norway, Liechtenstein, BiH, Macedonia, Montenegro, Serbia, and Switzerland. The front side displays the name and surname, sex, nationality, date of birth, and expiration date of the card, as well as the number of the ID card, a black-and-white photograph, and a signature. The back contains the permanent address, administrative unit, date of issue, EMŠO, and a code with key information in a machine-readable zone. Depending on the holder's age, the card is valid for 5 years, 10 years, or permanently; it is valid for 1 year for foreigners living in Slovenia, in cases of repeated loss, and in some other circumstances.[113] Since 28 March 2022, it has been possible but not mandatory to acquire a biometric ID card.[114] The identity document used in healthcare institutions is the healthcare insurance card; since April 2023, a biometric ID card may be used instead.[115] In Spain, citizens, resident foreigners, and companies have similar but distinct identity numbers, some with prefix letters, all with a check code.[116] Despite the NIF/CIF/NIE distinctions, the identity number is unique and always has eight digits (the NIE has 7 digits) followed by a letter calculated from a modulo-23 arithmetic check used to verify the correctness of the number; the letters I, Ñ, O and U are not used. This number is the same for tax, social security and all legal purposes. Without this number (or a foreign equivalent such as a passport number), a contract may not be enforceable. In Spain the formal identity number on an ID card is the most important piece of identification. It is used in all public and private transactions.
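The Spanish mod-23 check described above can be sketched as follows. The letter table below is the standard published DNI sequence (which, as noted, skips I, Ñ, O and U); it is supplied here from general knowledge, since the article's own listing of the sequence did not survive.

```python
# Standard published DNI check-letter table (23 letters; I, Ñ, O, U omitted).
DNI_LETTERS = "TRWAGMYFPDXBNJZSQVHLCKE"

def dni_check_letter(number: int) -> str:
    """Check letter for a DNI/NIE numeric part: index the table by number mod 23."""
    return DNI_LETTERS[number % 23]
```

For example, the widely used specimen number 12345678 yields the letter Z, giving the sample identifier 12345678Z.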
It is required to open a bank account, to sign a contract, to have state insurance and to register at a university and should be shown when being fined by a police officer.[120]It is one of the official documents required to vote at any election, although any other form of official ID such as a driving licence or passport may be used. The card also constitutes a validtravel documentwithin theEuropean Union.[121] Non-resident citizens of countries such as the United Kingdom, where passport numbers are not fixed for the holder's life but change with renewal, may experience difficulty with legal transactions after the document is renewed since the old number is no longer verifiable on a valid (foreign) passport. However, a NIE is issued for life and does not change and can be used for the same purposes. Sweden does not have a legal statute for compulsory identity documents. However, ID cards are regularly used to ascertain a person's identity when completing certain transactions. These include but are not limited to banking and age verification. Also, interactions with public authorities often require it, in spite of the fact that there is no law explicitly requiring it, because there are laws requiring authorities to somehow verify people's identity. Without Swedish identity documents difficulties can occur accessing health care services, receiving prescription medications and getting salaries or grants. From 2008, EU passports have been accepted for these services due to EU legislation (with exceptions including banking), but non-EU passports are not accepted. Identity cards have therefore become an important part of everyday life. There are currently three public authorities that issue ID cards: theSwedish Tax Agency, theSwedish Police Authority, and theSwedish Transport Agency. The Tax Agency cards can only be used within Sweden to validate a person's identity, but they can be obtained both by Swedish citizens and those that currently reside in Sweden. 
A Swedish personal identity number is required. It is possible to get one without already holding a Swedish ID card; in this case a person who does hold such a card must vouch for the applicant's identity, and that guarantor must be a verifiable relative, the applicant's employer, or one of a few other approved categories of people. The Police can only issue identity documents to Swedish citizens. They issue an internationally recognised identity card according to the EU standard, usable for intra-European travel, as well as Swedish passports, which are acceptable as identity documents worldwide.[122] The Transport Agency issues driving licences, which are valid as identity documents in Sweden. To obtain one, an applicant must be approved as a driver and must present another Swedish identity document as proof of identity. In the past, certain groups experienced problems obtaining valid identification documents, owing to the initial process required to validate one's identity and to the unregulated security requirements of the commercial companies that issued the documents. Since July 2009, the Tax Agency has issued ID cards, which has simplified the identity validation process for foreign passport holders. There are still identity validation requirements that can cause trouble, especially for foreign citizens, but the list of people who can vouch for one's identity has been extended. Swiss citizens have no obligation of identification in Switzerland and are thus not required by law to be able to show a valid identity document upon request by a police officer or similar official. However, identity documents are required when opening a bank account or when dealing with public administration. Relevant in the daily life of Swiss citizens are the Swiss ID card[123] and the Swiss driver's licence;[124] the latter must be presented upon request by a police officer when driving a motor vehicle such as a car, motorcycle, bus or truck.
Swiss citizens are entitled to use a valid national identity card to exercise their right of free movement in EFTA[58] and the EU.[59] A Swiss passport[125] is needed only for travel abroad to countries that do not accept the Swiss ID card as a travel document. There is no national identity card in the principality; passports and driving licences are most commonly used for identification.[14] When visiting France or Spain, a passport is needed in the absence of a national identity card, although driving licences are often used and accepted unofficially. Since January 12, 2009, the Government of Albania has issued a compulsory electronic and biometric ID card (Letërnjoftim) for its citizens.[126] Every citizen must apply for the biometric ID card at age 16. Azerbaijan issues a compulsory ID card (Şəxsiyyət vəsiqəsi) for its citizens. Every citizen must apply for an ID card at age 16. Belarus has combined the international passport and the internal passport into one document, which is compulsory from age 14. It follows the international passport convention but has extra pages for domestic use. Bosnia and Herzegovina allows every person over the age of 15 to apply for an ID card, and all citizens over the age of 18 must have the national ID card with them at all times. A penalty is issued if a citizen does not have the ID card on them or refuses to show proof of identification. The Kosovo Identity Card is an ID card issued to the citizens of Kosovo for the purpose of establishing their identity, as well as serving as proof of residency, right to work and right to public benefits. It can be used instead of a passport for travel to some neighboring countries. In Moldova, identity cards (Romanian: Carte de identitate) have been issued since 1996. The first person to get an identity card was the former president of Moldova, Mircea Snegur. Since then, all Moldovan citizens are required to have and use it inside the country.
It cannot be used to travel outside the country; however, it is possible to pass the so-calledTransnistrianborder with it. The Moldovan identity card may be obtained by a child from his/her date of birth. The state Public Services Agency is responsible for issuing identity cards and for storing data of all Moldovan citizens. Monégasque identity cards are issued toMonégasque citizensand can be used for travel within theSchengen Area. InMontenegroevery resident citizen over the age of 14 can have theirLična kartaissued, and all persons over the age of 18 must have ID cards and carry them at all times when they are in public places. It can be used for international travel toBosnia and Herzegovina,Serbia,North Macedonia,KosovoandAlbaniainstead of thepassport. The identity card of North Macedonia (Macedonian:Лична карта, Lična karta) is a compulsory identity document issued inNorth Macedonia. The document is issued by the police on behalf of the Ministry of Interior. Every citizen over 18 must be issued this identity card. The role of identity documentation is primarily played by the so-calledRussian internal passport, a passport-size booklet which contains a person's photograph, birth information and other data such as registration at the place of residence (informally known aspropiska), marital data, information about military service and underage children. Internal passports are issued by theMain Directorate for Migration Affairsto all citizens who reach their 14th birthday and do not reside outside Russia. They are re-issued at the age 20 and 45. The internal passport is commonly considered the only acceptable ID document in governmental offices, banks, while traveling by train or plane, getting a subscription service, etc. If the person does not have an internal passport (i.e., foreign nationals or Russian citizens who live abroad), an international passport can be accepted instead, theoretically in all cases. 
Another exception is army conscripts, who produce the Identity Card of the Russian Armed Forces. Internal passports can also be used to travel to Belarus, Kazakhstan, Tajikistan, Kyrgyzstan, Abkhazia and South Ossetia.[citation needed] Other documents, such as driver's licenses or student cards, can sometimes be accepted as ID, subject to regulations. The national identity card is compulsory for all Sanmarinese citizens;[127] it has been biometric and valid for international travel since 2016. In Serbia, every resident citizen over the age of 10 can have their Lična karta issued, and all persons over the age of 16 must have ID cards and carry them at all times when they are in public places.[128] It can be used for international travel to Bosnia and Herzegovina, Montenegro and Macedonia instead of the passport.[129] The contact microchip on the ID is optional. Kosovo issues its own identity cards. These documents are accepted by Serbia when used as identification while crossing the Serbia–Kosovo border.[130] They can also be used for international travel to Montenegro[131] and Albania.[132] The Turkish national ID card (Turkish: Nüfus Cüzdanı) is compulsory for all Turkish citizens from birth. Cards for males and females have different colors. The front shows the first and last name of the holder, the first names of legal ascendants, the birth date and place, and an 11-digit ID number. The back shows marital status, religious affiliation, the region of the county of origin, and the date of issue of the card. On February 2, 2010, the European Court of Human Rights ruled in a 6-to-1 vote that the religious affiliation section of the Turkish identity card violated articles 6, 9, and 12 of the European Convention on Human Rights, to which Turkey is a signatory. The ruling was expected to compel the Turkish government to omit religious affiliation entirely from future identity cards. The Turkish police are allowed to ask any person to show ID, and refusing to comply may lead to arrest.
It can be used for international travel to Northern Cyprus, Georgia and Ukraine instead of a passport. The Ministry of Interior of Turkey released EU-style identity cards for all Turkish citizens in 2017. The new identity cards are fully biometric and can be used as a bank card, as a bus ticket, or for international trips. The Ukrainian identity card or Passport of the Citizen of Ukraine (also known as the Internal passport or Passport Card) is an identity document issued to citizens of Ukraine. Every Ukrainian citizen aged 14[133] or above and permanently residing in Ukraine must possess an identity card issued by local authorities of the State Migration Service of Ukraine. Ukrainian identity cards are valid for 10 years (or 4 years, if issued to citizens aged 14 but less than 18) and afterwards must be exchanged for a new document. As of July 2021, the UK has no national identity card and no general obligation of identification, although drivers may be required to produce their licence and insurance documents at a police station within 7 days of a traffic stop if they are not able to provide them at the time. The UK had an identity card during World War II as part of a package of emergency powers; this was abolished in 1952 by repealing the National Registration Act 1939. Identity cards were first proposed in the mid-1980s for people attending football matches, following a series of high-profile hooliganism incidents involving English football fans. However, this proposed identity card scheme never went ahead, as Lord Taylor of Gosforth ruled it out as "unworkable" in the Taylor Report of 1990. The Identity Cards Act 2006 implemented a national ID scheme backed by a National Identity Register – an ambitious database linking a variety of data including police, health, immigration, electoral roll and other records.
Several groups such asNo2IDformed to campaign against ID cards in Britain and more importantly the NIR database, which was seen as a "panopticon" and a significant threat to civil liberties. The scheme saw setbacks after theLoss of United Kingdom child benefit data (2007)and other high-profiledata lossesturned public opinion against the government storing large, linked personal datasets. Various partial rollouts were attempted such as compulsory identity cards for non-EU residents in Britain (starting late 2008), with voluntary registration for British nationals introduced in 2009 and mandatory registration proposed for certain high-security professions such as airport workers. However, the mandatory registrations met with resistance from unions such as theBritish Airline Pilots' Association.[134] After the2010 general electiona new coalition government was formed. Both parties had pledged to scrap ID cards in their election manifestos. The 2006 act was repealed by theIdentity Documents Act 2010which also required that the nascent NIR database be destroyed. TheHome Officeannounced that the national identity register had been destroyed on February 10, 2011.[135]Prior to the 2006 Act, work had started to updateBritish passportswith RFID chips to support the use ofePassport gates. This continued, with traditional passports being replaced with RFID versions on renewal. Driving licences, particularly the photocard driving licence introduced in 1998, andpassportsare now the most widely used ID documents in the United Kingdom, but the former cannot be used as travel documents, except within theCommon Travel Area. However, driving licences from the UK and EU countries are usually accepted within EU and EFTA countries for identity verification. Most people do not carry their passports in public without knowing in advance that they are going to need them as they do not fit in a typicalwalletand are relatively expensive to replace. 
Consequently, driving licences are the most common and convenient form of ID in use, along with PASS-accredited cards, used mainly for proof-of-age purposes. Unlike a travel document, they do not show the holder's nationality or immigration status. For proof-of-age purchases, a provisional driving licence is often used by those who do not hold a full driving licence, as they are easy to obtain. Generally, in day-to-day life, most authorities do not ask for identification from individuals in a sudden, spot-check manner, such as by police or security guards, although this may become a concern in instances of stop and search. Gibraltar has operated an identity card system since 1943. The cards issued were originally folded cardboard, similar to the wartime UK identity cards abolished in 1950. There were different colours for British and non-British residents. Gibraltar requires all residents to hold identity cards, which are issued free. In 1993 the cardboard ID card was replaced with a laminated version. However, although valid as a travel document to the UK, they were not accepted by Spain. A new version in an EU-compliant format was issued and is valid for use throughout the EU, although as very few are seen, there are sometimes problems when used, even in the UK. ID cards are needed for some financial transactions, but apart from that and crossing the frontier with Spain, they are not in common use.[citation needed] The Belizean card is called the "Identification Card R.R". It is optional, although compulsory for voting and other government transactions, and is also available to any Commonwealth citizen who has lived in Belize for a year without leaving and has spent at least 2 months in the area where the person is registered.[136][137] In Canada, different forms of identification documentation are used, but there is no de jure national identity card.
TheCanadian passportis issued by the federal (national) government, and the provinces and territories issue various documents which can be used for identification purposes. The most commonly used forms of identification within Canada are the health card anddriver's licenceissued by provincial and territorial governments. The widespread usage of these two documents for identification purposes has made them de facto identity cards. In Canada, a driver's license usually lists the name, home address, height and date of birth of the bearer. A photograph of the bearer is usually present, as well as additional information, such as restrictions to the bearer's driving licence. The bearer is required by law to keep the address up to date.[citation needed] A few provinces, such as Québec and Ontario, issue provincial health care cards which contain identification information, such as a photo of the bearer, their home address, and their date of birth. British Columbia, Saskatchewan and Ontario are among the provinces that produce photo identification cards for individuals who do not possess a driving licence, with the cards containing the bearer's photo, home address, and date of birth.[138][139][140] For travel abroad, a passport is almost always required. There are a few minor exceptions to this rule; required documentation to travel among North American countries is subject to theWestern Hemisphere Travel Initiative, such as theNEXUS programmeand theEnhanced Drivers Licenseprogramme implemented by a few provincial governments as a pilot project. These programmes have not yet gained widespread acceptance, and the Canadian passport remains the most useful and widely accepted international travel document. Optional and not fully launched. Legislation was enacted in 2022.[141][142][143] EveryCosta Ricancitizenmust carry anidentity cardimmediately after turning 18. 
The card is namedCédula de Identidadand it is issued by the local registrar's office (Registro Civil), an office belonging to the local elections committee (Tribunal Supremo de Elecciones), which in Costa Rica has the same rank as the Supreme Court. Each card has a unique number composed of nine numerical digits, the first of them being the province where the citizen was born (with other significance in special cases such as granted citizenship to foreigners,adopted persons, or in rare cases, old people for whom nobirth certificatewas processed at birth). After this digit, two blocks of four digits follow; the combination corresponds to the unique identifier of the citizen. It is widely requested as part of every legal and financial purpose, often requested at payment withcreditordebit cardsfor identification guarantee and requested for buyingalcoholic beveragesor cigarettes or upon entrance to adults-only places like bars. The card must be renewed every ten years and is freely issued again if lost. Among the information included there are, on the front, two identification pictures and digitized signature of the owner, identification number (known colloquially just as thecédula), first name, first and second-last names and an optionalknown asfield. On the back, there is again the identification number, birth date, where the citizen issues its vote for national elections or referendums, birthplace, gender, date when it must be renewed and amatrix codethat includes all this information and even a digitized fingerprint of the thumb and index finger. The matrix code is not currently being used nor inspected by any kind of scanner. Besides this identification card, every vehicle driver must carry adriving licence, an additional card that uses the same identification number as the ID card (Cédula de Identidad) for the driving license number. A passport is also issued with the same identification number used in the ID card. 
The same situation occurs with the Social Security number; it is the same number used for the ID card. All non-Costa Rican citizens with resident status must carry an ID card (Cédula de Residencia) or, otherwise, a passport and a valid visa. Each resident's ID card has a unique number composed of 12 digits; the first three indicate the holder's nationality and the rest form a sequence used by the immigration authority (called Dirección General de Migración y Extranjería). As with Costa Rican citizens, their Social Security number and their driver's license (if they have one) use the same number as their resident's ID card. A "Cédula de Identidad y Electoral" (Identity and Voting Document) is a national ID that is also used for voting in both presidential and congressional ballots. Each "Cédula de Identidad y Electoral" has a unique serial number composed of the serial of the municipality of current residence, a sequential number, and a verification digit. This national ID card is issued to all legal residents of adult age. It is usually required to validate job applications, legally binding contracts, official documents, buying or selling real estate, opening a personal bank account, obtaining a driver's license, and the like. It is issued free of charge[144] by the "Junta Central Electoral" (Central Voting Committee) to all Dominicans not living abroad at the time of reaching adulthood (16 years of age), or younger if they are legally emancipated. Foreigners who have taken permanent residence and have not yet applied for Dominican naturalization (i.e., have not opted for Dominican citizenship but have taken permanent residence) are required to pay an issuing tariff and must bring along their non-expired country-of-origin passport and deposit photocopies of their Residential Card and Dominican Red Cross blood type card.
Foreigners residing on a permanent basis must renew their "Foreign ID" on a 2-, 4-, or 10-year renewal basis (about US$63–US$240, depending on the desired renewal period).[145] In El Salvador, the ID card is called the Documento Único de Identidad (DUI) (Unique Identity Document). Every citizen above 18 years of age must carry this ID for identification purposes at all times. It is not based on a smartcard but on a standard plastic card with two-dimensional bar-coded information, a picture, and a signature. In January 2009, the National Registry of Persons (RENAP) in Guatemala began offering a new identity document in place of the Cédula de Vecindad (neighborhood identity document) to all Guatemalan citizens and foreigners. The new document is called the "Documento Personal de Identificación" (DPI) (Personal Identity Document). It is based on a smartcard with a chip and includes an electronic signature and several measures against fraud.[146] In Jamaica, the national ID is optional, although compulsory for voting and other government transactions.[147] Since 2022, a brand new biometric National ID Card has been unveiled, free of charge for Jamaican citizens.[148][149] Not mandatory, but needed for almost all official paperwork, the CURP is Mexico's standardized identity document. It can be either a printed green wallet-sized card (without a photo) or simply an 18-character identification key printed on a birth or death certificate.[150] While Mexico has a national identity card (cédula de identidad personal), it is only issued to children aged 4–17.[151] Unlike most other countries, Mexico has assigned a CURP to nearly all minors, since both the government and most private schools ask parents to supply their children's CURP to keep a database of all the children. Also, minors must produce their CURP when applying for a passport or being registered at public health services by their parents.
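As an illustration only, the 18-character CURP key has a fixed shape that can be sanity-checked. The pattern below is an approximation assembled from the commonly documented layout (4 letters, 6 birth-date digits, a sex marker, a 2-letter state code, 3 consonants, a disambiguation character, and a final digit); the official rules also constrain the state codes and the check digit, and the sample string is made up.

```python
import re

# Approximate shape of an 18-character CURP key (hypothetical helper; this
# checks structure only, not the official state-code or check-digit rules).
CURP_SHAPE = re.compile(
    r"^[A-Z]{4}"             # 4 letters derived from the holder's names
    r"\d{6}"                 # birth date as YYMMDD
    r"[HM]"                  # sex marker (H or M here)
    r"[A-Z]{2}"              # 2-letter state code
    r"[B-DF-HJ-NP-TV-Z]{3}"  # 3 internal consonants
    r"[A-Z0-9]"              # disambiguation character
    r"\d$"                   # final digit
)

def looks_like_curp(key: str) -> bool:
    """True if the string has the rough shape of a CURP key."""
    return CURP_SHAPE.fullmatch(key) is not None
```

A made-up string such as "GOMC900215HDFRRL09" passes this shape check, while arbitrary 18-character strings do not.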
Most adults need the CURP code too, since it is required for almost all governmental paperwork like tax filings and passport applications. Most companies ask for a prospective employee's CURP, voting card, or passport rather than birth certificates.[citation needed] To have a CURP issued for a person, a birth certificate or similar proof must be presented to the issuing authorities to prove that the information supplied on the application is true. Foreigners applying for a CURP must produce a certificate of legal residence in Mexico. Foreign-born naturalized Mexican citizens must present their naturalization certificate. On August 21, 2008, the Mexican cabinet passed the National Security Act, which compels all Mexican citizens to have a biometric identity card, called Citizen Identity Card (Cédula de identidad ciudadana) before 2011.[citation needed] On February 13, 2009, the Mexican government designated the state ofTamaulipasto start procedures for issuing a pilot program of the national Mexican ID card.[citation needed] Although the CURP is thede jureofficial identification document in Mexico, theInstituto Nacional Electoral'svoting cardis thede factoofficial identification and proof of legal age for citizens of ages 18 and older. On July 28, 2009, Mexican President Felipe Calderón, facing the Mexican House of Representatives, announced the launch of the Mexican national Identity card project, which will see the first card issued before the end of 2009. Thecédula de identidad personalis required at age 12 (cedula juvenil) and age 18. Panamanian citizens must carry theircédulaat all times. New biometric national identity cards rolled out in 2019. 
The card must be renewed every 10 years (every 5 years for those under 18), and it can only be replaced 3 times (with each replacement costing more than the previous one) without requiring a background check, to confirm and verify that the card holder is not selling his or her identity to third parties for human trafficking or other criminal activities. All cards have QR, PDF417, and Code 128 barcodes. The QR code holds all printed (on the front of the card) text information about the card holder, while the PDF417 barcode holds, in JPEG format encoded with Base64, an image of the fingerprint of the left index finger of the card holder. Panamanian biometric/electronic/machine readable ID cards are similar to biometric passports and current European/Czech national ID cards and have only a small PDF417 barcode, with a machine readable area, a contactless smart card RFID chip, and golden contact pads similar to those found in smart card credit cards and SIM cards. The machine-readable code contains all printed text information about the card holder (it replaces the QR code) while both chips (the smart card chip is hidden under the golden contact pads) contain all personal information about the card holder along with a JPEG photo of the card holder, a JPEG photo with the card holder's signature, and another JPEG photo but with all 10 fingerprints of both hands of the card holder. Earlier cards used Code 16K and Code 49 barcodes with magnetic stripes.[152][153] There is no compulsory federal-level ID card that is issued to all U.S. citizens. U.S. citizens and nationals may obtainpassportsorU.S. passport cardsif they choose to, but this is optional and other alternatives are more popular. 
For most people,driver's licensesissued by the respectivestateand territorial governments have become thede factoidentity cards, and are used for many identification purposes, such as when purchasing alcohol and tobacco, opening bank accounts, and boarding planes, along with confirming a voter's identity in states withvoter photo identificationlaws. Individuals who do not drive can obtain an identification card with the same functions from the state agency that issues driver's licenses. In addition, many schools issue student and teacher ID cards.[154] The United States passed theREAL ID Acton May 11, 2005. The bill requires states to redesign their driver's licenses to comply with federal security standards by December 2009. Federal agencies would then reject licenses or identity cards that do not comply, which would force Americans accessing everything from airplanes to courthouses to have federally mandated cards. At airports, those not having compliant licenses or cards would be redirected to a secondary screening location.[155]As of 2024, every state has implemented some ID that satisfies the standard.[156] In 2006, theU.S. State Departmentstudied the idea of issuing passports withradio-frequency identification, or RFID, chips embedded in them. TheUnited States passportverifies both personal identity and citizenship, but is not mandatory for citizens to possess within the country and is issued by the U.S. State Department on a discretionary basis. Since February 1, 2008, U.S. citizens may apply forpassport cards, in addition to the usualpassportbooks. Although their main purpose is forland and sea travel within North America, the passport card may also be accepted by federal authorities (such as for domestic air travel or entering federal buildings). TheTransportation Security Administration(TSA) accepts the passport card as an identity document at airport security checkpoints.[157] U.S. Citizenship and Immigration Servicesallows the U.S. 
passport card to be used in the Employment Eligibility Verification Form I-9 process.[158] The passport card is considered a "List A" document that may be presented by newly hired employees during the employment eligibility verification process to show work-authorized status. "List A" documents are those used by employees to prove both identity and work authorization when completing the Form I-9. The basic document needed to establish a person's identity and citizenship in order to obtain a passport is a birth certificate. These are issued either by the U.S. state of birth or by the U.S. Department of State for overseas births to U.S. citizens. A child born in the U.S. is in nearly all cases (except for children of foreign diplomats) automatically a U.S. citizen. The parents of a child born overseas to U.S. citizens can report the birth to the U.S. embassy/consulate to obtain a Consular Report of Birth Abroad.[159] Social Security numbers (SSNs) and cards are issued by the U.S. Social Security Administration for tracking Social Security taxes and benefits. They have become the de facto national identification number for federal and state taxation, private financial services, and identification with various companies. SSNs do not establish citizenship because they can also be issued to permanent residents. They typically can only be part of the establishment of a person's identity; a photo ID that verifies date of birth is also usually requested. A mix of documents can be presented to, for instance, verify one's legal eligibility to take a job within the United States. Identity and citizenship are established by presenting a passport alone, but this must be accompanied by a Social Security card for taxation ID purposes. A driver's license/state ID establishes identity alone, but does not establish citizenship, as these can be provided to non-citizens as well.
In this case, an applicant without a passport may sign an affidavit of citizenship or be required to present a birth certificate. They must also submit their Social Security number. "Residency" within a certain U.S. jurisdiction, such as a voting precinct, can be proven if the driver's license or state ID has the home address printed on it corresponding to that jurisdiction. Utility bills or other pieces of official printed mail can also suffice for this purpose. In the case of voter registration, citizenship must also be proven with a passport, birth certificate, or signed citizenship affidavit. The Selective Service System has in the past, in times of a military draft, issued an identification card for men who were eligible for the draft. Australia does not have a national identity card. Instead, various identity documents are used or required to prove a person's identity, whether for government or commercial purposes. Currently, driver licences and photo cards, both issued by the states and territories, are the most widely used personal identification documents in Australia. Additionally, the Australia Post Keypass identity card, issued by Australia Post, can be used by people who do not have an Australian driver licence or an Australian state- or territory-issued identity photo card. Photo cards are also called "Proof of Age Cards" or similar and can be issued to people as another type of identity document. Identification indicating age is commonly required to purchase alcohol and tobacco and to enter nightclubs and gambling venues. Other important identity documents include a passport, an official birth certificate, an official marriage certificate, cards issued by government agencies (typically social security cards), some cards issued by commercial organisations (e.g., a debit or credit card), and utility accounts.
Often, some combination of identity documents is required, such as an identity document linking a name, photograph and signature (typically photo ID in the form of a driver licence or passport), evidence of operating in the community, and evidence of a current residential address. New alcohol laws in the state of Queensland require some Brisbane-based pubs and bars to scan ID documents against a database of people who should be denied alcohol, for which foreign passports and driver's licences are not valid.[160] An "Identification Card" appears to exist among citizens of the Marshall Islands, but little information is available on these documents.[citation needed] National identity cards, called "FSM Voters National Identity card", are issued on an optional basis, free of charge. The identity cards were introduced in 2005.[161] New Zealand does not have an official ID card. The most commonly carried form of identification is a driver licence issued by the Transport Agency. Other forms of special-purpose identification documents are issued by different government departments; for example, a Firearms Licence is issued to gun owners by the Police, and the SuperGold card is issued to elderly people by the Ministry of Social Development. For purchasing alcohol or tobacco, the only legal forms of identification are a New Zealand or foreign passport, a New Zealand driver licence, and a Kiwi Access Card (formerly known as an 18+ card)[162] from the Hospitality Association of New Zealand.[163] Overseas driver licences are not legal for this purpose. For opening a bank account, each bank has its own list of documents that it will accept. Generally speaking, banks accept a foreign or NZ passport, a NZ Firearms Licence, or a foreign ID card by itself.
If the customer does not have these documents, they will need to produce two different documents on the approved list (for example, a driver licence and a marriage certificate).[164] Republic of Palau Identification Cards are primarily issued to foreign nationals who are not eligible to acquire a Palau passport or driver's license, under the Digital Residency Act. Foreign nationals are required to undergo a sanctions check. E-National ID cards were rolled out in 2015.[165] "National Voter's Identity cards" are optional, issued upon request.[166][167] Tonga's National ID Card was first issued in 2010, and it is optional, as are driver's licenses and passports. One of these documents is, however, mandatory for voting. Applicants need to be 14 years of age or older to apply for a National ID Card.[168] National identity cards have been issued since October 2017. Plans for rolling out biometric cards were due for late 2018.[169][170] The Documento Nacional de Identidad (DNI; "National Identity Document") is the main identity document for Argentine residents. It is issued as a card (tarjeta DNI) at birth to all people born in the country (and hence citizens), and to foreigners who register as residents with the National Directorate of Migrations. It must be updated at 8 and 14 years of age, and thereafter every 15 years. The documents are produced at a special plant in Buenos Aires by the Argentine national registry of people (ReNaPer).[171] The National Identity Document (DNI) is the sole instrument for personal identification and is mandatory. Its format and use are regulated by Law No. 17671 on Identification, Registration, and Classification of the National Human Potential, enacted in 1968, replacing the enrolment document issued to men undergoing mandatory military service and the libreta cívica given to women upon turning 18. According to this law, the DNI cannot be substituted by any other document for legal purposes.
It is required for voting, which is mandatory, and for identification before judicial authorities. The Argentine DNI is also required for conducting procedures with state authorities, and entitles adult bearers to work within the country. From November 4, 2009, as part of a modernization and digitization process of national documents, a new type of DNI with both a booklet and a card was issued; either could be used for most purposes, but the booklet had to be used for voting. The DNI booklet had a light blue cover with laser printing for the citizen's unique number and silver prints for the rest of the presentation. Internally, it had an identical design to the card format but included spaces for marital status, address changes, organ donations, and the stamping of the DNI after voting in national elections. The card was entirely laminated and contained all the individual's data, including a photograph and fingerprint impression. From 2011, the DNI underwent further changes: the booklet was dropped in favor of a redesigned, higher-quality plastic card to be used for all purposes. The booklet was no longer required for voting by holders of the still-valid 2009 version. Since April 1, 2017, the DNI card has been the only valid identification document. Taxpayers also have a Unique Tax Identification Code (CUIT). In December 2023, the National Registry of Persons of Argentina (Renaper), a subsidiary of the Ministry of the Interior, introduced the biometric National Identity Document (DNI). It adheres to international security standards, with an embedded electronic chip and a QR code for electronic validation, identity verification, digital functions, and advanced security measures.
Manufactured using laser technology on polycarbonate, a durable material, the new document has up-to-date physical security features to enhance visual verification and prevent counterfeiting.[172] In Brazil, at the age of 18, all Brazilian citizens are supposed to be issued a cédula de identidade (ID card), usually known by its number, the Registro Geral (RG), Portuguese for "General Registry". The cards are needed to obtain a job, to vote, and to use credit cards. Foreigners living in Brazil have a different kind of ID card. Since the RG is not unique, being issued on a state basis, in many places the CPF (the Brazilian revenue agency's identification number) is used as a replacement. The current Brazilian driver's license contains both the RG and the CPF, and as such can be used as an identification card as well. There are plans in progress to replace the current RG system with a new Documento Nacional de Identificação (National Identification Document), which will be electronic (accessible by a mobile application) and national in scope, and to change the current ID card to a new smartcard.[173][174] Upon turning 18, every resident in Colombia must obtain an identity document (Spanish: Cédula de Ciudadanía or Documento de Identidad), which is the only document that proves the identity of a person for legal purposes. ID cards must be carried at all times and must be presented to the police upon request. If an individual fails to present the ID card upon request by the police or the military, he or she is likely to be detained at a police station even if not suspected of any wrongdoing. ID cards are needed to obtain employment, open bank accounts, obtain a passport, driver's license, or military card, to enroll in educational institutions, to vote, and to enter public buildings including airports and courthouses; failure to produce ID is a misdemeanor punishable with a fine. The cost of a duplicate ID must be borne by the citizen.
Every resident over the age of 14 is issued an identity card called the Tarjeta de Identidad. Every resident of Chile over the age of 18 must have and carry at all times their ID card, called the Cédula de Identidad, issued by the Civil Registry and Identification Service of Chile. The identity card is the official document that proves the identity of a Chilean person. Among the data it contains are the full name, Unique National Role (RUN) number, and sex, in addition to the photo, signature and fingerprint. Anyone who wants their profession to appear on their identity card must be registered in the professional registry. This is the only official form of identification for residents in Chile and is widely used and accepted as such. It is necessary for every contract, most bank transactions, voting, driving (along with the driver's licence) and other public and private situations. Biometrics collection is mandatory.[175] In Peru, it is mandatory for all citizens over the age of 18, whether born inside or outside the territory of the Republic, to obtain a National Identity Document (Documento Nacional de Identidad). The DNI is a public, personal and non-transferable document. The DNI is the only means of identification permitted for participating in any civil, legal, commercial, administrative, and judicial acts. It is also required for voting and must be presented to authorities upon request. The DNI can be used as a passport to travel to all South American countries that are members of UNASUR. The DNI is issued by the National Registry of Identification and Civil Status (RENIEC). For Peruvians abroad, service is provided through the Consulates of Peru, in accordance with Articles 26, 31 and 8 of Law No. 26,497. The document is card-sized as defined by ISO format ID-1 (prior to 2005 the DNI was size ISO ID-2; renewal of the card due to the size change was not mandatory, nor did previously issued cards lose validity).
The front of the card presents a photograph of the holder's face, their name, date and place of birth (the latter in coded form), gender and marital status; the bottom quarter consists of machine-readable text. Three dates are listed as well: the date the citizen was first registered at RENIEC, the date the document was issued, and the expiration date of the document. The back of the DNI features the holder's address (including district, department and/or province) and voting group. Eight voting record blocks are successively covered with metallic labels when the citizen presents themselves at their voting group on voting days. The back also denotes whether the holder is an organ donor, and presents the holder's right index fingerprint, a PDF417 bar code, and a 1D bar code. In Uruguay, the identity card (documento de identidad) is issued by the Ministry of the Interior and the National Civil Identification Bureau (Dirección Nacional de Identificación Civil, DNIC).[176] It is mandatory and essential for several activities at both governmental and private levels. The document is mandatory for all inhabitants of the Oriental Republic of Uruguay, whether they are native citizens, legal citizens, or resident aliens in the country, even for children as young as 45 days old. It is a laminated card 9 cm (3.5 in) wide and approximately 5 cm (2.0 in) high, dominated by the color blue, showing the flag in the background with the photo of the owner, the number assigned by the DNIC (including a self-generated or check digit), full name, and the corresponding signature along with biometrics. The card is bilingual in Spanish and Portuguese.[177] Identity cards are required for most formal transactions, from credit card purchases to any identity validation, proof of age, and so on.
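Machine-readable zones like the one on the Peruvian DNI follow the ICAO Doc 9303 conventions used by passports and ID-1 cards, in which each numeric or alphanumeric field is protected by a check digit computed with repeating 7-3-1 weights. A minimal sketch of that standard calculation (the specific fields the DNI protects this way are not detailed here):

```python
def mrz_check_digit(field: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 repeat across the field;
    digits count as themselves, letters A-Z as 10-35, filler '<' as 0."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch == "<":
            value = 0
        else:
            value = ord(ch) - ord("A") + 10
        total += value * weights[i % 3]
    return total % 10

# e.g. a date-of-birth field "740812" yields check digit 2
assert mrz_check_digit("740812") == 2
```

Readers verify each field by recomputing this digit and comparing it with the one printed in the MRZ, which catches most OCR misreads before the data is trusted.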
The identity card is not to be confused with the Credencial Cívica, which is used exclusively for voting.[178] Identity cards in Venezuela consist of a plastic-laminated paper which contains the national ID number (Cédula de Identidad) as well as a color photo and the last names, given names, date of birth, right thumb print, signature, and marital status (single, married, divorced, widowed) of the bearer. It also contains the document's expedition and expiration dates. Two different prefixes can be found before the ID number: "V" for Venezuelans and "E" for foreigners (extranjeros in Spanish). This distinction is also shown at the very bottom of the document by a bold all-caps typeface displaying either the word VENEZOLANO or EXTRANJERO, respectively. Despite Venezuela being the second country in the Americas (after the United States) to adopt a biometric passport, the current Venezuelan ID document is remarkably low-security, even by regional standards. It can hardly be called a card. The paper inside the laminated cover contains only two security measures: first, it is a special type of government-issued paper, and second, it has microfilaments in the paper that glow under UV light. The laminated cover itself is very simplistic and quite large for the paper it covers, and the photo, although standard sized (3 × 3.5 cm), is rather blurred. Government officials in charge of issuing the document openly recommend that each individual cut the excess plastic off and re-laminate the document in order to protect it from bending. The requirements for getting a Venezuelan identity document are quite relaxed, and Venezuela lacks high-security birth certificates and other documents that give claim to citizenship.
https://en.wikipedia.org/wiki/Identification_document
The International Student Identity Card (ISIC) serves as internationally recognized proof of student status and offers access to various benefits and discounts globally, including travel, accommodation, and cultural institutions. The ISIC Association also issues the International Youth Travel Card (IYTC) for non-students, and the International Teacher Identity Card (ITIC) for teachers and professors. Membership fees for these cards vary by country. The ISIC Association is a non-profit membership organisation legally registered in Denmark.[1] The ISIC card is administered and managed at a global level by the ISIC Service Office d.o.o., a company seated in Belgrade, Serbia, that is wholly owned by the ISIC Association.[1] ISIC Exclusive Representatives, who have the exclusive rights to issue ISIC cards in their respective countries, make up a global distribution network for ISIC cards. The ISIC card is available in over 130 countries. In each country, the ISIC Exclusive Representative is exclusively responsible for ISIC card distribution, promotion and development, including developing and managing a portfolio of local and national benefits, discounts, and services available to ISIC holders.[2] Eligibility for the International Student Identity Card (ISIC) is limited to students in higher, tertiary, or full-time secondary education, with a minimum age requirement of 12 years. There is no upper age limit for obtaining an ISIC card. The validity of an ISIC card spans 16 months, aligning with the academic year of the country where it is purchased.[3] The idea to conduct an ISIC brand refresh originated in 2017, and the refreshed brand was subsequently launched in May 2019.[4] The United Nations Educational, Scientific and Cultural Organization (UNESCO) has been involved in ISIC development almost since the beginning. UNESCO joined the International Student Travel Conference in 1995 and supported the ISIC card.
In 1968 UNESCO issued an official endorsement in full support of the ISIC card, recognising it as the only internationally accepted proof of full-time student status and a unique document encouraging cultural exchange and international understanding. A renewed Memorandum of Understanding was signed in 1993, and the UNESCO logo has appeared on the ISIC card since that year.[5] This initiative was launched by the British Council IELTS, Studyportals and the ISIC Association in 2015. The goal of these annual awards is to encourage and support more students undertaking study abroad. The award is available in all countries worldwide. In total there have been 17 winners in 5 rounds.[6]
https://en.wikipedia.org/wiki/International_Student_Identity_Card
The term digital card[1] can refer to a physical item, such as a memory card in a camera,[2][3] or, increasingly since 2017, to digital content hosted as a virtual card or cloud card, a digital virtual representation of a physical card. They share a common purpose: identity management, credit card, debit card or driver's license. A non-physical digital card, unlike a magnetic stripe card, can emulate (imitate) any kind of card.[4][1] A smartphone or smartwatch can store content from the card issuer; discount offers and news updates can be transmitted wirelessly, via the Internet. These virtual cards are used in very high volumes by the mass transit sector, replacing paper-based tickets and the earlier magnetic stripe cards.[5] Magnetic recording on steel tape and wire was invented by Valdemar Poulsen in Denmark around 1900 for recording audio.[6] In the 1950s, magnetic recording of digital computer data on plastic tape coated with iron oxide was invented. In 1960, IBM built upon the magnetic tape idea and developed a reliable way of securing magnetic stripes to plastic cards,[7] as part of a contract with the US government for a security system. A number of International Organization for Standardization standards, ISO/IEC 7810, ISO/IEC 7811, ISO/IEC 7812, ISO/IEC 7813, ISO 8583, and ISO/IEC 4909, now define the physical properties of such cards, including size, flexibility, location of the magstripe, magnetic characteristics, and data formats. Those standards also specify characteristics for financial cards, including the allocation of card number ranges to different card issuing institutions.
As technological progress arrived in the form of highly capable and always-carried smartphones, handhelds and smartwatches, the term "digital card" was introduced.[1] On May 26, 2011, Google released its own version of a cloud-hosted wallet, Google Wallet, which contains digital cards: cards that can be created online without a plastic card having to exist in the first place, although all of its merchants currently issue both plastic and digital cards.[8] There are several virtual card issuing companies located in different geographical regions, such as Weel in Australia and Privacy in the USA. A magnetic stripe card is a type of card capable of storing data on magnetic material attached to a plastic card. A computer device can update the card's content. The magnetic stripe is read by swiping it past a magnetic reading head. Magnetic stripe cards are commonly used in credit cards, identity cards, and transportation tickets. They may also contain a radio-frequency identification (RFID) tag, a transponder device and/or a microchip, mostly used for access control or electronic payment. Magnetic storage was known from World War II and from computer data storage in the 1950s.[7] In 1969 an IBM engineer had the idea of attaching a piece of magnetic tape, the predominant storage medium at the time, to a plastic card base. He tried it, but the result was unsatisfactory. Strips of tape warped easily, and the tape's function was negatively affected by the adhesives he used to attach it to the card. After a frustrating day in the laboratory trying to find an adhesive that would hold the tape securely without affecting its function, he came home with several pieces of magnetic tape and several plastic cards. As he entered his home, his wife was ironing clothing. When he explained the source of his frustration (the inability to get the tape to "stick" to the plastic without compromising its function), she suggested that he use the iron to melt the stripe on.
He tried it and it worked.[9][10] The heat of the iron was just high enough to bond the tape to the card. Incremental improvements from 1969 through 1973 enabled the development and sale of implementations of what became known as the Universal Product Code (UPC).[11][12][13] This engineering effort resulted in IBM producing the first magnetic-striped plastic credit and ID cards used by banks, insurance companies, hospitals and many others.[11][14] Initial customers included banks, insurance companies and hospitals, who provided IBM with raw plastic cards preprinted with their logos, along with a list of the contact information and data which was to be encoded and embossed on the cards.[14] Manufacturing involved attaching the magnetic stripe to the preprinted plastic cards using the hot stamping process developed by IBM.[15][16] IBM's development work began in 1969, but the technology still needed more work. The steps required to convert the magnetic-striped media into an industry-acceptable device were initially managed by Jerome Svigals of the Advanced Systems Division of IBM, Los Gatos, California, from 1966 to 1975. In most magnetic stripe cards, the magnetic stripe is contained in a plastic-like film. The magnetic stripe is located 0.223 inches (5.7 mm) from the edge of the card, and is 0.375 inches (9.5 mm) wide. The magnetic stripe contains three tracks, each 0.110 inches (2.8 mm) wide. Tracks one and three are typically recorded at 210 bits per inch (8.27 bits per mm), while track two typically has a recording density of 75 bits per inch (2.95 bits per mm). Each track can contain either 7-bit alphanumeric characters or 5-bit numeric characters. Track 1 standards were created by the airline industry (IATA). Track 2 standards were created by the banking industry (ABA). Track 3 standards were created by the thrift-savings industry.
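The densities and character sizes quoted above determine roughly how many characters fit on each track. A minimal sketch of the arithmetic, assuming a usable stripe length equal to the standard ID-1 card width of about 3.375 inches; this gives an upper bound only, since real encodings also spend bits on leading/trailing clocking runs, start and end sentinels, and a longitudinal redundancy check:

```python
def track_capacity(bits_per_inch: int, bits_per_char: int,
                   stripe_inches: float = 3.375) -> int:
    """Approximate upper bound on the character capacity of one
    magstripe track, ignoring framing and clocking overhead."""
    return int(stripe_inches * bits_per_inch) // bits_per_char

t1 = track_capacity(210, 7)  # track 1: 210 bpi, 7-bit alphanumeric chars
t2 = track_capacity(75, 5)   # track 2: 75 bpi, 5-bit numeric chars
t3 = track_capacity(210, 5)  # track 3: 210 bpi, 5-bit numeric chars
```

The overheads are why the ISO limits (79 characters on track 1, 40 on track 2) are noticeably lower than these raw geometric bounds.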
Magstripes following these specifications can typically be read by most point-of-sale hardware, which consists of general-purpose computers programmed to perform the required tasks. Examples of cards adhering to these standards include ATM cards, bank cards (credit and debit cards, including Visa and MasterCard), gift cards, loyalty cards, driver's licenses, telephone cards, membership cards, electronic benefit transfer cards (e.g. food stamps), and nearly any application in which monetary value or secure information is not stored on the card itself. Many video game and amusement centers now use debit card systems based on magnetic stripe cards. Magnetic stripe cloning can be detected by the implementation of magnetic card reader heads and firmware that can read a signature of magnetic noise permanently embedded in all magnetic stripes during the card production process. This signature can be used in conjunction with common two-factor authentication schemes utilized in ATM, debit/retail point-of-sale and prepaid card applications.[17] Some types of cards intentionally ignore the ISO standards regarding which kind of data is recorded in each track, and use their own data sequences instead; these include hotel key cards, most subway and bus cards, and some national prepaid calling cards (such as those of Cyprus) in which the balance is stored and maintained directly on the stripe and not retrieved from a remote database. There are up to three tracks on magnetic cards, known as tracks 1, 2, and 3. Track 3 is virtually unused by the major worldwide networks[citation needed], and is often not even physically present on the card by virtue of a narrower magnetic stripe. Point-of-sale card readers almost always read track 1 or track 2, and sometimes both, in case one track is unreadable. The minimum cardholder account information needed to complete a transaction is present on both tracks. Track 1 has a higher bit density (210 bits per inch vs.
75), is the only track that may contain alphabetic text, and hence is the only track that contains the cardholder's name. Track 1 is written with a code known as DEC SIXBIT plus odd parity. The information on track 1 on financial cards is contained in several formats: A, which is reserved for proprietary use of the card issuer; B, which was developed by the banking industry (ABA); C-M, which are reserved for use by ANSI Subcommittee X3B10; and N-Z, which are available for use by individual card issuers. Track 2 is written with a 5-bit scheme (4 data bits + 1 parity bit), which allows for sixteen possible characters: the numbers 0-9, plus the six characters : ; < = > ?. (It may seem odd that these particular punctuation symbols were selected, but by using them the set of sixteen characters matches the ASCII range 0x30 through 0x3f.) Common service code values on financial cards assign a meaning to each of the code's three digits. The data stored on magnetic stripes on American and Canadian driver's licenses is specified by the American Association of Motor Vehicle Administrators. Not all states and provinces use a magnetic stripe on their driver's licenses. For a list of those that do, see the AAMVA list.[18][19] Different data fields are stored on tracks 1, 2 and 3.[20] Note: each state encodes a different selection of information; not all states are the same. Note: some states, such as Texas,[22] have laws restricting the access and use of electronically readable information encoded on driver's licenses or identification cards under certain circumstances. Smart cards are a newer generation of card that contain an integrated circuit. Some smart cards have metal contacts to electrically connect the card to the reader; there are also contactless cards that use a magnetic field or radio frequency (RFID) for proximity reading.
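The 5-bit track 2 scheme described above (4 data bits plus 1 odd-parity bit, with the sixteen characters mapping onto ASCII 0x30-0x3f) can be sketched as follows. The data bits are simply the low nibble of the ASCII code; where the parity bit sits in the physical bit stream is a detail of the stripe encoding, so its placement here is illustrative:

```python
def encode_track2_char(ch: str) -> int:
    """Return a 5-bit pattern for a track 2 character: the 4 data bits
    are the low nibble of the ASCII code (the set spans 0x30-0x3f),
    plus 1 parity bit making the total number of 1-bits odd.
    Bit placement of the parity bit here is illustrative."""
    code = ord(ch)
    if not 0x30 <= code <= 0x3F:
        raise ValueError(f"not a track 2 character: {ch!r}")
    data = code & 0x0F
    parity = 0 if bin(data).count("1") % 2 else 1  # force odd parity
    return (parity << 4) | data

def decode_track2_bits(bits: int) -> str:
    """Recover the character, raising on a parity violation."""
    if bin(bits & 0x1F).count("1") % 2 == 0:
        raise ValueError("parity error")
    return chr(0x30 | (bits & 0x0F))

for ch in "0123456789:;<=>?":
    assert decode_track2_bits(encode_track2_char(ch)) == ch
```

Odd parity guarantees every character contains at least one 1-bit, which keeps the reader's clock recovery locked even for the all-zero data nibble of '0'.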
Hybrid smart cards include a magnetic stripe in addition to the chip; this combination is most commonly found in payment cards, to make them usable at payment terminals that do not include a smart card reader. Cards that contain all three features (magnetic stripe, smart card chip, and RFID chip) are also becoming common as more activities require the use of such cards.[citation needed] During DEF CON 24, Weston Hecker presented Hacking Hotel Keys, and Point Of Sales Systems. In the talk, Hecker described the way magnetic stripe cards function and used spoofing software[23] and an Arduino to obtain administrative access from hotel keys carried by service staff walking past him. Hecker claims he used administrative keys from POS systems on other systems, effectively gaining access to any system with a magnetic stripe reader and the ability to run privileged commands.[citation needed] Identification with a digital card can be done in several ways.
https://en.wikipedia.org/wiki/Magnetic_Stripe
MIFARE is a series of integrated circuit (IC) chips used in contactless smart cards and proximity cards. The brand includes proprietary solutions based on various levels of the ISO/IEC 14443 Type-A 13.56 MHz contactless smart card standard. It uses AES and DES/Triple-DES encryption standards, as well as an older proprietary encryption algorithm, Crypto-1. According to NXP, 10 billion of their smart card chips and over 150 million reader modules have been sold.[1] The MIFARE trademark is owned by NXP Semiconductors, which was spun off from Philips Electronics in 2006.[2][3] MIFARE products are embedded in contactless and contact smart cards, smart paper tickets, wearables and phones.[4][5] The MIFARE brand name (derived from the term MIKRON FARE collection and created by the company Mikron) covers four families of contactless cards. MIFARE Classic subtypes: MIFARE Classic EV1 (other subtypes are no longer in use). MIFARE Plus subtypes: MIFARE Plus S, MIFARE Plus X, MIFARE Plus SE and MIFARE Plus EV2. MIFARE Ultralight subtypes: MIFARE Ultralight C, MIFARE Ultralight EV1, MIFARE Ultralight Nano and MIFARE Ultralight AES. MIFARE DESFire subtypes: MIFARE DESFire EV1, MIFARE DESFire EV2, MIFARE DESFire EV3, MIFARE DESFire EV3C and MIFARE DESFire Light. There is also the MIFARE SAM AV2 contact smart card, which can be used to handle the encryption in communicating with the contactless cards. The SAM (Secure Access Module) provides secure storage of cryptographic keys and cryptographic functions. The MIFARE Classic IC is a basic memory storage device, where the memory is divided into segments and blocks with simple security mechanisms for access control. These chips are ASIC-based and have limited computational power. Due to their reliability and low cost, the cards are widely used for electronic wallets, access control, corporate ID cards, and transportation or stadium ticketing.
It uses an NXP proprietary security protocol (Crypto-1) for authentication and ciphering.[citation needed] MIFARE Classic encryption has been compromised; see below for details.[citation needed] The MIFARE Classic with 1K memory offers 1,024 bytes of data storage, split into 16 sectors; each sector is protected by two different keys, called A and B. Each key can be programmed to allow operations such as reading, writing, increasing value blocks, etc. MIFARE Classic with 4K memory offers 4,096 bytes split into forty sectors, of which 32 are the same size as in the 1K variant, with eight more quadruple-size sectors. MIFARE Classic Mini offers 320 bytes split into five sectors. For each of these IC types, 16 bytes per sector are reserved for the keys and access conditions and cannot normally be used for user data. Also, the very first 16 bytes contain the serial number of the card and certain other manufacturer data and are read-only. That brings the net storage capacity of these cards down to 752 bytes for MIFARE Classic with 1K memory, 3,440 bytes for MIFARE Classic with 4K memory, and 224 bytes for MIFARE Mini.[citation needed] The Samsung TecTile NFC tag stickers use MIFARE Classic chips. This means only devices with an NXP NFC controller chip can read or write these tags. At the moment, BlackBerry phones, the Nokia Lumia 610 (August 2012[6]), the Google Nexus 4, Google Nexus 7 LTE and Nexus 10 (October 2013[7]) cannot read or write TecTile stickers.[citation needed] MIFARE Plus is a replacement IC solution for the MIFARE Classic. It is less flexible than a MIFARE DESFire EV1 contactless IC. MIFARE Plus was publicly announced in March 2008, with first samples in Q1 2009.[8] MIFARE Plus, when used in older transportation systems that do not yet support AES on the reader side, still leaves an open door to attacks.
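The MIFARE Classic net capacities given above follow directly from the memory layout: every block is 16 bytes, each sector ends with a 16-byte trailer holding keys A and B plus the access conditions, and block 0 of the card holds the read-only manufacturer data. A small sketch of the arithmetic:

```python
BLOCK = 16  # bytes per block on all MIFARE Classic variants

def classic_net_capacity(small_sectors: int, large_sectors: int = 0) -> int:
    """User-data bytes remaining after subtracting one trailer block per
    sector and the read-only manufacturer block. Small sectors have
    4 blocks; large (quadruple-size) sectors have 16 blocks."""
    total_blocks = small_sectors * 4 + large_sectors * 16
    trailer_blocks = small_sectors + large_sectors
    manufacturer_blocks = 1
    return (total_blocks - trailer_blocks - manufacturer_blocks) * BLOCK

assert classic_net_capacity(16) == 752        # MIFARE Classic 1K
assert classic_net_capacity(32, 8) == 3440    # MIFARE Classic 4K
assert classic_net_capacity(5) == 224         # MIFARE Classic Mini
```

The assertions reproduce exactly the 752-, 3,440- and 224-byte figures quoted in the text, confirming the layout described.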
Though it helps to mitigate threats from attacks that broke the Crypto-1 cipher through the weak random number generator, it does not help against brute-force attacks and cryptanalytic attacks.[9] During the transition period from MIFARE Classic to MIFARE Plus, when only a few readers might support AES in the first place, it offers an optional AES authentication in Security Level 1 (which is in fact MIFARE Classic operation). This does not prevent the attacks mentioned above, but it enables a secure mutual authentication between the reader and the card to prove that the card belongs to the system and is not fake. In its highest security level SL3, using 128-bit AES encryption, MIFARE Plus is secured from attacks.[citation needed]

MIFARE Plus EV1, announced in April 2016,[10] added new features compared to MIFARE Plus X. The MIFARE Plus EV2 was introduced to the market on 23 June 2020[11] and comes with enhanced read performance and transaction speed, as well as further new features, compared to MIFARE Plus EV1.[12]

The MIFARE Ultralight has only 512 bits of memory (i.e. 64 bytes), without cryptographic security. The memory is provided in 16 pages of 4 bytes. Cards based on these chips are so inexpensive that they are often used for disposable tickets for events such as the 2006 FIFA World Cup. The chip provides only basic security features, such as one-time-programmable (OTP) bits and a write-lock feature to prevent re-writing of memory pages, but does not include cryptography as applied in other MIFARE product-based cards.

MIFARE Ultralight EV1, introduced in November 2012,[13] is the next generation of paper-ticketing smart card IC for limited-use ticketing schemes, and comes with several enhancements over the original MIFARE Ultralight, including additional security options.[14]

Introduced at the Cartes industry trade show in 2008, the MIFARE Ultralight C IC is part of NXP's low-cost MIFARE product offering (disposable ticket).
With Triple DES, MIFARE Ultralight C uses a widely adopted standard, enabling easy integration into existing infrastructures. The integrated Triple DES authentication provides an effective countermeasure against cloning.[citation needed] Key applications for MIFARE Ultralight C are public transportation, event ticketing, loyalty and NFC Forum tag type 2. The MIFARE Ultralight AES variant was introduced in 2022.

The MIFARE DESFire (MF3ICD40) was introduced in 2002 and is based on a core similar to SmartMX, with more hardware and software security features than MIFARE Classic. It comes pre-programmed with the general-purpose MIFARE DESFire operating system, which offers a simple directory structure and files. The chips are sold in four variants: one with Triple-DES only and 4 KiB of storage, and three with AES (2, 4, or 8 KiB; see MIFARE DESFire EV1). The AES variants have additional security features, e.g., CMAC. MIFARE DESFire uses a protocol compliant with ISO/IEC 14443-4.[16] The contactless IC is based on an 8051 processor with a 3DES/AES cryptographic accelerator, making very fast transactions possible. The maximal read/write distance between card and reader is 10 centimetres (3.9 in), but the actual distance depends on the field power generated by the reader and its antenna size.

In 2010, NXP announced the discontinuation of the MIFARE DESFire (MF3ICD40), after having introduced its successor, the MIFARE DESFire EV1 (MF3ICD41), in late 2008. In October 2011, researchers at Ruhr University Bochum[17] announced that they had broken the security of the MIFARE DESFire (MF3ICD40), which was acknowledged by NXP[18] (see MIFARE DESFire security).

The first evolution of the MIFARE DESFire contactless IC is broadly backwards compatible and available with 2 KiB, 4 KiB, or 8 KiB of non-volatile memory.
It offers further features as well.[19] MIFARE DESFire EV1 was publicly announced in November 2006.[citation needed]

The second evolution of the MIFARE DESFire contactless IC family is broadly backwards compatible[20] and adds new features. MIFARE DESFire EV2 was publicly announced in March 2016 at the IT-TRANS event in Karlsruhe, Germany.

The latest evolution of the MIFARE DESFire contactless IC family is broadly backwards compatible and again adds new features. MIFARE DESFire EV3 was publicly announced on 2 June 2020.[21]

The DESFire EV3C card has all the features of the DESFire EV3 and can also emulate a MIFARE Classic 1K card. Some DESFire files can be mapped to a MIFARE Classic 1K memory layout, which offers a migration route for existing MIFARE Classic users who want to move gradually to DESFire.

MIFARE SAMs are not contactless smart cards. They are secure access modules designed to provide secure storage of cryptographic keys and cryptographic functions for terminals, in order to access the MIFARE products securely and to enable secure communication between terminals and the host (backend). MIFARE SAMs are available from NXP in the contact-only module (PCM 1.1) as defined in ISO/IEC 7816-2, and in the HVQFN32 format.[citation needed]

Integrating a MIFARE SAM AV2 in a contactless smart card reader enables a design that integrates high-end cryptography features and support for cryptographic authentication and data encryption/decryption.[citation needed] Like any SAM, it offers functionality to store keys securely and to perform authentication and encryption of data between the contactless card and the SAM, and between the SAM and the backend.
In addition to a classical SAM architecture, the MIFARE SAM AV2 supports X-mode, which allows fast and convenient contactless terminal development by connecting the SAM to the microcontroller and the reader IC simultaneously.[citation needed] The MIFARE SAM AV2 offers an AV1 mode and an AV2 mode; in comparison to the SAM AV1, the AV2 version includes public key infrastructure (PKI) and hash functions like SHA-1, SHA-224, and SHA-256, and it supports MIFARE Plus and secure host communication. Both modes provide the same communication interfaces, cryptographic algorithms (Triple-DES with 112-bit and 168-bit keys, MIFARE products using Crypto-1, AES-128 and AES-192, RSA with up to 2048-bit keys), and X-mode functionalities.[citation needed] The MIFARE SAM AV3 is the third generation of NXP's Secure Access Module; it supports MIFARE ICs as well as NXP's UCODE DNA, ICODE DNA and NTAG DNA ICs.[22]

There is also a cloud-based platform that digitizes MIFARE product-based smart cards and makes them available on NFC-enabled smartphones and wearables. With this, new smart city use cases such as mobile transit ticketing, mobile access and mobile micropayments are enabled.[23]

The MIFARE product portfolio was originally developed by Mikron in Gratkorn, Austria; Mikron was acquired by Philips in 1995.[24] Mikron sourced silicon from Atmel in the US, Philips in the Netherlands, and Siemens in Germany.[citation needed]

Infineon Technologies (then Siemens) licensed MIFARE Classic from Mikron in 1994[25] and developed both stand-alone and integrated designs with MIFARE product functions. Infineon currently produces various derivatives based on MIFARE Classic, including 1K memory (SLE66R35) and various microcontrollers with MIFARE implementations (8-bit SLE66 series, 16-bit SLE7x series, and 32-bit SLE97 series), including devices for use in USIM with Near Field Communication.[26]

Motorola tried to develop MIFARE product-like chips for the wired-logic version but finally gave up.
The project initially expected one million cards per month, but volumes had fallen to 100,000 per month shortly before the project was abandoned.[27]

In 1998, Philips licensed MIFARE Classic to Hitachi.[28] Hitachi licensed MIFARE products for the development of the contactless smart card solution for NTT's IC telephone card, which started in 1999 and finished in 2006.[citation needed] Three parties joined the NTT contactless IC telephone card project: Tokin-Tamura-Siemens, Hitachi (with a Philips contract for technical support), and Denso (with Motorola for production only).[citation needed] NTT asked for two versions of the chip, i.e. a wired-logic chip (like MIFARE Classic) with small and with large memory capacity. Hitachi developed only the large-memory version and cut down part of the memory for the small-memory version. The deal with Hitachi was extended in 2008 by NXP (by then no longer part of Philips) to include MIFARE Plus and MIFARE DESFire for Renesas Technology, the renamed semiconductor division of Hitachi.[29]

In 2010, NXP licensed MIFARE products to Gemalto. In 2011, NXP licensed Oberthur to use MIFARE products on SIM cards. In 2012, NXP signed an agreement with Giesecke & Devrient to integrate MIFARE product-based applications on their secure SIM products.
These licensees are developing Near Field Communication products.[30][31]

The encryption used by the MIFARE Classic IC uses a 48-bit key.[32] A presentation by Henryk Plötz and Karsten Nohl[33] at the Chaos Communication Congress in December 2007 described a partial reverse engineering of the algorithm used in the MIFARE Classic chip.[34] A paper describing the process of reverse engineering this chip was published at the August 2008 USENIX security conference.[35]

In March 2008, the Digital Security[36] research group of Radboud University Nijmegen made public that they had performed a complete reverse engineering and were able to clone and manipulate the contents of an OV-chipkaart, which uses a MIFARE Classic chip.[37] For the demonstration they used the Proxmark3 device, a 125 kHz / 13.56 MHz research instrument[38] whose schematics and software had been released under the free GNU General Public License by Jonathan Westhues in 2007. They demonstrated that it is even possible to perform card-only attacks using just an ordinary off-the-shelf commercial NFC reader in combination with the libnfc library.

Radboud University published four scientific papers concerning the security of the MIFARE Classic. In response to these attacks, the Dutch Minister of the Interior and Kingdom Relations stated that they would investigate whether the introduction of the Dutch Rijkspas could be brought forward from Q4 of 2008.[43]

NXP tried to stop the publication of the second article by requesting a preliminary injunction.
However, the injunction was denied, with the court noting that "the publication of scientific studies carries a lot of weight in a democratic society, as does informing society about serious issues in the chip, because it allows for mitigating of the risks."[44][45]

Both independent research results were confirmed by the manufacturer NXP.[46] These attacks did not stop the further introduction of the card as the only accepted card for all Dutch public transport: the OV-chipkaart rollout continued as if nothing had happened,[47] but in October 2011 the company TLS, responsible for the OV-chipkaart, announced that a new version of the card would be better protected against fraud.[48]

The MIFARE Classic encryption Crypto-1 can be broken in about 200 seconds on a laptop from 2008,[49] if approximately 50 bits of known (or chosen) keystream are available. This attack reveals the key from sniffed transactions under certain (common) circumstances and/or allows an attacker to learn the key by challenging the reader device. Another attack recovers the secret key in about 40 ms on a laptop; it requires just one (partial) authentication attempt with a legitimate reader.[50]

Additionally, there are a number of attacks that work directly on a card and without the help of a valid reader device.[51] These attacks have been acknowledged by NXP.[52] In April 2009, a new and better card-only attack on MIFARE Classic was found. It was first announced at the rump session of Eurocrypt 2009[53] and later presented at SECRYPT 2009.[54] The full description of this latest and fastest attack to date can also be found in the IACR preprint archive.[55] The new attack improves on all previous card-only attacks on MIFARE Classic by a factor of more than 10, has instant running time, and does not require a costly precomputation. It allows recovering the secret key of any sector of the MIFARE Classic card via wireless interaction, within about 300 queries to the card.
The attack can then be combined with the nested authentication attack in the Nijmegen Oakland paper to recover subsequent keys almost instantly. With both attacks combined, and with the right hardware such as the Proxmark3, one should be able to clone any MIFARE Classic card in 10 seconds or less, which is much faster than previously thought.

In an attempt to counter these card-only attacks, new "hardened" cards were released in and around 2011, such as the MIFARE Classic EV1.[56] These variants are immune to all card-only attacks that were publicly known at the time, while remaining backwards compatible with the original MIFARE Classic. In 2015, however, a new card-only attack was discovered that is also able to recover the secret keys from such hardened variants.[57] Since the discovery of this attack, NXP has officially recommended migrating from MIFARE Classic product-based systems to higher-security products.[58]

In November 2010, security researchers from Ruhr University Bochum released a paper detailing a side-channel attack against MIFARE product-based cards.[59] The paper demonstrated that MIFARE DESFire product-based cards could be easily emulated at a cost of approximately $25 in "off the shelf" hardware. The authors asserted that this side-channel attack allowed cards to be cloned in approximately 100 ms. Furthermore, the paper's authors included hardware schematics for their original cloning device, and have since made corresponding software, firmware and improved hardware schematics publicly available on GitHub.[60]

In October 2011, David Oswald and Christof Paar of Ruhr University Bochum, Germany, detailed how they were able to conduct a successful side-channel attack against the card using equipment that can be built for nearly $3,000.
In the resulting paper, "Breaking MIFARE DESFire MF3ICD40: Power Analysis and Templates in the Real World",[61] they stated that system integrators should be aware of the new security risks arising from the presented attacks and can no longer rely on the mathematical security of the 3DES cipher used; hence, to avoid, e.g., manipulation or cloning of smart cards used in payment or access control solutions, proper actions have to be taken, such as multi-level countermeasures in the back end that minimize the threat even if the underlying RFID platform is insecure.

In a statement,[62] NXP said that the attack would be difficult to replicate and that it had already planned to discontinue the product at the end of 2011. NXP also stated: "Also, the impact of a successful attack depends on the end-to-end system security design of each individual infrastructure and whether diversified keys – recommended by NXP – are being used. If this is the case, a stolen or lost card can be disabled simply by the operator detecting the fraud and blacklisting the card; however, this operation assumes that the operator has those mechanisms implemented. This will make it even harder to replicate the attack with a commercial purpose."

In September 2012, the security consultancy Intrepidus[63] demonstrated at the EUSecWest event in Amsterdam,[64] in a talk entitled "NFC For Free Rides and Rooms (on your phone)", that MIFARE Ultralight product-based fare cards in the New Jersey and San Francisco transit systems can be manipulated using an Android application, enabling travelers to reset their card balance and travel for free.[65] Although this is not a direct attack on the chip but rather the reloading of an unprotected register on the device, it allows hackers to replace value and show that the card is valid for use. It can be countered by keeping a copy of the register online, so that values can be analyzed and suspect cards hot-listed.
NXP has responded by pointing out that it had introduced the MIFARE Ultralight C in 2008 with 3DES protection, and in November 2012 introduced the MIFARE Ultralight EV1[66] with three decrement-only counters to foil such reloading attacks.

For systems based on contactless smart cards (e.g. public transportation), security against fraud relies on many components, of which the card is just one. Typically, to minimize costs, systems integrators will choose a relatively cheap card such as a MIFARE Classic and concentrate security efforts in the back office. Additional encryption on the card, transaction counters, and other methods known in cryptography are then employed to make cloned cards useless, or at least to enable the back office to detect a fraudulent card and put it on a blacklist. Systems that work with online readers only (i.e., readers with a permanent link to the back office) are easier to protect than systems that also have offline readers, for which real-time checks are not possible and blacklists cannot be updated as frequently.

Another aspect of fraud prevention and compatibility guarantee is a certification scheme, called into life in 1998, ensuring the compatibility of certified MIFARE product-based cards with multiple readers. The main focus of this certification is on the contactless communication of the wireless interface, as well as on ensuring proper implementation of all the commands of MIFARE product-based cards. The certification process was developed and carried out by the Austrian laboratory Arsenal Research. Today, independent test houses such as Arsenal Testhouse, UL and LSI-TEC perform the certification tests and list the certified products in an online database.[67]
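The "diversified keys" practice recommended above gives every card its own key, derived from a site master key and the card's unique ID, so that a key recovered from one card cannot be replayed on another and a compromised card can simply be blacklisted by UID. A minimal standard-library sketch of the idea follows; the derivation label, key values, and the use of HMAC-SHA256 in place of NXP's AES-CMAC-based SAM scheme are all illustrative assumptions.

```python
import hashlib
import hmac

MASTER_KEY = bytes.fromhex("000102030405060708090a0b0c0d0e0f")  # illustrative only

def diversified_key(master_key: bytes, uid: bytes) -> bytes:
    """Derive a per-card 16-byte key from the issuer's master key and the
    card's UID. NXP's SAM-based schemes use AES-CMAC; HMAC-SHA256 truncated
    to 16 bytes stands in here so the sketch runs on the stdlib alone."""
    return hmac.new(master_key, b"card-key-v1" + uid, hashlib.sha256).digest()[:16]

# Two different UIDs yield unrelated keys, so extracting one card's key
# does not expose the rest of the fleet:
key_a = diversified_key(MASTER_KEY, bytes.fromhex("04a1b2c3d4e5f6"))
key_b = diversified_key(MASTER_KEY, bytes.fromhex("04ffeeddccbbaa"))
assert key_a != key_b
```

The back office recomputes the same derivation from the presented UID during authentication, so no per-card key database is needed, only the protected master key.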
https://en.wikipedia.org/wiki/MIFARE
Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder called a tag, a radio receiver, and a transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.[1]

Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters. Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).[2]

RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through the assembly line,[citation needed] RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed] and implanting RFID microchips in livestock and pets enables positive identification of animals.[3] Tags can also be used in shops to expedite checkout, and to prevent theft by customers and employees.[4]

Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally linked information without consent has raised serious privacy concerns.[5] These concerns resulted in standard specifications development addressing privacy and security issues.

In 2014, the world RFID market was worth US$8.89 billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors.
The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6]

In 1945, Leon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with added audio information. Sound waves vibrated a diaphragm, which slightly altered the shape of the resonator, which in turn modulated the reflected radio frequency. Even though this device was a covert listening device rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7]

Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft.[8] An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9] who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."

Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10] as it was a passive radio transponder with memory.[11] The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16-bit memory for use as a toll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers.
The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll systems, electronic license plates, electronic manifests, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10]

In 1973, an early demonstration of reflected-power (modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at the Los Alamos National Laboratory.[12] The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13]

In 1983, the first patent to be associated with the abbreviation RFID was granted to Charles Walton.[14] In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15]

A radio-frequency identification system uses tags, or labels, attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.[16]

RFID tags are made up of three pieces. The tag information is stored in a non-volatile memory.[17] The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data, respectively.[citation needed]

RFID tags can be passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17] A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader.
However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than is needed for signal transmission.[18]

Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field-programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19]

The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.

RFID systems can be classified by the type of tag and reader; there are three types.[20] Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.

Signaling between the reader and the tag is done in several different, incompatible ways, depending on the frequency band used by the tag. Tags operating on the LF and HF bands are, in terms of radio wavelength, very close to the reader antenna, because they are only a small percentage of a wavelength away. In this near-field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect.
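This load-switching scheme can be illustrated with a toy numeric model: the tag damps the reader's carrier while its load is connected, and the reader recovers the bits by envelope detection. The amplitude values, sample counts, and threshold below are illustrative simplifications, not the actual ISO/IEC 14443 bit coding.

```python
import math

SAMPLES_PER_BIT = 64
LOADED, UNLOADED = 0.92, 1.00  # illustrative carrier amplitudes (tag load on/off)

def tag_modulate(bits):
    """Tag side: switch the load once per bit, slightly damping the
    reader's 13.56 MHz carrier while the load is connected."""
    wave = []
    for b in bits:
        amp = LOADED if b else UNLOADED
        wave.extend(amp * math.sin(2 * math.pi * n / 16) for n in range(SAMPLES_PER_BIT))
    return wave

def reader_demodulate(wave):
    """Reader side: envelope-detect each bit period and threshold it."""
    out = []
    for i in range(0, len(wave), SAMPLES_PER_BIT):
        envelope = max(abs(s) for s in wave[i:i + SAMPLES_PER_BIT])
        out.append(1 if envelope < (LOADED + UNLOADED) / 2 else 0)
    return out

payload = [1, 0, 1, 1, 0, 0, 1, 0]
assert reader_demodulate(tag_modulate(payload)) == payload
```

The key point the model captures is that the tag never transmits on its own: it only changes how much energy it draws, and the reader observes that change in its own field.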
At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach: the tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27]

An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28]

Often more than one tag will respond to a tag reader; for example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses.
When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29] Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed]

"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read: it takes at least twice as long to read twice as many labels, and due to collision effects the time required is greater.[30]

A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility: if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupled HF RFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31]

Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?] suitable for inventory management. However, where a single RFID tag might not guarantee a proper read, multiple RFID tags, of which at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is a fuzzy method for process support.
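Stepping back to the 96-bit EPC layout described above (8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial), the fields can be unpacked with plain bit arithmetic; the example tag value below is made up, not a real assigned EPC.

```python
def parse_epc96(epc_hex: str) -> dict:
    """Split a 96-bit EPC into the four fields described in the text:
    8-bit header, 28-bit manager number, 24-bit object class, 36-bit serial."""
    value = int(epc_hex, 16)
    assert value < 1 << 96, "EPC-96 must fit in 96 bits"
    return {
        "header":       value >> 88,                   # bits 88..95
        "manager":      (value >> 60) & ((1 << 28) - 1),  # bits 60..87
        "object_class": (value >> 36) & ((1 << 24) - 1),  # bits 36..59
        "serial":       value & ((1 << 36) - 1),          # bits 0..35
    }

# Made-up 24-hex-digit (96-bit) example value:
fields = parse_epc96("30395DC33B2C40000007B2F1")
print(fields)
```

Reading the fields this way mirrors how the full number serves as a database key: the manager and object class identify the issuer and product line, while the serial distinguishes the individual item.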
From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32]

RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior.[33] This trend towards increasingly miniaturized RFID tags is likely to continue as technology advances.

Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34] Manufacture is enabled by using the silicon-on-insulator (SOI) process. These dust-sized chips can store 38-digit numbers using 128-bit read-only memory (ROM).[35] A major challenge is the attachment of antennas, which limits read range to only millimeters.

In early 2020, MIT researchers demonstrated a terahertz frequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially pieces of silicon that are inexpensive and small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37]

An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects. RFID offers advantages over manual systems or the use of barcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38]

In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or to withstand gamma sterilization, could cost up to US$5.
Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39] Battery-assisted passive (BAP) tags were in the US$3–10 range.[citation needed]

RFID can be used in a variety of applications.[40][41] In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards was driven by EPCglobal, a joint venture between GS1 and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by the Auto-ID Center.[45]

RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management.[citation needed] Warehouse management systems[clarification needed] incorporate this technology to speed up the receiving and delivery of products and to reduce labor costs in warehouses.[46]

RFID is used for item-level tagging in retail stores.
This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done at Lululemon, though physically locating items in stores requires more expensive technology.[47] RFID tags can be used at checkout; for example, at some stores of the French retailer Decathlon, customers perform self-checkout either by using a smartphone or by putting items into a bin near the register that scans the tags without each one having to be oriented toward the scanner.[47] Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms at Chanel and the "Color Bar" at Kendra Scott stores.[47]

Item tagging can also provide protection against theft by customers and employees through electronic article surveillance (EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48] On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item and identifying what it is.

Casinos can use RFID to authenticate poker chips, and can selectively invalidate any chips known to be stolen.[49]

RFID tags are widely used in identification badges, replacing earlier magnetic stripe cards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, to be read at a distance, allowing entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed] In 2010, Vail Resorts began using UHF passive RFID tags in ski passes.[50] Facebook has used RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?]

Automotive brands have adopted RFID for social media product placement more quickly than other industries.
Mercedes was an early adopter in 2011 at thePGA Golf Championships,[51]and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed] To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.[53][when?] Yard management, shipping and freight and distribution centers use RFID tracking. In therailroadindustry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54] In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56] Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59] RFID is used inintelligent transportation systems. InNew York City, RFID readers are deployed at intersections to trackE-ZPasstags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used inadaptive traffic controlof the traffic lights.[60] Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61] At least one company has introduced RFID to identify and locate underground infrastructure assets such asgaspipelines,sewer lines, electrical cables, communication cables, etc.[62] The first RFID passports ("E-passport") were issued byMalaysiain 1998. 
In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed] Other countries that insert RFID in passports include Norway (2005),[63]Japan (March 1, 2006), mostEUcountries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64]and Israel (2017). Standards for RFID passports are determined by theInternational Civil Aviation Organization(ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to theISO/IEC 14443RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover. Since 2006, RFID tags included in newUnited States passportsstore the same information that is printed within the passport, and include a digital picture of the owner.[65]The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66]the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers toskiminformation when the passport is closed. The department will also implementBasic Access Control(BAC), which functions as apersonal identification number(PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. 
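The BAC key derivation can be sketched in Python. This is a simplified illustration of the ICAO 9303 scheme, not a conformant implementation: MRZ check digits and the DES parity-bit adjustment are omitted, and the sample field values are purely illustrative.

```python
import hashlib

def bac_keys(doc_number: str, birth_date: str, expiry_date: str):
    """Simplified sketch of ICAO 9303 Basic Access Control key derivation.

    The reader derives keys from data printed on the passport's
    machine-readable zone (MRZ). Dates are YYMMDD; in the real MRZ each
    field is followed by a check digit, omitted here for brevity.
    """
    mrz_info = (doc_number + birth_date + expiry_date).encode("ascii")
    k_seed = hashlib.sha1(mrz_info).digest()[:16]   # 128-bit key seed
    # Encryption and MAC keys are derived from the seed with a 32-bit
    # counter (DES parity adjustment omitted in this sketch).
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac

# Illustrative sample values, not a real document:
k_enc, k_mac = bac_keys("L898902C3", "690806", "940623")
```

The point of the scheme is that a reader which has not optically read the data page cannot derive the keys, so it cannot talk to the chip.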
The BAC also enables the encryption of any communication between the chip and interrogator.[67] In many countries, RFID tags can be used to pay for mass transit fares on buses, trains, or subways, or to collect tolls on highways. Some bike lockers are operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker, and is used to track and charge based on how long the bike is parked. The Zipcar car-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68] In Singapore, RFID replaces the paper Season Parking Ticket (SPT).[69] RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, RFID has become crucial in animal identification management since the outbreak of mad-cow disease. An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (passive integrated transponder) tags, passive RFID, or "chips" on animals.[70] The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program. RFID tags are required for all cattle sold in Australia and, in some states, sheep and goats as well.[71] Biocompatible microchip implants that use RFID technology are being routinely implanted in humans.
The first-ever human to receive an RFID microchip implant was American artistEduardo Kacin 1997.[72][73]Kac implanted the microchip live on television (and also live on the Internet) in the context of his artworkTime Capsule.[74]A year later, British professor ofcyberneticsKevin Warwickhad an RFID chip implanted in his arm by hisgeneral practitioner, George Boulos.[75][76]In 2004, the 'Baja Beach Club' operated byConrad ChaseinBarcelona[77]andRotterdamoffered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009, British scientistMark Gassonhad an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78] TheFood and Drug Administrationin the United States approved the use of RFID chips in humans in 2004.[79] There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. 
Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80]and to the emergence of an "ultimatepanopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81] On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82] The UFO religionUniverse Peopleis notorious online for their vocal opposition to human RFID chipping, which they claim is asaurianattempt to enslave the human race; one of their web domains is "dont-get-chipped".[83][84][85] Adoption of RFID in the medical industry has been widespread and very effective.[86]Hospitals are among the first users to combine both active and passive RFID.[87]Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification.[88]Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89]TheU.S. Department of Veterans Affairs (VA)recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90] Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93]The use of RFID to prevent mix-ups betweenspermandovainIVFclinics is also being considered.[94] In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp. can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. 
Anti-RFID activistsKatherine AlbrechtandLiz McIntyrediscovered anFDA Warning Letterthat spelled out health risks.[95]According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility." Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as asecuritydevice, taking the place of the more traditionalelectromagnetic security strip.[96] It is estimated that over 30 million library items worldwide now contain RFID tags, including some in theVatican LibraryinRome.[97] Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on aconveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98]However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: 12,500 each, detection porches 10,000 each; tags 0.36 each). 
Because RFID takes a large burden off staff, it could also mean that fewer staff are needed, resulting in some being laid off,[97] but that has so far not happened in North America, where recent surveys have not found a single library that cut staff because of adding RFID.[99] In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[99] Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[99] A finding in the Netherlands is that borrowers are pleased that staff are now more available for answering questions.[99] Privacy concerns have been raised surrounding library use of RFID.[100][101] Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information,[102] and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96] Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In the future, should readers become ubiquitous (and possibly networked), stolen books could be traced even outside the library.
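The two mitigations just described — a tag code that is meaningful only in conjunction with the library's database, replaced on every return — can be sketched as follows. All class and method names here are illustrative, not from any library system.

```python
import secrets

class LibraryTagDirectory:
    """Sketch of the privacy scheme described above: each book's tag
    stores only a random pseudonym that means nothing outside the
    library's database, and the pseudonym is retired and replaced
    every time the book is returned."""

    def __init__(self):
        self._code_to_book = {}

    def issue_code(self, book_id: str) -> str:
        code = secrets.token_hex(8)      # opaque 64-bit pseudonym
        self._code_to_book[code] = book_id
        return code                      # this is what gets written to the tag

    def check_in(self, old_code: str) -> str:
        # On return, retire the old pseudonym and write a fresh one,
        # so codes recorded outside the library become useless.
        book_id = self._code_to_book.pop(old_code)
        return self.issue_code(book_id)

directory = LibraryTagDirectory()
code1 = directory.issue_code("QA76.9.A25")   # illustrative call number
code2 = directory.check_in(code1)            # old code now meaningless
```

An eavesdropper who recorded `code1` at the exit gate learns nothing about the book and cannot recognize it after its next checkout.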
Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher. RFID technologies are now also implemented in end-user applications in museums.[103] An example was the custom-designed temporary research application "eXspot" at the Exploratorium, a science museum in San Francisco, California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104] In 2004, school authorities in the Japanese city of Osaka decided to start chipping children's clothing, backpacks, and student IDs in a primary school.[105] Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106] St Charles Sixth Form College in west London, England, has since 2008 used an RFID card system to check in and out of the main gate, both to track attendance and to prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, as of 2012, some schools use RFID in IDs for borrowing books.[107] Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99] RFID for timing races began in the early 1990s with pigeon racing, introduced by the company Deister Electronics in Germany.
RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant. In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush errors, lap-count errors, and accidents at race start are avoided, as anyone can start and finish at any time without being processed in a batch. The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or strapped to the ankle with hook-and-loop fasteners. The chips must be about 400 mm from the mat, giving very good temporal resolution. Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft). Passive and active RFID systems are used in off-road events such as orienteering, enduro, and hare and hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver, which is connected to a computer, to log their lap time. RFID has also been adopted by many recruitment agencies which have a physical endurance test (PET) as their qualifying procedure, especially in cases where candidate volumes may run into millions (Indian Railway recruitment cells, police, and the power sector). A number of ski resorts have adopted RFID tags to provide skiers hands-free access to ski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56 MHz.
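The race-timing read matching described earlier in this section can be sketched as follows. The data layout is an assumption for illustration, not taken from any timing vendor: a tag may be read several times as it crosses a mat, so the first start read and first finish read per tag are used.

```python
def race_times(reads):
    """Compute each entrant's elapsed time from raw timing-mat reads.

    `reads` is a list of (tag_id, mat, timestamp_seconds) tuples, where
    mat is "start" or "finish". Duplicate reads of the same tag at the
    same mat are collapsed to the earliest one."""
    first = {}
    for tag, mat, ts in reads:
        key = (tag, mat)
        if key not in first or ts < first[key]:
            first[key] = ts
    return {
        tag: first[(tag, "finish")] - first[(tag, "start")]
        for (tag, mat) in first
        if mat == "start" and (tag, "finish") in first
    }

reads = [
    ("tag42", "start", 10.0), ("tag42", "start", 10.2),  # duplicate read
    ("tag42", "finish", 131.5),
    ("tag7", "start", 12.0),                             # did not finish
]
times = race_times(reads)   # {"tag42": 121.5}
```

Entrants without a finish read (did-not-finish) simply produce no result, mirroring how mat-based systems tolerate missed reads.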
The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110] The NFL in the United States equips players with RFID chips that measure speed, distance, and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111] The chip triangulates the player's position within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public. The data will be available via the NFL 2015 app.[112] The RFID chips are manufactured by Zebra Technologies, which tested them in 18 stadiums the previous year to track vector data.[113] RFID tags are often a complement, but not a substitute, for Universal Product Code (UPC) or European Article Number (EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically, by e-mail or mobile phone, for printing or display by the recipient; an example is airline boarding passes. The new EPC, along with several other schemes, is widely available at reasonable cost. The storage of data associated with tracking items will require many terabytes; filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at the package level with unique UPC or EAN barcodes. A unique identity is a mandatory requirement for RFID tags, regardless of the choice of numbering scheme. RFID tag data capacity is large enough that each individual tag can have a unique code, while current barcodes are limited to a single type code for a particular product.
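The distinction drawn above between a barcode's type code and an RFID tag's unique identity can be illustrated as follows. The string format is purely illustrative; real EPC schemes such as SGTIN-96 use a packed binary encoding rather than dotted strings.

```python
def barcode_identity(gtin: str) -> str:
    # A UPC/EAN barcode encodes only the product *type*: every unit
    # of the same product carries the same number.
    return gtin

def epc_identity(gtin: str, serial: int) -> str:
    # An EPC-style RFID code pairs the product type with a per-item
    # serial number, so every physical unit is distinguishable.
    return f"{gtin}.{serial}"

# Three physical units of one product (illustrative GTIN):
units = [epc_identity("00012345678905", s) for s in range(3)]
```

A barcode scanner sees three identical codes; an RFID reader sees three distinct identities, which is what makes item-level tracking (and the associated privacy concerns discussed later) possible.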
The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but also contributes to concern about tracking and profiling of persons after the sale. Since around 2007, there has been increasing development in the use of RFID[when?]in thewaste managementindustry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114]The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks.[115]RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT)municipal solid wasteusage-pricing models. Active RFID tags have the potential to function as low-cost remote sensors that broadcasttelemetryback to a base station. Applications of tagometry data could include sensing of road conditions by implantedbeacons, weather reports, and noise level monitoring.[116] Passive RFID tags can also report sensor data. For example, theWireless Identification and Sensing Platformis a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers. 
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.[citation needed] To avoid injuries to humans and animals, RF transmission needs to be controlled.[117]A number of organizations have set standards for RFID, including theInternational Organization for Standardization(ISO), theInternational Electrotechnical Commission(IEC),ASTM International, theDASH7Alliance andEPCglobal.[118] Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT Assets with RFID, the Computer Technology Industry AssociationCompTIAfor certifying RFID engineers, and theInternational Air Transport Association(IATA) for luggage in airports.[citation needed] Every country can set its own rules forfrequency allocationfor RFID tags, and not all radio bands are available in all countries. These frequencies are known as theISM bands(Industrial Scientific and Medical bands). The return signal of the tag may still causeinterferencefor other radio users.[citation needed] In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power.[citation needed]In Europe, RFID and other low-power radio applications are regulated byETSIrecommendationsEN 300 220andEN 302 208, andEROrecommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz.[citation needed]Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current[when?]research. 
The North American UHF standard is not accepted in France, as it interferes with its military bands. On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band and establishing an international standard environment for RFID. In some countries a site license is needed, which must be applied for from the local authorities and can be revoked. As of 31 October 2014, regulations were in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119] A number of standards have been made regarding RFID. In order to ensure global interoperability of products, several organizations have set up additional standards for RFID testing. These standards include conformance, performance, and interoperability tests. EPC Gen2 is short for EPCglobal UHF Class 1 Generation 2. EPCglobal, a joint venture between GS1 and GS1 US, is working on international standards for the use of mostly passive RFID and the Electronic Product Code (EPC) in the identification of many items in the supply chain for companies worldwide. One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121] In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention from Intermec that the standard may infringe a number of their RFID-related patents.
It was decided that the standard itself does not infringe their patents, making the standard royalty free.[122]The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123] In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124] Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125] Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts[example needed]have been designed, mainly offered asmiddlewareperforming the filtering from noisy and redundant raw data to significant processed data.[citation needed] The frequencies used for UHF RFID in the USA are as of 2007 incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as thebarcode.[126]To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains. A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to theUnited States Department of Defense's recent[when?]adoption of RFID tags forsupply chain management.[127]More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. 
This is mostly as a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128]payment and eID (e-passport) systems operate at a shorter range than EPC RFID systems but are also vulnerable toskimmingand eavesdropping, albeit at shorter distances.[129] A second method of prevention is by using cryptography.Rolling codesandchallenge–response authentication(CRA) are commonly used to foil monitor-repetition of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission.[clarification needed]Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for acryptographicallycoded response from the tag. The protocols used during CRA can besymmetric, or may usepublic key cryptography.[130] While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support very low-power and therefore simple security protocols such ascover-coding.[131] Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132]Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130]Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133]as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002.[134]There are also concerns that the database structure ofObject Naming Servicemay be susceptible to infiltration, similar todenial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135] Microchip–induced tumours have been noted during animal trials.[136][137] In an effort to prevent the passive 
"skimming" of RFID-enabled cards or passports, the U.S.General Services Administration(GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138]For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139]The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140]Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use ofEMVchips rather than RFID makes this sort of theft rare.[141][142] There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating aFaraday cage, does work.[143]Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144] Shielding effectiveness depends on the frequency being used.Low-frequencyLowFID tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads.High frequencyHighFID tags (13.56 MHz—smart cardsand access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface.UHFUltra-HighFID tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.[145] The use of RFID has engendered considerable controversy and someconsumer privacyadvocates have initiated productboycotts. 
Consumer privacy expertsKatherine AlbrechtandLiz McIntyreare two prominent critics of the "spychip" technology. The two main privacy concerns regarding RFID are as follows:[citation needed] Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used forsurveillanceand other purposes unrelated to their supply chain inventory functions.[146] The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded, and let RF engineers demonstrate how RFID works.[147]They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag. The concerns raised may be addressed in part by use of theClipped Tag. The Clipped Tag is an RFID tag designed to increase privacy for the purchaser of an item. The Clipped Tag has been suggested byIBMresearchersPaul Moskowitzand Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling. However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. 
Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149] In January 2004, privacy advocates from CASPIAN and the German privacy groupFoeBuDwere invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customerloyalty cardscontained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.[150] During the UNWorld Summit on the Information Society(WSIS) in November 2005,Richard Stallman, the founder of thefree software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151] In 2004–2005, theFederal Trade Commissionstaff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152] RFID was one of the main topics of the 2006Chaos Communication Congress(organized by theChaos Computer ClubinBerlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The groupmonochromstaged a "Hack RFID" song.[153] Some individuals have grown to fear the loss of rights due to RFID human implantation. By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from aUS passport cardby using only $250 worth of equipment. 
This suggests that with the information captured, it would be possible to clone such cards.[154] According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155] In the book SpyChips: How Major Corporations and Government Plan to Track Your Every Move by Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156] According to an RSA Laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157] however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags and EPC tags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However, the time required is extremely short (a second or two of radiation) and the method works on many other non-electronic and inanimate items, long before heat or fire become a concern.[158] Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person who wants to "kill" the tag. UHF RFID tags that comply with the EPC Class 1 Generation 2 standard usually support this mechanism, while protecting the chip from being killed with a password.[159] Guessing or cracking the 32-bit password needed to kill a tag would not be difficult for a determined attacker.[160]
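To see why a 32-bit kill password offers limited protection, a back-of-the-envelope estimate helps. The sketch below assumes a hypothetical over-the-air query rate of 200 kill attempts per second; real rates vary with the air interface, and offline or side-channel attacks can be far faster.

```python
# Back-of-the-envelope feasibility of brute-forcing a 32-bit kill password.
# QUERIES_PER_SECOND is an illustrative assumption, not a measured figure.
KEYSPACE = 2 ** 32            # number of possible 32-bit passwords
QUERIES_PER_SECOND = 200      # assumed rate of kill attempts over the air

worst_case_seconds = KEYSPACE / QUERIES_PER_SECOND
worst_case_days = worst_case_seconds / 86_400
print(f"Worst case: {worst_case_days:.0f} days of continuous queries")
```

Even under this modest assumption the full keyspace falls in well under a year of continuous querying, which is why the password is considered weak against a determined attacker.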
https://en.wikipedia.org/wiki/RFID
A smart card (SC), chip card, or integrated circuit card (ICC or IC card), is a card used to control access to a resource. It is typically a plastic credit card-sized card with an embedded integrated circuit (IC) chip.[1] Many smart cards include a pattern of metal contacts to electrically connect to the internal chip. Others are contactless, and some are both. Smart cards can provide personal identification, authentication, data storage, and application processing.[2] Applications include identification, financial, public transit, computer security, schools, and healthcare. Smart cards may provide strong security authentication for single sign-on (SSO) within organizations. Numerous nations have deployed smart cards throughout their populations. The universal integrated circuit card (UICC) for mobile phones, installed as a pluggable SIM card or embedded eSIM, is also a type of smart card. As of 2015[update], 10.5 billion smart card IC chips are manufactured annually, including 5.44 billion SIM card IC chips.[3] The basis for the smart card is the silicon integrated circuit (IC) chip.[4] It was invented by Robert Noyce at Fairchild Semiconductor in 1959. The invention of the silicon integrated circuit led to the idea of incorporating it onto a plastic card in the late 1960s.[4] The idea of incorporating an integrated circuit chip onto a plastic card was first introduced by the German engineer Helmut Gröttrup. In February 1967, Gröttrup filed the patents DE1574074[5] and DE1574075[6] in West Germany for a tamper-proof identification switch based on a semiconductor device and described contactless communication via inductive coupling.[7] Its primary use was intended to provide individual copy-protected keys for releasing the tapping process at unmanned gas stations.
In September 1968, Gröttrup, together with Jürgen Dethloff as an investor, filed further patents for this identification switch, first in Austria[8] and in 1969 as subsequent applications in the United States,[9][10] Great Britain, West Germany and other countries.[11] Independently, Kunitaka Arimura of the Arimura Technology Institute in Japan developed a similar idea of incorporating an integrated circuit onto a plastic card, and filed a smart card patent in March 1970.[4][12] The following year, Paul Castrucci of IBM filed an American patent titled "Information Card" in May 1971.[12] In 1974, Roland Moreno patented a secured memory card later dubbed the "smart card".[13][14] In 1976, Jürgen Dethloff introduced the known element (called "the secret") to identify the card's user, as described in US Patent 4,105,156.[15] In 1977, Michel Ugon from Honeywell Bull invented the first microprocessor smart card with two chips: one microprocessor and one memory, and in 1978, he patented the self-programmable one-chip microcomputer (SPOM) that defines the necessary architecture to program the chip. Three years later, Motorola used this patent in its "CP8". At that time, Bull had 1,200 patents related to smart cards. In 2001, Bull sold its CP8 division together with its patents to Schlumberger, who subsequently combined its own internal smart card department and CP8 to create Axalto. In 2006, Axalto and Gemplus, at the time the world's top two smart-card manufacturers, merged and became Gemalto. In 2008, Dexa Systems spun off from Schlumberger and acquired the Enterprise Security Services business, which included the smart-card solutions division responsible for deploying the first large-scale smart-card management systems based on public key infrastructure (PKI). The first mass use of the cards was as a telephone card for payment in French payphones, starting in 1983.[16] After the Télécarte, microchips were integrated into all French Carte Bleue debit cards in 1992.
Customers inserted the card into the merchant's point-of-sale (POS) terminal, then typed the personal identification number (PIN), before the transaction was accepted. Only very limited transactions (such as paying small highway tolls) are processed without a PIN. Smart-card-based "electronic purse" systems store funds on the card, so that readers do not need network connectivity. They entered European service in the mid-1990s. They have been common in Germany (Geldkarte), Austria (Quick Wertkarte), Belgium (Proton), France (Moneo[17]), the Netherlands (Chipknip and Chipper, decommissioned in 2015), Switzerland ("Cash"), Norway ("Mondex"), Spain ("Monedero 4B"), Sweden ("Cash", decommissioned in 2004), Finland ("Avant"), UK ("Mondex"), Denmark ("Danmønt") and Portugal ("Porta-moedas Multibanco"). Private electronic purse systems have also been deployed, such as one for the U.S. Marine Corps (USMC) at Parris Island, allowing small payments at the cafeteria. Since the 1990s, smart cards have been the subscriber identity modules (SIMs) used in GSM mobile-phone equipment. Mobile phones are widely used across the world, so smart cards have become very common. Europay MasterCard Visa (EMV)-compliant cards and equipment are widespread, with the deployment led by European countries. The United States started deploying EMV technology later, in 2014, with the deployment still in progress in 2019. Typically, a country's national payment association, in coordination with MasterCard International, Visa International, American Express and Japan Credit Bureau (JCB), jointly plan and implement EMV systems. Historically, in 1993 several international payment companies agreed to develop smart-card specifications for debit and credit cards. The original brands were MasterCard, Visa, and Europay. The first version of the EMV system was released in 1994. In 1998 the specifications became stable. EMVCo maintains these specifications.
EMVCo's purpose is to assure the various financial institutions and retailers that the specifications retain backward compatibility with the 1998 version. EMVCo upgraded the specifications in 2000 and 2004.[18] EMV-compliant cards were first accepted in Malaysia in 2005[19] and later in the United States in 2014. MasterCard was the first company that was allowed to use the technology in the United States. The United States has felt pushed to use the technology because of the increase in identity theft. The credit card information stolen from Target in late 2013 was one of the largest indicators that American credit card information is not safe. Target made the decision on 30 April 2014 that it would try to implement the smart chip technology to protect itself from future credit card identity theft. Before 2014, the consensus in America was that there were enough security measures to avoid credit card theft and that the smart chip was not necessary. The cost of the smart chip technology was significant, which was why most of the corporations did not want to pay for it in the United States. The debate ended when Target sent out a notice[20] stating that unauthorized access to magnetic stripe data[21] had cost Target over 300 million dollars; this, along with the increasing cost of online credit theft, was enough for the United States to invest in the technology. The adoption of EMV cards increased significantly in 2015, when the credit card companies shifted liability in October.[clarify][citation needed] Contactless smart cards do not require physical contact between a card and reader. They are becoming more popular for payment and ticketing. Typical uses include mass transit and motorway tolls. Visa and MasterCard implemented a version deployed in 2004–2006 in the U.S., with Visa's current offering called Visa Contactless.
Most contactless fare collection systems are incompatible, though the MIFARE Standard card from NXP Semiconductors has a considerable market share in the US and Europe. Use of "contactless" smart cards in transport has also grown through the use of low-cost chips such as the NXP Mifare Ultralight and paper/card/PET rather than PVC. This has reduced media cost so it can be used for low-cost tickets and short-term transport passes (up to 1 year typically). The cost is typically 10% that of a PVC smart card with larger memory. They are distributed through vending machines, ticket offices and agents. Use of paper/PET is less harmful to the environment than traditional PVC cards. Smart cards are also being introduced for identification and entitlement by regional, national, and international organizations. These uses include citizen cards, drivers' licenses, and patient cards. In Malaysia, the compulsory national ID MyKad enables eight applications and has 18 million users. Contactless smart cards are part of ICAO biometric passports to enhance security for international travel. Complex Cards are smart cards that conform to the ISO/IEC 7810 standard and include components in addition to those found in traditional single-chip smart cards. Complex Cards were invented by Cyril Lalo and Philippe Guillaud in 1999 when they designed a chip smart card with additional components, building upon the initial concept, patented by Alain Bernard, of using audio frequencies to transmit data.[22] The first Complex Card prototype was developed collaboratively by Cyril Lalo and Philippe Guillaud, who were working at AudioSmartCard[23] at the time, and Henri Boccia and Philippe Patrice, who were working at Gemplus. It was ISO 7810-compliant and included a battery, a piezoelectric buzzer, a button, and delivered audio functions, all within a 0.84 mm thick card. The Complex Card pilot, developed by AudioSmartCard, was launched in 2002 by Crédit Lyonnais, a French financial institution.
This pilot featured acoustic tones as a means of authentication. Although Complex Cards have been developed since the inception of the smart card industry, they only reached maturity after 2010. Complex Cards can accommodate a variety of peripherals. While first-generation Complex Cards were battery powered, the second generation is battery-free and receives power through the usual card connector and/or induction. Sound, generated by a buzzer, was the preferred means of communication for the first projects involving Complex Cards. Later, with the progress of displays, visual communication is now present in almost all Complex Cards. Complex Cards support all communication protocols present on regular smart cards: contact, thanks to a contact pad as defined in the ISO/IEC 7816 standard; contactless, following the ISO/IEC 14443 standard; and magstripe. Developers of Complex Cards target several needs when developing them. A Complex Card can be used to compute a cryptographic value, such as a one-time password (OTP). The one-time password is generated by a cryptoprocessor encapsulated in the card. To implement this function, the cryptoprocessor must be initialized with a seed value, which enables the identification of the OTPs respective to each card. The hash of the seed value has to be stored securely within the card to prevent unauthorized prediction of the generated OTPs. One-time password generation is based either on incremental values (event based) or on a real-time clock (time based). Using clock-based one-time password generation requires the Complex Card to be equipped with a real-time clock. Complex Cards used to generate one-time passwords have been developed for several financial institutions. A Complex Card with buttons can display the balance of one or multiple account(s) linked to the card.
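The event-based and time-based OTP schemes described above can be sketched with the standard HMAC-based construction (HOTP, RFC 4226, and its time-based variant TOTP). This is an illustrative software model; an actual Complex Card runs the equivalent computation inside its cryptoprocessor.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Event-based OTP: HMAC the counter, then dynamically truncate (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble of last byte picks a window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based OTP: same construction, with the counter taken from a real-time clock."""
    return hotp(secret, int(time.time()) // period, digits)
```

With the RFC 4226 test secret `b"12345678901234567890"`, `hotp(secret, 0)` yields `"755224"`, matching the published test vectors; this is why a time-based card needs an on-board real-time clock while an event-based one only needs a counter.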
Typically, either one button is used to display the balance in the case of a single-account card or, in the case of a card linked to multiple accounts, a combination of buttons is used to select a specific account's balance. For additional security, features such as requiring the user to enter an identification or a security value such as a PIN can be added to a Complex Card. Complex Cards used to provide account information have been developed for several banks. The latest generation of battery-free, button-free Complex Cards can display a balance or other kinds of information without requiring any input from the cardholder. The information is updated during the use of the card. For instance, in a transit card, key information such as the monetary balance, the number of remaining trips or the expiry date of a transit pass can be displayed. A Complex Card deployed as a payment card can be equipped with the capability to provide transaction security. Typically, online payments are made secure thanks to the Card Security Code (CSC), also known as card verification code (CVC2) or card verification value (CVV2). The card security code (CSC) is a 3- or 4-digit number printed on a credit or debit card, used as a security feature for card-not-present (CNP) payment card transactions to reduce the incidence of fraud. The Card Security Code (CSC) is given to the merchant by the cardholder to complete a card-not-present transaction. The CSC is transmitted along with other transaction data and verified by the card issuer. The Payment Card Industry Data Security Standard (PCI DSS) prohibits the storage of the CSC by the merchant or any stakeholder in the payment chain. Although designed to be a security feature, the static CSC is susceptible to fraud, as it can easily be memorized by a shop attendant, who could then use it for fraudulent online transactions or sale on the dark web.
This vulnerability has led the industry to develop a Dynamic Card Security Code (DCSC) that can be changed at certain time intervals, or after each contact or contactless EMV transaction. This Dynamic CSC brings significantly better security than a static CSC. The first generation of Dynamic CSC cards, developed by NagraID Security, required a battery, a quartz crystal and a real-time clock (RTC) embedded within the card to power the computation of a new Dynamic CSC after expiration of the programmed period. The second generation of Dynamic CSC cards, developed by Ellipse World, Inc., does not require any battery, quartz crystal, or RTC to compute and display the new dynamic code. Instead, the card obtains its power either through the usual card connector or by induction during every EMV transaction from the point-of-sale (POS) terminal or automated teller machine (ATM) to compute a new DCSC. The Dynamic CSC, also called a dynamic cryptogram, is marketed by several companies under different brand names. The advantage of the Dynamic Card Security Code (DCSC) is that new information is transmitted with each payment transaction, making it useless for a potential fraudster to memorize or store it. A transaction with a Dynamic Card Security Code is carried out exactly the same way, with the same processes and use of parameters, as a transaction with a static code in a card-not-present transaction. Upgrading to a DCSC allows cardholders and merchants to continue their payment habits and processes undisturbed. Complex Cards can be equipped with biometric sensors, allowing for stronger user authentication. In the typical use case, fingerprint sensors are integrated into a payment card to bring a higher level of user authentication than a PIN. To implement user authentication using a fingerprint-enabled smart card, the user has to authenticate himself/herself to the card by means of the fingerprint before starting a payment transaction.
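The idea behind a dynamic CSC can be illustrated with a simple keyed construction: derive each 3-digit code from a per-card secret and a transaction counter (or a time step), so the issuer can recompute and verify it while a captured code is worthless for the next transaction. The scheme below is purely illustrative; the actual algorithms used by DCSC vendors are proprietary.

```python
import hashlib
import hmac
import struct

def dynamic_csc(card_key: bytes, transaction_counter: int) -> str:
    # Illustrative only: derive a 3-digit code from a per-card key and an
    # EMV-style application transaction counter. Real DCSC schemes differ.
    mac = hmac.new(card_key, struct.pack(">Q", transaction_counter),
                   hashlib.sha256).digest()
    value = int.from_bytes(mac[:4], "big")
    return str(value % 1000).zfill(3)
```

Because the issuer shares `card_key` and tracks the counter, it can verify each submitted code independently, which is what lets a DCSC transaction reuse the existing card-not-present flow unchanged.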
Several companies[29] offer cards with fingerprint sensors. Complex Cards can incorporate a wide variety of components. The choice of components drives functionality, influences cost, power supply needs, and manufacturing complexity. Depending on the Complex Card type, buttons have been added to allow easy interaction between the user and the card. While separate keys were used on prototypes in the early days, capacitive keyboards are the most popular solution now, thanks to technology developments by AudioSmartCard International SA.[30] Interaction with a capacitive keyboard requires constant power, therefore a battery and a mechanical button are required to activate the card. The first Complex Cards were equipped with a buzzer that made it possible to broadcast sound. This feature was generally used over the phone to send identification data such as an identifier and one-time passwords (OTPs). Technologies used for sound transmission include DTMF (dual-tone multi-frequency signaling) and FSK (frequency-shift keying). Displaying data is an essential part of Complex Card functionality. Depending on the information that needs to be shown, displays can be digital or alphanumeric and of varying lengths, and can be made using one of two technologies. Displays can be located either on the front or back of the card. A front display is the most common solution for showing information such as a one-time password or an electronic purse balance. A rear display is more often used for showing a Dynamic Card Security Code (DCSC). If a Complex Card is dedicated to making cryptographic computations (such as generating a one-time password) it may require a secure cryptoprocessor. As Complex Cards contain more components than traditional smart cards, their power consumption must be carefully monitored.
First-generation Complex Cards require a power supply even in standby mode. As such, product designers generally included a battery in their design. Incorporating a battery creates an additional burden in terms of complexity, cost, space and flexibility in an already dense design. Including a battery in a Complex Card increases the complexity of the manufacturing process, as a battery cannot be hot laminated. Second-generation Complex Cards feature a battery-free design. These cards harvest the necessary power from external sources, for example when the card interacts in a contact or contactless fashion with a payment system or an NFC-enabled smartphone. The use of a bistable display in the card design ensures that the screen remains legible even when the Complex Card is disconnected from the power source. Complex Card manufacturing methods are inherited from the smart card industry and from the electronics mounting industry. That Complex Cards incorporate several components while having to remain within 0.8 mm thickness, stay flexible, and comply with the ISO/IEC 7810, ISO/IEC 7811 and ISO/IEC 7816 standards renders their manufacture more complex than that of standard smart cards. One of the most popular manufacturing processes in the smart card industry is lamination. This process involves laminating an inlay between two card faces. The inlay contains the needed electronic components, with an antenna printed on an inert support. Typically, battery-powered Complex Cards require a cold lamination manufacturing process. This process impacts the manufacturing lead time and the overall cost of such a Complex Card. Second-generation, battery-free Complex Cards can be manufactured with the existing hot lamination process. This automated process, inherited from traditional smart card manufacturing, enables the production of Complex Cards in large quantities while keeping costs under control, a necessity for the evolution from a niche to a mass market.
As with standard smart cards, Complex Cards go through a lifecycle spanning manufacturing, personalization, use and end-of-life. As Complex Cards offer more functionality than standard smart cards and are more complex, their personalization can take longer or require more inputs. Having Complex Cards that can be personalized by the same machines and the same processes as regular smart cards allows them to be integrated more easily into existing manufacturing chains and applications. First-generation, battery-operated Complex Cards require specific recycling processes, mandated by different regulatory bodies. Additionally, keeping battery-operated Complex Cards in inventory for extended periods of time may reduce their performance due to battery ageing. Second-generation, battery-free technology ensures operation during the entire lifetime of the card and eliminates self-discharge, providing extended shelf life, and is more eco-friendly. Since the inception of smart cards, innovators have been trying to add extra features. As technologies have matured and been industrialized, several smart card industry players have become involved in Complex Cards. The Complex Card concept began in 1999 when Cyril Lalo and Philippe Guillaud, its inventors, first designed a smart card with additional components. The first prototype was developed collaboratively by Cyril Lalo, who was the CEO of AudioSmartCard at the time, and Henri Boccia and Philippe Patrice, from Gemplus. The prototype included a button and audio functions on a 0.84 mm thick, ISO 7810-compliant card. Since then, Complex Cards have been mass-deployed primarily by NagraID Security. AudioSmartCard International SA[33] was instrumental in developing the first Complex Card that included a battery, a piezoelectric buzzer, a button, and audio functions, all on a 0.84 mm thick, ISO 7810-compatible card. AudioSmartCard was founded in 1993 and specialized in the development and marketing of acoustic tokens incorporating security features.
These acoustic tokens exchanged data in the form of sounds transmitted over a phone line. In 1999, AudioSmartCard transitioned to new leadership under Cyril Lalo and Philippe Guillaud, who also became major shareholders. They made AudioSmartCard evolve towards the smart card world. In 2003, Prosodie,[34] a subsidiary of Capgemini, joined the shareholders of AudioSmartCard. AudioSmartCard was renamed nCryptone[35] in 2004. CardLab Innovation,[36] incorporated in 2006 in Herlev, Denmark, specializes in Complex Cards that include a switch, a biometric reader, an RFID jammer, and one or more magstripes. The company works with manufacturing partners in China and Thailand and owns a card lamination factory in Thailand. Coin was a US-based startup[37] founded in 2012 by Kanishk Parashar.[38] It developed a Complex Card capable of storing the data of several credit and debit cards. The card prototype was equipped with a display[39][full citation needed] and a button that enabled the user to switch between different cards. In 2015, the original Coin card concept evolved into Coin 2.0, adding contactless communication to its original magstripe emulation.[40] Coin was acquired by Fitbit in May 2016[41] and all Coin activities were discontinued in February 2017.[42] Ellipse World, Inc.[43] was founded in 2017 by Cyril Lalo and Sébastien Pochic, both recognized experts in Complex Card technology. Ellipse World, Inc. specializes in battery-free Complex Card technology. The Ellipse patented technologies enable smart card manufacturers to use their existing dual-interface payment card manufacturing processes and supply chains to build battery-free, second-generation Complex Cards with display capabilities. Thanks to this ease of integration, smart card vendors are able to address the banking, transit and prepaid card markets.
EMue Technologies,[44] headquartered in Melbourne, Australia, designed and developed authentication solutions for the financial services industry from 2009 to 2015.[45] The company's flagship product, developed in collaboration with Cyril Lalo and Philippe Guillaud, was the eMue Card, a Visa CodeSure[46] credit card with an embedded keypad, a display and a microprocessor. Feitian Technologies, a China-based company created in 1998, provides cyber security products and solutions. The company offers security solutions based on smart cards as well as other authentication devices. These include Complex Cards that incorporate a display,[47] a keypad[48] or a fingerprint sensor.[49] Fingerprint Cards AB (or Fingerprints[50]) is a Swedish company specializing in biometric solutions. The company sells biometric sensors and has recently introduced payment cards incorporating a fingerprint sensor,[51] such as the Zwipe card,[52] a biometric dual-interface payment card using an integrated sensor from Fingerprints. Giesecke & Devrient, also known as G+D,[53] is a German company headquartered in Munich that provides banknotes, security printing, smart cards and cash handling systems. Its smart card portfolio includes display cards, OTP cards, as well as cards displaying a Dynamic CSC. Gemalto, a division of Thales Group, is a major player in the secure transaction industry. The company's Complex Card portfolio includes cards with a display[54] or a fingerprint sensor.[55] These cards may display an OTP[56] or a Dynamic CSC.[57] IDEMIA is the product of the 2017[58] merger of Oberthur Technologies and Morpho. The combined company has positioned itself as a global provider of financial cards, SIM cards, biometric devices as well as public and private identity solutions.
Due to Oberthur's acquisition of NagraID Security in 2014, Idemia's Complex Card offerings include the F.CODE[59] biometric payment card, which includes a fingerprint sensor, and its battery-powered Motion Code[60] card, which displays a Dynamic CSC. IDEX Biometrics ASA, incorporated in Norway, specializes in fingerprint identification technologies for personal authentication. The company offers fingerprint sensors[61] and modules[62] that are ready to be embedded into cards.[63] Founded in 2002 by Alan Finkelstein, Innovative Card Technologies developed and commercialized enhancements for the smart card market. The company acquired the display card assets of nCryptone[64] in 2006. Innovative Card Technologies has ceased its activities. Nagra ID, now known as NID,[65] was a wholly owned subsidiary of the Kudelski Group until 2014. NID can trace its history with Complex Cards back to 2003, when it collaborated on development with nCryptone. Nagra ID was instrumental in developing the cold lamination process for Complex Card manufacturing. Nagra ID manufactures Complex Cards[66] that can include a battery, buttons, displays or other electronic components. Nagra ID Security began in 2008 as a spinoff of Nagra ID to focus on Complex Card development and manufacturing. The company was owned by Kudelski Group (50%), Cyril Lalo (25%) and Philippe Guillaud (25%). NagraID Security quickly became a leading player in the adoption of Complex Cards due, in large part, to its development of MotionCode cards that featured a small display showing a dynamic Card Security Code (CVV2). NagraID Security was the first Complex Card manufacturer to develop a mass market for payment display cards, and it also delivered one-time password cards to several companies. In 2014, NagraID Security was sold to Oberthur Technologies (now IDEMIA). nCryptone emerged in 2004 from the renaming of AudioSmartCard.
nCryptone was headed by Cyril Lalo and Philippe Guillaud[68] and developed technologies around authentication servers and devices. nCryptone's display card assets were acquired by Innovative Card Technologies in 2006.[69] Oberthur Technologies, now IDEMIA, is one of the major players in the secure transactions industry. It acquired the business of NagraID Security in 2014. Oberthur then merged with Morpho, and the combined entity was renamed Idemia in 2017. Set up in 2009, Plastc announced a single card that could digitally hold the data of up to 20 credit or debit cards. The company succeeded in raising US$9 million through preorders but failed to deliver any product.[73] Plastc was then acquired[74] in 2017 by Edge Mobile Payments,[75] a Santa Cruz-based fintech company. The Plastc project continues as the Edge card,[76] a dynamic payment card that consolidates several payment cards in one device. The card is equipped with a battery and an ePaper screen and can store data from up to 50 credit, debit, loyalty and gift cards. Stratos[77] was created in 2012 in Ann Arbor, Michigan, USA. In 2015, Stratos developed the Stratos Bluetooth Connected Card,[78] which was designed to integrate up to three credit and debit cards in a single card format and featured a smartphone app used to manage the card. Thanks to its lithium-ion thin-film battery, the Stratos card was equipped with LEDs and communicated in contactless mode and over Bluetooth Low Energy. In 2017, Stratos was acquired[79] by CardLab Innovation, a company headquartered in Herlev, Denmark. SWYP[80] was the brand name of a card developed by Qvivr, a company incorporated in 2014 in Fremont, California. SWYP was introduced in 2015 and dubbed the world's first smart wallet. SWYP was a metal card with the ability to combine over 25 credit, debit, gift and loyalty cards. The card worked in conjunction with a smartphone app used to manage the cards.
The SWYP card included a battery, a button and a matrix display that showed which card was in use. The company registered users in its beta testing program, but the product never shipped on a commercial scale. Qvivr raised US$5 million in January 2017[81] and went out of business in November 2017. Complex Cards have been adopted by numerous financial institutions worldwide. They may include different functionalities such as payment cards (credit, debit, prepaid), one-time passwords, mass transit, and a dynamic Card Security Code (CVV2). Since April 2009, a Japanese company has manufactured reusable financial smart cards made from paper.[98] As mentioned above, data on a smart card may be stored in a file system (FS). In smart card file systems, the root directory is called the "master file" ("MF"), subdirectories are called "dedicated files" ("DF"), and ordinary files are called "elementary files" ("EF").[99] This file system is stored on an EEPROM (storage or memory) within the smart card.[99] In addition to the EEPROM, other components may be present, depending upon the kind of smart card. Most smart cards have one of three logical layouts. In cards with microprocessors, the microprocessor sits inline between the reader and the other components. The operating system that runs on the microprocessor mediates the reader's access to those components to prevent unauthorized access.[99] Contact smart cards have a contact area of approximately 1 square centimetre (0.16 sq in), comprising several gold-plated contact pads. These pads provide electrical connectivity when inserted into a reader,[102] which is used as a communications medium between the smart card and a host (e.g., a computer, a point-of-sale terminal) or a mobile telephone. Cards do not contain batteries; power is supplied by the card reader.
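The MF/DF/EF hierarchy described above can be modeled as a small tree. In the sketch below, 0x3F00 is the standard master file identifier from ISO/IEC 7816-4; the DF and EF identifiers are illustrative examples.

```python
# Minimal model of the smart card file hierarchy: a master file (MF) at the
# root, dedicated files (DF) acting as directories, and elementary files (EF)
# holding data. File identifiers are 16-bit values.

class CardFile:
    def __init__(self, fid: int, kind: str, data: bytes = b""):
        self.fid = fid          # two-byte file identifier
        self.kind = kind        # "MF", "DF", or "EF"
        self.data = data        # payload; only meaningful for EFs
        self.children = {}      # child files; only meaningful for MF/DF

    def add(self, child: "CardFile") -> "CardFile":
        self.children[child.fid] = child
        return child

mf = CardFile(0x3F00, "MF")                        # master file (root)
df = mf.add(CardFile(0x7F10, "DF"))                # a dedicated file (directory)
ef = df.add(CardFile(0x6F3A, "EF", b"\x01\x02"))   # an elementary file with data
```

The card's operating system walks exactly this kind of tree when a reader issues SELECT commands, which is how it can enforce per-file access conditions.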
The ISO/IEC 7810 and ISO/IEC 7816 series of standards define: Because the chips in financial cards are the same as those used in subscriber identity modules (SIMs) in mobile phones, programmed differently and embedded in a different piece of PVC, chip manufacturers are building to the more demanding GSM/3G standards. So, for example, although the EMV standard allows a chip card to draw 50 mA from its terminal, cards normally draw well below the telephone industry's 6 mA limit. This allows smaller and cheaper financial card terminals. Communication protocols for contact smart cards include T=0 (character-level transmission protocol, defined in ISO/IEC 7816-3) and T=1 (block-level transmission protocol, defined in ISO/IEC 7816-3). Contactless smart cards communicate with readers under protocols defined in the ISO/IEC 14443 standard. They support data rates of 106–848 kbit/s. These cards require only proximity to an antenna to communicate. Like smart cards with contacts, contactless cards do not have an internal power source. Instead, they use a loop antenna coil to capture some of the incident radio-frequency interrogation signal, rectify it, and use it to power the card's electronics. Contactless smart media can be made with PVC, paper/card and PET finish to meet different performance, cost and durability requirements. APDU transmission by a contactless interface is defined in ISO/IEC 14443-4. Hybrid cards implement contactless and contact interfaces on a single card with unconnected chips, each with dedicated modules/storage and processing. Dual-interface cards implement contactless and contact interfaces on a single chip with some shared storage and processing. An example is Porto's multi-application transport card, called Andante, which uses a chip with both contact and contactless (ISO/IEC 14443 Type B) interfaces. Numerous payment cards worldwide are based on hybrid card technology allowing them to communicate in contactless as well as contact modes.
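The block-level T=1 protocol mentioned above frames each exchange with a three-byte prologue (node address, protocol control byte, length), the information field, and an XOR checksum (LRC) as the epilogue. A rough sketch of assembling such a block, omitting block chaining, error recovery and the optional CRC epilogue:

```python
from functools import reduce

def t1_block(nad: int, pcb: int, inf: bytes) -> bytes:
    """Assemble a T=1 block per ISO/IEC 7816-3: NAD, PCB, LEN prologue,
    information field, and a one-byte LRC (XOR of every preceding byte).
    Sketch only -- chaining and the optional CRC epilogue are omitted."""
    if len(inf) > 254:
        raise ValueError("information field too long for a single block")
    prologue = bytes([nad, pcb, len(inf)])
    lrc = reduce(lambda a, b: a ^ b, prologue + inf, 0)
    return prologue + inf + bytes([lrc])

# The information field typically carries an APDU, e.g. a SELECT FILE
# command (CLA=00, INS=A4) addressing the master file 3F00:
select_mf = bytes([0x00, 0xA4, 0x00, 0x00, 0x02, 0x3F, 0x00])
block = t1_block(0x00, 0x00, select_mf)
```

The receiving side recomputes the XOR over the whole block; a nonzero result signals a transmission error and triggers a retransmission request.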
The CCID (Chip Card Interface Device) is a USB protocol that allows a smart card to be interfaced to a computer using a card reader which has a standard USB interface. This allows the smart card to be used as a security token for authentication and data encryption such as BitLocker. A typical CCID is a USB dongle and may contain a SIM. Different smart cards implement one or more reader-side protocols. Common protocols here include CT-API and PC/SC.[99] Smartcard operating systems may provide application programming interfaces (APIs) so that developers can write programs ("applications") to run on the smartcard. Some such APIs, such as Java Card, allow programs to be uploaded to the card without replacing the card's entire operating system.[99] Smart cards serve as credit or ATM cards, fuel cards, mobile phone SIMs, authorization cards for pay television, household utility pre-payment cards, high-security identification and access badges, and public transport and public phone payment cards. Smart cards may also be used as electronic wallets. The smart card chip can be "loaded" with funds to pay parking meters, vending machines or merchants. Cryptographic protocols protect the exchange of money between the smart card and the machine. No connection to a bank is needed. The holder of the card may use it even if not the owner. Examples are Proton, Geldkarte, Chipknip and Moneo. The German Geldkarte is also used to validate customer age at vending machines for cigarettes. These are the best known payment cards (classic plastic card): Roll-outs started in 2005 in the U.S.; Asia and Europe followed in 2006. Contactless (non-PIN) transactions cover a payment range of ~$5–50. There is an ISO/IEC 14443 PayPass implementation. Some, but not all, PayPass implementations conform to EMV. Non-EMV cards work like magnetic stripe cards. This is common in the U.S. (PayPass Magstripe and Visa MSD). The cards do not hold or maintain the account balance.
All payment passes without a PIN, usually in off-line mode. The security of such a transaction is no greater than with a magnetic stripe card transaction.[citation needed] EMV cards can have either contact or contactless interfaces. They work as if they were a normal EMV card with a contact interface. Via the contactless interface they work somewhat differently, in that the card commands enable improved features such as lower power and shorter transaction times. EMV standards include provisions for contact and contactless communications. Typically, modern payment cards are based on hybrid card technology and support both contact and contactless communication modes. The subscriber identity modules used in mobile-phone systems are reduced-size smart cards, using otherwise identical technologies. Smart cards can authenticate identity. Sometimes they employ a public key infrastructure (PKI). The card stores an encrypted digital certificate issued from the PKI provider along with other relevant information. Examples include the U.S. Department of Defense (DoD) Common Access Card (CAC), and other cards used by other governments for their citizens. If they include biometric identification data, cards can provide superior two- or three-factor authentication. Smart cards are not always privacy-enhancing, because the subject may carry incriminating information on the card. Contactless smart cards that can be read from within a wallet or even a garment simplify authentication; however, criminals may access data from these cards. Cryptographic smart cards are often used for single sign-on. Most advanced smart cards include specialized cryptographic hardware that uses algorithms such as RSA and the Digital Signature Algorithm (DSA). Today's cryptographic smart cards generate key pairs on board, to avoid the risk from having more than one copy of the key (since by design there usually isn't a way to extract private keys from a smart card).
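The on-board key-pair model described above can be illustrated with a toy: the private exponent is generated inside the "card" object and never crosses its interface; callers only ever see the public key and signatures. The numbers are textbook-sized and completely insecure (a real card generates a 2048-bit key in hardware); this is only a sketch of the architecture.

```python
import hashlib

class ToyCryptoCard:
    """Toy model of a cryptographic smart card: the private exponent is
    generated 'on board' and never exposed. Textbook RSA with tiny primes,
    for illustration only -- not a secure or padded signature scheme."""

    def __init__(self):
        p, q = 61, 53                       # a real card uses large random primes
        self._n = p * q                     # modulus (3233)
        self._e = 17                        # public exponent
        self._d = pow(self._e, -1, (p - 1) * (q - 1))  # private, stays on card

    @property
    def public_key(self):
        return (self._n, self._e)           # only the public half is exported

    def sign(self, message: bytes) -> int:
        """Hash, then apply the private exponent -- all inside the card."""
        h = int.from_bytes(hashlib.sha256(message).digest(), "big") % self._n
        return pow(h, self._d, self._n)

def verify(public_key, message: bytes, sig: int) -> bool:
    """Off-card verification using only the exported public key."""
    n, e = public_key
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h
```

A host application would fetch `card.public_key` once (typically wrapped in a certificate) and verify every subsequent signature off-card; there is no call that returns the private exponent.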
Such smart cards are mainly used for digital signatures and secure identification. The most common way to access cryptographic smart card functions on a computer is to use a vendor-provided PKCS#11 library.[citation needed] On Microsoft Windows the Cryptographic Service Provider (CSP) API is also supported. The most widely used cryptographic algorithms in smart cards (excluding the GSM so-called "crypto algorithm") are Triple DES and RSA. The key set is usually loaded (DES) or generated (RSA) on the card at the personalization stage. Some of these smart cards are also made to support the National Institute of Standards and Technology (NIST) standard for Personal Identity Verification, FIPS 201. Turkey implemented the first smart card driver's license system in 1987. Turkey had a high level of road accidents and decided to develop and use digital tachograph devices on heavy vehicles, instead of the existing mechanical ones, to reduce speed violations. Since 1987, the professional driver's licenses in Turkey have been issued as smart cards. A professional driver is required to insert his driver's license into a digital tachograph before starting to drive. The tachograph unit records speed violations for each driver and gives a printed report. The driving hours for each driver are also monitored and reported. In 1990 the European Union conducted a feasibility study through BEVAC Consulting Engineers, titled "Feasibility study with respect to a European electronic drivers license (based on a smart-card) on behalf of Directorate General VII". In this study, chapter seven describes Turkey's experience. Argentina's Mendoza province began using smart card driver's licenses in 1995. Mendoza also had a high level of road accidents, driving offenses, and a poor record of recovering fines.[citation needed] Smart licenses hold up-to-date records of driving offenses and unpaid fines. They also store personal information, license type and number, and a photograph.
Emergency medical information such as blood type, allergies, and biometrics (fingerprints) can be stored on the chip if the card holder wishes. The Argentine government anticipates that this system will help to collect more than $10 million per year in fines. In 1999, Gujarat was the first Indian state to introduce a smart card license system.[103] As of 2005, it had issued 5 million smart card driving licenses to its people.[104] In 2002, the Estonian government started to issue smart cards named ID Kaart as primary identification for citizens to replace the usual passport in domestic and EU use. As of 2010 about 1 million smart cards have been issued (total population is about 1.3 million) and they are widely used in internet banking, buying public transport tickets, authorization on various websites etc. By the start of 2009, the entire population of Belgium was issued eID cards that are used for identification. These cards contain two certificates: one for authentication and one for signature. This signature is legally enforceable. More and more services in Belgium use eID for authorization.[105] Spain started issuing national ID cards (DNI) in the form of smart cards in 2006 and gradually replaced all the older ones with smart cards. The idea was that many or most bureaucratic acts could be done online, but it was a failure because the Administration did not adapt and still mostly requires paper documents and personal presence.[106][107][108][109] On 14 August 2012, the ID cards in Pakistan were replaced. The Smart Card is a third-generation chip-based identity document that is produced according to international standards and requirements. The card has over 36 physical security features and has the latest[clarification needed] encryption codes. This smart card replaced the NICOP (the ID card for overseas Pakistanis). Smart cards may identify emergency responders and their skills.
Cards like these allow first responders to bypass organizational paperwork and focus more time on the emergency resolution. In 2004, the Smart Card Alliance expressed the needs: "to enhance security, increase government efficiency, reduce identity fraud, and protect personal privacy by establishing a mandatory, Government-wide standard for secure and reliable forms of identification".[110] Emergency response personnel can carry these cards to be positively identified in emergency situations. WidePoint Corporation, a smart card provider to FEMA, produces cards that contain additional personal information, such as medical records and skill sets. In 2007, the Open Mobile Alliance (OMA) proposed a new standard defining V1.0 of the Smart Card Web Server (SCWS), an HTTP server embedded in a SIM card intended for a smartphone user.[111] The non-profit trade association SIMalliance has been promoting the development and adoption of SCWS. SIMalliance states that SCWS offers end-users a familiar, OS-independent, browser-based interface to secure, personal SIM data. As of mid-2010, SIMalliance had not reported widespread industry acceptance of SCWS.[112] The OMA has been maintaining the standard, approving V1.1 of the standard in May 2009, and V1.2 was expected to be approved in October 2012.[113] Smart cards are also used to identify user accounts on arcade machines.[114] Smart cards used as transit passes, and integrated ticketing, are employed by many public transit operators. Card users may also make small purchases using the cards. Some operators offer points for usage, exchanged at retailers or for other benefits.[115] Examples include Singapore's CEPAS, Malaysia's Touch 'n Go, Ontario's Presto card, Hong Kong's Octopus card, Tokyo's Suica and PASMO cards, London's Oyster card, Ireland's Leap Card, Brussels' MoBIB, Québec's Opus card, Boston's CharlieCard, San Francisco's Clipper card, Washington, D.C.'s SmarTrip, Auckland's AT Hop, Brisbane's go card, Perth's SmartRider, Sydney's Opal card and Victoria's myki.
However, these present a privacy risk because they allow the mass transit operator (and the government) to track an individual's movement. In Finland, for example, the Data Protection Ombudsman prohibited the transport operator Helsinki Metropolitan Area Council (YTV) from collecting such information, despite YTV's argument that the card owner has the right to a list of trips paid with the card. Earlier, such information was used in the investigation of the Myyrmanni bombing.[citation needed] The UK's Department for Transport mandated smart cards to administer travel entitlements for elderly and disabled residents. These schemes let residents use the cards for more than just bus passes; they can also be used for taxi and other concessionary transport. One example is the "Smartcare go" scheme provided by Ecebs.[116] The UK systems use the ITSO Ltd specification. Other schemes in the UK include period travel passes, carnets of tickets or day passes, and stored value which can be used to pay for journeys. Other concessions for school pupils, students and job seekers are also supported. These are mostly based on the ITSO Ltd specification. Many smart transport schemes include the use of low-cost smart tickets for simple journeys, day passes and visitor passes. Examples include the Glasgow SPT subway. These smart tickets are made of paper or PET, which is thinner than a PVC smart card, e.g. Confidex smart media.[117] The smart tickets can be supplied pre-printed and over-printed or printed on demand. In Sweden, as of 2018–19, the old SL Access smart card system has started to be phased out and replaced by smartphone apps. The phone apps cost less, at least for the transit operators, who do not need any electronic equipment (the riders provide it). The riders are able to buy tickets anywhere and do not need to load money onto smart cards. New NFC smart cards remain in use for the foreseeable future (as of 2024).
In Japanese amusement arcades, contactless smart cards (usually referred to as "IC cards") are used by game manufacturers as a method for players to access in-game features (both online, like Konami E-Amusement and Sega ALL.Net, and offline) and as a memory support to save game progress. Depending on the game, the machines can use a game-specific card or a "universal" one usable on multiple machines from the same manufacturer/publisher. Among the most widely used are Banapassport by Bandai Namco, e-Amusement Pass by Konami, Aime by Sega and Nesica by Taito. In 2018, in an effort to make arcade game IC cards more user-friendly,[118] Konami, Bandai Namco and Sega agreed on a unified system of cards named Amusement IC. Thanks to this agreement, the three companies now use a unified card reader in their arcade cabinets, so that players are able to use their card, whether a Banapassport, an e-Amusement Pass or an Aime, with hardware and ID services of all three manufacturers. A common logo for Amusement IC cards has been created, and this is now displayed on compatible cards from all three companies. In January 2019, Taito announced[119] that its Nesica card was also joining the Amusement IC agreement with the other three companies. Smart cards can be used as a security token. Mozilla's Firefox web browser can use smart cards to store certificates for use in secure web browsing.[120] Some disk encryption systems, such as VeraCrypt and Microsoft's BitLocker, can use smart cards to securely hold encryption keys, and also to add another layer of encryption to critical parts of the secured disk. GnuPG, the well-known encryption suite, also supports storing keys in a smart card.[121] Smart cards are also used for single sign-on to log on to computers.
Smart cards are being provided to students at some schools and colleges.[122][123][124] Uses include: Smart health cards can improve the security and privacy of patient information, provide a secure carrier for portable medical records, reduce health care fraud, support new processes for portable medical records, provide secure access to emergency medical information, enable compliance with government initiatives (e.g., organ donation) and mandates, and provide the platform to implement other applications as needed by the health care organization.[125][126] Smart cards are widely used to encrypt digital television streams. VideoGuard is a specific example of how smart card security works. The Malaysian government promotes MyKad as a single system for all smart-card applications. MyKad started as identity cards carried by all citizens and resident non-citizens. Available applications now include identity, travel documents, driver's license, health information, an electronic wallet, ATM bank card, public toll-road and transit payments, and public key encryption infrastructure. The personal information inside the MyKad card can be read using special APDU commands.[127] Smart cards have been advertised as suitable for personal identification tasks, because they are engineered to be tamper-resistant. The chip usually implements some cryptographic algorithm. There are, however, several methods for recovering some of the algorithm's internal state. Differential power analysis involves measuring the precise time and electric current required for certain encryption or decryption operations. This can deduce the on-chip private key used by public key algorithms such as RSA. Some implementations of symmetric ciphers can be vulnerable to timing or power attacks as well. Smart cards can be physically disassembled by using acid, abrasives, solvents, or some other technique to obtain unrestricted access to the on-board microprocessor.
Although such techniques may involve a risk of permanent damage to the chip, they permit much more detailed information (e.g., photomicrographs of encryption hardware) to be extracted. The benefits of smart cards are directly related to the volume of information and applications that are programmed for use on a card. A single contact/contactless smart card can be programmed with multiple banking credentials, medical entitlement, driver's license/public transport entitlement, loyalty programs and club memberships, to name just a few. Multi-factor and proximity authentication can be, and have been, embedded into smart cards to increase the security of all services on the card. For example, a smart card can be programmed to only allow a contactless transaction if it is also within range of another device like a uniquely paired mobile phone. This can significantly increase the security of the smart card. Governments and regional authorities save money because of improved security, better data and reduced processing costs. These savings help reduce public budgets or enhance public services. There are many examples in the UK, many using a common open LASSeO specification. Individuals gain better security and more convenience from smart cards that perform multiple services. For example, they only need to replace one card if their wallet is lost or stolen. The data storage on a card can reduce duplication, and even provide emergency medical information. The first main advantage of smart cards is their flexibility. Smart cards have multiple functions and can simultaneously be an ID, a credit card, a stored-value cash card, and a repository of personal information such as telephone numbers or medical history. The card can be easily replaced if lost, and the requirement for a PIN (or other form of security) provides additional security against unauthorised access to information by others.
At the first attempt to use it illegally, the card would be deactivated by the card reader itself. The second main advantage is security. Smart cards can be electronic key rings, giving the bearer the ability to access information and physical places without need for online connections. They are encryption devices, so that the user can encrypt and decrypt information without relying on unknown, and therefore potentially untrustworthy, appliances such as ATMs. Smart cards are very flexible in providing authentication at different levels for the bearer and the counterpart. Finally, with the information about the user that smart cards can provide to the other parties, they are useful devices for customizing products and services. Other general benefits of smart cards are: Smart cards can be used in electronic commerce, over the Internet, though the business model used in current electronic commerce applications still cannot use the full feature set of the electronic medium. An advantage of smart cards for electronic commerce is their use in customizing services. For example, for the service supplier to deliver a customized service, the user may need to provide each supplier with their profile, a tedious and time-consuming activity. A smart card can contain a non-encrypted profile of the bearer, so that the user can get customized services even without previous contact with the supplier. The plastic or paper card in which the chip is embedded is fairly flexible. The larger the chip, the higher the probability that normal use could damage it. Cards are often carried in wallets or pockets, a harsh environment for a chip and antenna in contactless cards. PVC cards can crack or break if bent or flexed excessively.
However, for large banking systems, failure-management costs can be more than offset by fraud reduction.[citation needed] The production, use and disposal of PVC plastic is known to be more harmful to the environment than that of other plastics.[128] Alternative materials, including chlorine-free plastics and paper, are available for some smart applications. If the account holder's computer hosts malware, the smart card security model may be broken. Malware can override the communication (both input via keyboard and output via application screen) between the user and the application. Man-in-the-browser malware (e.g., the Trojan Silentbanker) could modify a transaction, unnoticed by the user. Banks like Fortis and Belfius in Belgium and Rabobank ("random reader") in the Netherlands combine a smart card with an unconnected card reader to avoid this problem. The customer enters a challenge received from the bank's website, a PIN and the transaction amount into the reader. The reader returns an 8-digit signature. This signature is manually entered into the personal computer and verified by the bank, preventing point-of-sale malware from changing the transaction amount. Smart cards have also been the targets of security attacks. These attacks range from physical invasion of the card's electronics to non-invasive attacks that exploit weaknesses in the card's software or hardware. The usual goal is to expose private encryption keys and then read and manipulate secure data such as funds. Once an attacker develops a non-invasive attack for a particular smart card model, he or she is typically able to perform the attack on other cards of that model in seconds, often using equipment that can be disguised as a normal smart card reader.[129] While manufacturers may develop new card models with additional information security, it may be costly or inconvenient for users to upgrade vulnerable systems. Tamper-evident and audit features in a smart card system help manage the risks of compromised cards.
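The unconnected-reader scheme described above amounts to a keyed MAC truncated to eight digits: because the transaction amount is part of the MAC input, malware that silently alters the amount produces a code the bank will reject. The actual algorithms in these readers are proprietary (typically EMV-CAP-based, with the PIN verified by the card itself); the HMAC construction below is only a stand-in to illustrate the principle.

```python
import hmac
import hashlib

def reader_response(card_key: bytes, challenge: str, amount: str) -> str:
    """Simplified stand-in for an unconnected card reader: MAC the bank's
    challenge and the transaction amount with a key held on the card, then
    truncate to an 8-digit code the customer types back into the bank's
    website. Real readers use proprietary EMV-based schemes and verify the
    PIN on the card first; HMAC-SHA256 here only shows the principle."""
    msg = f"{challenge}|{amount}".encode()
    mac = hmac.new(card_key, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 10**8:08d}"
```

The bank, which knows the card's key, recomputes the same code from the challenge it issued and the amount it was asked to transfer; any mismatch between what the customer approved and what the malware submitted breaks the verification.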
Another problem is the lack of standards for functionality and security. To address this problem, the Berlin Group launched the ERIDANE Project to propose "a new functional and security framework for smart-card based Point of Interaction (POI) equipment".[130]
https://en.wikipedia.org/wiki/Smartcard
Ambient IoT, from ambient and Internet of things, is a concept originally coined by 3GPP[1] that is used in the technology industry to refer to an ecosystem of a large number of objects in which every item is connected into a wireless sensor network using low-cost self-powered sensor nodes.[2][3][4][5] Bluetooth SIG has assessed the total addressable market of Ambient IoT to be more than 10 trillion devices across different verticals.[6] The applications of Ambient IoT include making supply chains for food and medicine more efficient and sustainable, protecting against counterfeiting, and delivering the data required for advanced transportation and smart city initiatives.[2][7] Ambient IoT has been called "the original vision for the IoT" by U.S. Department of Commerce IoT Advisory Board chair Benson Chan.[2] Standards for Ambient IoT are being considered by 3GPP,[8] IEEE and Bluetooth SIG.[4]
https://en.wikipedia.org/wiki/Ambient_IoT
The Artificial Intelligence of Things (AIoT) is the combination of artificial intelligence (AI) technologies with the Internet of things (IoT) infrastructure to achieve more efficient IoT operations, improve human-machine interactions and enhance data management and analytics.[1][2][3] In 2018, KPMG published a foresight study on the future of AI including scenarios until 2040.[4] The analysts describe in detail a scenario in which a community of things would see each device also contain its own AI that could link autonomously to other AIs to, together, perform tasks intelligently. Value creation would be controlled and executed in real time using swarm intelligence. Many industries could be transformed by the application of swarm intelligence, including automotive, cloud, medical, military, research, and technology. An important facet of the AIoT is AI being done on some thing. In its purest form this involves performing the AI on the device itself, i.e. at the edge (edge computing), with no need for external connections. AIoT does not require an Internet connection; it is an evolution of the concept of the IoT, and that is where the comparison ends. The combined power of AI and IoT promises to unlock unrealized customer value in a broad swath of industry verticals such as edge analytics, autonomous vehicles, personalized fitness, remote healthcare, precision agriculture, smart retail, predictive maintenance, and industrial automation.[5] As defined by the 21st Century Cures Act in 2016, a medical device is a device that performs a function in healthcare with the intention of using it "in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or intended to affect the structure or any function of the body of man or other animals".[6] Under the Federal Food, Drug, and Cosmetic Act, all AI systems falling within this definition are regulated by the FDA.
Medical devices are classified by the FDA into three classes based on their uses and risks: the higher the risk, the stricter the control. Class I includes devices with the smallest risk and Class III the greatest.[6] The number of approved medical devices that utilize artificial intelligence or machine learning (AI/ML) has been increasing steadily. By 2020, the United States Food and Drug Administration (FDA) had approved a large number of medical devices that utilized AI/ML. A year later, the FDA released a regulatory framework for machines that use AI/ML software, in addition to the EU medical device regulation, which replaced the EU Medical Device Directive.[7] As technology continues to improve, it has rapidly changed the medical field's methods of working and diagnosing. Various AI applications can improve productivity and reduce medical errors, for example in diagnosis and treatment selection, risk prediction and disease stratification.[8] AI also helps patients by providing access to their data through electronic health records and mobile apps, and by providing devices and sensors to specific patients who need such technologies. The need to protect patients' data is extreme: protecting patient data in electronic records becomes increasingly difficult as the data becomes integrated into clinical care. Patients' data may be easy for the patient to access, but this convenience also raises skepticism about data protection. Technology and AI have combined to provide opportunities for better management of healthcare information and technology integration in the medical industry. AI is implemented to recognize abnormalities and suspicious third-party access to sensitive data.
On the other hand, it will be necessary to rethink confidentiality and other core medical ethics principles in order to implement deep learning systems, since we cannot rely solely on technology.[6] When AI is integrated into cloud engineering, it can help multiple professional fields maximize data collection, and it can improve performance and efficiency through digital management. Cloud engineering applies engineering methods to cloud computing and focuses on technological cloud services.[9] In conceiving, developing, operating, and maintaining cloud computing systems, it adopts a systematic approach to commercialization, standardization, and governance. Among its diverse aspects are contributions from development engineering, software engineering, web development, performance engineering, security engineering, platform engineering, risk engineering, and quality engineering.[10] AI can be implemented in an information technology framework to establish smooth workloads and automate repetitive processes.[11] Using these tools, organizations can better manage data as they accumulate greater amounts of collective data, integrating data recognition, classification, and management processes as time progresses. AI can bring efficiency to organizations, enabling strategic methods and saving time on repetitive tasks and analysis.
https://en.wikipedia.org/wiki/Artificial_intelligence_of_things
Automotive security refers to the branch of computer security focused on the cyber risks related to the automotive context. The increasingly high number of ECUs in vehicles and, alongside, the implementation of multiple different means of communication from and towards the vehicle in a remote and wireless manner led to the necessity of a branch of cybersecurity dedicated to the threats associated with vehicles. Not to be confused with automotive safety. The implementation of multiple ECUs (electronic control units) inside vehicles began in the early 1970s thanks to the development of integrated circuits and microprocessors that made it economically feasible to produce ECUs on a large scale.[1] Since then the number of ECUs has increased to up to 100 per vehicle. These units nowadays control almost everything in the vehicle, from simple tasks such as activating the wipers to more safety-related ones like brake-by-wire or ABS (anti-lock braking system). Autonomous driving is also strongly reliant on the implementation of new, complex ECUs such as ADAS, alongside sensors (lidars and radars) and their control units. Inside the vehicle, the ECUs are connected with each other through cabled or wireless communication networks, such as CAN bus (controller area network), MOST bus (Media Oriented Systems Transport), FlexRay (an automotive network communications protocol) or RF (radio frequency), as in many implementations of TPMSs (tire-pressure monitoring systems). Many of these ECUs require data received through these networks, arriving from various sensors, in order to operate, and use such data to modify the behavior of the vehicle (e.g., the cruise control modifies the vehicle's speed depending on signals arriving from a button usually located on the steering wheel).
Since the development of cheap wireless communication technologies such as Bluetooth, LTE, Wi-Fi, RFID and similar, automotive producers and OEMs have designed ECUs that implement such technologies with the goal of improving the experience of the driver and passengers. Examples include safety-related systems such as OnStar[2] from General Motors, telematic units, communication between smartphones and the vehicle's speakers through Bluetooth, Android Auto[3] and Apple CarPlay.[4] Threat models of the automotive world are based on both real-world and theoretically possible attacks. Most real-world attacks aim at the safety of the people in and around the car, by modifying the cyber-physical capabilities of the vehicle (e.g., steering, braking, accelerating without requiring actions from the driver[5][6]), while theoretical attacks have been supposed to focus also on privacy-related goals, such as obtaining GPS data on the vehicle, or capturing microphone signals and similar.[7][8] Regarding the attack surfaces of the vehicle, they are usually divided into long-range, short-range, and local attack surfaces:[9] LTE and DSRC can be considered long-range ones, while Bluetooth and Wi-Fi are usually considered short-range although still wireless. Finally, USB, OBD-II and all the attack surfaces that require physical access to the car are defined as local. An attacker that is able to implement the attack through a long-range surface is considered stronger and more dangerous than one that requires physical access to the vehicle. In 2015 the possibility of attacks on vehicles already on the market was proven by Miller and Valasek, who managed to disrupt the driving of a Jeep Cherokee while remotely connecting to it through remote wireless communication.[10][11] The most common network used in vehicles, and the one that is mainly used for safety-related communication, is CAN, due to its real-time properties, simplicity, and cheapness.
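CAN's priority mechanism can be simulated in a few lines: in each bit slot the bus carries the logical AND of all transmitted bits (0 is "dominant"), and a node that wrote a recessive 1 but reads back a dominant 0 drops out, so the frame with the numerically lowest identifier always wins arbitration. A sketch under the assumption of distinct 11-bit identifiers:

```python
def arbitrate(frame_ids, width=11):
    """Simulate CAN bus arbitration among contending frames. In every bit
    slot each remaining node transmits one identifier bit, MSB first; the
    bus level is the AND of all transmitted bits (0 = dominant). Nodes
    that sent a recessive 1 but see a dominant 0 back off. The survivor is
    always the frame with the lowest identifier -- which is why a node
    flooding the bus with ID 0 can starve every other ECU."""
    contenders = list(frame_ids)
    for bit in range(width - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)   # AND of all bits
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]
```

This losslessness (no frame is corrupted; the loser simply retries later) is what makes CAN cheap and real-time, but it also means priority is taken on trust, with no authentication of who is allowed to send a given identifier.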
For this reason the majority of real-world attacks have been implemented against ECUs connected through this type of network.[5][6][10][11] The majority of attacks demonstrated either against actual vehicles or in testbeds fall into one or more of the following categories:

Sniffing in the computer security field generally refers to the possibility of intercepting and logging packets, or more generally data, from a network. In the case of CAN, since it is a bus network, every node listens to all communication on the network. It is useful for the attacker to read data to learn the behavior of the other nodes of the network before implementing the actual attack. Usually, the final goal of the attacker is not simply to sniff the data on CAN, since the packets passing on this type of network are not usually valuable just to read.[9]

Denial of service (DoS) in information security is usually described as an attack that has the objective of making a machine or a network unavailable. DoS attacks against ECUs connected to CAN buses can be carried out both against the network, by abusing the arbitration protocol used by CAN in order to always win the arbitration, and against a single ECU, by abusing the error handling protocol of CAN.[12] In this second case the attacker flags the messages of the victim as faulty to convince the victim that it is broken, making it shut itself off from the network.[12]

Spoofing attacks comprise all cases in which an attacker, by falsifying data, sends messages pretending to be another node of the network. In automotive security, spoofing attacks are usually divided into masquerade and replay attacks. Replay attacks are defined as all those where the attacker pretends to be the victim and sends sniffed data that the victim sent in a previous iteration of authentication.
Masquerade attacks are, on the contrary, spoofing attacks where the data payload has been created by the attacker.[13]

Security researchers Charlie Miller and Chris Valasek have successfully demonstrated remote access to a wide variety of vehicle controls using a Jeep Cherokee as the target. They were able to control the radio, environmental controls, windshield wipers, and certain engine and brake functions.[11] The method used to hack the system was the implementation of a pre-programmed chip into the controller area network (CAN) bus. By inserting this chip into the CAN bus, they were able to send arbitrary messages to the bus. Miller has also pointed out the danger of the CAN bus: because it broadcasts the signal, a message can be caught by hackers anywhere on the network. The control of the vehicle was all done remotely, manipulating the system without any physical interaction. Miller stated that he could control any of some 1.4 million vehicles in the United States regardless of location or distance; the only thing needed was for someone to turn on the vehicle for him to gain access.[14]

The work by Miller and Valasek replicated earlier work completed and published by academics in 2010 and 2011 on a different vehicle.[15] The earlier work demonstrated the ability to compromise a vehicle remotely, over multiple wireless channels (including cellular), and the ability to remotely control critical components on the vehicle post-compromise, including the telematics unit and the car's brakes. While the earlier academic work was publicly visible, both in peer-reviewed scholarly publications[16][17] and in the press,[18] the Miller and Valasek work received even greater public visibility.

The increasing complexity of devices and networks in the automotive context requires the application of security measures to limit the capabilities of a potential attacker. Since the early 2000s many different countermeasures have been proposed and, in some cases, applied.
The following is a list of the most common security measures:[9]

In June 2020, the United Nations Economic Commission for Europe (UNECE) World Forum for Harmonization of Vehicle Regulations released two new regulations, R155 and R156, establishing "clear performance and audit requirements for car manufacturers" in terms of automotive cybersecurity and software updates.[22]
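The arbitration-based DoS mentioned among the attack categories above can be illustrated with a toy model (not a real CAN stack). During CAN arbitration, each node transmits its frame identifier bit by bit; a dominant bit (0) overrides a recessive bit (1), so the frame with the numerically lowest identifier wins the bus. The ECU identifiers used here are hypothetical:

```python
# Toy model of CAN bus arbitration: the lowest identifier wins. An attacker
# continuously injecting frames with identifier 0x000 therefore wins every
# arbitration round and starves all legitimate ECUs (a denial of service).

def arbitrate(frame_ids):
    """Return the identifier that wins CAN arbitration (lowest ID wins)."""
    return min(frame_ids)

# Legitimate traffic: a hypothetical brake ECU (0x100) and wiper ECU (0x400).
assert arbitrate([0x100, 0x400]) == 0x100  # brake frame has priority

# Attacker injects the highest-priority identifier possible.
ATTACKER_ID = 0x000
for legit_id in [0x100, 0x400, 0x7FF]:
    assert arbitrate([legit_id, ATTACKER_ID]) == ATTACKER_ID
print("attacker wins every arbitration round")
```

This sketch only models the priority rule; a real attack additionally has to keep the bus saturated so that legitimate frames never find an idle bus to transmit on.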
https://en.wikipedia.org/wiki/Automotive_security
Cloud manufacturing (CMfg) is a new manufacturing paradigm developed from existing advanced manufacturing models (e.g., ASP, AM, NM, MGrid) and enterprise information technologies under the support of cloud computing, the Internet of Things (IoT), virtualization and service-oriented technologies, and advanced computing technologies. It transforms manufacturing resources and manufacturing capabilities into manufacturing services, which can be managed and operated in an intelligent and unified way to enable the full sharing and circulation of manufacturing resources and manufacturing capabilities. CMfg can provide safe and reliable, high-quality, cheap and on-demand manufacturing services for the whole lifecycle of manufacturing. The concept of manufacturing here refers to big manufacturing, which includes the whole lifecycle of a product (e.g. design, simulation, production, test, maintenance).

The concept of cloud manufacturing was initially proposed by the research group led by Prof. Bo Hu Li and Prof. Lin Zhang in China in 2010.[1][2][3] Related discussions and research were conducted thereafter,[4] and some definitions similar to cloud manufacturing (e.g. Cloud-Based Design and Manufacturing (CBDM)[5]) were introduced.

Cloud manufacturing is a type of parallel, networked, and distributed system consisting of an integrated and interconnected virtualized service pool (manufacturing cloud) of manufacturing resources and capabilities, as well as capabilities of intelligent management and on-demand use of services, to provide solutions for all kinds of users involved in the whole lifecycle of manufacturing.[6][7][8][9] Cloud manufacturing can be divided into two categories.[10][11]

In a CMfg system, various manufacturing resources and abilities can be intelligently sensed and connected into the wider Internet, and automatically managed and controlled using IoT technologies (e.g., RFID, wired and wireless sensor networks, embedded systems).
Then the manufacturing resources and abilities are virtualized and encapsulated into different manufacturing cloud services (MCSs), which can be accessed, invoked, and deployed based on knowledge by using virtualization technologies, service-oriented technologies, and cloud computing technologies. The MCSs are classified and aggregated according to specific rules and algorithms, and different kinds of manufacturing clouds are constructed. Different users can search for and invoke the qualified MCSs from the related manufacturing cloud according to their needs, and assemble them into a virtual manufacturing environment or solution to complete their manufacturing task involved in the whole lifecycle of manufacturing processes, under the support of cloud computing, service-oriented technologies, and advanced computing technologies.[2]

Four types of cloud deployment modes (public, private, community and hybrid clouds) are ubiquitous as a single point of access.[2][10][12] From the resource's perspective, each kind of manufacturing capability requires support from the related manufacturing resource. For each type of manufacturing capability, its related manufacturing resource comes in two forms: soft resources and hard resources.[13]
https://en.wikipedia.org/wiki/Cloud_manufacturing
The Data Distribution Service (DDS) for real-time systems is an Object Management Group (OMG) machine-to-machine (sometimes called middleware or connectivity framework) standard that aims to enable dependable, high-performance, interoperable, real-time, scalable data exchanges using a publish–subscribe pattern. DDS addresses the real-time data exchange needs of applications within aerospace, defense, air-traffic control, autonomous vehicles, medical devices, robotics, power generation, simulation and testing, smart grid management, transportation systems, and other applications.

DDS is a networking middleware that simplifies complex network programming. It implements a publish–subscribe pattern for sending and receiving data, events, and commands among the nodes. Nodes that produce information (publishers) create "topics" (e.g., temperature, location, pressure) and publish "samples". DDS delivers the samples to subscribers that declare an interest in that topic. DDS handles the transfer chores: message addressing, data marshalling and de-marshalling (so subscribers can be on different platforms from the publisher), delivery, flow control, retries, etc. Any node can be a publisher, subscriber, or both simultaneously. The DDS publish-subscribe model virtually eliminates complex network programming for distributed applications.[citation needed]

DDS supports mechanisms that go beyond the basic publish-subscribe model.[citation needed] The key benefit is that applications that use DDS for their communications are decoupled. Little design time needs to be spent on handling their mutual interactions. In particular, the applications never need information about the other participating applications, including their existence or locations. DDS transparently handles message delivery without requiring intervention from the user applications, including: DDS allows the user to specify quality of service (QoS) parameters to configure discovery and behavior mechanisms up-front.
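The topic-based publish-subscribe decoupling described above can be sketched in plain Python. This is a conceptual toy, not any real DDS API: publishers and subscribers know only a topic name, never each other:

```python
from collections import defaultdict

class Bus:
    """Minimal in-process topic-based publish-subscribe broker (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A subscriber declares interest in a topic by name; it never needs
        # to know which node publishes it or where that node runs.
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # A publisher writes a "sample"; the broker delivers it to every
        # subscriber of that topic (delivery chores hidden from both sides).
        for callback in self._subscribers[topic]:
            callback(sample)

bus = Bus()
received = []
bus.subscribe("temperature", received.append)
bus.publish("temperature", 21.5)
bus.publish("pressure", 1013)  # no subscriber for this topic: dropped
print(received)  # [21.5]
```

A real DDS implementation layers typed topics, discovery, transport, and the QoS parameters mentioned above on top of this basic decoupling.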
By exchanging messages anonymously, DDS simplifies distributed applications and encourages modular, well-structured programs.[citation needed] DDS also automatically handles hot-swapping redundant publishers if the primary fails.[citation needed] Subscribers always get the sample with the highest priority whose data is still valid (that is, whose publisher-specified validity period has not expired).[citation needed] It automatically switches back to the primary when it recovers, too.[citation needed]

Both proprietary and open-source software implementations of DDS are available. These include application programming interfaces (APIs) and libraries of implementations in Ada, C, C++, C#, Java, Python, Scala, Lua, Pharo, Ruby, and Rust.

DDS vendors participated in interoperability demonstrations at the OMG Spring technical meetings from 2009 to 2013.[1][2][3][4][5][6] During the demos, each vendor published and subscribed to each other's topics using a test suite called the shapes demo. For example, one vendor publishes information about a shape and the other vendors can subscribe to the topic and display the results on their own shapes displays. Each vendor takes turns publishing the information while the others subscribe. Two things made the demos possible: the DDS-I or Real-Time Publish-Subscribe (RTPS) protocol,[7] and the agreement to use a common model.

In March 2009, three vendors demonstrated interoperability between the individual, independent products that implemented the OMG Real-time Publish-Subscribe protocol version 2.1 from January 2009. The demonstration included the discovery of each other's publishers and subscribers on different OS platforms (Microsoft Windows and Linux) and supported multicast and unicast network communications.[1] The DDS interoperability demonstration used scenarios such as:

Development of the DDS specification started in 2001.
In 2004, the Object Management Group (OMG) published DDS version 1.0.[8] Version 1.1 was published in December 2005,[9] 1.2 in January 2007,[10] and 1.4 in April 2015.[11] DDS is covered by several US patents,[12][13][14][15] among others. The DDS specification describes two levels of interfaces:

Other related standards followed the initial core document. The Real-time Publish-Subscribe Wire Protocol DDS Interoperability Wire Protocol Specification ensured that information published on a topic using one vendor's DDS implementation is consumable by one or more subscribers using the same or different vendors' DDS implementations. Although the specification is targeted at the DDS community, its use is not limited to it. Version 2.0 was published in April 2008, version 2.1 in November 2010, 2.2 in September 2014, and 2.3 in May 2019.[7]

DDS for Lightweight CCM (dds4ccm) offers an architectural pattern that separates the business logic from the non-functional properties. A 2012 extension added support for streams.[16] A Java 5 Language PSM for DDS defined a Java 5 language binding, referred to as a Platform Specific Model (PSM) for DDS. It specified only the Data-Centric Publish-Subscribe (DCPS) portion of the DDS specification; additionally, it encompasses the DDS APIs introduced by DDS-XTypes and DDS-CCM. DDS-PSM-Cxx defines the ISO/IEC C++[17] PSM language binding, referred to as a Platform Specific Model (PSM) for DDS. It provides a new C++ API for programming DDS that is more natural to a C++ programmer.[18] The specification provides mappings for the application programming interface (API) specified in DDS-XTypes, and for accessing the quality of service (QoS) profiles specified in DDS-CCM.

Extensible and Dynamic Topic Types for DDS (DDS-XTypes) provided support for data-centric publish-subscribe communication where topics are defined with specific data structures. To be extensible, DDS topics use data types defined before compile time and used throughout the DDS global data space.
This model is desirable when static type checking is useful.[19] A Unified Modeling Language (UML) profile specified DDS domains and topics to be part of analysis and design modeling.[20] This specification also defined how to publish and subscribe to objects without first describing the types in another language, such as XML or OMG IDL.[21] An interface definition language (IDL) was specified in 2014 independently from the Common Object Request Broker Architecture (CORBA) specification chapter 3. This IDL 3.5 was compatible with the CORBA 3 specification, but was extracted as its own specification, allowing it to evolve independently from CORBA.[22]

Other protocols worth mentioning are DDS-XRCE and DDS-RPC. DDS-XRCE (DDS for eXtremely Resource Constrained Environments) allows communication between devices of limited resources, such as microcontrollers, and a DDS network; it makes publishing and subscribing to topics possible via an intermediate service in a DDS domain.[23] DDS-RPC (RPC over DDS) defines remote procedure calls, which provide bidirectional request/reply communication and describe distributed services detailed via a service interface; it supports both synchronous and asynchronous method invocation.[24]

Starting with DDS version 1.4 in 2015, the optional DLRL layer was moved to a separate specification.[25]
https://en.wikipedia.org/wiki/Data_Distribution_Service
A digital object memory (DOMe) is a digital storage space intended to permanently keep all related information about a concrete physical object instance that is collected during the lifespan of this object,[1] and thus forms a basic building block for the Internet of Things (IoT)[2] by connecting digital information with physical objects.[3]

Such memories require each object instance to be uniquely identified and this ID to be attached to the physical object. The underlying techniques to create identification codes and to attach them to objects are manifold, but machine-readable techniques are mandatory. Commonly used are barcodes with one or two dimensions (e.g. QR code or DataMatrix) and radio-based tags like RFID or NFC. Such codes or tags are a low-cost solution but demand an underlying server infrastructure to host the memory data.

In contrast to the mentioned memories, which provide only a passive storage space, the more sophisticated active digital object memories (ADOMe) are based on embedded systems in terms of cyber-physical systems (CPS) and provide additional capabilities on both the hardware and the software side. Such active memories allow for "on-object" processing of object-related tasks, such as condition monitoring, compilation of associated data, and memory clean-up. In addition to strictly passive memories (storage space located on the web as mentioned above) and active memories (with "on-object" storage), hybrid forms are also available that perform simple tasks "on-product", outsource more complex tasks to server-based infrastructures, and keep both representations in sync.

Digital product memories (DPM) are a subclass of digital object memories, which include memories for all artifacts that were intentionally created, such as containers and pieces of art, or for valuable and rare natural objects, such as a marble plate or a lump of gold.
Such objects don't have all the attributes of industrial products, but nevertheless a digital black box attached to them for lifelogging can make sense for specific applications.[4]

Semantic product memories (SemProM) go beyond that, since they provide a machine-understandable description of the meaning of their contents based on semantic web technologies. If a product memory has no explicit semantic markup, only proprietary software can exploit the information stored in the memory. In contrast, semantic product memories can be interpreted by any software that has access to the semantic description of the epistemological primitives and the ontologies used for capturing memory contents.

In the context of the Object Memory Modeling Incubator Group,[5] part of the W3C Incubator Activity, an object memory format was created which allows for modeling of events or other information about individual physical artifacts (ideally over their lifetime) and thus implements an object memory model (OMM). The model consists of a block-based approach that partitions the entire memory into groups, each with associated object-related information. Each block consists of the data itself (the so-called payload) and a set of metadata attributes describing the block content.

Funded by the German Ministry of Education and Research, the project SemProM (Semantic Product Memory[6]) employs smart labels in order to give products a digital memory and thus support intelligent applications along the product's lifecycle. Through the use of integrated sensors, relations in the production process become transparent, and supply chains as well as environmental influences become retraceable. The producer gets supported and the consumer is better informed about the product.
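The block-based memory structure described above (payload plus metadata attributes per block) can be sketched as follows. This is a hypothetical illustration of the idea, not the actual W3C OMM format; the identifier and metadata keys are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One memory block: a payload plus metadata attributes describing it."""
    payload: bytes
    metadata: dict  # e.g. creator, creation time, subject of the block

@dataclass
class ObjectMemory:
    """A block-partitioned memory attached to one uniquely identified object."""
    object_id: str  # e.g. the ID encoded in the object's RFID tag or barcode
    blocks: list = field(default_factory=list)

    def add_block(self, payload, **metadata):
        self.blocks.append(Block(payload, metadata))

    def blocks_about(self, subject):
        # Metadata lets software select relevant blocks without parsing payloads.
        return [b for b in self.blocks if b.metadata.get("subject") == subject]

memory = ObjectMemory("urn:example:pallet-4711")  # hypothetical object ID
memory.add_block(b"temp=4C", subject="transport", creator="sensor-17")
memory.add_block(b"owner=ACME", subject="ownership", creator="erp-system")
print(len(memory.blocks_about("transport")))  # 1
```

A semantic product memory would additionally describe each payload with ontology terms so that arbitrary software, not just the creator's, can interpret it.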
Funded by the German Ministry of Education and Research, the project RES-COM (Resource Conservation by Context-Activated Machine-to-Machine Communication[7]) focuses on the development of technologies (comprising interfaces, protocols and data models) for proactive resource conservation based on M2M communication. Through a defined interaction with active digital object memories, the project tries to leverage the integration of distributed and active components into existing centralized structures in the field of industry and manufacturing.

The Aletheia project[8] is a leading innovation project, sponsored by the German Ministry of Education and Research, that aims at obtaining comprehensive access to product information through the use of semantic technologies. The project follows an approach that does not only consult structured data from company-owned information sources, such as product databases, to respond to inquiries; it also looks at unstructured data from office documents and web 2.0 sources, such as wikis, blogs, and internet forums, as well as sensor and RFID data.

The ADiWa project (Alliance Digital Product Flow[9]), funded by the German Ministry of Education and Research, makes the huge potential of information from the Internet of Things accessible for business-relevant workflows that can be strategically planned and manipulated. For the data-level connection of objects from the real world, results from available solutions and from the SemProM project shall be used. ADiWa focuses on business processes, which can be controlled and manipulated based on evaluated information from the real world.

The SmartProducts project,[10] funded by the European Union in the 7th Research Framework Programme (FP7), develops the scientific and technological basis for building "smart products" with embedded proactive knowledge. Smart products help customers, designers and workers to deal with the ever-increasing complexity and variety of modern products.
Such smart products leverage proactive knowledge to communicate and cooperate with humans, other products and the environment. The project thereby also focuses on small devices with limited storage capabilities and thus requires efficient storage mechanisms. Moreover, the project aims to apply the results achieved by the incubator group to optimize the data exchange between different smart products.

Tales of Things and electronic Memory (TOTeM[11]) is a three-year collaborative project between five universities in the United Kingdom. A project aim is to explore the implications of Internet of Things technologies for the design of novel forms of augmented memory systems. While the potential implications of the Internet of Things for supply chain management and energy consumption have been acknowledged and discussed, its application to the engagement with personal and social memories has rarely been mentioned. More and more newly manufactured objects are tagged at production and made traceable. Tales of Things provides a design space for exploring the value of a user-generated Internet of Old Things in which people's memories are linked to objects.

The project Smart Product Networks (SmaProN[12]), funded by the German Ministry of Education and Research, explores the dynamic linking of smart products into product bundles and product hierarchies, based on the developed Tip'n'Tell architecture (to integrate distributed product information) and the product description model SPDO (using semantic web-based representation languages).

The project RFID-Based Automotive Network (RAN[13]), funded by the German Ministry of Education and Research, focuses on the development of an RFID-based hybrid control architecture and valuation methods for value-added chains. Using the example of the automobile industry, they develop a combined data management approach that exchanges product-related information in a decentralized way using RFID tags.
In addition, order-related information is stored in centralized manufacturer databases. Such an architecture allows for the transport of product-related data at the location of the physical object, and of process-related data in real time via backend systems.
https://en.wikipedia.org/wiki/Digital_object_memory
Electric Dreams is a 1984 science fiction romantic comedy film directed by Steve Barron (in his feature film directorial debut) and written by Rusty Lemorande. The film stars Lenny Von Dohlen, Virginia Madsen, Maxwell Caulfield, and the voice of Bud Cort. Electric Dreams was released in the United States by MGM/UA Entertainment Co. on July 20, 1984, and in the United Kingdom by 20th Century Fox on August 17, 1984.[1][3] The film received mixed reviews from critics.

Miles Harding is an architect who envisions a brick shaped like a jigsaw puzzle piece that could enable buildings to withstand earthquakes. Seeking a way to get organized, he buys a personal computer to help him develop his ideas. Although he is initially unsure that he will even be able to correctly operate the computer, he later buys numerous extra gadgets that were not necessary for his work, such as switches to control household appliances like the blender, a speech synthesizer, and a microphone. The computer addresses Miles as "Moles", because Miles had incorrectly typed his name during the initial set-up. When Miles attempts to download the entire database from a mainframe computer at work, his computer begins to overheat. In a state of panic, Miles uses a nearby bottle of champagne to douse the overheating machine, which then becomes sentient. Miles is initially unaware of the computer's newfound sentience, but discovers it one night when it wakes him by mimicking him talking in his sleep.

A love triangle soon develops among Miles, his computer (who later identifies himself as Edgar), and Miles's neighbor, an attractive cellist named Madeline Robistat. Upon hearing her practicing the Minuet in G major, BWV Anh. 114, from the Notebook for Anna Magdalena Bach on her cello through an air vent connecting both apartments, Edgar promptly elaborates a parallel variation of the piece, leading to an improvised duet.
Believing it was Miles who had engaged her in the duet, Madeline begins to fall in love with him, though she has an ongoing relationship with fellow musician Bill. At Miles's request, Edgar composes a piece of music for Madeline. When their mutual love becomes evident, however, Edgar responds with jealousy at not receiving credit for creating these songs, cancelling Miles's credit cards and registering him as an "armed and dangerous" criminal. Upon discovering this humiliation, Miles and Edgar have a confrontation, in which Miles shoves the computer and tries to unplug it, getting an electric shock. The computer then retaliates by harassing him with an improvised maze of remotely controlled household electronics, in the style of Pac-Man. Eventually, Edgar accepts Madeline and Miles's love for each other, and appears to commit suicide by sending a large electric current out through his acoustic coupler modem, around the world, and finally back to himself, just after he and Miles make amends. Later, as Madeline and Miles go on vacation together, Edgar's voice is heard on the radio dedicating a song to "the ones I love", titled "Together in Electric Dreams". The credits are interspersed with scenes of the song being heard all over California, including a radio station trying to shut it off, declaring that they do not know the signal's origin.

Steve Barron had made more than 100 music videos and routinely sent them to his mother for comment. She particularly liked one he did for Haysi Fantayzee. She was doing continuity on Yentl, co-produced by Rusty Lemorande and Larry deWaay, and she showed it to them. Lemorande had finished his own script for Electric Dreams and was looking for a director, so he offered Barron the job.[4] Barron took the script to Virgin Films, and it agreed to finance it within four days.
The film was presold to Metro-Goldwyn-Mayer, which held rights for the U.S., Canada, Japan and South East Asia.[5] Two months after Virgin agreed to make the movie, filming began in San Francisco on October 11, 1983. Virginia Madsen later recalled she "was very spoiled on that movie, because it was such a lovefest that I now believe that every movie should be like that... I had a mad, crazy crush on Lenny Von Dohlen. God, we were so... we were head-over-heels for each other. Nothing happened, and at this point, I admit it: I wanted it to happen... He's still one of my best friends."[6]

Bud Cort provided the voice of the computer. The director did not want Cort to be seen by the other actors during scenes, so Cort had to do his lines in a padded box on a sound stage. He said, "It got a little lonely in there, I must admit. I kept waiting to meet the other actors, but nobody came to say hello." Boy George visited the set and, being a fan of Harold and Maude, got Cort's autograph.[7]

The computer hardware company's name in the film is "Pinecone," a play on Apple Computer. Fans of Electric Dreams have noted the similarities between the film and Spike Jonze's Her. When asked about it, Jonze claimed to have never seen the former film.[8] In 2009, Barron said that Madsen told him she was planning on being involved in a remake. He said, "She didn't ask me to do it, so I guess I blew my chance on the first one! I wouldn't actually do it, but it would have been nice for the ego to be asked."[citation needed]

The soundtrack features music from prominent popular musicians of the time, the film being among the movies of its generation that actively explored the commercial link between a movie and its soundtrack. The soundtrack album Electric Dreams was re-issued on CD in 1998.[9] The movie features music from Giorgio Moroder, Culture Club, Jeff Lynne (Electric Light Orchestra), and Heaven 17.
During filming, Barron said, "The fact that there's so much music has to do with the success of Flashdance. This film isn't Flashdance 2. Flashdance worked because of the dancing. It didn't have a story. Electric Dreams does." The film also features the song "Together in Electric Dreams". Barron has said about its creation:

Giorgio Moroder was hired as composer and played me a demo track he thought would be good for the movie. It was the tune of "Together in Electric Dreams" but with some temporary lyrics sung by someone who sounded like a cheesy version of Neil Diamond. Giorgio was insisting the song could be a hit so I thought I'd suggest someone to sing who would be as far from a cheesy Neil Diamond as one could possibly go. Phil Oakey. We then got Phil in who wrote some new lyrics on the back of a fag packet on the way to the recording studio and did two takes which Giorgio was well pleased with and everybody went home happy.[10]

He has also said about the film's music: "Electric Dreams was definitely an attempt to try and weave the early 1980s music video genre into a movie... [The film] isn't that deep. The closest parallel is probably that it's a Cyrano de Bergerac-like exploration of how words and music can help nurture and grow feelings on the path to love. Oops, that's too deep."[11] In 2015, he said that when he made the film there was a prejudice against video clip directors doing drama, and because Electric Dreams "was a little bit like an extended music video... I didn't help that cause in a lot of ways."[12]

On Rotten Tomatoes the film has an approval rating of 45% based on 20 reviews.[13] The New York Times said that the film failed to "blend and balance its ingredients properly," and that it lost plot elements and taxed credibility.[14] The Los Angeles Times called it "inspired and appealing...
a romantic comedy of genuine sweetness and originality."[15] Film critics Gene Siskel and Roger Ebert each gave the film 3.5 out of 4 stars, with Siskel writing that it showed a new director eager to show off his talents and Ebert writing, "One of the nicest things about the movie is the way it maintains its note of slightly bewildered innocence."[16][17]

Electric Dreams was released on VHS in 1984 and in 1991 in the US by MGM/UA Home Video, which released a LaserDisc in America in 1985. Warner Home Video released a Video CD version for the Singapore market in 2001. The film received a Region 2 DVD release on April 6, 2009, by MGM. UK video label Second Sight released a Blu-ray on August 7, 2017, marking the film's worldwide debut on Blu-ray.[18]

Currently, MGM owns international rights to the film due to the Virgin/M.E.C.G film catalog it gained in the mid-1990s through its acquisition of the pre-1996 PolyGram Filmed Entertainment library. However, MGM no longer owns the film outright (despite having distributed the film in the US initially), as Shout! Studios now owns rights to the film in the US, putting the film up on VOD in April 2022 and marking its digital debut. Additionally, it is one of the few MGM titles from the pre-May 1986 MGM/UA library that are not owned by Warner Bros. Discovery (through Turner Entertainment Co.), due to Virgin Pictures' co-production of the film.
https://en.wikipedia.org/wiki/Electric_Dreams_(film)
A four-dimensional product (4D product) considers a physical product as a life-like entity capable of changing form and physical properties autonomously over time. It is an evolving field of product design practice and research linked to similar concepts at the material scale (programmable matter and four-dimensional printing); however, it typically utilizes sensors and actuators in order to respond to environmental and human conditions, modifying the shape, color, character and other physical properties of the product. In this way 4D products share similarities with responsive architecture, at the more human scale associated with products.

The concept of imbuing products with similar life-like qualities has been an area of increasing research within academia and industry alike. However, researchers have used a variety of different terms to describe this research, for example transformational products,[1] shape-changing,[2] kinetic,[3] or, in a more general sense, smart, connected, robotic or having a level of artificial intelligence.

Within industry, commercial examples of products capable of adaptation have received some attention. In 2005 Adidas released the Adidas 1 shoe, which was capable of adjusting the compression characteristics in the heel with each stride, accommodating the different requirements of the foot during different activities like walking or running. More recently, in 2016, Nike released the HyperAdapt 1.0 shoe, capable of self-lacing as the user puts their foot into it. Additional micro-adjustments were possible using manual controls; however, the designers claim a longer-term vision for such products to come alive and respond in real time to user needs.[4]

In 2008 BMW revealed a concept car called GINA, which featured a fabric body stretched over a movable aluminium wire and carbon fiber frame, capable of flexing in certain areas to reveal details like door openings, or of modifying the aerodynamic properties of the car in real time.
The 2016 incarnation of this concept car, the BMW Vision Next 100, adopted similar capabilities with a more advanced flexible skin capable of expanding as the front wheels turn, reportedly reducing the drag coefficient of the car while cornering.[5] Changes in product form can thus be used to improve product performance. While such a dynamic car body is yet to be seen on the mainstream market, elements of this transformation can be seen in modern Formula One racing cars. These vehicles have movable rear wing flaps to modify drag for overtaking in certain sections of a race (known as the Drag Reduction System, or DRS). Consumer-level cars, like the Audi TT, are also capable of automatically increasing the rear spoiler angle at high speeds to increase traction and safety. This suggests these life-like movements are slowly finding their way into the mainstream.
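The kind of speed-dependent adaptation described for the Audi TT above can be sketched as a simple control mapping. The deployment speeds and angles below are invented for illustration and are not the car's actual parameters:

```python
# Illustrative sketch of a speed-dependent rear-spoiler controller.
# Thresholds and angles are assumptions, not real vehicle parameters.

def spoiler_angle(speed_kmh: float) -> float:
    """Map road speed to a target rear-spoiler angle in degrees."""
    if speed_kmh < 80:
        return 0.0                            # retracted at low speed
    if speed_kmh < 120:
        return 10.0 * (speed_kmh - 80) / 40   # linear ramp between 80-120 km/h
    return 10.0                               # fully deployed at high speed

print(spoiler_angle(60))    # 0.0
print(spoiler_angle(100))   # 5.0
print(spoiler_angle(150))   # 10.0
```

A production controller would add hysteresis so the spoiler does not oscillate near a threshold; this stateless mapping only illustrates the idea of product form responding to operating conditions.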
https://en.wikipedia.org/wiki/Four-dimensional_product
"Fourth Industrial Revolution", "4IR", or "Industry 4.0"[1] is a neologism describing rapid technological advancement in the 21st century.[2] It follows the Third Industrial Revolution (the "Information Age"). The term was popularised in 2016 by Klaus Schwab, the World Economic Forum founder and former executive chairman,[3][4][5][6][7] who asserts that these developments represent a significant shift in industrial capitalism.[8] A part of this phase of industrial change is the joining of technologies like artificial intelligence and gene editing with advanced robotics, which blurs the lines between the physical, digital, and biological worlds.[8][9] Throughout this, fundamental shifts are taking place in how the global production and supply network operates, through ongoing automation of traditional manufacturing and industrial practices using modern smart technology, large-scale machine-to-machine communication (M2M), and the Internet of things (IoT). This integration results in increasing automation, improving communication and self-monitoring, and the use of smart machines that can analyse and diagnose issues without the need for human intervention.[10] It also represents a social, political, and economic shift from the digital age of the late 1990s and early 2000s to an era of embedded connectivity distinguished by the ubiquity of technology in society (i.e.
a metaverse) that changes the ways humans experience and know the world around them.[11] It posits that we have created, and are entering, an augmented social reality compared to just the natural senses and industrial ability of humans alone.[8] The Fourth Industrial Revolution is sometimes expected to mark the beginning of an imagination age, in which creativity and imagination become the primary drivers of economic value.[12] The phrase Fourth Industrial Revolution was first introduced by a team of scientists developing a high-tech strategy for the German government.[13] Klaus Schwab, former executive chairman of the World Economic Forum (WEF), introduced the phrase to a wider audience in a 2015 article published by Foreign Affairs.[14] "Mastering the Fourth Industrial Revolution" was the 2016 theme of the World Economic Forum Annual Meeting in Davos-Klosters, Switzerland.[15] On 10 October 2016, the Forum announced the opening of its Centre for the Fourth Industrial Revolution in San Francisco.[16] This was also the subject and title of Schwab's 2016 book.[17] Schwab includes in this fourth era technologies that combine hardware, software, and biology (cyber-physical systems),[18] and emphasises advances in communication and connectivity. Schwab expects this era to be marked by breakthroughs in emerging technologies in fields such as robotics, artificial intelligence, nanotechnology, quantum computing, biotechnology, the internet of things, the industrial internet of things, decentralised consensus, fifth-generation wireless technologies, 3D printing, and fully autonomous vehicles.[19] In The Great Reset proposal by the WEF, the Fourth Industrial Revolution is included as strategic intelligence in the solution to rebuild the economy sustainably following the COVID-19 pandemic.[20] The First Industrial Revolution was marked by a transition from hand production methods to machines through the use of steam power and water power.
The implementation of new technologies took a long time, so the period this refers to was between 1760 and 1820, or 1840, in Europe and the United States. Its effects had consequences on textile manufacturing, which was first to adopt such changes, as well as the iron industry, agriculture, and mining – although it also had societal effects, with an ever stronger middle class.[21] The Second Industrial Revolution, also known as the Technological Revolution, is the period between 1871 and 1914 that resulted from installations of extensive railroad and telegraph networks, which allowed for faster transfer of people and ideas, as well as electricity. Increasing electrification allowed factories to develop the modern production line.[22] The Third Industrial Revolution, also known as the Digital Revolution, began in the late 20th century. It is characterized by the shift to an economy centered on information technology, marked by the advent of personal computers, the Internet, and the widespread digitalization of communication and industrial processes. A book by Jeremy Rifkin titled The Third Industrial Revolution, published in 2011,[23] focused on the intersection of digital communications technology and renewable energy.
It was made into a 2017 documentary by Vice Media.[24] In essence, the Fourth Industrial Revolution is the trend towards automation and data exchange in manufacturing technologies and processes, which include cyber-physical systems (CPS), the Internet of Things (IoT),[25] cloud computing,[26][27][28][29] cognitive computing, and artificial intelligence.[29][30] Machines improve human efficiency in performing repetitive functions, and the combination of machine learning and computing power allows machines to carry out increasingly complex tasks.[31] The Fourth Industrial Revolution has been defined as technological developments in cyber-physical systems such as high-capacity connectivity; new human-machine interaction modes such as touch interfaces and virtual reality systems; improvements in transferring digital instructions to the physical world, including robotics and 3D printing (additive manufacturing); "big data" and cloud computing; and improvements to and uptake of off-grid / stand-alone renewable energy systems – solar, wind, wave, and hydroelectric power, together with lithium-ion renewable energy storage systems (ESS) and electric vehicles. It also emphasizes decentralized decisions – the ability of cyber-physical systems to make decisions on their own and to perform their tasks as autonomously as possible. Only in the case of exceptions, interference, or conflicting goals are tasks delegated to a higher level.[32][26] Proponents of the Fourth Industrial Revolution suggest it is a distinct revolution rather than simply a prolongation of the Third Industrial Revolution,[14] pointing to several distinguishing characteristics. Critics of the concept dismiss Industry 4.0 as a marketing strategy. They suggest that although revolutionary changes are identifiable in distinct sectors, there is no systemic change so far. In addition, the pace of recognition of Industry 4.0 and policy transition varies across countries, and the definition of Industry 4.0 is not harmonised.
One of the best-known critics is Jeremy Rifkin, who "agree[s] that digitalization is the hallmark and defining technology in what has become known as the Third Industrial Revolution".[33] However, he argues "that the evolution of digitalization has barely begun to run its course and that its new configuration in the form of the Internet of Things represents the next stage of its development".[33] The application of the Fourth Industrial Revolution operates through a number of enabling technologies.[34] Industry 4.0 networks a wide range of new technologies to create value. Using cyber-physical systems that monitor physical processes, a virtual copy of the physical world can be designed. Characteristics of cyber-physical systems include the ability to make decentralised decisions independently, reaching a high degree of autonomy.[34] The value created in Industry 4.0 relies on electronic identification: smart manufacturing requires a set of technologies to be incorporated in the manufacturing process for it to be classified as on the development path of Industry 4.0, rather than mere digitisation.[35] The Fourth Industrial Revolution fosters "smart factories" – production environments in which facilities and logistics systems are organised with minimal human intervention. The technical foundation on which smart factories are based is cyber-physical systems that communicate with each other using the IoT. An important part of this process is the exchange of data between the product and the production line.
This enables more efficient supply chain connectivity and better organisation within a production environment.[citation needed] Within modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world, and make decentralised decisions.[36] Over the internet of things, cyber-physical systems communicate and cooperate with each other and with humans in real time, both internally and across the organizational services offered and used by participants of the value chain.[26][37] Artificial intelligence (AI) has a wide range of applications across all sectors of the economy. It gained prominence following advancements in deep learning during the 2010s, and its impact intensified in the 2020s with the rise of generative AI, a period often referred to as the "AI boom".[38] Models like GPT-4o can engage in verbal and textual discussions and analyze images.[39] AI is a key driver of Industry 4.0, orchestrating technologies like robotics, automated vehicles, and real-time data analytics. By enabling machines to perform complex tasks, AI is redefining production processes and reducing changeover times.[40] AI could also significantly accelerate, or even automate, software development.[41][42] Some experts believe that AI alone could be as transformative as an industrial revolution.[43] Multiple companies such as OpenAI and Meta have expressed the goal of creating artificial general intelligence (AI that can do virtually any cognitive task a human can),[44][45] making large investments in data centers and GPUs to train more capable AI models.[46] Humanoid robots have traditionally lacked usefulness. They had difficulty picking up simple objects due to imprecise control and coordination, and they did not understand their environment or how physics works. They were often explicitly programmed to do narrow tasks, failing when encountering new situations.
Modern humanoid robots, however, are typically based on machine learning, and in particular reinforcement learning. As of 2024, humanoid robots are rapidly becoming more flexible, easier to train, and more versatile.[47] Industry 4.0 facilitates predictive maintenance through the use of advanced technologies, including IoT sensors. Predictive maintenance, which can identify potential maintenance issues in real time, allows machine owners to perform cost-effective maintenance before the machinery fails or gets damaged. For example, a company in Los Angeles could learn that a piece of equipment in Singapore is running at an abnormal speed or temperature, and then decide whether or not it needs to be repaired.[48] The Fourth Industrial Revolution is said to depend extensively on 3D printing technology. Some advantages of 3D printing for industry are that it can print many geometric structures and simplify the product design process. It is also relatively environmentally friendly. In low-volume production, it can decrease lead times and total production costs. Moreover, it can increase flexibility, reduce warehousing costs, and help a company adopt a mass customisation business strategy. In addition, 3D printing can be very useful for printing spare parts and installing them locally, thereby reducing supplier dependence and shortening the supply lead time.[49] Sensors and instrumentation drive the central forces of innovation, not only for Industry 4.0 but also for other "smart" megatrends, such as smart production, smart mobility, smart homes, smart cities, and smart factories.[50] Smart sensors are devices which generate data and allow further functionality, from self-monitoring and self-configuration to condition monitoring of complex processes.
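The condition-monitoring and predictive-maintenance idea described above can be sketched as a simple threshold check over remote sensor readings. The operating limits and the sample reading below are invented for illustration:

```python
# Minimal sketch of condition monitoring for predictive maintenance.
# The operating limits and the sample reading are invented for illustration.

NORMAL_LIMITS = {"temperature_c": (20.0, 80.0), "speed_rpm": (1000.0, 3000.0)}

def flag_anomalies(reading: dict) -> list:
    """Return the names of any sensor channels outside their normal range."""
    anomalies = []
    for channel, value in reading.items():
        low, high = NORMAL_LIMITS[channel]
        if not (low <= value <= high):
            anomalies.append(channel)
    return anomalies

# A reading streamed from equipment at a remote site:
reading = {"temperature_c": 91.5, "speed_rpm": 2400.0}
print(flag_anomalies(reading))  # ['temperature_c'] -> schedule maintenance
```

Real predictive-maintenance systems go beyond fixed thresholds, using statistical or learned models of normal behaviour, but the decision structure (monitor, compare, alert before failure) is the same.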
With the capability of wireless communication, they reduce installation effort to a great extent and help realise a dense array of sensors.[51] The importance of sensors, measurement science, and smart evaluation for Industry 4.0 has been recognised and acknowledged by various experts and has already led to the statement "Industry 4.0: nothing goes without sensor systems."[52] However, there are a few issues, such as time synchronisation error, data loss, and dealing with large amounts of harvested data, which all limit the implementation of full-fledged systems. Moreover, battery power places additional limits on these functionalities. One example of the integration of smart sensors in electronic devices is the case of smart watches, where sensors receive data from the movement of the user, process it, and as a result provide the user with information about how many steps they have walked in a day, also converting the data into calories burned. In agriculture, connected sensors collect, interpret, and communicate the information available in the plots (leaf area, vegetation index, chlorophyll, hygrometry, temperature, water potential, radiation); smart sensors in these fields are still in the testing stage.[53] Based on this scientific data, the objective is to enable real-time monitoring via a smartphone, with a range of advice that optimises plot management in terms of results, time, and costs. On the farm, these sensors can be used to detect crop stages and recommend inputs and treatments at the right time, as well as controlling the level of irrigation.[54] The food industry requires more and more security and transparency, and full documentation is required.
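The smartwatch pipeline mentioned above, raw step counts converted into an energy estimate, can be sketched as follows. The conversion factor is a rough illustrative assumption; real devices calibrate it per user from stride length, weight, and heart rate:

```python
# Illustrative smartwatch-style conversion of step counts to calories burned.
# The 0.04 kcal-per-step factor is an assumption for illustration only;
# real devices calibrate it using stride length, weight, and heart rate.

KCAL_PER_STEP = 0.04

def calories_burned(steps: int) -> float:
    """Estimate energy expenditure in kilocalories from a daily step count."""
    return round(steps * KCAL_PER_STEP, 2)

print(calories_burned(10_000))  # 400.0
```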
This new technology is used as a tracking system, as well as for the collection of human data and product data.[55] A knowledge economy is an economic system in which production and services are largely based on knowledge-intensive activities that contribute to an accelerated pace of technical and scientific advance, as well as rapid obsolescence.[56][57] Industry 4.0 aids the transition into a knowledge economy by increasing reliance on intellectual capabilities rather than on physical inputs or natural resources. Implementation of Industry 4.0 faces a number of challenges.[58][59] Many countries have set up institutional mechanisms to foster the adoption of Industry 4.0 technologies. For example, Australia has a Digital Transformation Agency (est. 2015) and the Prime Minister's Industry 4.0 Taskforce (est. 2016), which promotes collaboration with industry groups in Germany and the USA.[66] The term "Industrie 4.0", shortened to I4.0 or simply I4, originated in 2011 from a project in the high-tech strategy of the German government, and specifically relates to that project's policy of promoting the computerisation of manufacturing,[67] rather than the wider notion of a Fourth Industrial Revolution or 4IR.[8] The term "Industrie 4.0" was publicly introduced in the same year at the Hannover Fair.[68] German professor Wolfgang Wahlster is sometimes called the inventor of the "Industry 4.0" term.[69] In October 2012, the Working Group on Industry 4.0 presented a set of Industry 4.0 implementation recommendations to the German federal government. The workgroup members and partners are recognised as the founding fathers and driving force behind Industry 4.0. On 8 April 2013, at the Hannover Fair, the final report of the Working Group Industry 4.0 was presented. This working group was headed by Siegfried Dais, of Robert Bosch GmbH, and Henning Kagermann, of the German Academy of Science and Engineering.[70] As Industry 4.0 principles have been applied by companies, they have sometimes been rebranded.
For example, the aerospace parts manufacturer Meggitt PLC has branded its own Industry 4.0 research project M4.[71] How the shift to Industry 4.0 – and especially digitisation – will affect the labour market is being discussed in Germany under the topic of Work 4.0.[72][needs update] The federal government in Germany is a leader in the development of I4.0 policy through its ministries, the Federal Ministry of Education and Research (BMBF) and the BMWi. Through the publishing of set objectives and goals for enterprises to achieve, the German federal government attempts to set the direction of the digital transformation. However, there is a gap between German enterprises' collaboration and their knowledge of these set policies.[73] The biggest challenge SMEs in Germany currently face regarding digital transformation of their manufacturing processes is ensuring that there is a concrete IT and application landscape to support further digital transformation efforts.[73] The characteristics of the German government's Industry 4.0 strategy involve the strong customisation of products under the conditions of highly flexible (mass) production.[74] The required automation technology is improved by the introduction of methods of self-optimization, self-configuration,[75] self-diagnosis, and cognition, and by intelligent support of workers in their increasingly complex work.[76] The largest project in Industry 4.0 as of July 2013[update] is the BMBF leading-edge cluster "Intelligent Technical Systems Ostwestfalen-Lippe (it's OWL)".
Another major project is the BMBF project RES-COM,[77] as well as the Cluster of Excellence "Integrative Production Technology for High-Wage Countries".[78] In 2015, the European Commission started the international Horizon 2020 research project CREMA (cloud-based rapid elastic manufacturing) as a major initiative to foster the Industry 4.0 topic.[79] In Estonia, the digital transformation dubbed the 4th Industrial Revolution by Klaus Schwab and the World Economic Forum in 2015 started with the restoration of independence in 1991. Although a latecomer to the information revolution due to 50 years of Soviet occupation, Estonia leapfrogged to the digital era, while skipping the analogue connections almost completely. The early decisions made by Prime Minister Mart Laar on the course of the country's economic development led to the establishment of what is today known as e-Estonia, one of the world's most digitally advanced nations. According to the goals set in Estonia's Digital Agenda 2030,[80] the next advances in the country's digital transformation will involve switching to event-based and proactive services, in both private and business environments, as well as developing a green, AI-powered, and human-centric digital government. Another example is the Indonesian initiative Making Indonesia 4.0, which focuses on improving industrial performance.[66] India, with its expanding economy and extensive manufacturing sector, has embraced the digital revolution, leading to significant advancements in manufacturing. The Indian program for Industry 4.0 centers around leveraging technology to produce globally competitive products at cost-effective rates while adopting the latest technological advancements of Industry 4.0.[81] Society 5.0 envisions a society that prioritizes the well-being of its citizens, striking a harmonious balance between economic progress and the effective addressing of societal challenges through a closely interconnected system of both the digital realm and the physical world.
This concept was introduced in 2019 in the 5th Science and Technology Basic Plan of the Japanese Government as a blueprint for a forthcoming societal framework.[82] Malaysia's national policy on Industry 4.0 is known as Industry4WRD. Launched in 2018, key initiatives in this policy include enhancing digital infrastructure, equipping the workforce with 4IR skills, and fostering innovation and technology adoption across industries.[83] South Africa appointed a Presidential Commission on the Fourth Industrial Revolution in 2019, consisting of about 30 stakeholders with backgrounds in academia, industry, and government.[84][85] South Africa has also established an Inter-Ministerial Committee on Industry 4.0. The Republic of Korea has had a Presidential Committee on the Fourth Industrial Revolution since 2017. The Republic of Korea's I-Korea strategy (2017) focuses on new growth engines that include AI, drones, and autonomous cars, in line with the government's innovation-driven economic policy.[84] Uganda adopted its own National 4IR Strategy in October 2020, with emphasis on e-governance, urban management (smart cities), healthcare, education, agriculture, and the digital economy; to support local businesses, the government was contemplating introducing a local start-ups bill in 2020 which would require all accounting officers to exhaust the local market prior to procuring digital solutions from abroad.[84] In a 2019 policy paper titled "Regulation for the Fourth Industrial Revolution", the UK's Department for Business, Energy & Industrial Strategy outlined the need to evolve current regulatory models to remain competitive in evolving technological and social settings.[9] The Department of Homeland Security in 2019 published a paper called "The Industrial Internet of Things (IIOT): Opportunities, Risks, Mitigation".[86] The base pieces of critical infrastructure are increasingly digitised for greater connectivity and optimisation.
Hence, their implementation, growth, and maintenance must be carefully planned and safeguarded. The paper discusses not only applications of IIOT but also the associated risks, and suggests some key areas where risk mitigation is possible. To increase coordination between the public and private sectors, law enforcement, academia, and other stakeholders, the DHS formed the National Cybersecurity and Communications Integration Center (NCCIC).[86] The aerospace industry has sometimes been characterised as "too low volume for extensive automation". However, Industry 4.0 principles have been investigated by several aerospace companies, and technologies have been developed to improve productivity where the upfront cost of automation cannot be justified. One example of this is the aerospace parts manufacturer Meggitt PLC's M4 project.[71] The increasing use of the industrial internet of things is referred to as Industry 4.0 at Bosch, and generally in Germany. Applications include machines that can predict failures and trigger maintenance processes autonomously, and self-organised coordination that reacts to unexpected changes in production.[87] In 2017, Bosch launched the Connectory, a Chicago, Illinois-based innovation incubator that specializes in IoT, including Industry 4.0. Industry 4.0 inspired Innovation 4.0, a move toward digitisation for academia and research and development.[88] In 2017, the £81M Materials Innovation Factory (MIF) at the University of Liverpool opened as a center for computer-aided materials science,[89] where robotic formulation,[90] data capture, and modelling are being integrated into development practices.[88] With the consistent development of automation of everyday tasks, some saw the benefit in the exact opposite of automation, where self-made products are valued more than those that involved automation.[91] This valuation is named the IKEA effect, a term coined by Michael I. Norton of Harvard Business School, Daniel Mochon of Yale, and Dan Ariely of Duke.
Another problem that is expected to accelerate with the growth of 4IR is the prevalence of mental disorders,[92] a known issue among high-tech operators.[93] 4IR has also sparked significant criticism regarding AI bias and ethical issues, as algorithms used in decision-making processes often perpetuate existing social inequalities, disproportionately impacting marginalized groups while lacking transparency and accountability.[94] Industry 5.0 has been proposed as a strategy to create a paradigm shift towards an industrial landscape in which the primary focus is no longer on increasing efficiency, but rather on promoting the well-being of society and the sustainability of the economy and industrial production.[95][96][97]
https://en.wikipedia.org/wiki/Fourth_Industrial_Revolution
The Internet of Musical Things (also known as IoMusT) is a research area that aims to bring Internet of Things connectivity[1][2][3] to musical and artistic practices. It encompasses concepts coming from music computing, ubiquitous music,[4] human-computer interaction,[5][6] artificial intelligence,[7] augmented reality, virtual reality, gaming, participative art,[8] and new interfaces for musical expression.[9] From a computational perspective, IoMusT refers to local or remote networks embedded with devices capable of generating and/or playing musical content.[10][11] The term "Internet of Things" (IoT) is extensible to any everyday object connected to the internet that has its capabilities increased by exchanging information with other elements present in the network to achieve a common goal.[1][2][3] Thanks to the technological advances of the last decades, its use has spread to several areas, assisting in medical analysis, traffic control, and home security. When its concepts meet music, the Internet of Musical Things (IoMusT) emerges. The term "Internet of Musical Things" also receives numerous classifications, according to how certain authors use it. Hazzard et al.,[12] for example, use it in the context of musical instruments that carry a QR code directing the user to a page with information about the instrument, such as its manufacturing date and history. Keller and Lazzarini[13] use the term in ubiquitous music (ubimus) research, while Turchet et al.[14] define IoMusT as a subfield of the Internet of Things in which interoperable devices can connect to each other, aiding the interaction between musicians and the audience. Like the IoT, the Internet of Musical Things can encompass a variety of ecosystems. Generally, however, it is marked by being employed in musical activities (rehearsals, concerts, recordings, and music teaching) and by relying on service and information providers.
In addition to the technological and artistic advantages that this field offers, new opportunities are still arising for the music industry, providing for the emergence of new services and applications capable of exploiting the interconnection between logical and physical devices, always keeping the artistic purpose in mind.[10] A musical thing is formally defined as a "computational device capable of acquiring, processing, acting, or exchanging data that serves a musical purpose."[10] In short, these objects are entities that can be used for musical practice, can be connected in local and/or remote networks, and act as senders or receivers of messages. They can be, for example, a smart instrument (an instrument that uses sensors, actuators and a wireless connection for audio processing),[15][16] a wearable device, or any other device capable of controlling, generating or executing musical content over the network. Unlike traditional audio devices, such as microphones and speakers, musical things are not useful by themselves, so they must be inserted into a chain of equipment.[17] This raises the need to think about standards, protocols, and means of communication between them; these challenges are analyzed below. The first challenge concerns the hardware used in musical things.[17] First, one should keep in mind that these devices are not analog. Because of this, they can be reprogrammed, and they must have internet connectivity and/or another way to communicate with other equipment. Secondly, they are not traditional general-purpose computing devices such as smartphones and personal computers; they are programmed to perform specific musical tasks. Finally, it is important to note that they will be employed in an artistic and musical context.
In this way, the aesthetic characteristics are as important as the computational ones.[17] That said, the hardware challenges are clear: the processing capacity, storage, and power consumption of musical things must be good enough to withstand artistic performances, while not making these objects expensive, unergonomic, or unwieldy. In addition, they should be able to take on different roles in different scenarios. Thus, they should allow users to add or remove components (such as sensors and actuators), aiming to be adaptable, expressive, and versatile.[18][19] The second challenge deals with the behavior of musical things. They must primarily exchange sound data, but it is desirable that they also exchange control data and processing parameters.[10] In this sense, they must adapt their operating mode in order to cooperate with the other elements present in the network, and also have their software and operating logic updated remotely. The third difficulty is possibly the most sensitive and hardest topic to deal with: one has to think about what data can be shared and how to share it. For audio, one can think of Pulse Code Modulation (PCM) formats like WAV, because they are the most common in real-time audio processing systems; however, latency and quality are not guaranteed. Files in MP3, FLAC or OGG format, on the other hand, require more processing, and the resulting latency can make the environment impractical.[17] Possible solutions to these problems include the use of common IoT elements in music practice, or the assignment of networking capabilities to traditional audio objects. Effects units (such as guitar pedals) should be built so that the user is free to remove or insert buttons and sensors, while in logic units the software is modifiable. Audio equipment should send and receive data over the network, and also be remotely controllable.
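The latency concern raised above for uncompressed PCM audio can be made concrete: the delay contributed by buffering alone is the buffer length divided by the sample rate. A small worked example, using the common 44,100 Hz CD-quality rate and typical buffer sizes:

```python
# Buffering delay for uncompressed PCM audio: latency = frames / sample_rate.
# 44,100 Hz is the common CD-quality rate; the buffer sizes are typical values.

def buffer_latency_ms(frames: int, sample_rate_hz: int = 44_100) -> float:
    """Milliseconds of delay introduced by buffering `frames` PCM frames."""
    return 1000.0 * frames / sample_rate_hz

for frames in (64, 256, 1024):
    print(frames, "frames ->", round(buffer_latency_ms(frames), 2), "ms")
# 64 frames -> 1.45 ms, 256 frames -> 5.8 ms, 1024 frames -> 23.22 ms
```

This is only the buffering term; network transmission, encoding, and scheduling add further delay on top, which is why compressed formats such as MP3 or OGG are harder to use in these real-time settings.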
This can be useful for adapting these elements to the different types of data circulating on the network.[17] Musical instruments, on the other hand, will function similarly to smart musical instruments, equipped with sensors and actuators capable of capturing stimuli from the environment and from the musicians themselves. Musical aids such as metronomes and tuners can be transposed to digital media, while performance aids such as light and smoke cannons can be controlled and synchronized over the network.[17] However, IoMusT is not only about adapting what already exists, but also about creating devices capable of generating new perspectives for musical practices. This section reviews some of the various application domains of an IoMusT environment. The review is not intended to be exhaustive, aiming instead to describe the main features and functionalities of each area. A networked performance is a real-time interaction mediated by machines, which allows artists dispersed across the globe to interact with each other as if they were in the same environment.
While not intended to replace the traditional model, it contributes to music creation and its social interactions, promoting creativity and the exchange of cultures.[20][21] Among its main characteristics are: low latency, so that the sounds produced are heard almost instantaneously; synchronization, to prevent long delays from hindering interaction in the environment; interoperability through standardization, which allows different devices to communicate over the network; scalability, which makes the system comprehensive and allows distributed participation among users; and easy integration and participation, aspects that ensure that users have no difficulty in finding devices on the network and can connect or disconnect whenever they want.[22] As for the challenges in this area, they can be illustrated by the requirement for high bandwidth, ordering in the transmitted stream, and sensitivity to delay in the delivery of data packets.[23] Art has always had its interaction marked by the relationship between the artist and the medium they use to materialize the work, while the audience had only the role of passively observing everything. This began to change when artistic movements led by Allan Kaprow and the Fluxus and Gutai groups began to allow for more active audience participation.[24] In this context, interactive art emerged, characterized by allowing the viewer a degree of active involvement in the show, either by walking among the installations and sculptures, or by producing sounds, images, and movements.[25][26] The architecture of these environments is designed to handle different types of signals, ranging from audio and video to those produced by the human body, such as heartbeats. As such, they also require functionality that ensures interoperability and handles delays in data delivery. Ubiquitous music,[4] usually abbreviated to ubimus, is a research field that combines music, technology, and creative processes with strong social and community engagement.
Although its original proposal is focused on music production, current technological developments have opened new social and cognitive dimensions to this field, leading it to become increasingly interested in educational and artistic topics. Thus, current perspectives encompass a wide diversity of subjects and actors, ranging from casual participants to highly trained musicians.[27] The ubimus ecosystem supports the integration of audio tools and audience interaction, and can be reconfigured to meet the needs of users. Consequently, the desired concepts are not dependent on specific implementations. Other important features are conceptual approaches and reliance on empirical methods. These aspects encourage the development of technologies for music creation, especially those that make use of common objects and spaces in the daily lives of those involved in the process. Web Audio is a JavaScript API for audio processing and synthesizing in web applications, representing a technological evolution in this segment. It presents some features common to DAWs, such as audio signal routing, low latency, and effects application. It also allows participative networked performances and expands the capabilities of using smartphones in these media. Its environment uses audio nodes for manipulating sound in a musical context. They are connected by their inputs and outputs to create paths for routing audio, which is modified by effect nodes along the way. In this way, it is able to support numerous sources with different layouts, as well as being flexible and creating complex functions with dynamic effects.[28] Web Audio paves the way for using web browsers for musical purposes.
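The audio-node routing model just described can be illustrated with a small Python simulation. The classes below are hypothetical stand-ins for Web Audio's GainNode and DelayNode, not the actual JavaScript API; they show how nodes connected output-to-input form a processing path that samples flow through.

```python
# Hypothetical sketch of Web Audio-style node routing (not the browser API):
# nodes connect outputs to inputs, and each node transforms the samples
# flowing through it before passing them downstream.

class GainNode:
    """Scales every sample by a fixed gain, like Web Audio's GainNode."""
    def __init__(self, gain):
        self.gain = gain
        self.dest = None

    def connect(self, node):
        self.dest = node
        return node

    def process(self, samples):
        out = [s * self.gain for s in samples]
        return self.dest.process(out) if self.dest else out


class DelayNode:
    """Prepends silent samples, a crude stand-in for a delay effect."""
    def __init__(self, delay_samples):
        self.delay = delay_samples
        self.dest = None

    def connect(self, node):
        self.dest = node
        return node

    def process(self, samples):
        out = [0.0] * self.delay + samples
        return self.dest.process(out) if self.dest else out


# Build a routing path: source samples -> gain -> delay
gain = GainNode(0.5)
gain.connect(DelayNode(2))
print(gain.process([1.0, -1.0]))  # [0.0, 0.0, 0.5, -0.5]
```

In the real API, `connect()` likewise returns the destination node so that chains can be written fluently, and effect nodes can be inserted or rerouted while audio is playing.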
Among the advantages of running audio in the browser are easy distribution (no installation required) and maintenance, platform and architecture independence, security (the browser can prevent plugins with incorrect behavior from affecting the system), and the emergence of new types of collaboration.[28] Cloud computing, on the other hand, is a structure composed of distributed servers that simulate a centralized network, allowing load balancing and resource replication, minimizing network consumption and improving scalability and reliability. It aims to provide numerous services, ranging from file storage to intercommunication between music applications, offering an unprecedented level of participation and performance.[29] Its main feature is to allow users to access the services without the need for knowledge about the technology used. Thus, they can access them on demand and regardless of location. Other points worth highlighting in this network are: broad access, elasticity, and resource management.[30] Cloud computing infrastructure is mostly composed of numerous physical machines connected together in a network. Each machine has the same software configurations, but can differ in the central processing unit, memory usage, and disk storage capacity. This model was developed with three main objectives in mind: i) reducing the cost of acquiring and composing the elements that form the network infrastructure, allowing it to be heterogeneous and adaptable to the resources required; ii) providing flexibility in adding or replacing computing resources; iii) easing access to the services provided, where users only need an operating system, a browser, and Internet access to reach the resources available in the cloud.[31] Despite all the advantages listed above, the centralized mode of operation of cloud computing creates a heavy service load on the network, particularly in the costs and bandwidth resources required for data transmission.
In addition, network performance worsens as the amount of data increases. To address this problem, edge computing has emerged, a paradigm that combines cloud computing properties with real-time communication. The term "edge" refers to all the computational and network resources between the data sources and the cloud servers. In this way, objects present in the environment not only consume data and services, but also perform computational processing, decreasing stress on the network and significantly reducing latency in message exchange.[32][33] The key attributes of this computing model revolve around geographic distribution, mobility support, location recognition, computing resources and services close to the end user, low latency, context sensitivity, and heterogeneity.[34] Wearable computing is a new approach that has been redefining the way human-machine interaction happens, with electronic devices becoming directly connected to the user's body. They are called wearable devices and are built in such a way that the technologies and structures they contain are imperceptible, acting as an extension of the human being. Among the most popular models today are smartwatches and smartbands.[35] Although they are small, they are capable of continuously detecting, collecting, and uploading numerous physiological and sensory data, which aim to improve typical everyday activities such as making payments, assisting in location tracking, monitoring physical and mental health, providing analysis of physical activity, and aiding in artistic practice.
They must be able to fulfill three main goals: assign mobility to the user, that is, allow them to use the device in various locations; augment reality, such as generating images or sounds that are not part of the real world; and provide context sensitivity, which is the ability of the equipment to adapt to different environments and stimuli.[36] It is important to note that although they have connectivity and handle a large amount of data, not all wearable devices are IoT elements, and consequently, IoMusT elements. To be considered as such, they must have access to the internet. Following a slightly different line of thought, but still using concepts from wearable computing, are e-textiles. These consist of clothing enhanced with sensors and present some advantages over wearable devices, such as more comfort, more natural interfaces for human interaction, and less intrusiveness. From this, electronic devices that are worn next to the human body can be classified according to the location in which they are inserted (wrist, head, feet, and so on) and whether they already exist or are still in the prototyping phase.[37] In addition to facing the problems inherent to the use of the technology[38] and also those present in IoT,[39] the Internet of Music Things faces specific problems, ranging from technological issues to artistic and environmental ones. The main ones are highlighted below. The viability of IoMusT depends on network aspects such as bandwidth, latency, and jitter. Because of this, these networks must expand their operation beyond the current state of the art, in order to provide better connection conditions and deal with the three aspects mentioned, in addition to ensuring synchronization and good quality in the representation of multimodal audio content.
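The latency and jitter requirements above can be made concrete with a short sketch. The timestamps are invented, and the jitter measure here is a simplified mean absolute difference of successive one-way delays, not the smoothed interarrival jitter estimator defined in RFC 3550:

```python
# Illustrative sketch (hypothetical numbers): estimating mean one-way latency
# and jitter from packet send/receive timestamps, the two network metrics the
# text identifies as critical for real-time networked music.

def latency_stats(send_times_ms, recv_times_ms):
    """Return (mean delay, jitter) in milliseconds for matched packet pairs."""
    delays = [r - s for s, r in zip(send_times_ms, recv_times_ms)]
    mean = sum(delays) / len(delays)
    # Simplified jitter: mean absolute difference of consecutive delays
    jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    return mean, jitter


# Four packets sent every 10 ms, arriving with variable one-way delay
sends = [0, 10, 20, 30]
recvs = [12, 25, 31, 44]
mean, jitter = latency_stats(sends, recvs)
print(round(mean, 1), round(jitter, 2))  # 13.0 3.33
```

Even a small jitter value matters musically: audio buffers must be sized to absorb it, and larger buffers directly add to the perceived latency between players.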
With regard to latency, reliability, and synchronization, these emerge as among the main demands in the real-time transmission of audio over a network, whether local or remote, wired or wireless. This is because of the random character of this type of communication, which can cause losses in the transmitted data and desynchronization between streams, even in small networks.[10] Synchronization is also difficult to achieve on devices that do not share the same global clock. Even when they do, but the objects sit on different networks, resynchronization is required from time to time. Existing protocols are insufficient to meet this demand.[40] Interoperability and standardization of the devices present in this environment are worth discussing because these concepts are essential pillars for its implementation. The devices do not know each other beforehand and have no information about the elements to which they will connect. But given the heterogeneity of these objects, in many cases they do not operate under the same protocols, nor are they able to interpret the data coming from their neighbors.[11] The main difference between IoMusT and IoT is the concern of the former with artistic issues.[10] Despite providing advantages, such as the possibility of creation among musicians arranged in different locations around the globe, massive connectivity, and new forms of participation by the audience, some problems stand out.
Among them are the rupture with the traditional model of artistic interactions, as observed in bands and orchestras; lack of visual feedback; the choice of which elements will be displayed and/or controlled by the audience; the absence of backup systems for remote concerts; expensive, inaccessible, and unergonomic devices; and a lack of investment in the necessary infrastructure.[10][11][41] With the enormous amount of data generated in these environments, legal concerns about personal data arise, since the devices are able to collect information from users involved in the process. Issues also appear involving unauthorized use of protected material, copyright infringement, and intellectual property violations.[10] Security issues are also worth mentioning. Because it is a system that communicates over the network, IoMusT is subject to attempts to steal sensitive data, denial-of-service attacks, and trojans. Possible solutions involve encryption algorithms, but these can lead to high energy and memory usage on the devices.[42] One of the first thinkers to analyze the impact of technology on society was Herbert Marcuse.[38] Among the problems cited by the author are: abundance of technology for one part of the population and scarcity for another; establishment of standards and demands by the ruling class; submission of workers to large corporations; retention of economic power and loss of individuality of thought.
All these problems are present in IoMusT as well.[11] Allied to this, other problems can be accentuated, such as non-heterogeneous access to technologies, since people living in suburban or rural areas do not have the same possibilities of access as people living in denser areas; lack of infrastructure, which increases the socio-cultural difference between people and classes; excessive consumption; the constant need for innovation; and social apartheid.[11][43] While IoMusT can revolutionize the music industry by providing artificial intelligence algorithms capable of mixing and altering sound, reducing production costs, it can also negatively impact the creative part of this field by replacing human tasks with machine-based solutions, as well as reducing employment opportunities in the field.[11] With the growth of electrical and electronic devices generated by this area, there is also concern about environmental issues, especially those concerning waste generation, pollution in the making and use of these materials, use of chemical materials that can be toxic, use of non-renewable resources, and possible ecological disturbances.[44][45] IoMusT allows rethinking some musical activities, such as live performances and rehearsals, multiplying the possibilities of interaction between the actors involved in these scenarios (musicians, audience, sound engineers, teachers, students, etc.).[10] Given this brief elucidation, it is possible to think of some usage scenarios, detailed below. Imagine that when people arrive at a concert of their favorite band, they can choose different interfaces that will accompany them throughout the performance. One person might choose smart glasses (a computing device that adds information according to what the wearer sees), another chooses a wristband that responds to musical stimuli, and a third selects a set of sensors and speakers. All these objects can track the user's movements and send this information to the band.
The band, in turn, can tailor its performance according to the audience's emotions, as well as send them stimuli that will be interpreted by the objects they are wearing.[10] Again, imagine users with wearable equipment capable of capturing their physiological data. From recording their wearer's movements and emotions (such as heart rate), musicians can decide what song to perform next, choreographers can create steps that best suit the recorded feelings, and the audience itself can make use of this data to control elements that aid the show, such as light and smoke cannons.[10][46] Meanwhile, people who were unable to physically attend the performance venue can experience the concert using virtual reality glasses or 360° video systems, allowing them to see behind the scenes of the stage and the details of the musicians' work. IoMusT also envisions applications that allow remote control, so that aspects of the concert can be modified by audience members around the globe.[10][46] Another possible scenario is a studio that uses IoMusT concepts to record solo artists, duos, and small groups as well as orchestras with a variety of instruments. For this, the recording interface can adapt its size according to the amount of equipment connected to it. Musicians can record even if they are not in the same physical location, and audio files can be recorded for later mixing and mastering.[46] Other possibilities include capturing audio from an instrument that is not in the same physical location, remote mixing and configuration of audio systems, obtaining performance data from musicians, and many others.[10] Music learning is enriched by IoMusT through applications that display the scores to be played, capture audio in real time, and suggest improvements.
Also, smart glasses can be used to indicate the correct position of the fingers on the instrument and to share data in the cloud that can be viewed by teachers, who will indicate improvements and the next steps to be taken.[10] This model describes a jam session that combines traditional instruments and electronic devices that exchange information over a network. These instruments can be plugged into speakers or connected to patches, while the users/musicians manipulate them from computer systems. Graphic elements such as videos, animations, and musical information can be displayed to assist the process; some users can participate only by controlling parameters of the instruments, such as volume, recording, and instrument effects (delay and reverb, for example), as well as changing colors and resolutions in the graphics. A sound technician can also manage the connections, removing participants with low network connection capacity or connecting those who wish to exchange information.[46]
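A minimal sketch of the parameter control described in this scenario, assuming an invented JSON message format (real systems would more likely use a protocol such as OSC or MIDI over the network):

```python
# Hypothetical sketch of a control message a participant's device might send
# to set instrument parameters (volume, delay, reverb) in a networked jam
# session. The message format is invented for illustration.
import json


def make_control_message(instrument_id, parameter, value):
    """Serialize a normalized parameter change for transmission over the network."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("parameter values are normalized to [0, 1]")
    return json.dumps({"instrument": instrument_id,
                       "parameter": parameter,
                       "value": value})


msg = make_control_message("synth-1", "reverb", 0.4)
print(msg)
```

Normalizing values to a fixed range lets heterogeneous devices agree on meaning without knowing each other's hardware ranges, which is one small answer to the interoperability problem discussed earlier.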
https://en.wikipedia.org/wiki/Internet_of_Musical_Things
Internet of Things (IoT) security devices are electronic tools connected via the Internet to a common network and are used to provide security measures. These devices can be controlled remotely through a mobile application, web-based interface, or any proprietary installed software, and they often have capabilities such as remote video monitoring, intrusion detection, automatic alerts, and smart automation features. IoT security devices form an integral part of the smart ecosystem, which is characterized by the interconnectivity of various appliances and devices through the Internet. The concept of IoT security devices began to gain traction in the early 2010s with the advent of smart technology. The initial devices were primarily focused on remote surveillance that would allow monitoring of properties remotely using webcams and similar devices. As technology advanced, these systems began to incorporate a wider range of features, such as intrusion detection and automatic alerts.[1] The rise of smart automation and the proliferation of IoT devices in the mid-2010s further accelerated the growth of IoT security devices. As of 2021, the market for IoT security devices is expected to continue its rapid expansion due to increasing consumer awareness about security and the continuing development of IoT technology.[2] Despite their benefits, IoT security devices have also raised several concerns. The most significant of these is the potential for privacy breaches. As these devices are connected to the internet, they are potentially vulnerable to hacking, which could result in unauthorized access to sensitive data.[4] There are also concerns about the reliance on internet connectivity. If an internet connection goes down, some devices may become non-functional, potentially leaving the environment unprotected.
Similarly, if a device's software isn't regularly updated, it could become vulnerable to security flaws.[5] With advances in security practice, however, IoT devices can be hardened through vulnerability assessment and penetration testing, performed by expert pentesters; manufacturers can also commission security audits of their IoT devices.
https://en.wikipedia.org/wiki/IoT_security_device
OpenWSN[1][2] is a project created at the University of California, Berkeley and extended at INRIA and at the Open University of Catalonia (UOC),[3] which aims to build an open, standards-based and open-source implementation of a complete constrained network protocol stack for wireless sensor networks and the Internet of Things. The root of OpenWSN is a deterministic MAC layer implementing IEEE 802.15.4e Time Slotted Channel Hopping (TSCH). Above the MAC layer, the Low Power Lossy Network stack is based on IETF standards, including the IETF 6TiSCH management and adaptation layer (a minimal configuration profile, the 6top protocol, and different scheduling functions). The stack is complemented by an implementation of 6LoWPAN, RPL in non-storing mode, UDP, and CoAP, enabling access to devices running the stack from native IPv6 through open standards. OpenWSN is related to a number of other projects. OpenWSN is available for Linux, Windows, and OS X platforms. The current release of OpenWSN is 1.14.0.
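The TSCH mechanism at the root of the stack can be sketched briefly. The hopping rule, deriving the channel from the Absolute Slot Number (ASN) and a cell's channel offset, follows the 802.15.4e scheme, but the sequence below (the sixteen 2.4 GHz channels 11 to 26, in order) is illustrative rather than a deployed default:

```python
# Sketch of the TSCH channel-hopping rule used by IEEE 802.15.4e / 6TiSCH:
# the frequency for a cell is derived from the Absolute Slot Number (ASN),
# which all nodes share, and the cell's channel offset.

HOPPING_SEQUENCE = list(range(11, 27))  # illustrative: channels 11..26 in order


def tsch_channel(asn, channel_offset, sequence=HOPPING_SEQUENCE):
    """channel = sequence[(ASN + channelOffset) mod sequence length]"""
    return sequence[(asn + channel_offset) % len(sequence)]


# Two consecutive slots with the same channel offset hop to different channels,
# which is what makes the MAC layer robust to narrowband interference.
print(tsch_channel(0, 0), tsch_channel(1, 0))  # 11 12
```

Because every node computes the same channel from the shared ASN, links stay deterministic even as they spread transmissions across the band.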
https://en.wikipedia.org/wiki/OpenWSN
Quantified self is both the cultural phenomenon of self-tracking with technology and a community of users and makers of self-tracking tools who share an interest in "self-knowledge through numbers".[1] Quantified self practices overlap with the practice of lifelogging and other trends that incorporate technology and data acquisition into daily life, often with the goal of improving physical, mental, and emotional performance. The widespread adoption in recent years of wearable fitness and sleep trackers such as the Fitbit or the Apple Watch,[2] combined with the increased presence of the Internet of things in healthcare and in exercise equipment, have made self-tracking accessible to a large segment of the population. Other terms for using self-tracking data to improve daily functioning[3] are auto-analytics, body hacking, self-quantifying, self-surveillance, sousveillance (recording of personal activity), and personal informatics.[4][5][6] According to Riphagen et al., the history of quantimetric self-tracking using wearable computers began in the 1970s: "The history of self-tracking using wearable sensors in combination with wearable computing and wireless communication already exists for many years, and also appeared, in the form of sousveillance back in the 1970s [13, 12]"[7] Quantimetric self-sensing was proposed for the use of wearable computers to automatically sense and measure exercise and dietary intake in 2002: "Sensors that measure biological signals, ... a personal data recorder that records ... Lifelong videocapture together with blood-sugar levels, ... correlate blood-sugar levels with activities such as eating, by capturing a food record of intake."[8][9] The "quantified self" or "self-tracking" are contemporary labels.
They reflect a broader trend of organization and meaning-making in human history: self-taken measurements and data collection have long been used toward the same goals as the quantified-self movement.[10] Scientisation plays a major role in legitimizing self-knowledge through self-tracking. As early as 2001, media artists such as Ellie Harrison and Alberto Frigo extensively pioneered the concept, proposing a new direction of labor-intensive self-tracking without using privacy-infringing automation.[11][page needed] The term quantified self appears to have been proposed in San Francisco by Wired magazine editors Gary Wolf[12] and Kevin Kelly[13] in 2007[14] as "a collaboration of users and tool makers who share an interest in self knowledge through self-tracking." In 2010, Wolf spoke about the movement at TED,[15] and in May 2011, the first international conference was held in Mountain View, California.[16] There are conferences in America and Europe. Gary Wolf said "Almost everything we do generates data." Wolf suggests that companies targeting advertising or recommending products use data from phones, tablets, computers, other technology, and credit cards. However, the data people generate can also give them new ways to deal with medical problems, help sleep patterns, and improve diet. Within the quantified self community, the concept of "personal science" has been developed. It is defined as: "the practice of exploring personally consequential questions by conducting self-directed N-of-1 studies using a structured empirical approach".[17] Like any empirical study, the primary method is the collection and analysis of data.[18] In many cases, data are collected automatically using wearable sensors, not limited to, but often worn on the wrist.[19] In other cases, data may be logged manually. The data are typically analyzed using traditional techniques such as linear regression to establish correlations among the variables under investigation.
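As an illustration of the kind of N-of-1 analysis described, here is a minimal least-squares fit on invented data, correlating hours slept with a self-reported mood score:

```python
# Hypothetical example of self-tracking analysis: an ordinary least-squares
# fit of mood score against hours slept. The numbers are invented to give a
# clean, perfectly linear illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b


sleep_hours = [5, 6, 7, 8, 9]
mood_score = [4, 5, 6, 7, 8]  # invented, perfectly linear data
a, b = fit_line(sleep_hours, mood_score)
print(a, b)  # -1.0 1.0
```

A real self-tracker's data would be noisier, which is why visualization and more formal hypothesis testing, as the text goes on to note, matter before reading anything into such a slope.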
As in every attempt to understand potentially high-dimensional data, visualization techniques can suggest hypotheses that may be tested more rigorously using formal methods. One simple example of a visualization method is to view the change in some variable over time. Even though the idea is not new, the technology is. Technology has made it easier and simpler to gather and analyze personal data. Since these technologies have become smaller and cheaper to be put in smart phones or tablets, it is easier to take the quantitative methods used in science and business and apply them to the personal sphere. Narratives constitute a symbiotic relationship with large bodies of data. Therefore, quantified self participants are encouraged to share their experiences of self-tracking at various conferences and meetings.[20] A major application of quantified self has been in health and wellness improvement.[21][22] Many devices and services help with tracking physical activity, caloric intake, health patterns, sleep quality, posture, asthma, and other factors involved in personal well-being. Corporate wellness programs, for example, will often encourage some form of tracking. Genetic testing and other services have also become popular. Quantified self is also being used to improve personal or professional productivity,[23] with tools and services being used to help people keep track of what they do during the workday, where they spend their time, and whom they interact with. Another application has been in the field of education, where wearable devices are being used in schools so that students can learn more about their own activities and related math and science.[24] A recent movement in quantified self is gamification. There is a wide variety of self-tracking technologies that allow everyday activities to be turned into games by awarding points or monetary value to encourage people to compete with their friends. The success of connected sport is part of the gamification movement.
People can pledge a certain amount of real or fake money, or receive awards and trophies. Many of these self-tracking applications or technologies are compatible with each other and with other websites so people can share information with one another.[25] Each technology may integrate with other apps or websites to show a bigger picture of health patterns, goals, and journaling.[26] For example, one may figure out that migraines were more likely to have painful side effects when using a particular migraine drug. Or one can study personal temporal associations between exercise and mood.[26] The quantified self is also demonstrating to be a major component of "big data science", due to the amount of data that users are collecting on a daily basis. Although these data streams are not conventional big data, they become sites for data analysis projects that could be used in medical fields to predict health patterns or aid in genomic studies. Examples of studies that have been done using QS data include projects such as the DIYgenomics studies, Harvard's Personal Genome Project, and the American Gut Microbiome Project.[10] Quantified baby is a branch of the quantified self movement that is concerned with collecting extensive data on a baby's daily activities, and using this data to make inferences about behavior and health. A number of software and hardware products exist to assist data collection by the parent or to collect data automatically for later analysis. Reactions to quantified baby are mixed.[27][28] Parents are often told by health professionals to record daily activities of their babies in the first few months, such as feeding times, sleeping times, and diaper changes.[29] This is useful for both the parent (to maintain a schedule and ensure they remain organised) and for the health professional (to make sure the baby is on target and occasionally to assist in diagnosis).
For quantified self, knowledge is power, and knowledge about oneself easily translates as a tool for self-improvement.[15] The aim for many is to use this tracking to ultimately become better parents. Some parents use sleep trackers because they worry about sudden infant death syndrome.[30] A number of apps exist that have been made for parents wanting to track their baby's daily activities. The most frequently tracked metrics are feeding, sleeping, and diaper changes. Mood, activity, medical appointments, and milestones are also sometimes covered. Other apps are specifically made for breastfeeding mothers, or those who are pumping their milk to build up a supply for their baby. Quantified baby, as in quantified self, is associated with a combination of wearable sensors and wearable computing. The synergy of these is related to the concept of the Internet of things.[28] The quantified self movement has faced some criticism related to the limitations it inherently contains or might pose to other domains. Within these debates, there are some discussions around the nature, responsibility, and outcome of the quantified self movement and its derivative practices. Generally, most bodies of criticism tackle the issue of data exploitation and data privacy, but also health literacy skills in the practice of self-tracking. While most of the users engaging in self-tracking practices are using the gathered data for self-knowledge and self-improvement, in some cases self-tracking is pushed and forced onto employees by employers in certain workplace environments, by health and life insurers, or by substance addiction programs (drug and alcohol monitoring) in order to monitor the physical activity of the subject and analyze the data to draw conclusions. Usually the data gathered by this practice of self-tracking can be accessed by commercial, governmental, research, and marketing agencies.[31] Another recurrent line of debate revolves around "data fetishism".
Data fetishism is a phenomenon that arises when active users of self-tracking devices become enticed by the satisfaction and sense of achievement and fulfillment that numerical data offer.[32] Proponents of such lines of criticism tend to claim that data in this sense become simplistic, where complex phenomena become transcribed into reductionist data.[33] This reductionist line of criticism generally incorporates fears and concerns about the ways in which ideas on health are redefined, as well as doctor-patient dynamics and the experience of selfhood among self-trackers. Because of such arguments, the quantified self movement has been criticized for providing predetermined ideals of health, well-being, and self-awareness. Rather than increasing personal skills for self-knowledge, it distances the user from the self by offering an inherently normative and reductionist framework.[31] An alternative line of criticism, still linked to the reductionist discourse but proposing a more hopeful solution, is related to the lack of health literacy among most self-trackers. The European Health Literacy Survey Consortium defines health literacy as "[...] people's knowledge, motivations, and competencies to access, understand, appraise, and apply health information in order to make judgments and take decisions in everyday life concerning healthcare, disease prevention and health promotion to maintain or improve quality of life during the life course."[34] Generally, people tend to focus mostly on the data-collecting stage, while the stages of data archiving, analysis, and interpretation are often overlooked because of the skills necessary to conduct such processes, which explains the call for the improvement of health literacy skills among self-quantifiers.[35] The health literacy critique is different from the data-fetishist critique in its approach to data's influence on human life, health, and experience.
While the data-fetishist critical discourse ascribes a crucial power of influence to numbers and data, the health literacy critique views gathered data as useless and powerless without the human context and the analysis and reflection skills of the user that are needed to act on the numbers. Data collection alone is not deterministic or normative, according to the health literacy critique. The "know thy numbers to know thyself" slogan of the quantified self movement is inconsistent, it has been claimed, in the sense that it does not fully acknowledge the need for auxiliary health literacy skills to actually get to "know thyself".[35] The solution proposed by proponents of the health literacy critique to improve the practice of self-tracking and its results is a focus on addressing individual and systemic barriers. Individual barriers are faced by elderly citizens when having to deal with contemporary technology, or in cases where there is a need for culturally sound practices, while systemic barriers could be overcome by involving the participation of more health literacy experts and organizing health literacy education.[35] Another challenge of self-tracking is that it can be quite burdensome, in the sense that it takes time but also that it can be experienced as a reminder that one is sick.[36] A study exploring self-tracking of Parkinson's disease found recommendations for balanced self-tracking: focusing on positive aspects, using improved tools, and discussing self-tracking results with healthcare providers.[37]
https://en.wikipedia.org/wiki/Quantified_self
Responsive computer-aided design (also simplified to responsive design) is an approach to computer-aided design (CAD) that utilizes real-world sensors and data to modify a three-dimensional (3D) computer model. The concept is related to cyber-physical systems through blurring of the virtual and physical worlds, but applies specifically to the initial digital design of an object prior to production. The process begins with a designer creating a basic design of an object using CAD software with parametric or algorithmic relationships. These relationships are then linked to physical sensors, allowing them to drive changes to the CAD model within the established parameters. Reasons to allow sensors to modify a CAD model include customizing a design to fit a user's anthropometry, assisting people without CAD skills to personalize a design, or automating part of an iterative design process in similar fashion to generative design. Once the sensors have affected the design, it may then be manufactured as a one-off piece using a digital fabrication technology, or go through further development by a designer. Responsive computer-aided design is enabled by ubiquitous computing and the Internet of Things, concepts which describe the capacity for everyday objects to contain computing and sensing technologies. It is also enabled by the ability to directly manufacture one-off objects from digital data, using technologies such as 3D printing and computer numerical control (CNC) machines. Such digital fabrication technologies allow for customization, and are drivers of the mass-customization phenomenon.[1][2] They also provide new opportunities for consumers to participate in the design process, known as co-design. As these concepts mature, responsive design is emerging as an opportunity to reduce reliance on graphical user interfaces (GUIs) as the only method for designers and consumers to design products,[3] aligning with claims by Golden Krishna that "the best design reduces work.
The best computer is unseen. The best interaction is natural. The best interface is no interface."[4] Calls to reduce reliance on GUIs and to automate some of the design process connect with Mark Weiser's original vision of ubiquitous computing.[5] A variety of similar research areas are based on gesture recognition, with many projects using motion capture to track the physical motions of a designer and translate them into three-dimensional geometry suitable for digital fabrication.[6][7] While these share similarities with responsive design through their cyber-physical systems, they require direct intent to design an object and some level of skill. They are not considered responsive, as responsive design occurs autonomously and may even occur without the user being aware that they are designing at all. The topic shares some traits with responsive web design and responsive architecture, both fields being focused on systems design and adaptation based on functional conditions. Responsive computer-aided design has been used to customize fashion, and is currently an active area of research in footwear by large companies such as New Balance, which is looking to customize shoe midsoles using foot pressure data from customers.[8] Sound waves have also been popular for customizing 3D models, producing sculptural forms of a baby's first cries[9] or a favorite song.[10]
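The sensor-to-parameter linkage described above can be sketched in a few lines. This is a minimal illustration only: the midsole model, the pressure range, and the parameter names are invented assumptions, not taken from any cited system.

```python
from dataclasses import dataclass

@dataclass
class ParametricMidsole:
    """Toy parametric CAD model: thickness varies with a measured foot pressure.

    All values are hypothetical; the point is that the sensor can only move
    the model within limits the designer established in advance.
    """
    min_thickness_mm: float = 8.0    # designer-set lower bound
    max_thickness_mm: float = 16.0   # designer-set upper bound

    def update_from_sensor(self, pressure_kpa: float) -> float:
        # Map an assumed heel-pressure reading (0-400 kPa) onto the allowed
        # thickness range, clamping so out-of-range sensor data cannot push
        # the geometry outside the established parameters.
        scale = min(max(pressure_kpa / 400.0, 0.0), 1.0)
        return self.min_thickness_mm + scale * (
            self.max_thickness_mm - self.min_thickness_mm)

model = ParametricMidsole()
print(model.update_from_sensor(300.0))   # heavier heel strike -> 14.0 mm
print(model.update_from_sensor(1000.0))  # clamped to the 16.0 mm maximum
```

A real responsive-CAD pipeline would drive a full parametric solid model rather than a single scalar, but the clamped mapping from sensor reading to design parameter is the essential mechanism.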
https://en.wikipedia.org/wiki/Responsive_computer-aided_design
During the 2010s, international media reports revealed new operational details about the Anglophone cryptographic agencies' global surveillance[1] of both foreign and domestic nationals. The reports mostly relate to top secret documents leaked by ex-NSA contractor Edward Snowden. The documents consist of intelligence files relating to the U.S. and other Five Eyes countries.[2] In June 2013, the first of Snowden's documents were published, with further selected documents released to various news outlets through the year. These media reports disclosed several secret treaties signed by members of the UKUSA community in their efforts to implement global surveillance. For example, Der Spiegel revealed how the German Federal Intelligence Service (German: Bundesnachrichtendienst; BND) transfers "massive amounts of intercepted data to the NSA",[3] while Swedish Television revealed the National Defence Radio Establishment (FRA) provided the NSA with data from its cable collection, under a secret agreement signed in 1954 for bilateral cooperation on surveillance.[4] Other security and intelligence agencies involved in the practice of global surveillance include those in Australia (ASD), Britain (GCHQ), Canada (CSE), Denmark (PET), France (DGSE), Germany (BND), Italy (AISE), the Netherlands (AIVD), Norway (NIS), Spain (CNI), Switzerland (NDB), Singapore (SID) as well as Israel (ISNU), which receives raw, unfiltered data of U.S. citizens from the NSA.[5][6][7][8][9][10][11][12] On June 14, 2013, United States prosecutors charged Edward Snowden with espionage and theft of government property.
In late July 2013, he was granted one year of temporary asylum by the Russian government,[13] contributing to a deterioration of Russia–United States relations.[14][15] Toward the end of October 2013, the British Prime Minister David Cameron warned The Guardian not to publish any more leaks, or it would receive a DA-Notice.[16] In November 2013, a criminal investigation of the disclosure was undertaken by Britain's Metropolitan Police Service.[17] In December 2013, The Guardian editor Alan Rusbridger said: "We have published I think 26 documents so far out of the 58,000 we've seen."[18] The extent to which the media reports responsibly informed the public is disputed. In January 2014, Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light",[19] and critics such as Sean Wilentz have noted that many of the Snowden documents do not concern domestic surveillance.[20] The U.S. and British defense establishments weigh the strategic harm in the period following the disclosures more heavily than the civic public benefit. In its first assessment of these disclosures, the Pentagon concluded that Snowden had committed the biggest "theft" of U.S. secrets in the history of the United States.[21] Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".[22] Snowden obtained the documents while working for Booz Allen Hamilton, one of the largest contractors for defense and intelligence in the United States.[2] The initial simultaneous publication in June 2013 by The Washington Post and The Guardian[23] continued throughout 2013.
A small portion of the estimated full cache of documents was later published by other media outlets worldwide, most notably The New York Times (United States), the Canadian Broadcasting Corporation, the Australian Broadcasting Corporation, Der Spiegel (Germany), O Globo (Brazil), Le Monde (France), L'espresso (Italy), NRC Handelsblad (the Netherlands), Dagbladet (Norway), El País (Spain), and Sveriges Television (Sweden).[24] Barton Gellman, a Pulitzer Prize–winning journalist who led The Washington Post's coverage of Snowden's disclosures, summarized the leaks as follows: Taken together, the revelations have brought to light a global surveillance system that cast off many of its historical restraints after the attacks of Sept. 11, 2001. Secret legal authorities empowered the NSA to sweep in the telephone, Internet and location records of whole populations. The disclosure revealed specific details of the NSA's close cooperation with U.S. federal agencies such as the Federal Bureau of Investigation (FBI)[26][27] and the Central Intelligence Agency (CIA),[28][29] in addition to the agency's previously undisclosed financial payments to numerous commercial partners and telecommunications companies,[30][31][32] as well as its previously undisclosed relationships with international partners such as Britain,[33][34] France,[10][35] and Germany,[3][36] and its secret treaties with foreign governments, recently established for sharing intercepted data of each other's citizens.[5][37][38][39] The disclosures were made public over the course of several months since June 2013, by the press in several nations, from the trove leaked by the former NSA contractor Edward J.
Snowden,[40] who obtained the trove while working for Booz Allen Hamilton.[2] George Brandis, the Attorney-General of Australia, asserted that Snowden's disclosure is the "most serious setback for Western intelligence since the Second World War".[41] As of December 2013, global surveillance programs include: The NSA was also getting data directly from telecommunications companies, code-named Artifice (Verizon), Lithium (AT&T), Serenade, SteelKnight, and X. The real identities of the companies behind these code names were not included in the Snowden document dump because they were protected as Exceptionally Controlled Information, which prevents wide circulation even to those (like Snowden) who otherwise have the necessary security clearance.[64][65] Although the exact size of Snowden's disclosure remains unknown, the following estimates have been put forward by various government officials: As a contractor of the NSA, Snowden was granted access to U.S. government documents along with top secret documents of several allied governments, via the exclusive Five Eyes network.[68] Snowden claims that he currently does not physically possess any of these documents, having surrendered all copies to the journalists he met in Hong Kong.[69] According to his lawyer, Snowden has pledged not to release any documents while in Russia, leaving the responsibility for further disclosures solely to journalists.[70] As of 2014, the following news outlets have accessed some of the documents provided by Snowden: Australian Broadcasting Corporation, Canadian Broadcasting Corporation, Channel 4, Der Spiegel, El País, El Mundo, L'espresso, Le Monde, NBC, NRC Handelsblad, Dagbladet, O Globo, South China Morning Post, Süddeutsche Zeitung, Sveriges Television, The Guardian, The New York Times, and The Washington Post.
In the 1970s, NSA analyst Perry Fellwock (under the pseudonym "Winslow Peck") revealed the existence of the UKUSA Agreement, which forms the basis of the ECHELON network, whose existence was revealed in 1988 by Lockheed employee Margaret Newsham.[71][72] Months before the September 11 attacks and during their aftermath, further details of the global surveillance apparatus were provided by various individuals, such as the former MI5 official David Shayler and the journalist James Bamford,[73][74] who were followed by: In the aftermath of Snowden's revelations, the Pentagon concluded that Snowden had committed the biggest theft of U.S. secrets in the history of the United States.[21] In Australia, the coalition government described the leaks as the most damaging blow dealt to Australian intelligence in history.[41] Sir David Omand, a former director of GCHQ, described Snowden's disclosure as the "most catastrophic loss to British intelligence ever".[22] In April 2012, NSA contractor Edward Snowden began downloading documents.[87] That year, Snowden made his first contact with journalist Glenn Greenwald, then employed by The Guardian, and he contacted documentary filmmaker Laura Poitras in January 2013.[88][89] In May 2013, Snowden went on temporary leave from his position at the NSA, citing the pretext of receiving treatment for his epilepsy. He traveled from Hawaii to Hong Kong at the end of May.[90][91] After the U.S.-based editor of The Guardian, Janine Gibson, held several meetings in New York City, she decided that Greenwald, Poitras and The Guardian's defence and intelligence correspondent Ewen MacAskill would fly to Hong Kong to meet Snowden.
On June 5, in the first media report based on the leaked material,[92] The Guardian exposed a top secret court order showing that the NSA had collected phone records from over 120 million Verizon subscribers.[93] Under the order, the numbers of both parties on a call, as well as the location data, unique identifiers, time of call, and duration of call, were handed over to the FBI, which turned over the records to the NSA.[93] According to The Wall Street Journal, the Verizon order is part of a controversial data program which seeks to stockpile records on all calls made in the U.S., but which does not collect information directly from T-Mobile US and Verizon Wireless, in part because of their foreign ownership ties.[94] On June 6, 2013, the second media disclosure, the revelation of the PRISM surveillance program (which collects the e-mail, voice, text and video chats of foreigners and an unknown number of Americans from Microsoft, Google, Facebook, Yahoo, Apple and other tech giants),[95][96][97][98] was published simultaneously by The Guardian and The Washington Post.[86][99] Der Spiegel revealed NSA spying on multiple diplomatic missions of the European Union and on the United Nations Headquarters in New York.[100][101] During specific episodes within a four-year period, the NSA hacked several Chinese mobile-phone companies,[102] the Chinese University of Hong Kong and Tsinghua University in Beijing,[103] and the Asian fiber-optic network operator Pacnet.[104] Only Australia, Canada, New Zealand and the UK are explicitly exempted from NSA attacks, whose main target in the European Union is Germany.[105] A method of bugging encrypted fax machines used at an EU embassy is codenamed Dropmire.[106] During the 2009 G-20 London summit, the British intelligence agency Government Communications Headquarters (GCHQ) intercepted the communications of foreign diplomats.[107] In addition, GCHQ has been intercepting and storing mass quantities of fiber-optic traffic via Tempora.[108] Two principal components of Tempora are
called "Mastering the Internet" (MTI) and "Global Telecoms Exploitation".[109] The data is preserved for three days, while metadata is kept for thirty days.[110] Data collected by GCHQ under Tempora is shared with the National Security Agency in the United States.[109] From 2001 to 2011, the NSA collected vast amounts of metadata records detailing the email and internet usage of Americans via Stellar Wind,[111] which was later terminated due to operational and resource constraints. It was subsequently replaced by newer surveillance programs such as ShellTrumpet, which "processed its one trillionth metadata record" by the end of December 2012.[112] The NSA follows specific procedures to target non-U.S. persons[113] and to minimize data collection from U.S. persons.[114] These court-approved policies allow the NSA to:[115][116] According to Boundless Informant, over 97 billion pieces of intelligence were collected over a 30-day period ending in March 2013. Of those 97 billion sets of information, about 3 billion data sets originated from U.S. computer networks[117] and around 500 million metadata records were collected from German networks.[118] In August 2013, it was revealed that the Bundesnachrichtendienst (BND) of Germany transfers massive amounts of metadata records to the NSA.[119] Der Spiegel disclosed that, of all 27 member states of the European Union, Germany is the most targeted, due to the NSA's systematic monitoring and storage of Germany's telephone and Internet connection data. According to the magazine, the NSA stores data from around half a billion communications connections in Germany each month. This data includes telephone calls, emails, mobile-phone text messages and chat transcripts.[120] The NSA gained massive amounts of information captured from the monitored data traffic in Europe. For example, in December 2012, the NSA gathered on an average day metadata from some 15 million telephone connections and 10 million Internet datasets.
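Tempora's reported two-tier retention (full content for three days, metadata for thirty) can be illustrated with a small pruning routine. The record layout and field names below are hypothetical; only the two retention windows come from the reporting.

```python
from datetime import datetime, timedelta

# The two retention windows reported for Tempora; everything else in this
# sketch (record structure, field names) is invented for illustration.
RETENTION = {"content": timedelta(days=3), "metadata": timedelta(days=30)}

def prune(records, now):
    """Keep only records still inside their category's retention window."""
    return [r for r in records
            if now - r["captured_at"] <= RETENTION[r["kind"]]]

now = datetime(2013, 6, 5)
records = [
    {"kind": "content",  "captured_at": now - timedelta(days=2)},   # kept
    {"kind": "content",  "captured_at": now - timedelta(days=5)},   # expired
    {"kind": "metadata", "captured_at": now - timedelta(days=29)},  # kept
    {"kind": "metadata", "captured_at": now - timedelta(days=31)},  # expired
]
print(len(prune(records, now)))  # 2
```

The asymmetry is the point: bulky intercepted content ages out quickly, while the far smaller metadata records, which support long-term link analysis, are kept an order of magnitude longer.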
The NSA also monitored the European Commission in Brussels, and monitored EU diplomatic facilities in Washington and at the United Nations, by placing bugs in offices as well as infiltrating computer networks.[121] As part of its UPSTREAM data collection program, the U.S. government made deals with companies to ensure that it had access to, and hence the capability to surveil, undersea fiber-optic cables which deliver e-mails, Web pages, other electronic communications and phone calls from one continent to another at the speed of light.[122][123] According to the Brazilian newspaper O Globo, the NSA spied on millions of emails and calls of Brazilian citizens,[124][125] while Australia and New Zealand have been involved in the joint operation of the NSA's global analytical system XKeyscore.[126][127] Among the numerous allied facilities contributing to XKeyscore are four installations in Australia and one in New Zealand: O Globo released an NSA document titled "Primary FORNSAT Collection Operations", which revealed the specific locations and codenames of the FORNSAT intercept stations in 2002.[128] According to Edward Snowden, the NSA has established secret intelligence partnerships with many Western governments.[127] The Foreign Affairs Directorate (FAD) of the NSA is responsible for these partnerships, which, according to Snowden, are organized such that foreign governments can "insulate their political leaders" from public outrage in the event that these global surveillance partnerships are leaked.[129] In an interview published by Der Spiegel, Snowden accused the NSA of being "in bed together with the Germans".[130] The NSA granted the German intelligence agencies BND (foreign intelligence) and BfV (domestic intelligence) access to its controversial XKeyscore system.[131] In return, the BND turned over copies of two systems named Mira4 and Veras, reported to exceed the NSA's SIGINT capabilities in certain areas.[3] Every day, massive amounts of metadata records are collected by the BND and
transferred to the NSA via the Bad Aibling Station near Munich, Germany.[3] In December 2012 alone, the BND handed over 500 million metadata records to the NSA.[132][133] In a document dated January 2013, the NSA acknowledged the efforts of the BND to undermine privacy laws: The BND has been working to influence the German government to relax interpretation of the privacy laws to provide greater opportunities of intelligence sharing.[133] According to an NSA document dated April 2013, Germany has now become the NSA's "most prolific partner".[133] Under a section of a separate document leaked by Snowden, titled "Success Stories", the NSA acknowledged the efforts of the German government to expand the BND's international data sharing with partners: The German government modifies its interpretation of the G-10 privacy law ... to afford the BND more flexibility in sharing protected information with foreign partners.[49] In addition, the German government was well aware of the PRISM surveillance program long before Edward Snowden made details public. According to Angela Merkel's spokesman Steffen Seibert, there are two separate PRISM programs – one used by the NSA and the other used by NATO forces in Afghanistan.[134] The two programs are "not identical".[134] The Guardian revealed further details of the NSA's XKeyscore tool, which allows government analysts to search through vast databases containing the emails, online chats and browsing histories of millions of individuals without prior authorization.[135][136][137] Microsoft "developed a surveillance capability to deal" with the interception of encrypted chats on Outlook.com, within five months after the service went into testing. The NSA had access to Outlook.com emails because "Prism collects this data prior to encryption."[45] In addition, Microsoft worked with the FBI to enable the NSA to gain access to its cloud storage service SkyDrive.
An internal NSA document dating from August 3, 2012, described the PRISM surveillance program as a "team sport".[45] The CIA's National Counterterrorism Center is allowed to examine federal government files for possible criminal behavior, even if there is no reason to suspect U.S. citizens of wrongdoing. Previously the NCTC was barred from doing so, unless a person was a terror suspect or related to an investigation.[138] Snowden also confirmed that Stuxnet was cooperatively developed by the United States and Israel.[139] In a report unrelated to Edward Snowden, the French newspaper Le Monde revealed that France's DGSE was also undertaking mass surveillance, which it described as "illegal and outside any serious control".[140][141] Documents leaked by Edward Snowden that were seen by Süddeutsche Zeitung (SZ) and Norddeutscher Rundfunk revealed that several telecom operators have played a key role in helping the British intelligence agency Government Communications Headquarters (GCHQ) tap into worldwide fiber-optic communications. The telecom operators are: Each of them was assigned a particular area of the international fiber-optic network for which it was individually responsible.
The following networks have been infiltrated by GCHQ: TAT-14 (EU-UK-US), Atlantic Crossing 1 (EU-UK-US), Circe South (France-UK), Circe North (Netherlands-UK), Flag Atlantic-1, Flag Europa-Asia, SEA-ME-WE 3 (Southeast Asia-Middle East-Western Europe), SEA-ME-WE 4 (Southeast Asia-Middle East-Western Europe), Solas (Ireland-UK), UK-France 3, UK-Netherlands 14, ULYSSES (EU-UK), Yellow (UK-US) and Pan European Crossing (EU-UK).[143] Telecommunication companies who participated were "forced" to do so and had "no choice in the matter".[143] Some of the companies were subsequently paid by GCHQ for their participation in the infiltration of the cables.[143] According to the SZ, GCHQ has access to the majority of internet and telephone communications flowing throughout Europe, can listen to phone calls, read emails and text messages, and see which websites internet users from all around the world are visiting. It can also retain and analyse nearly the entire European internet traffic.[143] GCHQ is collecting all data transmitted to and from the United Kingdom and Northern Europe via the undersea fibre-optic telecommunications cable SEA-ME-WE 3. The Security and Intelligence Division (SID) of Singapore co-operates with Australia in accessing and sharing communications carried by the SEA-ME-WE 3 cable. The Australian Signals Directorate (ASD) is also in a partnership with British, American and Singaporean intelligence agencies to tap undersea fibre-optic telecommunications cables that link Asia, the Middle East and Europe and carry much of Australia's international phone and internet traffic.[144] The U.S. runs a top-secret surveillance program known as the Special Collection Service (SCS), which is based in over 80 U.S.
consulates and embassies worldwide.[145][146] The NSA hacked the United Nations' video conferencing system in the summer of 2012, in violation of a UN agreement.[145][146] The NSA is not just intercepting the communications of Americans who are in direct contact with foreigners targeted overseas, but is also searching the contents of vast amounts of e-mail and text communications into and out of the country by Americans who mention information about foreigners under surveillance.[147] It also spied on Al Jazeera and gained access to its internal communications systems.[148] The NSA has built a surveillance network that has the capacity to reach roughly 75% of all U.S. Internet traffic.[149][150][151] U.S. law-enforcement agencies use tools used by computer hackers to gather information on suspects.[152][153] An internal NSA audit from May 2012 identified 2,776 incidents (violations of the rules or court orders for surveillance of Americans and foreign targets in the U.S.) in the period from April 2011 through March 2012, while U.S. officials stressed that any mistakes were not intentional.[154][155][156][157] The FISA Court, which is supposed to provide critical oversight of the U.S. government's vast spying programs, has limited ability to do so and must trust the government to report when it improperly spies on Americans.[158] A legal opinion declassified on August 21, 2013, revealed that the NSA had intercepted for three years as many as 56,000 electronic communications a year of Americans not suspected of having links to terrorism, before the FISA court that oversees surveillance found the operation unconstitutional in 2011.[159][160][161][162] Under the Corporate Partner Access project, major U.S.
telecommunications providers receive hundreds of millions of dollars each year from the NSA.[163] Voluntary cooperation between the NSA and the providers of global communications took off during the 1970s under the cover name BLARNEY.[163] A letter drafted by the Obama administration, specifically to inform Congress of the government's mass collection of Americans' telephone communications data, was withheld from lawmakers by leaders of the House Intelligence Committee in the months before a key vote affecting the future of the program.[164][165] The NSA paid GCHQ over £100 million between 2009 and 2012; in exchange for these funds, GCHQ "must pull its weight and be seen to pull its weight". Documents referenced in the article explain that the weaker British laws regarding spying are "a selling point" for the NSA. GCHQ is also developing the technology to "exploit any mobile phone at any time".[166] The NSA has, under a legal authority, a secret backdoor into its databases gathered from large Internet companies, enabling it to search for U.S. citizens' email and phone calls without a warrant.[167][168] The Privacy and Civil Liberties Oversight Board urged the U.S. intelligence chiefs to draft stronger U.S. surveillance guidelines on domestic spying, after finding that several of those guidelines had not been updated in up to 30 years.[169][170] U.S.
intelligence analysts have deliberately broken rules designed to prevent them from spying on Americans, choosing to ignore so-called "minimisation procedures" aimed at protecting privacy,[171][172] and have used the NSA's enormous eavesdropping power to spy on love interests.[173] After the U.S. Foreign Intelligence Surveillance Court ruled in October 2011 that some of the NSA's activities were unconstitutional, the agency paid millions of dollars to major internet companies to cover the extra costs incurred in their involvement with the PRISM surveillance program.[174] "Mastering the Internet" (MTI) is part of the Interception Modernisation Programme (IMP) of the British government, which involves the insertion of thousands of DPI (deep packet inspection) "black boxes" at various internet service providers, as revealed by the British media in 2009.[175] In 2013, it was further revealed that the NSA had made a £17.2 million financial contribution to the project, which is capable of vacuuming signals from up to 200 fibre-optic cables at all physical points of entry into Great Britain.[176] The Guardian and The New York Times reported on secret documents leaked by Snowden showing that the NSA has been in "collaboration with technology companies" as part of "an aggressive, multipronged effort" to weaken the encryption used in commercial software, and that GCHQ has a team dedicated to cracking "Hotmail, Google, Yahoo and Facebook" traffic.[183] Germany's domestic security agency Bundesverfassungsschutz (BfV) systematically transfers the personal data of German residents to the NSA, CIA and seven other members of the United States Intelligence Community, in exchange for information and espionage software.[184][185][186] Israel, Sweden and Italy are also cooperating with American and British intelligence agencies.
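The "minimisation procedures" mentioned above are, conceptually, filters applied to intercepted records before they are analyzed or shared, masking identifiers belonging to protected (U.S.-person) selectors. The sketch below is a purely conceptual illustration: the record layout, selector list, and masking rule are invented, and the real procedures are far more complex.

```python
# Hypothetical selectors known to belong to U.S. persons; in this toy model,
# minimisation means masking them before a record leaves the collection stage.
US_PERSON_SELECTORS = {"alice@example.com", "+1-202-555-0100"}

def minimize(record):
    """Return a copy of the record with protected identifiers masked."""
    masked = dict(record)
    for field in ("sender", "recipient"):
        if masked.get(field) in US_PERSON_SELECTORS:
            masked[field] = "[MINIMIZED U.S. PERSON]"
    return masked

record = {"sender": "alice@example.com",
          "recipient": "target@example.org",
          "body": "..."}
print(minimize(record)["sender"])     # [MINIMIZED U.S. PERSON]
print(minimize(record)["recipient"])  # target@example.org (not protected)
```

Framed this way, the reported abuses amount to skipping or overriding this filtering step, and the reported raw-data sharing with Israel amounts to passing records along before any such filter is applied.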
Under a secret treaty codenamed "Lustre", French intelligence agencies transferred millions of metadata records to the NSA.[62][63][187][188] The Obama administration secretly won permission from the Foreign Intelligence Surveillance Court in 2011 to reverse restrictions on the National Security Agency's use of intercepted phone calls and e-mails, permitting the agency to search deliberately for Americans' communications in its massive databases. The searches take place under a surveillance program Congress authorized in 2008 under Section 702 of the Foreign Intelligence Surveillance Act. Under that law, the target must be a foreigner "reasonably believed" to be outside the United States, and the court must approve the targeting procedures in an order good for one year. But a warrant for each target would thus no longer be required. That means that communications with Americans could be picked up without a court first determining that there is probable cause that the people they were talking to were terrorists, spies or "foreign powers". The FISC also extended the length of time that the NSA is allowed to retain intercepted U.S. communications from five years to six years, with an extension possible for foreign intelligence or counterintelligence purposes.
Both measures were done without public debate or any specific authority from Congress.[189] A special branch of the NSA called "Follow the Money" (FTM) monitors international payments, banking and credit card transactions, and later stores the collected data in the NSA's own financial databank, "Tracfin".[190] The NSA monitored the communications of Brazil's president Dilma Rousseff and her top aides.[191] The agency also spied on Brazil's oil firm Petrobras as well as French diplomats, and gained access to the private network of the Ministry of Foreign Affairs of France and the SWIFT network.[192] In the United States, the NSA uses the analysis of the phone call and e-mail logs of American citizens to create sophisticated graphs of their social connections that can identify their associates, their locations at certain times, their traveling companions and other personal information.[193] The NSA routinely shares raw intelligence data with Israel without first sifting it to remove information about U.S. citizens.[5][194] In an effort codenamed GENIE, computer specialists can control foreign computer networks using "covert implants", a form of remotely transmitted malware, on tens of thousands of devices annually.[195][196][197][198] As worldwide sales of smartphones began exceeding those of feature phones, the NSA decided to take advantage of the smartphone boom.
This is particularly advantageous because the smartphone combines a myriad of data that would interest an intelligence agency, such as social contacts, user behavior, interests, location, photos, and credit card numbers and passwords.[199] An internal NSA report from 2010 stated that the spread of the smartphone had been occurring "extremely rapidly", developments that "certainly complicate traditional target analysis".[199] According to the document, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system.[199] Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry.[199] Under the heading "iPhone capability", the document notes that there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iOS 3 and iOS 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger.[199] On September 9, 2013, an internal NSA presentation on iPhone location services was published by Der Spiegel. One slide shows scenes from Apple's 1984-themed television commercial alongside the words "Who knew in 1984..."; another shows Steve Jobs holding an iPhone, with the text "...that this would be big brother..."; and a third shows happy consumers with their iPhones, completing the question with "...and the zombies would be paying customers?"[200] On October 4, 2013, The Washington Post and The Guardian jointly reported that the NSA and GCHQ had made repeated attempts to spy on anonymous Internet users who have been communicating in secret via the anonymity network Tor. Several of these surveillance operations involved the implantation of malicious code into the computers of Tor users who visited particular websites. The NSA and GCHQ had partly succeeded in blocking access to the anonymous network, diverting Tor users to insecure channels.
The government agencies were also able to uncover the identity of some anonymous Internet users.[201][202][203][204] The Communications Security Establishment (CSE) has been using a program called Olympia to map the communications of Brazil's Mines and Energy Ministry, by targeting the metadata of phone calls and emails to and from the ministry.[205][206] The Australian federal government knew about the PRISM surveillance program months before Edward Snowden made details public.[207][208] The NSA gathered hundreds of millions of contact lists from personal e-mail and instant messaging accounts around the world. The agency did not target individuals. Instead it collected contact lists in large numbers that amount to a sizable fraction of the world's e-mail and instant messaging accounts. Analysis of that data enables the agency to search for hidden connections and to map relationships within a much smaller universe of foreign intelligence targets.[209][210][211][212] The NSA monitored the public email account of former Mexican president Felipe Calderón (thus gaining access to the communications of high-ranking cabinet members), the emails of several high-ranking members of Mexico's security forces, and the text messages and mobile-phone communications of Mexican president Enrique Peña Nieto.[213][214] The NSA tries to gather cellular and landline phone numbers, often obtained from American diplomats, for as many foreign officials as possible. The contents of the phone calls are stored in computer databases that can regularly be searched using keywords.[215][216] The NSA has been monitoring the telephone conversations of 35 world leaders.[217] The U.S. government's first public acknowledgment that it tapped the phones of world leaders was reported on October 28, 2013, by The Wall Street Journal, after an internal U.S.
government review turned up NSA monitoring of some 35 world leaders.[218] GCHQ has tried to keep its mass surveillance program a secret because it feared a "damaging public debate" on the scale of its activities, which could lead to legal challenges against them.[219] The Guardian revealed that the NSA had been monitoring the telephone conversations of 35 world leaders after being given the numbers by an official in another U.S. government department. A confidential memo revealed that the NSA encouraged senior officials in such departments as the White House, State and the Pentagon to share their "Rolodexes", so the agency could add the telephone numbers of leading foreign politicians to its surveillance systems. Reacting to the news, German leader Angela Merkel, arriving in Brussels for an EU summit, accused the U.S. of a breach of trust, saying: "We need to have trust in our allies and partners, and this must now be established once again. I repeat that spying among friends is not at all acceptable against anyone, and that goes for every citizen in Germany."[217] The NSA collected in 2010 data on ordinary Americans' cellphone locations, but later discontinued the program because it had no "operational value".[220] Under Britain's MUSCULAR programme, the NSA and GCHQ have secretly broken into the main communications links that connect Yahoo and Google data centers around the world, and have thereby gained the ability to collect metadata and content at will from hundreds of millions of user accounts.[221][222][223][224] The mobile phone of German Chancellor Angela Merkel might have been tapped by U.S. intelligence.[225][226][227][228] According to Der Spiegel, this monitoring goes back to 2002[229][230] and ended in the summer of 2013,[218] while The New York Times reported that Germany has evidence that the NSA's surveillance of Merkel began during George W.
Bush's tenure.[231] After learning from Der Spiegel magazine that the NSA had been listening in to her personal mobile phone, Merkel compared the snooping practices of the NSA with those of the Stasi.[232] In March 2014, Der Spiegel reported that Merkel had also been placed on an NSA surveillance list alongside 122 other world leaders.[233] On October 31, 2013, Hans-Christian Ströbele, a member of the German Bundestag who visited Snowden in Russia, reported on Snowden's willingness to provide details of the NSA's espionage program.[234] A highly sensitive signals intelligence collection program known as Stateroom involves the interception of radio, telecommunications and internet traffic. It is operated out of the diplomatic missions of the Five Eyes (Australia, Britain, Canada, New Zealand, United States) in numerous locations around the world. The program conducted at U.S. diplomatic missions is run jointly by the U.S. intelligence agencies NSA and CIA in a venture group called the "Special Collection Service" (SCS), whose members work undercover in shielded areas of American embassies and consulates, where they are officially accredited as diplomats and as such enjoy special privileges. Under diplomatic protection, they are able to look and listen unhindered. The SCS, for example, used the American embassy near the Brandenburg Gate in Berlin to monitor communications in Germany's government district, with its parliament and the seat of the government.[228][235][236][237] Under the Stateroom surveillance programme, Australia operates clandestine surveillance facilities to intercept phone calls and data across much of Asia.[236][238] In France, the NSA targeted people belonging to the worlds of business, politics or French state administration. The NSA monitored and recorded the content of telephone communications and the history of the connections of each target, i.e.
the metadata.[239][240] The actual surveillance operation was performed by French intelligence agencies on behalf of the NSA.[62][241] The cooperation between France and the NSA was confirmed by the Director of the NSA, Keith B. Alexander, who asserted that foreign intelligence services collected phone records in "war zones" and "other areas outside their borders" and provided them to the NSA.[242] The French newspaper Le Monde also disclosed new PRISM and Upstream slides (see pages 4, 7 and 8) coming from the "PRISM/US-984XN Overview" presentation.[243] In Spain, the NSA intercepted the telephone conversations, text messages and emails of millions of Spaniards, and spied on members of the Spanish government.[244] Between December 10, 2012, and January 8, 2013, the NSA collected metadata on 60 million telephone calls in Spain.[245] According to documents leaked by Snowden, the surveillance of Spanish citizens was jointly conducted by the NSA and the intelligence agencies of Spain.[246][247] The New York Times reported that the NSA carries out an eavesdropping effort, dubbed Operation Dreadnought, against the Iranian leader Ayatollah Ali Khamenei. During his 2009 visit to Iranian Kurdistan, the agency collaborated with GCHQ and the U.S. National Geospatial-Intelligence Agency, collecting radio transmissions between aircraft and airports, examining Khamenei's convoy with satellite imagery, and enumerating military radar stations. According to the story, an objective of the operation is "communications fingerprinting": the ability to distinguish Khamenei's communications from those of other people in Iran.[248] The same story revealed an operation code-named Ironavenger, in which the NSA intercepted e-mails sent between a country allied with the United States and the government of "an adversary". The ally was conducting a spear-phishing attack: its e-mails contained malware.
The NSA gathered documents and login credentials belonging to the enemy country, along with knowledge of the ally's capabilities for attacking computers.[248] According to the British newspaper The Independent, the British intelligence agency GCHQ maintains a listening post on the roof of the British Embassy in Berlin that is capable of intercepting mobile phone calls, wi-fi data and long-distance communications all over the German capital, including adjacent government buildings such as the Reichstag (seat of the German parliament) and the Chancellery (seat of Germany's head of government) clustered around the Brandenburg Gate.[249] Operating under the code-name "Quantum Insert", GCHQ set up a fake website masquerading as LinkedIn, a social website used for professional networking, as part of its efforts to install surveillance software on the computers of the telecommunications operator Belgacom.[250][251][252] In addition, the headquarters of the oil cartel OPEC were infiltrated by GCHQ as well as the NSA, which bugged the computers of nine OPEC employees and monitored the General Secretary of OPEC.[250] For more than three years GCHQ has been using an automated monitoring system code-named "Royal Concierge" to infiltrate the reservation systems of at least 350 prestigious hotels in many different parts of the world in order to target, search and analyze reservations to detect diplomats and government officials.[253] First tested in 2010, the aim of "Royal Concierge" is to track down the travel plans of diplomats, and it is often supplemented with surveillance methods related to human intelligence (HUMINT).
Other covert operations include the wiretapping of room telephones and fax machines used in targeted hotels as well as the monitoring of computers hooked up to the hotel network.[253] In November 2013, the Australian Broadcasting Corporation and The Guardian revealed that the Australian Signals Directorate (DSD) had attempted to listen in on the private phone calls of the president of Indonesia and his wife. The Indonesian foreign minister, Marty Natalegawa, confirmed that he and the president had contacted the ambassador in Canberra. Natalegawa said any tapping of Indonesian politicians' personal phones "violates every single decent and legal instrument I can think of—national in Indonesia, national in Australia, international as well".[254] Other high-ranking Indonesian politicians were also targeted by the DSD. Carrying the title "3G impact and update", a classified presentation leaked by Snowden revealed the attempts of the ASD/DSD to keep pace with the rollout of 3G technology in Indonesia and across Southeast Asia. The ASD/DSD motto placed at the bottom of each page reads: "Reveal their secrets—protect our own."[255] Under a secret deal approved by British intelligence officials, the NSA has been storing and analyzing the internet and email records of British citizens since 2007. The NSA also proposed in 2005 a procedure for spying on the citizens of the UK and other nations of the Five Eyes alliance, even where the partner government has explicitly denied the U.S. permission to do so.
Under the proposal, partner countries would neither be informed about this particular type of surveillance nor the procedure of doing so.[37] Toward the end of November, The New York Times released an internal NSA report outlining the agency's efforts to expand its surveillance abilities.[256] The five-page document asserts that the law of the United States has not kept up with the needs of the NSA to conduct mass surveillance in the "golden age" of signals intelligence, but there are grounds for optimism because, in the NSA's own words: The culture of compliance, which has allowed the American people to entrust NSA with extraordinary authorities, will not be compromised in the face of so many demands, even as we aggressively pursue legal authorities...[257] The report, titled "SIGINT Strategy 2012–2016", also said that the U.S. will try to influence the "global commercial encryption market" through "commercial relationships", and emphasized the need to "revolutionize" the analysis of its vast data collection to "radically increase operational impact".[256] On November 23, 2013, the Dutch newspaper NRC Handelsblad reported that the Netherlands was targeted by U.S. intelligence agencies in the immediate aftermath of World War II. This period of surveillance lasted from 1946 to 1968, and also included the interception of the communications of other European countries including Belgium, France, West Germany and Norway.[258] The Dutch newspaper also reported that the NSA infected more than 50,000 computer networks worldwide, often covertly, with malicious spy software, sometimes in cooperation with local authorities, designed to steal sensitive information.[40][259] According to the classified documents leaked by Snowden, the Australian Signals Directorate (ASD), formerly known as the Defence Signals Directorate, had offered to share intelligence information it had collected with the other intelligence agencies of the UKUSA Agreement.
Data shared with foreign countries include "bulk, unselected, unminimized metadata" it had collected. The ASD provided such information on the condition that no Australian citizens were targeted. At the time the ASD assessed that "unintentional collection [of metadata of Australian nationals] is not viewed as a significant issue". If a target was later identified as being an Australian national, the ASD was required to be contacted to ensure that a warrant could be sought. Consideration was given as to whether "medical, legal or religious information" would be automatically treated differently from other types of data; however, a decision was made that each agency would make such determinations on a case-by-case basis.[260] The leaked material does not specify where the ASD had collected the intelligence information from; however, Section 7(a) of the Intelligence Services Act 2001 (Commonwealth) states that the ASD's role is "...to obtain intelligence about the capabilities, intentions or activities of people or organizations outside Australia...".[261] As such, it is possible that the ASD's metadata intelligence holdings were focused on foreign intelligence collection and were within the bounds of Australian law. The Washington Post revealed that the NSA has been tracking the locations of mobile phones from all over the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. In the process of doing so, the NSA collects more than five billion records of phone locations on a daily basis.
This enables NSA analysts to map cellphone owners' relationships by correlating their patterns of movement over time with thousands or millions of other phone users who cross their paths.[262][263][264][265] The Washington Post also reported that both GCHQ and the NSA make use of location data and advertising tracking files generated through normal internet browsing (with cookies operated by Google, known as "Pref") to pinpoint targets for government hacking and to bolster surveillance.[266][267][268] The Norwegian Intelligence Service (NIS), which cooperates with the NSA, has gained access to Russian targets in the Kola Peninsula and other civilian targets. In general, the NIS provides information to the NSA about "Politicians", "Energy" and "Armament".[269] A top secret NSA memo lists several years as milestones of the Norway–United States SIGINT agreement, or NORUS Agreement. The NSA considers the NIS to be one of its most reliable partners. Both agencies also cooperate to crack the encryption systems of mutual targets. According to the NSA, Norway has made no objections to its requests from the NIS.[270] On December 5, Sveriges Television reported that the National Defence Radio Establishment (FRA) has been conducting a clandestine surveillance operation in Sweden, targeting the internal politics of Russia.
The operation was conducted on behalf of the NSA, which received data handed over to it by the FRA.[271][272] The Swedish-American surveillance operation also targeted Russian energy interests as well as the Baltic states.[273] As part of the UKUSA Agreement, a secret treaty was signed in 1954 by Sweden with the United States, the United Kingdom, Canada, Australia and New Zealand regarding collaboration and intelligence sharing.[274] As a result of Snowden's disclosures, the notion of Swedish neutrality in international politics was called into question.[275] In an internal document dating from the year 2006, the NSA acknowledged that its "relationship" with Sweden is "protected at the TOP SECRET level because of that nation's political neutrality."[276] Specific details of Sweden's cooperation with members of the UKUSA Agreement were also disclosed. According to documents leaked by Snowden, the Special Source Operations division of the NSA has been sharing information containing "logins, cookies, and GooglePREFID" with the Tailored Access Operations division of the NSA, as well as Britain's GCHQ agency.[284] During the 2010 G-20 Toronto summit, the U.S. embassy in Ottawa was transformed into a security command post during a six-day spying operation that was conducted by the NSA and closely coordinated with the Communications Security Establishment Canada (CSEC). The goal of the spying operation was, among others, to obtain information on international development, banking reform, and to counter trade protectionism to support "U.S.
policy goals."[285] On behalf of the NSA, the CSEC has set up covert spying posts in 20 countries around the world.[8] In Italy the Special Collection Service of the NSA maintains two separate surveillance posts, in Rome and Milan.[286] According to a secret NSA memo dated September 2010, the Italian embassy in Washington, D.C. has been targeted by two NSA spy operations. Due to concerns that terrorist or criminal networks may be secretly communicating via computer games, the NSA, GCHQ, CIA, and FBI have been conducting surveillance and scooping up data from the networks of many online games, including massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft, as well as virtual worlds such as Second Life, and the Xbox gaming console.[287][288][289][290] The NSA has cracked the most commonly used cellphone encryption technology, A5/1. According to a classified document leaked by Snowden, the agency can "process encrypted A5/1" even when it has not acquired an encryption key.[291] In addition, the NSA uses various types of cellphone infrastructure, such as the links between carrier networks, to determine the location of a cellphone user tracked by Visitor Location Registers.[292] U.S. District Court Judge for the District of Columbia Richard Leon declared[293][294][295][296] on December 16, 2013, that the mass collection of metadata of Americans' telephone records by the National Security Agency probably violates the Fourth Amendment prohibition of unreasonable searches and seizures.[297] Leon granted the request for a preliminary injunction that blocks the collection of phone data for two private plaintiffs (Larry Klayman, a conservative lawyer, and Charles Strange, father of a cryptologist killed in Afghanistan when his helicopter was shot down in 2011)[298] and ordered the government to destroy any of their records that have been gathered.
But the judge stayed action on his ruling pending a government appeal, recognizing in his 68-page opinion the "significant national security interests at stake in this case and the novelty of the constitutional issues."[297] However, federal judge William H. Pauley III in New York City ruled[299] that the U.S. government's global telephone data-gathering system is needed to thwart potential terrorist attacks, and that it can only work if everyone's calls are swept in. U.S. District Judge Pauley also ruled that Congress legally set up the program and that it does not violate anyone's constitutional rights. The judge also concluded that the telephone data being swept up by the NSA did not belong to telephone users, but to the telephone companies. He further ruled that when the NSA obtains such data from the telephone companies, and then probes into it to find links between callers and potential terrorists, this further use of the data was not even a search under the Fourth Amendment. He also concluded that the controlling precedent is Smith v. Maryland: "Smith's bedrock holding is that an individual has no legitimate expectation of privacy in information provided to third parties," Judge Pauley wrote.[300][301][302][303] The American Civil Liberties Union declared on January 2, 2014, that it would appeal Judge Pauley's ruling that the NSA's bulk phone record collection is legal.
"The government has a legitimate interest in tracking the associations of suspected terrorists, but tracking those associations does not require the government to subject every citizen to permanent surveillance," deputy ACLU legal director Jameel Jaffer said in a statement.[304] In recent years, American and British intelligence agencies conducted surveillance on more than 1,100 targets, including the office of an Israeli prime minister, heads of international aid organizations, foreign energy companies and a European Union official involved in antitrust battles with American technology businesses.[305] A catalog of high-tech gadgets and software developed by the NSA's Tailored Access Operations (TAO) division was leaked by the German news magazine Der Spiegel.[306] Dating from 2008, the catalog revealed the existence of special gadgets modified to capture computer screenshots, USB flash drives secretly fitted with radio transmitters to broadcast stolen data over the airwaves, and fake base stations intended to intercept mobile phone signals, as well as many other secret devices and software implants. The Tailored Access Operations (TAO) division of the NSA intercepted the shipping deliveries of computers and laptops in order to install spyware and physical implants on electronic gadgets. This was done in close cooperation with the FBI and the CIA.[306][307][308][309] NSA officials responded to the Spiegel reports with a statement, which said: "Tailored Access Operations is a unique national asset that is on the front lines of enabling NSA to defend the nation and its allies.
[TAO's] work is centred on computer network exploitation in support of foreign intelligence collection."[310] In a separate disclosure unrelated to Snowden, the French Trésor public, which runs a certificate authority, was found to have issued fake certificates impersonating Google in order to facilitate spying on French government employees via man-in-the-middle attacks.[311] The NSA is working to build a powerful quantum computer capable of breaking all types of encryption.[314][315][316][317][318] The effort is part of a US$79.7 million research program known as "Penetrating Hard Targets". It involves extensive research carried out in large, shielded rooms known as Faraday cages, which are designed to prevent electromagnetic radiation from entering or leaving.[315] Currently, the NSA is close to producing basic building blocks that will allow the agency to gain "complete quantum control on two semiconductor qubits".[315] Once a quantum computer is successfully built, it would enable the NSA to unlock the encryption that protects data held by banks, credit card companies, retailers, brokerages, governments and health care providers.[314] According to The New York Times, the NSA is monitoring approximately 100,000 computers worldwide with spy software named Quantum. Quantum enables the NSA to conduct surveillance on those computers on the one hand, and can also create a digital highway for launching cyberattacks on the other. Among the targets are the Chinese and Russian military, but also trade institutions within the European Union. The NYT also reported that the NSA can access and alter computers which are not connected to the internet by means of a secret technology in use by the NSA since 2008. The prerequisite is the physical insertion of the radio frequency hardware by a spy, a manufacturer or an unwitting user. The technology relies on a covert channel of radio waves that can be transmitted from tiny circuit boards and USB cards inserted surreptitiously into the computers.
In some cases, the signals are sent to a briefcase-size relay station that intelligence agencies can set up miles away from the target. The technology can also transmit malware back to the infected computer.[40] Channel 4 and The Guardian revealed the existence of Dishfire, a massive database of the NSA that collects hundreds of millions of text messages on a daily basis.[319] GCHQ has been given full access to the database, which it uses to obtain personal information of Britons by exploiting a legal loophole.[320] The database is supplemented with an analytical tool known as the Prefer program, which processes SMS messages to extract other types of information including contacts from missed call alerts.[321] The Privacy and Civil Liberties Oversight Board report on mass surveillance was released on January 23, 2014. It recommends ending the bulk telephone metadata collection program – i.e., the collection of bulk phone records such as phone numbers dialed and call times and durations, but not call content – creating a "Special Advocate" to be involved in some cases before the FISA court judge, and releasing future and past FISC decisions "that involve novel interpretations of FISA or other significant questions of law, technology or compliance."[322][323][324] According to a joint disclosure by The New York Times, The Guardian, and ProPublica,[325][326][327][328] the NSA and GCHQ had begun working together to collect and store data from dozens of smartphone applications by 2007 at the latest. A 2008 GCHQ report leaked by Snowden asserts that "anyone using Google Maps on a smartphone is working in support of a GCHQ system".
The NSA and GCHQ have traded recipes for various purposes such as grabbing location data and journey plans that are made when a target uses Google Maps, and vacuuming up address books, buddy lists, phone logs and geographic data embedded in photos posted on the mobile versions of numerous social networks such as Facebook, Flickr, LinkedIn, Twitter, and other services. In a separate 20-page report dated 2012, GCHQ cited the popular smartphone game "Angry Birds" as an example of how an application could be used to extract user data. Taken together, such forms of data collection would allow the agencies to collect vital information about a user's life, including his or her home country, current location (through geolocation), age, gender, ZIP code, marital status, income, ethnicity, sexual orientation, education level, number of children, etc.[329][330] A GCHQ document dated August 2012 provided details of the Squeaky Dolphin surveillance program, which enables GCHQ to conduct broad, real-time monitoring of various social media features and social media traffic such as YouTube video views, the Like button on Facebook, and Blogspot/Blogger visits without the knowledge or consent of the companies providing those social media features. The agency's "Squeaky Dolphin" program can collect, analyze and utilize YouTube, Facebook and Blogger data in specific situations in real time for analysis purposes. The program also collects the addresses of the billions of videos watched daily as well as some user information for analysis purposes.[331][332][333] During the 2009 United Nations Climate Change Conference in Copenhagen, the NSA and its Five Eyes partners monitored the communications of delegates of numerous countries. This was done to give their own policymakers a negotiating advantage.[334][335] The Communications Security Establishment Canada (CSEC) has been tracking Canadian air passengers via free Wi-Fi services at a major Canadian airport.
Passengers who exited the airport terminal continued to be tracked as they showed up at other Wi-Fi locations across Canada. In a CSEC document dated May 2012, the agency described how it had gained access to two communications systems with over 300,000 users in order to pinpoint a specific imaginary target. The operation was executed on behalf of the NSA as a trial run to test a new technology capable of tracking down "any target that makes occasional forays into other cities/regions." This technology was subsequently shared with Canada's Five Eyes partners – Australia, New Zealand, Britain, and the United States.[336][337][338][339] According to research by Süddeutsche Zeitung and the TV network NDR, the mobile phone of former German chancellor Gerhard Schröder was monitored from 2002 onward, reportedly because of his government's opposition to military intervention in Iraq. The source of the latest information is a document leaked by Edward Snowden. The document, containing information about the National Sigint Requirement List (NSRL), had previously been interpreted as referring only to Angela Merkel's mobile. However, Süddeutsche Zeitung and NDR claim to have confirmation from NSA insiders that the surveillance authorisation pertains not to the individual, but to the political post – which in 2002 was still held by Schröder. According to research by the two media outlets, Schröder was placed as number 388 on the list, which contains the names of persons and institutions to be put under surveillance by the NSA.[340][341][342][343] GCHQ launched a cyber-attack on the activist network "Anonymous", using a denial-of-service (DoS) attack to shut down a chatroom frequented by the network's members and to spy on them. The attack, dubbed Rolling Thunder, was conducted by a GCHQ unit known as the Joint Threat Research Intelligence Group (JTRIG).
The unit successfully uncovered the true identities of several Anonymous members.[344][345][346][347] The NSA's Section 215 bulk telephony metadata program, which seeks to stockpile records on all calls made in the U.S., is collecting less than 30 percent of all Americans' call records because of an inability to keep pace with the explosion in cellphone use, according to The Washington Post. The controversial program permits the NSA, after a warrant is granted by the secret Foreign Intelligence Surveillance Court, to record the numbers, length and location of every call from the participating carriers.[348][349] The Intercept reported that the U.S. government is using primarily NSA surveillance to target people for drone strikes overseas. In its report, The Intercept details the flawed methods used to locate targets for lethal drone strikes, resulting in the deaths of innocent people.[350] According to The Washington Post, NSA analysts and collectors, i.e. NSA personnel who control electronic surveillance equipment, use the NSA's sophisticated surveillance capabilities to track individual targets geographically and in real time, while drones and tactical units aim their weaponry against those targets to take them out.[351] An unnamed US law firm, reported to be Mayer Brown, was targeted by Australia's ASD. According to Snowden's documents, the ASD had offered to hand over these intercepted communications to the NSA. This allowed government authorities to be "able to continue to cover the talks, providing highly useful intelligence for interested US customers".[352][353] NSA and GCHQ documents revealed that the anti-secrecy organization WikiLeaks and other activist groups were targeted for government surveillance and criminal prosecution.
In particular, the IP addresses of visitors to WikiLeaks were collected in real time, and the US government urged its allies to file criminal charges against the founder of WikiLeaks, Julian Assange, due to his organization's publication of the Afghanistan war logs. The WikiLeaks organization was designated as a "malicious foreign actor".[354] Quoting an unnamed NSA official in Germany, Bild am Sonntag reported that while President Obama's order to stop spying on Merkel was being obeyed, the focus had shifted to bugging other leading government and business figures, including Interior Minister Thomas de Maizière, a close confidant of Merkel. Caitlin Hayden, a security adviser to President Obama, was quoted in the newspaper report as saying, "The US has made clear it gathers intelligence in exactly the same way as any other states."[355][356] The Intercept revealed that government agencies are infiltrating online communities and engaging in "false flag operations" to discredit targets, among them people who have nothing to do with terrorism or national security threats. The two main tactics currently used are the injection of all sorts of false material onto the internet in order to destroy the reputation of its targets, and the use of social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable.[357][358][359][360] The Guardian reported that Britain's surveillance agency GCHQ, with aid from the National Security Agency, intercepted and stored the webcam images of millions of internet users not suspected of wrongdoing. The surveillance program, codenamed Optic Nerve, collected still images of Yahoo webcam chats (one image every five minutes) in bulk and saved them to agency databases.
The agency discovered "that a surprising number of people use webcam conversations to show intimate parts of their body to the other person", estimating that between 3% and 11% of the Yahoo webcam imagery harvested by GCHQ contains "undesirable nudity".[361] The NSA has built an infrastructure which enables it to covertly hack into computers on a mass scale by using automated systems that reduce the level of human oversight in the process. The NSA relies on an automated system codenamed TURBINE, which in essence enables the automated management and control of a large network of implants (a form of remotely transmitted malware on selected individual computer devices or in bulk on tens of thousands of devices). As quoted by The Intercept, TURBINE is designed to "allow the current implant network to scale to large size (millions of implants) by creating a system that does automated control implants by groups instead of individually."[362] The NSA has shared many of its files on the use of implants with its counterparts in the so-called Five Eyes surveillance alliance – the United Kingdom, Canada, New Zealand, and Australia. TURBINE and its control over the implants give the NSA a range of collection and attack capabilities. The TURBINE implants are linked to, and rely upon, a large network of clandestine surveillance "sensors" that the NSA has installed at locations across the world, including the agency's headquarters in Maryland and eavesdropping bases used by the agency in Misawa, Japan and Menwith Hill, England. Codenamed TURMOIL, the sensors operate as a sort of high-tech surveillance dragnet, monitoring packets of data as they are sent across the Internet. When TURBINE implants exfiltrate data from infected computer systems, the TURMOIL sensors automatically identify the data and return it to the NSA for analysis. And when targets are communicating, the TURMOIL system can be used to send alerts or "tips" to TURBINE, enabling the initiation of a malware attack.
To identify surveillance targets, the NSA uses a series of data "selectors" as they flow across Internet cables. These selectors can include email addresses, IP addresses, or the unique "cookies" containing a username or other identifying information that are sent to a user's computer by websites such as Google, Facebook, Hotmail, Yahoo, and Twitter; unique Google advertising cookies that track browsing habits; unique encryption key fingerprints that can be traced to a specific user; and computer IDs that are sent across the Internet when a Windows computer crashes or updates.[363][364][365][366] The CIA was accused by U.S. Senate Intelligence Committee Chairwoman Dianne Feinstein of spying on a stand-alone computer network established for the committee in its investigation of allegations of CIA abuse in a George W. Bush-era detention and interrogation program.[367] A voice interception program codenamed MYSTIC began in 2009. Along with RETRO, short for "retrospective retrieval" (RETRO is a voice audio recording buffer that allows retrieval of captured content up to 30 days into the past), the MYSTIC program is capable of recording "100 percent" of a foreign country's telephone calls, enabling the NSA to rewind and review conversations up to 30 days old, along with the related metadata. With the capability to store up to 30 days of recorded conversations, MYSTIC enables the NSA to pull an instant history of a person's movements, associates and plans.[368][369][370][371][372][373] On March 21, Le Monde published slides from an internal presentation of the Communications Security Establishment Canada, which attributed a piece of malicious software to French intelligence.
The CSEC presentation concluded that the list of malware victims matched French intelligence priorities and found French cultural references in the malware's code, including the name Babar, a popular French children's character, and the developer name "Titi".[374] The French telecommunications corporation Orange S.A. shares its call data with the French intelligence agency DGSE, which hands over the intercepted data to GCHQ.[375] The NSA has spied on the Chinese technology company Huawei.[376][377][378] Huawei is a leading manufacturer of smartphones, tablets, mobile phone infrastructure, and WLAN routers, and installs fiber optic cable. According to Der Spiegel, this "kind of technology ... is decisive in the NSA's battle for data supremacy."[379] The NSA, in an operation named "Shotgiant", was able to access Huawei's email archive and the source code for Huawei's communications products.[379] The US government has had longstanding concerns that Huawei may not be independent of the People's Liberation Army and that the Chinese government might use equipment manufactured by Huawei to conduct cyberespionage or cyberwarfare. The goals of the NSA operation were to assess the relationship between Huawei and the PLA, to learn more about the Chinese government's plans, and to use information from Huawei to spy on Huawei's customers, including Iran, Afghanistan, Pakistan, Kenya, and Cuba.
Former Chinese President Hu Jintao, the Chinese Trade Ministry, banks, as well as telecommunications companies were also targeted by the NSA.[376][379] The Intercept published a document of an NSA employee discussing how to build a database of IP addresses, webmail, and Facebook accounts associated with system administrators so that the NSA can gain access to the networks and systems they administer.[380][381] At the end of March 2014, Der Spiegel and The Intercept published, based on a series of classified files from the archive provided to reporters by NSA whistleblower Edward Snowden, articles related to espionage efforts by GCHQ and the NSA in Germany.[382][383] The British GCHQ targeted three German internet firms for information about Internet traffic passing through internet exchange points, important customers of the German internet providers, their technology suppliers, as well as future technical trends in their business sector and company employees.[382][383] On March 7, 2013, the Foreign Intelligence Surveillance Court granted the NSA the authority for blanket surveillance of Germany, its people, and its institutions, regardless of whether those affected are suspected of having committed an offense, and without an individualized court order.[383] In addition, Germany's chancellor Angela Merkel was listed in a surveillance search machine and database named Nymrod, along with 121 other foreign leaders.[382][383] As The Intercept wrote: "The NSA uses the Nymrod system to 'find information relating to targets that would otherwise be tough to track down,' according to internal NSA documents. Nymrod sifts through secret reports based on intercepted communications as well as full transcripts of faxes, phone calls, and communications collected from computer systems.
More than 300 'cites' for Merkel are listed as available in intelligence reports and transcripts for NSA operatives to read."[382] Toward the end of April, Edward Snowden said that the United States surveillance agencies spy on Americans more than on anyone else in the world, contrary to anything that had been said by the government up to that point.[384] An article published by Ars Technica shows NSA's Tailored Access Operations (TAO) employees intercepting a Cisco router.[385] The Intercept and WikiLeaks revealed information about which countries were having their communications collected as part of the MYSTIC surveillance program. On May 19, The Intercept reported that the NSA is recording and archiving nearly every cell phone conversation in the Bahamas with a system called SOMALGET, a subprogram of MYSTIC. The mass surveillance has been occurring without the Bahamian government's permission.[386] Aside from the Bahamas, The Intercept reported NSA interception of cell phone metadata in Kenya, the Philippines, Mexico, and a fifth country it did not name due to "credible concerns that doing so could lead to increased violence." WikiLeaks released a statement on May 23 claiming that Afghanistan was the unnamed nation.[387] In a statement responding to the revelations, the NSA said "the implication that NSA's foreign intelligence collection is arbitrary and unconstrained is false."[386] Through its global surveillance operations the NSA exploits the flood of images included in emails, text messages, social media, videoconferences, and other communications to harvest millions of images.
These images are then used by the NSA in sophisticated facial recognition programs to track suspected terrorists and other intelligence targets.[388] Vodafone revealed that there were secret wires that allowed government agencies direct access to its networks.[389] This access does not require warrants, and the direct access wire is often equipment in a locked room.[389] In six countries where Vodafone operates, the law requires telecommunication companies to install such access or allows governments to do so.[389] Vodafone did not name these countries in case some governments retaliated by imprisoning its staff.[389] Shami Chakrabarti of Liberty said "For governments to access phone calls at the flick of a switch is unprecedented and terrifying. Snowden revealed the internet was already treated as fair game. Bluster that all is well is wearing pretty thin – our analogue laws need a digital overhaul."[389] Vodafone published its first Law Enforcement Disclosure Report on June 6, 2014.[389] Vodafone group privacy officer Stephen Deadman said "These pipes exist, the direct access model exists. We are making a call to end direct access as a means of government agencies obtaining people's communication data. Without an official warrant, there is no external visibility. If we receive a demand we can push back against the agency. The fact that a government has to issue a piece of paper is an important constraint on how powers are used."[389] Gus Hosein, director of Privacy International, said "I never thought the telcos would be so complicit. It's a brave step by Vodafone and hopefully the other telcos will become more brave with disclosure, but what we need is for them to be braver about fighting back against the illegal requests and the laws themselves."[389] Above-top-secret documentation of a covert surveillance program named Overseas Processing Centre 1 (OPC-1) (codenamed "CIRCUIT") by GCHQ was published by The Register.
Based on documents leaked by Edward Snowden, GCHQ taps into undersea fiber optic cables via secret spy bases near the Strait of Hormuz and Yemen. BT and Vodafone are implicated.[390] The Danish newspaper Dagbladet Information and The Intercept revealed on June 19, 2014, the NSA mass surveillance program codenamed RAMPART-A. Under RAMPART-A, 'third party' countries tap into fiber optic cables carrying the majority of the world's electronic communications and secretly allow the NSA to install surveillance equipment on these fiber-optic cables. The foreign partners of the NSA turn massive amounts of data, such as the content of phone calls, faxes, e-mails, internet chats, data from virtual private networks, and calls made using Voice over IP software like Skype, over to the NSA. In return, these partners receive access to the NSA's sophisticated surveillance equipment so that they too can spy on the mass of data that flows in and out of their territory. Among the partners participating in the NSA mass surveillance program are Denmark and Germany.[391][392][393] During the week of July 4, a 31-year-old male employee of Germany's intelligence service BND was arrested on suspicion of spying for the United States. The employee is suspected of spying on the German Parliamentary Committee investigating the NSA spying scandal.[394] Former NSA official and whistleblower William Binney spoke at a Centre for Investigative Journalism conference in London. According to Binney, "at least 80% of all audio calls, not just metadata, are recorded and stored in the US. The NSA lies about what it stores." He also stated that the majority of fiber optic cables run through the U.S., which "is no accident and allows the US to view all communication coming in."[395] The Washington Post released a review of a cache provided by Snowden containing roughly 160,000 text messages and e-mails intercepted by the NSA between 2009 and 2012.
The newspaper concluded that nine out of ten account holders whose conversations were recorded by the agency "were not the intended surveillance targets but were caught in a net the agency had cast for somebody else." In its analysis, The Post also noted that many of the account holders were Americans.[396] On July 9, a soldier working within Germany's Federal Ministry of Defence (BMVg) fell under suspicion of spying for the United States.[397] As a result of the July 4 case and this one, the German government expelled the CIA station chief in Germany on July 17.[398] On July 18, former State Department official John Tye released an editorial in The Washington Post, highlighting concerns over data collection under Executive Order 12333. Tye's concerns are rooted in classified material he had access to through the State Department, though he has not publicly released any classified materials.[399] The Intercept reported that the NSA is "secretly providing data to nearly two dozen U.S. government agencies with a 'Google-like' search engine" called ICREACH. The database, The Intercept reported, is accessible to domestic law enforcement agencies including the FBI and the Drug Enforcement Administration and was built to contain more than 850 billion metadata records about phone calls, emails, cellphone locations, and text messages.[400][401] Based on documents obtained from Snowden, The Intercept reported that the NSA and GCHQ had broken into the internal computer network of Gemalto and stolen the encryption keys used in SIM cards no later than 2010. As of 2015, the company is the world's largest manufacturer of SIM cards, making about two billion cards a year.
With the keys, the intelligence agencies could eavesdrop on cell phones without the knowledge of mobile phone operators or foreign governments.[402] The New Zealand Herald, in partnership with The Intercept, revealed that the New Zealand government used XKeyscore to spy on candidates for the position of World Trade Organization director general[403] and also on members of the Solomon Islands government.[404] In January 2015, the DEA revealed that it had been collecting metadata records for all telephone calls made by Americans to 116 countries linked to drug trafficking. The DEA's program was separate from the telephony metadata programs run by the NSA.[405] In April, USA Today reported that the DEA's data collection program began in 1992 and included all telephone calls between the United States and Canada and Mexico. Current and former DEA officials described the program as the precursor of the NSA's similar programs.[406] The DEA said its program was suspended in September 2013, after a review of the NSA's programs, and that it was "ultimately terminated."[405] Snowden provided journalists at The Intercept with GCHQ documents regarding another secret program, "Karma Police", calling itself "the world's biggest" data mining operation, formed to create profiles on every visible Internet user's browsing habits. By 2009 it had stored over 1.1 trillion web browsing sessions, and by 2012 was recording 50 billion sessions per day.[407] In March 2017, WikiLeaks published more than 8,000 documents on the CIA.
The confidential documents, codenamed Vault 7, dated from 2013 to 2016, included details on the CIA's hacking capabilities, such as the ability to compromise cars, smart TVs,[411] web browsers (including Google Chrome, Microsoft Edge, Firefox, and Opera),[412][413] and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux.[414] WikiLeaks did not name the source, but said that the files had "circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive."[411] The disclosure provided impetus for the creation of social movements against mass surveillance, such as Restore the Fourth, and actions like Stop Watching Us and The Day We Fight Back. On the legal front, the Electronic Frontier Foundation joined a coalition of diverse groups filing suit against the NSA. Several human rights organizations urged the Obama administration not to prosecute, but to protect, "whistleblower Snowden": Amnesty International, Human Rights Watch, Transparency International, and the Index on Censorship, among others.[419][420][421][422] On the economic front, several consumer surveys registered a drop in online shopping and banking activity as a result of the Snowden revelations.[423] However, it is argued that the long-term impact among the general population is negligible, as "the general public has still failed to adopt privacy-enhancing tools en masse."[424] A research study that tracked interest in privacy-related webpages following the incident found that the public's interest declined quickly, despite continuous discussion of the events by the media.[425] Domestically, President Barack Obama claimed that there is "no spying on Americans",[426][427] and White House Press Secretary Jay Carney asserted that the surveillance programs revealed by Snowden had been authorized by Congress.[428] On the international front, U.S.
Attorney General Eric Holder stated that "we cannot target even foreign persons overseas without a valid foreign intelligence purpose."[429] Prime Minister David Cameron warned journalists that "if they don't demonstrate some social responsibility it will be very difficult for government to stand back and not to act."[430] Deputy Prime Minister Nick Clegg emphasized that the media should "absolutely defend the principle of secrecy for the intelligence agencies".[431] Foreign Secretary William Hague claimed that "we take great care to balance individual privacy with our duty to safeguard the public and UK national security."[432] Hague defended the Five Eyes alliance and reiterated that the British–U.S. intelligence relationship must not be endangered because it "saved many lives".[433] Former Prime Minister Tony Abbott stated that "every Australian governmental agency, every Australian official at home and abroad, operates in accordance with the law".[434] Abbott criticized the Australian Broadcasting Corporation as unpatriotic for its reporting on the documents provided by Snowden, whom Abbott described as a "traitor".[435][436] Foreign Minister Julie Bishop also denounced Snowden as a traitor and accused him of "unprecedented" treachery.[437] Bishop defended the Five Eyes alliance and reiterated that the Australian–U.S.
intelligence relationship must not be endangered because it "saves lives".[438] Chinese policymakers became increasingly concerned about the risk of cyberattacks following the disclosures, which demonstrated extensive United States intelligence activities in China.[439]: 129 As part of its response, the Communist Party in 2014 formed the Cybersecurity and Information Leading Group.[439]: 129 In July 2013, Chancellor Angela Merkel defended the surveillance practices of the NSA, and described the United States as "our truest ally throughout the decades".[440][441] After the NSA's surveillance of Merkel was revealed, however, the Chancellor compared the NSA with the Stasi.[442] According to The Guardian, Berlin is using the controversy over NSA spying as leverage to enter the exclusive Five Eyes alliance.[443] Interior Minister Hans-Peter Friedrich stated that "the Americans take our data privacy concerns seriously."[444] Testifying before the German Parliament, Friedrich defended the NSA's surveillance, and cited five terrorist plots on German soil that were prevented because of the NSA.[445] However, in April 2014, another German interior minister criticized the United States for failing to provide sufficient assurances to Germany that it had reined in its spying tactics. Thomas de Maizière, a close ally of Merkel, told Der Spiegel: "U.S.
intelligence methods may be justified to a large extent by security needs, but the tactics are excessive and over-the-top."[446] Minister for Foreign Affairs Carl Bildt defended the FRA and described its surveillance practices as a "national necessity".[447] Minister for Defence Karin Enström said that Sweden's intelligence exchange with other countries is "critical for our security" and that "intelligence operations occur within a framework with clear legislation, strict controls and under parliamentary oversight."[448][449] Interior Minister Ronald Plasterk apologized for incorrectly claiming that the NSA had collected 1.8 million records of metadata in the Netherlands. Plasterk acknowledged that it was in fact Dutch intelligence services who collected the records and transferred them to the NSA.[450][451] The Danish Prime Minister Helle Thorning-Schmidt praised the American intelligence agencies, claiming they have prevented terrorist attacks in Denmark, and expressed her personal belief that the Danish people "should be grateful" for the Americans' surveillance.[452] She later claimed that the Danish authorities have no basis for assuming that American intelligence agencies have performed illegal spying activities toward Denmark or Danish interests.[453] In July 2013, the German government announced an extensive review of German intelligence services.[454][455] In August 2013, the U.S. government announced an extensive review of U.S. intelligence services.[456][457] In October 2013, the British government announced an extensive review of British intelligence services.[458] In December 2013, the Canadian government announced an extensive review of Canadian intelligence services.[459] In January 2014, U.S.
President Barack Obama said that "the sensational way in which these disclosures have come out has often shed more heat than light"[19] and critics such as Sean Wilentz claimed that "the NSA has acted far more responsibly than the claims made by the leakers and publicized by the press." In Wilentz's view, "The leakers have gone far beyond justifiably blowing the whistle on abusive programs. In addition to their alarmism about [U.S.] domestic surveillance, many of the Snowden documents released thus far have had nothing whatsoever to do with domestic surveillance."[20] Edward Lucas, former Moscow bureau chief for The Economist, agreed, asserting that "Snowden's revelations neatly and suspiciously fits the interests of one country: Russia" and citing Masha Gessen's statement that "The Russian propaganda machine has not gotten this much mileage out of a US citizen since Angela Davis's murder trial in 1971."[460] Bob Cesca objected to The New York Times failing to redact the name of an NSA employee and the specific location where an al Qaeda group was being targeted in a series of slides the paper made publicly available.[461] Russian journalist Andrei Soldatov argued that Snowden's revelations had had negative consequences for internet freedom in Russia, as Russian authorities increased their own surveillance and regulation on the pretext of protecting the privacy of Russian users.
Snowden's name was invoked by Russian legislators who supported measures forcing platforms such as Google, Facebook, Twitter, Gmail, and YouTube to locate their servers on Russian soil or install SORM black boxes on their servers so that Russian authorities could control them.[462] Soldatov also contended that, as a result of the disclosures, international support had grown for having national governments take over the powers of the organizations involved in coordinating the Internet's global architectures, which could lead to a Balkanization of the Internet that restricted free access to information.[463] The Montevideo Statement on the Future of Internet Cooperation, issued in October 2013 by ICANN and other organizations, warned against "Internet fragmentation at a national level" and expressed "strong concern over the undermining of the trust and confidence of Internet users globally due to recent revelations".[464] In late 2014, Freedom House said "[s]ome states are using the revelations of widespread surveillance by the U.S. National Security Agency (NSA) as an excuse to augment their own monitoring capabilities, frequently with little or no oversight, and often aimed at the political opposition and human rights activists."[465]
https://en.wikipedia.org/wiki/2013_global_surveillance_disclosures
In criminology, the broken windows theory states that visible signs of crime, antisocial behavior, and civil disorder create an urban environment that encourages further crime and disorder, including serious crimes.[1] The theory suggests that policing methods that target minor crimes, such as vandalism, loitering, public drinking, and fare evasion, help to create an atmosphere of order and lawfulness. The theory was introduced in a 1982 article by conservative think tank social scientists James Q. Wilson and George L. Kelling.[1] It was popularized in the 1990s by New York City police commissioner William Bratton, whose policing policies were influenced by the theory. The theory became subject to debate both within the social sciences and the public sphere. Broken windows policing has been enforced with controversial police practices, such as the heavy use of stop-and-frisk in New York City in the decade up to 2013. James Q. Wilson and George L. Kelling first introduced the broken windows theory in an article titled "Broken Windows", in the March 1982 issue of The Atlantic Monthly: Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in rundown ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one un-repaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)[1] The article received a great deal of attention and was very widely cited. A 1996 criminology and urban sociology book, Fixing Broken Windows: Restoring Order and Reducing Crime in Our Communities by George L. Kelling and Catharine Coles, is based on the article but develops the argument in greater detail.
It discusses the theory in relation to crime and strategies to contain or eliminate crime from urban neighborhoods.[2] A successful strategy for preventing vandalism, according to the book's authors, is to address the problems while they are small. Repair the broken windows within a short time, say, a day or a week, and the tendency is that vandals are much less likely to break more windows or do further damage. Clean up the sidewalk every day, and the tendency is for litter not to accumulate (or for the rate of littering to be much lower). Problems are less likely to escalate, and thus respectable residents do not flee the neighborhood. Oscar Newman introduced defensible space theory in his 1972 book Defensible Space. He argued that although police work is crucial to crime prevention, police authority is not enough to maintain a safe and crime-free city; people in the community help with crime prevention. Newman proposed that people care for and protect spaces they feel invested in, arguing that an area is ultimately safer if people feel a sense of ownership and responsibility towards it. Broken windows and vandalism are still prevalent because communities simply do not care about the damage: regardless of how many times the windows are repaired, the community still must invest some of its time to keep the area safe. Residents' negligence of broken-window-type decay signifies a lack of concern for the community. Newman says this is a clear sign that the society has accepted this disorder—allowing the unrepaired windows to display vulnerability and lack of defense.[3] Malcolm Gladwell also relates this theory to the reality of New York City in his book The Tipping Point.[4] Thus, the theory makes a few major claims: that improving the quality of the neighborhood environment reduces petty crime, anti-social behavior, and low-level disorder, and that major crime is also prevented as a result.
Criticism of the theory has tended to focus on the latter claim.[5] The reason the state of the urban environment may affect crime rests on three factors: social norms and conformity; the presence or lack of routine monitoring; and social signaling and signal crime. In an anonymous urban environment, with few or no other people around, social norms and monitoring are not clearly known. Thus, individuals look for signals within the environment as to the social norms in the setting and the risk of getting caught violating those norms; one of those signals is the area's general appearance. Under the broken windows theory, an ordered and clean environment, one that is maintained, sends the signal that the area is monitored and that criminal behavior is not tolerated. Conversely, a disordered environment, one that is not maintained (broken windows, graffiti, excessive litter), sends the signal that the area is not monitored and that criminal behavior carries little risk of detection. The theory assumes that the landscape "communicates" to people. A broken window transmits to criminals the message that a community displays a lack of informal social control and so is unable or unwilling to defend itself against a criminal invasion. It is not so much the actual broken window that is important, but the message the broken window sends to people. It symbolizes the community's defenselessness and vulnerability and represents the lack of cohesiveness of the people within. Neighborhoods with a strong sense of cohesion fix broken windows and assert social responsibility on themselves, effectively giving themselves control over their space. The theory emphasizes the built environment, but must also consider human behavior.[6] Under the impression that a broken window left unfixed leads to more serious problems, residents begin to change the way they see their community.
In an attempt to stay safe, a cohesive community starts to fall apart, as individuals start to spend less time in communal space to avoid potential violent attacks by strangers.[1] The slow deterioration of a community as a result of broken windows modifies the way people behave with regard to their communal space, which, in turn, breaks down community controls. As rowdy teenagers, panhandlers, addicts, and prostitutes slowly make their way into a community, it signals that the community cannot assert informal social control, and citizens become afraid that worse things will happen. As a result, if the problems persist, they spend less time in the streets to avoid these people and feel less and less connected to their community. At times, residents tolerate "broken windows" because they feel they belong in the community and "know their place". Problems arise, however, when outsiders begin to disrupt the community's cultural fabric. That is the difference between "regulars" and "strangers" in a community: the way "regulars" act represents the culture within, but strangers are "outsiders" who do not belong.[6] Consequently, daily activities considered "normal" for residents become uncomfortable, as the culture of the community takes on a different feel from the way it once was. With regard to social geography, the broken windows theory is a way of explaining people and their interactions with space. The culture of a community can deteriorate and change over time, with the influence of unwanted people and behaviors changing the landscape. The theory can be seen as people shaping space, as the civility and attitude of the community create spaces used for specific purposes by residents. On the other hand, it can also be seen as space shaping people, with elements of the environment influencing and restricting day-to-day decision making.
However, with policing efforts to remove unwanted disorderly people who put fear in the public's eyes, the argument would seem to favor "people shaping space", as public policies are enacted and help to determine how one is supposed to behave. All spaces have their own codes of conduct, and what is considered right and normal will vary from place to place. The concept also takes into consideration spatial exclusion and social division, as certain people behaving in a given way are considered disruptive and therefore unwanted. It excludes people from certain spaces because their behavior does not fit the class level of the community and its surroundings. A community has its own standards and communicates a strong message to criminals, through social control, that their neighborhood does not tolerate their behavior. If, however, a community is unable to ward off would-be criminals on its own, policing efforts help. By removing unwanted people from the streets, the residents feel safer and have a higher regard for those who protect them. People of less civility who try to make a mark in the community are removed, according to the theory.[6] Many claim that informal social control can be an effective strategy to reduce unruly behavior. Garland (2001) writes that "community policing measures in the realization that informal social control exercised through everyday relationships and institutions is more effective than legal sanctions."[7] Informal social control methods have demonstrated a "get tough" attitude by proactive citizens and express a sense that disorderly conduct is not tolerated. According to Wilson and Kelling, there are two types of groups involved in maintaining order: 'community watchmen' and 'vigilantes'.[1] The United States has in many ways adopted the policing strategies of old European times, when informal social control was the norm, which gave rise to contemporary formal policing.
In earlier times, because there were no legal sanctions to follow, informal policing was primarily 'objective' driven, as stated by Wilson and Kelling (1982). Wilcox et al. (2004) argue that improper land use can cause disorder, and that the larger the public land is, the more susceptible it is to criminal deviance.[8] Therefore, nonresidential spaces, such as businesses, may assume the responsibility of informal social control "in the form of surveillance, communication, supervision, and intervention".[9] It is expected that more strangers occupying the public land creates a higher chance for disorder. Jane Jacobs can be considered one of the original pioneers of this perspective on broken windows. Much of her book, The Death and Life of Great American Cities, focuses on residents' and nonresidents' contributions to maintaining order on the street, and explains how local businesses, institutions, and convenience stores provide a sense of having "eyes on the street".[10] On the contrary, many residents feel that regulating disorder is not their responsibility. Wilson and Kelling found that studies done by psychologists suggest people often refuse to go to the aid of someone seeking help, not due to a lack of concern or selfishness "but the absence of some plausible grounds for feeling that one must personally accept responsibility".[1] On the other hand, others plainly refuse to put themselves in harm's way, depending on how grave they perceive the nuisance to be; a 2004 study observed that "most research on disorder is based on individual level perceptions decoupled from a systematic concern with the disorder-generating environment."[11] Essentially, everyone perceives disorder differently and can gauge the seriousness of a crime based on those perceptions.
However, Wilson and Kelling feel that although community involvement can make a difference, "the police are plainly the key to order maintenance."[1] Ranasinghe argues that the concept of fear is a crucial element of broken windows theory, because it is the foundation of the theory.[12] She also adds that public disorder is "... unequivocally constructed as problematic because it is a source of fear".[13] Fear is elevated as the perception of disorder rises, creating a social pattern that tears the social fabric of a community and leaves the residents feeling hopeless and disconnected. Wilson and Kelling hint at the idea but do not focus on its central importance. They indicate that fear was a product of incivility, not crime, and that people avoid one another in response to fear, weakening controls.[1] Hinkle and Weisburd found that police interventions to combat minor offenses, as per the broken windows model, "significantly increased the probability of feeling unsafe," suggesting that such interventions might offset any benefits of broken windows policing in terms of fear reduction.[14] Broken windows policing is sometimes described as a "zero tolerance" policing style,[15] including in some academic studies.[16] Bratton and Kelling have said that broken windows policing and zero tolerance are different, and that minor offenders should receive lenient punishment.[17] In an earlier publication of The Atlantic, released in March 1982, Wilson wrote an article indicating that police efforts had gradually shifted from maintaining order to fighting crime.[1] This indicated that order maintenance had become a thing of the past, and it seemed to have been put on the back burner.
The shift was attributed to the rise of the social urban riots of the 1960s, and "social scientists began to explore carefully the order maintenance function of the police, and to suggest ways of improving it—not to make streets safer (its original function) but to reduce the incidence of mass violence".[1] Other criminologists note similar disconnections; for example, Garland argues that throughout the early and mid 20th century, police in American cities strived to keep away from the neighborhoods under their jurisdiction.[7] This is a possible indicator of the out-of-control social riots that were prevalent at that time.[citation needed] Still, many would agree that reducing crime and violence begins with maintaining social control/order.[18] Jane Jacobs' The Death and Life of Great American Cities is discussed in detail by Ranasinghe, along with its importance to the early workings of broken windows; she claims that Kelling's original interest in "minor offences and disorderly behaviour and conditions" was inspired by Jacobs' work.[19] Ranasinghe adds that Jacobs' approach toward social disorganization was centralized on the "streets and their sidewalks, the main public places of a city" and that they "are its most vital organs, because they provide the principal visual scenes".[20] Wilson and Kelling, as well as Jacobs, address the concept of civility (or the lack thereof) and how it creates lasting distortions between crime and disorder. Ranasinghe explains that the common framework of both sets of authors is to narrate the problem facing urban public places. Jacobs, according to Ranasinghe, maintains that "Civility functions as a means of informal social control, subject little to institutionalized norms and processes, such as the law" 'but rather maintained through an' "intricate, almost unconscious, network of voluntary controls and standards among people...
and enforced by the people themselves".[21] Before the introduction of this theory by Wilson and Kelling, Philip Zimbardo, a Stanford psychologist, arranged an experiment testing the broken-window theory in 1969. Zimbardo arranged for an automobile with no license plates and the hood up to be parked idle in a Bronx neighbourhood, and for a second automobile, in the same condition, to be set up in Palo Alto, California. The car in the Bronx was attacked within minutes of its abandonment. Zimbardo noted that the first "vandals" to arrive were a family—a father, mother, and a young son—who removed the radiator and battery. Within twenty-four hours of its abandonment, everything of value had been stripped from the vehicle. After that, the car's windows were smashed in, parts torn, upholstery ripped, and children were using the car as a playground. At the same time, the vehicle sitting idle in Palo Alto sat untouched for more than a week, until Zimbardo himself went up to the vehicle and deliberately smashed it with a sledgehammer. Soon after, people joined in the destruction, although criticism has been levelled at this claim, as the destruction occurred after the car was moved to the campus of Stanford University and Zimbardo's own students were the first to join him. Zimbardo observed that the majority of the adult "vandals" in both cases were well dressed, Caucasian, clean-cut, and seemingly respectable individuals. It is believed that, in a neighborhood such as the Bronx, where the history of abandoned property and theft is more prevalent, vandalism occurs much more quickly, as the community generally seems apathetic. Similar events can occur in any civilized community when communal barriers—the sense of mutual regard and obligations of civility—are lowered by actions that suggest apathy.[1][22] In 1985, the New York City Transit Authority hired George L.
Kelling, the author of Broken Windows, as a consultant.[23] Kelling was later hired as a consultant to the Boston and the Los Angeles police departments. One of Kelling's adherents, David L. Gunn, implemented policies and procedures based on the Broken Windows Theory during his tenure as President of the New York City Transit Authority. One of his major efforts was to lead a campaign from 1984 to 1990 to rid graffiti from New York's subway system. In 1990, William J. Bratton became head of the New York City Transit Police. Bratton was influenced by Kelling, describing him as his "intellectual mentor". In his role, he implemented a tougher stance on fare evasion, faster arrestee processing methods, and background checks on all those arrested. After being elected Mayor of New York City in 1993, as a Republican, Rudy Giuliani hired Bratton as his police commissioner to implement similar policies and practices throughout the city. Giuliani heavily subscribed to Kelling and Wilson's theories. Such policies emphasized addressing crimes that negatively affect quality of life. In particular, Bratton directed the police to more strictly enforce laws against subway fare evasion, public drinking, public urination, and graffiti. Bratton also revived the New York City Cabaret Law, a previously dormant Prohibition-era ban on dancing in unlicensed establishments. Throughout the late 1990s, the NYPD shut down many of the city's acclaimed night spots for illegal dancing. According to a 2001 study of crime trends in New York City by Kelling and William Sousa, rates of both petty and serious crime fell significantly after the aforementioned policies were implemented. Furthermore, crime continued to decline for the following ten years.
Such declines suggested that policies based on the Broken Windows Theory were effective.[24] Later, in 2016, Brian Jordan Jefferson used the precedent of Kelling and Sousa's study to conduct fieldwork in the 70th precinct of New York City, in which it was corroborated that crime mitigation in the area concerned "quality of life" issues, including noise complaints and loitering.[25] The falling crime rates throughout New York City built a mutual relationship between residents and law enforcement in vigilance of disorderly conduct.[citation needed] However, other studies do not find a cause-and-effect relationship between the adoption of such policies and decreases in crime.[5][26] The decrease may have been part of a broader trend across the United States. The rates of most crimes, including all categories of violent crime, made consecutive declines from their peak in 1990, under Giuliani's predecessor, David Dinkins. Other cities also experienced less crime, even though they had different police policies. Other factors, such as the 39% drop in New York City's unemployment rate between 1992 and 1999,[27] could also explain the decrease reported by Kelling and Sousa.[27] A 2017 study found that when the New York Police Department (NYPD) stopped aggressively enforcing minor legal statutes in late 2014 and early 2015, civilian complaints of three major crimes (burglary, felony assault, and grand larceny) decreased (slightly, with large error bars) during and shortly after sharp reductions in proactive policing. There was no statistically significant effect on other major crimes such as murder, rape, robbery, or grand theft auto. These results are touted as challenging prevailing scholarship, as well as conventional wisdom on authority and legal compliance, by implying that aggressively enforcing minor legal statutes incites more severe criminal acts.[28] Albuquerque, New Mexico, instituted the Safe Streets Program in the late 1990s based on the Broken Windows Theory.
The Safe Streets Program sought to deter and reduce unsafe driving and the incidence of crime by saturating areas with high crime and crash rates with law enforcement officers. Operating under the theory that American Westerners use roadways in much the same way that American Easterners use subways, the developers of the program reasoned that lawlessness on the roadways had much the same effect as it did on the New York City Subway. Effects of the program were reviewed by the US National Highway Traffic Safety Administration (NHTSA) and published in a case study.[29] The methodology behind the program demonstrates the use of deterrence theory in preventing crime.[30] In 2005, Harvard University and Suffolk University researchers worked with local police to identify 34 "crime hot spots" in Lowell, Massachusetts. In half of the spots, authorities cleared trash, fixed streetlights, enforced building codes, discouraged loiterers, made more misdemeanor arrests, and expanded mental health services and aid for the homeless. In the other half of the identified locations, there was no change to routine police service. The areas that received additional attention experienced a 20% reduction in calls to the police. The study concluded that cleaning up the physical environment was more effective than misdemeanor arrests.[31][32] In 2007 and 2008, Kees Keizer and colleagues from the University of Groningen conducted a series of controlled experiments to determine whether the presence of existing visible disorder (such as litter or graffiti) increased other crime such as theft, littering, or other antisocial behavior. They selected several urban locations, which they arranged in two different ways, at different times. In each experiment, there was a "disorder" condition, in which violations of social norms as prescribed by signage or national custom, such as graffiti and littering, were clearly visible, as well as a control condition where no violations of norms had taken place.
The researchers then secretly monitored the locations to observe whether people behaved differently when the environment was "disordered". Their observations supported the theory. The conclusion was published in the journal Science: "One example of disorder, like graffiti or littering, can indeed encourage another, like stealing."[33][34] An 18-month study by Carlos Vilalta in Mexico City showed that the relationship posited by the Broken Windows Theory between neighborhood conditions and homicide in suburban neighborhoods was not a direct correlation, but rather a "concentrated disadvantage" operating through the perception of fear and modes of crime prevention.[35] In areas with more social disorder (such as public intoxication), an increased tendency of law-abiding citizens to feel unsafe amplified the impact of homicide occurring in the neighborhood. It was also found that crime prevention was more effective against instances of violent crime among people living in areas with less physical structural decay (such as graffiti), lending credence to the Broken Windows Theory's premise that law enforcement is trusted more among those in areas with less disorder. Furthering this data, a 2023 study conducted by Ricardo Massa on residency near clandestine dumpsites associated economic disenfranchisement with high physical disorder.[36] Neighborhoods with high concentrations of landfill waste were correlated with crimes such as vehicle theft and robbery, and most significantly with crimes related to property. In a space where property damage and neglect are normalized, a person's response to this type of environment can also be greatly affected by their perception of their surroundings. It was also concluded that non-residents of these high-concentration areas tended to fear and avoid these locations, as there was typically less surveillance and a lack of community efficacy surrounding clandestine dumpsites.
However, despite this fear, Massa also notes that, in this case, crimes against individual targets (such as homicide or rape) were unlikely compared to the vandalism of public and private property. Other side effects of better monitoring and cleaned-up streets may well be desired by governments or housing agencies and the population of a neighborhood: broken windows can count as an indicator of low real estate value and may deter investors. Real estate professionals may benefit from adopting the Broken Windows Theory, because if the number of minor transgressions in a specific area is monitored, there is likely to be a reduction in major transgressions as well. This may actually increase or decrease the value of a house or apartment, depending on the area.[37] Fixing windows is, therefore, also a step of real estate development, which may lead, whether it is desired or not, to gentrification. By reducing the number of broken windows in the community, the inner cities would appear attractive to consumers with more capital. Eliminating danger in spaces that are notorious for criminal activity, such as downtown New York City and Chicago, would draw in investment from consumers, increase the city's economic status, and provide a safe and pleasant image for present and future inhabitants.[26] In education, the broken windows theory is used to promote order in classrooms and school cultures. The belief is that students are signaled by disorder or rule-breaking and that they in turn imitate the disorder. Several school movements encourage strict paternalistic practices to enforce student discipline. Such practices include language codes (governing slang, curse words, or speaking out of turn), classroom etiquette (sitting up straight, tracking the speaker), personal dress (uniforms, little or no jewelry), and behavioral codes (walking in lines, specified bathroom times). From 2004 to 2006, Stephen B.
Plank and colleagues from Johns Hopkins University conducted a correlational study to determine the degree to which the physical appearance of the school and classroom setting influences student behavior, particularly with respect to the variables concerned in their study: fear, social disorder, and collective efficacy.[38] They collected survey data administered to 6th–8th grade students at 33 public schools in a large mid-Atlantic city. From analyses of the survey data, the researchers determined that the variables in their study are statistically significantly related to the physical conditions of the school and classroom setting. The conclusion, published in the American Journal of Education, was: ...the findings of the current study suggest that educators and researchers should be vigilant about factors that influence student perceptions of climate and safety. Fixing broken windows and attending to the physical appearance of a school cannot alone guarantee productive teaching and learning, but ignoring them likely greatly increases the chances of a troubling downward spiral.[38] A 2015 meta-analysis of broken windows policing implementations found that disorder policing strategies, such as "hot spots policing" or problem-oriented policing, result in "consistent crime reduction effects across a variety of violent, property, drug, and disorder outcome measures".[39] As a caveat, the authors noted that "aggressive order maintenance strategies that target individual disorderly behaviors do not generate significant crime reductions," pointing specifically to zero tolerance policing models that target singular behaviors such as public intoxication and remove disorderly individuals from the street via arrest.
The authors recommend that police develop "community co-production" policing strategies instead of merely committing to increasing misdemeanor arrests.[39] Several studies have argued that many of the apparent successes of broken windows policing (such as in New York City in the 1990s) were the result of other factors.[40] They claim that the "broken windows theory" conflates correlation with causality, reasoning prone to fallacy. David Thacher, assistant professor of public policy and urban planning at the University of Michigan, stated in a 2004 paper:[40] [S]ocial science has not been kind to the broken windows theory. A number of scholars reanalyzed the initial studies that appeared to support it.... Others pressed forward with new, more sophisticated studies of the relationship between disorder and crime. The most prominent among them concluded that the relationship between disorder and serious crime is modest, and even that relationship is largely an artifact of more fundamental social forces. C. R. Sridhar, in his article in the Economic and Political Weekly, also challenges the theory behind broken windows policing and the idea that the policies of William Bratton and the New York Police Department were the cause of the decrease of crime rates in New York City.[16] The policy targeted people in areas with a significant amount of physical disorder, and there appeared to be a causal relationship between the adoption of broken windows policing and the decrease in crime rate. Sridhar, however, discusses other trends (such as New York City's economic boom in the late 1990s) that created a "perfect storm" contributing to the decrease of the crime rate much more significantly than the application of the broken windows policy. Sridhar also compares this decrease in crime rate with that of other major cities that adopted various other policies, and determines that the broken windows policy is not as effective.
In a 2007 study called "Reefer Madness" in the journal Criminology and Public Policy, Harcourt and Ludwig found further evidence confirming that mean reversion fully explained the changes in crime rates in the different precincts of New York in the 1990s.[41] Further alternative explanations that have been put forward include the waning of the crack epidemic,[42] unrelated growth in the prison population due to the Rockefeller drug laws,[42] and that the number of males aged 16 to 24 was dropping regardless of the shape of the US population pyramid.[43] It has also been argued that rates of major crimes dropped in many other US cities during the 1990s, both those that had adopted broken windows policing and those that had not.[44] It is thought that this is due to the exposure of children to environmental lead, which leads to loss of impulse control and, when they reach young adulthood, criminal acts. There appears to be a correlation, with a 25-year lag, between the addition and removal of lead from paint and gasoline and the corresponding rises and falls in murder arrests.[45][46] In his book, Baltimore criminologist Ralph B. Taylor argues that fixing windows is only a partial and short-term solution. His data supports a materialist view: changes in physical decay, superficial social disorder, and racial composition do not lead to higher crime, but economic decline does. He contends that the example shows that real, long-term reductions in crime require that urban politicians, businesses, and community leaders work together to improve the economic fortunes of residents in high-crime areas.[47] In 2015, Northeastern University assistant professor Daniel T. O'Brien criticised the broken windows model. Using his Big Data-based research model, he argues that the broken windows model fails to capture the origins of crime in a neighbourhood. He concludes that crime comes from the social dynamics of communities and private spaces and spills into public spaces.[48] According to a study by Robert J.
Sampson and Stephen Raudenbush, the premise on which the theory operates, that social disorder and crime are connected as part of a causal chain, is faulty. They argue that a third factor, collective efficacy, "defined as cohesion among residents combined with shared expectations for the social control of public space," is the cause of the varying crime rates observed in an altered neighborhood environment. They also argue that the relationship between public disorder and crime rate is weak.[49] In the winter 2006 edition of the University of Chicago Law Review, Bernard Harcourt and Jens Ludwig looked at the later Department of Housing and Urban Development program that rehoused inner-city project tenants in New York into more orderly neighborhoods.[26] The broken windows theory would suggest that these tenants would commit less crime once moved, because of the more stable conditions on the streets. However, Harcourt and Ludwig found that the tenants continued to commit crimes at the same rate. Another tack was taken by a 2010 study questioning the theory's legitimacy concerning the subjectivity of disorder as perceived by persons living in neighborhoods. It concentrated on whether citizens view disorder as separate from crime or identical to it. The study noted that crime cannot be the result of disorder if the two are identical, agreed that disorder provided evidence of "convergent validity", and concluded that broken windows theory misinterprets the relationship between disorder and crime.[50] Broken windows policing has sometimes become associated with zealotry, which has led critics to suggest that it encourages discriminatory behaviour.
Some campaigns, such as Black Lives Matter, have called for an end to broken windows policing.[51] In 2016, a Department of Justice report argued that it had led the Baltimore Police Department to discriminate against and alienate minority groups.[52] A central argument is that the term disorder is vague, and that giving the police broad discretion to decide what disorder is will lead to discrimination. In Dorothy Roberts's article, "Foreword: Race, Vagueness, and the Social Meaning of Order Maintenance and Policing", she says that the broken windows theory in practice leads to the criminalization of communities of color, who are typically disenfranchised.[53] She underscores the dangers of vaguely written ordinances that allow law enforcers to determine who engages in disorderly acts, which, in turn, produces a racially skewed outcome in crime statistics.[54] Similarly, Gary Stewart wrote, "The central drawback of the approaches advanced by Wilson, Kelling, and Kennedy rests in their shared blindness to the potentially harmful impact of broad police discretion on minority communities."[55] According to Stewart, arguments for low-level police intervention, including the broken windows hypothesis, often act "as cover for racist behavior".[55] The theory has also been criticized for its unsound methodology and its manipulation of racialized tropes. Specifically, Bench Ansfield has shown that in their 1982 article, Wilson and Kelling cited only one source to prove their central contention that disorder leads to crime: the Philip Zimbardo vandalism study (see Precursor Experiments above).[56] But Wilson and Kelling misrepresented Zimbardo's procedure and conclusions, dispensing with Zimbardo's critique of inequality and community anonymity in favor of the oversimplified claim that one broken window gives rise to "a thousand broken windows".
Ansfield argues that Wilson and Kelling used the image of the crisis-ridden 1970s Bronx to stoke fears that "all cities would go the way of the Bronx if they didn't embrace their new regime of policing."[57] Wilson and Kelling manipulated the Zimbardo experiment to avail themselves of the racialized symbolism found in the broken windows of the Bronx.[56] Robert J. Sampson argues that, based on common misconceptions among the masses, it is implied that those who commit disorder and crime have a clear tie to groups suffering from financial instability and may be of minority status: "The use of racial context to encode disorder does not necessarily mean that people are racially prejudiced in the sense of personal hostility." He notes that residents make a clear implication of who they believe is causing the disruption, which has been termed implicit bias.[58] He further states that research conducted on implicit bias and the stereotyping of cultures suggests that community members hold unrelenting beliefs about African Americans and other disadvantaged minority groups, associating them with crime, violence, disorder, welfare, and undesirability as neighbors.[58] A later study indicated that this contradicted Wilson and Kelling's proposition that disorder is an exogenous construct that has independent effects on how people feel about their neighborhoods.[50] In response, Kelling and Bratton have argued that broken windows policing does not discriminate against law-abiding communities of minority groups if implemented properly.[17] They cited Disorder and Decline: Crime and the Spiral of Decay in American Neighborhoods,[59] a study by Wesley Skogan at Northwestern University. The study, which surveyed 13,000 residents of large cities, concluded that different ethnic groups have similar ideas as to what they would consider to be "disorder". Minority groups have nonetheless tended to be targeted at higher rates by the Broken Windows style of policing.
Broken Windows policies have been utilized more heavily in minority neighborhoods where low income, poor infrastructure, and social disorder were widespread, causing minority groups to perceive that they were being racially profiled under Broken Windows policing.[23][60] A common criticism of broken windows policing is the argument that it criminalizes the poor and homeless. That is because the physical signs that characterize a neighborhood with the "disorder" that broken windows policing targets correlate with the socio-economic conditions of its inhabitants. Many acts that are legal but considered "disorderly" are targeted when conducted in public settings but not when they are conducted in private. Therefore, those without access to a private space are frequently criminalized. Critics, such as Robert J. Sampson and Stephen Raudenbush of Harvard University, see the application of the broken windows theory in policing as a war against the poor, as opposed to a war against more serious crimes.[61] Since minority groups in most cities are more likely to be poorer than the rest of the population, a bias against the poor would be linked to a racial bias.[53] According to Bruce D. Johnson, Andrew Golub, and James McCabe, applying the broken windows theory in policing and policymaking can result in development projects that decrease physical disorder but promote undesired gentrification. Often, when a city is "improved" in this way, the development of an area can cause the cost of living to rise higher than residents can afford, which forces low-income people out of the area. As the space changes, the middle and upper classes, often white, begin to move into the area, resulting in the gentrification of urban, poor areas.
The residents are affected negatively by such an application of the broken windows theory and end up evicted from their homes, as if their presence indirectly contributed to the area's problem of "physical disorder".[53] In More Guns, Less Crime (2000), economist John Lott, Jr. examined the use of the broken windows approach as well as community- and problem-oriented policing programs in cities of over 10,000 in population, over two decades. He found that the impacts of these policing policies were inconsistent across different types of crime. Lott's book has been subject to criticism, while other groups support Lott's conclusions. In the 2005 book Freakonomics, coauthors Steven D. Levitt and Stephen J. Dubner confirm and question the notion that the broken windows theory was responsible for New York's drop in crime, saying "the pool of potential criminals had dramatically shrunk". Levitt had, in the Quarterly Journal of Economics, attributed that possibility to the legalization of abortion with Roe v. Wade, which correlated with a decrease, one generation later, in the number of delinquents in the population at large.[62] In his 2012 book Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society, Jim Manzi writes that of the randomized field trials conducted in criminology, only nuisance abatement per broken windows theory has been successfully replicated.[63][64]
https://en.wikipedia.org/wiki/Broken_windows_theory
Closed-circuit television (CCTV), also known as video surveillance,[1][2] is the use of closed-circuit television cameras to transmit a signal to a specific place on a limited set of monitors. It differs from broadcast television in that the signal is not openly transmitted, though it may employ point-to-point, point-to-multipoint (P2MP), or mesh wired or wireless links. Even though almost all video cameras fit this definition, the term is most often applied to those used for surveillance in areas that require additional security or ongoing monitoring (videotelephony is seldom called "CCTV"[3][4]). The deployment of this technology has facilitated significant growth in state surveillance, a substantial rise in the methods of advanced social monitoring and control, and a host of crime prevention measures throughout the world.[5] Though surveillance of the public using CCTV is common in many areas around the world, video surveillance has generated significant debate about balancing its use with individuals' right to privacy even when in public.[6][7][8] In industrial plants, CCTV equipment may be used to observe parts of a process from a central control room, especially if the environments observed are dangerous or inaccessible to humans. CCTV systems may operate continuously or only as required to monitor a particular event. A more advanced form of CCTV, using digital video recorders (DVRs), provides recording for possibly many years, with a variety of quality and performance options and extra features (such as motion detection and email alerts). More recently, decentralized IP cameras, perhaps equipped with megapixel sensors, support recording directly to network-attached storage devices or to internal flash for stand-alone operation.
An early mechanical CCTV system was developed in June 1927 by Russian physicist Leon Theremin.[9] Originally requested by the Soviet Council of Labor and Defense, the system consisted of a manually operated scanning-transmitting camera and a wireless shortwave transmitter and receiver, with a resolution of a hundred lines. Having been commandeered by Kliment Voroshilov, Theremin's CCTV system was demonstrated to Joseph Stalin, Semyon Budyonny, and Sergo Ordzhonikidze, and subsequently installed in the courtyard of the Moscow Kremlin to monitor approaching visitors.[9] Another early CCTV system was installed by Siemens AG at Test Stand VII in Peenemünde, Nazi Germany, in 1942, for observing the launch of V-2 rockets.[10] In the United States, the first commercial closed-circuit television system became available in 1949 from Remington Rand; designed by CBS Laboratories, it was called "Vericon".[11] Vericon was advertised as not requiring a government permit, due to the system using cabled connections between camera and monitor rather than over-the-air transmission.[12] The earliest video surveillance systems involved constant monitoring because there was no way to record and store information. The development of reel-to-reel media enabled the recording of surveillance footage. These systems required magnetic tapes to be changed manually, with the operator having to thread the tape from the tape reel through the recorder onto a take-up reel. Due to these shortcomings, video surveillance was not widespread.[13] Later, videocassette recorder technology became available in the 1970s, making it easier to record and erase information, and the use of video surveillance became more common.[13] During the 1990s, digital multiplexing was developed, allowing several cameras to record at once, as well as time-lapse and motion-only recording.
This saved time and money, which led to an increase in the use of CCTV.[14] Recently, CCTV technology has been shifting towards Internet-based products and systems and other technological developments.[15]

Early CCTV systems were installed in central London by the Metropolitan Police between 1960 and 1965.[16] By 1963, CCTV was being used in Munich to monitor traffic.[17] Closed-circuit television was used as a form of pay-per-view theatre television for sports such as professional boxing and professional wrestling, and from 1964 through 1970, the Indianapolis 500 automobile race. Boxing telecasts were broadcast live to a select number of venues, mostly theaters (with arenas, stadiums, schools, and convention centres used less often), where viewers paid for tickets to watch the fight live.[18][19] The first fight with a closed-circuit telecast was Joe Louis vs. Joe Walcott in 1948.[20]

Closed-circuit telecasts peaked in popularity with Muhammad Ali in the 1960s and 1970s,[18][19] with "The Rumble in the Jungle" fight drawing 50 million CCTV viewers worldwide in 1974,[21] and the "Thrilla in Manila" drawing 100 million CCTV viewers worldwide in 1975.[22] In 1985, the WrestleMania I professional wrestling show was seen by over one million viewers with this scheme.[23] As late as 1996, the Julio César Chávez vs. Oscar De La Hoya boxing fight had 750,000 viewers.[24] Although closed-circuit television was gradually replaced by pay-per-view home cable television in the 1980s and 1990s, it is still in use today for most awards shows and other events that are transmitted live to most venues but do not air as such on network television, only later being re-edited for broadcast.[19]

In September 1968, Olean, New York, was the first city in the United States to install CCTV video cameras along its main business street in an effort to fight crime.[25] Marie Van Brittan Brown received a patent for the design of a CCTV-based home security system in 1969 (U.S. patent 3,482,037).
Another early appearance was in 1973 in Times Square in New York City.[26] The NYPD installed it to deter crime in the area; however, crime rates did not appear to drop much due to the cameras.[26] Nevertheless, during the 1980s, video surveillance began to spread across the country, specifically targeting public areas.[14] It was seen as a cheaper way to deter crime compared to increasing the size of police departments.[26] Some businesses as well, especially those that were prone to theft, began to use video surveillance.[26] From the mid-1990s on, police departments across the country installed an increasing number of cameras in various public spaces, including housing projects, schools, and public parks.[26] CCTV later became common in banks and stores to discourage theft by recording evidence of criminal activity. In 1997, 3,100 CCTV systems were installed in public housing and residential areas in New York City.[27]

Experiments in the UK during the 1970s and 1980s, including outdoor CCTV in Bournemouth in 1985, led to several larger trial programs later that decade.
The first use by local government was in King's Lynn, Norfolk, in 1987.[28]

A 2008 report by UK police chiefs concluded that only 3% of crimes were solved by CCTV.[29] In London, a Metropolitan Police report showed that in 2008 only one crime was solved per 1,000 cameras.[30] In some cases CCTV cameras have themselves become a target of attacks.[31] A 2009 systematic review by researchers from Northeastern University and the University of Cambridge used meta-analytic techniques to pool the average effect of CCTV on crime across 41 different studies.[32] The studies included in the meta-analysis used quasi-experimental evaluation designs that involved before-and-after measures of crime in experimental and control areas.[32] However, researchers have argued that the British car park studies included in the meta-analysis cannot accurately control for the fact that CCTV was introduced simultaneously with a range of other security-related measures.[33] Second, some have noted that, in many of the studies, there may be issues with selection bias, since the introduction of CCTV was potentially endogenous to previous crime trends.[34] In particular, the estimated effects may be biased if CCTV is introduced in response to crime trends.[35]

In 2012, cities such as Manchester in the UK were using DVR-based technology to improve accessibility for crime prevention.[36] In 2013, the City of Philadelphia auditor found that the city's $15 million camera system was operational only 32% of the time.[37] There is anecdotal evidence that CCTV aids in the detection and conviction of offenders; for example, UK police forces routinely seek CCTV recordings after crimes.[38] Cameras have also been installed on public transport in the hope of deterring crime.[39][40]

A 2017 review published in the Journal of Scandinavian Studies in Criminology and Crime Prevention compiled seven studies that use such research designs. The studies found that CCTV reduced crime by 24–28% in public streets and urban subway stations.
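The meta-analytic pooling described above combines per-study effect sizes, weighted by the inverse of their sampling variances. A minimal fixed-effect sketch of that idea follows; the numbers are made up for illustration and are not the actual data from the 41 studies, and the function name is hypothetical.

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling.

    effects:   per-study effect sizes (e.g. log odds ratios,
               negative = crime reduction)
    variances: per-study sampling variances
    Returns (pooled effect, standard error of the pooled effect).
    """
    weights = [1.0 / v for v in variances]            # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical log odds ratios from three before-and-after studies.
effects = [-0.30, -0.10, -0.25]
variances = [0.04, 0.02, 0.05]
pooled, se = pooled_effect(effects, variances)
print(round(pooled, 3), round(se, 3))  # -0.184 0.103
```

More precise studies (smaller variance) pull the pooled estimate toward themselves, which is why a few large studies can dominate such a review.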
It also found that CCTV could decrease unruly behaviour in football stadiums and theft in supermarkets and mass-merchant stores. However, there was no evidence of CCTV having desirable effects in parking facilities or suburban subway stations. Furthermore, the review indicates that CCTV is more effective in preventing property crimes than violent crimes.[41] However, a 2019 systematic review covering 40 years of studies reported that the most consistent crime-reduction effects of CCTV were in car parks.[42]

A more open question is whether most CCTV is cost-effective. While low-quality domestic kits are cheap, the professional installation and maintenance of high-definition CCTV is expensive.[43] Gill and Spriggs did a cost-effectiveness analysis (CEA) of CCTV in crime prevention that showed little monetary saving with the installation of CCTV, as most of the crimes prevented resulted in little monetary loss.[44] Critics, however, noted that benefits of non-monetary value cannot be captured in a traditional cost-effectiveness analysis and were omitted from their study.[44]

In October 2009, an "Internet Eyes" website was announced which would pay members of the public to view CCTV camera images from their homes and report any crimes they witnessed. The site aimed to add "more eyes" to cameras which might be insufficiently monitored. Civil liberties campaigners criticized the idea as "a distasteful and a worrying development".[45] Russia has also implemented a video surveillance system called 'Safe City', which has the capability to recognize facial features and moving objects, sending the data automatically to government authorities. However, the widespread tracking of individuals through video surveillance has raised significant privacy issues.[46]

Material collected by surveillance cameras has been used as a tool in post-event forensics to identify the tactics and perpetrators of terrorist attacks.
Furthermore, various projects, such as INDECT, aim to detect suspicious behaviours of individuals and crowds.[47] It has been argued that terrorists will not be deterred by cameras, that terror attacks are not really the subject of the current use of video surveillance, and that terrorists might even see cameras as an extra channel for propaganda and publication of their acts.[48][49] In Germany, calls for extended video surveillance by the country's main political parties, the SPD, CDU, and CSU, have been dismissed as "little more than a placebo for a subjective feeling of security" by a member of the Left party.[50]

In Singapore, since 2012, thousands of CCTV cameras have helped deter loan sharks, nab litterbugs, and stop illegal parking, according to government figures.[51] In 2013, Oaxaca, Mexico, hired deaf police officers to lip-read conversations to uncover criminal conspiracies.[52]

In recent years, body-worn video cameras have been introduced for a number of uses. For example, as a new form of surveillance in law enforcement, cameras are worn by police officers, usually on the chest or head.[53][54] According to the Bureau of Justice Statistics (BJS), in the United States in 2016, about 47% of the 15,328 general-purpose law enforcement agencies had acquired body-worn cameras.[55]

Many cities and motorway networks have extensive traffic-monitoring systems. Many of these cameras, however, are owned by private companies and transmit data to drivers' GPS systems.

Highways England has a publicly owned CCTV network of over 3,000 pan–tilt–zoom cameras covering the British motorway and trunk road network. These cameras are primarily used to monitor traffic conditions and are not used as speed cameras.
With the addition of fixed cameras for the active traffic management system, the number of cameras on the Highways England CCTV network is likely to increase significantly over the next few years.[56] The London congestion charge is enforced by cameras positioned at the boundaries of and inside the congestion charge zone, which automatically read the number plates of vehicles that enter the zone. If the driver does not pay the charge, a fine is imposed.[57] Similar systems are being developed as a means of locating cars reported stolen.[58] Other surveillance cameras serve as traffic enforcement cameras.[59]

In Mecca, Saudi Arabia, CCTV cameras are used for monitoring (and thus managing) the flow of crowds.[60] In the Philippines, barangay San Antonio used CCTV cameras and artificial intelligence software to detect the formation of crowds during an outbreak of a disease. Security personnel were sent whenever a crowd formed at a particular location in the city.[61][62]

In the United States, Britain, Canada,[63] Australia,[64] and New Zealand, CCTV is widely used in schools to prevent bullying and vandalism, monitor visitors, and maintain a record of evidence of crime. There are some restrictions: cameras are not typically installed in areas where there is a "reasonable expectation of privacy", such as bathrooms, gym locker areas, and private offices. Cameras are generally acceptable in parking lots, cafeterias, and supply rooms, though some teachers object to their installation.[65] A study of high school students in Israeli schools shows that students' views on CCTV used in school are based on how they think of their teachers, school, and authorities.[66] It also found that most students do not want CCTV installed inside a classroom.[66]

Many homeowners choose to install CCTV systems inside or outside their own homes, sometimes both. Modern CCTV systems can be monitored through mobile phone apps with internet coverage.
Some systems also provide motion detection, so that when movement is detected, an alert can be sent to a phone.[67]

On a driver-only operated train, CCTV cameras may allow the driver to confirm that people are clear of doors before closing them and starting the train.[68] A trial by RET in 2011 with facial recognition cameras mounted on trams ensured that people who were banned from them did not sneak on anyway.[69] CCTV has also frequently been operated in many department stores and shopping malls to mitigate concerns of potential theft. In some countries, malls must obtain approval from the Ministry of Interior (MOI)[70] or Information Commissioner's Office (ICO) before installing CCTV.[71] Some organizations also use CCTV to monitor the actions of workers in a workplace.[72]

Many sporting events in the United States use CCTV inside the venue, either to display on the stadium or arena's scoreboard or in the concourse or restroom areas to allow people to view action outside the seating bowl. The cameras send the feed to a central control centre, where a producer selects feeds to send to the television monitors that people can view. In a trial with CCTV cameras, football club fans no longer needed to identify themselves manually, but could pass freely after being authorized by the facial recognition system.[73]

Criminals may use surveillance cameras to monitor the public. For example, a hidden camera at an ATM can capture people's PINs as they are entered, without their knowledge. The devices are small enough not to be noticed, and are placed where they can monitor the keypad of the machine as people enter their PINs. Images may be transmitted wirelessly to the criminal.
Even lawful surveillance cameras sometimes have their data received by people who have no legal right to receive it.[74]

About 65% of CCTV cameras in the world are installed in Asia.[76] In Asia, various human activities have attracted the use of surveillance camera systems and services, including but not limited to business and related industries,[77] transportation,[78] sports,[79] and care for the environment.[80]

In 2018, China was reported to have over 170 million CCTV cameras.[81] In 2023, China was estimated to have a huge surveillance network of around 540–626 million surveillance cameras, though numbers differ significantly between sources.[82][83] Beijing, China's capital city, has the most cameras of any city overall, with a total of 1.15 million installed.[84] The cameras are used to record details such as gender, age, and ethnicity. Cameras have been used in a southern Chinese city to issue tickets to people for infractions.[85] In India, the cities of Hyderabad and Delhi, the capital, have around 900,000 and 450,000 cameras, respectively.[83] The city of Chennai has the highest density of CCTV cameras per area worldwide, with 657 cameras per square kilometer in 2020 (from 280,000 CCTVs).
China and India have some of the highest densities and largest total numbers of CCTV cameras in their cities.[84]

South Korea's military has removed over 1,300 Chinese-made surveillance cameras from its bases for security reasons.[86] In Hong Kong, the police have stated that they plan to install up to 7,000 surveillance cameras across the territory in roughly three years' time, up from the estimated 600 cameras installed in 2024; this amounts to roughly 2,000 planned cameras per year starting from 2025.[87] Earlier, in June 2024, there were also vague plans to integrate the cameras with facial recognition artificial intelligence.[88][89] The plan has been criticized for its potential to bring Hong Kong closer to the "intense surveillance of mainland China".[90] In Japan, Nikkei Business estimated the total number of security cameras at approximately 5 million in 2018.[91] In Singapore, the total number of CCTV cameras was estimated at around 90,000 in 2021.[92]

In 2009, there were an estimated 15,000 CCTV systems in Chicago, many linked to an integrated camera network.[93][94][95] New York City's Domain Awareness System has 6,000 video surveillance cameras linked together,[96] there are over 4,000 cameras on the subway system (although nearly half of them do not work),[97] and two-thirds of large apartment and commercial buildings use video surveillance cameras.[98][99] In Washington, D.C., there are more than 30,000 surveillance cameras in schools,[100] and the Metro has nearly 6,000 cameras in use across the system.[101]

There were an estimated 30 million surveillance cameras in the United States in 2011.[102] Video surveillance has been common in the United States since the 1990s; for example, one manufacturer reported net earnings of $120 million in 1995.[103] With lower cost and easier installation, sales of home security cameras increased in the early 21st century.
Following the September 11 attacks, the use of video surveillance in public places became more common to deter future terrorist attacks.[26] Under the Homeland Security Grant Program, government grants are available for cities to install surveillance camera networks.[104][105][106] In 2018, there were approximately 70 million surveillance cameras in the United States.[107]

In Canada, Project SCRAM is a policing effort by the Halton Regional Police Service to register home security systems and help consumers understand privacy and safety issues related to their installation. The project has not been extended to commercial businesses.[108]

In Latin America, the CCTV market is growing rapidly with the increase in property crime.[109] In Brazil, CCTV usage is only permitted in public areas, and individuals must be informed about the presence of the camera according to the Brazilian LGPD (which broadly aligns with the EU's GDPR),[110] the Brazilian Civil Code,[111] and the Brazilian Association of Technical Standards. However, starting in 2023, Brazil's Smart Sampa project, which plans to deploy 20,000 facial recognition cameras by 2024, has been criticized for its potential to be "biased against Black individuals" and for overall risks to data privacy.[112]

In 2017, Moscow's network included 160,000 CCTV cameras, covering 95 percent of the city's residential buildings; over 3,500 of these cameras were connected to the General Centre for Data Storage and Processing.[113] Video recordings are used to solve 70 percent of offenses and crimes.[114] In 2024, there were over 1 million video surveillance cameras in Russia,[115] about 230,000 of them in use in Moscow alone.[116] According to data from the Russian Minister for Digital Development, Maksut Shadayev, one in three of all CCTV cameras in Russia were connected to a facial recognition system.
A leaked document revealed that the president of Russia, Vladimir Putin, called on the Russian security services to fund "a massive AI-based surveillance apparatus". Spending of over US$115 million was planned for the system in 2024–2026.[117]

In the United Kingdom, the vast majority of CCTV cameras are operated not by government bodies but by private individuals or companies, especially to monitor the interiors of shops and businesses. According to Freedom of Information Act 2000 requests, the total number of local government-operated CCTV cameras was around 52,000 over the entirety of the UK.[118]

An article published in CCTV Image magazine estimated the number of private and local government-operated cameras in the United Kingdom at 1.85 million in 2011. The estimate was based on extrapolating from a comprehensive survey of public and private cameras within the Cheshire Constabulary's jurisdiction. This works out to an average of one camera for every 32 people in the UK, although the density of cameras varies greatly from place to place. The Cheshire report also claims that the average person on a typical day would be seen by 70 CCTV cameras.[119]

The Cheshire figure is regarded as more dependable than a previous study by Michael McCahill and Clive Norris of UrbanEye, published in 2002.[119][120] Based on a small sample in Putney High Street, McCahill and Norris extrapolated the number of surveillance cameras in Greater London to be around 500,000 and the total number of cameras in the UK to be around 4.2 million. According to their estimate, the UK has one camera for every 14 people. Although it has been acknowledged for several years that the methodology behind this figure is flawed,[121] it has been widely quoted.
Furthermore, the figure of 500,000 for Greater London is often confused with the figure for police and local government-operated cameras in the City of London, which was about 650 in 2011.[118]

The CCTV User Group estimated that there were around 1.5 million private and local government CCTV cameras in city centres, stations, airports, and major retail areas in the UK.[122] Research conducted by the Scottish Centre for Crime and Justice Research, based on a survey of all Scottish local authorities, identified over 2,200 public-space CCTV cameras in Scotland.[123] The UK has often been cited as having among the most CCTV cameras of any country in Europe.[124][125]

In South Africa, due to the high crime rate, CCTV surveillance is widely prevalent. The first IP camera was released in 1996 by Axis Communications, but IP cameras did not arrive in South Africa until 2008.[126] To regulate the number of suppliers, the Private Security Industry Regulation Act was passed in 2001, requiring all security companies to be registered with the Private Security Industry Regulatory Authority (PSIRA).[127] In Egypt, the capital city of Cairo has approximately 47,000 cameras,[128] while the New Administrative Capital had more than 6,000 surveillance cameras in 2023.[129] In South Sudan, the Ministry of Interior has reinstated the operation of CCTV surveillance cameras in Juba after the cameras had been inactive for over four years;[130] South Sudan also launched a drone security system in Juba in 2024.[131]

Proponents of CCTV cameras argue that cameras are effective at deterring and solving crime, and that appropriate regulation and legal restrictions on surveillance of public spaces can provide sufficient protections so that an individual's right to privacy can reasonably be weighed against the benefits of surveillance.[132] However, anti-surveillance activists have held that there is a right to privacy in public areas, and that the development of CCTV in public areas, linked to databases of
people's pictures and identity, presents a breach of civil liberties and the loss of anonymity in public places.[133]

Furthermore, some scholars have argued that situations wherein a person's rights can be justifiably compromised are so rare as not to sufficiently warrant the frequent compromising of public privacy rights that occurs in regions with widespread CCTV surveillance. For example, in her book Setting the Watch: Privacy and the Ethics of CCTV Surveillance, Beatrice von Silva-Tarouca Larsen argues that CCTV surveillance is ethically permissible only in "certain restrictively defined situations", such as when a specific location has a "comprehensively documented and significant criminal threat".[134]

In the United States, the Constitution does not explicitly include the right to privacy, although the Supreme Court has said several of the amendments to the Constitution implicitly grant this right.[135] Access to video surveillance recordings may require a judge's writ, which is readily available.[136] However, there is little legislation and regulation specific to video surveillance.[137][138] In Canada, the use of video surveillance has grown very rapidly. In Ontario, both the municipal and provincial versions of the Freedom of Information and Protection of Privacy Act outline guidelines that control how images and information can be gathered by this method and/or released.[139]

All countries in the European Union are signatories to the European Convention on Human Rights, which protects individual rights, including the right to privacy. The General Data Protection Regulation (GDPR) requires that footage be retained only for as long as necessary for the purpose for which it was collected. In Sweden, the use of CCTV in public spaces is regulated both nationally and via the GDPR.
In an opinion poll commissioned by Lund University in August 2017, the general public of Sweden was asked to choose one measure that would ensure their need for privacy when subject to CCTV operation in public spaces: 43% favored regulation in the form of clear routines for managing, storing, and distributing image material generated from surveillance cameras; 39% favored clear signage informing that camera surveillance in public spaces is present; 10% favored restrictive policies for issuing permits for surveillance cameras in public spaces; 6% were unsure; and 2% favored permits restricting the use of surveillance cameras during certain times.[140]

In an updated opinion poll commissioned by Lund University in December 2019, the general public of Sweden was asked to share their attitudes toward the use of surveillance cameras (CCTV) in public spaces. A significant majority, 88%, expressed a positive view (45% very positive and 43% quite positive), while only 11% held negative views and 1% were unsure. Participants were also asked whether they believed surveillance cameras in various environments violated their personal privacy. A majority rejected the notion that such surveillance violated their privacy at national border crossings (82%), in city centers (77%), parks and green spaces (74%), large public events (80%), and healthcare units (68%). Somewhat less rejection was observed for surveillance in residential areas, where 67% rejected the notion that it violated their privacy. When asked about the perceived use of automatic facial recognition in surveillance cameras in Sweden, 9% believed it was used quite a lot, 55% believed it was not used much, 21% believed it was not used at all, and 15% were unsure.
Regarding privacy risks, 55% of respondents believed the greatest risk came from commercial documentation of individuals (e.g., data collection tracking online consumer behavior), followed by 20% who pointed to other members of the public documenting them (e.g., photography or audio recording), and 11% who saw the greatest risk in public sector data collection (e.g., by law enforcement or healthcare providers); 15% were unsure. When asked to whom they would turn to report a privacy breach related to public camera surveillance, 35% said the Swedish National Police, 6% mentioned the Swedish Data Protection Authority, and 39% did not know where to turn.[141]

In the United Kingdom, the Data Protection Act 1998 imposes legal restrictions on the uses of CCTV recordings and mandates the registration of CCTV systems with the Data Protection Agency. In 2004, the successor to the Data Protection Agency, the Information Commissioner's Office, clarified that this required registration of all CCTV systems with the Commissioner and prompt deletion of archived recordings. However, subsequent case law (Durant vs. FSA) limited the scope of the protection provided by this law, and not all CCTV systems are currently regulated.[142]

A 2007 report by the UK Information Commissioner's Office highlighted the need for the public to be made more aware of the growing use of surveillance and its potential impact on civil liberties.[143][144] In the same year, a campaign group claimed that the majority of CCTV cameras in the UK are operated illegally or are in breach of privacy guidelines.[145] In response, the Information Commissioner's Office rebutted the claim and added that any reported abuses of the Data Protection Act are swiftly investigated.[145] Despite privacy concerns arising from the use of CCTV,[146] commercial establishments continue to install CCTV systems in the UK.
In 2012, the UK government enacted the Protection of Freedoms Act, which includes several provisions related to controlling the storage and use of information about individuals. Under this Act, the Home Office published a code of practice in 2013 for the use of surveillance cameras by government and local authorities. The code states that "surveillance by consent should be regarded as analogous to policing by consent."[147]

In the Philippines, the main laws governing CCTV usage are the Data Privacy Act of 2012 and the Cybercrime Prevention Act of 2012. The Data Privacy Act of 2012 (Republic Act No. 10173) is the primary law that governs data privacy in the Philippines. The Act mandates that the privacy of individuals must be respected and protected, and it applies to CCTV cameras because they collect and process personal data. The Cybercrime Prevention Act of 2012 (Republic Act No. 10175) includes provisions that apply to CCTV usage: under the Act, unauthorized access to, interception of, or interference with data is a criminal offense, which means that unauthorized access to CCTV footage could potentially be considered a cybercrime.[148][149][150]

Computer-controlled cameras can identify, track, and categorize objects in their field of view.[151] Video content analysis (VCA), also referred to as video analytics, is the capability of automatically analyzing video to detect and determine temporal events based not on a single image but on object classification.[152] Advanced VCA applications can measure object speed. Some video analytics applications can be used to apply rules to designated areas. These rules can relate to access control; for example, they can describe which objects can enter a specific area.[153] There are different approaches to implementing VCA technology.
Data may be processed on the camera itself (edge processing) or by a centralized server.[154] Artificial intelligence-powered CCTV cameras have also been tested to detect congestion,[155] serve as a facial recognition system, and predict signs of criminal activity.[156]

There is a cost in the retention of the images produced by CCTV systems. The amount and quality of data stored on storage media depend on compression ratios, images stored per second, and image size, and are affected by the retention period of the videos or images.[157] DVRs store images in a variety of proprietary file formats. CCTV security cameras can store the images on a local hard disk drive, an SD card, or in the cloud. Recordings may be retained for a preset amount of time and then automatically archived, overwritten, or deleted, the period being determined by the organisation that generated them.

A growing branch of CCTV is Internet protocol cameras (IP cameras). It is estimated that 2014 was the first year that IP cameras outsold analog cameras.[158] IP cameras use the Internet Protocol (IP) used by most local area networks (LANs) to transmit video across data networks in digital form. IP can optionally be transmitted across the public internet, allowing users to view their cameras remotely on a computer or phone via an internet connection.[159] IP cameras are considered part of the Internet of things (IoT) and have many of the same benefits and security risks as other IP-enabled devices.[160] Smart doorbells are one example of a type of CCTV that uses IP to send alerts.

Main types of IP cameras include fixed cameras, pan–tilt–zoom (PTZ) cameras, and multi-sensor cameras.[161] Fixed cameras' resolution typically does not exceed 20 megapixels. The main feature of a PTZ camera is its remote directional and optical zoom capability. With multi-sensor cameras, wider areas can be monitored. Industrial video surveillance systems use network video recorders to support IP cameras.
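The storage factors named above (image size, images stored per second, number of cameras, and retention period) can be combined into a back-of-the-envelope capacity estimate. The sketch below uses hypothetical parameter names and example values, and assumes constant frame size and continuous recording (no motion-gated recording):

```python
def storage_gib(frame_kib, fps, cameras, retention_days):
    """Rough CCTV storage estimate.

    frame_kib:      average size of one compressed frame, in KiB
    fps:            images stored per second, per camera
    cameras:        number of cameras recording
    retention_days: how long recordings are kept before overwrite
    Returns required capacity in GiB. Simplified model: constant
    frame size, continuous recording, no audio or index overhead.
    """
    seconds = retention_days * 24 * 60 * 60
    total_kib = frame_kib * fps * cameras * seconds
    return total_kib / (1024 * 1024)  # KiB -> GiB

# Example: 8 cameras, 30 KiB frames, 12.5 fps, 30-day retention.
print(round(storage_gib(30, 12.5, 8, 30)))  # 7416 (GiB, ~7.2 TiB)
```

Halving the stored frame rate or the retention period halves the estimate, which is why motion-only recording and aggressive compression matter so much in practice.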
These devices are responsible for recording, storage, video stream processing, and alarm management. Since 2008, IP video surveillance manufacturers have been able to use a standardized network interface (ONVIF) to support compatibility between systems.[162] For professional or public infrastructure security applications, IP video is restricted to within a private network or VPN.[163]

The city of Chicago operates a networked video surveillance system which combines CCTV video feeds of government agencies with those of the private sector, installed in city buses, businesses, public schools, subway stations, housing projects, and elsewhere.[164] Even homeowners are able to contribute footage. The system is estimated to incorporate the video feeds of a total of 15,000 cameras.[165] The system is used by Chicago's Office of Emergency Management in case of an emergency call: it detects the caller's location and instantly displays the real-time video feed of the nearest security camera to the operator, without requiring any user intervention. While the system is far too vast to allow complete real-time monitoring, it stores the video data for use as evidence in criminal cases.[166]

Many consumers are turning to wireless security cameras for home surveillance. Wireless cameras do not require a video cable for video/audio transmission, only a cable for power. Wireless cameras are also easy and inexpensive to install.[167] Previous generations of wireless security cameras relied on analogue technology; modern wireless cameras use digital technology, with usually more secure and interference-free signals.[168] Wireless mesh networks have been used for connection with the other radios in the same group.[169] There are also cameras using solar power.
Wireless IP cameras can become clients on the WLAN, and they can be configured with encryption and authentication protocols for their connection to an access point.[169]

In Wiltshire, United Kingdom, in 2003, a pilot scheme for what is now known as "Talking CCTV" was put into action, allowing operators of CCTV cameras to communicate through the camera via a speaker when needed. In 2005, Ray Mallon, the mayor and former senior police officer of Middlesbrough, implemented "Talking CCTV" in his area.[170] Other towns have had such cameras installed. In 2007, several of the devices were installed in Bridlington town centre, East Riding of Yorkshire.[171]

In December 2016, a form of anti-CCTV and facial-recognition sunglasses called "Reflectacles" was invented by Scott Urban, a craftsman based in Chicago.[172] They reflect infrared and, optionally, visible light, which makes the user's face a white blur to cameras. The project passed its funding goal of $28,000, and Reflectacles became commercially available in June 2017.[173]
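The retention-cost factors discussed earlier (image size, frames per second, compression ratio, retention period) lend themselves to a back-of-envelope storage calculation. A minimal sketch, with purely illustrative numbers (the camera count and bitrate below are assumptions, not figures from the article):

```python
def storage_required_gb(cameras: int, bitrate_mbps: float, retention_days: int) -> float:
    """Raw storage needed to retain continuous footage for the given period.

    bitrate_mbps already folds in resolution, frame rate, and the codec's
    compression ratio, which is how surveillance storage is usually sized.
    """
    seconds = retention_days * 24 * 3600
    bits = cameras * bitrate_mbps * 1_000_000 * seconds
    return bits / 8 / 1e9  # bits -> bytes -> gigabytes

# e.g. 8 cameras at 4 Mbit/s each, retained for 30 days
print(round(storage_required_gb(8, 4.0, 30)))  # roughly 10368 GB
```

Doubling the retention period or the per-camera bitrate doubles the storage bill, which is why compression ratio and retention policy dominate the cost of a CCTV installation.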
https://en.wikipedia.org/wiki/Closed-circuit_television
COVID-19 surveillance involves monitoring the spread of the coronavirus disease in order to establish the patterns of disease progression. The World Health Organization (WHO) recommends active surveillance, with a focus on case finding, testing and contact tracing in all transmission scenarios.[1] COVID-19 surveillance is expected to monitor epidemiological trends, rapidly detect new cases, and, based on this information, provide epidemiological information to conduct risk assessment and guide disease preparedness.[1]

Syndromic surveillance is based on the symptoms of an individual that correspond to COVID-19. As of March 2020, the WHO recommends the following case definitions:[1]

The WHO recommends reporting probable and confirmed cases of COVID-19 infection within 48 hours of identification.[1] Countries should report on a case-by-case basis as far as possible but, in case of limitations in resources, aggregate weekly reporting is also possible.[2] Some organizations have created crowdsourced apps for syndromic surveillance, where people can report their symptoms to help researchers map areas with a concentration of COVID-19 symptoms.[3]

The Centre for Evidence-Based Medicine (CEBM) compared case definitions from the WHO, the European Centre for Disease Prevention and Control (ECDC), the US Centers for Disease Control and Prevention (CDC), China, Public Health England, and Italy, and found that while the definition of suspected cases relies on clinical criteria, these are generally replaced by a single PCR test result when it comes to confirmatory diagnosis, and that "there is no guidance providing details on the specific RNA sequences required by testing, a threshold for the test result and the need for confirmatory testing."
They note that currently "any person meeting the laboratory criteria is a confirmed case", although per the CDC's Introduction to Epidemiology, a case definition should be "a set of standard criteria for classifying whether a person has a certain disease, syndrome, or other health condition". They urge that PCR test positivity counts include "a standardized threshold level of detection, and at a minimum, the recording of the presence or absence of symptoms."[4]

Virological surveillance is done using molecular tests for COVID-19.[5] The WHO has published resources for laboratories on how to perform testing for COVID-19.[5] In the European Union, laboratory-confirmed cases of COVID-19 are reported within 24 hours of identification.[6] Several countries conduct virological surveillance on wastewater to test for the presence or prevalence of COVID-19 in the population residing in a wastewater catchment.[7]

At least 24 countries have established digital surveillance of their citizens.[8] The digital surveillance technologies include COVID-19 apps, location data and electronic tags.[8] The Centers for Disease Control and Prevention in the US tracks the travel information of individuals using airline passenger data.[9][10]

Tracking wristbands can take the place of smartphone apps for users who either do not own a smartphone or own one that cannot support Bluetooth Low Energy functionality. In the UK, as of 2020 more than ten percent of smartphones lack this functionality. In addition, in South Korea, people found to be breaking quarantine are issued tracking wristbands designed to alert authorities if the band is removed.[11] At least one jurisdiction in the U.S. has used existing ankle bracelet technology to enforce quarantine on patients found to be in violation.[12] In Hong Kong, authorities require a bracelet and an app for all travellers.
A GPS app is used to track the locations of individuals in South Korea to guard against quarantine breaches, sending alerts to the user and to authorities if people leave designated areas.[13][14] In Singapore, individuals have to report their locations with photographic proof. Thailand is using an app and SIM cards for all travelers to enforce their quarantine.[8] India is planning to manufacture location- and temperature-monitoring bands.[11] Israel's internal security service, Shin Bet, had already tracked all Israeli phone-call metadata for decades prior to the outbreak, and in March 2020 was ordered by emergency decree to track and notify people exposed to the virus. The decree was replaced by legislation in June 2020. From June to December 2020, reportedly 950,000 people were flagged for quarantine by the surveillance, of whom 46,000 were infected.[15]

Human rights organizations have criticized some of these measures, asking governments not to use the pandemic as a cover to introduce invasive digital surveillance.[16][17]
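The quarantine-geofencing apps described above reduce to a distance check of a reported position against a designated area. A minimal sketch of that check (the coordinates and 100-metre radius are hypothetical, not taken from any deployed system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def breached(home, position, radius_m=100.0):
    """True if the reported position has left the designated area."""
    return haversine_m(*home, *position) > radius_m

home = (37.5665, 126.9780)                   # hypothetical quarantine address
print(breached(home, (37.5666, 126.9781)))   # a few metres away -> False
print(breached(home, (37.5765, 126.9780)))   # about 1.1 km away -> True
```

A real app would combine this check with periodic position reports and tamper detection; the geometry, however, is just this haversine comparison.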
https://en.wikipedia.org/wiki/COVID-19_surveillance
Information privacy is the relationship between the collection and dissemination of data, technology, the public expectation of privacy, contextual information norms, and the legal and political issues surrounding them.[1] It is also known as data privacy[2] or data protection. Various types of personal information often come under privacy concerns.

Cable television privacy describes the ability to control what information one reveals about oneself over cable television, and who can access that information. For example, third parties can track IP TV programs someone has watched at any given time. "The addition of any information in a broadcasting stream is not required for an audience rating survey, additional devices are not requested to be installed in the houses of viewers or listeners, and without the necessity of their cooperations, audience ratings can be automatically performed in real-time."[3]

In the United Kingdom in 2012, the Education Secretary Michael Gove described the National Pupil Database as a "rich dataset" whose value could be "maximised" by making it more openly accessible, including to private companies. Kelly Fiveash of The Register said that this could mean "a child's school life including exam results, attendance, teacher assessments and even characteristics" could be available, with third-party organizations being responsible for anonymizing any publications themselves, rather than the data being anonymized by the government before being handed over. An example of a data request that Gove indicated had been rejected in the past, but might be possible under an improved version of privacy regulations, was for "analysis on sexual exploitation".[4]

Information about a person's financial transactions, including the amount of assets, positions held in stocks or funds, outstanding debts, and purchases, can be sensitive. If criminals gain access to information such as a person's accounts or credit card numbers, that person could become the victim of fraud or identity theft.
Information about a person's purchases can reveal a great deal about that person's history, such as places they have visited, whom they have had contact with, products they have used, their activities and habits, or medications they have used. In some cases, corporations may use this information to target individuals with marketing customized to their personal preferences, of which that person may or may not approve.[4]

As heterogeneous information systems with differing privacy rules are interconnected and information is shared, policy appliances will be required to reconcile, enforce, and monitor an increasing number of privacy policy rules (and laws). There are two categories of technology to address privacy protection in commercial IT systems: communication and enforcement.

Computer privacy can be improved through individualization. Currently, security messages are designed for the "average user", i.e. the same message for everyone. Researchers have posited that individualized messages and security "nudges", crafted based on users' individual differences and personality traits, can further improve each person's compliance with computer security and privacy.[5]

Privacy can also be improved through data encryption. By converting data into a non-readable format, encryption prevents unauthorized access. Common encryption technologies at present include AES and RSA. With data encryption, only users holding the decryption keys can access the data.[6]

The ability to control the information one reveals about oneself over the internet, and who can access that information, has become a growing concern. These concerns include whether email can be stored or read by third parties without consent, or whether third parties can continue to track the websites that someone has visited. Another concern is whether websites one visits can collect, store, and possibly share personally identifiable information about users.
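The encryption point above can be made concrete with a round trip. Python's standard library has no AES, so this sketch uses a one-time pad (a random key as long as the message) purely to show that ciphertext is unreadable without the key; real systems should use AES or RSA via a vetted cryptography library, not hand-rolled code:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR the message with a fresh random key of equal length."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """XOR again with the same key to recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

msg = b"patient record #4711"          # hypothetical sensitive data
ct, key = otp_encrypt(msg)
assert ct != msg                       # stored form is unreadable
assert otp_decrypt(ct, key) == msg     # only the key holder can read it
```

The same property underlies access control by key distribution: whoever holds the key can read the data, and no one else can, regardless of who holds the ciphertext.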
The advent of various search engines and the use of data mining created a capability for data about individuals to be collected and combined from a wide variety of sources very easily.[7][8][9] AI has facilitated the creation of inferential information about individuals and groups based on such enormous amounts of collected data, transforming the information economy.[10] The FTC has provided a set of guidelines that represent widely accepted concepts concerning fair information practices in an electronic marketplace, called the Fair Information Practice Principles, but these have been critiqued as insufficient in the context of AI-enabled inferential information.[10]

On the internet many users give away a lot of information about themselves: unencrypted emails can be read by the administrators of an e-mail server if the connection is not encrypted (no HTTPS), and the internet service provider and other parties sniffing the network traffic of that connection are also able to learn the contents. The same applies to any kind of traffic generated on the Internet, including web browsing, instant messaging, and others. In order not to give away too much personal information, emails can be encrypted, and browsing of webpages as well as other online activities can be done anonymously via anonymizers, or via open-source distributed anonymizers, so-called mix networks. Nym[11] and I2P[12] are examples of well-known mix networks.

Email is not the only internet content with privacy concerns. In an age where increasing amounts of information are online, social networking sites pose additional privacy challenges. People may be tagged in photos or have valuable information exposed about themselves either by choice or unexpectedly by others, referred to as participatory surveillance. Data about location can also be accidentally published, for example when someone posts a picture with a store as a background. Caution should be exercised when posting information online.
Social networks vary in what they allow users to make private and what remains publicly accessible.[13] Without strong security settings in place and careful attention to what remains public, a person can be profiled by searching for and collecting disparate pieces of information, leading to cases of cyberstalking[14] or reputation damage.[15]

Cookies are used on websites so that the site may retrieve some information about the user, but sites usually do not say what data is being retrieved.[16] In 2018, the General Data Protection Regulation (GDPR) came into force, requiring websites to visibly disclose their information privacy practices to consumers, referred to as cookie notices.[16] This was intended to give consumers the choice of what information about their behavior they consent to letting websites track; however, its effectiveness is controversial.[16] Some websites may engage in deceptive practices, such as placing cookie notices in places on the page that are not visible, or only notifying consumers that their information is being tracked without allowing them to change their privacy settings.[16] Apps like Instagram and Facebook collect user data for a personalized app experience; however, they also track user activity on other apps, which jeopardizes users' privacy and data.
By controlling how visible these cookie notices are, companies can discreetly collect data, giving them more power over consumers.[16]

As the location-tracking capabilities of mobile devices advance (location-based services), problems related to user privacy arise. Location data is among the most sensitive data currently being collected.[17] A list of potentially sensitive professional and personal information that could be inferred about an individual knowing only their mobility trace was published in 2009 by the Electronic Frontier Foundation.[18] Examples include the movements of a competitor's sales force, attendance at a particular church, or an individual's presence in a motel or at an abortion clinic. An MIT study[19][20] by de Montjoye et al. showed that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low: even coarse or blurred datasets provide little anonymity.

People may not wish for their medical records to be revealed to others due to the confidentiality and sensitivity of what the information could reveal about their health. For example, they might be concerned that it might affect their insurance coverage or employment. Or, it may be because they would not wish for others to know about any medical or psychological conditions or treatments that would bring embarrassment upon themselves.
Revealing medical data could also reveal other details about one's personal life.[21] There are three major categories of medical privacy: informational (the degree of control over personal information), physical (the degree of physical inaccessibility to others), and psychological (the extent to which the doctor respects patients' cultural beliefs, inner thoughts, values, feelings, and religious practices and allows them to make personal decisions).[22] Physicians and psychiatrists in many cultures and countries have standards for doctor–patient relationships, which include maintaining confidentiality. In some cases, the physician–patient privilege is legally protected. These practices are in place to protect the dignity of patients, and to ensure that patients feel free to reveal the complete and accurate information required for them to receive the correct treatment.[23] For the United States' laws governing the privacy of private health information, see HIPAA and the HITECH Act. The Australian law is the Privacy Act 1988, as well as state-based health records legislation.

Political privacy has been a concern since voting systems emerged in ancient times. The secret ballot is the simplest and most widespread measure to ensure that political views are not known to anyone other than the voters themselves; it is nearly universal in modern democracies and considered a basic right of citizenship. In fact, even where other rights of privacy do not exist, this type of privacy very often does.
There are several forms of voting fraud or privacy violations possible with the use of digital voting machines.[24]

The legal protection of the right to privacy in general, and of data privacy in particular, varies greatly around the world.[25] Laws and regulations related to privacy and data protection are constantly changing, so it is important to keep abreast of any changes in the law and to continually reassess compliance with data privacy and security regulations.[26] Within academia, Institutional Review Boards function to assure that adequate measures are taken to ensure both the privacy and confidentiality of human subjects in research.[27]

Privacy concerns exist wherever personally identifiable information or other sensitive information is collected, stored, used, and finally destroyed or deleted, in digital form or otherwise. Improper or non-existent disclosure control can be the root cause of privacy issues. Informed consent mechanisms, including dynamic consent, are important in communicating to data subjects the different uses of their personally identifiable information. Data privacy issues may arise in response to information from a wide range of sources, such as:[28]

Data protection laws across the globe aim to secure personal information and safeguard individual privacy in a digital era. The European Union's General Data Protection Regulation (GDPR) sets a high benchmark, emphasizing consent, transparency, and robust accountability, and imposing strict penalties. Many countries adopt similar principles, mandating that organizations implement effective security measures, respect user rights, and notify breaches. In regions such as North America, Asia, and Oceania, data protection frameworks vary from sector-specific regulations to comprehensive legislation. Globally, these laws balance innovation with privacy, aiming to keep personal data appropriately accessible and ethically managed while mitigating misuse and cyber threats.
The United States Department of Commerce created the International Safe Harbor Privacy Principles certification program in response to the 1995 Directive on Data Protection (Directive 95/46/EC) of the European Commission.[29] Both the United States and the European Union officially state that they are committed to upholding the information privacy of individuals, but the former has caused friction between the two by failing to meet the standards of the EU's stricter laws on personal data. The negotiation of the Safe Harbor program was, in part, intended to address this long-running issue.[30] Directive 95/46/EC declares in Chapter IV Article 25 that personal data may only be transferred from the countries in the European Economic Area to countries which provide adequate privacy protection. Historically, establishing adequacy required the creation of national laws broadly equivalent to those implemented by Directive 95/46/EC. Although there are exceptions to this blanket prohibition, for example where the disclosure to a country outside the EEA is made with the consent of the relevant individual (Article 26(1)(a)), they are limited in practical scope. As a result, Article 25 created a legal risk for organizations which transfer personal data from Europe to the United States.

The program regulates the exchange of passenger name record information between the EU and the US. According to the EU directive, personal data may only be transferred to third countries if that country provides an adequate level of protection. Some exceptions to this rule are provided, for instance when the controller can itself guarantee that the recipient will comply with the data protection rules. The European Commission set up the "Working Party on the Protection of Individuals with regard to the Processing of Personal Data", commonly known as the "Article 29 Working Party".
The Working Party gives advice about the level of protection in the European Union and third countries.[31] The Working Party negotiated with U.S. representatives about the protection of personal data, and the Safe Harbor Principles were the result. Notwithstanding that approval, the self-assessment approach of the Safe Harbor remains controversial with a number of European privacy regulators and commentators.[32]

The Safe Harbor program addresses this issue in the following way: rather than a blanket law imposed on all organizations in the United States, a voluntary program is enforced by the Federal Trade Commission. U.S. organizations which register with this program, having self-assessed their compliance with a number of standards, are "deemed adequate" for the purposes of Article 25. Personal information can be sent to such organizations from the EEA without the sender being in breach of Article 25 or its EU national equivalents. The Safe Harbor was approved as providing adequate protection for personal data, for the purposes of Article 25(6), by the European Commission on 26 July 2000.[33]

Under the Safe Harbor, adoptee organizations need to carefully consider their compliance with the onward transfer obligations, where personal data originating in the EU is transferred to the US Safe Harbor, and then onward to a third country. The alternative compliance approach of "binding corporate rules", recommended by many EU privacy regulators, resolves this issue.
In addition, any dispute arising in relation to the transfer of HR data to the US Safe Harbor must be heard by a panel of EU privacy regulators.[34]

In July 2007, a new, controversial[35] Passenger Name Record agreement between the US and the EU was made.[36] A short time afterwards, the Bush administration exempted the Department of Homeland Security, the Arrival and Departure Information System (ADIS) and the Automated Target System from the Privacy Act of 1974.[37]

In February 2008, Jonathan Faull, the head of the EU's Commission of Home Affairs, complained about the US bilateral policy concerning PNR.[38] The US had signed, in February 2008, a memorandum of understanding (MOU) with the Czech Republic in exchange for a visa waiver scheme, without consulting Brussels beforehand.[35] The tensions between Washington and Brussels are mainly caused by the lower level of data protection in the US, especially since foreigners do not benefit from the US Privacy Act of 1974. Other countries approached for a bilateral MOU included the United Kingdom, Estonia, Germany and Greece.[39]
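The mobility-trace result discussed earlier (four approximate spatio-temporal points sufficing to single most people out of a large dataset) can be sketched on a toy dataset. The traces below are invented for illustration; the function simply checks how many individuals are matched by no one else once a few of their points are known:

```python
import random

def unique_fraction(traces, k=4, seed=0):
    """Fraction of individuals uniquely identified by k points of their trace.

    traces: list of sets of (place, hour) observations, one set per person.
    For each person, sample k of their points and count how many traces in
    the dataset contain all of them; a count of 1 means re-identification.
    """
    rng = random.Random(seed)
    unique = 0
    for trace in traces:
        pts = rng.sample(sorted(trace), min(k, len(trace)))
        matches = sum(1 for t in traces if all(p in t for p in pts))
        if matches == 1:
            unique += 1
    return unique / len(traces)

# toy dataset: each trace is a set of (cell_id, hour) observations
traces = [
    {("A", 8), ("B", 12), ("C", 18), ("D", 22)},
    {("A", 8), ("B", 12), ("E", 18), ("F", 22)},
    {("G", 9), ("H", 13), ("I", 19), ("J", 23)},
]
print(unique_fraction(traces, k=4))  # 1.0: every toy trace is unique
```

Even though the first two traces share their morning points, four points are enough to tell them apart, mirroring (at toy scale) why coarse anonymised mobility data offers little protection.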
https://en.wikipedia.org/wiki/Data_privacy
Data retention defines the policies of persistent data and records management for meeting legal and business data archival requirements. Although the terms are sometimes used interchangeably, it is not to be confused with the Data Protection Act 1998. The different data retention policies weigh legal and privacy concerns against economics and need-to-know concerns to determine the retention time, archival rules, data formats, and the permissible means of storage, access, and encryption.[1]

In the field of telecommunications, "data retention" generally refers to the storage of call detail records (CDRs) of telephony and internet traffic and transaction data (IPDRs) by governments and commercial organisations.[2] In the case of government data retention, the data stored is usually of telephone calls made and received, emails sent and received, and websites visited. Location data is also collected.

The primary objective of government data retention is traffic analysis and mass surveillance. By analysing the retained data, governments can identify the locations of individuals, an individual's associates, and the members of a group such as political opponents. These activities may or may not be lawful, depending on the constitutions and laws of each country. In many jurisdictions, access to these databases may be made by a government with little or no judicial oversight.[3][4]

In the case of commercial data retention, the data retained will usually be on transactions and websites visited. Data retention also covers data collected by other means (e.g., by automatic number-plate recognition systems) and held by government and commercial organisations.

A data retention policy is a recognized and proven protocol within an organization for retaining information for operational use while ensuring adherence to the laws and regulations concerning it.
The objectives of a data retention policy are to keep important information for future use or reference, to organize information so it can be searched and accessed at a later date, and to dispose of information that is no longer needed.[5] The data retention policies within an organization are a set of guidelines describing which data will be archived, how long it will be kept, what happens to the data at the end of the retention period (archive or destroy), and other factors concerning the retention of the data.[6]

Part of any effective data retention policy is the permanent deletion of the retained data. Secure deletion can be achieved by encrypting the data when stored and then deleting the encryption key after a specified retention period, effectively deleting the data object and all of its copies stored in online and offline locations.[7]

In 2015, the Australian government introduced mandatory data retention laws that allow data to be retained for up to two years.[8] The scheme is estimated to cost at least AU$400 million per year to implement, working out to at least $16 per user per year.[9] It requires telecommunication providers and ISPs to retain telephony, internet and email metadata for two years, accessible without a warrant, and could possibly be used to target file sharing.[10][11] The Attorney-General has broad discretion on which agencies are allowed to access metadata, including private agencies.[12] The Greens were strongly opposed to the introduction of these laws, citing privacy concerns and the increased prospect of 'speculative invoicing' over alleged copyright infringement cases.[13][14] The Labor Party initially opposed them as well, but later agreed to pass the law after additional safeguards were put in place to afford journalists some protection.[15][16]

On 15 March 2006, the European Union adopted the Data Retention Directive.[17][18] It required Member States to ensure that communications providers retain data as specified in the Directive for a
period of between 6 months and 2 years in order to: The data was required to be available to "competent" national authorities "for the purpose of the investigation, detection and prosecution of serious crime, as defined by each Member State in its national law". The Directive covered fixed telephony, mobile telephony, Internet access, email, and VoIP. Member States were required to transpose it into national law within 18 months, no later than September 2007. However, they could if they wished postpone the application of the Directive to Internet access, email, and VoIP for a further 18 months after this date. A majority of Member States exercised this option. All 28 EU States at the time notified the European Commission about the transposition of the Directive into their national law. Of these, however, Germany and Belgium had only transposed the legislation partially.[19]

A report evaluating the Directive was published by the European Commission in April 2011.[20] It concluded that data retention was a valuable tool for ensuring criminal justice and public protection, but that it had achieved only limited harmonisation. There were serious concerns from service providers about the compliance costs, and from civil society organisations, who claimed that mandatory data retention was an unacceptable infringement of the fundamental right to privacy and the protection of personal data under EU law. In response to the report, on 31 May 2011 the European Data Protection Supervisor expressed concerns about the Data Retention Directive, underlining that it "does not meet the requirements imposed by the fundamental rights to privacy and data protection".[21]

In November 2012, answers to a parliamentary inquiry in the German Bundestag revealed plans of some EU countries, including France, to extend data retention to chats and social media.
Furthermore, the German Federal Office for the Protection of the Constitution (Germany's domestic intelligence agency) has confirmed that it has been working with the ETSI LI Technical Committee since 2003.[22][23][24][25][26]

Criticisms of the directive arose. The Council's Legal Service was reported to have stated in closed session that paragraph 59 of the European Court of Justice's ruling "suggests that general and blanket data retention is no longer possible".[27] A legal opinion funded by the Greens/EFA Group in the European Parliament found that the blanket retention of data on unsuspected persons generally violates the EU Charter of Fundamental Rights, both in regard to national telecommunications data retention laws and to similar EU data retention schemes (PNR, TFTP, TFTS, LEA access to EES, Eurodac, VIS).[28]

Digital Rights Ireland brought the directive before the High Court of Ireland, which referred it to the Court of Justice of the European Union. The case was also joined by the Constitutional Court of Austria. On 8 April 2014 the Court declared Directive 2006/24/EC invalid for violating fundamental rights, stating that "the directive interferes in a particularly serious manner with the fundamental rights to respect for private life and to the protection of personal data".[29][30]

As a result, the member states to varying degrees abolished or modified their implementations of the directive. Since the Swedish implementation of the directive was kept largely unchanged, it was brought before the European Court by the telecom provider Tele2, and the case was merged with a similar case from the United Kingdom, initiated by three persons with intervention by Open Rights Group, Privacy International and The Law Society of England and Wales.
Since the original directive no longer existed, the basis for the judgment was an exception in the Directive on privacy and electronic communications,[31] Article 15(1), which allows data retention to be applied exceptionally to fight serious crime. On 21 December 2016 the Court ruled that "the protection of privacy in the electronic communications sector must be interpreted as precluding national legislation which, for the purpose of fighting crime, provides for general and indiscriminate retention of all traffic and location data of all subscribers and registered users relating to all means of electronic communication."[32] Blanket data retention was thus ruled out a second time, but the actual consequences across the EU have varied and remain under discussion.

In the Czech Republic, implementation of the directive was part of Act No. 259/2010 Coll. on electronic communications, as later amended. Under Art. 97 (3), telecommunication data are to be stored for between 6 and 12 months. The Czech Constitutional Court deemed the law unconstitutional, finding that it infringed on the people's right to privacy.[33] As of July 2012, new legislation was on its way.[34]

Denmark has implemented the EU data retention directive and much more, by logging all internet flows or sessions between operators, and between operators and consumers.[35]

The German Bundestag implemented the directive in the "Gesetz zur Neuregelung der Telekommunikationsüberwachung und anderer verdeckter Ermittlungsmaßnahmen sowie zur Umsetzung der Richtlinie 2006/24/EG" (Act on the Reform of Telecommunications Surveillance and Other Covert Investigative Measures and on the Implementation of Directive 2006/24/EC).[36] The law became valid on 1 January 2008. Any communications data had to be retained for six months.
On 2 March 2010, the Federal Constitutional Court of Germany ruled the law unconstitutional as a violation of the guarantee of the secrecy of correspondence.[37] On 16 October 2015, a second law providing for shorter, up to 10-week data retention, excluding email communication, was passed by parliament.[38][39][40] However, this act was ruled incompatible with German and European law by an injunction of the Higher Administrative Court of North Rhine-Westphalia. As a result, on 28 June 2017, three days before the planned start of data retention, the Federal Network Agency suspended the introduction of data retention until a final decision in the principal proceedings.[41]

In July 2005, new legal requirements[42] on data retention came into force in Italy. Italy already required the retention of telephony traffic data for 48 months, but without location data. Italy has adopted the EU Directive on Privacy and Electronic Communications 2002, but with an exemption from the requirement to erase traffic data.

In Portugal, the directive was transposed into law by Law 32/2008.[43] In December 2017, D3 – Defesa dos Direitos Digitais, a Portuguese digital rights organization, presented a complaint to the Justice Ombudsman, based on the case law of the Court of Justice of the European Union,[44][45] following several opinions of the Portuguese Data Protection Authority. In January 2019, the Ombudsman issued an official recommendation to the Justice Ministry,[46] defending the need to change the national law in order to comply with the CJEU case law. Some weeks later, in March, the Ombudsman received an answer from the Minister of Justice, in which the Minister refused changes to the law.[47] As such, in August 2019, the Ombudsman decided to ask the Portuguese Constitutional Court for a ruling on the constitutionality of the law.[48] In 2022, the Portuguese Constitutional Court published its decision,[49] striking down Law 32/2008 as unconstitutional.
Among other things, the Court considered that an undifferentiated and generalized obligation to store all traffic and location data relating to all people did not respect the proportionality principle.[50] In response to this decision, the parliament created a data retention working party, which studied the subject for more than a year and held several hearings with experts. In 2023, a law proposal was approved in the parliament.[51] However, the President of the Republic decided to make use of his prerogative of asking the Constitutional Court for a preventive ruling before approving the law. In this ruling, the Constitutional Court once again decided against the proposed data retention regime,[52] for similar reasons, as the law still required indiscriminate and general retention of traffic and location data. The bill was returned to the Parliament and did not become law. In 2024, the Parliament approved a new law proposal.[53] This time, the President of the Republic opted not to request a preventive ruling from the Constitutional Court, and so the law was published and entered into force.[54] The digital rights association D3 – Defesa dos Direitos Digitais maintains that the current law still violates fundamental rights, as it delegates core elements of a fundamental-rights restriction to a special formation of the Supreme Court.
This makes it impossible to demonstrate the required proportionality of the restriction, or to demonstrate how the data retention regime preserves the essential core of the restricted fundamental rights, as it must.[55] The EU directive was transposed into Romanian law as well, initially as Law 298/2008.[56] However, the Constitutional Court of Romania struck down the law in 2009 as violating constitutional rights.[57] The court held that the transposing act violated the constitutional rights to privacy, to confidentiality in communications, and to free speech.[58] The European Commission subsequently sued Romania in 2011 for non-implementation, threatening Romania with a fine of 30,000 euros per day.[59] The Romanian parliament passed a new law in 2012, which was signed by president Traian Băsescu in June.[60] Law 82/2012 has been nicknamed "Big Brother" (using the untranslated English expression) by various Romanian non-governmental organizations opposing it.[59][61][62] On 8 July 2014, this law too was declared unconstitutional by the Constitutional Court of Romania.[63] Slovakia has implemented the directive in Act No. 610/2003 Coll. on electronic communications, as later amended. Telecommunication data are stored for six months in the case of data related to the Internet, Internet email, and Internet telephony (art. 59a (6) a), and for 12 months in the case of other types of communication (art. 59a (6) b).
In April 2014, the Slovak Constitutional Court preliminarily suspended the effectiveness of the Slovak implementation of the Data Retention Directive and accepted the case for further review.[64][65] In April 2015, the Constitutional Court decided that some parts of the Slovak laws implementing the directive were not in compliance with the Slovak constitution and the Convention for the Protection of Human Rights and Fundamental Freedoms.[66] According to the now-invalid provisions of the Electronic Communications Act, providers of electronic communications were obliged to store traffic data, localization data, and data about the communicating parties for a period of 6 months (in the case of Internet, email, or VoIP communication) or for a period of 12 months (in the case of other communication).[67] Sweden implemented the EU's 2006 Data Retention Directive in May 2012, and it was fined €3 million by the Court of Justice of the European Union for its belated transposition (the deadline was 15 September 2007).[68][69][70][71] The directive allowed member states to determine the duration for which data is retained, ranging from six months to two years; the Riksdag, Sweden's legislature, opted for six months.[72] In April 2014, however, the CJEU struck down the Data Retention Directive. Following the judgement, PTS, Sweden's telecommunications regulator, told Swedish ISPs and telcos that they would no longer have to retain call records and internet metadata.[73] The Swedish government then initiated a one-man investigation, which concluded that Sweden could continue with data retention.
After that, the PTS reversed course.[74] Most of Sweden's major telecommunications companies complied immediately, though Tele2 appealed the order before the Administrative Court in Stockholm, claiming that the Swedish implementation should be reversed because the directive had been declared invalid, and noting that the Swedish implementation went further than the directive, including registration of failed telephone calls and the geographic endpoints of mobile communications. The appeal was rejected. The one holdout ISP, Bahnhof, was ordered to comply by a November 24 deadline or face a five million krona ($680,000) fine.[75] Tele2 appealed the first-level court's rejection to the Swedish Administrative Court of Appeal, which referred the matter to the Court of Justice of the European Union. That led to a judgement that once again invalidated blanket retention of all citizens' communications to combat crime. See under European Union above. The Data Retention and Investigatory Powers Act came into force in 2014. It is the United Kingdom parliament's answer to the declaration of invalidity made by the Court of Justice of the European Union in relation to Directive 2006/24/EC, in order to make provision about the retention of certain communications data.[76] In addition, the act is intended to ensure that communication companies in the UK retain communications data so that it continues to be available when it is needed by law enforcement agencies and others to investigate committed crimes and protect the public.[77] Data protection law requires that data which is of no use be deleted. This means that the Act, by making data retention mandatory, could be used to acquire further policing powers.
An element of this Act is the provision that the investigatory powers be reported on by 1 May 2015.[78] The Data Retention and Investigatory Powers Act 2014 was referred to as the "snooper's charter" communications data bill.[79] Theresa May, a strong supporter of the Act, said in a speech that "If we (parliament) do not act, we risk sleepwalking into a society in which crime can no longer be investigated and terrorists can plot their murderous schemes undisrupted."[79] The United Kingdom parliament maintains that its new laws increasing the power of data retention are essential to tackling crime and protecting the public. However, not all agree, and some believe that the primary objective of the government's data retention is mass surveillance. After Europe's highest court said the depth of data retention breaches citizens' fundamental right to privacy and the UK created its own Act, the British government has been accused of breaking the law by forcing telecoms and internet providers to retain records of phone calls, texts, and internet usage.[80] From this information, governments can identify an individual's associates, location, group memberships, political affiliations, and other personal information. In a television interview, the EU Advocate General Pedro Cruz Villalón highlighted the risk that the retained data might be used illegally in ways that are "potentially detrimental to privacy or, more broadly, fraudulent or even malicious".[80] The bodies that are able to access retained data in the United Kingdom are listed in the Regulation of Investigatory Powers Act 2000 (RIPA). However, RIPA also gives the Home Secretary powers to change the list of bodies with access to retained data through secondary legislation, and the list of authorised bodies has since been extended.[84] The justifications for accessing retained data in the UK are likewise set out in the Regulation of Investigatory Powers Act 2000 (RIPA).
The EU's Data Retention Directive was implemented into Norwegian law in 2011,[85] but was not to take effect before 1 January 2015.[86] In Russia, a 2016 anti-terrorist federal law, 374-FZ, known as the Yarovaya Law, requires all telecommunication providers to store phone call, text, and email metadata, as well as the actual voice recordings, for up to 6 months. Messaging services like WhatsApp are required to provide cryptographic backdoors to law enforcement.[87] The law has been widely criticized both in Russia and abroad as an infringement of human rights and a waste of resources.[88][89][90][91] On 29 June 2010, the Serbian parliament adopted the Law on Electronic Communications, according to which operators must keep data on electronic communications for 12 months. This provision was criticized as unconstitutional by opposition parties and by Ombudsman Saša Janković.[92] On 7 July 2016, the Swiss Federal Law on the Surveillance of Post and Telecommunications (BÜPF), passed by the Swiss government on 18 March 2016, entered into force.[93] Under the BÜPF, Swiss mobile phone operators and all Internet service providers have to retain certain data for six months. "Email application" refers to SMTP, POP3, IMAP4, webmail, and remailer servers.[94] Switzerland only applies data retention to the largest Internet service providers, those with over 100 million CHF in annual Swiss-sourced revenue. This notably exempts derived communications providers such as ProtonMail, a popular encrypted email service based in Switzerland.[95] The National Security Agency (NSA) commonly records Internet metadata for the whole planet for up to a year in its MARINA database, where it is used for pattern-of-life analysis. U.S.
persons are not exempt, because metadata are not considered data under US law (section 702 of the FISA Amendments Act).[96] Its equivalent for phone records is MAINWAY.[97] The NSA records SMS and similar text messages worldwide through DISHFIRE.[98] Various United States agencies leverage the (voluntary) data retention practised by many U.S. commercial organizations through programs such as PRISM and MUSCULAR. Amazon is known to retain extensive data on customer transactions. Google is also known to retain data on searches and other transactions. If a company is based in the United States, the Federal Bureau of Investigation (FBI) can obtain access to such information by means of a National Security Letter (NSL). The Electronic Frontier Foundation states that "NSLs are secret subpoenas issued directly by the FBI without any judicial oversight. These secret subpoenas allow the FBI to demand that online service providers or ecommerce companies produce records of their customers' transactions. The FBI can issue NSLs for information about people who haven't committed any crimes. NSLs are practically immune to judicial review. They are accompanied by gag orders that allow no exception for talking to lawyers and provide no effective opportunity for the recipients to challenge them in court. This secret subpoena authority, which was expanded by the controversial USA PATRIOT Act, could be applied to nearly any online service provider for practically any type of record, without a court ever knowing". The Washington Post has published a well-researched article on the FBI's use of National Security Letters.[99] The United States does not have any mandatory data retention laws for Internet Service Providers (ISPs) similar to the European Data Retention Directive,[100] which was retroactively invalidated in 2014 by the Court of Justice of the European Union.
Some attempts to create mandatory retention legislation have failed. While it is often argued that data retention is necessary to combat terrorism and other crimes, others oppose it. Data retention may assist the police and security services in identifying potential terrorists and their accomplices before or after an attack has taken place. For example, the authorities in Spain and the United Kingdom stated that retained telephony data made a significant contribution to police enquiries into the 2004 Madrid train bombings and the 2005 London bombings.[106] Opponents of data retention make a number of counter-arguments. The current directive proposal (see above) would force ISPs to record the internet communications of their users. The basic assumption is that this information can be used to identify with whom someone, whether innocent citizen or terrorist, communicated throughout a specific timespan. Believing that such a mandate would be useful ignores the fact that a very committed community of cryptography professionals has been preparing for such legislation for decades. Below are some strategies available today to anyone to protect themselves, avoid such traces, and render such expensive and legally dubious logging operations useless. There are anonymizing proxies that provide somewhat more private web access. Proxies must use HTTPS encryption in order to provide any level of protection at all. Unfortunately, proxies require the user to place a large amount of trust in the proxy operator (since they see everything the user does over HTTP), and may be subject to traffic analysis. Some P2P services like file transfer or voice over IP use other computers to allow communication between computers behind firewalls. This means that trying to follow a call between two citizens might, mistakenly, identify a third citizen unaware of the communication.
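The proxy strategy described above can be sketched in a few lines. This is a minimal illustration only: the Tor SOCKS address (127.0.0.1:9050 is Tor's conventional default listener) and the dictionary shape accepted by `requests`-style HTTP clients are assumptions for the example, not part of the original text.

```python
# Hypothetical sketch: pointing an HTTP client at a local SOCKS5 proxy
# such as Tor's default listener. The "socks5h" scheme asks the proxy to
# resolve hostnames as well, so DNS lookups also bypass local logging.
TOR_SOCKS_URL = "socks5h://127.0.0.1:9050"

def proxy_settings(proxy_url: str) -> dict:
    """Return a proxies mapping in the shape many HTTP clients accept,
    routing both plain and encrypted traffic through the same proxy."""
    return {"http": proxy_url, "https": proxy_url}

settings = proxy_settings(TOR_SOCKS_URL)
print(settings["https"])  # socks5h://127.0.0.1:9050
```

As the text notes, this only shifts trust to the proxy operator (and, for Tor, to the exit node): traffic should still be HTTPS end to end, or the operator can observe everything sent over plain HTTP.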
For security-conscious citizens with some basic technical knowledge, tools like I2P – The Anonymous Network, Tor, Mixmaster, and the cryptography options integrated into many modern mail clients can be employed. I2P is an international peer-to-peer anonymizing network, which aims not only at evading data retention, but also at making spying by other parties impossible. The structure is similar to the one Tor (see next paragraph) uses, but there are substantial differences: it protects better against traffic analysis and offers strong anonymity and, for net-internal traffic, end-to-end encryption. Due to its unidirectional tunnels, it is less prone to timing attacks than Tor. In I2P, several services are available: anonymous browsing, anonymous e-mail, anonymous instant messaging, anonymous file-sharing, and anonymous hosting of websites, among others. Tor is a project of the U.S. non-profit Tor Project[112] to develop and improve an onion routing network to shield its users from traffic analysis. Mixmaster is a remailer service that allows anonymous email sending. JAP is a project very similar to Tor. It is designed to route web requests through several proxies to hide the end user's Internet address. Tor support has been included in JAP. The Arbeitskreis Vorratsdatenspeicherung (German Working Group on Data Retention) is an association of civil rights campaigners, data protection activists, and Internet users.
The Arbeitskreis coordinates the campaign against the introduction of data retention in Germany.[113] An analysis of Federal Crime Agency (BKA) statistics published on 27 January 2010 by the civil liberties NGO AK Vorrat revealed that data retention did not make the prosecution of serious crime any more effective.[114] As the EU Commission is currently considering changes to the controversial EU data retention directive, a coalition of more than 100 civil liberties, data protection, and human rights associations, jurists, trade unions, and others is urging the commission to propose the repeal of the EU requirements regarding data retention in favour of a system of expedited preservation and targeted collection of traffic data.[114]
https://en.wikipedia.org/wiki/Data_retention
Discipline and Punish: The Birth of the Prison (French: Surveiller et punir : Naissance de la prison) is a 1975 book by French philosopher Michel Foucault. It is an analysis of the social and theoretical mechanisms behind the changes that occurred in Western penal systems during the modern age, based on historical documents from France. Foucault argues that prison did not become the principal form of punishment just because of the humanitarian concerns of reformists. He traces the cultural shifts that led to the predominance of prison via the body and power. Prison is used by the "disciplines" – new technological powers that can also be found, according to Foucault, in places such as schools, hospitals, and military barracks.[1] The main ideas of Discipline and Punish can be grouped according to its four parts: torture, punishment, discipline, and prison.[1] Foucault begins by contrasting two forms of penalty: the violent and chaotic public torture of Robert-François Damiens, who was convicted of attempted regicide in the mid-18th century, and the highly regimented daily schedule for inmates at an early 19th-century prison (Mettray). These examples provide a picture of just how profound the changes in Western penal systems were after less than a century. Foucault wants the reader to consider what led to these changes and how Western attitudes shifted so radically.[2] He believes that the question of the nature of these changes is best asked by assuming that they were not made to create a more humanitarian penal system, nor to punish or rehabilitate more exactly, but as part of a continuing trajectory of subjection. Foucault wants to tie scientific knowledge and technological development to the development of the prison to prove this point. He defines a "micro-physics" of power, which is constituted by a power that is strategic and tactical rather than acquired, preserved, or possessed.
He explains that power and knowledge imply one another, as opposed to the common belief that knowledge exists independently of power relations (knowledge is always contextualized in a framework which makes it intelligible, so the humanizing discourse of psychiatry is an expression of the tactics of oppression).[3]: 26–27 That is, the ground of the game of power is not won by "liberation", because liberation already exists as a facet of subjection. "The man described for us, whom we are invited to free, is already in himself the effect of a subjection much more profound than himself."[3]: 30 The problem for Foucault is in some sense a theoretical modelling which posits a soul, an identity (the use of "soul" being fortunate, since "identity" or "name" would not properly express the method of subjection – e.g., if mere materiality were used as a way of tracking individuals, then the method of punishment would not have switched from torture to psychiatry), which allows a whole materiality of prison to develop. In "What is an Author?" Foucault also deals with the notion of identity, and its use as a method of control, regulation, and tracking.[2] He begins by examining public torture and execution. He argues that the public spectacle of torture and execution was a theatrical forum, the original intentions of which eventually produced several unintended consequences. Foucault stresses the exactitude with which torture is carried out, and describes an extensive legal framework in which it operates to achieve specific purposes. Foucault describes public torture as a ceremony. The intended purposes were: "It [torture] assured the articulation of the written on the oral, the secret on the public, the procedure of investigation on the operation of the confession; it made it possible to reproduce the crime on the visible body of the criminal; in the same horror, the crime had to be manifested and annulled.
It also made the body of the condemned man the place where the vengeance of the sovereign was applied, the anchoring point for a manifestation of power, an opportunity of affirming the dissymmetry of forces."[3]: 55 Foucault looks at public torture as the outcome "of a certain mechanism of power" that views crime in a military schema. Crime and rebellion are akin to a declaration of war. The sovereign was not concerned with demonstrating the grounds for the enforcement of its laws, but with identifying enemies and attacking them, a power renewed by the ritual of investigation and the ceremony of public torture.[3]: 57 There were also unintended consequences. Public torture and execution was a method the sovereign deployed to express his or her power, and it did so through the ritual of investigation and the ceremony of execution – the reality and horror of which was supposed to express the omnipotence of the sovereign, but actually revealed that the sovereign's power depended on the participation of the people. Torture was made public in order to create fear in the people, and to force them to participate in the method of control by agreeing with its verdicts. But problems arose in cases in which the people, through their actions, disagreed with the sovereign: by heroizing the victim (admiring the courage in facing death), or by moving to physically free the criminal or to redistribute the effects of the strategically deployed power. Thus, he argues, the public execution was ultimately an ineffective use of the body, qualified as non-economical. As well, it was applied non-uniformly and haphazardly. Hence, its political cost was too high. It was the antithesis of the more modern concerns of the state: order and generalization. So it had to be reformed to allow for greater stability of property for the bourgeoisie. Firstly, the switch to prison was not immediate and sudden. There was a more graded change, though it ran its course rapidly.
Prison was preceded by a different form of public spectacle. The theater of public torture gave way to public chain gangs. Punishment became "gentle", though not for humanitarian reasons, Foucault suggests. He argues that reformists were unhappy with the unpredictable, unevenly distributed nature of the violence the sovereign would inflict on the convict. The sovereign's right to punish was so disproportionate that it was ineffective and uncontrolled. Reformists felt the power to punish and judge should become more evenly distributed; the state's power must be a form of public power. This, according to Foucault, was of more concern to reformists than humanitarian arguments. Out of this movement towards generalized punishment, a thousand "mini-theatres" of punishment would have been created, wherein the convicts' bodies would have been put on display in a more ubiquitous, controlled, and effective spectacle. Prisoners would have been forced to do work that reflected their crime, thus repaying society for their infractions. This would have allowed the public to see the convicts' bodies enacting their punishment, and thus to reflect on the crime. But these experiments lasted less than twenty years. Foucault argues that this theory of "gentle" punishment represented the first step away from the excessive force of the sovereign, and towards more generalized and controlled means of punishment. But he suggests that the shift towards prison that followed was the result of a new "technology" and ontology for the body being developed in the 18th century: the "technology" of discipline, and the ontology of "man as machine". The emergence of prison as the form of punishment for every crime grew out of the development of discipline in the 18th and 19th centuries, according to Foucault. He looks at the development of highly refined forms of discipline, of discipline concerned with the smallest and most precise aspects of a person's body.
Discipline, he suggests, developed a new economy and politics for bodies. Modern institutions required that bodies be individuated according to their tasks, as well as for training, observation, and control. Therefore, he argues, discipline created a whole new form of individuality for bodies, which enabled them to perform their duty within the new forms of economic, political, and military organizations emerging in the modern age and continuing to today. The individuality that discipline constructs (for the bodies it controls) has four characteristics. Foucault suggests this individuality can be implemented in systems that are officially egalitarian, but that use discipline to construct non-egalitarian power relations. Foucault's argument is that discipline creates "docile bodies", ideal for the new economics, politics, and warfare of the modern industrial age – bodies that function in factories, ordered military regiments, and school classrooms. But to construct docile bodies, the disciplinary institutions must be able to constantly observe and record the bodies they control, and to ensure the internalization of the disciplinary individuality within the bodies being controlled. That is, discipline must come about without excessive force, through careful observation, and through the molding of the bodies into the correct form by means of this observation. This requires a particular form of institution, exemplified, Foucault argues, by Jeremy Bentham's panopticon. This architectural model, though it was never adopted by architects according to Bentham's exact blueprint, became an important conceptualization of power relations for prison reformers of the 19th century, and its general principle is a recurring theme in modern prison construction. The panopticon was the ultimate realization of a modern disciplinary institution. It allowed for constant observation characterized by an "unequal gaze": the constant possibility of observation.
Perhaps the most important feature of the panopticon was that it was specifically designed so that the prisoners could never be sure whether they were being observed at any moment. The unequal gaze caused the internalization of the disciplinary individuality, and produced the docile body required of its inmates: one is less likely to break rules or laws if one believes one is being watched, even when one is not. Thus, prisons, and specifically those that follow the model of the panopticon, provide the ideal form of modern punishment. Foucault argues that this is why the generalized, "gentle" punishment of public work gangs gave way to the prison: it was the ideal modernization of punishment, so its eventual dominance was natural. Having laid out the emergence of the prison as the dominant form of punishment, Foucault devotes the rest of the book to examining its precise form and function in society, laying bare the reasons for its continued use, and questioning the assumed results of its use. In examining the construction of the prison as the central means of criminal punishment, Foucault builds a case for the idea that prison became part of a larger "carceral system" that has become an all-encompassing sovereign institution in modern society. Prison is one part of a vast network, including schools, military institutions, hospitals, and factories, which builds a panoptic society for its members. This system creates "disciplinary careers"[3]: 300 for those locked within its corridors. It is operated under the scientific authority of medicine, psychology, and criminology. Moreover, it operates according to principles that ensure that it "cannot fail to produce delinquents".[3]: 266 Delinquency, indeed, is produced when social petty crime (such as taking wood from the lord's lands) is no longer tolerated, creating a class of specialized "delinquents" acting as the police's proxy in the surveillance of society.
The structures Foucault chooses to use as his starting positions help highlight his conclusions. In particular, his choice of the penal institution at Mettray as a perfect prison helps personify the carceral system. Within it are included the Prison, the School, the Church, and the work-house (industry) – all of which feature heavily in his argument. The prisons at Neufchatel and Mettray were perfect examples for Foucault, because they, even in their original state, began to show the traits for which Foucault was searching. Moreover, they showed the body of knowledge being developed about the prisoners, the creation of the "delinquent" class, and the disciplinary careers emerging.[4] The publication of the book was "widely noted in major French cultural venues" of the time, such as Le Nouvel Observateur and Le Monde.[5]: 148 After its publication in English in 1977, it was "widely reviewed in non-academic venues"; scholarly reviews, however, were "far less common".[5]: 149 The historian Peter Gay described Discipline and Punish as the key text by Foucault that has influenced scholarship on the theory and practice of 19th-century prisons. Though Gay wrote that Foucault "breathed fresh air into the history of penology and severely damaged, without wholly discrediting, traditional Whig optimism about the humanization of penitentiaries as one long success story", he nevertheless gave a negative assessment of Foucault's work, endorsing the critical view of Gordon Wright in his 1983 book Between the Guillotine and Liberty: Two Centuries of the Crime Problem in France. Gay concluded that Foucault and his followers overstate the extent to which keeping "the masses quiet" motivates those in power, thereby underestimating factors such as "contingency, complexity, the sheer anxiety or stupidity of power holders", or their authentic idealism.[6] Law professor David Garland wrote an explication and critique of Discipline and Punish. Towards the end, he sums up the main critiques that have been made.
He states, "the major critical theme which emerges, and is independently made by many different critics, concerns Foucault's overestimation of the political dimension. Discipline and Punish consistently proposes an explanation in terms of power—sometimes in the absence of any supporting evidence—where other historians would see a need for other factors and considerations to be brought into account."[7] Another criticism leveled against Foucault's approach is that he often studies the discourse of "prisons" rather than their concrete practice; this is taken up by Fred Alford: "Foucault has mistaken the idea of prison, as reflected in the discourse of criminologists, for its practice. More precisely put, Foucault presents the utopian ideals of eighteenth-century prison reformers, most of which were never realized, as though they were the actual reforms of the eighteenth and nineteenth centuries. One can see this even in the pictures in Discipline and Punish, many of which are drawings for ideal prisons that were never built. One photograph is of the panopticon prison buildings at Stateville, but it is evidently an old photograph, one in which no inmates are evident. Nor are the blankets and cardboard that now enclose the cells."[8]
https://en.wikipedia.org/wiki/Discipline_and_Punish
Paul-Michel Foucault (UK: /ˈfuːkoʊ/ FOO-koh, US: /fuːˈkoʊ/ foo-KOH;[3] French: [pɔl miʃɛl fuko]; 15 October 1926 – 25 June 1984) was a French historian of ideas and philosopher who was also an author, literary critic, political activist, and teacher. Foucault's theories primarily addressed the relationships between power, knowledge, and liberty, and he analyzed how they are used as a form of social control through multiple institutions. Though often cited as a structuralist and postmodernist, Foucault rejected these labels and sought to critique authority without limits on himself.[4] His thought has influenced academics within a large number of contrasting areas of study, especially those working in anthropology, communication studies, criminology, cultural studies, feminism, literary theory, psychology, and sociology. His efforts against homophobia and racial prejudice, as well as against other ideological doctrines, have also shaped research into critical theory and Marxism–Leninism, alongside other topics. Born in Poitiers, France, into an upper-middle-class family, Foucault was educated at the Lycée Henri-IV; at the École Normale Supérieure, where he developed an interest in philosophy and came under the influence of his tutors Jean Hyppolite and Louis Althusser; and at the University of Paris (Sorbonne), where he earned degrees in philosophy and psychology. After several years as a cultural diplomat abroad, he returned to France and published his first major book, The History of Madness (1961). After obtaining work between 1960 and 1966 at the University of Clermont-Ferrand, he produced The Birth of the Clinic (1963) and The Order of Things (1966), publications that displayed his increasing involvement with structuralism, from which he later distanced himself. These first three histories exemplified a historiographical technique Foucault was developing, which he called "archaeology".
From 1966 to 1968, Foucault lectured at the University of Tunis before returning to France, where he became head of the philosophy department at the new experimental university of Paris VIII. Foucault subsequently published The Archaeology of Knowledge (1969). In 1970, Foucault was admitted to the Collège de France, a membership he retained until his death. He also became active in several left-wing groups involved in campaigns against racism and other violations of human rights, focusing on struggles such as penal reform. Foucault later published Discipline and Punish (1975) and The History of Sexuality (1976), in which he developed archaeological and genealogical methods that emphasized the role that power plays in society. Foucault died in Paris from complications of HIV/AIDS. He became the first public figure in France to die from complications of the disease, and his charisma and the influence of his career changed mass awareness of the epidemic. This influenced HIV/AIDS activism: his partner, Daniel Defert, founded the AIDES charity in his memory, and it continued to campaign as of 2024, after Defert's own death in 2023.
Paul-Michel Foucault was born on 15 October 1926 in the city of Poitiers, west-central France, as the second of three children in a prosperous, socially conservative, upper-middle-class family.[5] Family tradition prescribed naming him after his father, Paul Foucault (1893–1959), but his mother insisted on the addition of Michel; referred to as Paul at school, he expressed a preference for "Michel" throughout his life.[6] His father, a successful local surgeon born in Fontainebleau, moved to Poitiers, where he set up his own practice.[7] He married Anne Malapert, the daughter of prosperous surgeon Prosper Malapert, who owned a private practice and taught anatomy at the University of Poitiers' School of Medicine.[8] Paul Foucault eventually took over his father-in-law's medical practice, while Anne took charge of their large mid-19th-century house, Le Piroir, in the village of Vendeuvre-du-Poitou.[9] Together the couple had three children—a girl named Francine and two boys, Paul-Michel and Denys—who all shared the same fair hair and bright blue eyes.[10] The children were raised to be nominal Catholics, attending mass at the Church of Saint-Porchair, and while Michel briefly became an altar boy, none of the family was devout.[11] In later life, Foucault revealed very little about his childhood.[12] Describing himself as a "juvenile delinquent", he said his father was a "bully" who sternly punished him.[13] In 1930, two years early, Foucault began his schooling at the local Lycée Henry-IV. There he undertook two years of elementary education before entering the main lycée, where he stayed until 1936. Afterwards, he took his first four years of secondary education at the same establishment, excelling in French, Greek, Latin, and history, though doing poorly at mathematics, including arithmetic.[14] In 1939, the Second World War began, followed by Nazi Germany's occupation of France in 1940.
Foucault's parents opposed the occupation and the Vichy regime, but did not join the Resistance.[15] That year, Foucault's mother enrolled him in the Collège Saint-Stanislas, a strict Catholic institution run by the Jesuits. Although he later described his years there as an "ordeal", Foucault excelled academically, particularly in philosophy, history, and literature.[16] In 1942, he entered his final year, the terminale, where he focused on the study of philosophy, earning his baccalauréat in 1943.[17] Returning to the local Lycée Henry-IV, he studied history and philosophy for a year,[18] aided by a personal tutor, the philosopher Louis Girard.[19] Rejecting his father's wishes that he become a surgeon, in 1945 Foucault went to Paris, where he enrolled in one of the country's most prestigious secondary schools, which was also known as the Lycée Henri-IV. Here he studied under the philosopher Jean Hyppolite, an existentialist and expert on the work of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel. Hyppolite had devoted himself to uniting existentialist theories with the dialectical theories of Hegel and Karl Marx. These ideas influenced Foucault, who adopted Hyppolite's conviction that philosophy must develop through a study of history.[20]

I wasn't always smart, I was actually very stupid in school ... [T]here was a boy who was very attractive who was even stupider than I was. And to ingratiate myself with this boy who was very beautiful, I began to do his homework for him—and that's how I became smart, I had to do all this work to just keep ahead of him a little bit, to help him. In a sense, all the rest of my life I've been trying to do intellectual things that would attract beautiful boys.

In autumn 1946, attaining excellent results, Foucault was admitted to the élite École Normale Supérieure (ENS), for which he undertook exams and an oral interrogation by Georges Canguilhem and Pierre-Maxime Schuhl to gain entry.
Of the hundred students entering the ENS, Foucault ranked fourth based on his entry results, and encountered the highly competitive nature of the institution. Like most of his classmates, he lived in the school's communal dormitories on the Parisian Rue d'Ulm.[22] He remained largely unpopular, spending much time alone, reading voraciously. His fellow students noted his love of violence and the macabre; he decorated his bedroom with images of torture and war drawn during the Napoleonic Wars by the Spanish artist Francisco Goya, and on one occasion chased a classmate with a dagger.[23] Prone to self-harm, Foucault allegedly attempted suicide in 1948; his father sent him to see the psychiatrist Jean Delay at the Sainte-Anne Hospital Center. Obsessed with the idea of self-mutilation and suicide, Foucault attempted the latter several times in ensuing years, praising suicide in later writings.[24] The ENS's doctor examined Foucault's state of mind, suggesting that his suicidal tendencies emerged from the distress surrounding his homosexuality, because same-sex sexual activity was socially taboo in France.[25] At the time, Foucault engaged in homosexual activity with men whom he encountered in the underground Parisian gay scene, also indulging in drug use; according to biographer James Miller, he enjoyed the thrill and sense of danger that these activities offered him.[26] Although studying various subjects, Foucault soon gravitated towards philosophy, reading not only Hegel and Marx but also Immanuel Kant, Edmund Husserl, and most significantly, Martin Heidegger.[27] He began reading the publications of the philosopher Gaston Bachelard, taking a particular interest in his work exploring the history of science.[28] He graduated from the ENS with a B.A.
(licence) in philosophy in 1948[1] and a Diplôme d'études supérieures (DES, roughly equivalent to an M.A. degree) in philosophy in 1949.[1] His DES thesis, written under the direction of Hyppolite, was titled La Constitution d'un transcendantal dans La Phénoménologie de l'esprit de Hegel (The Constitution of a Historical Transcendental in Hegel's Phenomenology of Spirit).[1] In 1948, the philosopher Louis Althusser became a tutor at the ENS. A Marxist, he influenced both Foucault and a number of other students, encouraging them to join the French Communist Party. Foucault did so in 1950, but never became particularly active in its activities, and never adopted an orthodox Marxist viewpoint, rejecting core Marxist tenets such as class struggle.[29] He soon became dissatisfied with the bigotry that he experienced within the party's ranks; he personally faced homophobia and was appalled by the anti-semitism exhibited during the 1952–53 "doctors' plot" in the Soviet Union. He left the Communist Party in 1953, but remained Althusser's friend and defender for the rest of his life.[30] Although failing at the first attempt in 1950, he passed his agrégation in philosophy on the second try, in 1951.[31] Excused from national service on medical grounds, he decided to start a doctorate at the Fondation Thiers in 1951, focusing on the philosophy of psychology,[32] but he relinquished it after only one year, in 1952.[33] Foucault was also interested in psychology, and he attended Daniel Lagache's lectures at the University of Paris, where he obtained a B.A.
(licence) in psychology in 1949 and a Specialist Diploma in Psychopathology (Diplôme de psychopathologie) from the university's institute of psychology (now called the Institut de psychologie de l'université Paris Descartes) in June 1952.[1] Over the following few years, Foucault embarked on a variety of research and teaching jobs.[34] From 1951 to 1955, he worked as a psychology instructor at the ENS at Althusser's invitation.[35] In Paris, he shared a flat with his brother, who was training to become a surgeon, but for three days a week he commuted to the northern town of Lille, teaching psychology at the Université de Lille from 1953 to 1954.[36] Many of his students liked his lecturing style.[37] Meanwhile, he continued working on his thesis, visiting the Bibliothèque Nationale every day to read the work of psychologists such as Ivan Pavlov, Jean Piaget, and Karl Jaspers.[38] Undertaking research at the psychiatric institute of the Sainte-Anne Hospital, he became an unofficial intern, studying the relationship between doctor and patient and aiding experiments in the electroencephalographic laboratory.[39] Foucault adopted many of the theories of the psychoanalyst Sigmund Freud, undertaking psychoanalytical interpretation of his dreams and persuading friends to undergo Rorschach tests.[40] Embracing the Parisian avant-garde, Foucault entered into a romantic relationship with the serialist composer Jean Barraqué. Together they strove to produce their greatest work, heavily used recreational drugs, and engaged in sado-masochistic sexual activity.[41] In August 1953, Foucault and Barraqué holidayed in Italy, where the philosopher immersed himself in Untimely Meditations (1873–1876), a set of four essays by the philosopher Friedrich Nietzsche.
Later describing Nietzsche's work as "a revelation", he felt that reading the book deeply affected him, a watershed moment in his life.[42] Foucault subsequently experienced another groundbreaking self-revelation when watching a Parisian performance of Samuel Beckett's new play, Waiting for Godot, in 1953.[43] Interested in literature, Foucault was an avid reader of the philosopher Maurice Blanchot's book reviews published in Nouvelle Revue Française. Enamoured of Blanchot's literary style and critical theories, in later works he adopted Blanchot's technique of "interviewing" himself.[44] Foucault also came across Hermann Broch's 1945 novel The Death of Virgil, a work that obsessed both him and Barraqué. While the latter attempted to convert the work into an epic opera, Foucault admired Broch's text for its portrayal of death as an affirmation of life.[45] The couple took a mutual interest in the work of such authors as the Marquis de Sade, Fyodor Dostoyevsky, Franz Kafka, and Jean Genet, all of whose works explored the themes of sex and violence.[46]

I belong to that generation who, as students, had before their eyes, and were limited by, a horizon consisting of Marxism, phenomenology and existentialism. For me the break was first Beckett's Waiting for Godot, a breathtaking performance.

Interested in the work of the Swiss psychologist Ludwig Binswanger, Foucault aided family friend Jacqueline Verdeaux in translating his works into French.
Foucault was particularly interested in Binswanger's studies of Ellen West, who, like himself, had a deep obsession with suicide, eventually killing herself.[48] In 1954, Foucault authored an introduction to Binswanger's paper "Dream and Existence", in which he argued that dreams constituted "the birth of the world" or "the heart laid bare", expressing the mind's deepest desires.[49] That same year, Foucault published his first book, Maladie mentale et personnalité (Mental Illness and Personality), in which he exhibited the influence of both Marxist and Heideggerian thought, covering a wide range of subject matter from the reflex psychology of Pavlov to the classic psychoanalysis of Freud. Referencing the work of sociologists and anthropologists such as Émile Durkheim and Margaret Mead, he presented his theory that illness was culturally relative.[50] Biographer James Miller noted that while the book exhibited "erudition and evident intelligence", it lacked the "kind of fire and flair" that Foucault exhibited in subsequent works.[51] It was largely critically ignored, receiving only one review at the time.[52] Foucault grew to despise it, unsuccessfully attempting to prevent its republication and translation into English.[53] Foucault spent the next five years abroad, first in Sweden, working as a cultural diplomat at the University of Uppsala, a job obtained through his acquaintance with the historian of religion Georges Dumézil.[54] At Uppsala, he was appointed a Reader in French language and literature, while simultaneously working as director of the Maison de France, thus opening the possibility of a cultural-diplomatic career.[55] Although finding it difficult to adjust to the "Nordic gloom" and long winters, he developed close friendships with two Frenchmen, biochemist Jean-François Miquel and physicist Jacques Papet-Lépine, and entered into romantic and sexual relationships with various men.
In Uppsala he became known for his heavy alcohol consumption and reckless driving in his new Jaguar car.[56] In spring 1956, Barraqué broke from his relationship with Foucault, announcing that he wanted to leave the "vertigo of madness".[57] In Uppsala, Foucault spent much of his spare time in the university's Carolina Rediviva library, making use of its Bibliotheca Walleriana collection of texts on the history of medicine for his ongoing research.[58] Finishing his doctoral thesis, Foucault hoped that Uppsala University would accept it, but Sten Lindroth, a positivistic historian of science there, remained unimpressed, asserting that it was full of speculative generalisations and was a poor work of history; he refused to allow Foucault to be awarded a doctorate at Uppsala. In part because of this rejection, Foucault left Sweden.[59] Later, Foucault admitted that the work was a first draft with a certain lack of quality.[60] Again at Dumézil's behest, in October 1958 Foucault arrived in Warsaw, the capital of the Polish People's Republic, and took charge of the University of Warsaw's Centre Français.[61] Foucault found life in Poland difficult due to the lack of material goods and services following the destruction of the Second World War. Witnessing the aftermath of the Polish October of 1956, when students had protested against the governing communist Polish United Workers' Party, he felt that most Poles despised their government as a puppet regime of the Soviet Union, and thought that the system ran "badly".[62] Considering the university a liberal enclave, he traveled the country giving lectures; proving popular, he adopted the position of de facto cultural attaché.[63] Like France and Sweden, Poland legally tolerated but socially frowned on homosexual activity, and Foucault undertook relationships with a number of men; one was with a Polish security agent who hoped to trap Foucault in an embarrassing situation, which would reflect badly on the French embassy.
Caught up in the resulting diplomatic scandal, he was ordered to leave Poland for a new destination.[64] Various positions were available in West Germany, and so Foucault relocated to the Institut français Hamburg (where he served as director in 1958–1960), teaching the same courses he had given in Uppsala and Warsaw.[65][66] Spending much time in the Reeperbahn red-light district, he entered into a relationship with a transvestite.[67]

Histoire de la folie is not an easy text to read, and it defies attempts to summarise its contents. Foucault refers to a bewildering variety of sources, ranging from well-known authors such as Erasmus and Molière to archival documents and forgotten figures in the history of medicine and psychiatry. His erudition derives from years pondering, to cite Poe, 'over many a quaint and curious volume of forgotten lore', and his learning is not always worn lightly.

In West Germany, Foucault completed his primary thesis (thèse principale) for his State doctorate in 1960, titled Folie et déraison: Histoire de la folie à l'âge classique (trans. "Madness and Insanity: History of Madness in the Classical Age"), a philosophical work based upon his studies into the history of medicine. The book discussed how West European society had dealt with madness, arguing that it was a social construct distinct from mental illness. Foucault traces the evolution of the concept of madness through three phases: the Renaissance, the later 17th and 18th centuries, and the modern experience.
The work alludes to the work of the French poet and playwright Antonin Artaud, who exerted a strong influence over Foucault's thought at the time.[69] Histoire de la folie was an expansive work, consisting of 943 pages of text, followed by appendices and a bibliography.[70] Foucault submitted it at the University of Paris, although the university's regulations for awarding a State doctorate required the submission of both a main thesis and a shorter complementary thesis.[71] Obtaining a doctorate in France at the time was a multi-step process. The first step was to obtain a rapporteur, or "sponsor", for the work: Foucault chose Georges Canguilhem.[72] The second was to find a publisher, and as a result Folie et déraison was published in French in May 1961 by the company Plon, which Foucault chose over Presses Universitaires de France after being rejected by Gallimard.[73] In 1964, a heavily abridged version was published as a mass market paperback, then translated into English for publication the following year as Madness and Civilization: A History of Insanity in the Age of Reason.[74] Folie et déraison received a mixed reception in France and in foreign journals focusing on French affairs. Although it was critically acclaimed by Maurice Blanchot, Michel Serres, Roland Barthes, Gaston Bachelard, and Fernand Braudel, it was largely ignored by the leftist press, much to Foucault's disappointment.[75] It was notably criticised for advocating metaphysics by the young philosopher Jacques Derrida in a March 1963 lecture at the University of Paris. Responding with a vicious retort, Foucault criticised Derrida's interpretation of René Descartes.
The two remained bitter rivals until reconciling in 1981.[76] In the English-speaking world, the work became a significant influence on the anti-psychiatry movement during the 1960s; Foucault took a mixed approach to this, associating with a number of anti-psychiatrists but arguing that most of them misunderstood his work.[77] Foucault's secondary thesis (thèse complémentaire), written in Hamburg between 1959 and 1960, was a translation of and commentary on the German philosopher Immanuel Kant's Anthropology from a Pragmatic Point of View (1798);[66] the thesis was titled Introduction à l'Anthropologie.[78] Largely consisting of Foucault's discussion of textual dating—an "archaeology of the Kantian text"—he rounded off the thesis with an evocation of Nietzsche, his biggest philosophical influence.[79] This work's rapporteur was Foucault's old tutor and then-director of the ENS, Hyppolite, who was well acquainted with German philosophy.[70] After both theses were championed and reviewed, he underwent his public defense of his doctoral thesis (soutenance de thèse) on 20 May 1961.[80] The academics responsible for reviewing his work were concerned about the unconventional nature of his major thesis; reviewer Henri Gouhier noted that it was not a conventional work of history, making sweeping generalisations without sufficient particular argument, and that Foucault clearly "thinks in allegories".[81] They all agreed, however, that the overall project was of merit, awarding Foucault his doctorate "despite reservations".[82] In October 1960, Foucault took a tenured post in philosophy at the University of Clermont-Ferrand, commuting to the city every week from Paris,[83] where he lived in a high-rise block on the rue du Dr Finlay.[84] Responsible for teaching psychology, which was subsumed within the philosophy department, he was considered a "fascinating" but "rather traditional" teacher at Clermont.[85] The department was run by Jules Vuillemin, who soon developed a friendship with Foucault.[86] Foucault
then took Vuillemin's job when the latter was elected to the Collège de France in 1962.[87] In this position, Foucault took a dislike to another staff member whom he considered stupid: Roger Garaudy, a senior figure in the Communist Party. Foucault made life at the university difficult for Garaudy, leading the latter to transfer to Poitiers.[88] Foucault also caused controversy by securing a university job for his lover, the philosopher Daniel Defert, with whom he retained a non-monogamous relationship for the rest of his life.[89] Foucault maintained a keen interest in literature, publishing reviews in literary journals, including Tel Quel and Nouvelle Revue Française, and sitting on the editorial board of Critique.[90] In May 1963, he published a book devoted to the poet, novelist, and playwright Raymond Roussel. It was written in under two months, published by Gallimard, and was described by biographer David Macey as "a very personal book" that resulted from a "love affair" with Roussel's work. It was published in English in 1983 as Death and the Labyrinth: The World of Raymond Roussel.[91] Receiving few reviews, it was largely ignored.[92] That same year he published a sequel to Folie et déraison, titled Naissance de la Clinique, subsequently translated as The Birth of the Clinic: An Archaeology of Medical Perception.
Shorter than its predecessor, it focused on the changes that the medical establishment underwent in the late 18th and early 19th centuries.[93] Like his preceding work, Naissance de la Clinique was largely critically ignored, but later gained a cult following.[92] It was of interest within the field of medical ethics, as it considered the ways in which the history of medicine and hospitals, and the training that those working within them receive, bring about a particular way of looking at the body: the "medical gaze".[94] Foucault was also selected to be among the "Eighteen Man Commission" that assembled between November 1963 and March 1964 to discuss university reforms that were to be implemented by Christian Fouchet, the Gaullist Minister of National Education. Implemented in 1967, they brought staff strikes and student protests.[95] In April 1966, Gallimard published Foucault's Les Mots et les choses (Words and Things), later translated as The Order of Things: An Archaeology of the Human Sciences.[96] Exploring how man came to be an object of knowledge, it argued that all periods of history have possessed certain underlying conditions of truth that constituted what was acceptable as scientific discourse. Foucault argues that these conditions of discourse have changed over time, from one period's épistémè to another.[97] Although designed for a specialist audience, the work gained media attention, becoming a surprise bestseller in France.[98] Appearing at the height of interest in structuralism, Foucault was quickly grouped with scholars such as Jacques Lacan, Claude Lévi-Strauss, and Roland Barthes, as the latest wave of thinkers set to topple the existentialism popularized by Jean-Paul Sartre.
Although initially accepting this description, Foucault soon vehemently rejected it, because he "never posited a universal theory of discourse, but rather sought to describe the historical forms taken by discursive practices".[99][100] Foucault and Sartre regularly criticised one another in the press. Both Sartre and Simone de Beauvoir attacked Foucault's ideas as "bourgeois", while Foucault retaliated against their Marxist beliefs by proclaiming that "Marxism exists in nineteenth-century thought as a fish exists in water; that is, it ceases to breathe anywhere else."[101]

I lived [in Tunisia] for two and a half years. It made a real impression. I was present for large, violent student riots that preceded by several weeks what happened in May in France. This was March 1968. The unrest lasted a whole year: strikes, courses suspended, arrests. And in March, a general strike by the students. The police came into the university, beat up the students, wounded several of them seriously, and started making arrests ... I have to say that I was tremendously impressed by those young men and women who took terrible risks by writing or distributing tracts or calling for strikes, the ones who really risked losing their freedom! It was a political experience for me.

In September 1966, Foucault took a position teaching psychology at the University of Tunis in Tunisia. His decision to do so was largely because his lover, Defert, had been posted to the country as part of his national service. Foucault moved a few kilometres from Tunis to the village of Sidi Bou Saïd, where fellow academic Gérard Deledalle lived with his wife. Soon after his arrival, Foucault announced that Tunisia was "blessed by history", a nation which "deserves to live forever because it was where Hannibal and St. Augustine lived".[103] His lectures at the university proved very popular and were well attended.
Although many young students were enthusiastic about his teaching, they were critical of what they believed to be his right-wing political views, viewing him as a "representative of Gaullist technocracy", even though he considered himself a leftist.[104] Foucault was in Tunis during the anti-government and pro-Palestinian riots that rocked the city in June 1967 and continued for a year. Although highly critical of the violent, ultra-nationalistic, and anti-semitic nature of many protesters, he used his status to try to prevent some of his militant leftist students from being arrested and tortured for their role in the agitation. He hid their printing press in his garden and tried to testify on their behalf at their trials, but was prevented when the trials became closed-door events.[105] While in Tunis, Foucault continued to write. Inspired by a correspondence with the surrealist artist René Magritte, Foucault started to write a book about the impressionist artist Édouard Manet, but never completed it.[106] In 1968, Foucault returned to Paris, moving into an apartment on the Rue de Vaugirard.[107] After the May 1968 student protests, Minister of Education Edgar Faure responded by founding new universities with greater autonomy. Most prominent of these was the Centre Expérimental de Vincennes in Vincennes on the outskirts of Paris.
A group of prominent academics was asked to select teachers to run the centre's departments, and Canguilhem recommended Foucault as head of the philosophy department.[108] As a tenured professor at Vincennes, Foucault wanted to obtain "the best in French philosophy today" for his department, employing Michel Serres, Judith Miller, Alain Badiou, Jacques Rancière, François Regnault, Henri Weber, Étienne Balibar, and François Châtelet; most of them were Marxists or ultra-left activists.[109] Lectures began at the university in January 1969, and straight away its students and staff, including Foucault, were involved in occupations and clashes with police, resulting in arrests.[110] In February, Foucault gave a speech denouncing police provocation to protesters at the Maison de la Mutualité.[111] Such actions marked Foucault's embrace of the ultra-left,[112] undoubtedly influenced by Defert, who had gained a job in Vincennes' sociology department and had become a Maoist.[113] Most of the courses at Foucault's philosophy department were Marxist–Leninist in orientation, although Foucault himself gave courses on Nietzsche, "The end of Metaphysics", and "The Discourse of Sexuality", which were highly popular and over-subscribed.[114] While the right-wing press was heavily critical of this new institution, the new Minister of Education, Olivier Guichard, was angered by its ideological bent and the lack of exams, with students being awarded degrees in a haphazard manner. He refused national accreditation of the department's degrees, resulting in a public rebuttal from Foucault.[115] Foucault desired to leave Vincennes and become a fellow of the prestigious Collège de France. He requested to join, taking up a chair in what he called the "history of systems of thought", and his request was championed by members Dumézil, Hyppolite, and Vuillemin.
In November 1969, when an opening became available, Foucault was elected to the Collège, though with opposition from a large minority.[116] He gave his inaugural lecture in December 1970, which was subsequently published as L'Ordre du discours (The Discourse of Language).[117] He was obliged to give 12 weekly lectures a year—and did so for the rest of his life—covering the topics that he was researching at the time; these became "one of the events of Parisian intellectual life" and were repeatedly packed.[118] On Mondays, he also gave seminars to a group of students; many of them became a "Foucauldian tribe" who worked with him on his research. He enjoyed this teamwork and collective research, and together they published a number of short books.[119] Working at the Collège allowed him to travel widely, giving lectures in Brazil, Japan, Canada, and the United States over the next 14 years.[120] In 1970 and 1972, Foucault served as a professor in the French department of the University at Buffalo in Buffalo, New York.[121] In May 1971, Foucault co-founded the Groupe d'Information sur les Prisons (GIP) along with the historian Pierre Vidal-Naquet and the journalist Jean-Marie Domenach. The GIP aimed to investigate and expose poor conditions in prisons and to give prisoners and ex-prisoners a voice in French society.
It was highly critical of the penal system, believing that it converted petty criminals into hardened delinquents.[122] The GIP gave press conferences and staged protests surrounding the events of the Toul prison riot in December 1971, alongside other prison riots that it sparked off; in doing so it faced a police crackdown and repeated arrests.[123] The group became active across France, with 2,000 to 3,000 members, but disbanded before 1974.[124] Also campaigning against the death penalty, Foucault co-authored a short book on the case of the convicted murderer Pierre Rivière.[125] After his research into the penal system, Foucault published Surveiller et punir: Naissance de la prison (Discipline and Punish: The Birth of the Prison) in 1975, offering a history of the system in western Europe. In it, Foucault examines the penal evolution away from corporal and capital punishment towards the penitentiary system that began in Europe and the United States around the end of the 18th century.[126] Biographer Didier Eribon described it as "perhaps the finest" of Foucault's works, and it was well received.[127] Foucault was also active in anti-racist campaigns; in November 1971, he was a leading figure in protests following the perceived racist killing of the Arab migrant Djellali Ben Ali.[citation needed] In this he worked alongside his old rival Sartre, the journalist Claude Mauriac, and one of his literary heroes, Jean Genet. This campaign was formalised as the Committee for the Defence of the Rights of Immigrants, but there was tension at their meetings, as Foucault opposed the anti-Israeli sentiment of many Arab workers and Maoist activists.[128] At a December 1972 protest against the police killing of the Algerian worker Mohammad Diab, both Foucault and Genet were arrested, resulting in widespread publicity.[129] Foucault was also involved in founding the Agence de Presse-Libération (APL), a group of leftist journalists who intended to cover news stories neglected by the mainstream press.
In 1973, they established the daily newspaper Libération. Foucault suggested that they establish committees across France to collect news and distribute the paper, and advocated a column known as the "Chronicle of the Workers' Memory" to allow workers to express their opinions. Foucault wanted an active journalistic role in the paper, but this proved untenable, and he soon became disillusioned with Libération, believing that it distorted the facts; he did not publish in it until 1980.[130] In 1975, he had an LSD experience with Simeon Wade and Michael Stoneman in Death Valley, California, later calling it the greatest experience of his life, one that profoundly changed his life and his work. In front of Zabriskie Point they took LSD while listening to a well-prepared music program: Richard Strauss's Four Last Songs, followed by Charles Ives's Three Places in New England, ending with a few avant-garde pieces by Stockhausen.[131][132] According to Wade, as soon as he came back to Paris, Foucault scrapped the manuscript of the second volume of The History of Sexuality and totally rethought the whole project.[133] In 1976, Gallimard published Foucault's Histoire de la sexualité: la volonté de savoir (The History of Sexuality: The Will to Knowledge), a short book exploring what Foucault called the "repressive hypothesis". It revolved largely around the concept of power, rejecting both Marxist and Freudian theory.
Foucault intended it as the first in a seven-volume exploration of the subject.[134] Histoire de la sexualité was a best-seller in France and gained positive press, but lukewarm intellectual interest, something that upset Foucault, who felt that many misunderstood his hypothesis.[135] He soon became dissatisfied with Gallimard after being offended by senior staff member Pierre Nora.[136] Along with Paul Veyne and François Wahl, Foucault launched a new series of academic books, known as Des travaux (Some Works), through the company Seuil, which he hoped would improve the state of academic research in France.[137] He also produced introductions for the memoirs of Herculine Barbin and My Secret Life.[138]

Foucault's Histoire de la sexualité concentrates on the relation between truth and sex.[139] He defines truth as a system of ordered procedures for the production, distribution, regulation, circulation, and operation of statements.[140] Through this system of truth, power structures are created and enforced. Though Foucault's definition of truth may differ from those of other sociologists before and after him, his work on truth in relation to power structures, such as sexuality, has left a profound mark on social science theory. In his work, he examines the heightened curiosity regarding sexuality that induced a "world of perversion" in the elite, capitalist western world of the 18th and 19th centuries.
According to Foucault in History of Sexuality, the society of the modern age is characterized by the conception of sexual discourses and their union with the system of truth.[139] In the "world of perversion", encompassing extramarital affairs, homosexual behavior, and other such sexual promiscuities, Foucault concludes that sexual relations of this kind are constructed around producing the truth.[141] Sex became not only a means of pleasure, but an issue of truth.[141] Sex is what confines one to darkness, but also what brings one to light.[142]

Similarly, in The History of Sexuality, society validates and approves people based on how closely they fit the discursive mold of sexual truth.[143] As Foucault reminds us, in the 18th and 19th centuries the Church was the epitome of power structure within society. Thus, many aligned their personal virtues with those of the Church, further internalizing their beliefs on the meaning of sex.[143] However, those who unify their sexual relation to the truth become decreasingly obliged to share their internal views with those of the Church. They will no longer see the arrangement of societal norms as an effect of the Church's deep-seated power structure.

There exists an international citizenry that has its rights, and has its duties, and that is committed to rise up against every abuse of power, no matter who the author, no matter who the victims. After all, we are all ruled, and as such, we are in solidarity.

Foucault remained a political activist, focusing on protesting government abuses of human rights around the world. He was a key player in the 1975 protests against the Spanish government, which was set to execute 11 militants sentenced to death without fair trial.
It was his idea to travel to Madrid with six others to give a press conference there; they were subsequently arrested and deported back to Paris.[145] In 1977, he protested the extradition of Klaus Croissant to West Germany, and his rib was fractured during clashes with riot police.[146] In July that year, he organised an assembly of Eastern Bloc dissidents to mark the visit of Soviet general secretary Leonid Brezhnev to Paris.[147] In 1979, he campaigned for Vietnamese political dissidents to be granted asylum in France.[148]

In 1977, the Italian newspaper Corriere della sera asked Foucault to write a column for it. In doing so, in 1978 he travelled to Tehran in Iran, days after the Black Friday massacre. Documenting the developing Iranian Revolution, he met with opposition leaders such as Mohammad Kazem Shariatmadari and Mehdi Bazargan, and discovered the popular support for Islamism.[149] Returning to France, he was one of the journalists who visited the Ayatollah Khomeini, before visiting Tehran again. His articles expressed awe of Khomeini's Islamist movement, for which he was widely criticised in the French press, including by Iranian expatriates. Foucault's response was that Islamism was to become a major political force in the region, and that the West must treat it with respect rather than hostility.[150] In April 1978, Foucault traveled to Japan, where he studied Zen Buddhism under Omori Sogen at the Seionji temple in Uenohara.[120]

Although remaining critical of power relations, Foucault expressed cautious support for the Socialist Party government of François Mitterrand following its electoral victory in 1981.[151] But his support soon deteriorated when that party refused to condemn the Polish government's crackdown on the 1982 demonstrations in Poland orchestrated by the Solidarity trade union.
He and sociologist Pierre Bourdieu authored a document condemning Mitterrand's inaction that was published in Libération, and they also took part in large public protests on the issue.[152] Foucault continued to support Solidarity, and with his friend Simone Signoret traveled to Poland as part of a Médecins du Monde expedition, taking time out to visit the Auschwitz concentration camp.[153] He continued his academic research, and in June 1984 Gallimard published the second and third volumes of Histoire de la sexualité. Volume two, L'Usage des plaisirs, dealt with the "techniques of self" prescribed by ancient Greek pagan morality in relation to sexual ethics, while volume three, Le Souci de soi, explored the same theme in the Greek and Latin texts of the first two centuries CE. A fourth volume, Les Aveux de la chair, was to examine sexuality in early Christianity, but it was not finished.[154]

In October 1980, Foucault became a visiting professor at the University of California, Berkeley, giving the Howison Lectures on "Truth and Subjectivity", while in November he lectured at the Humanities Institute at New York University. His growing popularity in American intellectual circles was noted by Time magazine, and Foucault went on to lecture at the University of California, Los Angeles in 1981, the University of Vermont in 1982, and Berkeley again in 1983, where his lectures drew huge crowds.[155] Foucault spent many evenings in the San Francisco gay scene, frequenting sado-masochistic bathhouses and engaging in unprotected sex. He praised sado-masochistic activity in interviews with the gay press, describing it as "the real creation of new possibilities of pleasure, which people had no idea about previously".[156] Foucault contracted HIV and eventually developed AIDS.
Little was known of the virus at the time; the first cases had only been identified in 1980.[157] Foucault initially referred to AIDS as a "dreamed-up disease".[158] In summer 1983, he developed a persistent dry cough, which concerned friends in Paris, but Foucault insisted it was just a pulmonary infection.[159] Only when hospitalized was Foucault correctly diagnosed as being HIV-positive; treated with antibiotics, he delivered a final set of lectures at the Collège de France.[160] Foucault entered Paris's Hôpital de la Salpêtrière, the same institution that he had studied in Madness and Civilisation, on 10 June 1984, with neurological symptoms complicated by sepsis. He died in the hospital on 25 June.[161]

On 26 June 1984, Libération announced Foucault's death, mentioning the rumour that it had been brought on by AIDS. The following day, Le Monde issued a medical bulletin cleared by his family that made no reference to HIV/AIDS.[162] On 29 June, Foucault's la levée du corps ceremony was held, in which the coffin was carried from the hospital morgue.
Hundreds attended, including activists and academic friends, while Gilles Deleuze gave a speech using excerpts from The History of Sexuality.[163] His body was then buried at Vendeuvre-du-Poitou in a small ceremony.[164] Soon after his death, Foucault's partner Daniel Defert founded the first national HIV/AIDS organisation in France, AIDES, its name a play on the French word for "help" (aide) and the English-language acronym for the disease.[165] On the second anniversary of Foucault's death, Defert publicly revealed in The Advocate that Foucault's death was AIDS-related.[166]

Foucault's first biographer, Didier Eribon, described the philosopher as "a complex, many-sided character", noting that "under one mask there is always another".[167] He also noted that Foucault exhibited an "enormous capacity for work".[168] At the ENS, Foucault's classmates unanimously summed him up as a figure who was both "disconcerting and strange" and "a passionate worker".[169] As he aged, his personality changed: Eribon noted that while he was a "tortured adolescent", after 1960 he became "a radiant man, relaxed and cheerful", even being described by those who worked with him as a dandy.[170] He noted that in 1969, Foucault embodied the idea of "the militant intellectual".[171]

Foucault was an atheist.[172][173] He loved classical music, particularly enjoying the work of Johann Sebastian Bach and Wolfgang Amadeus Mozart,[174] and became known for wearing turtleneck sweaters.[175] After his death, Foucault's friend Georges Dumézil described him as having possessed "a profound kindness and goodness", as well as an "intelligence [that] literally knew no bounds".[176] His life-partner Daniel Defert inherited his estate,[177] whose archive was sold in 2012 to the National Library of France for €3.8 million ($4.5 million in April 2021).[178]

Politically, Foucault was a leftist throughout much of his life, though his particular stance within the left often changed.
In the early 1950s, while never adopting an orthodox Marxist viewpoint, Foucault had been a member of the French Communist Party, leaving the party after three years out of disgust at the prejudice within its ranks against Jews and homosexuals. After spending some time working in Poland, he became further disillusioned with communist ideology. As a result, in the early 1960s Foucault was considered "violently anticommunist" by some of his detractors,[179] even though he was involved in leftist campaigns along with most of his students and colleagues.[180]

Foucault's colleague Pierre Bourdieu summarized the philosopher's thought as "a long exploration of transgression, of going beyond social limits, always inseparably linked to knowledge and power".[181]

The theme that underlies all of Foucault's work is the relationship between power and knowledge, and how the former is used to control and define the latter. What authorities claim as 'scientific knowledge' are, on this view, really just means of social control. Foucault shows how, for instance, in the eighteenth century 'madness' was used to categorize and stigmatize not just the mentally ill but the poor, the sick, the homeless and, indeed, anyone whose expressions of individuality were unwelcome. Philosopher Philip Stokes of the University of Reading noted that, overall, Foucault's work was "dark and pessimistic". It does, however, leave some room for optimism, in that it illustrates how the discipline of philosophy can be used to highlight areas of domination.
In doing so, Stokes claimed, the ways in which we are being dominated become better understood, so that we may strive to build social structures that minimise the risk of domination.[182] In all of this development there had to be close attention to detail; it is the detail which eventually individualizes people.[183]

Later in his life, Foucault explained that his work was less about analyzing power as a phenomenon than about trying to characterize the different ways in which contemporary society has expressed the use of power to "objectivise subjects". These have taken three broad forms: the first involves scientific authority to classify and 'order' knowledge about human populations; the second has been to categorize and 'normalise' human subjects (by identifying madness, illness, physical features, and so on); and the third relates to the manner in which the impulse to fashion sexual identities and train one's own body to engage in routines and practices ends up reproducing certain patterns within a given society.[184]

In addition to his philosophical work, Foucault also wrote on literature. Death and the Labyrinth: The World of Raymond Roussel, published in 1963 and translated into English in 1986, is Foucault's only book-length work on literature. He described it as "by far the book I wrote most easily, with the greatest pleasure, and most rapidly".[185] In it, Foucault explores theory, criticism, and psychology with reference to the texts of Raymond Roussel, one of the first notable experimental writers. Foucault also gave a lecture responding to Roland Barthes' famous essay "The Death of the Author", titled "What Is an Author?"
in 1969, later published in full.[186] According to literary theorist Kornelije Kvas, for Foucault, "denying the existence of a historical author on account of his/her irrelevance for interpretation is absurd, for the author is a function of the text that organizes its sense".[187]

Foucault's analysis of power comes in two forms: empirical and theoretical. The empirical analyses concern themselves with historical (and modern) forms of power and how these emerged from previous forms of power. Foucault describes three types of power in his empirical analyses: sovereign power, disciplinary power, and biopower.[188]

Foucault is generally critical of "theories" that try to give absolute answers to "everything". He therefore considered his own "theory" of power to be closer to a method than to a typical "theory". According to Foucault, most people misunderstand power. For this reason, he makes clear that power cannot be completely described as:[188]

Foucault is not critical of considering these phenomena as "power", but claims that these theories of power cannot completely describe all forms of power. Foucault also claims that a liberal definition of power has effectively hidden other forms of power to the extent that people have uncritically accepted them.[188]

Foucault's power analysis begins at the micro-level, with singular "force relations". Richard A. Lynch defines Foucault's concept of "force relation" as "whatever in one's social interactions that pushes, urges or compels one to do something".[189] According to Foucault, force relations are an effect of difference, inequality or imbalance that exists in other forms of relationships (such as sexual or economic). Force, and power, is however not something that a person or group "holds" (as in the sovereign definition of power); instead, power is a complex group of forces that comes from "everything" and therefore exists everywhere.
That relations of power always result from inequality, difference or imbalance also means that power always has a goal or purpose. Power comes in two forms: tactics and strategies. Tactics are power at the micro-level; a tactic can, for example, be how a person chooses to express themselves through their clothes. Strategies, on the other hand, are power at the macro-level, such as the state of fashion at any given moment. Strategies consist of a combination of tactics. At the same time, power is non-subjective according to Foucault. This poses a paradox, according to Lynch, since "someone" has to exert power, while at the same time there can be no "someone" exerting this power.[188] According to Lynch, this paradox can be resolved with two observations:

According to Foucault, force relations are constantly changing, constantly interacting with other force relations which may weaken, strengthen or change one another. Foucault writes that power always includes resistance, which means there is always a possibility that power and force relations will change in some way. According to Richard A. Lynch, the purpose of Foucault's work on power is to increase people's awareness of how power has shaped their way of being, thinking and acting, and, by increasing this awareness, to make it possible for them to change their way of being, thinking and acting.[188]

With "sovereign power" Foucault alludes to a power structure that is similar to a pyramid, where one person or a group of people (at the top of the pyramid) holds the power, while the "normal" (and oppressed) people are at the bottom of the pyramid. In the middle of the pyramid are the people who enforce the sovereign's orders. A typical example of sovereign power is absolute monarchy.[188]

In historical absolute monarchies, crimes were considered a personal offense against the sovereign and his or her power.
The punishment was often public and spectacular, partly to deter others from committing crimes, but also to reinstate the sovereign's power. This was, however, both expensive and ineffective: far too often it led people to sympathize with the criminal. In modern times, when disciplinary power is dominant, criminals are instead subjected to various disciplinary techniques to "remold" them into "law-abiding citizens".[190]

According to Chloë Taylor, a characteristic of sovereign power is that the sovereign has the right to take life, wealth, services, labor and products. The sovereign has a right to subtract (to take life, to enslave life, and so on), but not the right to control life in the way that later happens in disciplinary systems of power. According to Taylor, the form of power that the philosopher Thomas Hobbes is concerned with is sovereign power. According to Hobbes, people are "free" so long as they are not literally placed in chains.[191]

What Foucault calls "disciplinary power" aims to use bodies' skills as effectively as possible.[192] The more useful the body becomes, the more obedient it also has to become. The purpose of this is not only to use the bodies' skills, but also to prevent these skills from being used to revolt against the power.[192]

Disciplinary power has "individuals" as its object, target and instrument. According to Foucault, the "individual" is, however, a construct created by disciplinary power.[192] Disciplinary power's techniques create a "rational self-control",[193] which in practice means that the disciplinary power is internalized and therefore does not continuously need external force. Foucault says that disciplinary power is primarily not an oppressive form of power, but rather a productive one. Disciplinary power does not oppress interests or desires; instead, it subjects bodies to new patterns of behavior in order to reconstruct their thoughts, desires and interests.
According to Foucault, this happens in factories, schools, hospitals and prisons.[194] Disciplinary power creates a certain type of individual by producing new movements, habits and skills. It focuses on details, on single movements, their timing and speed. It organizes bodies in time and space, and controls every movement for maximal effect. It uses rules, surveillance, exams and controls.[194] The activities follow set plans, whose purpose is to lead the bodies to certain pre-determined goals. The bodies are also combined with each other, to reach a productivity that is greater than the sum of the individual bodies' activities.[192]

Disciplinary power has, according to Foucault, been especially successful due to its use of three technologies: hierarchical observation, normalizing judgement and exams.[192] Through hierarchical observation, bodies become constantly visible to the power. The observation is hierarchical since there is not a single observer, but rather a "hierarchy" of observers. An example of this is the mental asylums of the 19th century, in which the psychiatrist was not the only observer; nurses and auxiliary staff observed as well. From these observations and scientific discourses, a norm is established and used to judge the observed bodies. For disciplinary power to continue to exist, this judgement has to be normalized. Foucault mentions several characteristics of this judgement: (1) all deviations from correct behavior, even small ones, are punished; (2) repeated rule violations are punished extra; (3) exercises are used as a behavior-correcting technique and punishment; (4) rewards are used together with punishment to establish a hierarchy of good and bad behavior/people; (5) ranks, grades and the like are used as both punishment and reward. Examinations combine hierarchical observation with judgement. Exams objectify and individualize the observed bodies by creating extensive documentation about every observed body.
The purpose of the exams is therefore to gather further information about each individual, track their development and compare their results to the norm.[192]

According to Foucault, the "formula" for disciplinary power can be seen in philosopher Jeremy Bentham's plan for the "optimal prison": the panopticon. Such a prison consists of a circular building in which every cell is inhabited by only one prisoner. Every cell has two windows: one to let in light from outside, and one pointing toward the middle of the building, where a tower stands in which a guard can be placed to observe the prisoners. Since the prisoners can never know whether they are being watched at a given moment, they internalize the disciplinary power and regulate their own behavior (as if they were constantly being watched). Foucault says this construction (1) creates individuality by separating prisoners from each other in physical space; (2) ensures that, since the prisoners cannot know if they are being watched at any given moment, they internalize the disciplinary power and regulate their own behavior as if they were always watched; and (3) makes it possible, through surveillance, to create extensive documentation about each prisoner and their behavior. According to Foucault, the panopticon has also been used as a model for other disciplinary institutions, such as the mental asylums of the 19th century.[192]

With "biopower" Foucault refers to power over bios (life): power over populations. Biopower primarily rests on norms which are internalized by people, rather than on external force. It encourages, strengthens, controls, observes, optimizes and organizes the forces below it. Foucault sometimes described biopower as separate from disciplinary power, but at other times he described disciplinary power as an expression of biopower.
Biopower can use disciplinary techniques, but in contrast to disciplinary power its target is populations rather than individuals.[191]

Biopower studies populations with regard to, for example, the number of births, life expectancy, public health, housing, migration, crime, and which social groups are over-represented in deviations from the norm (regarding health, crime, etc.), and it tries to adjust, control or eliminate these norm deviations. One example is the age distribution in a population: biopower is interested in age distribution in order to compensate for future (or current) shortages of labor power, retirement homes, and so on. Yet another example is sex: because sex is connected to population growth, sex and sexuality have been of great interest to biopower. On the disciplinary level, people who engaged in non-reproductive sexual acts have been treated for psychiatric diagnoses such as "perversion", "frigidity" and "sexual dysfunction". On the level of biopower, the usage of contraceptives has been studied, some social groups have (by various means) been encouraged to have children, while others (such as poor, sick, unmarried women, criminals or people with disabilities) have been discouraged or prevented from having children.[191]

In the era of biopower, death has become a scandal and a catastrophe, yet despite this, biopower has, according to Foucault, killed more people than any other form of power has ever done before it. Under sovereign power, the sovereign king could kill people to exert his power or start wars simply to extend his kingdom, but during the era of biopower wars have instead been motivated by an ambition to "protect life itself". Similar motivations have also been used for genocide.
For example, Nazi Germany justified its attempt to eradicate Jews, the mentally ill and the disabled on the grounds that Jews were "a threat to the German health", and that the money spent on healthcare for mentally ill and disabled people would be better spent on "viable Germans". Chloë Taylor also notes that the Iraq War was motivated by similar tenets. The motivation was at first that Iraq was thought to have weapons of mass destruction and connections to Al-Qaeda. However, when the Bush and Blair administrations did not find any evidence to support either of these theories, the motivation for the war changed: the cause of the war was now said to be that Saddam Hussein had committed crimes against his own population. Taylor argues that in modern times, war has to be "concealed" under a rhetoric of humanitarian aid, despite the fact that these wars often cause humanitarian crises.[191]

During the 19th century, slums increased in number and size across the western world. Criminality, illness, alcoholism and prostitution were common in these areas, and the middle class considered the people who lived in these slums "immoral" and "lazy". The middle class also feared that this underclass would sooner or later "take over", because population growth was greater in the slums than in the middle class. This fear gave rise to the scientific study of eugenics, whose founder Francis Galton had been inspired by Charles Darwin and his theory of natural selection. According to Galton, society was preventing natural selection by helping "the weak", thus causing the "negative qualities" to spread into the rest of the population.[191]

According to Foucault, the body is not something objective that stands outside of history and culture. Instead, Foucault argues, the body has been and is continuously shaped by society and history: by work, diet, body ideals, exercise, medical interventions, and so on.
Foucault presents no "theory" of the body, but he does write about it in Discipline and Punish as well as in The History of Sexuality. Foucault was critical of all purely biological explanations of phenomena such as sexuality, madness and criminality. Further, Foucault argues that the body is not sufficient as a basis for self-understanding and understanding of others.[194]

In Discipline and Punish, Foucault shows how power and the body are tied together, for example in disciplinary power's primary focus on individual bodies and their behavior. Foucault argues that power, by manipulating bodies and behavior, also manipulates people's minds. He inverts the Platonic saying "the body is the prison of the soul" (Phaedo, 66a–67d), positing instead that "the soul is the prison of the body".[194]

According to Foucault, sexology has tried to establish itself as a "science" by referring to the material (the body). In contrast, Foucault argues that sexology is a pseudoscience, and that "sex" is a pseudo-scientific idea. For Foucault, the idea of a natural, biologically grounded and fundamental sexuality is a normative historical construct that has also been used as an instrument of power. By describing sex as the biological and fundamental cause of people's gender identity, sexual identity and sexual behavior, power has effectively been able to normalize sexual and gendered behavior. This has made it possible to evaluate, pathologize and "correct" people's sexual and gendered behavior, by comparing bodies' behaviors to the constructed "normal" behavior. For Foucault, a "normal sexuality" is as much of a construct as a "natural sexuality". Therefore, Foucault was also critical of the popular discourse that dominated the debate over sexuality during the 1960s and 1970s. During this time, the popular discourse argued for a "liberation" of sexuality from cultural, moral and capitalist oppression.
Foucault, however, argues that people's opinions about and experiences of sexuality are always a result of cultural and power mechanisms. To "liberate" sexuality from one group of norms only means that another group of norms takes its place. This, however, does not mean that Foucault considers resistance futile. What Foucault argues is rather that it is impossible to become completely free from power, and that there is simply no "natural" sexuality. Power always involves a dimension of resistance, and therefore also a possibility for change. Although Foucault considers it impossible to step outside of power-networks, it is always possible to change these networks or to navigate them differently.[194]

According to Foucault, the body is not only an "obedient and passive object" that is dominated by discourses and power. The body is also the "seed" of resistance against dominant discourses and power techniques. The body is never fully compliant, and experiences can never be fully reduced to linguistic descriptions. There is always a possibility of experiencing something that cannot be described with words, and in this discrepancy there is also a possibility for resistance against dominant discourses.[194]

Foucault's view of the historical construction of the body has influenced many feminist and queer theorists. According to Johanna Oksala, Foucault's influence on queer theory has been so great that he can be considered one of its founders. The fundamental idea behind queer theory is that there is no natural foundation behind identities such as gay, lesbian or heterosexual. Instead, these identities are considered cultural constructions that have been constructed through normative discourses and relations of power. Feminists have, with the help of Foucault's ideas, studied different ways in which women form their bodies: through plastic surgery, diet, eating disorders, etc.
Foucault's historization of sex has also influenced feminist theorists such as Judith Butler, who used Foucault's theories about the relation between subject, power and sex to question gendered subjects. Butler follows Foucault in saying that there is no "true" gender behind gender identity that constitutes its biological and objective foundation. However, Butler is critical of Foucault, arguing that he "naively" presents bodies and pleasures as a ground for resistance against power without extending his historization of sexuality to gendered subjects/bodies. Foucault has received criticism from other feminists, such as Susan Bordo and Kate Soper.[194]

Johanna Oksala argues that Foucault, by saying that sex/sexuality are constructed, does not deny the existence of sexuality. Oksala also argues that the goal of critical theories such as Foucault's is not to liberate the body and sexuality from oppression, but rather to question and deny the identities that are posited as "natural" and "essential", by showing how these identities are historical and cultural constructions.[194]

In May 1977, Foucault signed a petition to the French parliament calling for the lowering of the homosexual age of consent from 21 to 15, to match the heterosexual one.[195][196] In a 1978 broadcast, Foucault argued that children could give sexual consent, saying that "to assume that a child is incapable of explaining what happened and was incapable of giving his consent are two abuses that are intolerable, quite unacceptable."[197][non-primary source needed]

Foucault considered his primary project to be the investigation of how people through history have been made into "subjects".[198] Subjectivity, for Foucault, is not a state of being, but a practice: an active "being".[199] According to Foucault, "the subject" has usually been considered by western philosophers as something given, natural and objective.
On the contrary, Foucault considers subjectivity to be a construction created by power.[198] Foucault speaks of "assujettissement", a French term that for Foucault refers to a process in which power creates subjects while also oppressing them through social norms. For Foucault, "social norms" are standards that people are encouraged to follow, which are also used to compare and define people. As an example of assujettissement, Foucault mentions "the homosexual": a historically contingent type of subjectivity that was created by sexology. Foucault writes that sodomy was previously considered a serious sexual deviation, but a temporary one. Homosexuality, however, became a "species", a past, a childhood and a type of life. "Homosexuals" have been discriminated against by the very power that created this subjectivity, since homosexuality was considered a deviation from the "normal" sexuality. However, Foucault argues, the creation of a subjectivity such as homosexuality does not have only negative consequences for the people who are subjectivised; the subjectivity of homosexuality has also led to the creation of gay bars and the pride parade.[200]

According to Foucault, scientific discourses have played an important role in the disciplinary power system, by classifying and categorizing people, observing their behavior and "treating" them when their behavior has been considered "abnormal". He defines discourse as a form of oppression that does not require physical force. He identifies its production as "controlled, selected, organized and redistributed by a certain number of procedures", driven by individuals' aspiration to knowledge to create "rules" and "systems" that translate into social codes.[201] Moreover, discourse creates a force that extends beyond societal institutions and can be found in social and formal fields such as health care, education and law enforcement.
The formation of these fields may seem to contribute to social development; however, Foucault warns against discourses' harmful effects on society. Sciences such as psychiatry, biology, medicine, economics, psychoanalysis, psychology, sociology, ethnology, pedagogy and criminology have all categorized behaviors as rational, irrational, normal, abnormal, human, inhuman, etc. By doing so, they have all created various types of subjectivity and norms,[193] which are then internalized by people as "truths". People have then adapted their behavior to get closer to what these sciences have labeled as "normal".[194] For example, Foucault claims that psychological observation/surveillance and psychological discourses have created a type of psychology-centered subjectivity, which has led people to consider unhappiness a fault in their psychology rather than in society. This has also, according to Foucault, been a way for society to resist criticism—criticism of society has been turned against the individual and their psychological health.[190]

According to Foucault, subjectivity is not necessarily something that is forced upon people externally—it is also something established in a person's relation to themselves.[199] This can, for example, happen when a person is trying to "find themselves" or "be themselves", something Edward McGushin describes as a typical modern activity. In this quest for the "true self", the self is established on two levels: as a passive object (the "true self" that is searched for) and as an active "searcher". The ancient Cynics and the 19th-century philosopher Friedrich Nietzsche posited that the "true self" can only be found by going through great hardship and/or danger. The ancient Stoics and the 17th-century philosopher René Descartes, however, argued that the "self" can be found through quiet and solitary introspection.
Yet another example is Socrates, who argued that self-awareness can only be reached by having debates with others, in which the debaters question each other's foundational views and opinions. Foucault, however, argued that "subjectivity" is a process rather than a state of being. As such, Foucault argued that there is no "true self" to be found. Rather, the "self" is constituted/created in activities such as the ones employed to "find" the "self". In other words, exposing oneself to hardship and danger does not "reveal" the "true self", according to Foucault, but rather creates a particular type of self and subjectivity. However, according to Foucault, the "form" of the subject is in great part already constituted by power before these self-constituting practices are employed. Schools, workplaces, households, government institutions, entertainment media and the healthcare sector all, through disciplinary power, contribute to forming people into particular types of subjects.[202]

Todd May defines Foucault's concept of freedom as that which we can do of ourselves within our specific historical context. A condition for this, according to Foucault, is that we are aware of our situation and of how it has been created and affected (and is still being affected) by power. According to May, two of the ways in which power has shaped people's way of being, thinking and acting are described in the books where Foucault deals with disciplinary power and the history of sexuality.
However, May argues, there will always be aspects of people's formation that remain unknown to them, hence the constant necessity for the type of analyses that Foucault carried out.[190] Foucault argues that the forces that have affected people can be changed; people always have the capacity to change the factors that limit their freedom.[190] Freedom is thus not a state of being, but a practice—a way of being in relation to oneself, to others and to the world.[203] According to Todd May, Foucault's concept of freedom also includes constructing histories like the ones Foucault wrote about disciplinary power and sexuality—histories that investigate and describe the forces that have influenced people into becoming who they are. From the knowledge reached through such investigations, people can then decide which forces they believe are acceptable and which they consider intolerable and in need of change. Freedom is for Foucault a type of "experimentation" with different "transformations". Since these experiments cannot be controlled completely, May argues, they may lead either to the reconstruction of intolerable power relations or to the creation of new ones. Thus, May argues, it is always necessary to continue with such experimentation and Foucauldian analyses.[190]

Foucault's "alternative" to modern subjectivity is described by Cressida Heyes as "critique". For Foucault there are no "good" and "bad" forms of subjectivity, since they are all the result of power relations.[200] In the same way, Foucault argues there are no "good" and "bad" norms. All norms and institutions are enabling and oppressing at the same time. Therefore, Foucault argues, it is always crucial to continue the practice of "critique".[199] Critique is for Foucault a practice that searches for the processes and events that led to our way of being—a questioning of who we "are" and of how this "we" came to be.
Such a "critical ontology of the present" shows that people's current "being" is in fact a historically contingent, unstable and changeable construction. Foucault emphasizes that since the current way of being is not a necessity, it is also possible to change it.[203] Critique also includes investigating how and when people are enabled and when they are oppressed by the current norms and institutions, finding ways to reduce limitations on freedom, resisting normalization and developing new and different ways of relating to oneself and others. Foucault argues that it is impossible to go beyond power relations, but that it is always possible to navigate power relations in a different way.[199]

As an alternative to the modern "search" for the "true self",[202] and as a part of "the work of freedom",[203] Foucault discusses the ancient Greek term epimeleia heautou, "care for the self" (ἐπιμέλεια ἑαυτοῦ). According to Foucault, among the ancient Greek philosophers self-awareness was not a goal in itself, but rather something sought in order to "care for oneself". Care for the self consists of what Foucault calls "the art of living" or "technologies of the self".[202] The goal of these techniques was, according to Foucault, to transform oneself into a more ethical person. As examples, Foucault mentions meditation,[193] the Stoic activity of contemplating past and future actions and evaluating whether these actions are in line with one's values and goals, and the "contemplation of nature", another Stoic activity that consists of reflecting on how "small" one's existence is compared with the greater cosmos.[202]

Foucault is described by Mary Beth Mader as an epistemological constructivist and historicist.[204] Foucault is critical of the idea that humans can reach "absolute" knowledge about the world.
A fundamental goal in many of Foucault's works is to show that what has traditionally been considered absolute, universal and true is in fact historically contingent. To Foucault, even the idea of absolute knowledge is a historically contingent idea. This does not, however, lead to epistemological nihilism; rather, Foucault argues that we "always begin anew" when it comes to knowledge.[198] At the same time, Foucault is critical of modern western philosophy for lacking "spirituality". By "spirituality" Foucault refers to a certain type of ethical being and the processes that lead to that state. Foucault argues that such a spirituality was a natural part of ancient Greek philosophy, in which knowledge was considered accessible only to those who had an ethical character. According to Foucault, this changed in the "Cartesian moment", when René Descartes reached the "insight" that self-awareness was something given (Cogito, ergo sum, "I think, therefore I am") and from this "insight" drew conclusions about God, the world and knowledge. According to Foucault, since Descartes knowledge has been something separate from ethics. In modern times, Foucault argues, anyone can reach "knowledge", as long as they are rational beings, educated, willing to participate in the scientific community and using a scientific method. Foucault is critical of this "modern" view of knowledge.[205]

Foucault describes two types of "knowledge": "savoir" and "connaissance", two French terms that can both be translated as "knowledge" but that have separate meanings for Foucault. By "savoir" Foucault refers to a process in which subjects are created, while at the same time these subjects also become objects of knowledge. An example of this can be seen in criminology and psychiatry. In these sciences, subjects such as "the rational person", "the mentally ill person", "the law-abiding person", "the criminal", etc.
are created, and these sciences center their attention and knowledge on these subjects. The knowledge about these subjects is "connaissance", while the process in which subjects and knowledge are created is "savoir".[204] A similar term in Foucault's corpus is "pouvoir/savoir" (power/knowledge). With this term Foucault refers to a type of knowledge that is considered "common sense", but that is created and held in that position (as "common sense") by power. The term power/knowledge comes from Jeremy Bentham's idea that panopticons would not only be prisons, but would be used for experiments in which the criminals' behaviour would be studied. Power/knowledge thus refers to forms of power in which power compares individuals, measures differences, establishes a norm and then forces this norm onto the subjects. This is especially successful when the established norm is internalized and institutionalized (by "institutionalized" Foucault means that the norm is omnipresent), for then the norm has effectively become a part of people's "common sense"—the "obvious", the "given", the "natural". When this has happened, this "common sense" also affects explicit (scientific) knowledge, Foucault argues.

Ellen K. Feder states that the premise "the world consists of women and men" is an example of this. This premise, Feder argues, has been considered "common sense" and has led to the creation of the psychiatric diagnosis gender identity disorder (GID). For example, during the 1970s, children whose behavior was not considered appropriate for their gender were diagnosed with GID. The treatment then consisted of trying to make the child adapt to the prevailing gender norms.
Feder argues that this is an example of power/knowledge, since psychiatry, from the "common sense" premise "the world consists of women and men" (a premise upheld in this status by power), created a new diagnosis, a new type of subject and a whole body of knowledge surrounding this new subject.[206]

Foucault's works have exercised a powerful influence over numerous humanistic and social scientific disciplines, and he is one of the most influential and controversial scholars of the post-World War II period.[207][208] According to a London School of Economics analysis in 2016, his works Discipline and Punish and The History of Sexuality were among the 25 most cited books in the social sciences of all time, at just over 100,000 citations.[209] In 2007, Foucault was listed as the single most cited scholar in the humanities by the ISI Web of Science among a large quantity of French philosophers, the compilation's author commenting that "What this says of modern scholarship is for the reader to decide—and it is imagined that judgments will vary from admiration to despair, depending on one's view".[210]

According to Gary Gutting, Foucault's "detailed historical remarks on the emergence of disciplinary and regulatory biopower have been widely influential".[211] Leo Bersani wrote that: "[Foucault] is our most brilliant philosopher of power. More originally than any other contemporary thinker, he has attempted to define the historical constraints under which we live, at the same time that he has been anxious to account for—if possible, even to locate—the points at which we might resist those constraints and counter some of the moves of power.
In the present climate of cynical disgust with the exercise of political power, Foucault's importance can hardly be exaggerated."[212]

Foucault's work on "biopower" has been widely influential within the disciplines of philosophy and political theory, particularly for such authors as Giorgio Agamben, Roberto Esposito, Antonio Negri and Michael Hardt.[213] His discussions of power and discourse have inspired many critical theorists, who believe that Foucault's analysis of power structures could aid the struggle against inequality. They claim that through discourse analysis, hierarchies may be uncovered and questioned by analyzing the corresponding fields of knowledge through which they are legitimated. This is one of the ways Foucault's work is linked to critical theory.[214] His work Discipline and Punish influenced his friend and contemporary Gilles Deleuze, who published the paper "Postscript on the Societies of Control", praising Foucault's work but arguing that contemporary western society has in fact developed from a "disciplinary society" into a "society of control".[215] Deleuze went on to publish a book dedicated to Foucault's thought in 1988 under the title Foucault.
Foucault's discussions of the relationship between power and knowledge have influenced postcolonial critiques in explaining the discursive formation of colonialism, particularly in Edward Said's work Orientalism.[216] Foucault's work has been compared to that of Erving Goffman by the sociologists Michael Hviid Jacobsen and Soren Kristiansen, who list Goffman as an influence on Foucault.[217] Foucault's writings, particularly The History of Sexuality, have also been very influential in feminist philosophy and queer theory, particularly the work of the major feminist scholar Judith Butler, due to his theories regarding the genealogy of maleness and femaleness, power, sexuality and bodies.[207]

Douglas Murray, writing in his book The War on the West, argued that "Foucault's obsessive analysis of everything through a quasi-Marxist lens of power relations diminished almost everything in society into a transactional, punitive and meaningless dystopia".[218]

A prominent critique of Foucault's thought concerns his refusal to propose positive solutions to the social and political issues that he critiques. Since no human relation is devoid of power, freedom becomes elusive—even as an ideal. This stance, which critiques normativity as socially constructed and contingent but relies on an implicit norm to mount the critique, led the philosopher Jürgen Habermas to describe Foucault's thinking as "crypto-normativist", covertly reliant on the very Enlightenment principles he attempts to argue against.[219] A similar critique has been advanced by Diana Taylor and by Nancy Fraser, who argues that "Foucault's critique encompasses traditional moral systems, he denies himself recourse to concepts such as 'freedom' and 'justice', and therefore lacks the ability to generate positive alternatives."[220]

The philosopher Richard Rorty has argued that Foucault's "archaeology of knowledge" is fundamentally negative, and thus fails to adequately establish any "new" theory of knowledge per se.
Rather, Foucault simply provides a few valuable maxims regarding the reading of history. Rorty writes:

As far as I can see, all he has to offer are brilliant redescriptions of the past, supplemented by helpful hints on how to avoid being trapped by old historiographical assumptions. These hints consist largely of saying: "do not look for progress or meaning in history; do not see the history of a given activity, of any segment of culture, as the development of rationality or of freedom; do not use any philosophical vocabulary to characterize the essence of such activity or the goal it serves; do not assume that the way this activity is presently conducted gives any clue to the goals it served in the past".[221]

Genealogy emerged in Foucault's writing in the early 1970s as a corrective to what he saw as deficiencies in the previous methodology, which he described as archaeology. Specifically, genealogy gave Foucault a method to explain how "the epistemological succession of discursive formations" could occur within the methodology of the older archaeology.[222]

Foucault has frequently been criticized by historians for what they consider to be a lack of rigor in his analyses.[223] For example, Hans-Ulrich Wehler harshly criticized Foucault in 1998.[224] According to Wehler, Foucault's works are not only insufficient in their empirical historical aspects, but also often contradictory and lacking in clarity. For example, Foucault's concept of power is "desperately undifferentiated", and Foucault's thesis of a "disciplinary society" is, according to Wehler, only possible because Foucault does not properly differentiate between authority, force, power, violence and legitimacy.[225]

T'Jampens and Versieren, on the basis of an account by Gilles Deleuze, describe Foucault as having experienced a severe personal crisis at the start of his "Il faut défendre la société" lecture series.
They describe this crisis as the culmination of Foucault's own doubts regarding "certain key aspects of the genealogical method." They describe these lectures as the construction of a dialectic between the older archaeological method and the newer genealogical one, using the methodological tools of the first to correct Foucault's self-identified flaws in the second.[222]

Though American feminists have built on Foucault's critiques of the historical construction of gender roles and sexuality, some feminists note the limitations of the masculinist subjectivity and ethical orientation that he describes.[226] A related issue raised by the scholars Elizabeth Povinelli and Kathryn Yusoff is the almost complete absence of any discussion of race in his writings. Yusoff (2018, p. 211) says "Povinelli draws our attention to the provinciality of Foucault's project in its conceptualization of a Western European genealogy".[227]

The philosopher Roger Scruton argues in Sexual Desire (1986) that Foucault was incorrect to claim, in The History of Sexuality, that sexual morality is culturally relative. He criticizes Foucault for assuming that there could be societies in which a "problematisation" of the sexual did not occur, concluding that "No history of thought could show the 'problematisation' of sexual experience to be peculiar to certain specific social formations: it is characteristic of personal experience generally, and therefore of every genuine social order."[228]

Foucault's approach to sexuality, which he sees as socially constructed, has become influential in queer theory. Foucault's resistance to identity politics, and his rejection of the psychoanalytic concept of "object choice", stand at odds with some theories of queer identity.[226]

Foucault is sometimes criticized for his purported social constructionism, which some see as an affront to the concept of truth.
In Foucault's 1971 televised debate with Noam Chomsky, Foucault argued against the possibility of any fixed human nature, as posited by Chomsky's concept of innate human faculties. Chomsky argued that concepts of justice were rooted in human reason, whereas Foucault rejected any universal basis for a concept of justice.[229] Following the debate, Chomsky was struck by Foucault's total rejection of the possibility of a universal morality, stating "He struck me as completely amoral, I'd never met anyone who was so totally amoral [...] I mean, I liked him personally, it's just that I couldn't make sense of him. It's as if he was from a different species, or something."[230]

The Peruvian writer Mario Vargas Llosa, while acknowledging that Foucault contributed to giving a right of citizenship in cultural life to certain marginal and eccentric experiences (of sexuality, of cultural repression, of madness), asserts that his radical critique of authority was detrimental to education.[231]

One of Foucault's claims regarding the subjectivity of the self has been disputed. Opposing Foucault's view of subjectivity, Terje Sparby, Friedrich Edelhäuser and Ulrich W. Weger argue that other factors, such as biological, environmental and cultural ones, explain the self.[232]

Jean Baudrillard's 1977 tract Oublier Foucault (trans. Forget Foucault) made Baudrillard instantly infamous within France, as it was a devastating critical analysis of Foucault's book The History of Sexuality—and of Foucault's entire oeuvre. In 1976, Baudrillard had sent the essay to the French magazine Critique, where Michel Foucault was an editor. Foucault was asked to reply, but remained silent. In it Baudrillard also launched an attack on those philosophers, like Gilles Deleuze and Félix Guattari, who believed that sexual desire and sexual liberation could be a revolutionary force.
Baudrillard asserted that "Foucault's discourse is a mirror of the powers it describes."[233]: 30 Since "it is possible at last to talk with such definitive understanding about power, sexuality, the body, and discipline [...] it is because at some point all this is here and now over with." Therefore, with "the coincidence between this new version of power and the new version of desire proposed by Deleuze and Lyotard [...] [which was] not accidental: it's simply that in Foucault power takes the place of desire [...] That is why there is no desire in Foucault: its place is already taken [...] When power blends into desire and desire blends into power, let's forget them both."[233]

Oublier Foucault was published in English in 1998 as part of the Semiotext(e) Foreign Agents series as Forget Foucault.
https://en.wikipedia.org/wiki/Michel_Foucault
Fear is an unpleasant emotion that arises in response to perceived dangers or threats. Fear causes physiological and psychological changes. It may produce behavioral reactions such as mounting an aggressive response or fleeing the threat, commonly known as the fight-or-flight response. Extreme cases of fear can trigger an immobilized freeze response. Fear in humans can occur in response to a present stimulus or in anticipation of a future threat. Fear is involved in some mental disorders, particularly anxiety disorders.

In humans and other animals, fear is modulated by cognition and learning. Thus, fear is judged as rational and appropriate, or irrational and inappropriate. Irrational fears are phobias. Fear is closely related to the emotion anxiety, which occurs as the result of threats, often future ones, that are perceived to be uncontrollable or unavoidable.[1] The fear response serves survival and has been preserved throughout evolution.[2] Even simple invertebrates display an emotion "akin to fear."[3] Research suggests that fears are not solely dependent on their nature but are also shaped by social relations and culture, which guide an individual's understanding of when and how to fear.[4]

Many physiological changes in the body are associated with fear, summarized as the fight-or-flight response. An innate response for coping with danger, it works by accelerating the breathing rate (hyperventilation) and heart rate, constricting the peripheral blood vessels (leading to blood pooling), dilating the pupils, increasing muscle tension (including contraction of the muscles attached to each hair follicle, causing "goosebumps" or, more clinically, piloerection, which makes a cold person warmer or a frightened animal look more impressive), sweating, increasing blood glucose (hyperglycemia) and serum calcium, increasing the white blood cells called neutrophilic leukocytes, and heightening alertness, leading to sleep disturbance and "butterflies in the stomach" (dyspepsia).
This primitive mechanism may help an organism survive by either running away from or fighting the danger.[5] With this series of physiological changes, the consciousness registers an emotion of fear.

There are observable physical reactions in individuals who experience fear. An individual might experience dizziness, lightheadedness, a choking sensation, sweating, shortness of breath, vomiting or nausea, numbness or shaking, and other similar symptoms. These bodily reactions inform individuals that they are afraid and should remove or get away from the stimulus that is causing that fear.[6]

An influential categorization of fear-causing stimuli was proposed by the psychologist Jeffrey Alan Gray:[7] intensity, novelty, special evolutionary dangers, stimuli arising during social interaction, and conditioned stimuli.[8] Another categorization was proposed by Archer,[9] who, besides conditioned fear stimuli, categorized fear-evoking (as well as aggression-evoking) stimuli into three groups: pain, novelty and frustration, although he also described "looming", which refers to an object rapidly moving towards the visual sensors of a subject and can be categorized as "intensity". Russell[10] described a more functional categorization of fear-evoking stimuli, in which, for instance, novelty is a variable affecting more than one category: 1) predator stimuli (including movement, suddenness, proximity, but also learned and innate predator stimuli); 2) physical environmental dangers (including intensity and heights); 3) stimuli associated with increased risk of predation and other dangers (including novelty, openness, illumination and being alone); 4) stimuli stemming from conspecifics (including novelty, movement and spacing behavior); 5) species-predictable fear stimuli and experience (special evolutionary dangers); and 6) fear stimuli that are not species-predictable (conditioned fear stimuli).
Although many fears are learned, the capacity to fear is part of human nature. Many studies[11] have found that certain fears (e.g. of animals or heights) are much more common than others (e.g. of flowers or clouds). These fears are also easier to induce in the laboratory. This phenomenon is known as preparedness. Because early humans who were quick to fear dangerous situations were more likely to survive and reproduce, preparedness is theorized to be a genetic effect resulting from natural selection.[12]

From an evolutionary psychology perspective, different fears may be different adaptations that were useful in our evolutionary past. They may have developed during different time periods. Some fears, such as fear of heights, may be common to all mammals and may have developed during the Mesozoic period. Other fears, such as fear of snakes, may be common to all simians and may have developed during the Cenozoic period (the still-ongoing geological era encompassing the last 66 million years of history). Still others, such as fear of mice and insects, may be unique to humans and may have developed during the Paleolithic and Neolithic periods (when mice and insects became important carriers of infectious diseases and harmful to crops and stored foods).[13]

Nonhuman animals and humans develop specific fears as a result of learning. This has been studied in psychology as fear conditioning, beginning with John B. Watson's Little Albert experiment in 1920, which was inspired by the observation of a child with an irrational fear of dogs. In this study, an 11-month-old boy was conditioned to fear a white rat in the laboratory. The fear became generalized to include other white, furry objects, such as a rabbit, a dog, and even a Santa Claus mask with white cotton balls in the beard. Fear can be learned by experiencing or watching a frightening traumatic accident.
For example, a child who falls into a well and struggles to get out may develop a fear of wells, heights (acrophobia), enclosed spaces (claustrophobia), or water (aquaphobia). Studies have looked at the areas of the brain that are involved in fear. When examining these areas (such as the amygdala), it was proposed that a person learns to fear regardless of whether they have experienced trauma themselves or have observed fear in others. In a study by Andreas Olsson, Katherine I. Nearing and Elizabeth A. Phelps, the amygdala was affected both when subjects observed someone else being subjected to an aversive event, knowing that the same treatment awaited them, and when subjects were subsequently placed in a fear-provoking situation.[14] This suggests that fear can develop in both conditions, not just from personal history.

Fear is affected by cultural and historical context. For example, in the early 20th century, many Americans feared polio, a disease that can lead to paralysis.[15] There are consistent cross-cultural differences in how people respond to fear.[16] Display rules affect how likely people are to express the facial expression of fear and other emotions.
Fear of victimization is a function of perceived risk and of the seriousness of the potential harm.[17]

According to surveys, some of the most common fears are of demons and ghosts, the existence of evil powers, cockroaches, spiders, snakes, heights, water, enclosed spaces, tunnels, bridges, needles, social rejection, failure, examinations, and public speaking.[18][19][20] Regionally, some may be more fearful of terrorist attacks, death, war, criminal or gang violence, being alone, the future, nuclear war,[21] flying, clowns, intimacy, people, and driving.[22]

Fear of the unknown, or irrational fear, is caused by negative thinking (worry) arising from anxiety, accompanied by a subjective sense of apprehension or dread.[23] Irrational fear shares a common neural pathway with other fears, a pathway that engages the nervous system to mobilize bodily resources in the face of danger or threat. Many people are scared of the "unknown". Irrational fear can branch out to many areas, such as the hereafter, the next ten years or even tomorrow. Chronic irrational fear has deleterious effects, since the eliciting stimulus is commonly absent or arises from delusions. Such fear can create comorbidity with the anxiety disorder umbrella.[24] Being scared may cause people to experience anticipatory fear of what may lie ahead rather than planning for and evaluating it. For example, "continuation of scholarly education" is perceived by many educators as a risk that may cause them fear and stress,[25] and they would rather teach things they have been taught than go and do research.[citation needed]

The ambiguity of situations that tend to be uncertain and unpredictable can cause anxiety, in addition to other psychological and physical problems, in some populations, especially those who engage with it constantly, for example in war-ridden places or places of conflict, terrorism, abuse, etc. Poor parenting that instills fear can also debilitate a child's psychological development or personality.
For example, parents tell their children not to talk to strangers in order to protect them. In school, children would instead be encouraged not to show fear in talking with strangers, but to be assertive and also aware of the risks and the environment in which the interaction takes place. Ambiguous and mixed messages like this can affect their self-esteem and self-confidence. Researchers say talking to strangers is not something to be thwarted but something to be allowed in a parent's presence if required.[26] Developing a sense of equanimity to handle various situations is often advocated as an antidote to irrational fear and as an essential skill by a number of ancient philosophies. Fear of the unknown (FOTU) "may be a, or possibly the, fundamental fear", dating from early times when there were many threats to life.[27]

Although fear behavior varies from species to species, it is often divided into two main categories: avoidance/flight and immobility.[9] To these, different researchers have added further categories, such as threat display and attack,[28] protective responses (including startle and looming responses),[29] defensive burying,[30] and social responses (including alarm vocalizations and submission).[28][31] Finally, immobility is often divided into freezing and tonic immobility.[28][31]

The decision as to which particular fear behavior to perform is determined by the level of fear as well as the specific context, such as environmental characteristics (presence of an escape route, distance to refuge), the presence of a discrete and localized threat, the distance between threat and subject, threat characteristics (speed, size, directness of approach), the characteristics of the subject under threat (size, physical condition, speed, degree of crypsis, protective morphological structures), social conditions (group size), and the amount of experience with the type of threat.[8][9][31][32][33]

Laboratory studies with rats are often conducted to examine the acquisition and extinction of conditioned fear responses.[34] In
2004, researchers conditioned rats (Rattus norvegicus) to fear a certain stimulus, through electric shock.[35]The researchers were able to then cause an extinction of this conditioned fear, to a point that no medications or drugs were able to further aid in the extinction process. The rats showed signs of avoidance learning, not fear, but simply avoiding the area that brought pain to the test rats. The avoidance learning of rats is seen as aconditioned response, and therefore the behavior can be unconditioned, as supported by the earlier research. Species-specific defense reactions (SSDRs) oravoidance learningin nature is the specific tendency to avoid certain threats or stimuli, it is how animals survive in the wild. Humans and animals both share these species-specific defense reactions, such as the flight-or-fight, which also include pseudo-aggression, fake or intimidating aggression and freeze response to threats, which is controlled by thesympathetic nervous system. These SSDRs are learned very quickly through social interactions between others of the same species, other species, and interaction with the environment.[36]These acquired sets of reactions or responses are not easily forgotten. The animal that survives is the animal that already knows what to fear and how to avoid this threat. An example in humans is the reaction to the sight of a snake, many jump backwards before cognitively realizing what they are jumping away from, and in some cases, it is a stick rather than a snake. As with many functions of the brain, there are various regions of the brain involved in deciphering fear in humans and other nonhuman species.[37]Theamygdalacommunicates both directions between theprefrontal cortex,hypothalamus, thesensory cortex, thehippocampus,thalamus,septum, and thebrainstem. 
The amygdala plays an important role in SSDRs: the ventral amygdalofugal pathway, for example, is essential for associative learning, and SSDRs are learned through interaction with the environment and with others of the same species. An emotional response is created only after the signals have been relayed between the different regions of the brain and the sympathetic nervous system has been activated; this system controls the flight, fight, freeze, fright, and faint responses.[38][39] A damaged amygdala can cause impairment in the recognition of fear (as in the human case of patient S.M.).[40] This impairment can cause different species to lack the sensation of fear and often to become overly confident, confronting larger peers or walking up to predatory creatures.

Robert C. Bolles (1970), a researcher at the University of Washington, wanted to understand species-specific defense reactions and avoidance learning among animals, but found that the theories of avoidance learning and the tools used to measure this tendency were out of touch with the natural world.[41] He theorized the species-specific defense reaction (SSDR).[42] There are three forms of SSDRs: flight, fight (pseudo-aggression), or freeze. Even domesticated animals have SSDRs, and in those moments animals are seen to revert to atavistic standards and become "wild" again. Bolles stated that responses are often dependent on the reinforcement of a safety signal, and not on the aversive conditioned stimuli. This safety signal can be a source of feedback or even a stimulus change. Intrinsic feedback, or information coming from within (muscle twitches, an increased heart rate), is seen to be more important in SSDRs than extrinsic feedback (stimuli that come from the external environment). Bolles found that most creatures have some intrinsic set of fears to help ensure survival of the species. Rats will run away from any shocking event, and pigeons will flap their wings harder when threatened.
The wing flapping in pigeons and the scattered running of rats are considered species-specific defense reactions or behaviors. Bolles believed that SSDRs are conditioned through Pavlovian conditioning, not operant conditioning; SSDRs arise from the association between environmental stimuli and adverse events.[43] Michael S. Fanselow conducted an experiment to test some specific defense reactions; he observed that rats in two different shock situations responded differently, based on instinct or defensive topography rather than contextual information.[44]

Species-specific defense responses are created out of fear and are essential for survival.[45] Rats that lack the gene stathmin show no avoidance learning, or a lack of fear, and will often walk directly up to cats and be eaten.[46] Animals use these SSDRs to continue living and to help increase their chance of fitness by surviving long enough to procreate. Humans and animals alike have developed fear to know what should be avoided, and this fear can be learned through association with others in the community, or through personal experience with a creature, species, or situation that should be avoided. SSDRs are an evolutionary adaptation seen in many species throughout the world, including rats, chimpanzees, prairie dogs, and even humans, created to help individual creatures survive in a hostile world.

Fear learning changes across the lifetime due to natural developmental changes in the brain.[47][48] This includes changes in the prefrontal cortex and the amygdala.[49]

The visual exploration of an emotional face does not follow a fixed pattern but is modulated by the emotional content of the face. Scheller et al.[50] found that participants paid more attention to the eyes when recognising fearful or neutral faces, while the mouth was fixated on when happy faces were presented, irrespective of task demands and the spatial locations of the face stimuli.
These findings were replicated when fearful eyes were presented[51] and when canonical face configurations were distorted for fearful, neutral and happy expressions.[52]

The brain structures that are the center of most neurobiological events associated with fear are the two amygdalae, located behind the pituitary gland. Each amygdala is part of a circuitry of fear learning.[2] They are essential for proper adaptation to stress and specific modulation of emotional learning and memory. In the presence of a threatening stimulus, the amygdalae generate the secretion of hormones that influence fear and aggression.[53] Once a response to the stimulus in the form of fear or aggression commences, the amygdalae may elicit the release of hormones into the body to put the person into a state of alertness, in which they are ready to move, run, fight, etc. This defensive response is generally referred to in physiology as the fight-or-flight response, regulated by the hypothalamus, part of the limbic system.[54] Once the person is in safe mode, meaning there are no longer any potential threats surrounding them, the amygdalae will send this information to the medial prefrontal cortex (mPFC), where it is stored for similar future situations, a process known as memory consolidation.[55]

Some of the hormones involved in the state of fight-or-flight include epinephrine, which regulates heart rate and metabolism as well as dilating blood vessels and air passages; norepinephrine, which increases heart rate, blood flow to skeletal muscles and the release of glucose from energy stores;[56] and cortisol, which increases blood sugar and circulating neutrophilic leukocytes and calcium, amongst other things.[57]

After a situation which incites fear occurs, the amygdalae and hippocampus record the event through synaptic plasticity.[58] The stimulation of the hippocampus will cause the individual to remember many details surrounding the situation.[59] Plasticity and memory formation in the amygdala are generated by activation of the neurons in the region. Experimental data support the notion that synaptic plasticity of the neurons leading to the lateral amygdalae occurs with fear conditioning.[60] In some cases, this forms permanent fear responses such as post-traumatic stress disorder (PTSD) or a phobia.[61] MRI and fMRI scans have shown that the amygdalae in individuals diagnosed with such disorders, including bipolar or panic disorder, are larger and wired for a higher level of fear.[62]

Pathogens can suppress amygdala activity. Rats infected with the toxoplasmosis parasite become less fearful of cats, sometimes even seeking out their urine-marked areas. This behavior often leads to them being eaten by cats. The parasite then reproduces within the body of the cat. There is evidence that the parasite concentrates itself in the amygdala of infected rats.[63] In a separate experiment, rats with lesions in the amygdala did not express fear or anxiety towards unwanted stimuli. These rats pulled on levers supplying food that sometimes sent out electrical shocks. While they learned to avoid pressing on them, they did not distance themselves from these shock-inducing levers.[64]

Several brain structures other than the amygdalae have also been observed to be activated when individuals are presented with fearful versus neutral faces, namely the occipitocerebellar regions, including the fusiform gyrus and the inferior parietal/superior temporal gyri.[65] Fearful eyes, brows and mouth seem to separately reproduce these brain responses.[65] Studies by scientists from Zurich show that the hormone oxytocin, which is related to stress and sex, reduces activity in the brain's fear center.[66]

In threatening situations, insects, aquatic organisms, birds, reptiles, and mammals emit odorant substances, initially called alarm substances, which are chemical signals now called alarm pheromones.
This serves both to defend themselves and to inform members of the same species of danger, and it leads to observable behavior changes like freezing, defensive behavior, or dispersion, depending on circumstances and species. For example, stressed rats release odorant cues that cause other rats to move away from the source of the signal.

After the discovery of pheromones in 1959, alarm pheromones were first described in 1968 in ants[67] and earthworms,[68] and four years later they were also found in mammals, both mice and rats.[69] Over the next two decades, identification and characterization of these pheromones proceeded in all manner of insects and sea animals, including fish, but it was not until 1990 that more insight into mammalian alarm pheromones was gleaned.

In 1985, a link between odors released by stressed rats and pain perception was discovered: unstressed rats exposed to these odors developed opioid-mediated analgesia.[70] In 1997, researchers found that bees became less responsive to pain after they had been stimulated with isoamyl acetate, a chemical smelling of banana and a component of bee alarm pheromone.[71] The experiment also showed that the bees' fear-induced pain tolerance was mediated by an endorphin.

By using the forced swimming test in rats as a model of fear induction, the first mammalian "alarm substance" was found.[72] In 1991, this "alarm substance" was shown to fulfill the criteria for pheromones: a well-defined behavioral effect, species specificity, minimal influence of experience, and control for nonspecific arousal.
Rat activity testing with the alarm pheromone, and rats' preference/avoidance for odors from cylinders containing the pheromone, showed that the pheromone had very low volatility.[73]

In 1993, a connection between alarm chemosignals in mice and their immune response was found.[74] Pheromone production in mice was found to be associated with or mediated by the pituitary gland in 1994.[75]

In 2004, it was demonstrated that rats' alarm pheromones had different effects on the "recipient" rat (the rat perceiving the pheromone) depending on which body region they were released from: pheromone production from the face modified behavior in the recipient rat, e.g. caused sniffing or movement, whereas pheromone secreted from the rat's anal area induced autonomic nervous system stress responses, like an increase in core body temperature.[76] Further experiments showed that when a rat perceived alarm pheromones, it increased its defensive and risk-assessment behavior,[77] and its acoustic startle reflex was enhanced.

It was not until 2011 that a link between severe pain, neuroinflammation and the release of alarm pheromones in rats was found: real-time RT-PCR analysis of rat brain tissues indicated that shocking the footpad of a rat increased its production of proinflammatory cytokines in deep brain structures, namely IL-1β, heteronuclear corticotropin-releasing hormone and c-fos mRNA expressions in both the paraventricular nucleus and the bed nucleus of the stria terminalis, and it increased stress hormone levels in plasma (corticosterone).[78]

The neurocircuit for how rats perceive alarm pheromones was shown to involve the hypothalamus, brainstem, and amygdalae, all of which are evolutionarily ancient structures deep inside the brain (or, in the case of the brainstem, underneath it), away from the cortex, and involved in the fight-or-flight response, as is the case in humans.[79]

Alarm pheromone-induced anxiety in rats has been used to evaluate the degree to which anxiolytics can alleviate anxiety in humans.
For this, the change in the acoustic startle reflex of rats with alarm pheromone-induced anxiety (i.e. reduction of defensiveness) has been measured. Pretreatment of rats with one of five anxiolytics used in clinical medicine was able to reduce their anxiety: namely midazolam; phenelzine, a nonselective monoamine oxidase (MAO) inhibitor; propranolol, a nonselective beta blocker; clonidine, an alpha-2 adrenergic agonist; or CP-154,526, a corticotropin-releasing hormone antagonist.[80]

Faulty development of odor discrimination impairs the perception of pheromones and pheromone-related behavior, like aggressive behavior and mating, in male rats: the enzyme mitogen-activated protein kinase 7 (MAPK7) has been implicated in regulating the development of the olfactory bulb and odor discrimination, and it is highly expressed in developing rat brains but absent in most regions of adult rat brains. Conditional deletion of the MAPK7 gene in mouse neural stem cells impairs several pheromone-mediated behaviors, including aggression and mating in male mice. These behavior impairments were not caused by a reduction in the level of testosterone, by physical immobility, by heightened fear or anxiety, or by depression. Using mouse urine as a natural pheromone-containing solution, it has been shown that the impairment was associated with defective detection of related pheromones and with changes in the animals' inborn preference for pheromones related to sexual and reproductive activities.[81]

Lastly, alleviation of an acute fear response when a friendly peer (or, in biological language, an affiliative conspecific) tends and befriends the subject is called "social buffering".
The term is in analogy to the 1985 "buffering" hypothesis in psychology, where social support has been proven to mitigate the negative health effects of alarm pheromone-mediated distress.[82] The role of a "social pheromone" is suggested by the recent discovery that olfactory signals are responsible for mediating "social buffering" in male rats.[83] "Social buffering" was also observed to mitigate the conditioned fear responses of honeybees. A bee colony exposed to an environment with a high threat of predation did not show increased aggression and aggressive-like gene expression patterns in individual bees, but rather decreased aggression. That the bees did not simply habituate to threats is suggested by the fact that the disturbed colonies also decreased their foraging.[84]

Biologists proposed in 2012 that fear pheromones evolved as molecules of "keystone significance", a term coined in analogy to keystone species. Pheromones may determine species compositions and affect rates of energy and material exchange in an ecological community. Thus pheromones generate structure in a food web and play critical roles in maintaining natural systems.[85]

Evidence of chemosensory alarm signals in humans has emerged slowly: although alarm pheromones have not been physically isolated and their chemical structures have not been identified in humans so far, there is evidence for their presence. Androstadienone, for example, a steroidal, endogenous odorant, is a pheromone candidate found in human sweat, axillary hair, and plasma.
The closely related compound androstenone is involved in communicating dominance, aggression or competition; studies of sex-hormone influences on androstenone perception in humans found that a high testosterone level was related to heightened androstenone sensitivity in men, a high testosterone level was related to unhappiness in response to androstenone in men, and a high estradiol level was related to disliking of androstenone in women.[86]

A German study from 2006 showed that when anxiety-induced and exercise-induced human sweat from a dozen people was pooled and offered to seven study participants, five were able to olfactorily distinguish exercise-induced sweat from room air, and three of these could also distinguish exercise-induced sweat from anxiety-induced sweat. The acoustic startle reflex response to a sound while sensing anxiety sweat was larger than while sensing exercise-induced sweat, as measured by electromyography analysis of the orbital muscle, which is responsible for the eyeblink component. This showed for the first time that fear chemosignals can modulate the startle reflex in humans without emotional mediation; fear chemosignals primed the recipients' "defensive behavior" at the level of the acoustic startle reflex, prior to conscious attention.[87]

In analogy to the social buffering of rats and honeybees in response to chemosignals, induction of empathy by "smelling anxiety" of another person has been found in humans.[88]

A study from 2013 provided brain imaging evidence that human responses to fear chemosignals may be gender-specific. Researchers collected alarm-induced sweat and exercise-induced sweat from donors, extracted it, pooled it, and presented it to 16 unrelated people undergoing functional brain MRI. While stress-induced sweat from males produced a comparably strong emotional response in both females and males, stress-induced sweat from females produced markedly stronger arousal in women than in men.
Statistical tests pinpointed this gender-specificity to the right amygdala, strongest in the superficial nuclei. Since no significant differences were found in the olfactory bulb, the response to female fear-induced signals is likely based on processing the meaning, i.e. on the emotional level, rather than on the strength of chemosensory cues from each gender, i.e. the perceptual level.[89]

An approach-avoidance task was set up in which volunteers, on seeing either an angry or a happy cartoon face on a computer screen, pushed a joystick away from them or pulled it toward them as fast as possible. Volunteers smelling androstadienone masked with a clove oil scent responded faster, especially to angry faces, than those smelling clove oil only, which was interpreted as androstadienone-related activation of the fear system.[90] A potential mechanism of action is that androstadienone alters "emotional face processing". Androstadienone is known to influence the activity of the fusiform gyrus, which is relevant for face recognition.

Cognitive-consistency theories assume that "when two or more simultaneously active cognitive structures are logically inconsistent, arousal is increased, which activates processes with the expected consequence of increasing consistency and decreasing arousal."[91] In this context, it has been proposed that fear behavior is caused by an inconsistency between a preferred, or expected, situation and the actually perceived situation, and functions to remove the inconsistent stimulus from the perceptual field, for instance by fleeing or hiding, thereby resolving the inconsistency.[91][92][9] This approach puts fear in a broader perspective, also involving aggression and curiosity. When the inconsistency between perception and expectancy is small, learning as a result of curiosity reduces the inconsistency by updating expectancy to match perception.
If the inconsistency is larger, fear or aggressive behavior may be employed to alter the perception in order to make it match expectancy, depending on the size of the inconsistency as well as the specific context. Aggressive behavior is assumed to alter perception by forcefully manipulating it into matching the expected situation, while in some cases thwarted escape may also trigger aggressive behavior in an attempt to remove the thwarting stimulus.[91]

To improve our understanding of the neural and behavioral mechanisms of adaptive and maladaptive fear, investigators use a variety of translational animal models.[93] These models are particularly important for research that would be too invasive for human studies. Rodents such as mice and rats are common animal models, but other species are used. Certain aspects of fear research, such as sex, gender, and age differences, still require more study. These animal models include, but are not limited to, fear conditioning, predator-based psychosocial stress, single prolonged stress, chronic stress models, inescapable foot/tail shocks, immobilization or restraint, and stress-enhanced fear learning. While the stress and fear paradigms differ between the models, they tend to involve aspects such as acquisition, generalization, extinction, cognitive regulation, and reconsolidation.[94][95]

Fear conditioning, also known as Pavlovian or classical conditioning, is a process of learning that involves pairing a neutral stimulus with an unconditioned stimulus (US).[96] A neutral stimulus is something like a bell, tone, or room that does not normally elicit a response, whereas a US is a stimulus that results in a natural or unconditioned response (UR); in Pavlov's famous experiment the neutral stimulus was a bell, the US was food, and the dog's salivation was the UR. Pairing the neutral stimulus with the US results in the UR occurring not only with the US but also with the neutral stimulus.
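The pairing process just described, and the extinction of a conditioned fear response discussed earlier, can be sketched with the Rescorla–Wagner rule, a standard associative-learning model. This sketch is illustrative only: the model and the parameter values here are textbook assumptions, not taken from the studies cited in this article.

```python
# Illustrative sketch of fear-conditioning acquisition and extinction
# using the Rescorla-Wagner associative-learning rule. The learning
# rate (alpha) and trial counts are hypothetical demonstration values.

def rescorla_wagner(n_acquisition, n_extinction, alpha=0.3):
    """Track the associative strength V of the CS across trials.

    During acquisition the US follows the CS (lambda = 1.0);
    during extinction the CS is presented alone (lambda = 0.0).
    Each trial updates V by alpha * (lambda - V).
    """
    v = 0.0
    history = []
    for trial in range(n_acquisition + n_extinction):
        lam = 1.0 if trial < n_acquisition else 0.0
        v += alpha * (lam - v)  # prediction-error update
        history.append(v)
    return history

strengths = rescorla_wagner(n_acquisition=10, n_extinction=10)
# Associative strength climbs toward 1 over the 10 paired trials,
# then decays back toward 0 over the 10 CS-alone extinction trials.
```

The prediction-error term `(lambda - v)` captures why repeated CS-alone presentations extinguish the response: once the US stops arriving, each trial reduces the learned association rather than erasing the memory outright.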
When this occurs, the neutral stimulus is referred to as the conditioned stimulus (CS) and the response as the conditioned response (CR). In the fear-conditioning model of Pavlovian conditioning, the US is an aversive stimulus such as a shock, loud tone, or unpleasant odor.

Predator-based psychosocial stress (PPS) involves a more naturalistic approach to fear learning.[97] Predators such as a cat or a snake, or urine from a fox or cat, are used along with other stressors, such as immobilization or restraint, in order to generate instinctual fear responses.[98]

Chronic stress models include chronic variable stress, chronic social defeat, and chronic mild stress.[97][99] These models are often used to study how long-term or prolonged stress or pain can alter fear learning and disorders.[97][100]

Single prolonged stress (SPS) is a fear model that is often used to study PTSD.[101][102] Its paradigm involves multiple stressors, such as immobilization, a forced swim, and exposure to ether, delivered concurrently to the subject.[102] This is used to study non-naturalistic, uncontrollable situations that can cause the maladaptive fear responses seen in many anxiety and trauma-based disorders.

Stress-enhanced fear learning (SEFL), like SPS, is often used to study the maladaptive fear learning involved in PTSD and other trauma-based disorders.[97][103] SEFL involves a single extreme stressor, such as a large number of footshocks, simulating a single traumatic stressor that enhances and alters future fear learning.[97][104][105]

A drug treatment for fear conditioning and phobias via the amygdalae is the use of glucocorticoids.[106] In one study, glucocorticoid receptors in the central nuclei of the amygdalae were disrupted in order to better understand the mechanisms of fear and fear conditioning. The glucocorticoid receptors were inhibited using lentiviral vectors containing Cre-recombinase injected into mice.
Results showed that disruption of the glucocorticoid receptors prevented conditioned fear behavior. The mice were subjected to auditory cues, which normally caused them to freeze; a reduction of freezing was observed in the mice with inhibited glucocorticoid receptors.[107]

Cognitive behavioral therapy has been successful in helping people overcome their fear. Because fear is more complex than just forgetting or deleting memories, an active and successful approach involves people repeatedly confronting their fears. By confronting their fears in a safe manner, a person can suppress the "fear-triggering memories" or stimuli.[108]

Exposure therapy is known to have helped up to 90% of people with specific phobias to significantly decrease their fear over time.[55][108]

Another psychological treatment is systematic desensitization, a type of behavior therapy used to completely remove the fear, or to produce a response incompatible with the fear, and replace it. The replacement that occurs will be relaxation, which comes about through conditioning. Through conditioning treatments, muscle tensioning will lessen, and deep-breathing techniques will aid in de-tensioning.

There are other methods for treating or coping with one's fear, such as writing down rational thoughts regarding fears. Journal entries are a healthy method of expressing one's fears without compromising safety or causing uncertainty. Another suggestion is a fear ladder. To create a fear ladder, one writes down all of one's fears and scores them on a scale of one to ten. Next, the person addresses the phobias, starting with the lowest number. Religion can help some individuals cope with fear.[109]

People who have damage to their amygdalae, which can be caused by a rare genetic disease known as Urbach–Wiethe disease, are unable to experience fear. The disease destroys both amygdalae in late childhood. Since the discovery of the disease, there have only been 400 recorded cases.
A lack of fear can allow someone to get into a dangerous situation they otherwise would have avoided.[110]

The fear of the end of life and of one's existence is, in other words, the fear of death. Historically, attempts were made to reduce this fear by performing rituals, which have helped collect the cultural ideas that we now have in the present.[citation needed] These rituals also helped preserve the cultural ideas. The results and methods of human existence had been changing at the same time that social formation was changing.

When people are faced with their own thoughts of death, they either accept that they are dying or will die because they have lived a full life, or they will experience fear. A theory was developed in response to this, called terror management theory. The theory states that a person's cultural worldviews (religion, values, etc.) will mitigate the terror associated with the fear of death through avoidance. To help manage their terror, people find solace in their death-denying beliefs, such as their religion. Another way people cope with their death-related fears is by pushing any thoughts of death into the future or by avoiding these thoughts altogether through distractions.[111] Although there are methods for coping with the terror associated with the fear of death, not everyone suffers from these same uncertainties. People who believe they have lived life to the "fullest" typically do not fear death.

Death anxiety is multidimensional; it covers "fears related to one's own death, the death of others, fear of the unknown after death, fear of obliteration, and fear of the dying process, which includes fear of a slow death and a painful death".[112]

The Yale philosopher Shelly Kagan examined fear of death in a 2007 Yale open course[113] by examining the following questions: Is fear of death a reasonable, appropriate response? What conditions are required, and what are appropriate conditions, for feeling fear of death?
What is meant by fear, and how much fear is appropriate? According to Kagan, for fear in general to make sense, three conditions should be met, and the amount of fear should be appropriate to the size of "the bad". If the three conditions are not met, fear is an inappropriate emotion. He argues that death does not meet the first two criteria, even if death is a "deprivation of good things" and even if one believes in a painful afterlife. Because death is certain, it also does not meet the third criterion, but he grants that the unpredictability of when one dies may be cause for a sense of fear.[113]

In a 2003 study of 167 women and 121 men, aged 65–87, low self-efficacy predicted fear of the unknown after death and fear of dying for women and men better than demographics, social support, and physical health. Fear of death was measured by a "Multidimensional Fear of Death Scale", which included the eight subscales Fear of Dying, Fear of the Dead, Fear of Being Destroyed, Fear for Significant Others, Fear of the Unknown, Fear of Conscious Death, Fear for the Body After Death, and Fear of Premature Death. In hierarchical multiple regression analysis, the most potent predictors of death fears were low "spiritual health efficacy", defined as beliefs relating to one's perceived ability to generate spiritually based faith and inner strength, and low "instrumental efficacy", defined as beliefs relating to one's perceived ability to manage activities of daily living.[112]

Psychologists have tested the hypotheses that fear of death motivates religious commitment, and that assurances about an afterlife alleviate the fear, with equivocal results.[citation needed] Religiosity can be related to fear of death when the afterlife is portrayed as a time of punishment.
"Intrinsic religiosity", as opposed to mere "formal religious involvement", has been found to be negatively correlated with death anxiety.[112] In a 1976 study of people of various Christian denominations, those who were most firm in their faith, who attended religious services weekly, were the least afraid of dying. The survey found a negative correlation between fear of death and "religious concern".[114][better source needed]

In a 2006 study of white, Christian men and women, the hypothesis was tested that traditional, church-centered religiousness and de-institutionalized spiritual seeking are ways of approaching fear of death in old age. Both religiousness and spirituality were related to positive psychosocial functioning, but only church-centered religiousness protected subjects against the fear of death.[115][116][better source needed]

Statius in the Thebaid (Book 3, line 661) aired the irreverent suggestion that "fear first made gods in the world".[117]

From a Christian theological perspective, the word fear can encompass more than simple dread. Robert B. Strimple says that fear includes the "convergence of awe, reverence, adoration, humility".[118] Some translations of the Bible, such as the New International Version, sometimes express the concept of fear with the word reverence.

A similar phrase, "God-fearing", is sometimes used as a rough synonym for "pious". It is a standard translation for the Arabic word taqwa (Arabic: تقوى; "forbearance, restraint"[119]) in Muslim contexts.[120] In Judaism, "fear of God" describes obedience to Jewish law even when invisible to others.[121]

Fear may be politically and culturally manipulated to persuade citizenry of ideas which would otherwise be widely rejected, or to dissuade citizenry from ideas which would otherwise be widely supported. In contexts of disasters, nation-states manage fear not only to provide their citizens with an explanation of the event or to blame some minorities, but also to adjust their previous beliefs.
Fear can alter how a person thinks or reacts to situations, because fear has the power to inhibit one's rational way of thinking. As a result, people who do not experience fear are able to use fear as a tool to manipulate others. People who are experiencing fear seek preservation through safety, and can be manipulated by a person who is there to provide the safety that is being sought. "When we're afraid, a manipulator can talk us out of the truth we see right in front of us. Words become more real than reality."[122] By this, a manipulator can use our fear to manipulate us out of the truth and instead make us believe and trust in their truth. Politicians are notorious for using fear to manipulate people into supporting their policies. This strategy taps into primal human emotions, leveraging fear of the unknown, external threats, or perceived dangers to influence decision-making.[123]

Fear is found and reflected in mythology and folklore as well as in works of fiction such as novels and films. Works of dystopian and (post)apocalyptic fiction convey the fears and anxieties of societies.[124][125]

The fear of the world's end is about as old as civilization itself.[126] In a 1967 study, Frank Kermode suggests that the failure of religious prophecies led to a shift in how society apprehends this ancient mode.[127] Scientific and critical thought supplanting religious and mythical thought, as well as public emancipation, may be the cause of eschatology being replaced by more realistic scenarios. Such fiction might constructively provoke discussion and steps to be taken to prevent the depicted catastrophes.

The Story of the Youth Who Went Forth to Learn What Fear Was is a German fairy tale dealing with the topic of not knowing fear. Many stories also include characters who fear the antagonist of the plot.
One important characteristic of historical and mythical heroes across cultures is fearlessness in the face of powerful and often lethal enemies.[citation needed] The Magnus Archives is a horror fiction podcast written by Jonathan Sims and directed by Alexander J. Newall that, among other things, formulates an archetypal ontology of fear through the dissemination of case files at a paranormal research institute, set in a world where the metaphysical basis of paranormal activity and unexplainable horrors is fear incarnate.[128] The diegesis states that true categorization of fear is impossible, that fear is all one unknowable thing;[129] however, there exists an ontological structure of fear archetypes in this universe proposed by a fictional version of the architect Robert Smirke. It is a unique construction of fear in that it is based not on the science or neurology of fear, but on thematic and experiential connections between different phobias. For example, the fear of disease and vermin comes from the same place as the fear of abusive relationships, as both lie in fearing corruptions to the self.[130][131] The final season of the podcast consists almost entirely of poetic meditations on the nature of fear. Fear in art has been explored by the Japanese scholar Kyoko Nakano in a series of books and a 2017 exhibition about kowai-e (lit. "scary pictures").[132] In the world of athletics, fear is often used as a means of motivating athletes not to fail.[133] This involves using fear in a way that increases the chances of a positive outcome.
In this case, the fear that is created is initially a cognitive state in the receiver.[134] This initial state generates the athlete's first response, which produces the possibility of a fight-or-flight reaction that in turn increases or decreases the likelihood of success or failure in a given situation.[135] The amount of time the athlete has to make this decision is small, but it is still enough for the receiver to make a determination through cognition.[134] Even though the decision is made quickly, it is shaped by past events the athlete has experienced.[136] The results of these past events determine how the athlete makes the cognitive decision in the split second available.[133] Fear of failure as described above has been studied frequently in the field of sport psychology. Many scholars have tried to determine how often fear of failure is triggered within athletes, as well as which athlete personalities most often choose this type of motivation. Studies have also been conducted to determine the success rate of this method of motivation. Murray's Explorations in Personality (1938) was one of the first studies to identify fear of failure as an actual motive to avoid failure or to achieve success. His studies suggested that infavoidance, the need to avoid failure, was found in many college-aged men during the time of his research in 1938.[137] This was a monumental finding in the field of psychology because it allowed other researchers to better clarify how fear of failure can be a determinant in setting achievement goals as well as how it can operate in the act of achievement itself.[138] In the context of sport, a model was created by R.S.
Lazarus in 1991 that uses the cognitive-motivational-relational theory of emotion.[134] It holds that fear of failure results when beliefs or cognitive schemas about aversive consequences of failing are activated by situations in which failure is possible. These belief systems predispose the individual to make appraisals of threat and to experience the state anxiety associated with fear of failure in evaluative situations.[138][134] Another study, by Conroy, Poczwardowski, and Henschen in 2001, identified five aversive consequences of failing that have been repeated over time: (a) experiencing shame and embarrassment, (b) devaluing one's self-estimate, (c) having an uncertain future, (d) important others losing interest, and (e) upsetting important others.[133] These five categories can help one infer the likelihood that an individual will associate failure with one of these threats, which leads them to experience fear of failure. In summary, the two studies above produced a more precise definition of fear of failure: "a dispositional tendency to experience apprehension and anxiety in evaluative situations because individuals have learned that failure is associated with aversive consequences".[138] The author and internet content creator John Green wrote about "the yips", a common colloquialism for a debilitating, often chronic manifestation of athletic anxiety experienced by some professional athletes, in an essay for his podcast and book The Anthropocene Reviewed.[139] Green discusses famous examples of athletic anxiety ruining careers and juxtaposes them with the nature of general anxiety as a whole. Green settles, however, on a conclusion evoking resilience and hope in the human condition by describing how the baseball player Rick Ankiel rebuilt his career, returning through the minor leagues as an outfielder after getting the yips as a major league pitcher.
https://en.wikipedia.org/wiki/Fear#Manipulation
A government database collects information for various reasons, including climate monitoring, securities law compliance, geological surveys, patent applications and grants, surveillance, national security, border control, law enforcement, public health, voter registration, vehicle registration, social security, and statistics. Various government bodies maintain databases about citizens and residents of the United Kingdom. Under the Data Protection Act 1998 and the Protection of Freedoms Act 2012, legal provisions exist that control and restrict the collection, storage, retention, and use of information in government databases. NATGRID: India is setting up a national intelligence grid called NATGRID,[60] which was slated to become operational in 2013. NATGRID would allow access to each individual's data ranging from land records, Internet logs, air and rail PNR, phone records, gun records, driving license, property records, insurance, and income tax records in real time and with no oversight.[61] With a UID from the Unique Identification Authority of India being given to every Indian from February 2011, the government would be able to track people in real time. A national population registry of all citizens will be established by the 2011 census, during which fingerprints and iris scans would be taken along with GPS records of each household.[62][63] Access to the combined data will be given to 11 agencies, including the Research and Analysis Wing, the Intelligence Bureau, the Enforcement Directorate, the National Investigation Agency, the Central Bureau of Investigation, the Directorate of Revenue Intelligence, and the Narcotics Control Bureau.
https://en.wikipedia.org/wiki/Government_databases
Techno-authoritarianism, also known as IT-backed authoritarianism, digital authoritarianism or digital dictatorship,[1][2] refers to the state use of information technology to control or manipulate both foreign and domestic populations.[3] Tactics of digital authoritarianism may include mass surveillance, including through biometrics such as facial recognition, internet firewalls and censorship, internet blackouts, disinformation campaigns, and digital social credit systems.[4][5] Although some institutions assert that this term should only be used to refer to authoritarian governments,[6] others argue that the tools of digital authoritarianism are being adopted and implemented by governments with "authoritarian tendencies", including democracies.[7] Most notably, China and Russia have been accused by the Brookings Institution of leveraging the Internet and information technology to repress opposition domestically while undermining democracies abroad.[3] IT-backed authoritarianism refers to an authoritarian regime using cutting-edge information technology to penetrate, control, and shape the behavior of actors within society and the economy.[citation needed] According to reports and articles on China's practice, the basis of digital authoritarianism is an advanced, all-encompassing and largely real-time surveillance system, which merges government-run systems and databases (e.g. traffic monitoring, financial credit rating, the education system, the health sector) with company surveillance systems (e.g. of shopping preferences or activities on social media platforms).[8] IT-backed authoritarianism institutionalizes the data transfer between companies and governmental agencies, providing the government with full and regular access to data collected by companies. The authoritarian government remains the only entity with unlimited access to the collected data.
IT-backed authoritarianism thus increases the authority of the regime vis-à-vis national and multinational companies as well as vis-à-vis other decentralized or subnational political forces and interest groups. The collected data is utilized by the authoritarian regime to analyze and influence the behavior of a country's citizens, companies, and other institutions.[8] It does so with the help of algorithms based on the principles and norms of the authoritarian regime, automatically calculating credit scores for every individual and institution. In contrast to financial credit ratings, these "social credit scores" are based on the full range of collected surveillance data, including financial as well as non-financial information.[9] IT-backed authoritarianism allows full participation in a country's economy and society only for those who have a good credit score and thus respect the rules and norms of the respective authoritarian regime. Behavior deviating from these norms incurs automatic punishment through a bad credit score, which leads to economic or social disadvantages (worse loan conditions, fewer job opportunities, exclusion from public procurement, etc.).
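The score-and-threshold mechanism described above can be illustrated with a deliberately simplistic toy model. The penalty values and participation threshold below are invented for illustration only; no real system's rules or weights are public or reproduced here:

```python
# Toy illustration only: the penalty values and the participation threshold
# are invented; they do not reflect any real scoring system.

def apply_penalties(base_score: int, violations: list) -> int:
    """Deduct an (invented) penalty for each recorded norm violation."""
    penalties = {"minor": 10, "severe": 100}
    score = base_score
    for violation in violations:
        score -= penalties.get(violation, 0)
    return score

def may_participate(score: int, threshold: int = 600) -> bool:
    """Below the (invented) threshold, participation is restricted."""
    return score >= threshold

score = apply_penalties(700, ["minor", "severe"])  # 700 - 10 - 100 = 590
print(score, may_participate(score))
```

The point of the sketch is the structural feature the text describes: a single automatically computed score gates access to economic and social participation, so any recorded deviation propagates directly into exclusion.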
Severe violation or non-compliance can lead to exclusion from any economic activity on the respective market or (for individuals) to exclusion from public services.[citation needed] China has been viewed as the cutting edge and the enabler of digital authoritarianism.[10] With its Great Firewall of a state-controlled Internet, it has deployed high-tech repression against Uyghurs in Xinjiang and had exported surveillance and monitoring systems to 18 countries as of 2019.[3] According to Freedom House, the China model of digital authoritarianism through Internet control against those who are critical of the CCP features censorship legislation, surveillance using artificial intelligence (AI) and facial recognition, manipulation or removal of online content, cyberattacks and spear phishing, suspension and revocation of social media accounts, detention and arrests, and forced disappearance and torture, among other means.[2] A report by the Carnegie Endowment for International Peace also highlights similar digital repression techniques.[11] In 2013, The Diplomat reported that the Chinese hackers behind the malware attacks on Falun Gong supporters in China, the Philippines, and Vietnam were the same ones responsible for attacks against foreign military powers, targeting email accounts and stealing Microsoft Outlook login information and email contents.[12] A 2022 analysis by The New York Times of over 100,000 Chinese government bidding documents revealed a range of surveillance and data collection practices, from personal biometrics to behavioral data, which are fed into AI systems.[13] China utilizes these data capabilities not only to enhance governmental and infrastructural efficiency but also to monitor and suppress dissent among its population, particularly in Xinjiang, where the government targets the Uyghur community under the guise of counterterrorism and public security.[13] China is also regarded as an exporter of these technologies and practices to other states, while
simultaneously serving as a model for other regimes seeking to adopt similar technologies or governance.[14] The Russian model of digital authoritarianism relies on strict laws governing digital expression and the technology to enforce them.[15] Since 2012, as part of a broader crackdown on civil society, the Russian Parliament has adopted numerous laws curtailing speech and expression.[16][17] Hallmarks of Russian digital authoritarianism include:[18] Since the February 2021 coup d'état in Myanmar, the military junta has blocked all but 1,200 websites and imposed Internet shutdowns, with pro-military content dominating the remaining accessible websites.[25] In May 2021, Reuters reported that telecom and Internet service providers had been secretly ordered to install spyware allowing the military to "listen in on calls, view text messages and web traffic including emails, and track the locations of users without the assistance of the telecom and internet firms."[26] In February 2022, the Norwegian service provider Telenor was forced to sell its operation to a local company aligned with the military junta.[27][28] The military junta also sought to criminalize virtual private networks (VPNs), imposed mandatory registration of devices, and increased surveillance both on social media platforms and via telecom companies.[28] In July 2022, the military executed the activist Kyaw Min Yu, after arresting him in November 2021 for pro-democracy social media posts criticizing the coup.[29][30] A study by the African Digital Rights Network (ADRN) revealed that governments in ten African countries (South Africa, Cameroon, Zimbabwe, Uganda, Nigeria, Zambia, Sudan, Kenya, Ethiopia, and Egypt) have employed various forms of digital authoritarianism.[31] The most common tactics include digital surveillance, disinformation, Internet shutdowns, censorship legislation, and arrests for anti-government speech.[31] The researchers highlighted the growing trend of complete Internet or mobile system shutdowns.[31] Additionally, all ten countries
utilized Internet surveillance, mobile intercept technologies, or artificial intelligence to monitor targeted individuals using specific keywords.[31]
https://en.wikipedia.org/wiki/IT-backed_authoritarianism
Lawful interception (LI) refers to the facilities in telecommunications and telephone networks that allow law enforcement agencies with court orders or other legal authorization to selectively wiretap individual subscribers. Most countries require licensed telecommunications operators to provide their networks with legal interception gateways and nodes for the interception of communications. The interfaces of these gateways have been standardized by telecommunication standardization organizations. As with many law enforcement tools, LI systems may be subverted for illicit purposes. With the legacy public switched telephone network (PSTN), wireless, and cable systems, lawful interception was generally performed by accessing the mechanical or digital switches supporting the targets' calls. The introduction of packet-switched networks, softswitch technology, and server-based applications over the past two decades fundamentally altered how LI is undertaken. Lawful interception differs from the dragnet-type mass surveillance sometimes done by intelligence agencies, where all data passing a fiber-optic splice or other collection point is extracted for storage or filtering. It is also separate from the data retention of metadata that has become a legal requirement in some jurisdictions. Lawful interception is the obtaining of communications network data pursuant to lawful authority for the purpose of analysis or evidence. Such data generally consist of signaling or network management information or, in fewer instances, the content of the communications. If the data are not obtained in real time, the activity is referred to as access to retained data (RD).[1] There are many bases for this activity, including infrastructure protection and cybersecurity. In general, the operator of public network infrastructure can undertake LI activities for those purposes.
Operators of private network infrastructures in the United States have an inherent right to maintain LI capabilities within their own networks unless otherwise prohibited.[2] One of the bases for LI is the interception of telecommunications by law enforcement agencies (LEAs), regulatory or administrative agencies, and intelligence services, in accordance with local law. Under some legal systems, implementations, particularly real-time access to content, may require due process and proper authorization from competent authorities, an activity formerly known as "wiretapping" that has existed since the inception of electronic communications. The material below primarily treats this narrow segment of LI.[3] Almost all countries have lawful interception capability requirements and have implemented them using global LI requirements and standards developed by the European Telecommunications Standards Institute (ETSI), the Third Generation Partnership Project (3GPP), or CableLabs, for wireline/Internet, wireless, and cable systems, respectively. In the USA, the comparable requirements are enabled by the Communications Assistance for Law Enforcement Act (CALEA), with the specific capabilities promulgated jointly by the Federal Communications Commission and the Department of Justice. In the USA, lawful intercept technology has been the subject of a patent application by a company named Voip-pal.com, USPTO publication no. 20100150138.[4] Governments require phone service providers to install a legal interception gateway (LIG), along with legal interception nodes (LIN), which allow them to intercept in real time phone calls, SMS messages, emails, and some file transfers or instant messages.[5][6] These LI measures for governmental surveillance have been in place since the beginning of digital telephony.[7] To prevent investigations from being compromised, LI systems may be designed in a manner that hides the interception from the telecommunications operator concerned.
This is a requirement in some jurisdictions. Alternatively, LI systems may be designed using technology such as transparent decryption, which ensures that access or interception is necessarily overt in order to disincentivize abuse of authority. To ensure systematic procedures for carrying out interception, while also lowering the costs of interception solutions, industry groups and government agencies worldwide have attempted to standardize the technical processes behind lawful interception. One organization, ETSI, has been a major driver in lawful interception standards, not only for Europe but worldwide. This architecture attempts to define a systematic and extensible means by which network operators and law enforcement agencies (LEAs) can interact, especially as networks grow in sophistication and scope of services. Note that this architecture applies not only to "traditional" wireline and wireless voice calls, but also to IP-based services such as voice over IP, email, and instant messaging. The architecture is now applied worldwide (in some cases with slight variations in terminology), including in the United States in the context of CALEA conformance. Three stages are called for in the architecture. The call data (known as intercept related information (IRI) in Europe and call data (CD) in the US) consists of information about the targeted communications, including the destination of a voice call (e.g., the called party's telephone number), the source of a call (the caller's phone number), the time of the call, its duration, etc. The call content is the stream of data carrying the call itself. Included in the architecture is the lawful interception management function, which covers interception session set-up and tear-down, scheduling, target identification, etc. Communications between the network operator and the LEA occur via the handover interfaces (HI). Communications data and content are typically delivered from the network operator to the LEA in an encrypted format over an IP-based VPN.
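The IRI/CC split described above can be sketched as a simple data record. The field names below are illustrative placeholders, not the attribute names defined in the actual ETSI handover specifications:

```python
from dataclasses import dataclass

# Illustrative sketch: intercept related information (IRI) for a voice call
# carries metadata only (parties, time, duration). The call content (CC),
# i.e. the media stream itself, is delivered separately over the handover
# interfaces. All field names and values here are invented.

@dataclass
class VoiceCallIRI:
    calling_party: str     # caller's phone number
    called_party: str      # called party's telephone number
    start_time: str        # time of the call (ISO 8601)
    duration_seconds: int  # call duration

iri = VoiceCallIRI(
    calling_party="+15550100",
    called_party="+15550199",
    start_time="2024-01-01T12:00:00Z",
    duration_seconds=95,
)
print(iri)
```

The design point this mirrors is that IRI is small, structured, and can be generated and handed over even when the content stream is never intercepted at all.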
The interception of traditional voice calls still often relies on the establishment of an ISDN channel that is set up at the time of the interception. As stated above, the ETSI architecture is equally applicable to IP-based services, where IRI/CD depends on parameters associated with the traffic of the application to be intercepted. For example, in the case of email, IRI would be similar to the header information on an email message (e.g., destination email address, source email address, time the email was transmitted) as well as pertinent header information within the IP packets conveying the message (e.g., the source IP address of the email server originating the message). Of course, more in-depth information would be obtained by the interception system so as to counter the email address spoofing that often takes place (e.g., spoofing of the source address). Voice over IP likewise has its own IRI, including data derived from Session Initiation Protocol (SIP) messages that are used to set up and tear down a VoIP call. ETSI LI Technical Committee work today is primarily focused on developing the new Retained Data Handover and next-generation network specifications, as well as perfecting the TS 102 232 standards suite, which applies to most contemporary network uses. USA interception standards that help network operators and service providers conform to CALEA are mainly those specified by the Federal Communications Commission (which has both plenary legislative and review authority under CALEA), CableLabs, and the Alliance for Telecommunications Industry Solutions (ATIS). ATIS's standards include new standards for broadband Internet access and VoIP services, as well as the legacy J-STD-025B, which updates the earlier J-STD-025A to include packetized voice and CDMA wireless interception.
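The email case above can be sketched in a few lines of Python: the headers the text identifies as IRI (source address, destination address, time sent) are extracted, while the message body, which would be content rather than IRI, is deliberately ignored. This is only an illustration of the IRI concept, not an implementation of any ETSI handover format; the message and addresses are invented:

```python
import email
from email.utils import parsedate_to_datetime

# Sample RFC 5322 message (addresses and timestamp are invented).
raw_message = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Date: Mon, 01 Jan 2024 12:00:00 +0000\r\n"
    "Subject: hello\r\n"
    "\r\n"
    "Body text, which would be content rather than IRI.\r\n"
)

msg = email.message_from_string(raw_message)

# Keep only the header-derived metadata; the body is never read.
iri = {
    "source": msg["From"],
    "destination": msg["To"],
    "timestamp": parsedate_to_datetime(msg["Date"]).isoformat(),
}
print(iri)
```

As the text notes, these headers are trivially forged, which is why a real interception system would also record packet-level data such as the IP address of the originating mail server.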
To ensure the quality of evidence, the Commission on Accreditation for Law Enforcement Agencies (CALEA) has outlined standards for electronic surveillance once a Title III surveillance application is approved. Generic global standards have also been developed by Cisco via the Internet Engineering Task Force (IETF) that provide a front-end means of supporting most LI real-time handover standards. All of these standards have been challenged as "deficient" by the U.S. Department of Justice pursuant to CALEA. The principal global treaty-based legal instrument relating to LI (including retained data) is the Convention on Cybercrime (Budapest, 23 Nov 2001). The secretariat for the Convention is the Council of Europe; however, the treaty itself has signatories worldwide and provides a global scope. Individual countries have different legal requirements relating to lawful interception. The Global Lawful Interception Industry Forum lists many of these, as does the Council of Europe secretariat. For example, in the United Kingdom the law is known as RIPA (Regulation of Investigatory Powers Act); in the United States there is an array of federal and state criminal law; in Commonwealth of Independent States countries it is implemented as SORM. In the European Union, the European Council Resolution of 17 January 1995 on the Lawful Interception of Telecommunications (Official Journal C 329) mandated measures similar to CALEA on a pan-European basis.[8] Although some EU member countries reluctantly accepted this resolution out of privacy concerns (which are more pronounced in Europe than in the US[citation needed]), there appears now to be general agreement with the resolution. Interception mandates in Europe are generally more rigorous than those of the US; for example, both voice and ISP public network operators in the Netherlands have been required to support interception capabilities for years.
In addition, publicly available statistics indicate that the number of interceptions in Europe exceeds that undertaken in the U.S. by many hundreds of times.[citation needed] Europe continues to maintain its global leadership role in this sector through the adoption by the European Parliament and Council in 2006 of the far-reaching Data Retention Directive. The provisions of the Directive apply broadly to almost all public electronic communications and require the capture of most related information, including location, for every communication. The information must be stored for a period of at least six months, and up to two years, and made available to law enforcement upon lawful request. The Directive has been widely emulated in other countries. On 8 April 2014, the Court of Justice of the European Union declared Directive 2006/24/EC invalid for violating fundamental rights. In the United States, three federal statutes authorize lawful interception. The 1968 Omnibus Crime Control and Safe Streets Act, Title III, pertains mainly to lawful interception for criminal investigations. The second law, the 1978 Foreign Intelligence Surveillance Act, or FISA, as amended by the Patriot Act, governs wiretapping for intelligence purposes where the subject of the investigation must be a foreign (non-US) national or a person working as an agent on behalf of a foreign country. The annual reports of the Administrator of the U.S. Courts indicate that the federal cases are related to illegal drug distribution, with cell phones as the dominant form of intercepted communication.[9] During the 1990s, as in most countries, to help law enforcement and the FBI more effectively carry out wiretap operations, especially in view of the emerging digital voice and wireless networks at the time, the U.S. Congress passed the Communications Assistance for Law Enforcement Act (CALEA) in 1994.[10] This act provides the federal statutory framework for network operator assistance to LEAs in providing evidence and tactical information.
In 2005, CALEA was applied to public broadband Internet access and Voice over IP services that are interconnected with the Public Switched Telephone Network (PSTN). In the 2000s, surveillance focus turned to terrorism. NSA warrantless surveillance outside the supervision of the FISA court caused considerable controversy. It was revealed in the 2013 mass surveillance disclosures that since 2007, the National Security Agency had been collecting connection metadata for all calls in the United States under the authority of Section 215 of the PATRIOT Act, with the mandatory cooperation of phone companies and with the approval of the FISA court and briefings to Congress. The government claims it does not access the information in its own database on contacts between American citizens without a warrant. Lawful interception can also be authorized under local laws for state and local police investigations.[11] Police ability to lawfully intercept private communications is governed by Part VI of the Criminal Code of Canada (Invasion of Privacy).[12] Canadian courts have issued two major rulings on lawful interception.[13] In June 2014, the Supreme Court ruled that law enforcement officers need a search warrant before accessing information from internet service providers about users' identities. The context behind this 8-0 ruling is a Saskatchewan man charged with possessing and distributing child pornography.[14] The police used the man's IP address to access his personal information from his online service provider, all of which was done without a search warrant. The plaintiff's attorneys argued that their client's rights were violated, as he was the victim of unlawful search and seizure. Despite the court's ruling, the evidence gathered from the unwarranted search was nonetheless admitted at trial, as the court held that the police had acted in good faith.
In accordance with the ruling, the court specified the circumstances in which a warrant is not needed. The second court case to consider is from December of the same year. In essence, the Supreme Court of Canada ruled that police are allowed access to a suspect's cell phone, but must abide by very strict guidelines. This ruling came about from the appeal of Kevin Fearon, who was convicted of armed robbery in 2009. After robbing a Toronto jewelry kiosk, Fearon argued that the police had unlawfully violated his charter rights by searching his cellphone without a warrant. Although divided, the Supreme Court laid out very detailed criteria for law enforcement officers to follow when searching a suspect's phone without a warrant, comprising four rules which officers must follow in these instances. To continue a search without a warrant, the situation at hand would need to meet three of the four guidelines. Nonetheless, the court highly encourages law enforcement to request a warrant before searching a cellphone, to promote and protect privacy in Canada. In Russia, due to the Yarovaya Law, law enforcement is entitled to stored private communication data. In India, Rule 4 of the IT (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules 2009 provides that "the competent authority may authorise an agency of the Government to intercept, monitor or decrypt information generated, transmitted, received or stored in any computer resource for the purpose specified in sub-section (1) of Section 69 of the Act". The Statutory Order (S.O.) dated 20.12.2018 was issued in accordance with the rules framed in 2009 and in force since then. No new powers have been conferred on any of the security or law enforcement agencies by the S.O. dated 20.12.2018. The notification has been issued to notify the ISPs, TSPs, intermediaries, etc., and to codify the existing orders. Each case of interception, monitoring, or decryption is to be approved by the competent authority, i.e.
the Union Home Secretary. These powers are also available to the competent authority in the state governments as per the IT (Procedure and Safeguards for Interception, Monitoring and Decryption of Information) Rules 2009. As per Rule 22 of those Rules, all such cases of interception, monitoring, or decryption are to be placed before a review committee headed by the Cabinet Secretary, which shall meet at least once in two months to review such cases. In the case of state governments, such cases are reviewed by a committee headed by the Chief Secretary concerned. The S.O. dated 20.12.2018 is intended to help in the following ways: (i) to ensure that any interception, monitoring, or decryption of information through any computer resource is done as per due process of law; (ii) to give notice of the agencies authorized to exercise these powers and to prevent any unauthorized use of these powers by any agency, individual, or intermediary; and (iii) to ensure that the provisions of law relating to lawful interception or monitoring of computer resources are followed and that, if any interception, monitoring, or decryption is required for purposes specified in Section 69 of the IT Act, it is done as per due process of law and with the approval of the competent authority, i.e. the Union Home Secretary. Most countries worldwide maintain LI requirements similar to those of Europe and the U.S., and have moved to the ETSI handover standards. The Convention on Cybercrime requires such capabilities. As with many law enforcement tools, LI systems may be subverted for illicit purposes, producing a violation of human rights, as declared by the European Court of Human Rights in the case Bettino Craxi III v. Italy.[17] It also occurred in Greece during the 2004 Olympics: the telephone operator Vodafone Greece was fined $100,000,000 in 2006[18] (or €76,000,000[19]) for failing to secure its systems against unlawful access.
According to Monshizadeh et al., the event is representative of the vulnerability of mobile networks and Internet service providers to cyberattacks, because they use outdated LI mechanisms.[20]
https://en.wikipedia.org/wiki/Lawful_interception
This is a list of government surveillance projects and related databases throughout the world.
https://en.wikipedia.org/wiki/List_of_government_surveillance_projects
National security, or national defence (national defense in American English), is the security and defence of a sovereign state, including its citizens, economy, and institutions, which is regarded as a duty of government. Originally conceived as protection against military attack, national security is now widely understood to also include non-military dimensions, such as security from terrorism, minimization of crime, economic security, energy security, environmental security, food security, and cyber-security. Similarly, national security risks include, in addition to the actions of other states, action by violent non-state actors, narcotic cartels, organized crime, and multinational corporations, as well as the effects of natural disasters. Governments rely on a range of measures, including political, economic, and military power, as well as diplomacy, to safeguard the security of a state. They may also act to build the conditions of security regionally and internationally by reducing transnational causes of insecurity, such as climate change, economic inequality, political exclusion, and nuclear proliferation. The concept of national security remains ambiguous, having evolved from simpler definitions which emphasised freedom from military threat and from political coercion.[1]: 1–6 [2]: 52–54 Among the many definitions proposed to date are the following, which show how the concept has evolved to encompass non-military concerns: Potential causes of national insecurity include actions by other states (e.g. military or cyber attack), violent non-state actors (e.g. terrorist attack), organised criminal groups such as narcotic cartels, and also the effects of natural disasters (e.g.
flooding, earthquakes).[3]: v, 1–8 [8][9] Systemic drivers of insecurity, which may be transnational, include climate change, economic inequality and marginalisation, political exclusion, and nuclear proliferation.[8]: 3 [9] In view of the wide range of risks, the security of a state has several dimensions, including economic security, energy security, physical security, environmental security, food security, border security, and cyber security. These dimensions correlate closely with elements of national power. Increasingly, governments organise their security policies into a national security strategy (NSS);[10] as of 2017, Spain, Sweden, the United Kingdom, and the United States are among the states to have done so.[11][12][13][14] Some states also appoint a National Security Council and/or a National Security Advisor: an executive government agency that advises the head of state on topics concerning national security and strategic interests, and that prepares long-term, short-term, and contingency national security plans. India currently operates one such system, established on 19 November 1998. Although states differ in their approach, various forms of coercive power predominate, particularly military capabilities.[8] The scope of these capabilities has developed. Traditionally, military capabilities were mainly land- or sea-based, and in smaller countries, they still are. Elsewhere, the domains of potential warfare now include the air, space, cyberspace, and psychological operations.[15] Military capabilities designed for these domains may be used for national security, or equally for offensive purposes, for example to conquer and annex territory and resources.
In practice, national security is associated primarily with managing physical threats and with the military capabilities used for doing so.[11][13][14] That is, national security is often understood as the capacity of a nation to mobilise military forces to guarantee its borders and to deter or successfully defend against physical threats, including military aggression and attacks by non-state actors, such as terrorism. Most states, such as South Africa and Sweden,[16][12] configure their military forces mainly for territorial defence; others, such as France, Russia, the UK and the US,[17][18][13][14] invest in higher-cost expeditionary capabilities, which allow their armed forces to project power and sustain military operations abroad. Infrastructure security is the security provided to protect infrastructure, especially critical infrastructure, such as airports, highways,[19] rail transport, hospitals, bridges, transport hubs, network communications, media, the electricity grid, dams, power plants, seaports, oil refineries, and water systems. Infrastructure security seeks to limit the vulnerability of these structures and systems to sabotage, terrorism, and contamination.[20] Many countries have established government agencies to directly manage the security of critical infrastructure, usually through the Ministry of Interior/Home Affairs; dedicated security agencies to protect facilities, such as the United States Federal Protective Service; and dedicated transport police, such as the British Transport Police. There are also commercial transportation security units, such as the Amtrak Police in the United States. Critical infrastructure is vital for the essential functioning of a country. Incidental or deliberate damage can have a serious impact on the economy and essential services.
Some of the threats to infrastructure include: Computer security, also known as cybersecurity or IT security, refers to the security of computing devices such as computers and smartphones, as well as computer networks such as private and public networks and the Internet. It concerns the protection of hardware, software, data, people, and also the procedures by which systems are accessed, and the field has growing importance due to the increasing reliance on computer systems in most societies.[21] Since unauthorized access to critical civil and military infrastructure is now considered a major threat, cyberspace is now recognised as a domain of warfare. One such example is the use of Stuxnet by the US and Israel against the Iranian nuclear programme.[15] Barry Buzan, Ole Wæver, Jaap de Wilde and others have argued that national security depends on political security: the stability of the social order.[22] Others, such as Paul Rogers, have added that the equitability of the international order is equally vital.[9] Hence, political security depends on the rule of international law (including the laws of war), the effectiveness of international political institutions, and diplomacy and negotiation between nations and other security actors.[22] It also depends on, among other factors, effective political inclusion of disaffected groups and the human security of the citizenry.[9][8][23] Economic security, in the context of international relations, is the ability of a nation state to maintain and develop the national economy, without which other dimensions of national security cannot be managed. Economic capability largely determines the defence capability of a nation, and thus sound economic security directly influences national security. This is why countries with sound economies, such as the United States, China, and India, tend to have sound security setups as well.
In larger countries, strategies for economic security expect to access resources and markets in other countries and to protect their own markets at home. Developing countries may be less secure than economically advanced states due to high rates of unemployment and underpaid work.[citation needed] Environmental security, also known as ecological security, refers to the integrity of ecosystems and the biosphere, particularly in relation to their capacity to sustain a diversity of life-forms (including human life). The security of ecosystems has attracted greater attention as the impact of ecological damage by humans has grown.[24] The degradation of ecosystems, including topsoil erosion, deforestation, biodiversity loss, and climate change, affects economic security and can precipitate mass migration, leading to increased pressure on resources elsewhere. Ecological security is also important because most countries in the world are developing and dependent on agriculture, and agriculture is strongly affected by climate change; this in turn affects the economy of the nation, and thereby its national security. The scope and nature of environmental threats to national security, and strategies to engage them, are a subject of debate.[3]: 29–33 Romm (1993) classifies the major impacts of ecological changes on national security as:[3]: 15 Resources include water, sources of energy, land, and minerals. The availability of adequate natural resources is important for a nation to develop its industry and economic power. For example, in the Persian Gulf War of 1991, Iraq captured Kuwait partly in order to secure access to its oil wells, and one reason for the US counter-invasion was the value of the same wells to its own economy.[citation needed] Water resources are subject to disputes between many nations, including India and Pakistan, and in the Middle East.
The interrelations between security, energy, natural resources, and their sustainability are increasingly acknowledged in national security strategies, and resource security is now included among the UN Sustainable Development Goals.[12][11][27][14][28] In the US, for example, the military has installed solar photovoltaic microgrids on its bases in case of power outages.[29][30] The dimensions of national security outlined above are frequently in tension with one another. For example: If tensions such as these are mismanaged, national security policies and actions may be ineffective or counterproductive. Increasingly, national security strategies have begun to recognise that nations cannot provide for their own security without also developing the security of their regional and international context.[14][27][11][12] For example, Sweden's national security strategy of 2017 declared: "Wider security measures must also now encompass protection against epidemics and infectious diseases, combating terrorism and organised crime, ensuring safe transport and reliable food supplies, protecting against energy supply interruptions, countering devastating climate change, initiatives for peace and global development, and much more."[12] The extent to which this matters, and how it should be done, is the subject of debate.
Some argue that the principal beneficiary of national security policy should be the nation state itself, which should centre its strategy on protective and coercive capabilities in order to safeguard itself in a hostile environment (and potentially to project that power into its environment, and dominate it to the point of strategic supremacy).[35][36][37] Others argue that security depends principally on building the conditions in which equitable relationships between nations can develop, partly by reducing antagonism between actors, ensuring that fundamental needs can be met, and also that differences of interest can be negotiated effectively.[38][8][9] In the UK, for example, Malcolm Chalmers argued in 2015 that the heart of the UK's approach should be support for the Western strategic military alliance led through NATO by the United States, as "the key anchor around which international order is maintained".[39] Approaches to national security can have a complex impact on human rights and civil liberties. For example, the rights and liberties of citizens are affected by the use of military personnel and militarised police forces to control public behaviour; the use of surveillance, including mass surveillance in cyberspace, which has implications for privacy; military recruitment and conscription practices; and the effects of warfare on civilians and civil infrastructure. This has led to a dialectical struggle, particularly in liberal democracies, between government authority and the rights and freedoms of the general public. Even where the exercise of national security is subject to good governance and the rule of law, a risk remains that the term national security may become a pretext for suppressing unfavorable political and social views. In the US, for example, the controversial USA Patriot Act of 2001, and the revelation by Edward Snowden in 2013 that the National Security Agency harvests the personal data of the general public, brought these issues to wide public attention.
Among the questions raised are whether and how national security considerations at times of war should lead to the suppression of individual rights and freedoms, and whether such restrictions are necessary when a state is at peace. National security ideology as taught by the US Army School of the Americas to military personnel was vital in causing the military coup of 1964 in Brazil and the 1976 coup in Argentina. The military dictatorships were installed on the claim by the military that leftists were an existential threat to the national interests.[40] China's military is the People's Liberation Army (PLA). The military is the largest in the world, with 2.3 million active troops in 2005. The Ministry of State Security was established in 1983 to ensure "the security of the state through effective measures against enemy agents, spies, and counterrevolutionary activities designed to sabotage or overthrow China's socialist system."[41] For the Schengen area,[42] some parts of national security and external border control are enforced by Frontex[43] according to the Treaty of Lisbon. The security policy of the European Union is set by the High Representative of the Union for Foreign Affairs and Security Policy, assisted by the European External Action Service.[44] Europol is one of the agencies of the European Union responsible for combating various forms of crime in the European Union by coordinating the law enforcement agencies of the EU member states.[45] European Union national security has been accused of insufficiently preventing foreign threats.[46] The state of the Republic of India's national security is determined by its internal stability and geopolitical interests. While the Islamist upsurge demanding secession in the Indian state of Jammu and Kashmir and far-left-wing terrorism in India's red corridor remain key issues in India's internal security, terrorism from Pakistan-based militant groups has been emerging as a major concern for New Delhi.
The National Security Advisor of India heads the National Security Council of India, receives all kinds of intelligence reports, and is chief advisor to the Prime Minister of India on national and international security policy. The National Security Council has India's defence, foreign, home, and finance ministers and the deputy chairman of NITI Aayog as its members, and is responsible for shaping strategies for India's security in all aspects.[47] A lawyer, Ashwini Upadhyay, filed a public interest litigation (PIL) in the Supreme Court of India (SC) to identify and deport illegal immigrants. Responding to this PIL, the Delhi Police told the SC in July 2019 that nearly 500 illegal Bangladeshi immigrants had been deported in the preceding 28 months.[48] There are an estimated 600,000 to 700,000 illegal Bangladeshi and Rohingya immigrants in the National Capital Region (NCR), especially in the districts of Gurugram, Faridabad, and Nuh (Mewat region), as well as in interior villages of Bhiwani and Hisar. Most of them are Muslims who have acquired fake Hindu identities and, under questioning, pretend to be from West Bengal. In September 2019, the Chief Minister of Haryana, Manohar Lal Khattar, announced the implementation of an NRC for Haryana by setting up a legal framework under the former judge of the Punjab and Haryana High Court, Justice HS Bhalla, for updating the NRC, which will help in weeding out these illegal immigrants.[49] In the years 1997 and 2000, Russia adopted documents titled "National Security Concept" that described Russia's global position and the country's interests, listed threats to national security, and described the means to counter those threats. In 2009, these documents were superseded by the "National Security Strategy to 2020". The key body responsible for coordinating policies related to Russia's national security is the Security Council of Russia.
According to provision 6 of the National Security Strategy to 2020, national security is "the situation in which the individual, the society and the state enjoy protection from foreign and domestic threats to the degree that ensures constitutional rights and freedoms, decent quality of life for citizens, as well as sovereignty, territorial integrity and stable development of the Russian Federation, the defence and security of the state." Total Defence is Singapore's whole-of-society national defence concept,[50] based on the premise that the strongest defence of a nation is collective defence,[51] when every aspect of society stays united for the defence of the country.[52] Adapted from the national defence strategies of Sweden and Switzerland,[53] Total Defence was introduced in Singapore in 1984. At the time, it was recognised that military threats to a nation can affect the psyche and social fabric of its people.[54] Therefore, the defence and progress of Singapore depend on the resolve of all of its citizens, along with the government and armed forces.[55] Total Defence has since evolved to take into consideration threats and challenges outside of the conventional military domain.
National security of Ukraine is defined in Ukrainian law as "a set of legislative and organisational measures aimed at permanent protection of vital interests of man and citizen, society and the state, which ensure sustainable development of society, timely detection, prevention and neutralisation of real and potential threats to national interests in areas of law enforcement, fight against corruption, border activities and defence, migration policy, health care, education and science, technology and innovation policy, cultural development of the population, freedom of speech and information security, social policy and pension provision, housing and communal services, financial services market, protection of property rights, stock markets and circulation of securities, fiscal and customs policy, trade and business, banking services, investment policy, auditing, monetary and exchange rate policy, information security, licensing, industry and agriculture, transport and communications, information technology, energy and energy saving, functioning of natural monopolies, use of subsoil, land and water resources, minerals, protection of ecology and environment and other areas of public administration, in the event of emergence of negative trends towards the creation of potential or real threats to national interests."[56] The primary body responsible for coordinating national security policy in Ukraine is the National Security and Defense Council of Ukraine. It is an advisory state agency to the President of Ukraine, tasked with developing a policy of national security on domestic and international matters. All sessions of the council take place in the Presidential Administration Building. The council was created by provision No. 1658-12 of the Supreme Council of Ukraine on October 11, 1991.
It was defined as the highest state body of collegiate governing on matters of defence and security of Ukraine, with the following goals: The primary body responsible for coordinating national security policy in the UK is the National Security Council (United Kingdom), which helps produce and enact the UK's National Security Strategy. It was created in May 2010 by the new coalition government of the Conservative Party (UK) and the Liberal Democrats. The National Security Council is a committee of the Cabinet of the United Kingdom and was created as part of a wider reform of the national security apparatus. This reform also included the creation of a National Security Adviser and a National Security Secretariat to support the National Security Council.[57] The concept of national security became an official guiding principle of foreign policy in the United States when the National Security Act of 1947 was signed on July 26, 1947, by U.S. President Harry S. Truman.[3]: 3 As amended in 1949, this Act: Notably, the Act did not define national security, which was conceivably advantageous, as its ambiguity made it a powerful phrase to invoke against diverse threats to the interests of the state, such as domestic concerns.[3]: 3–5 The notion that national security encompasses more than just military security was present, though understated, from the beginning.
The Act established the National Security Council so as to "advise the President on the integration of domestic, military and foreign policies relating to national security".[2]: 52 The Act also establishes, within the National Security Council, the Committee on Foreign Intelligence, whose duty is to conduct an annual review "identifying the intelligence required to address the national security interests of the United States as specified by the President" (emphasis added).[59] In Gen. Maxwell Taylor's 1974 essay "The Legitimate Claims of National Security", Taylor states:[60] The national valuables in this broad sense include current assets and national interests, as well as the sources of strength upon which our future as a nation depends. Some valuables are tangible and earthy; others are spiritual or intellectual. They range widely from political assets such as the Bill of Rights, our political institutions, and international friendships to many economic assets which radiate worldwide from a highly productive domestic economy supported by rich natural resources. It is the urgent need to protect valuables such as these which legitimizes and makes essential the role of national security. To address the institutionalisation of new bureaucracies and government practices in the post–World War II period in the U.S., the culture of semi-permanent military mobilisation brought together the National Security Council (NSC), the Central Intelligence Agency (CIA), the Department of Defense (DoD), and the Joint Chiefs of Staff (JCS) in the practical application of the concept of the national security state:[61][62][63] During and after World War II, U.S. leaders expanded the concept of national security, and used its terminology for the first time to explain America's relationship to the world. For most of U.S. history, the continental United States was secure. But by 1945, it had rapidly become vulnerable with the advent of long-range bombers, atom bombs, and ballistic missiles.
A general perception grew that future mobilization would be insufficient and that preparation must be constant. For the first time, American leaders dealt with the essential paradox of national security faced by the Roman Empire and subsequent great powers: Si vis pacem, para bellum ("If you want peace, prepare for war").[64] Jack Nelson-Pallmeyer offers a seven-characteristic definition of a 'national security state' as one in which the military and the broader national security establishment exert influence over political and economic affairs; hold ultimate power while maintaining an appearance of democracy; are preoccupied with external and/or internal enemies; and define policies in secret and implement those policies through covert channels.[65] The U.S. Joint Chiefs of Staff defines national security of the United States in the following manner:[66] A collective term encompassing both national defense and foreign relations of the United States. Specifically, the condition provided by: a. a military or defense advantage over any foreign nation or group of nations; b. a favorable foreign relations position; or c. a defense posture capable of successfully resisting hostile or destructive action from within or without, overt or covert. In 2010, the White House included an all-encompassing world-view in a national security strategy which identified "security" as one of the country's "four enduring national interests" that were "inexorably intertwined":[67] "To achieve the world we seek, the United States must apply our strategic approach in pursuit of four enduring national interests: Each of these interests is inextricably linked to the others: no single interest can be pursued in isolation, but at the same time, positive action in one area will help advance all four." U.S.
Secretary of State Hillary Clinton has said that "the countries that threaten regional and global peace are the very places where women and girls are deprived of dignity and opportunity".[68] She has noted that countries where women are oppressed are places where the "rule of law and democracy are struggling to take root",[68] and that, when women's rights as equals in society are upheld, the society as a whole changes and improves, which in turn enhances stability in that society, which in turn contributes to global society.[68] The Bush administration in January 2008 initiated the Comprehensive National Cybersecurity Initiative (CNCI). It introduced a differentiated approach, such as identifying existing and emerging cybersecurity threats, finding and plugging existing cyber vulnerabilities, and apprehending those trying to access federal information systems.[69] President Obama said the "cyber threat is one of the most serious economic and national security challenges we face as a nation" and that "America's economic prosperity in the 21st century will depend on cybersecurity".[70]
https://en.wikipedia.org/wiki/National_security
The nothing to hide argument is a logical fallacy which states that individuals have no reason to fear or oppose surveillance programs unless they are afraid the surveillance will uncover their own illicit activities. An individual using this argument may claim that an average person should not worry about government surveillance, as they would have "nothing to hide".[1] An early instance of this argument was referenced by Henry James in his 1888 novel, The Reverberator: If these people had done bad things they ought to be ashamed of themselves and he couldn't pity them, and if they hadn't done them there was no need of making such a rumpus about other people knowing. Upton Sinclair also referenced a similar argument in his book The Profits of Religion, published in 1917: Not merely was my own mail opened, but the mail of all my relatives and friends — people residing in places as far apart as California and Florida. I recall the bland smile of a government official to whom I complained about this matter: "If you have nothing to hide you have nothing to fear." My answer was that a study of many labor cases had taught me the methods of the agent provocateur. He is quite willing to take real evidence if he can find it; but if not, he has familiarized himself with the affairs of his victim, and can make evidence which will be convincing when exploited by the yellow press.[2] The motto "If you've got nothing to hide, you've got nothing to fear" has been used in defense of the closed-circuit television program practiced in the United Kingdom.[3] This argument is commonly used in discussions regarding privacy. Legal scholar Geoffrey Stone said that the use of the argument is "all-too-common".[3] Bruce Schneier, a data security expert and cryptographer, described it as the "most common retort against privacy advocates."[3] Colin J.
Bennett, author of The Privacy Advocates, said that an advocate of privacy often "has to constantly refute" the argument.[4] Bennett explained that most people "go through their daily lives believing that surveillance processes are not directed at them, but at the miscreants and wrongdoers" and that "the dominant orientation is that mechanisms of surveillance are directed at others", despite "evidence that the monitoring of individual behavior has become routine and everyday". An ethnographic study by Ana Viseu, Andrew Clement, and Jane Aspinal revealed that individuals with higher socioeconomic status were not as concerned by surveillance as their counterparts.[5] In another study regarding privacy-enhancing technology,[6] Viseu et al. noticed a complacency regarding user privacy. Both studies attributed this attitude to the nothing to hide argument. A qualitative study conducted for the government of the United Kingdom around 2003[7] found that self-employed men initially used the "nothing to hide" argument before shifting to an argument in which they perceived surveillance to be a nuisance rather than a threat.[8] Viseu et al. said that the argument "has been well documented in the privacy literature as a stumbling block to the development of pragmatic privacy protection strategies, and it, too, is related to the ambiguous and symbolic nature of the term 'privacy' itself."[6] They explained that privacy is an abstract concept and that people only become concerned with it once their privacy is gone. Furthermore, they compare the loss of privacy to people knowing that ozone depletion and global warming are negative developments, but finding that "the immediate gains of driving the car to work or putting on hairspray outweigh the often invisible losses of polluting the environment."
Whistleblower and anti-surveillance advocate Edward Snowden remarked that "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."[9] From his perspective, governments are obligated to protect citizens' right to privacy, and people who argue in favor of the nothing to hide argument are too willing to accept government infringement upon those rights. Daniel J. Solove stated in an article for The Chronicle of Higher Education that he opposes the argument. He was concerned that without privacy rights, governments could do damage to citizens by leaking sensitive information, or use information about a person to deny access to services, even if that person has not actually committed any crimes. Solove also wrote that a government can cause damage to an individual's personal life by making errors:[3] "When engaged directly, the nothing-to-hide argument can ensnare, for it forces the debate to focus on its narrow understanding of privacy. But when confronted with the plurality of privacy problems implicated by government data collection and use beyond surveillance and disclosure, the nothing-to-hide argument, in the end, has nothing to say." Adam D. Moore, author of Privacy Rights: Moral and Legal Foundations, argued that "it is the view that rights are resistant to cost/benefit or consequentialist sort of arguments. Here we are rejecting the view that privacy interests are the sorts of things that can be traded for security."[10] He also stated that surveillance can disproportionately affect certain groups in society based on appearance, ethnicity, sexuality, and religion. Cryptographer and computer security expert Bruce Schneier expressed opposition to the nothing to hide argument, citing a statement widely attributed to Cardinal Richelieu:[11] "Give me six lines written by the hand of the most honest man, I'll find enough to hang him."
This metaphor is meant to illustrate that with even a small amount of information about an individual, an entity such as a government can find a way to prosecute or blackmail them.[12] Schneier also argued that the actual choice is between "liberty versus control", rather than "security versus privacy".[12] Philosopher and psychoanalyst Emilio Mordini argued that the "nothing to hide" argument is inherently paradoxical, because people do not need to have "something to hide" in order to be hiding "something". Mordini makes the point that the content of what is hidden is not necessarily relevant; instead, he argues that it is necessary to have an intimate area which can be both hidden and access-restricted, because, from a psychological perspective, people become individuals when they discover that it is possible to hide something from others.[13] Julian Assange, founder of WikiLeaks, agreed with Jacob Appelbaum and remarked that "Mass surveillance is a mass structural change. When society goes bad, it's going to take you with it, even if you are the blandest person on earth."[14] Law professor Ignacio Cofone argued that the argument is mistaken on its own terms, because whenever people disclose relevant information to others, they also must disclose irrelevant information, and this irrelevant information has privacy costs and can lead to discrimination or other harmful effects.[15][16] Alex Winter, director of the documentary Deep Web: The Untold Story of Bitcoin and the Silk Road, stated in his 2015 TED Talk: "I don't accept the idea that if we have nothing to hide we have nothing to fear. Privacy serves a purpose. It's why we have blinds on our windows and a door on our bathroom."[17]
https://en.wikipedia.org/wiki/Nothing_to_hide_argument
A pen register, or dialed number recorder (DNR), is a device that records all numbers called from a particular telephone line.[1] The term has come to include any device or program that performs similar functions to an original pen register, including programs monitoring Internet communications. The United States statutes governing pen registers are codified under 18 U.S.C., Chapter 206.

The term pen register originally referred to a device for recording telegraph signals on a strip of paper. Samuel F. B. Morse's 1840 telegraph patent described such a register as consisting of a lever holding an armature on one end, opposite an electromagnet, with a fountain pen, pencil, or other marking instrument on the other end, and a clockwork mechanism to advance a paper recording tape under the marker.[2] The term telegraph register came to be a generic term for such a recording device in the later 19th century.[3] Where the record was made in ink with a pen, the term pen register emerged. By the end of the 19th century, pen registers were widely used to record pulsed electrical signals in many contexts. For example, one fire-alarm system used a "double pen-register",[4] and another used a "single or multiple pen register".[5]

As pulse dialing came into use for telephone exchanges, pen registers had obvious applications as diagnostic instruments for recording sequences of telephone dial pulses. In the United States, the clockwork-powered Bunnell pen register remained in use into the 1960s.[6] After the introduction of tone dialing, any instrument that could be used to record the numbers dialed from a telephone came to be defined as a pen register.
Title 18 of the United States Code defines a pen register as:

a device or process which records or decodes dialing, routing, addressing, or signaling information transmitted by an instrument or facility from which a wire or electronic communication is transmitted, provided, however, that such information shall not include the contents of any communication, but such term does not include any device or process used by a provider or customer of a wire or electronic communication service for billing, or recording as an incident to billing, for communications services provided by such provider or any device or process used by a provider or customer of a wire communication service for cost accounting or other like purposes in the ordinary course of its business[7]

This is the current definition of a pen register, as amended by passage of the 2001 USA PATRIOT Act. The original statutory definition of a pen register was created in 1986 as part of the Electronic Communications Privacy Act, which defined a "pen register" as:

a device which records or decodes electronic or other impulses which identify the numbers called or otherwise transmitted on the telephone line to which such device is dedicated.

A pen register is similar to a trap and trace device. A trap and trace device shows what numbers have called a specific telephone, i.e., all incoming phone numbers; a pen register instead shows what numbers a phone has called, i.e., all outgoing phone numbers. The two terms are often used in concert, especially in the context of Internet communications. They are often jointly referred to as "pen register or trap and trace devices" to reflect the fact that the same program will probably perform both functions in the modern era, and the distinction is not that important. The term "pen register" is often used to describe both pen registers and trap and trace devices.[8]

In Katz v. United States (1967), the United States Supreme Court established its "reasonable expectation of privacy" test.
It overturned Olmstead v. United States (1928) and held that warrantless wiretaps were unconstitutional searches, because there was a reasonable expectation that the communication would be private. From then on, the government was required to get a warrant to execute a wiretap.

Twelve years later the Supreme Court held that use of a pen register is not a search, because the "petitioner voluntarily conveyed numerical information to the telephone company." Smith v. Maryland, 442 U.S. 735, 744 (1979). Since the defendant had disclosed the dialed numbers to the telephone company so it could connect his call, he did not have a reasonable expectation of privacy in the numbers he dialed. The court did not distinguish between disclosing the numbers to a human operator and disclosing them to the automatic equipment used by the telephone company. The Smith decision left pen registers completely outside constitutional protection. If there was to be any privacy protection, it would have to be enacted by Congress as statutory privacy law.[citation needed][1]

The Electronic Communications Privacy Act (ECPA) was passed in 1986 (Pub. L. No. 99-508, 100 Stat. 1848). There were three main provisions, or Titles, to the ECPA. Title III created the Pen Register Act, which included restrictions on private and law enforcement uses of pen registers. Private parties were generally restricted from using them unless they met one of the exceptions, which included an exception for the business providing the communication if it needed to do so to ensure the proper functioning of its business.

For law enforcement agencies to get a pen register approved for surveillance, they must get a court order from a judge. According to 18 U.S.C.
§ 3123(a)(1), the "court shall enter an ex parte order authorizing the installation and use of a pen register or trap and trace device anywhere within the United States, if the court finds that the attorney for the Government has certified to the court that the information likely to be obtained by such installation and use is relevant to an ongoing criminal investigation".[9] Thus, a government attorney need only certify that information will "likely" be obtained in relation to an "ongoing criminal investigation". This is the lowest requirement for receiving a court order under any of the ECPA's three titles, because in Smith v. Maryland the Supreme Court ruled that use of a pen register does not constitute a search. The ruling held that only the content of a conversation should receive full constitutional protection under the right to privacy; since pen registers do not intercept conversation, they do not pose as much threat to this right.

Some have argued that the government should be required to present "specific and articulable facts" showing that the information to be gathered is relevant and material to an ongoing investigation. This is the standard used by Title II of the ECPA with regard to the contents of stored communications. Others, such as Daniel J. Solove, Patricia Bellia, and Deirdre Mulligan, believe that probable cause and a warrant should be necessary.[10][11][12] Paul Ohm argues that the standard of proof should be replaced or reworked for electronic communications altogether.[13]

The Pen Register Act did not include an exclusionary rule. While there were civil remedies for violations of the Act, evidence gained in violation of the Act can still be used against a defendant in court. There have also been calls for Congress to add an exclusionary rule to the Pen Register Act, as this would make it more analogous to traditional Fourth Amendment protections.
The penalty for violating the Pen Register Act is a misdemeanor, carrying a prison sentence of not more than one year.[14]

Section 216 of the 2001 USA PATRIOT Act expanded the definition of a pen register to include devices or programs that provide an analogous function with Internet communications. Prior to the Patriot Act, it was unclear whether the definition of a pen register, which included very specific telephone terminology,[15] could apply to Internet communications. Most courts and law enforcement personnel operated under the assumption that it did; however, the Clinton administration had begun work on legislation to make that clear, and one magistrate judge in California did rule that the language was too telephone-specific to apply to Internet surveillance.

The Pen Register Statute is a privacy act. Because there is no constitutional protection for information divulged to a third party under the Supreme Court's expectation of privacy test, and the routing information for phone and Internet communications is divulged to the company providing the communication, the absence or inapplicability of the statute would leave the routing information for those communications completely unprotected from government surveillance. The government also has an interest in making sure the Pen Register Act exists and applies to Internet communications. Without the Act, it cannot compel service providers to hand over records or conduct Internet surveillance with their own equipment or software, and the law enforcement agency, which may not have strong technological capabilities, would have to do the surveillance itself at its own cost. Rather than creating new laws regarding Internet surveillance, the Patriot Act simply expanded the definition of a pen register to include computer software programs doing Internet surveillance by accessing information.
While not completely compatible with the technical definition of a pen register device, this was the interpretation that had been used by almost all courts and law enforcement agencies prior to the change.[15]

When, in 2006, the Bush administration came under fire for having secretly collected billions of phone call details from ordinary Americans, ostensibly to check for calls to terror suspects, the Pen Register Act was cited, along with the Stored Communications Act, as an example of how such domestic spying violated federal law.[16]

In 2013, the Obama administration sought a court order "requiring Verizon on an 'ongoing, daily basis' to give the NSA information on all telephone calls in its systems, both within the US and between the US and other countries". The order was approved on April 25, 2013, by federal Judge Roger Vinson, a member of the secret Foreign Intelligence Surveillance Court (FISC), which had been created by the Foreign Intelligence Surveillance Act (FISA). The order gave the government unlimited authority to compel Verizon to collect and provide the data for a specified three-month period ending on July 19. This was the first time significant top-secret documents were revealed exposing the continuation of the practice on a massive scale under U.S. President Barack Obama. According to The Guardian, "it is not known whether Verizon is the only cell-phone provider to be targeted with such an order, although previous reporting has suggested the NSA has collected cell records from all major mobile networks. It is also unclear from the leaked document whether the three-month order was a one-off or the latest in a series of similar orders".[17]

On September 1, 2013, the DEA's Hemisphere Project was revealed to the public by The New York Times.
In a series of PowerPoint slides acquired through a lawsuit, AT&T was revealed to be operating a call database going back to 1987, to which the DEA has warrantless access with no judicial oversight under "administrative subpoenas" originated by the DEA. The DEA pays AT&T to maintain employees throughout the country devoted to investigating call records through this database for the DEA. The database grows by 4 billion records per day and presumably covers all traffic that crosses AT&T's network. Internal directives instructed participants never to reveal the project publicly, despite the fact that the project was portrayed as a "routine" part of DEA investigations; several investigations unrelated to drugs have been mentioned as using the data. When questioned on their participation, Verizon, Sprint, and T-Mobile refused to comment on whether they were part of the project, generating fears that pen registers and trap and trace devices are effectively irrelevant in the face of ubiquitous public-private-partnership surveillance with indefinite data retention.[18]

Information that is legally collectible according to 2014 pen trap laws includes:[citation needed]
https://en.wikipedia.org/wiki/Pen_register
A police state describes a state whose government institutions exercise an extreme level of control over civil society and liberties. There is typically little or no distinction between the law and the exercise of political power by the executive, and the deployment of internal security and police forces plays a heightened role in governance. A police state is a characteristic of authoritarian, totalitarian, or illiberal regimes (in contrast to a liberal democratic regime). Such governments are typically one-party states and dominant-party states, but police-state-level control may emerge in multi-party systems as well.

Originally, a police state was a state regulated by a civil administration, but since the beginning of the 20th century the term has "taken on an emotional and derogatory meaning" by describing an undesirable state of living characterized by the overbearing presence of civil authorities.[1] The inhabitants of a police state may experience restrictions on their mobility, or on their freedom to express or communicate political or other views, which are subject to police monitoring or enforcement.
Political control may be exerted by means of a secret police force that operates outside the boundaries normally imposed by a constitutional state.[2] Robert von Mohl, who first introduced the rule of law to German jurisprudence, contrasted the Rechtsstaat ("legal" or "constitutional" state) with the anti-aristocratic Polizeistaat ("police state").[3] The Oxford English Dictionary traces the phrase "police state" back to 1851, when it was used in reference to the use of a national police force to maintain order in the Austrian Empire.[4] The German term Polizeistaat came into English usage in the 1930s with reference to the totalitarian governments that had begun to emerge in Europe.[5]

Because there are different political perspectives as to what an appropriate balance is between individual freedom and national security, there are no objective standards defining a police state.[citation needed] The concept can be viewed as a balance or scale: along this spectrum, any law that has the effect of removing liberty is seen as moving towards a police state, while any law that limits government oversight of the populace is seen as moving towards a free state.[6] An electronic police state is one in which the government aggressively uses electronic technologies to record, organize, search, and distribute forensic evidence against its citizens.[7][8]

Early forms of police states can be found in ancient China. During the rule of King Li of Zhou in the 9th century BC, there was strict censorship, extensive state surveillance, and frequent executions of those who were perceived to be speaking against the regime. During this reign of terror, ordinary people did not dare to speak to each other on the street, and only made eye contact with friends as a greeting, a situation known as '道路以目'. Subsequently, during the short-lived Qin Dynasty, the police state became far more wide-reaching than its predecessors.
In addition to strict censorship and the burning of all political and philosophical books, the state implemented strict control over its population by using collective executions and by disarming the population. Residents were grouped into units of 10 households, with weapons strictly prohibited and only one kitchen knife allowed per 10 households. Spying and snitching were commonplace, and failure to report any anti-regime activities was treated the same as participating in them. If one person committed any crime against the regime, all 10 households would be executed.[citation needed]

Some have characterised the rule of King Henry VIII during the Tudor period as a police state.[9][10] The Oprichnina established by Tsar Ivan IV within the Russian Tsardom in 1565 functioned as a predecessor of the modern police state, featuring persecutions and autocratic rule.[11][12]

Nazi Germany emerged from an originally democratic government, yet gradually exerted more and more repressive controls over its people in the lead-up to World War II.
In addition to the SS and the Gestapo, the Nazi police state used the judiciary to assert control over the population from the 1930s until the end of the war in 1945.[13]

During the period of apartheid, South Africa maintained police-state attributes such as banning people and organizations, arresting political prisoners, maintaining segregated living communities, and restricting movement and access.[14]

Augusto Pinochet's Chile operated as a police state,[15] exhibiting "repression of public liberties, the elimination of political exchange, limiting freedom of speech, abolishing the right to strike, freezing wages".[16]

The Republic of Cuba under President (and later right-wing dictator) Fulgencio Batista was an authoritarian police state until his overthrow during the Cuban Revolution in 1959, with the rise to power of Fidel Castro and the foundation of a Marxist-Leninist republic.[17][18][19][20]

Following the failed July 1958 Haitian coup d'état attempt to overthrow the president, Haiti descended into an autocratic and despotic family dictatorship under the Haitian Vodou black nationalist François Duvalier (Papa Doc) and his National Unity Party. In 1959, Papa Doc ordered the creation of the Tonton Macoutes, a paramilitary unit he authorized to commit systematic violence and human rights abuses to suppress political opposition, including an unknown number of murders, public executions, rapes, disappearances of and attacks on dissidents; in short, unrestrained state terrorism. In the 1964 Haitian constitutional referendum, he declared himself president for life through a sham election.
After Duvalier's death in 1971, his son Jean-Claude (Baby Doc) succeeded him as the next president for life, continuing the regime until the popular uprising that overthrew him in February 1986.[citation needed]

Ba'athist Syria under the dictatorship of Bashar al-Assad was described[by whom?] as the most "ruthless police state" in the Arab world, with a tight system of restrictions on the movement of civilians, independent journalists, and other unauthorized individuals. Alongside North Korea and Eritrea, it operated one of the strictest censorship machines regulating the transfer of information. The Syrian security apparatus was established in the 1970s by Hafez al-Assad, who ran a military dictatorship with the Ba'ath party as its civilian cover to enforce the loyalty of the Syrian population to the Assad family. The dreaded Mukhabarat was given a free hand to terrorise, torture, or murder non-compliant civilians, while the public activities of any organized opposition were curbed with the raw firepower of the army.
Bashar and his family were overthrown in December 2024 during the Syrian Revolution; the fall of the Assad regime and the fall of Damascus forced Bashar to flee Syria for political asylum in Moscow, Russia.[21][22]

The region of modern-day Korea is claimed to have had elements of a police state, from the Juche-style Silla kingdom,[23] to the imposition of a fascist police state by the Japanese,[23] to the totalitarian police state imposed and maintained by the Kim family.[24] In 2006, the Paris-based Reporters Without Borders ranked North Korea last or second last in its test of press freedom since the Press Freedom Index's introduction, stating that the ruling Kim family controls all of the media.[25][26]

In response to government proposals to enact new security measures to curb protests, the AKP-led government of Turkey has been accused of turning Turkey into a police state.[27] Since the 2013 removal of the Muslim Brotherhood-affiliated former Egyptian president Mohamed Morsi from office, the government of Egypt has carried out extensive efforts to suppress certain forms of Islamism and religious extremism (including the aforementioned Muslim Brotherhood),[28][better source needed] leading to accusations that it has effectively become a "revolutionary police state".[29][30]

The USSR was a police state.[31] Notable secret police forces in the former USSR were the Cheka, the NKVD, and the KGB. Tools of state control used by the Soviet Union included censorship, forced labour under the Gulag system of labour camps,[32] and deportation and genocide of ethnic minorities, as in the Holodomor, NKVD Order No.
00485 against the Poles, and De-Cossackization.[33] Modern-day Russia[34][35] and Belarus are often described as police states.[36][37]

The dictatorship of Ferdinand Marcos from the 1970s to early 1980s in the Philippines had many characteristics of a police state.[38][39]

Hong Kong is perceived by some human rights organizations and press to have implemented the tools of a police state after passing the National Security legislation in 2020, following repeated attempts by the People's Republic of China to erode the rule of law in the former British colony.[40][41][42][43][44]

The United States has been described as a police state since the election of President Donald Trump in 2024, particularly due to the mass deportations of activists (including green card holders such as Mahmoud Khalil) and immigrants without due process or transparency, alongside unidentified ICE detentions.[45]

Fictional police states have featured in media ranging from novels to films to video games. George Orwell's novel 1984 describes Britain under the totalitarian Oceanian regime, which continuously invokes (and helps to cause) a perpetual war. This perpetual war is used as a pretext for subjecting the people to mass surveillance and invasive police searches. The novel was described by The Encyclopedia of Police Science as "the definitive fictional treatment of a police state, which has also influenced contemporary usage of the term".[46]
https://en.wikipedia.org/wiki/Police_state
The right to privacy is an element of various legal traditions that intends to restrain governmental and private actions that threaten the privacy of individuals.[1][failed verification][2] Over 185 national constitutions mention the right to privacy.[3] Since the global surveillance disclosures of 2013, the right to privacy has been a subject of international debate. Government agencies, such as the NSA, FBI, CIA, R&AW, and GCHQ, have engaged in mass global surveillance. Some current debates around the right to privacy include whether privacy can co-exist with the current capabilities of intelligence agencies to access and analyze many details of an individual's life; whether or not the right to privacy is forfeited as part of the social contract to bolster defense against supposed terrorist threats; and whether threats of terrorism are a valid excuse to spy on the general population. Private sector actors can also threaten the right to privacy, particularly technology companies such as Amazon, Apple, Meta, Google, Microsoft, and Yahoo that use and collect personal data.

The right to privacy is a fundamental human right firmly grounded in international law. On 10 December 1948, the United Nations General Assembly adopted the Universal Declaration of Human Rights (UDHR); while the phrase "right to privacy" does not appear in the document, Article 12 mentions privacy:

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.
Privacy was later codified in successive (hard) international human rights treaties, including the International Covenant on Civil and Political Rights.[4][5]

The concept of a human "right to privacy" begins when the Latin word ius expanded from meaning "what is fair" to include "a right – an entitlement a person possesses to control or claim something," in the Decretum Gratiani in Bologna, Italy, in the 12th century.[6]

In the United States, an article in the 15 December 1890 issue of the Harvard Law Review entitled "The Right to Privacy," written by attorney Samuel D. Warren II and future U.S. Supreme Court Justice Louis Brandeis, is often cited as the first explicit finding of a U.S. right to privacy. Warren and Brandeis wrote that privacy is the "right to be let alone," and focused on protecting individuals. This approach was a response to technological developments of the time, such as photography and sensationalist journalism, also known as "yellow journalism."[7]

Privacy rights are inherently intertwined with information technology. In his widely cited dissenting opinion in Olmstead v. United States (1928), Brandeis relied on thoughts he developed in "The Right to Privacy."[7] In that dissent, he urged that personal privacy matters were more relevant to constitutional law, going so far as to say that "the government was identified as a potential privacy invader." He writes, "Discovery and invention have made it possible for the Government, by means far more effective than stretching upon the rack, to obtain disclosure in court of what is whispered in the closet." At that time, telephones were often community assets, with shared party lines and potentially eavesdropping switchboard operators. By the time of Katz in 1967, telephones had become personal devices with lines not shared across homes, and switching was electro-mechanical.
In the 1970s, new computing and recording technologies raised more concerns about privacy, resulting in the Fair Information Practice Principles. In recent years, there have been few attempts to clearly and precisely define a "right to privacy."[8]

Alan Westin believes that new technologies alter the balance between privacy and disclosure, and that privacy rights may limit government surveillance in order to protect democratic processes. Westin defines privacy as "the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others". Westin describes four states of privacy: solitude, intimacy, anonymity, and reserve. These states must balance participation against norms:

Each individual is continually engaged in a personal adjustment process in which he balances the desire for privacy with the desire for disclosure and communication of himself to others, in light of the environmental conditions and social norms set by the society in which he lives.

Under liberal democratic systems, privacy creates a space separate from political life, and allows personal autonomy, while ensuring democratic freedoms of association and expression. Privacy for individuals is the ability to behave, think, speak, and express ideas without the monitoring or surveillance of someone else. Individuals exercise their freedom of expression by attending political rallies and by choosing to hide their identities online through pseudonyms.

David Flaherty believes networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally.
Flaherty forwards an idea of privacy as information control: "individuals want to be left alone and to exercise some control over how information about them is used".[10]

Marc Rotenberg has described the modern right to privacy as Fair Information Practices: "the rights and responsibilities associated with the collection and use of personal information." Rotenberg emphasizes that rights are allocated to the data subject and responsibilities are assigned to the data collectors, because of the transfer of the data and the asymmetry of information concerning data practices.[11]

Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labor market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud.[12] For Lessig, privacy breaches online can be regulated through code and law. Lessig claims that "the protection of privacy would be stronger if people conceived of the right as a property right," and that "individuals should be able to control information about themselves".[13] Economic approaches to privacy make communal conceptions of privacy difficult to maintain.

Adam D. Moore has argued that privacy, the right to control access to and use of personal information, is closely connected to human well-being. He notes that "having the ability and authority to regulate access to and uses of locations, bodies, and personal information, is an essential part of human flourishing" and while "the forms of privacy may be culturally relative . . . the need for privacy is not."[14]

There have been attempts to reframe privacy as a fundamental human right, whose social value is an essential component in the functioning of democratic societies.[15]

Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy.
She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought. Public values guarantee democratic participation, including freedoms of speech and association, and limit government power. Collective elements describe privacy as a collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".[16]

Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy. Privacy depends on norms for how information is distributed, and whether that distribution is appropriate. Violations of privacy depend on context. The human right to privacy has precedent in the United Nations Declaration of Human Rights. Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.[17]

Privacy laws apply to both public and private sector actors.

Australia does not have a constitutional right to privacy. However, the Privacy Act 1988 (Cth) provides a degree of protection over an individual's personally identifiable information and its usage by the government and large companies.[18] The Privacy Act also outlines the 13 Australian Privacy Principles.[19] Australia also lacks a tort against invasions of privacy.
In the 2001 case of Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd, 208 CLR 199, the High Court of Australia explained that there stood the possibility of "a tort identified as unjustified invasion of privacy",[20] but that this case lacked the facts to establish it.[18] Since 2001, there have been some state-based cases, namely the 2003 case Grosse v Purvis, QDC 151, and the 2007 case Doe v Australian Broadcasting Corporation, VCC 281, that attempted to establish a tortious invasion of privacy, but these cases were settled before decisions could be made. Further, they have received conflicting analyses in later cases.[21]

Canadian privacy law is derived from the common law, statutes of the Parliament of Canada and the various provincial legislatures, and the Canadian Charter of Rights and Freedoms. Perhaps ironically, Canada's legal conceptualization of privacy, along with most modern legal Western conceptions of privacy, can be traced back to Warren and Brandeis's "The Right to Privacy," published in the Harvard Law Review in 1890. Holvast states, "Almost all authors on privacy start the discussion with the famous article 'The Right to Privacy' of Samuel Warren and Louis Brandeis".

The Constitution is the highest law in China.
Privacy rights have been applied throughout China.[22] The Constitution provides direction for all states in China, and it further stipulates that "all states must abide by and be held accountable for any violation of the Constitution and the law; the law specifically protects civil rights of a citizen's personal dignity and confidentiality of correspondence."[23] China has a new standard, the first of its kind for the country: the Civil Code, which came into effect on 1 January 2021. It is a sweeping law replacing all prior laws covering general provisions, real property, contracts, personality rights, marriage and family, inheritance, tort liability, and supplementary provisions.[24]

In many cases raised in the legal system, these rights have been overlooked, as the courts have not treated each case with the same legal precedent. China deploys mass surveillance on its population, including through the use of closed-circuit television.[25]

The 2021 Data Security Law classifies data into different categories and establishes corresponding levels of protection.[26]: 131 It imposes significant data localization requirements, in response to the extraterritorial reach of the United States CLOUD Act and similar foreign laws.[26]: 250–251 The 2021 Personal Information Protection Law is China's first comprehensive law on personal data rights and is modeled after the European Union's General Data Protection Regulation.[26]: 131

The right to privacy is protected in the EU by Article 8 of the European Convention on Human Rights:

1. Everyone has the right to respect for his private and family life, his home and his correspondence.

2.
There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others. Compared to the United States, the European Union (EU) has more extensive data protection laws.[27] The General Data Protection Regulation (GDPR) is an important component of EU privacy law and of human rights law, in particular Article 8(1) of the Charter of Fundamental Rights of the European Union. Under the GDPR, data about citizens may only be gathered or processed in specific cases and under certain conditions. Requirements of data controllers under the GDPR include keeping records of their processing activities, adopting data protection policies, being transparent with data subjects, appointing a Data Protection Officer, and implementing technical safeguards to mitigate security risks.[citation needed] The Council of Europe gathered to discuss the protection of individuals when Convention Treaty No. 108 was created and opened for signature by member states and for accession by non-member states.[28] The treaty became known as Convention 108: the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data.
Convention 108 received its fifth ratification on 10 January 1985. It was later modernized as Convention 108+, whose summary states the intent of the treaty as: The first binding international instrument which protects the individual against abuses which may accompany the collection and processing of personal data, and which seeks to regulate at the same time the transfrontier flow of personal data.[28] Increased use of the Internet and technological advancement in products led the Council of Europe to reexamine Convention 108 and the relevance of the treaty in the wake of these changes. The modernization of the convention began in 2011, amending the treaty with Protocol CETS No. 223.[29] This modernization was in progress while the EU data protection rules were being developed; those rules would be adapted to become the GDPR. WhatsApp's new data sharing policy with Facebook, adopted after Facebook acquired WhatsApp in 2014, has been challenged in the Supreme Court of India. The Supreme Court must decide if the right to privacy can be enforced against private entities.[30] The Indian Supreme Court, with a nine-judge bench under JS Khehar, ruled on 24 August 2017 that the right to privacy is a fundamental right for Indian citizens under Article 21 of the Constitution and additionally under Part III rights. Specifically, the court adopted the three-pronged test required for encroachment of any Article 21 right: legality, i.e. through an existing law; necessity, in terms of a legitimate state objective; and proportionality, which ensures a rational nexus between the object of the invasion and the means adopted to achieve that object.[31] This clarification was crucial to prevent the dilution of the right in the future on the whims and fancies of the government in power.[32] The Court adopted a liberal interpretation of the fundamental rights to meet the challenges posed by an increasingly digital age.
It held that individual liberty must extend to digital spaces and that individual autonomy and privacy must be protected.[33] This ruling by the Supreme Court paved the way for the decriminalization of homosexuality in India on 6 September 2018, legalizing same-sex sexual intercourse between two consenting adults in private.[34] India is the world's biggest democracy and, with this ruling, joined the United States, Canada, South Africa, the European Union, and the UK in recognizing this fundamental right.[35] India's data protection law is the Digital Personal Data Protection Act, 2023. In Israel, privacy protection is a constitutional basic right and is therefore protected by Basic Law. The Third Knesset passed Basic Law: The Knesset on 12 February 1958.[36] The Twelfth Knesset updated the Basic Law on 17 March 1992, adding Basic Law: Human Dignity and Liberty, which defines human freedom in Israel as including the right to leave and enter the country, the right to privacy and intimacy, freedom from searches relating to one's private property, body, and possessions, and freedom from violations of the privacy of one's speech, writings, and notes. In October 2006, Israel established a regulatory authority, the PPA, as part of the Ministry of Justice. The PPA administers the Privacy Law and associated regulations based on two principles: a general right to online privacy and the protection of personal data stored in databases.[37] The Constitution of the Russian Federation, Article 45, states:[40] The Russian Constitution, specifically Articles 23 and 24, grants individual citizens the right to privacy. Russia, a member of the Strasbourg Convention, ratified protections for personal data against automatic processing and afterwards adopted a new convention. Russian Federal Law No. 152-FZ, implemented on 27 July 2006, was updated to cover personal data, and this law extends privacy to include personal and family secrets.
Its main purpose is to protect individuals' personal data. Privacy entered the forefront of Russian legislation in 2014, when the approach to privacy shifted toward protecting the privacy of government operations and of the people of Russia. The amendments originally modified the Personal Data Law, which has since been renamed the Data Localisation Law. The law requires business operators who collect any information on Russian citizens to maintain the collected data locally, meaning that data transmission, processing, and storage must take place in a database in Russia. On 1 March 2021, a new amendment came into effect, requiring consent from the data subject if the data operator wants to use the data publicly.[41] The Constitution of the United States and the United States Bill of Rights do not explicitly include a right to privacy.[42] Currently, no federal law takes a holistic approach to privacy regulation. In the US, privacy and expectations of privacy have been determined via court cases; those court decisions have established what constitutes a reasonable expectation of privacy. The Supreme Court in Griswold v. Connecticut, 381 U.S. 479 (1965), found that the Constitution guarantees a right to privacy against governmental intrusion via penumbras located in the founding text.[43] In 1890, Warren and Brandeis drafted an article published in the Harvard Law Review titled "The Right to Privacy" that is often cited as the first implicit finding of a U.S. stance on the right to privacy.[7] The right to privacy has been the justification for decisions involving a wide range of civil liberties cases, including Pierce v. Society of Sisters, which invalidated a successful 1922 Oregon initiative requiring compulsory public education; Roe v. Wade, which struck down an abortion law from Texas, and thus restricted state powers to enforce laws against abortion; and Lawrence v.
Texas, which struck down a Texas sodomy law, and thus eliminated state powers to enforce laws against sodomy. Dobbs v. Jackson Women's Health Organization later overruled Roe v. Wade, in part due to the Supreme Court finding that the right to privacy is not mentioned in the Constitution,[44] leaving the future validity of these decisions uncertain.[45] Legally, the right of privacy is a basic law[46] which includes: However, outside of recognized private locations, American law, for the most part, grants next to no privacy to those in public areas. In other words, no verbal or written consent is needed to take photos or videos of those in public areas.[47] This laxness extends to potentially embarrassing situations, such as when actress Jennifer Garner bent over to retrieve something from her car and revealed her thong underwear, creating a whale tail. Because the photographer took the photo in a public location, in this case a pumpkin patch, circulating the photo online was a legal act.[48] In the health care sector, where medical records are part of an individual's privacy, the Privacy Rule of the Health Insurance Portability and Accountability Act was passed in 1996. This act safeguards patients' medical data and gives individuals rights over their health information, such as getting a copy of their records and seeking corrections.[49] Medical anthropologist Khiara Bridges has argued that the US Medicaid system requires so much personal disclosure from pregnant women that they effectively do not have privacy rights.[50] In 2018, California set out to create a policy promoting data protection, the first state in the United States to pursue such protection. The resulting effort is the California Consumer Privacy Act (CCPA), viewed as a critical juncture defining what privacy entails from California lawmakers' perspective.
The California Consumer Privacy Act is a privacy law protecting the residents of California and their personally identifying information. The law regulates all companies, regardless of operational geography, with respect to the six intentional acts included in the law.[51] The intentions included in the Act provide California residents with the right to: Governmental organizations such as the National Security Agency (NSA), CIA, and GCHQ, among others, are authorized to conduct mass surveillance throughout other nations in the world. Programs such as PRISM, MYSTIC, and other operations conducted by NATO-member states are capable of collecting a vast quantity of metadata, internet history, and even actual recordings of phone calls from various countries.[52] Domestic law enforcement at the federal level is conducted by the Federal Bureau of Investigation, so these intelligence agencies have never been authorized to collect US data.[53] PRISM has faced criticism on privacy grounds, in that it collects "metadata and communication content." However, some scholars have argued that programs like PRISM do more good than harm, in that they protect Americans from foreign threats.[54] After the September 11 attacks, the NSA turned its surveillance apparatus on the US and its citizens.[55] In March 2013, James Clapper, the Director of National Intelligence at the time, testified under oath that the NSA does not "wittingly" collect data on Americans. Clapper later retracted this statement.[56] The US government's own Privacy and Civil Liberties Oversight Board (PCLOB) reviewed the confidential security documents and found in 2014 that the program did not have "a single instance involving a threat to the United States in which the program made a concrete difference" in counterterrorism or the disruption of a terrorist attack.[57] The Chinese government is conducting mass surveillance in Xinjiang province for the detention of Muslims.
As part of its "Strike Hard Campaign against Violent Terrorism" policy, the authorities in China have subjected 13 million Turkic Muslims to the highest order of restrictions.[58] During the COVID-19 pandemic, the Chinese authorities documented the contact information and travel history of every individual and issued red, yellow, and green badges/codes for using transportation and entering stores. These badges/codes were also sometimes misused to freeze bank accounts and pressure protestors who were angry about the severe restrictions. The privacy implications of these health codes remain unacknowledged and unaddressed.[59] It is often claimed, particularly by those in the eye of the media, that their right to privacy is violated when information about their private lives is reported in the press. The point of view of the press, however, is that the general public has a right to know personal information about those with status as public figures. This distinction is encoded in most legal traditions as an element of freedom of speech. The law on publication of private facts turns on the newsworthiness of those facts and the protections that private facts enjoy.[60] If a fact has significant newsworthiness to the public, it is protected by law under the freedom of the press. However, even if the fact is true, if it is not newsworthy, it is not necessarily protected. The Digital Media Law Project uses examples such as sexual orientation, HIV status, and financial status to show that publicizing such facts can be detrimental to the figure being posted about.[60] The problem arises from the definition of newsworthiness. According to the Digital Media Law Project, the courts will usually side with the press in the publication of private facts.[60] This helps to uphold the freedom of the press in the US Constitution.
The Digital Media Law Project notes that "there is a legitimate public interest in nearly all recent events, as well as in the private lives of prominent figures such as movie stars, politicians, and professional athletes,"[60] and supports these statements with citations to specific cases. While most recent events and prominent figures are considered newsworthy, coverage cannot go too far into morbid curiosity.[60] The media gain a great deal of leverage once a person becomes a prominent figure, at which point many things about their life become newsworthy. Cases such as Strutner v. Dispatch Printing Co., 442 N.E.2d 129 (Ohio Ct. App. 1982)[61] show that publishing the home address and full name of a person being questioned by the police is valid and "a newsworthy item of legitimate public concern." The last question to consider is whether this could be considered a form of doxxing. With the courts upholding the newspaper's right to publish, this is much harder to change in the future. Much of what surrounds newsworthiness rests on court rulings and case law; it is not set out in legislation but created through the courts, as many other laws and practices are. These matters are still judged on a case-by-case basis, as they are often settled through a lawsuit of some form.[60] While there is a fair amount of case law supporting the newsworthiness of subjects, it is hardly comprehensive, and news publications can publish things not covered and defend their right to publish those facts in court. Private sector actors can also threaten the right to privacy, particularly technology companies, such as Amazon, Apple, Facebook, Google, and Yahoo, that use and collect personal data.
These private sector threats are more acute due to AI data processing.[62] In some American jurisdictions, the use of a person's name as a keyword under Google's AdWords for advertising or trade purposes without the person's consent[63] has raised personal privacy concerns.[64] The right to privacy and social media content laws have been considered and enacted in several states, such as California's "online erasure" law protecting minors from leaving a digital trail. State laws, such as the CCPA in California, have granted more comprehensive protection.[65] However, the United States lags behind European Union countries in protecting privacy online. For example, the "right to be forgotten" ruling by the EU Court of Justice protects both adults and minors.[66] The General Data Protection Regulation has made significant progress in protecting privacy from these risks, and it has led to a wave of privacy and data protection laws around the world. Privacy is a major issue in the health care sector, with technology becoming an essential component of it. Connecting patients' personal data to the internet makes that data vulnerable to cyber attacks. There are also concerns about how much data should be stored and who should have access to it.[67] Laws and courts in the UK uphold the protection of minors in the journalistic space. The Independent Press Standards Organisation (IPSO) in the UK has shown that footage of a 12-year-old girl being bullied in 2017 could be retroactively taken down due to fears of cyber-bullying and potential future harm to the child.[68] This was after the Mail Online published the video without any attempt to hide the identity of the child.
Following the newsworthiness point, it is possible that content like this would be allowed in the United States due to the recentness of the event.[60] Protection of minors is handled differently in the United States, where news stories about minors sometimes show their faces in a news publication. The Detroit Free Press, as an example, chose to do a hard-hitting story about prostitution and drugs involving a teenager but never named her or showed her face, referring to her only as the "16-year-old from Taylor".[69] In the UK, in the case of Campbell v MGN, Lord Hope stated that the protection of minors will be handled on a case-by-case basis and affected by the child's awareness of the photo and their expectation of privacy.[68] Many factors will be considered, such as the age of the children, the activity, the usage of real names, etc.[68] The protection of minors in the United States often falls on the shoulders of the Children's Online Privacy Protection Act (COPPA).[70] This protects children under the age of 13 from the collection of their data without a parent's or guardian's permission. This law is the reason why many sites will ask if you are under 13 or require you to be 13 to sign up. While this law is intended to protect preteen children, it fails to protect the information of anyone older than 13, including teenage minors. It also begins to overlap with other privacy protection laws, such as the Health Insurance Portability and Accountability Act (HIPAA).
https://en.wikipedia.org/wiki/Right_to_privacy
Security culture is a set of practices used by activists, notably contemporary anarchists, to avoid, or mitigate the effects of, police surveillance and harassment and state control.[1][2] Security culture recognizes the possibility that anarchist spaces and movements are surveilled and/or infiltrated by informants or undercover operatives.[3] Security culture has three components: determining when and how surveillance is occurring, protecting anarchist communities if infiltration occurs, and responding to security breaches.[4] Its origins are uncertain, though some anarchists identify its genesis in the new social movements of the 1960s, which were targeted by the Federal Bureau of Investigation's COINTELPRO projects.[5] The sociologist Christine M. Robinson has identified security culture as a response to the labelling of anarchists as terrorists in the aftermath of the September 11 attacks.[6] The geographer Nathan L. Clough describes security culture as "a technique for cultivating a new affective structure".[3] The political scientist Sean Parson offers the following definition: "'security culture' ... includes such rules as not disclosing full names, one's activist history, or anything else that could be used to identify oneself or others to authorities.
The goal of security culture is to weaken the influence of infiltrators and 'snitches,' which allows groups to more readily engage in illegal acts with less concern for arrest."[7] The media scholar Laura Portwood-Stacer defines security culture as "the norms of privacy and information control developed by anarchists in response to regular infiltration of their groups and surveillance by law enforcement personnel."[8] Security culture does not involve abandoning confrontational political tactics, but rather eschews boasting about such deeds on the basis that doing so facilitates the targeting and conviction of anarchist activists.[3] Advocates of security culture aim to make its practices instinctive, automatic, or unconscious.[3] Participants in anarchist movements see security culture as vital to their ability to function, especially in the context of the War on Terror.[5] Portwood-Stacer observes that security culture impacts research on anarchist subcultures and that, while subcultures are often resistant to observation, "the stakes are often much higher for anarchist activists, because they are a frequent target of state surveillance and repression."[8] Security culture regulates what topics can be discussed, in what context, and among whom.[9] It prohibits speaking to law enforcement, and certain media and locations are identified as security risks; the Internet, telephone and mail, individuals' homes and vehicles, and community meeting places are assumed to contain covert listening devices.[9] Security culture prohibits or discourages discussing involvement in illegal or covert activities.[9] Three exceptions, however, are drawn: discussing plans with others involved, discussing criminal activities for which one has been convicted, and discussing past actions anonymously in zines or with trusted media are permitted.[9] Robinson identifies the black bloc tactic, in which anarchists cover their faces and wear black clothing, as a component of security culture.[10] Other
practices include the use of pseudonyms and "[i]nverting the gaze to inspect others' corporeality".[11] Breaches of security culture may be met by avoiding, isolating, or shunning those responsible.[12] In his discussion of security culture during the protests around the 2008 Republican National Convention (RNC), Clough notes that "fear of surveillance and infiltration" impeded trust among activists and led to energy being directed toward counter-measures.[3] He also suggests that security culture practices may cause newer participants in movements to feel less welcome or less trusted, and therefore less likely to commit to causes,[3] and, in the context of the 2008 RNC, prevented those who did not conform to anarchist norms from assuming prominent positions within the RNC Welcoming Committee.[13] Assessing the role of security culture in the anti-RNC mobilisation, which was infiltrated by four police operatives, Clough finds that it had "a mixed record", succeeding in frustrating shorter-term infiltrators operating at the movement's peripheries, but failing to prevent longer-term infiltrators from gaining others' trust.[14]
https://en.wikipedia.org/wiki/Security_culture
Edward Joseph Snowden (born June 21, 1983) is a former NSA intelligence contractor and whistleblower[2] who leaked classified documents revealing the existence of global surveillance programs. In 2013, while working as a government contractor, Snowden leaked highly classified information from the National Security Agency (NSA). He was indicted for espionage.[3] His disclosures revealed numerous global surveillance programs, many run by the NSA and the Five Eyes intelligence alliance with the cooperation of telecommunication companies and European governments, and prompted a cultural discussion about national security and individual privacy. In 2013, Snowden was hired by an NSA contractor, Booz Allen Hamilton, after previous employment with Dell and the CIA.[4] Snowden says he gradually became disillusioned with the programs with which he was involved and that he tried to raise his ethical concerns through internal channels but was ignored. On May 20, 2013, Snowden flew to Hong Kong after taking medical leave from his job at an NSA facility in Hawaii, and in early June he revealed thousands of classified NSA documents to journalists Glenn Greenwald, Laura Poitras, Barton Gellman, and Ewen MacAskill. Snowden came to international attention after stories based on the material appeared in The Guardian, The Washington Post, and other publications. On June 21, 2013, the United States Department of Justice unsealed charges against Snowden of two counts of violating the Espionage Act of 1917 and theft of government property,[3] following which the Department of State revoked his passport.[5] Two days later, he flew into Moscow's Sheremetyevo International Airport, where Russian authorities observed the canceled passport, and he was restricted to the airport terminal for over one month. Russia later granted Snowden the right of asylum with an initial visa for residence for one year, which was repeatedly extended.
In October 2020, he was granted permanent residency in Russia.[6] In September 2022, Snowden was granted Russian citizenship by President Vladimir Putin as part of a decree that also covered 71 other new Russian citizens.[7][8] His disclosures have fueled debates over mass surveillance, government secrecy, and the balance between national security and information privacy, something that he has said he intended to deal with in prospective interviews.[9] Snowden has defended his actions as an effort "to inform the public as to that which is done in their name and that which is done against them".[10] In early 2016, Snowden became the president of the Freedom of the Press Foundation, a San Francisco–based nonprofit organization that aims to protect journalists from hacking and government surveillance.[11] He also has a job at an unnamed Russian IT company.[12] In 2017, he married Lindsay Mills. On September 17, 2019, his memoir Permanent Record was published.[13] On September 2, 2020, a U.S. federal court ruled in United States v. Moalin that one of the U.S. intelligence mass surveillance programs exposed by Snowden was illegal and possibly unconstitutional.[14] Edward Joseph Snowden was born on June 21, 1983,[15] in Elizabeth City, North Carolina.[16] Snowden's father, Lonnie "Lon", was a warrant officer in the U.S. Coast Guard,[17] and his mother, Elizabeth, was a clerk at the U.S. District Court for the District of Maryland.[18][19][20][21][22] His older sister, Jessica, was a lawyer at the Federal Judicial Center in Washington, D.C. His maternal grandfather, Edward J.
Barrett,[23][24] a rear admiral in the Coast Guard, became a senior official with the FBI and was at the Pentagon in 2001 during the September 11 attacks.[25] Edward Snowden said that he had expected to work for the federal government, as had the rest of his family.[26] His parents divorced in 2001,[27] and his father remarried.[28] In the early 1990s, while still in grade school, Snowden moved with his family to the area of Fort Meade, Maryland.[29] Mononucleosis caused him to miss high school for almost nine months.[26] Rather than returning to school, he claims to have passed the GED test.[10][30] He took classes at Anne Arundel Community College.[20] Although Snowden had no undergraduate college degree,[31] he worked online toward a master's degree in computer security at the University of Liverpool, England, in 2011.[32] He was interested in Japanese popular culture, had studied the Japanese language,[33] and worked for an anime company that had a resident office in the U.S.[34][35] He also said he had a basic understanding of Mandarin Chinese and was deeply interested in martial arts. At age 20, he listed his religion as Buddhism after working at a U.S.
military base in Japan.[36][37][38] In September 2019, as part of interviews relating to the release of his memoir Permanent Record, Snowden revealed to The Guardian that he married Lindsay Mills in a courthouse in Moscow.[13] The couple's first son was born in December 2020,[39] and their second son was born sometime before September 2022.[40] Because he felt "an obligation as a human being to help free people from oppression" during the Iraq War,[10] Snowden enlisted in the United States Army on May 7, 2004, and became a Special Forces candidate through its 18X enlistment option.[41] He did not complete the training[15] due to a leg injury and was given an administrative discharge[42] on September 28, 2004.[43] Snowden was then employed for less than a year in 2005 as a security guard at the University of Maryland's Center for Advanced Study of Language, a research center sponsored by the National Security Agency (NSA).[44] According to the university, this is not a classified facility,[45] though it is heavily guarded.[46] In June 2014, Snowden told Wired that his job as a security guard required a high-level security clearance, for which he passed a polygraph exam and underwent a stringent background investigation.[26] After attending a 2006 job fair focused on intelligence agencies, Snowden accepted an offer for a position at the CIA.[26][47] The Agency assigned him to the global communications division at CIA headquarters in Langley, Virginia.[26] In May 2006, Snowden wrote in Ars Technica that he had no trouble getting work because he was a "computer wizard".[36] Snowden was sent to the CIA's secret school for technology specialists, where he lived in a hotel for six months while studying and training full-time.[26][30] In March 2007, the CIA stationed Snowden with diplomatic cover in Geneva, Switzerland, where he was responsible for maintaining computer-network security.[26][48] Assigned to the U.S. Permanent Mission to the United Nations, a diplomatic mission representing U.S.
interests before the UN and other international organizations, Snowden received a diplomatic passport and a four-bedroom apartment near Lake Geneva.[26] According to Greenwald, while there Snowden said he was "considered the top technical and cybersecurity expert" in that country and "was hand-picked by the CIA to support the president at the 2008 NATO summit in Romania".[49][50] A 2016 report from the US House of Representatives Select Committee on Intelligence said that Snowden's official position at the CIA was an entry-level technical services officer.[30] Snowden described his CIA experience in Geneva as formative, stating that the CIA deliberately got a Swiss banker drunk and encouraged him to drive home. Snowden said that when the banker was arrested for drunk driving, a CIA operative offered to help in exchange for the banker becoming an informant.[51] Ueli Maurer, President of the Swiss Confederation for the year 2013, publicly disputed Snowden's claims in June of that year. "This would mean that the CIA successfully bribed the Geneva police and judiciary. With all due respect, I just can't imagine it," said Maurer.[52] In February 2009, following six counseling sessions from his supervisors regarding poor performance, Snowden resigned from the CIA.[53][50] In 2009, Snowden began work as a contractor for Dell,[54] which manages computer systems for multiple government agencies. Assigned to an NSA facility at Yokota Air Base near Tokyo, Snowden instructed top officials and military officers on how to defend their networks from Chinese hackers.[26] After he was asked in 2009 to brief a conference in Tokyo, Snowden looked into mass surveillance in China, which prompted him to investigate and then expose Washington's mass surveillance program.[55] During his four years with Dell, he rose from supervising NSA computer system upgrades to working as what his résumé termed a "cyber strategist" and an "expert in cyber counterintelligence" at several U.S.
locations.[56] In 2010, he had a brief stint in New Delhi, India, where he enrolled in a local IT institute to learn core Java programming and advanced ethical hacking.[57] In 2011, he returned to Maryland, where he spent a year as system administrator and pre-sales technical engineer on Dell's CIA account. In that capacity, he was consulted by the chiefs of the CIA's technical branches, including the agency's chief information officer and its chief technology officer.[26] U.S. officials and other sources familiar with the investigation said Snowden began downloading documents describing the government's electronic spying programs while working for Dell in April 2012.[54] Investigators estimated that of the 50,000 to 200,000 documents Snowden gave to Greenwald and Poitras, most were copied by Snowden while working at Dell.[4] In March 2012, Dell reassigned Snowden to Hawaii as lead technologist for the NSA's information-sharing office.[26] On March 15, 2013 (three days after what he later called his "breaking point" of "seeing the Director of National Intelligence, James Clapper, directly lie under oath to Congress"[58]), Snowden quit his job at Dell.[59] Although he has said his career-high annual salary was $200,000,[60] Snowden said he took a pay cut to work at consulting firm Booz Allen Hamilton,[60] where he sought employment in order to gather data and then release details of the NSA's worldwide surveillance activity.[61] At the time of his departure from the U.S.
in May 2013, he had been employed for 15 months inside the NSA's Hawaii regional operations center, which focuses on the electronic monitoring of China and North Korea,[4] first for Dell and then for two months with Booz Allen Hamilton.[62] While intelligence officials have described his position there as a system administrator, Snowden has said he was an infrastructure analyst, which meant that his job was to look for new ways to break into Internet and telephone traffic around the world.[63] An anonymous source told Reuters that, while in Hawaii, Snowden may have persuaded 20–25 co-workers to give him their login credentials by telling them he needed them to do his job.[64] The NSA sent a memo to Congress saying that Snowden had tricked a fellow employee into sharing his personal private key to gain greater access to the NSA's computer system.[65][66] Snowden disputed the memo,[67] saying in January 2014, "I never stole any passwords, nor did I trick an army of co-workers."[68][69] Booz Allen terminated Snowden's employment on June 10, 2013, the day after he went public with his story, and three weeks after he had left Hawaii on a leave of absence.[70] A former colleague said Snowden was given full administrator privileges with virtually unlimited access to NSA data. Snowden was offered a position on the NSA's elite team of hackers, Tailored Access Operations, but turned it down to join Booz Allen.[67] An anonymous source later said that Booz Allen's hiring screeners found possible discrepancies in Snowden's résumé but still decided to hire him.[31] Snowden's résumé stated that he attended computer-related classes at Johns Hopkins University.
A spokeswoman for Johns Hopkins said that the university did not find records to show that Snowden attended the university and suggested that he may instead have attended Advanced Career Technologies, a private for-profit organization that operated as the Computer Career Institute at Johns Hopkins University.[31] The University of Maryland University College acknowledged that Snowden had attended a summer session at a UM campus in Asia. Snowden's résumé stated that he estimated he would receive a University of Liverpool computer security master's degree in 2013. The university said that Snowden registered for an online master's degree program in computer security in 2011 but was inactive as a student and had not completed the program.[31] In his May 2014 interview with NBC News, Snowden accused the U.S. government of trying to use one position here or there in his career to distract from the totality of his experience, downplaying him as a "low-level analyst." In his words, he was "trained as a spy in the traditional sense of the word in that I lived and worked undercover overseas—pretending to work in a job that I'm not—and even being assigned a name that was not mine." He said he had worked for the NSA undercover overseas, and for the DIA had developed sources and methods to keep information and people secure "in the most hostile and dangerous environments around the world. So when they say I'm a low-level systems administrator, that I don't know what I'm talking about, I'd say it's somewhat misleading."[25] In a June interview with Globo TV, Snowden reiterated that he "was actually functioning at a very senior level."[71] In a July interview with The Guardian, Snowden explained that, during his NSA career, "I began to move from merely overseeing these systems to actively directing their use.
Many people don't understand that I was actually an analyst and I designated individuals and groups for targeting."[72] Snowden subsequently told Wired that while at Dell in 2011, "I would sit down with the CIO of the CIA, the CTO of the CIA, the chiefs of all the technical branches. They would tell me their hardest technology problems, and it was my job to come up with a way to fix them."[26] During his time as an NSA analyst, directing the work of others, Snowden recalled a moment when he and his colleagues began to have severe ethical doubts. Snowden said 18- to 22-year-old analysts were suddenly "thrust into a position of extraordinary responsibility, where they now have access to all your private records. In the course of their daily work, they stumble across something that is completely unrelated in any sort of necessary sense—for example, an intimate nude photo of someone in a sexually compromising situation. But they're extremely attractive. So what do they do? They turn around in their chair and they show a co-worker ... and sooner or later this person's whole life has been seen by all of these other people." Snowden observed that this behavior happened routinely every two months but was never reported, being considered one of the "fringe benefits" of the work.[73] Snowden has described himself as a whistleblower,[74] a description used by many sources, including CNBC,[75] The New Yorker,[76] Reuters,[77] and The Guardian,[78] among others.[79][80][81] The term has both informal and legal meanings. Snowden said that he had told multiple employees and two supervisors about his concerns, but the NSA disputes his claim.[82] Snowden elaborated in January 2014, saying "[I] made tremendous efforts to report these programs to co-workers, supervisors, and anyone with the proper clearance who would listen.
The reactions of those I told about the scale of the constitutional violations ranged from deeply concerned to appalled, but no one was willing to risk their jobs, families, and possibly even freedom to go to [sic] through what [Thomas Andrews] Drake did."[69][83] In March 2014, during testimony to the European Parliament, Snowden wrote that before revealing classified information he had reported "clearly problematic programs" to ten officials, who he said did nothing in response.[84] In a May 2014 interview, Snowden told NBC News that after bringing his concerns about the legality of the NSA spying programs to officials, he was told to stay silent on the matter. He said that the NSA had copies of emails he sent to their Office of General Counsel, oversight, and compliance personnel broaching "concerns about the NSA's interpretations of its legal authorities. I had raised these complaints not just officially in writing through email, but to my supervisors, to my colleagues, in more than one office."[25] In May 2014, U.S. officials released a single email that Snowden had written in April 2013 inquiring about legal authorities, but said that they had found no other evidence that Snowden had expressed his concerns to someone in an oversight position.[85] In June 2014, the NSA said it had not been able to find any records of Snowden raising internal complaints about the agency's operations.[86] That same month, Snowden explained that he had not produced the communiqués in question because of the ongoing nature of the dispute, disclosing for the first time that "I am working with the NSA in regard to these records and we're going back and forth, so I don't want to reveal everything that will come out."[87] Self-description as a whistleblower and attribution as such in news reports does not determine whether he qualifies as a whistleblower within the meaning of the Whistleblower Protection Act of 1989 (5 USC 2303(b)(8)-(9); Pub. Law 101-12).
However, Snowden's potential status as a whistleblower under the 1989 Act is not directly addressed in the criminal complaint against him in the United States District Court for the Eastern District of Virginia (see below) (Case No. 1:13 CR 265 (0MH)). These and related issues are discussed in an essay by David Pozen, in a chapter of the book Whistleblowing Nation, published in March 2020,[88] an adaptation of which[89] also appeared on Lawfare Blog in March 2019.[90] The unclassified portion of a September 15, 2016, report by the United States House Permanent Select Committee on Intelligence (HPSCI), initiated by the chairman and ranking member in August 2014 and posted on the website of the Federation of American Scientists, concluded that Snowden was not a whistleblower in the sense required by the Whistleblower Protection Act.[91] The bulk of the report is classified. The exact size of Snowden's disclosure is unknown,[92] but Australian officials have estimated that 15,000 or more Australian intelligence files[93] and British officials estimate that at least 58,000 British intelligence files were included.[94] NSA Director Keith Alexander initially estimated that Snowden had copied anywhere from 50,000 to 200,000 NSA documents.[95] Later estimates provided by U.S.
officials were on the order of 1.7 million,[96] a number that originally came from Department of Defense talking points.[97] In July 2014, The Washington Post reported on a cache previously provided by Snowden from domestic NSA operations consisting of "roughly 160,000 intercepted e-mail and instant-message conversations, some of them hundreds of pages long, and 7,900 documents taken from more than 11,000 online accounts."[98] A DIA report declassified in June 2015 said that Snowden took 900,000 Department of Defense files, more than he downloaded from the NSA.[97] In March 2014, Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, told the House Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ... had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures."[99] When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but that Snowden may have downloaded more than a million documents.[100] The September 15, 2016 HPSCI report[91] estimated the number of downloaded documents at 1.5 million. In a 2013 Associated Press interview, Glenn Greenwald stated: "In order to take documents with him that proved that what he was saying was true he had to take ones that included very sensitive, detailed blueprints of how the NSA does what they do."[101] Thus, the Snowden documents allegedly contained sensitive NSA blueprints detailing how the NSA operates, which would allow someone who read them to evade or even duplicate NSA surveillance. Further, a 2015 New York Times article[102] reported that the Islamic State group had studied Snowden's revelations about how the United States gathered information on militants.
As a result, the group's top leaders used couriers or encrypted channels to avoid being tracked or monitored by Western analysts. According to Snowden, he did not indiscriminately turn over documents to journalists, stating that "I carefully evaluated every single document I disclosed to ensure that each was legitimately in the public interest. There are all sorts of documents that would have made a big impact that I didn't turn over"[10] and that "I have to screen everything before releasing it to journalists ... If I have time to go through this information, I would like to make it available to journalists in each country."[61] Despite these measures, the improper redaction of a document by The New York Times resulted in the exposure of intelligence activity against al-Qaeda.[103] In June 2014, the NSA's recently installed director, U.S. Navy Admiral Michael S. Rogers, said that while some terrorist groups had altered their communications to avoid surveillance techniques revealed by Snowden, the damage done was not significant enough to conclude that "the sky is falling."[104] Nevertheless, in February 2015, Rogers said that Snowden's disclosures had a material impact on the NSA's detection and evaluation of terrorist activities worldwide.[105] On June 14, 2015, the London Sunday Times reported that Russian and Chinese intelligence services had decrypted more than 1 million classified files in the Snowden cache, forcing the UK's MI6 intelligence agency to move agents out of live operations in hostile countries.
Sir David Omand, a former director of the UK's GCHQ intelligence-gathering agency, described it as a huge strategic setback that was harming Britain, America, and their NATO allies. The Sunday Times said it was not clear whether Russia and China stole Snowden's data or whether Snowden voluntarily handed it over to remain at liberty in Hong Kong and Moscow.[106][107] In April 2015, the Henry Jackson Society, a British neoconservative think tank, published a report claiming that Snowden's intelligence leaks negatively impacted Britain's ability to fight terrorism and organized crime.[108] Gus Hosein, executive director of Privacy International, criticized the report for, in his opinion, presuming that the public became concerned about privacy only after Snowden's disclosures.[109] Snowden's decision to leak NSA documents developed gradually following his March 2007 posting as a technician to the Geneva CIA station.[110] Snowden later made contact with Glenn Greenwald, a journalist working at The Guardian.[111] He contacted Greenwald anonymously as "Cincinnatus"[112][113] and said he had sensitive documents that he would like to share.[114] Greenwald found the measures that the source asked him to take to secure their communications, such as encrypting email, too annoying to employ.
Snowden then contacted documentary filmmaker Laura Poitras in January 2013.[115] According to Poitras, Snowden chose to contact her after seeing her New York Times article about NSA whistleblower William Binney.[116] What originally attracted Snowden to Greenwald and Poitras was a Salon article written by Greenwald detailing how Poitras's controversial films had made her a target of the government.[114] Greenwald began working with Snowden in either February[117] or April 2013, after Poitras asked Greenwald to meet her in New York City, at which point Snowden began providing documents to them.[111] Barton Gellman, writing for The Washington Post, says his first direct contact was on May 16, 2013.[118] According to Gellman, Snowden approached Greenwald after the Post declined to guarantee publication within 72 hours of all 41 PowerPoint slides that Snowden had leaked exposing the PRISM electronic data mining program, and to publish online an encrypted code allowing Snowden to later prove that he was the source.[118] Snowden communicated using encrypted email,[115] going by the codename "Verax".
He asked not to be quoted at length for fear of identification by stylometry.[118] According to Gellman, before their first meeting in person, Snowden wrote, "I understand that I will be made to suffer for my actions and that the return of this information to the public marks my end."[118] Snowden also told Gellman that until the articles were published, the journalists working with him would also be at mortal risk from the United States Intelligence Community "if they think you are the single point of failure that could stop this disclosure and make them the sole owner of this information."[118] In May 2013, Snowden was permitted temporary leave from his position at the NSA in Hawaii, on the pretext of receiving treatment for his epilepsy.[10] In mid-May, Snowden gave an electronic interview to Poitras and Jacob Appelbaum, which was published weeks later by Der Spiegel.[119] After disclosing the copied documents, Snowden promised that nothing would stop subsequent disclosures. In June 2013, he said, "All I can say right now is the US government is not going to be able to cover this up by jailing or murdering me.
Truth is coming, and it cannot be stopped."[120] On May 20, 2013, Snowden flew to Hong Kong,[121] where he was staying when the initial articles based on the leaked documents were published,[122] beginning with The Guardian on June 5.[123] Greenwald later said Snowden disclosed 9,000 to 10,000 documents.[124] Within months, documents had been obtained and published by media outlets worldwide, most notably The Guardian (Britain), Der Spiegel (Germany), The Washington Post and The New York Times (U.S.), O Globo (Brazil), Le Monde (France), and similar outlets in Sweden, Canada, Italy, the Netherlands, Norway, Spain, and Australia.[125] In 2014, NBC broke its first story based on the leaked documents.[126] In February 2014, for reporting based on Snowden's leaks, journalists Glenn Greenwald, Laura Poitras, Barton Gellman and The Guardian's Ewen MacAskill were honored as co-recipients of the 2013 George Polk Award, which they dedicated to Snowden.[127] The NSA reporting by these journalists also earned The Guardian and The Washington Post the 2014 Pulitzer Prize for Public Service[128] for exposing the "widespread surveillance" and for helping to spark a "huge public debate about the extent of the government's spying". The Guardian's chief editor, Alan Rusbridger, credited Snowden with having performed a public service.[129] The ongoing publication of leaked documents has revealed previously unknown details of a global surveillance apparatus run by the NSA[132] in close cooperation with three of its four Five Eyes partners: Australia's ASD,[133] the UK's GCHQ,[134] and Canada's CSEC.[135] On June 5, 2013, media reports documenting the existence and functions of classified surveillance programs and their scope began and continued throughout the entire year.
The first program to be revealed was PRISM, which allows for direct access to data on the servers of Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, and Apple.[136][137] Barton Gellman of The Washington Post was the first journalist to report on Snowden's documents. He said the U.S. government urged him not to specify by name which companies were involved, but Gellman decided that to name them "would make it real to Americans."[138] Reports also revealed details of Tempora, a secret British surveillance program run by the NSA's British partner, GCHQ.[139] The initial reports included details about the NSA call database, Boundless Informant, and a secret court order requiring Verizon to hand the NSA millions of Americans' phone records daily,[140] as well as the surveillance of phone and Internet records of French citizens, with specific targets of French people either "suspected of association with terrorist activities" or in "the worlds of business, politics or French state administration."[141][142][143] XKeyscore, an analytical tool that allows for collection of "almost anything done on the internet," was described by The Guardian as a program that shed light on one of Snowden's most controversial statements: "I, sitting at my desk [could] wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email."[144] The NSA's top-secret black budget, obtained from Snowden by The Washington Post, exposed the successes and failures of the 16 spy agencies comprising the U.S. intelligence community,[145] and revealed that the NSA was paying U.S.
private tech companies for clandestine access to their communications networks.[146] The agencies were allotted $52 billion for the 2013 fiscal year.[147] It was revealed that the NSA was harvesting millions of email and instant messaging contact lists,[148] searching email content,[149] tracking and mapping the location of cell phones,[150] undermining attempts at encryption via Bullrun,[151][152] and that the agency was using cookies to piggyback on the same tools used by Internet advertisers "to pinpoint targets for government hacking and to bolster surveillance."[153] The NSA was shown to be secretly accessing Yahoo and Google data centers to collect information from hundreds of millions of account holders worldwide by tapping undersea cables using the MUSCULAR surveillance program.[130][131] The NSA, the CIA and GCHQ spied on users of Second Life, Xbox Live and World of Warcraft, and attempted to recruit would-be informants from the sites, according to documents revealed in December 2013.[154][155] Leaked documents showed NSA agents also spied on their own "love interests", a practice NSA employees termed LOVEINT.[156][157] The NSA was shown to be tracking the online sexual activity of people they termed "radicalizers" in order to discredit them.[158] Following the revelation of Blackpearl, a program targeting private networks, the NSA was accused of extending beyond its primary mission of national security.
The agency's intelligence-gathering operations had targeted, among others, oil giant Petrobras, Brazil's largest company.[159] The NSA and GCHQ were also shown to be surveilling charities including UNICEF and Médecins du Monde, as well as allies such as European Commissioner Joaquín Almunia and Israeli Prime Minister Benjamin Netanyahu.[160] In October 2013, Glenn Greenwald said "the most shocking and significant stories are the ones we are still working on, and have yet to publish."[161] In November, The Guardian's editor-in-chief Alan Rusbridger said that only one percent of the documents had been published.[162] In December, Australia's Minister for Defence David Johnston said his government assumed the worst was yet to come.[163] By October 2013, Snowden's disclosures had created tensions[164][165] between the U.S. and some of its close allies after they revealed that the U.S. had spied on Brazil, France, Mexico,[166] Britain,[167] China,[168] Germany,[169] and Spain,[170] as well as 35 world leaders,[171] most notably German Chancellor Angela Merkel, who said "spying among friends" was unacceptable[172][173] and compared the NSA with the Stasi.[174] Leaked documents published by Der Spiegel in 2014 appeared to show that the NSA had targeted 122 high-ranking leaders.[175] An NSA mission statement titled "SIGINT Strategy 2012-2016" affirmed that the NSA had plans for the continued expansion of surveillance activities.
Their stated goal was to "dramatically increase mastery of the global network" and to acquire adversaries' data from "anyone, anytime, anywhere."[176] Leaked slides revealed in Greenwald's book No Place to Hide, released in May 2014, showed that the NSA's stated objective was to "Collect it All," "Process it All," "Exploit it All," "Partner it All," "Sniff it All" and "Know it All."[177] Snowden said in a January 2014 interview with German television that the NSA does not limit its data collection to national security issues, accusing the agency of conducting industrial espionage. Using the example of the German company Siemens, he said, "If there's information at Siemens that's beneficial to US national interests—even if it doesn't have anything to do with national security—then they'll take that information nevertheless."[178] In the wake of Snowden's revelations, and in response to an inquiry from the Left Party, Germany's domestic security agency, the Bundesamt für Verfassungsschutz (BfV), investigated and found no concrete evidence that the U.S. conducted economic or industrial espionage in Germany.[179] In February 2014, during testimony to the European Union, Snowden said of the remaining undisclosed programs, "I will leave the public interest determinations as to which of these may be safely disclosed to responsible journalists in coordination with government stakeholders."[180] In March 2014, documents disclosed by Glenn Greenwald, writing for The Intercept, showed that the NSA, in cooperation with GCHQ, had plans to infect millions of computers with malware using a program called TURBINE.[181] Revelations included information about QUANTUMHAND, a program through which the NSA set up a fake Facebook server to intercept connections.[181] According to a July 2014 report in The Washington Post, relying on information furnished by Snowden, 90% of those placed under surveillance in the U.S. were ordinary Americans and were not the intended targets.
The newspaper said it had examined documents including emails, message texts, and online accounts that support the claim.[182] In an August 2014 interview, Snowden for the first time disclosed a cyberwarfare program in the works, codenamed MonsterMind, that would automate the detection of a foreign cyberattack as it began and automatically fire back. "These attacks can be spoofed," said Snowden. "You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia. And then we end up shooting back at a Russian hospital. What happens next?"[26] Snowden first contemplated leaking confidential documents around 2008 but held back, partly because he believed the newly elected Barack Obama might introduce reforms.[4] After the disclosures, his identity was made public by The Guardian at his request on June 9, 2013.[117] "I do not want to live in a world where everything I do and say is recorded," he said. "My sole motive is to inform the public as to that which is done in their name and that which is done against them."[121] Snowden said he wanted to "embolden others to step forward" by demonstrating that "they can win."[118] He also said that the system for reporting problems did not work: "You have to report wrongdoing to those most responsible for it." He cited a lack of whistleblower protection for government contractors, the use of the Espionage Act of 1917 to prosecute leakers, and the belief that had he used internal mechanisms to "sound the alarm," his revelations "would have been buried forever."[110][183] In December 2013, upon learning that a U.S. federal judge had ruled the collection of U.S. phone metadata conducted by the NSA to be likely unconstitutional, Snowden said, "I acted on my belief that the NSA's mass surveillance programs would not withstand a constitutional challenge, and that the American public deserved a chance to see these issues determined by open courts ...
today, a secret program authorized by a secret court was, when exposed to the light of day, found to violate Americans' rights."[184] In January 2014, Snowden said his "breaking point" was "seeing the Director of National Intelligence, James Clapper, directly lie under oath to Congress."[58] This referred to testimony on March 12, 2013—three months after Snowden first sought to share thousands of NSA documents with Greenwald,[111] and nine months after the NSA says Snowden made his first illegal downloads during the summer of 2012[4]—in which Clapper denied to the U.S. Senate Select Committee on Intelligence that the NSA wittingly collects data on millions of Americans.[185] Snowden said, "There's no saving an intelligence community that believes it can lie to the public and the legislators who need to be able to trust it and regulate its actions. Seeing that really meant for me there was no going back. Beyond that, it was the creeping realization that no one else was going to do this. The public had a right to know about these programs."[186] In March 2014, Snowden said he had reported policy or legal issues related to spying programs to more than ten officials, but as a contractor had no legal avenue to pursue further whistleblowing.[84] Snowden's disclosure, association with foreign intelligence services, subsequent public relations campaign, and flight to avoid prosecution resemble the cases of Philip Agee, a former CIA case officer and alleged spy for Russian and Cuban intelligence during the 1970s, and Kim Philby, a former British intelligence officer: like Snowden, both fled to avoid prosecution after unauthorized disclosure of classified information and accusations of espionage. Agee died in Cuba. Philby, like Snowden, fled to Russia, where he died in exile. Beyond this, the similarities are more tenuous, as unlike Snowden, Agee and Philby were double agents for the KGB.
Snowden made a number of claims about the Government Communications Security Bureau (GCSB) of New Zealand. He accused the agency of conducting surveillance on New Zealand citizens and engaging in espionage between 2008 and 2016, when John Key served as the Prime Minister of New Zealand.[187][188] In May 2013, Snowden quit his job, telling his supervisors he required epilepsy treatment, but instead fled the United States for Hong Kong on May 10. He chose Hong Kong for its "spirited commitment to free speech and the right of political dissent" at the time.[10][189] Snowden remained in his room at the Mira Hotel after his arrival in the city, rarely going out.[190] Snowden vowed to challenge any extradition attempt by the U.S. government, and engaged Hong Kong-based Canadian human rights lawyer Robert Tibbo as a legal adviser.[4][191][192] Snowden told the South China Morning Post that he planned to remain in Hong Kong for as long as its government would permit.[193][194] Snowden also told the Post that "the United States government has committed a tremendous number of crimes against Hong Kong [and] the PRC as well,"[195] going on to identify Chinese Internet Protocol addresses that the NSA monitored and stating that the NSA collected text-message data for Hong Kong residents.
Glenn Greenwald said Snowden was motivated by a need to "ingratiate himself to the people of Hong Kong and China."[196] After leaving the Mira Hotel, Snowden was housed for two weeks in several apartments by other refugees seeking asylum in Hong Kong, an arrangement set up by Tibbo to hide Snowden from the US authorities.[197][198] The Russian newspaper Kommersant subsequently reported that Snowden was living at the Russian consulate shortly before his departure from Hong Kong to Moscow.[199] Anatoly Kucherena, who became Snowden's lawyer in July 2013 when Snowden asked him for help in seeking temporary asylum in Russia,[200] said in August 2013 that Snowden did not communicate with Russian diplomats while he was in Hong Kong.[201][202] In early September 2013, however, Russian president Vladimir Putin said that, a few days before boarding a plane to Moscow, Snowden met in Hong Kong with Russian diplomatic representatives.[203][204] Ben Wizner, a lawyer with the American Civil Liberties Union (ACLU) and legal adviser to Snowden, later disputed this, saying in January 2014, "Every news organization in the world has been trying to confirm that story. They haven't been able to, because it's false."[205] On June 22, 2013, 18 days after the publication of Snowden's NSA documents began, officials revoked his U.S. passport.[206] On June 23, Snowden boarded a commercial Aeroflot flight, SU213, to Moscow, accompanied by Sarah Harrison of WikiLeaks, with an intended final destination of Ecuador, arranged via an Ecuadorian emergency travel document that Snowden had acquired. However, Snowden was stranded in Russia upon landing in Moscow because his U.S. passport had been revoked.[207][208][209] Hong Kong authorities said that Snowden had not been detained for the U.S. because the request had not fully complied with Hong Kong law[210][211] and there was no legal basis to prevent Snowden from leaving.[212][213][Notes 1] On June 24, a U.S.
State Department spokesman rejected the explanation of technical noncompliance, accusing the Hong Kong government of deliberately releasing a fugitive despite a valid arrest warrant and after having sufficient time to prohibit his travel.[216] That same day, Julian Assange said that WikiLeaks had paid for Snowden's lodging in Hong Kong and his flight out.[217] Assange also asked Fidel Narváez, consul at the Ecuadorian embassy in London, to sign an emergency travel document for Snowden. Snowden said that having the document gave him "the confidence, the courage to get on that plane to begin the journey".[209] In October 2013, Snowden said that before flying to Moscow, he gave all the classified documents he had obtained to journalists he met in Hong Kong and kept no copies for himself.[110] In January 2014, he told a German TV interviewer that he gave all of his information to American journalists reporting on American issues.[58] During his first American TV interview, in May 2014, Snowden said he had protected himself from Russian leverage by destroying the material he had been holding before landing in Moscow.[25] In January 2019, Vanessa Rodel, one of the refugees who had housed Snowden in Hong Kong, and her 7-year-old daughter were granted asylum by Canada.[218] In 2021, Supun Thilina Kellapatha, Nadeeka Dilrukshi Nonis and their children found refuge in Canada, leaving only one of Snowden's Hong Kong helpers waiting for asylum.[219] On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport.[220] WikiLeaks said he was on a circuitous but safe route to asylum in Ecuador.[221] Snowden had a seat reserved to continue to Cuba[222] but did not board that onward flight, saying in a January 2014 interview that he intended to transit through Russia but was stopped en route. He said "a planeload of reporters documented the seat I was supposed to be in" when he was ticketed for Havana, but the U.S. canceled his passport.[205] He said the U.S.
wanted him to stay in Moscow so "they could say, 'He's a Russian spy.'"[71] Greenwald's account differed on the point of Snowden being already ticketed. According to Greenwald, Snowden's passport was valid when he departed Hong Kong but was revoked during the hours he was in transit to Moscow, preventing him from obtaining a ticket to leave Russia. Greenwald said Snowden was thus forced to stay in Moscow and seek asylum.[223]

According to one Russian report, Snowden planned to fly from Moscow through Havana to Latin America; however, Cuba told Moscow it would not allow the Aeroflot plane carrying Snowden to land.[201] The Russian newspaper Kommersant reported that Cuba had a change of heart after receiving pressure from U.S. officials,[224] leaving him stuck in the transit zone because at the last minute Havana told officials in Moscow not to allow him on the flight.[225] The Washington Post contrasted this version with what it called "widespread speculation" that Russia never intended to let Snowden proceed.[226] Fidel Castro called claims that Cuba would have blocked Snowden's entry a "lie" and a "libel".[222] Describing Snowden's arrival in Moscow as a surprise and likening it to "an unwanted Christmas gift",[227] Russian president Putin said that Snowden remained in the transit area of Sheremetyevo Airport, had committed no crime in Russia, was free to leave and should do so.[228][227]

Following Snowden's arrival in Moscow, the White House expressed disappointment at Hong Kong's decision to allow him to leave.[229][230][216] An anonymous U.S. official not authorized to discuss the matter told the Associated Press that Snowden's passport had been revoked before he left Hong Kong, but that a senior official in a country or airline could order subordinates to overlook the withdrawn passport.[231] U.S.
Secretary of State John Kerry said that Snowden's passport was canceled "within two hours" of the charges against Snowden being made public,[5] which was Friday, June 21.[3] In a July 1 statement, Snowden said, "Although I am convicted of nothing, [the U.S. government] has unilaterally revoked my passport, leaving me a stateless person. Without any judicial order, the administration now seeks to stop me exercising a basic right. A right that belongs to everybody. The right to seek asylum."[232]

Four countries offered Snowden permanent asylum: Ecuador, Nicaragua, Bolivia, and Venezuela.[233] No direct flights between Moscow and Venezuela, Bolivia, or Nicaragua existed, however, and the U.S. pressured countries along his route to hand him over. Snowden said in July 2013 that he decided to bid for asylum in Russia because he felt there was no safe way to reach Latin America.[234] Snowden said he remained in Russia because "when we were talking about possibilities for asylum in Latin America, the United States forced down the Bolivian president's plane", citing the Morales plane incident. According to Snowden, "the CIA has a very powerful presence [in Latin America] and the governments and the security services there are relatively much less capable than, say, Russia.... they could have basically snatched me...."[235] On the issue, he said "some governments in Western European and North American states have demonstrated a willingness to act outside the law, and this behavior persists today. This unlawful threat makes it impossible for me to travel to Latin America and enjoy the asylum granted there in accordance with our shared rights."[236] Snowden said that he would travel from Russia if there was no interference from the U.S. government.[205]

Four months after Snowden received asylum in Russia, Julian Assange commented: "While Venezuela and Ecuador could protect him in the short term, over the long term there could be a change in government.
In Russia, he's safe, he's well-regarded, and that is not likely to change. That was my advice to Snowden, that he would be physically safest in Russia."[237]

In an October 2014 interview with The Nation magazine, Snowden reiterated that he had originally intended to travel to Latin America: "A lot of people are still unaware that I never intended to end up in Russia." According to Snowden, the U.S. government "waited until I departed Hong Kong to cancel my passport in order to trap me in Russia." Snowden added, "If they really wanted to capture me, they would've allowed me to travel to Latin America because the CIA can operate with impunity down there. They did not want that; they chose to keep me in Russia."[238]

On July 1, 2013, president Evo Morales of Bolivia, who had been attending a conference in Russia, suggested during an interview with RT (formerly Russia Today) that he would consider a request by Snowden for asylum.[239] The following day, Morales's plane, en route to La Paz, was rerouted to Austria and landed there, after France, Spain, and Italy denied access to their airspace. While the plane was parked in Vienna, the Spanish ambassador to Austria arrived with two embassy personnel and asked to search the plane, but they were denied permission by Morales himself.[240] U.S. officials had raised suspicions that Snowden may have been on board.[241] Morales blamed the U.S. for putting pressure on European countries and said that the grounding of his plane was a violation of international law.[242]

In April 2015, Bolivia's ambassador to Russia, María Luisa Ramos Urzagaste, accused Julian Assange of inadvertently putting Morales's life at risk by intentionally providing to the U.S. false rumors that Snowden was on Morales's plane. Assange responded that "we weren't expecting this outcome. The result was caused by the United States' intervention.
We can only regret what happened."[243][244]

Snowden applied for political asylum to 21 countries.[245][246] A statement attributed to him contended that the U.S. administration, and specifically then–Vice President Joe Biden, had pressured the governments to refuse his asylum petitions. Biden had telephoned Ecuadorian President Rafael Correa days prior to Snowden's remarks, asking the Ecuadorian leader not to grant Snowden asylum.[247] Ecuador had initially offered Snowden a temporary travel document but later withdrew it,[248] and Correa later called the offer a mistake.[249]

On July 1, 2013, Snowden accused the U.S. government of "using citizenship as a weapon" and using what he described as "old, bad tools of political aggression." Citing Obama's promise to not allow "wheeling and dealing" over the case, Snowden commented, "This kind of deception from a world leader is not justice, and neither is the extralegal penalty of exile."[250] Several days later, WikiLeaks announced that Snowden had applied for asylum in six additional countries, but declined to name them, alleging attempted U.S. interference.[251]

After evaluating the law and Snowden's situation, the French interior ministry rejected his request for asylum.[252] Poland refused to process his application because it did not conform to legal procedure.[253] Brazil's Foreign Ministry said the government planned no response to Snowden's asylum request. Germany and India rejected Snowden's application outright, while Austria, Ecuador, Finland, Norway, Italy, the Netherlands, and Spain said he must be on their territory to apply.[254][255][256] In November 2014, Germany announced that Snowden had not renewed his previously denied request and was not being considered for asylum.[257] Glenn Greenwald later reported that Sigmar Gabriel, Vice-Chancellor of Germany, told him the U.S.
government had threatened to stop sharing intelligence if Germany offered Snowden asylum or arranged for his travel there.[258]

Putin said on July 1, 2013, that if Snowden wanted to be granted asylum in Russia, he would be required to "stop his work aimed at harming our American partners."[259] A spokesman for Putin subsequently said that Snowden had withdrawn his asylum application upon learning of the conditions.[260]

In a July 12 meeting at Sheremetyevo Airport with representatives of human rights organizations and lawyers, organized in part by the Russian government,[261] Snowden said he was accepting all offers of asylum that he had already received or would receive. He added that Venezuela's grant of asylum formalized his legal status as an asylum-seeker, removing any basis for state interference with his right to asylum.[262] He also said he would request asylum in Russia until he resolved his travel problems.[263] Slovenian correspondent Polonca Frelih, the only journalist present at the July 12 meeting with Snowden, who photographed him,[264] reported that he "looked like someone without daylight for long time but strong enough psychologically" while expressing worries about his medical condition.[265] Russian Federal Migration Service officials confirmed on July 16 that Snowden had submitted an application for temporary asylum.[266] On July 24, Kucherena said his client wanted to find work in Russia, travel and create a life for himself, and had already begun learning Russian.[267]

Amid media reports in early July 2013, attributed to U.S. administration sources, that Obama's one-on-one meeting with Putin, ahead of a G20 meeting in St Petersburg scheduled for September, was in doubt due to Snowden's protracted sojourn in Russia,[268] top U.S.
officials repeatedly made it clear to Moscow that Snowden should immediately be returned to the United States to face charges for the unauthorized leaking of classified information.[269][270][271] His Russian lawyer said Snowden needed asylum because he faced persecution by the U.S. government and feared "that he could be subjected to torture and capital punishment."[272]

On April 16, 2020, CNN reported that Edward Snowden had requested a three-year extension of his Russian residency permit.[273] In October 2020, Snowden was granted permanent residency in Russia. His lawyer said that granting an unlimited residence permit became possible after changes in the migration legislation of the Russian Federation in 2019.[274][275][276] On September 26, 2022, Putin granted Snowden Russian citizenship, making it impossible to extradite him to any country.[277]

In a letter to Russian Minister of Justice Aleksandr Konovalov dated July 23, 2013, U.S. Attorney General Eric Holder repudiated Snowden's claim to refugee status and offered a limited validity passport good for direct return to the U.S.[278] He stated that Snowden would not be subject to torture or the death penalty, and would receive a trial in a civilian court with proper legal counsel.[279] The same day, the Russian president's spokesman reiterated that his government would not hand over Snowden, commenting that Putin was not personally involved in the matter and that it was being handled through talks between the FBI and Russia's FSB.[280]

On June 14, 2013, United States federal prosecutors filed a criminal complaint[281] against Snowden, charging him with three felonies: theft of government property and two counts of violating the Espionage Act of 1917 (18 U.S.C. § 792 et seq.; Pub. L.
65-24) through unauthorized communication of national defense information and willful communication of classified communications intelligence information to an unauthorized person.[3][278] Specifically, the charges filed in the criminal complaint were: Each of the three charges carries a maximum possible prison term of ten years. The criminal complaint was initially secret but was unsealed a week later.

Stephen P. Mulligan and Jennifer K. Elsea, legislative attorneys for the Congressional Research Service, provide a 2017 analysis[282] of the uses of the Espionage Act to prosecute unauthorized disclosures of classified information, based on what was disclosed, to whom, and how; the burden-of-proof requirements, e.g. degrees of mens rea (guilty mind), and the relationship of such considerations to the First Amendment framework of protections of free speech are also analyzed. The analysis includes the charges against Snowden, among several other cases. The discussion also covers gaps in the legal framework used to prosecute such cases.

Snowden was asked in a January 2014 interview about returning to the U.S. to face the charges in court, as Obama had suggested a few days prior. Snowden explained why he rejected the request:

What he doesn't say are that the crimes that he's charged me with are crimes that don't allow me to make my case. They don't allow me to defend myself in an open court to the public and convince a jury that what I did was to their benefit. ... So it's, I would say, illustrative that the president would choose to say someone should face the music when he knows the music is a show trial.[58][283]

Snowden's legal representative, Jesselyn Radack, wrote that "the Espionage Act effectively hinders a person from defending himself before a jury in an open court."
She said that the "arcane World War I law" was never meant to prosecute whistleblowers, but rather spies who betrayed their trust by selling secrets to enemies for profit.[284]

On September 17, 2019, the United States filed a lawsuit, Civil Action No. 1:19-cv-1197-LO-TCB, against Snowden for alleged violations of non-disclosure agreements with the CIA and NSA.[285] The two-count civil complaint alleged that Snowden had violated prepublication obligations related to the publication of his memoir Permanent Record. The complaint listed the publishers Macmillan Publishing Group, LLC d.b.a. Henry Holt and Company and Holtzbrinck, as relief-defendants.[286] The Hon. Liam O'Grady, a judge in the Alexandria Division of the United States District Court for the Eastern District of Virginia, found for the United States (plaintiff) by summary judgment on both counts of the action.[287] The judgment also found that Snowden had been paid speaker honorariums totaling $1.03 million for a series of 56 speeches delivered by video link. The effect of the ruling was that the US government can collect the proceeds from his book and speeches, meaning that Snowden has to relinquish more than $5.2 million earned to a "constructive trust" created to transfer the money to the government.[12]

On June 23, 2013, Snowden landed at Moscow's Sheremetyevo Airport aboard a commercial Aeroflot flight from Hong Kong.[220][207][288] After 39 days in the transit section, he left the airport on August 1 and was granted temporary asylum in Russia for one year by the Federal Migration Service.[289] Snowden had the choice of applying for renewal of his temporary refugee status for 12 months or requesting a permit for temporary stay for three years.[290] A year later, his temporary refugee status having expired, Snowden received a three-year temporary residency permit allowing him to travel freely within Russia and to go abroad for up to three months.
He was not granted permanent political asylum.[291] In 2017, his temporary residency permit was extended for another three years.[6][292]

In December 2013, Snowden told journalist Barton Gellman that supporters in Silicon Valley had donated enough bitcoin for him to live on.[293] (A single bitcoin was then worth about $1,000.[12]) In 2017, Snowden secretly married Lindsay Mills.[294] By 2019, he no longer felt the need to be disguised in public and lived what was described by The Guardian as a "more or less normal life." He was able to travel around Russia and make a living from speaking arrangements, locally and over the internet.[294]

Snowden's memoir Permanent Record was released internationally on September 17, 2019, and while U.S. royalties were expected to be seized, he was able to receive an advance[294] of $4 million.[12] The memoir reached the top position on Amazon's bestseller list that day.[295] Snowden said his work for the NSA and CIA showed him that the United States Intelligence Community (IC) had "hacked the Constitution", and that he had concluded there was no option for him but to expose his revelations via the press. In the memoir he wrote, "I realized that I was crazy to have imagined that the Supreme Court, or Congress, or President Obama, seeking to distance his administration from President George W. Bush's, would ever hold the IC legally responsible – for anything".[296] Of Russia he said, "One of the things that is lost in all the problematic politics of the Russian government is the fact this is one of the most beautiful countries in the world" with "friendly" and "warm" people.[294]

Snowden has also used the pseudonym John Dobbertin (after cryptographer Hans Dobbertin).
In 2016, from Russia, Snowden participated in the creation ceremony of the zcash cryptocurrency as John Dobbertin, by briefly holding a part of the private cryptographic key for the zcash genesis block before destroying it.[297]

On November 1, 2019, new amendments took effect introducing a permanent residence permit for the first time and removing the requirement to renew the pre-2019 so-called "permanent" residence permit every five years.[298][299] The new permanent residence permit must be replaced three times in a lifetime, like an ordinary internal passport for Russian citizens.[300] In accordance with that law, Snowden was in October 2020 granted permanent residence in Russia instead of another extension.[6][301]

In April 2020, an amendment to Russian nationality law allowing foreigners to obtain Russian citizenship without renouncing a foreign citizenship came into force.[302] In November 2020, Snowden announced that he and his wife, Lindsay, who were expecting their son in late December, were applying for dual U.S.-Russian citizenship in order not to be separated from him "in this era of pandemics and closed borders."[303] On September 26, 2022, President Vladimir Putin granted Snowden Russian citizenship.[40][304] The couple by then had two young sons born in Russia.[40] On December 1, Snowden swore an oath of allegiance to Russia and received a Russian passport, according to his lawyer.[305][306]

Prior to the Russian invasion of Ukraine, he described the warnings from the United States about an imminent invasion as a "disinformation campaign".[307]

Snowden has said that, in the 2008 United States presidential election, he voted for a third-party candidate; he also said that he "believed in Obama's promises". Following the election, he said that Barack Obama was continuing policies instituted by George W.
Bush.[308]

In accounts published in June 2013, interviewers noted that Snowden's laptop displayed stickers supporting Internet freedom organizations including the Electronic Frontier Foundation (EFF) and the Tor Project.[10] A week after publication of his leaks began, Ars Technica confirmed that Snowden had been an active participant at the site's online forum and IRC channels from 2001 through May 2012, discussing a variety of topics under the pseudonym "TheTrueHOOHA". In an IRC discussion in 2009, Snowden said: "I went to London just last year it's where all of your Muslims live I didn't want to get out of the car. I thought I had gotten off of the plane in the wrong country... It was terrifying." In a January 2009 entry, TheTrueHOOHA exhibited strong support for the U.S. security state apparatus and said leakers of classified information "should be shot in the balls".[309] Snowden disliked Obama's appointment of Leon Panetta as CIA director, saying: "Obama just named a fucking politician to run the CIA."[310] Snowden was also offended by a possible ban on assault weapons, writing: "Me and all my lunatic, gun-toting NRA compatriots would be on the steps of Congress before the C-Span feed finished."[310] Snowden disliked Obama's economic policies, was against Social Security, and favored Ron Paul's call for a return to the gold standard.[310]

In 2019, Snowden said that U.S. Senator and then-presidential candidate Bernie Sanders is "the most fundamentally decent man in politics".[311] In 2024, following the United States Supreme Court's decision on Donald Trump's presidential immunity, Snowden blamed the DNC's interference in the 2016 primary for Donald Trump's election, stating that "[t]he DNC is the only reason you're not currently enjoying a second-term [Bernie] Sanders and a sane Supreme Court."[312]

In response to outrage by European leaders, President Barack Obama said in early July 2013 that all nations collect intelligence, including those expressing outrage.
His remarks came in response to an article in the German magazine Der Spiegel.[313]

In 2014, Obama stated, "our nation's defense depends in part on the fidelity of those entrusted with our nation's secrets. If any individual who objects to government policy can take it into their own hands to publicly disclose classified information, then we will not be able to keep our people safe, or conduct foreign policy." He objected to the "sensational" way the leaks were reported, saying the reporting often "shed more heat than light." He said that the disclosures had revealed "methods to our adversaries that could impact our operations."[314]

During a November 2016 interview with the German broadcaster ARD and the German paper Der Spiegel, then-outgoing President Obama said he "can't" pardon Edward Snowden unless he is physically submitted to US authorities on US soil.[315]

In 2013, Donald Trump made a series of tweets in which he referred to Snowden as a "traitor", saying he gave "serious information to China and Russia" and "should be executed". Later that year he added a caveat, tweeting "if it and he could reveal Obama's [birth] records, I might become a major fan".[316][317]

In August 2020, Trump said during a press conference that he would "take a look" at pardoning Snowden, and added that he was "not that aware of the Snowden situation".[318][319] He stated, "There are many, many people – it seems to be a split decision that many people think that he should be somehow treated differently, and other people think he did very bad things, and I'm going to take a very good look at it."[296] Forbes described Trump's willingness to consider a pardon as "leagues away" from his 2013 views.
Snowden responded to the announcement saying, "the last time we heard a White House considering a pardon was 2016, when the very same Attorney General who once charged me conceded that, on balance, my work in exposing the NSA's unconstitutional system of mass surveillance had been 'a public service'."[320] Top members of the House Armed Services Committee immediately voiced strong opposition to a pardon, saying Snowden's actions resulted in "tremendous harm" to national security, and that he needed to stand trial. Liz Cheney called the idea of a pardon "unconscionable". A week prior to the announcement, Trump also said he had been thinking of letting Snowden return to the U.S. without facing any time in jail.[319]

Days later, Attorney General William Barr told the Associated Press he was "vehemently opposed" to the idea of a pardon, saying "[Snowden] was a traitor and the information he provided our adversaries greatly hurt the safety of the American people, he was peddling it around like a commercial merchant. We can't tolerate that."[296]

Pentagon Papers leaker Daniel Ellsberg called Snowden's release of NSA material the most significant leak in U.S. history.[321][322] Shortly before the September 2016 release of his biographical thriller film Snowden, a semi-fictionalized drama based on the life of Edward Snowden with a short appearance by Snowden himself, Oliver Stone said that Snowden should be pardoned, calling him a "patriot above all" and suggesting that he should run the NSA himself.[323]

In a December 18, 2013, CNN editorial, former NSA whistleblower J. Kirk Wiebe, known for his involvement in the NSA's Trailblazer Project, noted that a federal judge for the District of Columbia, the Hon. Richard J. Leon, had ruled in a contemporaneous case before him that the NSA warrantless surveillance program was likely unconstitutional; Wiebe then proposed that Snowden should be granted amnesty and allowed to return to the United States.[324]

Numerous high-ranking current or former U.S.
government officials reacted publicly to Snowden's disclosures. In the U.S., Snowden's actions precipitated an intense debate on privacy and warrantless domestic surveillance.[338][339] President Obama dismissed the idea of using force to acquire Snowden, saying "I'm not going to be scrambling jets to get a 29-year-old hacker."[340][341][342] In August 2013, Obama rejected the suggestion that Snowden was a patriot,[343] and in November said that "the benefit of the debate he generated was not worth the damage done, because there was another way of doing it."[344]

In June 2013, U.S. Senator Bernie Sanders of Vermont shared a "must-read" news story on his blog by Ron Fournier, stating "Love him or hate him, we all owe Snowden our thanks for forcing upon the nation an important debate. But the debate shouldn't be about him. It should be about the gnawing questions his actions raised from the shadows."[345] In 2015, Sanders stated that "Snowden played a very important role in educating the American public" and that although Snowden should not go unpunished for breaking the law, "that education should be taken into consideration before the sentencing."[346]

Snowden said in December 2013 that he was "inspired by the global debate" ignited by the leaks and that NSA's "culture of indiscriminate global espionage ... is collapsing."[347] At the end of 2013, The Washington Post said that the public debate and its offshoots had produced no meaningful change in policy, with the status quo continuing.[156]

In 2016, on The Axe Files podcast, former U.S. Attorney General Eric Holder said that Snowden "performed a public service by raising the debate that we engaged in and by the changes that we made." Holder nevertheless said that Snowden's actions were inappropriate and illegal.[348]

In September 2016, the bipartisan U.S.
House Permanent Select Committee on Intelligence completed a review of the Snowden disclosures and said that the federal government would have to spend millions of dollars responding to the fallout from Snowden's disclosures.[349] The report also said that "the public narrative popularized by Snowden and his allies is rife with falsehoods, exaggerations, and crucial omissions."[350] The report was denounced by Washington Post reporter Barton Gellman, who, in an opinion piece for The Century Foundation, called it "aggressively dishonest" and "contemptuous of fact."[351]

In August 2013, President Obama said that he had called for a review of U.S. surveillance activities before Snowden had begun revealing details of the NSA's operations,[343] and announced that he was directing DNI James Clapper "to establish a review group on intelligence and communications technologies."[352][353] In December, the task force issued 46 recommendations that, if adopted, would subject the NSA to additional scrutiny by the courts, Congress, and the president, and would strip the NSA of the authority to infiltrate American computer systems using backdoors in hardware or software.[354] Panel member Geoffrey R. Stone said there was no evidence that the bulk collection of phone data had stopped any terror attacks.[355]

On June 6, 2013, in the wake of Snowden's leaks, conservative public interest lawyer and Judicial Watch founder Larry Klayman filed a lawsuit claiming that the federal government had unlawfully collected metadata for his telephone calls and was harassing him. In Klayman v. Obama, Judge Richard J. Leon referred to the NSA's "almost-Orwellian technology" and ruled the bulk telephone metadata program to be likely unconstitutional.[356] Leon's ruling was stayed pending an appeal by the government.
Snowden later described Judge Leon's decision as vindication.[357]

On June 11, the ACLU filed a lawsuit against James Clapper, Director of National Intelligence, alleging that the NSA's phone records program was unconstitutional. In December 2013, ten days after Judge Leon's ruling, Judge William H. Pauley III came to the opposite conclusion. In ACLU v. Clapper, although acknowledging that privacy concerns are not trivial, Pauley found that the potential benefits of surveillance outweigh these considerations and ruled that the NSA's collection of phone data is legal.[358] Gary Schmitt, former staff director of the Senate Select Committee on Intelligence, wrote that "The two decisions have generated public confusion over the constitutionality of the NSA's data collection program—a kind of judicial 'he-said, she-said' standoff."[359]

On May 7, 2015, in the case of ACLU v. Clapper, the United States Court of Appeals for the Second Circuit said that Section 215 of the Patriot Act did not authorize the NSA to collect Americans' calling records in bulk, as exposed by Snowden in 2013. The decision voided U.S. District Judge William Pauley's December 2013 finding that the NSA program was lawful, and remanded the case to him for further review. The appeals court did not rule on the constitutionality of the bulk surveillance and declined to enjoin the program, noting the pending expiration of relevant parts of the Patriot Act. Circuit Judge Gerard E. Lynch wrote that, given the national security interests at stake, it was prudent to give Congress an opportunity to debate and decide the matter.[360]

On September 2, 2020, a US federal court ruled that the US intelligence mass surveillance program exposed by Edward Snowden was illegal and possibly unconstitutional. It also stated that the US intelligence leaders who publicly defended the program were not telling the truth.[14]

On June 2, 2015, the U.S.
Senate passed, and President Obama signed, the USA Freedom Act, which restored in modified form several provisions of the Patriot Act that had expired the day before, while for the first time imposing some limits on the bulk collection of telecommunication data on U.S. citizens by American intelligence agencies. The new restrictions were widely seen as stemming from Snowden's revelations.[361][362]

In an official report published in October 2015, the United Nations special rapporteur for the promotion and protection of the right to freedom of speech, Professor David Kaye, criticized the U.S. government's harsh treatment of, and bringing of criminal charges against, whistleblowers, including Edward Snowden. The report found that Snowden's revelations were important for people everywhere and made "a deep and lasting impact on law, policy, and politics."[363][364]

The European Parliament invited Snowden to make a pre-recorded video appearance to aid their NSA investigation.[365][366] Snowden gave written testimony in which he said that he was seeking asylum in the EU, but that he was told by European Parliamentarians that the U.S. would not allow EU partners to make such an offer.[367] He told the Parliament that the NSA was working with the security agencies of EU states to "get access to as much data of EU citizens as possible."[368] He said that the NSA's Foreign Affairs Division lobbies the EU and other countries to change their laws, allowing for "everyone in the country" to be spied on legally.[369]

Snowden applied for asylum in Austria,[370] Italy,[371] and Switzerland.[372][373][374] Snowden, speaking to a Geneva, Switzerland audience via video link from Moscow, said he would love to return to Geneva, where he had previously worked undercover for the CIA. Swiss media said that the Swiss Attorney General had determined that Switzerland would not extradite Snowden if the US request were considered "politically motivated".
Switzerland would grant Snowden asylum if he revealed the extent of espionage activities by the United States government. According to the paper SonntagsZeitung, Snowden would be granted safe entry and residency in Switzerland in return for his knowledge of American intelligence activities. The Swiss paper Le Matin reported that Snowden's activity could be part of criminal proceedings or part of a parliamentary inquiry.

On September 16, 2019, it was reported that Snowden had said he "would love" to get political asylum in France.[375] Snowden first applied unsuccessfully for asylum in France in 2013, under then French President François Hollande. His second request, under President Emmanuel Macron, was favorably received by Justice Minister Nicole Belloubet. However, no other members of the French government were known to express support for Snowden's asylum request, possibly due to the potential adverse diplomatic consequences.

Hans-Georg Maaßen, head of the Federal Office for the Protection of the Constitution, Germany's domestic security agency, speculated that Snowden could have been working for the Russian government.[376][377] Snowden rejected this insinuation,[378] speculating on Twitter in German that "it cannot be proven if Maaßen is an agent of the SVR or FSB."[379] On October 31, 2013, Snowden met with German Green Party lawmaker Hans-Christian Ströbele in Moscow to discuss the possibility of Snowden giving testimony in Germany.[380] At the meeting, Snowden gave Ströbele a letter to the German government, parliament, and federal Attorney-General, the details of which were to be made public later. Germany later blocked Snowden from testifying in person in an NSA inquiry, citing a potential grave strain on US-German relations.[381]

The FBI demanded that Nordic countries arrest Snowden should he visit their countries.[382] Snowden made asylum requests to Sweden, Norway, Finland and Denmark.[245] All requests were ultimately denied, with varying degrees of severity in the response.
According to Finnish foreign ministry spokeswoman Tytti Pylkkö, Snowden made an asylum request to Finland by sending an application to the Finnish embassy in Moscow while he was confined to the transit area of Sheremetyevo International Airport, but was told that Finnish law required him to be on Finnish soil.[383] According to SVT News, Snowden met with three Swedish MPs, Mathias Sundin (L), Jakop Dalunde (MP) and Cecilia Magnusson (M), in Moscow to discuss his views on mass surveillance.[384] The meeting was organized by the Right Livelihood Award Foundation, which awarded Snowden the Right Livelihood Honorary Award,[385] often called Sweden's "Alternative Nobel Prize". According to the foundation, the prize was for Snowden's work on press freedom. Sweden ultimately rejected Snowden's asylum request, however, so the award was accepted by his father, Lon Snowden, on his behalf.

Snowden was granted a freedom of speech award by the Oslo branch of the writers' group PEN International. He applied for asylum in Norway, but Norwegian Justice Secretary Pål Lønseth insisted that the application be made on Norwegian soil and further expressed doubt that Snowden met the criteria for gaining asylum, namely being "important for foreign political reasons". Snowden then filed a lawsuit for free passage through Norway in order to receive his freedom of speech award, through Oslo's District Court, followed by an appeals court, and finally Norway's Supreme Court. 
The lawsuit was ultimately rejected by the Norwegian Supreme Court.[386][387][388]Snowden also applied for asylum in Denmark, but this was rejected by the center-right Danish Prime MinisterLars Løkke Rasmussen, who said he could see no reason to grant Snowden asylum, calling him a "criminal".[389]Apparently, under an agreement with the Danish government, a US government jet lay in wait on standby inCopenhagen, to transfer Snowden back to the United States from any Scandinavian country.[390] Germany and France spearheaded a proposal for an EU-only integrated electronic communication/system network in response to the Snowden revelations.[391][392]Germany and France wished to control their own networks without the United States being amiddleman.[393][394] The proposal was first announced in 2011,[395]although in 2014, the US trade representatives voiced their opposition to Schengen Cloud.[391][395]It was also unclear that the system would prevent surveillance, but would limit access for EU corporations to international markets. The necessary regulatory alignment across multiple EU member states, to create the scheme, was not pursued.[396]: 62 Support for Snowden came from Latin and South American leaders including the Argentinian PresidentCristina Fernández de Kirchner, Brazilian PresidentDilma Rousseff, Ecuadorian President Rafael Correa, Bolivian President Evo Morales, Venezuelan PresidentNicolás Maduro, and Nicaraguan PresidentDaniel Ortega.[397][398] Crediting the Snowden leaks, theUnited Nations General Assemblyunanimously adopted Resolution 68/167 in December 2013. The non-binding resolution denounced unwarranted digital surveillance and included a symbolic declaration of the right of all individuals to online privacy.[399][400][401] In July 2014,Navi Pillay, UNHigh Commissioner for Human Rights, told a news conference in Geneva that the U.S. 
should abandon its efforts to prosecute Snowden, since his leaks were in the public interest.[402]

Surveys conducted by news outlets and professional polling organizations found that American public opinion was divided on Snowden's disclosures, and that those polled in Canada and Europe were more supportive of Snowden than respondents in the U.S., although American support has grown over time. In Germany, Italy, France, the Netherlands, and Spain, more than 80% of people familiar with Snowden view him positively.[403] For his global surveillance disclosures, Snowden has been honored by publications and organizations based in Europe and the United States. He was voted The Guardian's person of the year 2013, garnering four times the number of votes as any other candidate.[404]

In March 2014, Snowden spoke at the South by Southwest (SXSW) Interactive technology conference in Austin, Texas, in front of 3,500 attendees. He participated by teleconference carried over multiple routers running the Google Hangouts platform. On-stage moderators were Christopher Soghoian and Snowden's legal counsel Wizner, both from the ACLU.[405] Snowden said that the NSA was "setting fire to the future of the internet," and that the SXSW audience was "the firefighters".[406][407][408] Attendees could use Twitter to send questions to Snowden, who answered one by saying that information gathered by corporations was much less dangerous than that gathered by a government agency, because "governments have the power to deprive you of your rights."[406] Then-Representative Mike Pompeo (R-KS) of the House Intelligence Committee, later director of the CIA and secretary of state, had tried unsuccessfully to get the SXSW management to cancel Snowden's appearance; instead, SXSW director Hugh Forrest said that the NSA was welcome to respond to Snowden at the 2015 conference.[406]

Later that month, Snowden appeared by teleconference at the TED conference in Vancouver, British Columbia. 
Represented on stage by a robot with a video screen, video camera, microphones, and speakers, Snowden conversed with TED curatorChris Andersonand told the attendees that online businesses should act quickly to encrypt their websites. He described the NSA's PRISM program as the U.S. government using businesses to collect data for them, and that the NSA "intentionally misleads corporate partners" using, as an example, theBullrun decryption programto create backdoor access.[409]Snowden said he would gladly return to the U.S. if givenimmunity from prosecution, but that he was more concerned about alerting the public about abuses of government authority.[409]Anderson invited Internet pioneerTim Berners-Leeon stage to converse with Snowden, who said that he would support Berners-Lee's concept of an "internet Magna Carta" to "encode our values in the structure of the internet."[409][410] On September 15, 2014, Snowden appeared via remote video link, along with Julian Assange, onKim Dotcom'sMoment of Truthtown hall meeting held inAuckland.[411]He made a similar video link appearance on February 2, 2015, along with Greenwald, as the keynote speaker at the World Affairs Conference atUpper Canada Collegein Toronto.[412] In March 2015, while speaking at theFIFDH(international human rights film festival) he made a public appeal for Switzerland to grant him asylum, saying he would like to return to live in Geneva, where he once worked undercover for the Central Intelligence Agency.[413] In April 2015,John Oliver, the host ofLast Week Tonight with John Oliver, flew to Moscow to interview Edward Snowden.[414] On November 10, 2015, Snowden appeared at theNewseum, via remote video link, forPEN American Center's "Secret Sources: Whistleblowers, National Security and Free Expression," event.[415] In 2015, Snowden earned over $200,000 from digital speaking engagements in the U.S.[416] On March 19, 2016, Snowden delivered the opening keynote address of theLibrePlanetconference, a 
meeting of international free software activists and developers presented by the Free Software Foundation. The conference was held at the Massachusetts Institute of Technology and was the first time Snowden spoke via teleconference using a full free software stack, end-to-end.[417][418][419][420]

On July 21, 2016, Snowden and hardware hacker Bunnie Huang, in a talk at MIT Media Lab's Forbidden Research event, published research for a smartphone case, the so-called Introspection Engine, that would monitor signals received and sent by that phone and alert the user if the phone is transmitting or receiving information when it shouldn't be (for example, when it's turned off or in airplane mode), a feature described by Snowden as useful for journalists or activists operating under hostile governments that would otherwise track their activities through their phones.[421][422][423][424]

In August 2020, a court filing by the Department of Justice indicated that Snowden had collected a total of over $1.2 million in speaking fees, in addition to advances on books, since 2013.[425] In September 2021, Yahoo! Finance reported that for 67 speaking appearances by video link from September 2015 to May 2020, Snowden had earned more than $1.2 million. In March 2021, Iowa State University paid him $35,000 for one such speech, his first at a public U.S. 
college since February 2017, when theUniversity of Pittsburghpaid him $15,000.[12] In April 2021, Snowden appeared at a Canadian investment conference sponsored by Sunil Tulsiani, a former policeman who had been barred from trading for life after dishonest behavior.[426]Snowden took the opportunity to affirm his role as a whistleblower, inform viewers of Tulsiani's background, and encourage investors to conduct proper research before spending any money.[426][427] In July 2013, media criticJay Rosendefined the "Snowden effect" as "Direct and indirect gains in public knowledge from the cascade of events and further reporting that followed Edward Snowden's leaks of classified information about the surveillance state in the U.S."[428]In December 2013,The Nationwrote that Snowden had sparked an overdue debate about national security and individual privacy.[429]InForbes, the effect was seen to have nearly united the U.S. Congress in opposition to the massivepost-9/11domestic intelligence gathering system.[430]In its Spring 2014 Global Attitudes Survey, thePew Research Centerfound that Snowden's disclosures had tarnished the image of the United States, especially in Europe and Latin America.[431] On November 2, 2018, Snowden provided a courtdeclarationinJewel v. National Security Agency,confirming that a document relied upon in the case, discussing the mass surveillance program known asStellar Wind, is actually the same document that he came upon during the course of his employment as an NSA contractor.[432][433][434]
Signals intelligence (SIGINT) is the act and field of intelligence-gathering by interception of signals, whether communications between people (communications intelligence, abbreviated COMINT) or from electronic signals not directly used in communication (electronic intelligence, abbreviated ELINT).[1] As classified and sensitive information is usually encrypted, signals intelligence may necessarily involve cryptanalysis (to decipher the messages). Traffic analysis, the study of who is signaling to whom and in what quantity, is also used to integrate information, and it may complement cryptanalysis.

Electronic interceptions appeared as early as 1900, during the Boer War of 1899–1902. The British Royal Navy had installed wireless sets produced by Marconi on board their ships in the late 1890s, and the British Army used some limited wireless signalling. The Boers captured some wireless sets and used them to make vital transmissions.[2] Since the British were the only people transmitting at the time, they did not need special interpretation of the signals they intercepted.[3]

The birth of signals intelligence in a modern sense dates from the Russo-Japanese War of 1904–1905. As the Russian fleet prepared for conflict with Japan in 1904, the British ship HMS Diana, stationed in the Suez Canal, intercepted Russian naval wireless signals being sent out for the mobilization of the fleet, for the first time in history.[4]

Over the course of the First World War, a new method of signals intelligence reached maturity.[5] Russia's failure to properly protect its communications fatally compromised the Russian Army's advance early in World War I and led to their disastrous defeat by the Germans under Ludendorff and Hindenburg at the Battle of Tannenberg. In 1918, French intercept personnel captured a message written in the new ADFGVX cipher, which was cryptanalyzed by Georges Painvin. This gave the Allies advance warning of the German 1918 Spring Offensive. 
The British in particular built up great expertise in the newly emerging field of signals intelligence and codebreaking (synonymous with cryptanalysis). On the declaration of war, Britain cut all German undersea cables.[6] This forced the Germans to communicate exclusively via either (a) a telegraph line that connected through the British network and thus could be tapped, or (b) radio, which the British could then intercept.[7] Rear Admiral Henry Oliver appointed Sir Alfred Ewing to establish an interception and decryption service at the Admiralty: Room 40.[7] An interception service known as the 'Y' service, together with the post office and Marconi stations, grew rapidly to the point where the British could intercept almost all official German messages.[7]

The German fleet was in the habit each day of wirelessing the exact position of each ship and giving regular position reports when at sea. It was possible to build up a precise picture of the normal operation of the High Seas Fleet, and to infer from the routes they chose where defensive minefields had been placed and where it was safe for ships to operate. Whenever a change to the normal pattern was seen, it immediately signalled that some operation was about to take place, and a warning could be given. Detailed information about submarine movements was also available.[7]

The use of radio-receiving equipment to pinpoint the location of any single transmitter was also developed during the war. Captain H.J. Round, working for Marconi, began carrying out experiments with direction-finding radio equipment for the army in France in 1915. By May 1915, the Admiralty was able to track German submarines crossing the North Sea. 
Some of these stations also acted as 'Y' stations to collect German messages, but a new section was created within Room 40 to plot the positions of ships from the directional reports.[7]

Room 40 played an important role in several naval engagements during the war, notably in detecting major German sorties into the North Sea. The Battle of Dogger Bank was won in no small part due to the intercepts that allowed the Navy to position its ships in the right place.[8] It played a vital role in subsequent naval clashes, including the Battle of Jutland, when the British fleet was sent out to intercept the German fleet. The direction-finding capability allowed for the tracking and location of German ships, submarines, and Zeppelins. The system was so successful that by the end of the war, over 80 million words, comprising the totality of German wireless transmission over the course of the war, had been intercepted by the operators of the Y-stations and decrypted.[9] However, its most astonishing success was in decrypting the Zimmermann Telegram, a telegram from the German Foreign Office sent via Washington to its ambassador Heinrich von Eckardt in Mexico.

With the importance of interception and decryption firmly established by the wartime experience, countries established permanent agencies dedicated to this task in the interwar period. 
In 1919, the British Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peace-time codebreaking agency should be created.[10] The Government Code and Cypher School (GC&CS) was the first peace-time codebreaking agency, with the public function "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also with a secret directive to "study the methods of cypher communications used by foreign powers".[11] GC&CS officially formed on 1 November 1919, having produced its first decrypt before that date, on 19 October.[10][12] By 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems.[13]

The US Cipher Bureau was established in 1919 and achieved some success at the Washington Naval Conference in 1921, through cryptanalysis by Herbert Yardley. Secretary of War Henry L. Stimson closed the US Cipher Bureau in 1929 with the words "Gentlemen do not read each other's mail."

The use of SIGINT had even greater implications during World War II. The combined effort of intercepts and cryptanalysis for the whole of the British forces in World War II came under the code name "Ultra", managed from the Government Code and Cypher School at Bletchley Park. Properly used, the German Enigma and Lorenz ciphers should have been virtually unbreakable, but flaws in German cryptographic procedures, and poor discipline among the personnel carrying them out, created vulnerabilities which made Bletchley's attacks feasible.

Bletchley's work was essential to defeating the U-boats in the Battle of the Atlantic, and to the British naval victories in the Battle of Cape Matapan and the Battle of North Cape. In 1941, Ultra exerted a powerful effect on the North African desert campaign against German forces under General Erwin Rommel. General Sir Claude Auchinleck wrote that were it not for Ultra, "Rommel would have certainly got through to Cairo". 
Ultra decrypts featured prominently in the story of Operation SALAM, László Almásy's mission across the desert behind Allied lines in 1942.[14] Prior to the Normandy landings on D-Day in June 1944, the Allies knew the locations of all but two of Germany's fifty-eight Western Front divisions.

Winston Churchill was reported to have told King George VI: "It is thanks to the secret weapon of General Menzies, put into use on all the fronts, that we won the war!" Supreme Allied Commander Dwight D. Eisenhower, at the end of the war, described Ultra as having been "decisive" to Allied victory.[15] Official historian of British Intelligence in World War II Sir Harry Hinsley argued that Ultra shortened the war "by not less than two years and probably by four years"; and that, in the absence of Ultra, it is uncertain how the war would have ended.[16]

At a lower level, German cryptanalysis, direction finding, and traffic analysis were vital to Rommel's early successes in the Western Desert Campaign until British forces tightened their communications discipline and Australian raiders destroyed his principal SIGINT company.[17]

The United States Department of Defense has a formal definition of the term "signals intelligence". Being a broad field, SIGINT has many sub-disciplines. The two main ones are communications intelligence (COMINT) and electronic intelligence (ELINT).

A collection system has to know to look for a particular signal. "System", in this context, has several nuances. Targeting is the process of developing collection requirements. First, atmospheric conditions, sunspots, the target's transmission schedule and antenna characteristics, and other factors create uncertainty that a given signal intercept sensor will be able to "hear" the signal of interest, even with a geographically fixed target and an opponent making no attempt to evade interception. Basic countermeasures against interception include frequent changing of radio frequency, polarization, and other transmission characteristics. 
An intercept aircraft could not get off the ground if it had to carry antennas and receivers for every possible frequency and signal type to deal with such countermeasures. Second, locating the transmitter's position is usually part of SIGINT.Triangulationand more sophisticatedradio locationtechniques, such astime of arrivalmethods, require multiple receiving points at different locations. These receivers send location-relevant information to a central point, or perhaps to a distributed system in which all participate, such that the information can be correlated and a location computed. Modern SIGINT systems, therefore, have substantial communications among intercept platforms. Even if some platforms are clandestine, there is still a broadcast of information telling them where and how to look for signals.[19]A United States targeting system under development in the late 1990s, PSTS, constantly sends out information that helps the interceptors properly aim their antennas and tune their receivers. Larger intercept aircraft, such as theEP-3orRC-135, have the on-board capability to do some target analysis and planning, but others, such as theRC-12 GUARDRAIL, are completely under ground direction. GUARDRAIL aircraft are fairly small and usually work in units of three to cover a tactical SIGINT requirement, whereas the larger aircraft tend to be assigned strategic/national missions. Before the detailed process of targeting begins, someone has to decide there is a value in collecting information about something. While it would be possible to direct signals intelligence collection at a major sports event, the systems would capture a great deal of noise, news signals, and perhaps announcements in the stadium. If, however, an anti-terrorist organization believed that a small group would be trying to coordinate their efforts using short-range unlicensed radios at the event, SIGINT targeting of radios of that type would be reasonable. 
Targeting would not know where in the stadium the radios might be located or the exact frequency they are using; those are the functions of subsequent steps such as signal detection and direction finding.

Once the decision to target is made, the various interception points need to cooperate, since resources are limited. Knowing what interception equipment to use becomes easier when a target country buys its radars and radios from known manufacturers, or is given them as military aid. National intelligence services keep libraries of devices manufactured by their own country and others, and then use a variety of techniques to learn what equipment is acquired by a given country. Knowledge of physics and electronic engineering further narrows the problem of what types of equipment might be in use. An intelligence aircraft flying well outside the borders of another country will listen for long-range search radars, not short-range fire control radars that would be used by a mobile air defense. Soldiers scouting the front lines of another army know that the other side will be using radios that must be portable and not have huge antennas.

Even if a signal is human communications (e.g., a radio), the intelligence collection specialists have to know it exists. If the targeting function described above learns that a country has a radar that operates in a certain frequency range, the first step is to use a sensitive receiver, with one or more antennas that listen in every direction, to find an area where such a radar is operating. Once the radar is known to be in the area, the next step is to find its location. If operators know the probable frequencies of transmissions of interest, they may use a set of receivers, preset to the frequencies of interest. A signal can be characterized by its spectrum: frequency (horizontal axis) versus power (vertical axis) as produced at the transmitter, before any filtering of signals that do not add to the information being transmitted. 
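The spectrum view just described, power per frequency before filtering, is easy to approximate in software. The sketch below is purely illustrative: the sample rate, carrier frequencies, and the 20 dB detection threshold are all invented for the example. It windows raw samples, takes an FFT, and reports the frequencies that stand well above a crude estimate of the noise floor:

```python
import numpy as np

def spectrum_peaks(samples, sample_rate, threshold_db=20.0):
    """Return frequencies (Hz) whose power stands well above the noise floor."""
    windowed = samples * np.hanning(len(samples))  # window to reduce spectral leakage
    power_db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    floor = np.median(power_db)  # crude noise-floor estimate: most bins are noise
    return freqs[power_db > floor + threshold_db]

# Illustrative carriers at 800 kHz and 1.2 MHz buried in noise.
rng = np.random.default_rng(0)
rate = 4_000_000  # 4 MHz sample rate, assumed for the example
t = np.arange(8192) / rate
samples = (np.sin(2 * np.pi * 800_000 * t)
           + np.sin(2 * np.pi * 1_200_000 * t)
           + 0.01 * rng.standard_normal(t.size))
peaks = spectrum_peaks(samples, rate)  # clusters of bins near each carrier
```

A real receiver would sweep or channelize rather than FFT one block, but the principle, energy concentrated at particular frequencies against a noise floor, is the same.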
Received energy on a particular frequency may start a recorder, and alert a human to listen to the signals if they are intelligible (i.e., COMINT). If the frequency is not known, the operators may look for power on primary or sideband frequencies using a spectrum analyzer. Information from the spectrum analyzer is then used to tune receivers to signals of interest. For example, in a simplified spectrum, the actual information might sit at 800 kHz and 1.2 MHz. Real-world transmitters and receivers usually are directional: one can imagine a bank of spectrum-analyzer displays, each fed by a directional antenna aimed in a different direction.

Spread-spectrum communications is an electronic counter-countermeasures (ECCM) technique to defeat looking for particular frequencies. Spectrum analysis can be used in a different ECCM way to identify frequencies not being jammed or not in use.

The earliest, and still common, means of direction finding is to use directional antennas as goniometers, so that a line can be drawn from the receiver through the position of the signal of interest. (See HF/DF.) Knowing the compass bearing, from a single point, to the transmitter does not locate it. Where the bearings from multiple points, using goniometry, are plotted on a map, the transmitter will be located at the point where the bearings intersect. This is the simplest case; a target may try to confuse listeners by having multiple transmitters, giving the same signal from different locations, switching on and off in a pattern known to their user but apparently random to the listener.

Individual directional antennas have to be manually or automatically turned to find the signal direction, which may be too slow when the signal is of short duration. One alternative is the Wullenweber array technique. 
In this method, several concentric rings of antenna elements simultaneously receive the signal, so that the best bearing will ideally be clearly on a single antenna or a small set. Wullenweber arrays for high-frequency signals are enormous, referred to as "elephant cages" by their users. A more advanced approach is amplitude comparison.

An alternative to tunable directional antennas or large omnidirectional arrays such as the Wullenweber is to measure the time of arrival of the signal at multiple points, using GPS or a similar method for precise time synchronization. Receivers can be on ground stations, ships, aircraft, or satellites, giving great flexibility. A more accurate approach is interferometry. Modern anti-radiation missiles can home in on and attack transmitters; military antennas are rarely a safe distance from the user of the transmitter.

When locations are known, usage patterns may emerge, from which inferences may be drawn. Traffic analysis is the discipline of drawing patterns from information flow among a set of senders and receivers, whether those senders and receivers are designated by location determined through direction finding, by addressee and sender identifications in the message, or even by MASINT techniques for "fingerprinting" transmitters or operators. Message content other than the sender and receiver is not necessary to do traffic analysis, although more information can be helpful. For example, if a certain type of radio is known to be used only by tank units, even if the position is not precisely determined by direction finding, it may be assumed that a tank unit is in the general area of the signal. The owner of the transmitter can assume someone is listening, so might set up tank radios in an area where he wants the other side to believe he has actual tanks. 
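The bearing-intersection method of direction finding described earlier reduces to elementary geometry: each station contributes a ray from its own position along the measured compass bearing, and the fix lies where the rays cross. A minimal sketch, with coordinates and bearings invented for illustration:

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Estimate a transmitter position from two compass bearings.

    Bearings are degrees clockwise from north; positions are (x, y)
    with x east and y north, in consistent units (e.g. km).
    """
    # Convert compass bearings to unit direction vectors (east, north).
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    # Solve p1 + t*d1 == p2 + s*d2 for t via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel; no unique fix")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two hypothetical stations 10 km apart hear the same transmitter:
# one at bearing 045, the other at bearing 315. The rays meet at (5, 5).
fix = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
```

In practice bearings carry measurement error, so more than two stations are used and the "intersection" becomes a least-squares estimate rather than an exact point.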
As part ofOperation Quicksilver, part of thedeceptionplan for the invasion of Europe at theBattle of Normandy, radio transmissions simulated the headquarters and subordinate units of the fictitiousFirst United States Army Group(FUSAG), commanded byGeorge S. Patton, to make the German defense think that the main invasion was to come at another location. In like manner, fake radio transmissions from Japanese aircraft carriers, before theBattle of Pearl Harbor, were made from Japanese local waters, while the attacking ships moved under strict radio silence. Traffic analysis need not focus on human communications. For example, a sequence of a radar signal, followed by an exchange of targeting data and a confirmation, followed by observation of artillery fire, may identify an automatedcounterbattery firesystem. A radio signal that triggers navigational beacons could be a radio landing aid for an airstrip or helicopter pad that is intended to be low-profile. Patterns do emerge. A radio signal with certain characteristics, originating from a fixed headquarters, may strongly suggest that a particular unit will soon move out of its regular base. The contents of the message need not be known to infer the movement. There is an art as well as science of traffic analysis. Expert analysts develop a sense for what is real and what is deceptive.Harry Kidder,[20]for example, was one of the star cryptanalysts of World War II, a star hidden behind the secret curtain of SIGINT.[21] Generating anelectronic order of battle(EOB) requires identifying SIGINT emitters in an area of interest, determining their geographic location or range of mobility, characterizing their signals, and, where possible, determining their role in the broader organizationalorder of battle. EOB covers both COMINT and ELINT.[22]TheDefense Intelligence Agencymaintains an EOB by location. 
The Joint Spectrum Center (JSC) of the Defense Information Systems Agency supplements this location database with five more technical databases. For example, several voice transmitters might be identified as the command net (i.e., top commander and direct reports) in a tank battalion or tank-heavy task force. Another set of transmitters might identify the logistic net for that same unit. An inventory of ELINT sources might identify the medium- and long-range counter-artillery radars in a given area.

Signals intelligence units will identify changes in the EOB, which might indicate enemy unit movement, changes in command relationships, and increases or decreases in capability. Using the COMINT gathering method enables the intelligence officer to produce an electronic order of battle by traffic analysis and content analysis among several enemy units. For example, an intercepted sequence of messages might show that there are two units in the battlefield: unit 1 is mobile, while unit 2 sits at a higher hierarchical level, perhaps a command post. One can also infer that unit 1 moved from one point to another, the two points being about 20 minutes apart by vehicle. If these are regular reports over a period of time, they might reveal a patrol pattern. Direction-finding and radio frequency MASINT could help confirm that the traffic is not deception.

In the EOB build-up process, separation of the intercepted spectrum and the signals intercepted from each sensor must take place in an extremely small period of time, in order to assign the different signals to different transmitters in the battlefield. The complexity of the separation process depends on the complexity of the transmission methods (e.g., hopping or time-division multiple access (TDMA)). 
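The movement inference sketched above, two position reports from the same unit roughly 20 minutes apart, is simple kinematics: the implied straight-line speed tells the analyst whether the movement is consistent with vehicles, foot traffic, or deception. A hypothetical check, with invented grid coordinates:

```python
import math

def implied_speed_kmh(p1, p2, minutes):
    """Straight-line speed (km/h) implied by two position reports on a km grid."""
    distance_km = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return distance_km / (minutes / 60.0)

# Two intercepted reports from "unit 1", 20 minutes apart (positions invented).
speed = implied_speed_kmh((12.0, 7.5), (24.0, 12.5), 20)
# A few tens of km/h is consistent with road travel by vehicle, not foot movement.
```

The straight-line figure is a lower bound on actual speed; a result far above plausible vehicle speeds would instead suggest two distinct transmitters or deliberate deception.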
By gathering and clustering data from each sensor, the measurements of the direction of signals can be optimized and get much more accurate than the basic measurements of a standarddirection findingsensor.[23]By calculating larger samples of the sensor's output data in near real-time, together with historical information of signals, better results are achieved. Data fusion correlates data samples from different frequencies from the same sensor, "same" being confirmed by direction finding or radiofrequency MASINT. If an emitter is mobile, direction finding, other than discovering a repetitive pattern of movement, is of limited value in determining if a sensor is unique. MASINT then becomes more informative, as individual transmitters and antennas may have unique side lobes, unintentional radiation, pulse timing, etc. Network build-up, or analysis of emitters (communication transmitters) in a target region over a sufficient period of time, enables creation of the communications flows of a battlefield.[24] COMINT (communicationsintelligence) is a sub-category of signals intelligence that engages in dealing with messages or voice information derived from the interception of foreign communications. COMINT is commonly referred to as SIGINT, which can cause confusion when talking about the broader intelligence disciplines. The USJoint Chiefs of Staffdefines it as "Technical information and intelligence derived from foreign communications by other than the intended recipients".[18] COMINT, which is defined to be communications among people, will reveal some or all of the following: A basic COMINT technique is to listen for voice communications, usually over radio but possibly "leaking" from telephones or from wiretaps. If the voice communications are encrypted, traffic analysis may still give information. 
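Traffic analysis of the kind just described can be illustrated in a few lines: given only intercepted (sender, receiver) pairs, never any message content, tallying who talks to whom already suggests which station is a hub such as a command post. All callsigns below are invented:

```python
from collections import Counter

def analyze_traffic(intercepts):
    """intercepts: iterable of (sender, receiver) callsign pairs.

    Returns (link_counts, likely_hub), derived purely from the externals
    of the traffic; no message content is needed.
    """
    links = Counter(intercepts)          # how often each directed link is used
    degree = Counter()                   # total messages touching each callsign
    for (src, dst), n in links.items():
        degree[src] += n
        degree[dst] += n
    hub = degree.most_common(1)[0][0]    # busiest node: a candidate headquarters
    return links, hub

# Hypothetical intercept log: one callsign exchanges traffic with everyone.
log = [("K7A", "B2X"), ("K7A", "C9D"), ("B2X", "K7A"),
       ("K7A", "B2X"), ("C9D", "K7A"), ("K7A", "D4F")]
links, hub = analyze_traffic(log)
```

Real traffic analysis also weighs timing, precedence, and direction of flow, but even this crude degree count captures the core idea: structure leaks through externals even when content is encrypted.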
In the Second World War, for security the United States used Native American volunteer communicators known as code talkers, who used languages such as Navajo, Comanche and Choctaw, which would be understood by few people, even in the U.S. Even within these uncommon languages, the code talkers used specialized codes, so a "butterfly" might be a specific Japanese aircraft. British forces made limited use of Welsh speakers for the same reason. While modern electronic encryption does away with the need for armies to use obscure languages, it is likely that some groups might use rare dialects that few outside their ethnic group would understand. Morse code interception was once very important, but Morse code telegraphy is now obsolete in the western world, although possibly used by special operations forces. Such forces, however, now have portable cryptographic equipment. Specialists scan radio frequencies for character sequences (e.g., electronic mail) and fax. A given digital communications link can carry thousands or millions of voice communications, especially in developed countries. Without addressing the legality of such actions, the problem of identifying which channel contains which conversation becomes much simpler when the first thing intercepted is the signaling channel that carries information to set up telephone calls. In civilian and much military use, this channel will carry messages in Signaling System 7 protocols. Retrospective analysis of telephone calls can be made from the call detail records (CDRs) used for billing the calls. More a part of communications security than true intelligence collection, SIGINT units still may have the responsibility of monitoring one's own communications or other electronic emissions, to avoid providing intelligence to the enemy. For example, a security monitor may hear an individual transmitting inappropriate information over an unencrypted radio network, or simply one that is not authorized for the type of information being given.
If immediately calling attention to the violation would not create an even greater security risk, the monitor will call out one of the BEADWINDOW codes[25] used by Australia, Canada, New Zealand, the United Kingdom, the United States, and other nations working under their procedures. Standard BEADWINDOW codes (e.g., "BEADWINDOW 2") include: In WWII, for example, the Japanese Navy, by poor practice, identified a key person's movement over a low-security cryptosystem. This made possible Operation Vengeance, the interception and death of the Combined Fleet commander, Admiral Isoroku Yamamoto. Electronic signals intelligence (ELINT) refers to intelligence-gathering by use of electronic sensors. Its primary focus lies on non-communications signals intelligence. The Joint Chiefs of Staff define it as "Technical and geolocation intelligence derived from foreign noncommunications electromagnetic radiations emanating from sources other than nuclear detonations or radioactive sources."[18] Signal identification is performed by analyzing the collected parameters of a specific signal, and either matching it to known criteria, or recording it as a possible new emitter. ELINT data are usually highly classified, and are protected as such. The data gathered are typically pertinent to the electronics of an opponent's defense network, especially the electronic parts such as radars, surface-to-air missile systems, aircraft, etc. ELINT can be used to detect ships and aircraft by their radar and other electromagnetic radiation; commanders have to choose between not using radar (EMCON), using it intermittently, or using it and expecting to avoid defenses. ELINT can be collected from ground stations near the opponent's territory, ships off their coast, aircraft near or in their airspace, or by satellite. Combining other sources of information with ELINT allows traffic analysis to be performed on electronic emissions which contain human encoded messages.
The method of analysis differs from SIGINT in that any human encoded message in the electronic transmission is not analyzed during ELINT. What is of interest is the type of electronic transmission and its location. For example, during the Battle of the Atlantic in World War II, Ultra COMINT was not always available because Bletchley Park was not always able to read the U-boat Enigma traffic. But high-frequency direction finding ("huff-duff") was still able to detect U-boats by analysis of their radio transmissions, locating their positions by triangulating the bearings taken by two or more huff-duff systems. The Admiralty was able to use this information to plot courses which took convoys away from high concentrations of U-boats. Other ELINT disciplines include intercepting and analyzing enemy weapons control signals, or the identification, friend or foe responses from transponders in aircraft used to distinguish enemy craft from friendly ones. A very common area of ELINT is intercepting radars and learning their locations and operating procedures. Attacking forces may be able to avoid the coverage of certain radars, or, knowing their characteristics, electronic warfare units may jam radars or send them deceptive signals. Confusing a radar electronically is called a "soft kill", but military units will also send specialized missiles at radars, or bomb them, to get a "hard kill". Some modern air-to-air missiles also have radar homing guidance systems, particularly for use against large airborne radars. Knowing where each surface-to-air missile and anti-aircraft artillery system is, and its type, means that air raids can be plotted to avoid the most heavily defended areas and to fly on a flight profile which will give the aircraft the best chance of evading ground fire and fighter patrols. It also allows for the jamming or spoofing of the enemy's defense network (see electronic warfare).
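The huff-duff triangulation described above reduces to intersecting two bearing lines. A minimal sketch follows, assuming positions on a flat local grid and compass bearings in degrees (clockwise from north); the station coordinates and bearings are made-up illustration values, not historical data.

```python
import math

def triangulate(xa, ya, brg_a, xb, yb, brg_b):
    """Estimate an emitter's position from two direction-finding bearings.

    (xa, ya) and (xb, yb) are the known station positions on a planar grid
    (e.g. km); brg_a and brg_b are compass bearings in degrees.
    Returns the (x, y) fix, or None if the bearings are parallel.
    """
    a, b = math.radians(brg_a), math.radians(brg_b)
    det = math.sin(b - a)
    if abs(det) < 1e-9:           # parallel bearings: lines never cross
        return None
    dx, dy = xb - xa, yb - ya
    # Distance along station A's bearing to the crossing point.
    t = (-dx * math.cos(b) + dy * math.sin(b)) / det
    return (xa + t * math.sin(a), ya + t * math.cos(a))

# Two stations 10 km apart both hear the same transmission:
fix = triangulate(0, 0, 45, 10, 0, 315)
print(fix)  # ≈ (5.0, 5.0): the emitter is 5 km north, midway between them
```

Real wartime plots combined many such cuts and weighted them by bearing accuracy, but each individual fix is exactly this two-line intersection.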
Good electronic intelligence can be very important to stealth operations; stealth aircraft are not totally undetectable and need to know which areas to avoid. Similarly, conventional aircraft need to know where fixed or semi-mobile air defense systems are so that they can shut them down or fly around them. Electronic support measures (ESM) or electronic surveillance measures are ELINT techniques using various electronic surveillance systems, but the term is used in the specific context of tactical warfare. ESM give the information needed for electronic attack (EA) such as jamming, or directional bearings (compass angle) to a target in signals intercept, such as in the huff-duff radio direction finding (RDF) systems so critically important during the World War II Battle of the Atlantic. After WWII, RDF, originally applied only in communications, was broadened into systems that also take in ELINT from radar bandwidths and lower frequency communications systems, giving birth to a family of NATO ESM systems, such as the shipboard US AN/WLR-1[26] to AN/WLR-6 systems and comparable airborne units. EA is also called electronic counter-measures (ECM). ESM provides the information needed for electronic counter-counter measures (ECCM), such as understanding a spoofing or jamming mode so one can change one's radar characteristics to avoid it. Meaconing[27] is the combined intelligence and electronic warfare practice of learning the characteristics of enemy navigation aids, such as radio beacons, and retransmitting them with incorrect information. FISINT (foreign instrumentation signals intelligence) is a sub-category of SIGINT, monitoring primarily non-human communication. Foreign instrumentation signals include (but are not limited to) telemetry (TELINT), tracking systems, and video data links. TELINT is an important part of national means of technical verification for arms control. Still at the research level are techniques that can only be described as counter-ELINT, which would be part of a SEAD campaign.
It may be informative to compare and contrast counter-ELINT with ECCM. Signals intelligence and measurement and signature intelligence (MASINT) are closely, and sometimes confusingly, related.[28] The signals intelligence disciplines of communications and electronic intelligence focus on the information in those signals themselves, as with COMINT detecting the speech in a voice communication or ELINT measuring the frequency, pulse repetition rate, and other characteristics of a radar. MASINT also works with collected signals, but is more of an analysis discipline. There are, however, unique MASINT sensors, typically working in different regions or domains of the electromagnetic spectrum, such as infrared or magnetic fields. While NSA and other agencies have MASINT groups, the Central MASINT Office is in the Defense Intelligence Agency (DIA). Where COMINT and ELINT focus on the intentionally transmitted part of the signal, MASINT focuses on unintentionally transmitted information. For example, a given radar antenna will have sidelobes emanating in directions other than that in which the main antenna is aimed. The RADINT (radar intelligence) discipline involves learning to recognize a radar both by its primary signal, captured by ELINT, and by its sidelobes, perhaps captured by the main ELINT sensor or, more likely, a sensor aimed at the sides of the radio antenna. MASINT associated with COMINT might involve detecting the common background sounds expected with human voice communications. For example, if a given radio signal comes from a radio used in a tank, but the interceptor hears no engine noise, or a higher voice frequency range than tank radio modulation usually carries, then even though the voice conversation is meaningful, MASINT might suggest the signal is a deception, not coming from a real tank.
See HF/DF for a discussion of SIGINT-captured information with a MASINT flavor, such as determining the frequency to which a receiver is tuned by detecting the frequency of the beat frequency oscillator of its superheterodyne receiver. Since the invention of radio, the international consensus has been that radio waves are no one's property, and thus the interception itself is not illegal.[29] There can, however, be national laws on who is allowed to collect, store, and process radio traffic, and for what purposes. Monitoring traffic in cables (i.e. telephone and Internet) is far more controversial, since it usually requires physical access to the cable, thereby violating ownership and the expectation of privacy.[citation needed]
https://en.wikipedia.org/wiki/Signals_intelligence
Sousveillance (/suːˈveɪləns/ soo-VAY-lənss) is the recording of an activity by a member of the public, rather than a person or organisation in authority, typically by way of small wearable or portable personal technologies.[14] The term, coined by Steve Mann,[15] stems from the contrasting French words sur, meaning "above", and sous, meaning "below": "surveillance" denotes the "eye-in-the-sky" watching from above, whereas "sousveillance" denotes bringing the means of observation down to human level, either physically (mounting cameras on people rather than on buildings) or hierarchically (ordinary people doing the watching, rather than higher authorities or architectures).[16][17][23] While surveillance and sousveillance both usually refer to visual monitoring, they can denote other forms of monitoring, such as audio surveillance or sousveillance. With audio (e.g. recording of phone conversations), sousveillance is sometimes referred to as "one party consent".[24] Undersight (inverse oversight) is sousveillance at a high level, e.g. "citizen undersight" being reciprocal to a congressional oversight committee or the like.[25][26][27] Inverse surveillance is a subset of sousveillance with an emphasis on "watchful vigilance from underneath" and a form of surveillance inquiry or legal protection involving the recording, monitoring, study, or analysis of surveillance systems, proponents of surveillance, and possibly also recordings of authority figures. Inverse surveillance is typically undertaken by those who are subjected to surveillance, so it can be thought of as a form of ethnography or ethnomethodology (i.e.
an analysis of the surveilled from the perspective of a participant in a society under surveillance).[28] Sousveillance typically involves community-based recording from first person perspectives, without necessarily involving any specific political agenda, whereas inverse surveillance is a form of sousveillance that is typically directed at, or used to collect data to analyze or study, surveillance or its proponents (e.g., the actions of police or protestors at a protest rally).[29][30][31] Sousveillance is not necessarily countersurveillance. Sousveillance can be used to "counter" surveillance, or it can be used with surveillance to create a more complete "veillance" ("Surveillance is a half-truth without sousveillance"[32]). The question of "Who watches the watchers?" is dealt with more properly under the topic of metaveillance[33] (the veillance of veillance) than sousveillance. Inverse surveillance is a type of sousveillance. The more general concept of sousveillance goes beyond just inverse surveillance and the associated twentieth-century political "us versus them" framework for citizens to photograph police, shoppers to photograph shopkeepers, or passengers to photograph taxicab drivers. Howard Rheingold commented in his book Smart Mobs that this is similar to the pedestrian-driver concept, i.e. these are roles that many of us take both sides of from time to time. Many aspects of sousveillance were examined in the general category of "reciprocal accountability" in David Brin's 1997 non-fiction book The Transparent Society, and also in Brin's novels. The first International Workshop on Inverse Surveillance (IWIS) took place in 2004,[34] chaired by Dr. Jim Gemmell (MyLifeBits), Joi Ito, Anastasios Venetsanopoulos, and Steve Mann, among others. One of the things that brought inverse surveillance to light was the reactions of security guards to electric seeing aids and similar sousveillance practices.
It seemed, early on, that the more cameras there were in an establishment, the more the guards disliked the use of an electric seeing aid such as the EyeTap eyeglasses. It was through simply wearing electric seeing aids, as a passive observer, that it was discovered that surveillance and sousveillance can cause conflict and sometimes confrontation. This led some researchers to explore why the perpetrators of surveillance are suspicious of sousveillance, and thus to define the notion of inverse surveillance as a new and interesting facet of studies in sousveillance.[28] Since the year 2001, December 24 has been World Sousveillance Day, with groups of participants in New York City, Toronto, Boston, Florida, Vancouver, Japan, Spain and the United Kingdom. However, this designated day focuses only on hierarchical sousveillance, whereas there are a number of groups around the world working on combining the two forms of sousveillance. An essay from Wired magazine predicted that sousveillance is an important development that will be on the rise in 2014.[35] Sousveillance of a state by its citizens has been credited with addressing many problems, such as election fraud or electoral misdeeds, as well as providing good governance. For example, mobile phones were used in Sierra Leone and Ghana in 2007 for checking malpractices and intimidation during elections.[36] A recent area of research further developed at IWIS was the equilibrium between surveillance and sousveillance. Current "equiveillance" theory holds that sousveillance, to some extent, often reduces or eliminates the need for surveillance. In this sense it is possible to replace the panoptic God's-eye view of surveillance with a more community-building, ubiquitous capture of personal experience. Crimes, for example, might then be solved by way of collaboration among the citizenry rather than through watching over the citizenry from above. But it is not so black-and-white as this dichotomy suggests.
In particular, citizens watching over their neighbors is not necessarily "better" than the alternative: an increase in community self-reliance might be offset by an uncomfortable "nosy neighbor" effect. "Personal sousveillance" has been referred to as "coveillance" by Mann, Nolan and Wellman. Copwatch is a network of American and Canadian volunteer organizations that "police the police." Copwatch groups usually engage in monitoring of the police, videotaping police activity, and educating the public about police misconduct. Fitwatch is a group that photographs Forward Intelligence Teams (police photographers) in the United Kingdom.[37] In 2008, Cambridge researchers (in the MESSAGE project) teamed with bicycle couriers to measure and transmit air pollution indicators as they travel the city.[38] In 2012 the Danish daily newspaper and online title Dagbladet Information crowdmapped the positions of surveillance cameras by encouraging readers to use a free Android and iOS app to photograph and geolocate CCTV cameras.[39][better source needed] Personal sousveillance is the art, science, and technology of personal experience capture, processing, storage, retrieval, and transmission, such as lifelong audiovisual recording by way of cybernetic prosthetics, such as seeing aids, visual memory aids, and the like. Even today's personal sousveillance technologies, like camera phones and weblogs, tend to build a sense of community, in contrast to surveillance, which some have said is corrosive to community.[40] The legal, ethical, and policy issues surrounding personal sousveillance are largely yet to be explored, but there are close parallels to the social and legal norms surrounding the recording of telephone conversations. When one or more parties to the conversation record it, it is called "sousveillance", whereas when the conversation is recorded by a person who is not a party to the conversation (such as a prison guard violating a client-lawyer relationship), the recording is called "surveillance".
"Targeted sousveillance" refers to sousveillance of a specific individual by one or more other individuals.[41]Usually, the targeted individual is a representative or proponent of surveillance, so targeted sousveillance is often inverse surveillance or hierarchical sousveillance. "Hierarchical sousveillance" refers, for example, to citizens photographing the police, shoppers photographing shopkeepers, or taxicab passengers photographing cab drivers.[42]So, for example, targeting former White House security official AdmiralJohn Poindexterwith sousveillance follows this more political narrative. Classy's Kitchen describes sousveillance as "another way to add further introspection to the commons that keeps society open but still makes the world smaller and safer".[43]In this way sousveillance may be regarded as a possible replacement for surveillance. In this sur/sousveillance replacement, one can consider an operative social norm that would require cameras to be attached to a human operator. Under such a scenario, any objections to the camera could be raised by another human more easily than it would be to interact with a lamp post upon which is mounted a surveillance camera. Thus, the argument is that cameras attached to people ought to be less offensive than cameras attached to inanimate objects, because there is at least one responsible party present to operate the camera. This responsible-party argument is analogous to that used for the operation of a motor vehicle, where a responsible driver is present, in contrast to the remote or automated operation of a motor vehicle. Beyond the political or breaching of hierarchical structure explored in academia, the more rapidly emerging discourse on sousveillance within the industry is "personal sousveillance", namely the recording of an activity by a participant in the activity. 
As the technologies get smaller and easier to use, the capture, recording, and playback of everyday life get that much easier to initiate spontaneously in unexpected situations. For example, David Ollila, a manufacturer of video camera equipment, was trapped for four hours aboard a Comair plane at JFK Airport in New York City. When he recorded an interview with the pilot about the situation, the pilot called the police, who then removed Ollila for questioning and removed everyone from the plane.[44] Recording a situation is only part of the sousveillance process. Communicating is also important. Video-sharing sites such as YouTube and photo-sharing sites such as Flickr play a vital role. For example, police agents provocateurs were quickly revealed on YouTube when they infiltrated a demonstration in Montebello, Quebec, against the leaders of Canada, Mexico and the United States (August 2007). When the head of the Quebec police publicly stated that there was no police presence, a sousveillance video showed him to be wrong. When he revised his statement to say that the police provocateurs were peaceful observers, the same video showed them to be masked, wearing police boots, and in one case holding a rock.[45] There are many similar examples, such as the widely viewed YouTube video of UCLA campus policemen tasering a student. In Russia, as well as in some other countries where road users trust neither each other nor the police, onboard cameras are so ubiquitous that thousands of videos of automobile accidents and near-miss incidents have been uploaded.
The unanticipated 2013 Russian meteor event was well documented from a dozen angles via the use of these devices.[46] Similarly, in February 2015, dashcams caught valuable footage of the crash of TransAsia Airways Flight GE235.[47] Alibi sousveillance is a form of sousveillance activity aimed at generating an alibi as evidence to defend against allegations of wrongdoing.[48] Hasan Elahi, a University of Maryland professor, has produced a sousveillance record of his entire life after being detained at an airport because he was erroneously placed on the US terrorist watchlist. Some of his sousveillance activities include using his cell phone as a tracking device, and publicly posting debit card and other transactions that document his actions.[49] One specific use of alibi sousveillance is the growing trend of police officers wearing body cameras while on patrol. Well-publicized events involving police-citizen altercations (such as the case of Michael Brown in Ferguson, Missouri) have increased calls for police to wear body cameras and so capture evidence of the incidents, for their benefit and that of the criminal justice system as a whole.[50] By having officers use sousveillance, police forces can generate hours of video evidence to be used in cases like that of Michael Brown, and the video evidence can act as an important alibi in judicial proceedings as to who is truly at fault.[50] Regardless of the outcome of such events, contemporaneous audio-video evidence can be extremely valuable in respect of compliance- and enforcement-related events. Use of wearable cameras by police officers, combined with video streaming and recording to an archive, produces a record of the officer's interactions with civilians and criminals. Experiments with police use in Rialto, California from 2012 to 2013 resulted in a reduction both of complaints against officers and of the use of violence by officers.
The public is shielded from police misconduct and the police officer from bogus complaints.[51] Because these body cameras are turned on for every encounter with the public, privacy issues have been raised, with specific emphasis on special victim cases such as rape or domestic violence. Police worry that with a camera right in front of them, victims will not feel comfortable revealing all the information that they know.[52] Two case studies done in the United States have revealed that police officers who have cameras have fewer encounters with citizens than officers who do not, due to fear of being reprimanded for committing a mistake.[50] Prior to contemporary sousveillance cultures, Simone Browne (2015) used "dark sousveillance" to refer to the ways that enslaved Black Americans refashioned techniques and technologies to facilitate survival and escape. Browne (2015) notes how pranks and other performative practices and creative acts were used to resist enslavement from experiential insight. In the era of web-based participatory media and convergence cultures, non-governmental and non-state actors, with their own virtual communities and networks that cut across national borders, use what Bakir (2010)[53] calls the sousveillant assemblage to wield discursive power. The sousveillant assemblage comprises Haggerty & Ericson's (2000)[54] surveillant assemblage (loosely linked, unstable systems of data flows of people's activities, tracked by computers and data-mined so that we are each reconfigured as (security) risks or (commercial) opportunities), data-fattened by the proliferation of web-based participatory media and the personal sousveillance that we willingly provide online.
Verde Garrido (2015) has also explored Mann's concept of sousveillance and reinterpreted Michel Foucault's notion of parrhesia (i.e., confronting authority and power with the truth) to explain that in contemporary societies, which are global and digital, 'parrhesiastic sousveillance' allows people to resist and contest social, economic, and political relations of power by means of technology. These acts of resistance and contestation, in turn, enable civil societies to change old meanings and offer new ones, using a newborn digital agency to create new and contemporary politics of truth.[55] Features of sousveillance cultures: Undoubtedly, the urge and practice of dissent have been common, and people exploit the participatory media technologies at hand to mark and spread their dissent.[citation needed] However, the rise of web-based participatory media and sousveillance cultures has made it easier for many more to record and spread this dissent globally, unimpeded by traditional media's commercial distribution restrictions, such as pre-defined circulation runs or paid-for airtime, or the need for expert knowledge in media production. Mann has long maintained that the 'informal nature of sousveillance, with its tendency to distribute recordings widely, will often expose inappropriate use to scrutiny, whereas the secret nature of surveillance will tend to prevent misuse from coming to light' (Mann, 2005, p. 641).[56] Just as Foucault's Panopticon operates through potential or implied surveillance, sousveillance might also operate through the credible threat of its existence. As the ubiquity and awareness of sousveillance widen, it is this that may most empower citizens – by making officials realise that their actions may, themselves, be monitored and exposed at any time.
The permanent potential for sousveillance from so many (as opposed to more formalised exposés at the hands of investigative reporters, a small media elite) raises the likelihood that power abuses will be captured on record, which can then be used to hold power-abusers and manipulators to account – provided, of course, that there is a functioning legal system and/or public sphere (with mechanisms in place to translate popular demands and moral outrage into real-world change). In the case of police body camera implementation in the US, there are multiple responses and social implications to this form of sousveillance. The case against police brutality and the Black Lives Matter movement has garnered an immense and impassioned following in a very short amount of time. Two different social movements have arisen in response to police body cameras. One school of thought holds that police body cameras are necessary in fighting and ending police brutality. The opposing stance raises the issue of privacy that police body cameras may violate. There have not been many case studies of implementing police body cameras.[57] This means that police-worn body cameras have not been proven as a definite method to solve the problem of police brutality.[58] Studies have also shown that people, both policemen and civilians, act differently when they are aware that they are being surveilled on camera.[58] This leaves a lot of room for unpredictability surrounding the consequences of the use of this form of sousveillance. In Mann's original conception, sousveillance had an emancipatory political thrust, with hierarchical sousveillance a conscious act of resistance to surveillance.
Yet the nature of the social change generated is unpredictable and dependent on the sousveillant content, the context of its subsequent sharing, and, of course, the strength of the traditions of deliberation for democratic purposes. ISIS' use of sousveillance, then, may result in social change, but not in a progressive fashion.[citation needed] Given the lack of secrecy inherent in placing sousveillant content online, the anonymity of the sousveillers is of prime importance if hierarchical (politically or legally motivated) sousveillance is to proliferate. There is a real need for spaces online that are willing to protect users' anonymity and keep their subversive content online despite political or corporate pressure. With this sort of situation in mind, whistle-blowing websites have been set up that guarantee anonymity, such as WikiLeaks, launched in December 2006. More such sites are needed. Social media provide what could be described as a semi-permanent and easily accessible database of eyewitness accounts. Given that the web can be used and searched in the manner of a database to find examples of sousveillance, and given the recirculation of sousveillant footage in memes and in mainstream media – ever hungry for new content in a media environment of convergence and expanding capacity – the longevity of sousveillant footage is perhaps what gives sousveillance its agenda-building power. It allows journalists, citizens, activists, insurgents, strategic communicators and researchers the opportunity to discover and partially relive both the eye-witnessed, sousveillant account and the discourse surrounding specific moments of sousveillance, as well as to reflect on, and marshal, their significance.
David Brin's 1989 novel Earth portrays citizens equipped with both augmented reality gear ("Tru-Vu Goggles") and cameras exercising reciprocal accountability, with each other and with authority figures, discussing effects on crime and presaging today's "cop cam" developments. Elites are allowed only temporary, cached secrecy. In Robert Sawyer's Neanderthal Parallax trilogy, the Homo neanderthalensis occupying a parallel universe have what are called companion implants. These are comprehensive recording and transmission devices, mounted in the forearm of each person. Their entire life is constantly monitored and sent to their alibi archive, a repository of recordings accessible only by their owner, or by the proper authorities when investigating an infraction, and in the latter case only in circumstances relevant to the investigation. Recordings are maintained after death; it is not made clear what the reasoning is for this, or under what circumstances and/or by whom a deceased person's archive can be accessed. The plot of the 1995 movie Strange Days is based on a future where sousveillance recordings are made and sold as entertainment. The plot of the movie revolves around the murder of a celebrity by police officers that is recorded by a person secretly wearing one of the devices. In the movie, the recordings are made by a flat array of sensors that pick up signals from the brain stem. The sensors are usually hidden under a wig, and they record everything the person wearing them sees and hears. Recordings made while the person making them dies are called "blackjack" tapes. The plot of the 1985 John Crowley short story Snow revolves around a suspended camera recording the whole of a subject's life being sold as a consumer product. The 2007 novel Halting State by Charles Stross and its sequel Rule 34 depict a 2020s Scotland in which wearable computing has a level of ubiquity similar to that of 2013's cell phones.
The implications of a society in which anyone might be recording anything at any time are explored at length, particularly with respect to policing. The open source science fiction role-playing game Eclipse Phase has sousveillance as a common part of life in the setting, as a result of data storage technology and high-definition digital cameras becoming commonplace and often integrated into any and all objects. Orson Scott Card's novel The Worthing Chronicle also investigates the effect of (apparently) omnipotent watchers, and how it can degrade human experience, a moral dilemma leading the watchers to cease. Vernor Vinge's character Pham Nuwen presciently recognizes the stage of "ubiquitous surveillance" in the collapse-and-rebuild cycle that plagues human planetary civilization in A Deepness in the Sky. There is a need for efficacy, efficiency or effectiveness of sousveillance, which can be met by social media, such as through widespread dissemination on social media; sousveillance as an input modality used in conjunction with such an output modality is called "swollag", or gallows spelled backwards.[59][60][61] For example, filming or streaming an abusive situation, like police abuse, doesn't always lead to justice and punishment of the abuser without some means (i.e. swollag) for sousveillance to take effect. For example, in 2014, a man named Eric Garner was choked to death by a police officer in Staten Island after being arrested on suspicion of selling loose cigarettes. "Garner's death was documented by his friend Ramsey Orta, and the video was widely disseminated. Despite the video evidence, a grand jury declined to indict Garner's killer, leading to widespread outrage and protest. (In an ironic twist, the only person indicted in connection with Garner's death was Orta, who came under police scrutiny and was arrested on an 'unrelated' weapons possession charge. Orta is now in prison in New York.
) Sousveillance is not without its costs."[62] However, it appears that filmed abusive behavior is more likely to be punished if the video is widely spread. This makes sousveillance more efficient and politically meaningful, insofar as it shows a significant proportion of the population the abuses of authority. Thus, the development of video platforms like YouTube and Snapchat, and streaming platforms like Periscope and Twitch, is a key component of sousveillance's efficiency. This was shown during French demonstrations against the "Loi Travail" in 2016, during which a Periscope stream showing security forces, called abusive by part of the demonstrators, was watched by 93,362 people.[63] This video was posted on Twitter.[64] Nevertheless, one may ask whether this creates a dangerous dependence on private platforms, often run by Internet giants (like Google, for YouTube) which have common interests with governments, and which adapt their content through algorithms over which users have no control. In addition, some argue that sousveillance may aid in state surveillance, despite being conducted by the people. Examples include mobile apps used to help people signal public threats, such as the Israeli app c-Now (previously known as Reporty). In January 2018, c-Now was tested in Nice by the mayor Christian Estrosi, sparking virulent public debates, with security advocates reporting spyware associated with the app.[65] Furthermore, the director of c-Now is Ehud Barak, former prime minister of Israel, who is suspected of having kept close links with the Israeli and American governments.[66] For these reasons, security advocates consider the app to serve America's global surveillance program (revealed by Edward Snowden in 2013), and raise the question of whether sousveillance really serves as "inverse surveillance".
https://en.wikipedia.org/wiki/Sousveillance
The stakeholder theory is a theory of organizational management and business ethics that accounts for the multiple constituencies impacted by business entities, such as employees, suppliers, local communities, creditors, and others.[1] It addresses morals and values in managing an organization, such as those related to corporate social responsibility, market economy, and social contract theory. The stakeholder view of strategy integrates a resource-based view and a market-based view, and adds a socio-political level. One common version of stakeholder theory seeks to define the specific stakeholders of a company (the normative theory of stakeholder identification) and then examine the conditions under which managers treat these parties as stakeholders (the descriptive theory of stakeholder salience).[2] In fields such as law, management, and human resources, stakeholder theory succeeded in challenging the usual analysis frameworks by suggesting that stakeholders' needs should be put at the beginning of any action.[3] Some authors, such as Geoffroy Murat, have tried to apply stakeholder theory to irregular warfare.[4] Concepts similar to modern stakeholder theory can be traced back to longstanding philosophical views about the nature of civil society itself and the relations between individuals.[5] In Miles v Sydney Meat-Preserving Co Ltd (1912), which saw the rejection of a shareholder's legal right to a dividend, Australian chief justice Samuel Griffith observed that: The law does not require the members of a company to divest themselves, in its management, of all altruistic motives, or to maintain the character of the company as a soulless and bowelless thing, or to exact the last farthing in its commercial dealings, or forbid them to carry on its operations in a way which they think conducive to the best interests of the community as a whole.[6] The term "stakeholder" in its current use first appeared in an internal memorandum[7] at the Stanford Research Institute in 1963.[5][8] Subsequently, a
"plethora"[5] of stakeholder definitions and theories were developed.[9][5] In 1971, Hein Kroos and Klaus Schwab published a German booklet, Moderne Unternehmensführung im Maschinenbau[10] (Modern Enterprise Management in Mechanical Engineering), arguing that the management of a modern enterprise must serve not only shareholders but all stakeholders (die Interessenten) to achieve long-term growth and prosperity. This claim is disputed.[11] U.S. authors followed; for example, in 1983, Ian Mitroff published Stakeholders of the Organizational Mind in San Francisco. R. Edward Freeman had an article on stakeholder theory in the California Management Review in early 1983, but made no reference to Mitroff's work, attributing the development of the concept to internal discussion at the Stanford Research Institute. He followed this article with a book, Strategic Management: A Stakeholder Approach. This book identifies and models the groups which are stakeholders of a corporation, and both describes and recommends methods by which management can give due regard to the interests of those groups. In short, it attempts to address the "principle of who or what really counts". In the traditional view of a company, the shareholder view, only the owners or shareholders of the company are important, and the company has a binding fiduciary duty to put their needs first, to increase value for them. Stakeholder theory instead argues that there are other parties involved, including employees, customers, suppliers, financiers, communities, governmental bodies, political groups, trade associations, and trade unions. Even competitors are sometimes counted as stakeholders, their status being derived from their capacity to affect the firm and its stakeholders.
The nature of what constitutes a stakeholder is highly contested (Miles, 2012),[12] with hundreds of definitions existing in the academic literature (Miles, 2011).[13] Numerous articles and books written on stakeholder theory generally identify Freeman as the "father of stakeholder theory".[14] Freeman's Strategic Management: A Stakeholder Approach (1984) is widely cited in the field as the foundation of stakeholder theory,[15] although Freeman himself refers to several bodies of literature used in the development of his approach, including strategic management, corporate planning, systems theory, organization theory, and corporate social responsibility. A related field of research examines the concept of stakeholders and stakeholder salience, or the importance of various stakeholder groups to a specific firm. An anticipation of such concepts, as part of corporate social responsibility, appears in a 1968 publication by the Italian economist Giancarlo Pallavicini, creator of "the decomposition method of the parameters", devised to calculate the not directly economic results of business activity with regard to ethical, moral, social, cultural and environmental issues.[16] More recent scholarly works on the topic of stakeholder theory that exemplify research and theorizing in this area include Donaldson and Preston (1995),[15] Mitchell, Agle, and Wood (1997),[17] Friedman and Miles (2002),[18] and Phillips (2003).[19] Thomas Donaldson and Lee E. Preston argue that the theory has three distinct but mutually supportive aspects: descriptive, instrumental, and normative.[20] Since its publication in 1995, their article has served as a foundational reference for researchers in the field, having been cited over 1,100 times.[citation needed] Mitchell et al.
derive a typology of stakeholders based on the attributes of power (the extent to which a party has means to impose its will in a relationship), legitimacy (socially accepted and expected structures or behaviors), and urgency (time sensitivity or criticality of the stakeholder's claims).[23] By examining combinations of these attributes in a binary manner, eight types of stakeholders are derived, along with their implications for the organization. Friedman and Miles explore the implications of contentious relationships between stakeholders and organizations by introducing compatible/incompatible interests and necessary/contingent connections as additional attributes with which to examine the configuration of these relationships.[24] Robert Allen Phillips distinguishes between normatively legitimate stakeholders (those to whom an organization holds a moral obligation) and derivatively legitimate stakeholders (those whose stakeholder status is derived from their ability to affect the organization or its normatively legitimate stakeholders). Stakeholder theory has become prominent not only in the business ethics field; it is also used as one of the frameworks in corporate social responsibility methods. For example, ISO 26000 and the GRI (Global Reporting Initiative) involve stakeholder analysis.[25] In the field of business ethics, Weiss (2014) illustrates how stakeholder analysis can be complemented with issues management approaches to examine societal, organizational, and individual dilemmas. Several case studies are offered to illustrate uses of these methods.
Stakeholder theory has seen growing uptake in higher education in the late 20th and early 21st centuries.[26] One influential definition defines a stakeholder in the context of higher education as anyone with a legitimate interest in education who thereby acquires a right to intervene.[27] Studies of higher education first began to recognize students as stakeholders in 1975.[28] External stakeholders may include employers.[28] In Europe, stakeholder regimes have arisen from the shift of higher education from a government-run bureaucracy to modern systems in which the government's role involves more monitoring than direct control.[29] Economist and university professor Danuše Nerudová, a candidate in the 2023 Czech presidential election, is a proponent of stakeholder capitalism, in which "questions of sustainability and global politics, as well as the development of domestic societies" will have increased relevance for company and state decision making. Researcher Benjamin Tallis has examined whether a move from neoliberalism to stakeholder capitalism, "which implies a different role for the state as well as a focus on creating more cohesive and resilient societies", could affect public optimism in the Czech Republic.[30] The political philosopher Charles Blattberg has criticized stakeholder theory for assuming that the interests of the various stakeholders can be, at best, compromised or balanced against each other. Blattberg argues that this is a product of its emphasis on negotiation as the chief mode of dialogue for dealing with conflicts between stakeholder interests. He recommends conversation instead, and this leads him to defend what he calls a "patriotic" conception of the corporation as an alternative to that associated with stakeholder theory.[31] Management scholar Samuel F.
Mansell argued that, by applying the political concept of a social contract to the corporation, stakeholder theory undermines the principles on which a market economy is based, and could thereby increase, rather than decrease, the opportunities for self-interested managers to exploit weak stakeholders.[32]
https://en.wikipedia.org/wiki/Stakeholder_theory
Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. This phenomenon is distinct from government surveillance, although the two can be mutually reinforcing. The concept of surveillance capitalism, as described by Shoshana Zuboff, is driven by a profit-making incentive, and arose as advertising companies, led by Google's AdWords, saw the possibilities of using personal data to target consumers more precisely.[1] Increased data collection may have various benefits for individuals and society, such as self-optimization (the quantified self),[2] societal optimizations (e.g., by smart cities) and optimized services (including various web applications). However, as capitalism focuses on expanding the proportion of social life that is open to data collection and data processing,[2] this can have significant implications for vulnerability and control of society, as well as for privacy. The economic pressures of capitalism are driving the intensification of online connection and monitoring, with spaces of social life opening up to saturation by corporate actors, directed at making profits and/or regulating behavior.
Personal data points therefore increased in value once the possibilities of targeted advertising were known.[3] As a result, the increasing price of data has limited access to the purchase of personal data points to the richest in society.[4] Shoshana Zuboff writes that "analysing massive data sets began as a way to reduce uncertainty by discovering the probabilities of future patterns in the behavior of people and systems".[5] In 2014, Vincent Mosco referred to the marketing of information about customers and subscribers to advertisers as surveillance capitalism and made note of the surveillance state alongside it.[6] Christian Fuchs found that the surveillance state fuses with surveillance capitalism.[7] Similarly, Zuboff notes that the issue is further complicated by highly invisible collaborative arrangements with state security apparatuses. According to Trebor Scholz, companies recruit people as informants for this type of capitalism.[8] Zuboff contrasts the mass production of industrial capitalism with surveillance capitalism: the former was interdependent with its populations, who were its consumers and employees, while the latter preys on dependent populations, who are neither its consumers nor its employees and are largely ignorant of its procedures.[9] Research shows that the capitalist turn in the analysis of massive amounts of data has taken its original purpose in an unexpected direction.[1] Surveillance has been changing power structures in the information economy, potentially shifting the balance of power further from nation-states and towards large corporations employing the surveillance capitalist logic.[10] Zuboff notes that surveillance capitalism extends beyond the conventional institutional terrain of the private firm, accumulating not only surveillance assets and capital but also rights, and operating without meaningful mechanisms of consent.[9] In other words, analysing massive data sets was at some point executed not only by state apparatuses
but also by companies. Zuboff claims that both Google and Facebook invented surveillance capitalism and translated it into "a new logic of accumulation".[1][11][12] This mutation resulted in both companies collecting very large numbers of data points about their users, with the core purpose of making a profit. Selling these data points to external users (particularly advertisers) has become an economic mechanism. The combination of the analysis of massive data sets and the use of these data sets as a market mechanism has shaped the concept of surveillance capitalism. Surveillance capitalism has been heralded as the successor to neoliberalism.[13][14] Oliver Stone, creator of the film Snowden, pointed to the location-based game Pokémon Go as the "latest sign of the emerging phenomenon and demonstration of surveillance capitalism". Stone criticized the fact that the location of its users was used not only for game purposes, but also to retrieve more information about its players. By tracking users' locations, the game collected far more information than just users' names and locations: "it can access the contents of your USB storage, your accounts, photographs, network connections, and phone activities, and can even activate your phone, when it is in standby mode". This data can then be analysed and commodified by companies such as Google (which significantly invested in the game's development) to improve the effectiveness of targeted advertisement.[15][16] Another aspect of surveillance capitalism is its influence on political campaigning. Personal data retrieved by data miners can enable various companies (most notoriously Cambridge Analytica) to improve the targeting of political advertising, a step beyond the commercial aims of previous surveillance capitalist operations. In this way, political parties may be able to produce far more targeted political advertising to maximise its impact on voters.
However, Cory Doctorow writes that the misuse of these data sets "will lead us towards totalitarianism".[17][better source needed] This may resemble a corporatocracy, and Joseph Turow writes that "the centrality of corporate power is a direct reality at the very heart of the digital age".[2][18]: 17 The terminology "surveillance capitalism" was popularized by Harvard professor Shoshana Zuboff.[19]: 107 In Zuboff's theory, surveillance capitalism is a novel market form and a specific logic of capitalist accumulation. In her 2014 essay A Digital Declaration: Big Data as Surveillance Capitalism, she characterized it as a "radically disembedded and extractive variant of information capitalism" based on the commodification of "reality" and its transformation into behavioral data for analysis and sales.[20][21][22][23] In a subsequent article in 2015, Zuboff analyzed the societal implications of this mutation of capitalism. She distinguished between "surveillance assets", "surveillance capital", and "surveillance capitalism" and their dependence on a global architecture of computer mediation that she calls "Big Other", a distributed and largely uncontested new expression of power that constitutes hidden mechanisms of extraction, commodification, and control that threatens core values such as freedom, democracy, and privacy.[24][2] According to Zuboff, surveillance capitalism was pioneered by Google and later Facebook, just as mass production and managerial capitalism were pioneered by Ford and General Motors a century earlier, and has now become the dominant form of information capitalism.[9] Zuboff emphasizes that behavioral changes enabled by artificial intelligence have become aligned with the financial goals of American internet companies such as Google, Facebook, and Amazon.[19]: 107 In her Oxford University lecture published in 2016, Zuboff identified the mechanisms and practices of surveillance capitalism, including the production of "prediction products" for sale in new "behavioral futures
markets." She introduced the concept of "dispossession by surveillance", arguing that it challenges the psychological and political bases of self-determination by concentrating rights in the surveillance regime. This is described as a "coup from above".[25] Zuboff's book The Age of Surveillance Capitalism[26] is a detailed examination of the unprecedented power of surveillance capitalism and the quest by powerful corporations to predict and control human behavior.[26] Zuboff identifies four key features in the logic of surveillance capitalism, explicitly following the four key features identified by Google's chief economist, Hal Varian.[27] Zuboff compares demanding privacy from surveillance capitalists, or lobbying for an end to commercial surveillance on the Internet, to asking Henry Ford to make each Model T by hand, and states that such demands are existential threats that violate the basic mechanisms of the entity's survival.[9] Zuboff warns that principles of self-determination might be forfeited due to "ignorance, learned helplessness, inattention, inconvenience, habituation, or drift" and states that "we tend to rely on mental models, vocabularies, and tools distilled from past catastrophes," referring to the twentieth century's totalitarian nightmares or the monopolistic predations of Gilded Age capitalism, with countermeasures that were developed to fight those earlier threats not being sufficient or even appropriate to meet the novel challenges.[9] She also poses the question: "will we be the masters of information, or will we be its slaves?" and states that "if the digital future is to be our home, then it is we who must make it so".[28] In her book, Zuboff discusses the differences between industrial capitalism and surveillance capitalism. Zuboff writes that as industrial capitalism exploited nature, surveillance capitalism exploits human nature.[29] The term "surveillance capitalism" has also been used by political economists John Bellamy Foster and Robert W.
McChesney, although with a different meaning. In an article published in Monthly Review in 2014, they apply it to describe the manifestation of the "insatiable need for data" of financialization, which they explain is "the long-term growth of speculation on financial assets relative to GDP", introduced in the United States by industry and government in the 1980s, and which evolved out of the military-industrial complex and the advertising industry.[30] Numerous organizations have been struggling for free speech and privacy rights under the new surveillance capitalism,[31] and various national governments have enacted privacy laws. It is also conceivable that new capabilities and uses for mass surveillance require structural changes towards a new system to create accountability and prevent misuse.[32] Government attention towards the dangers of surveillance capitalism increased especially after the exposure of the Facebook-Cambridge Analytica data scandal in early 2018.[4] In response to the misuse of mass surveillance, multiple states have taken preventive measures. The European Union, for example, has reacted to these events and restricted its rules and regulations on misusing big data.[33] Surveillance capitalism has become much harder under these rules, known as the General Data Protection Regulation.[33] However, implementing preventive measures against the misuse of mass surveillance is hard for many countries, as it requires structural change of the system.[34] Bruce Sterling's 2014 lecture at the Strelka Institute, "The epic struggle of the internet of things",[35] explained how consumer products could become surveillance objects that track people's everyday life. In his talk, Sterling highlighted the alliances between multinational corporations that develop Internet of Things-based surveillance systems which feed surveillance capitalism.[35][36][37] In 2015, Tega Brain and Surya Mattu's satirical artwork Unfit Bits encouraged users to subvert fitness data collected by Fitbits.
They suggested ways to fake datasets by attaching the device to, for example, a metronome or a bicycle wheel.[38][39] In 2018, Brain created a project with Sam Lavigne called New Organs, which collects people's stories of being monitored online and offline.[40][41] The 2019 documentary film The Great Hack tells the story of how a company named Cambridge Analytica used Facebook to manipulate the 2016 U.S. presidential election. Extensive profiling of users, and news feeds ordered by black-box algorithms, were presented as the main source of the problem, which is also mentioned in Zuboff's book.[42] The usage of personal data to subject individuals to categorization and potentially influence them politically highlights how individuals can become voiceless in the face of data misuse. This underlines the crucial role surveillance capitalism can have in social injustice, as it can affect all aspects of life.[43]
https://en.wikipedia.org/wiki/Surveillance_capitalism
Telephone tapping in the Eastern Bloc was a widespread method of the mass surveillance of the population by the secret police.[1] In the past, telephone tapping was an open and legal practice in certain countries.[2] During martial law in Poland, official censorship was introduced, which included open phone tapping. Despite the introduction of the new censorship division, the Polish secret police did not have the resources to monitor all conversations.[3] In Romania, telephone tapping was conducted by the General Directorate for Technical Operations of the Securitate.[4] Created with Soviet assistance in 1954, the outfit monitored all voice and electronic communications inside and outside of Romania. They bugged telephones and intercepted all telegraph and telex messages, as well as placing microphones in both public and private buildings.[5] The 1991 Polish comedy film Calls Controlled[6] capitalizes on this fact. The title alludes to the pre-recorded message "Rozmowa kontrolowana" ("The call is being monitored") sounded during phone calls while martial law in Poland was in force during the 1980s.[7][8] The 2006 film The Lives of Others concerns a Stasi captain who listens to the conversations of a suspected dissident writer in a bugged apartment, with equipment including telephone tapping.[9][10]
https://en.wikipedia.org/wiki/Telephone_tapping_in_the_Eastern_Bloc
Traffic analysis is the process of intercepting and examining messages in order to deduce information from patterns in communication. It can be performed even when the messages are encrypted.[1] In general, the greater the number of messages observed, the more information can be inferred. Traffic analysis can be performed in the context of military intelligence, counter-intelligence, or pattern-of-life analysis, and is also a concern in computer security. Traffic analysis tasks may be supported by dedicated computer software programs. Advanced traffic analysis techniques may include various forms of social network analysis. Traffic analysis has historically been a vital technique in cryptanalysis, especially when the attempted crack depends on successfully seeding a known-plaintext attack, which often requires an inspired guess, based on how the specific operational context likely influences what an adversary communicates, that may be sufficient to establish a short crib. Traffic analysis methods can be used to break the anonymity of anonymous networks, e.g., Tor.[1] There are two methods of traffic-analysis attack: passive and active. In a military context, traffic analysis is a basic part of signals intelligence, and can be a source of information about the intentions and actions of the target. Representative patterns include: There is a close relationship between traffic analysis and cryptanalysis (commonly called codebreaking). Callsigns and addresses are frequently encrypted, requiring assistance in identifying them. Traffic volume can often be a sign of an addressee's importance, giving hints of pending objectives or movements to cryptanalysts. Traffic-flow security is the use of measures that conceal the presence and properties of valid messages on a network to prevent traffic analysis. This can be done by operational procedures or by the protection resulting from features inherent in some cryptographic equipment.
Techniques used include: Traffic-flow security is one aspect of communications security. Communications metadata intelligence, or COMINT metadata, is a term in communications intelligence (COMINT) referring to the concept of producing intelligence by analyzing only the technical metadata; it is thus a prime practical example of traffic analysis in intelligence.[2] While traditionally information gathering in COMINT is derived from intercepting transmissions, tapping the target's communications and monitoring the content of conversations, metadata intelligence is based not on content but on technical communicational data. Non-content COMINT is usually used to deduce information about the user of a certain transmitter, such as locations, contacts, activity volume, routine and its exceptions. For example, if an emitter is known to be the radio transmitter of a certain unit, and by using direction finding (DF) tools the position of the emitter is locatable, then a change of location from one point to another can be deduced without listening to any orders or reports. If one unit reports back to a command on a certain pattern, and another unit reports on the same pattern to the same command, the two units are probably related. That conclusion is based on the metadata of the two units' transmissions, not on the content of their transmissions. Using all or as much of the metadata as is available is commonly used to build up an Electronic Order of Battle (EOB) by mapping different entities in the battlefield and their connections. The EOB could of course be built by tapping all the conversations and trying to understand which unit is where, but using the metadata with an automatic analysis tool enables a much faster and more accurate EOB build-up, which, alongside tapping, builds a much better and more complete picture. Traffic analysis is also a concern in computer security. An attacker can gain important information by monitoring the frequency and timing of network packets.
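The kind of information that packet timing alone can leak is easy to illustrate with a toy model. The sketch below is purely hypothetical: the hand assignments, latency figures, and candidate words are invented for illustration and are not taken from any published attack. It shows how an observer who sees only the intervals between keystroke-sized packets can prune a list of candidate inputs.

```python
# Hypothetical sketch: even on an encrypted channel, each keystroke may
# travel as its own packet, so inter-keystroke latencies are visible.
# We use an invented latency model -- digraphs typed with alternating
# hands are assumed faster than same-hand digraphs -- to show how
# timings alone prune a candidate list.

LEFT = set("qwertasdfgzxcvb")   # keys assumed typed by the left hand
RIGHT = set("yuiophjklnm")      # keys assumed typed by the right hand

def hand(ch):
    return "L" if ch in LEFT else "R"

def expected_latency_ms(a, b):
    # Invented figures: alternating hands ~100 ms, same hand ~160 ms.
    return 100 if hand(a) != hand(b) else 160

def consistent(word, observed_ms, tolerance=20):
    """True if `word` could have produced the observed latencies."""
    if len(observed_ms) != len(word) - 1:
        return False
    pairs = zip(zip(word, word[1:]), observed_ms)
    return all(abs(expected_latency_ms(a, b) - t) <= tolerance
               for (a, b), t in pairs)

# The eavesdropper sees two ~100 ms gaps between three packets and
# filters a (toy) candidate list accordingly.
candidates = ["the", "art", "was"]
observed = [100, 100]
print([w for w in candidates if consistent(w, observed)])  # → ['the']
```

Real attacks, such as the SSH keystroke-timing work discussed below, use far richer statistical models, but the principle is the same: timing metadata constrains content without any decryption.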
A timing attack on the SSH protocol can use timing information to deduce information about passwords since, during an interactive session, SSH transmits each keystroke as a message.[8] The times between keystroke messages can be studied using hidden Markov models. Song et al. claim that this can recover the password fifty times faster than a brute force attack. Onion routing systems are used to gain anonymity. Traffic analysis can be used to attack anonymous communication systems like the Tor anonymity network. Adam Back, Ulf Möller and Anton Stiglic present traffic analysis attacks against anonymity-providing systems.[9] Steven J. Murdoch and George Danezis from the University of Cambridge presented[10] research showing that traffic analysis allows adversaries to infer which nodes relay anonymous streams. This reduces the anonymity provided by Tor. They have shown that otherwise unrelated streams can be linked back to the same initiator. Remailer systems can also be attacked via traffic analysis. If a message is observed going to a remailing server, and an identical-length (if now anonymized) message is seen exiting the server soon after, a traffic analyst may be able to (automatically) connect the sender with the ultimate receiver. Variations of remailer operations exist that can make traffic analysis less effective. In a dark-web setting, traffic analysis involves intercepting and scrutinizing traffic to gather insights about anonymous data flowing through an exit node. By using techniques rooted in dark web crawling and specialized software, one can identify specific characteristics of a client's network traffic within the dark web.[11] It is difficult to defeat traffic analysis without both encrypting messages and masking the channel. When no actual messages are being sent, the channel can be masked[12] by sending dummy traffic, similar to the encrypted traffic, thereby keeping bandwidth usage constant.[13] "It is very hard to hide information about the size or timing of messages.
The known solutions require Alice to send a continuous stream of messages at the maximum bandwidth she will ever use...This might be acceptable for military applications, but it is not for most civilian applications." The military-versus-civilian problem applies in situations where the user is charged for the volume of information sent. Even for Internet access, where there is no per-packet charge, ISPs make the statistical assumption that connections from user sites will not be busy 100% of the time. The user cannot simply increase the bandwidth of the link, since masking would fill that as well. If masking, which can often be built into end-to-end encryptors, becomes common practice, ISPs will have to change their traffic assumptions.
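The constant-bandwidth masking described above can be sketched as a fixed-size, fixed-rate framing loop. This is an illustrative sketch, not any particular product's implementation; the names (`MESSAGE_SIZE`, `masked_stream`, etc.) are invented for the example, and in a real system every frame would also be encrypted so that dummy frames are indistinguishable from real ones on the wire.

```python
import os
import queue
import time

MESSAGE_SIZE = 1024   # every frame on the wire is exactly this many bytes
SEND_INTERVAL = 0.05  # one frame every 50 ms, whether or not there is real data

def pad(payload: bytes) -> bytes:
    """Prefix the real length, then pad with random bytes up to MESSAGE_SIZE."""
    assert len(payload) <= MESSAGE_SIZE - 2, "payload must fit in one frame"
    header = len(payload).to_bytes(2, "big")
    filler = os.urandom(MESSAGE_SIZE - 2 - len(payload))
    return header + payload + filler

def unpad(frame: bytes) -> bytes:
    """Recover the real payload; a dummy frame carries length 0."""
    length = int.from_bytes(frame[:2], "big")
    return frame[2 : 2 + length]

def masked_stream(outbox: "queue.Queue[bytes]", send, duration: float) -> None:
    """Emit one frame every SEND_INTERVAL, substituting dummies when idle,
    so an observer sees constant bandwidth regardless of real activity."""
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        try:
            payload = outbox.get_nowait()
        except queue.Empty:
            payload = b""            # dummy traffic: empty payload, full-size frame
        send(pad(payload))           # in a real system, encrypt before sending
        time.sleep(SEND_INTERVAL)
```

Because every frame has the same size and cadence, the size and timing side channels the quote describes carry no information; the cost, as the text notes, is that the link runs at full rate continuously.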
https://en.wikipedia.org/wiki/Traffic_analysis
Dataveillance is the practice of monitoring and collecting online data as well as metadata.[1] The word is a portmanteau of data and surveillance.[2] Dataveillance is concerned with the continuous monitoring of users' communications and actions across various platforms.[3] For instance, dataveillance refers to the monitoring of data resulting from credit card transactions, GPS coordinates, emails, social networks, etc. Using digital media often leaves traces of data and creates a digital footprint of our activity.[4] Unlike sousveillance, this type of surveillance is not often known and happens discreetly.[5] Dataveillance may involve the surveillance of groups of individuals. There exist three types of dataveillance: personal dataveillance, mass dataveillance, and facilitative mechanisms.[3]

Unlike computer and network surveillance, which collects data from computer networks and hard drives, dataveillance monitors and collects data (and metadata) through social networks and various other online platforms. Dataveillance is not to be confused with electronic surveillance, which refers to the surveillance of oral and audio systems such as wiretapping.[3] Additionally, electronic surveillance depends on having suspects already identified before surveillance can occur.[6] Dataveillance, on the other hand, can use data itself to identify an individual or a group.[6] Oftentimes, these individuals and groups have sparked some form of suspicion with their activity.[3]

Dataveillance has significant impacts on advertising theory and practice. These impacts particularly stem from recent infrastructure and technological advancements that increase the extent to which advertisers can gain data about consumers and their behaviours.
For example, data collection can extend to consumers' offline behaviors and to places that are considered private.[7]

The types of dataveillance are distinguished by the way data is collected, as well as the number of individuals associated with it.

Personal dataveillance: the collection and monitoring of a person's personal data. Personal dataveillance can occur when an individual's data causes suspicion or has attracted attention in some way.[3] Personal data can include information such as birth date, address, social security (or social insurance) number, as well as other unique identifiers.

Mass dataveillance: the collection of data on groups of people.[3] The general distinction between mass dataveillance and personal dataveillance is that data is surveilled and collected for a group rather than an individual.

Facilitative mechanisms: unlike mass dataveillance, no particular group is targeted. An individual's data is placed into a system or database along with that of many others, where computer matching can unveil distinct patterns.[3] An individual's data is never considered to be part of a group in this instance.

There are many concerns and benefits associated with dataveillance. Dataveillance can be useful for collecting and verifying data in beneficial ways. For instance, personal dataveillance can be utilized by financial institutions to track fraudulent purchases on credit card accounts.[3] This has the potential to prevent, regulate, and resolve fraudulent financial claims. Compared to traditional methods of surveillance, dataveillance tends to be an economical approach, since it can monitor more information in less time. The responsibility of monitoring is transferred to computers, reducing the time and human labor involved in surveilling.[8]

Dataveillance has also been useful in assessing security threats associated with terrorism.
Authorities have utilized dataveillance to help them understand and predict potential terrorist or criminal threats.[9] Dataveillance is central to the concept of predictive policing, since predictive policing requires a great deal of data to operate effectively, and dataveillance can supply it. Predictive policing allows police to intervene in potential crimes to create safer communities and better understand potential threats.

Businesses also rely on dataveillance to help them understand the online activity of potential clients by tracking their online activity.[10] By tracking this activity through cookies, as well as various other methods, businesses are able to better understand what sort of advertisements work with their existing and potential clients.[10] While making online transactions, users often give away their information freely, which is later used by the company for corporate or private interests.[11] For businesses, this information can help boost sales and attract attention towards their products to help generate revenue.

On the other hand, there are many concerns that arise with dataveillance. Dataveillance assumes that our technologies and data are a true reflection of ourselves.[3] This presents itself as a potential concern,[9] and it becomes critical when associated with the surveillance of criminal suspects and terrorist groups.
Authorities who monitor these suspects would then assume that the data they have collected reflects their actions.[9] This helps to understand potential or past threats from criminals as well.[9]

There is also a lack of transparency and privacy regarding companies who collect and share their users' data.[3] This is a critical issue for both trust in the data and belief in its uses.[1] Many social networks have argued that their users forfeit part of their privacy in order to receive the service for free.[1] Several of these companies choose not to fully disclose what data is collected and who it is shared with. When data is volunteered to companies, it is difficult to know which companies have gained data about you and your online activity.[9] Much of an individual's data is shared with websites and social networks in order to provide a more customized marketing experience, and many of those social networks may share that information with intelligence agencies and authorities without the user's knowledge.[1] Since the scandal involving Edward Snowden and the National Security Agency, it has been revealed that authorities may have access to more data from various devices and platforms.[1] It has become very difficult to know what will happen with your data or what specifically has been collected.

It is also important to recognize that while online users are worried about their information, many of those same worries are not always applied to their activities or behavior.[12] With social networks collecting a large amount of personal data such as birth date, legal name, sex, and photos, there is an issue of dataveillance compromising confidentiality. Ultimately, dataveillance can compromise online anonymity. Yet anonymity itself presents a crucial issue: online criminals who steal users' data and information may exploit it for their own gain.
Tactics used by online users to conceal their identity make it difficult for others to track criminal behavior and identify those responsible. Unique identifiers such as IP addresses allow for the identification of users' actions and are often used to track illegal online activity such as piracy.

While dataveillance may help businesses market their products to existing and potential clients, there are concerns over how, and by whom, customer data is accessed. When visiting a business's website, cookies are often installed onto users' devices. Cookies have been a new way for businesses to obtain data on potential customers, since they allow businesses to track users' online activities.[10] Companies may also look to sell information they have collected on their clients to third parties.[10] Since clients are not notified about these transactions, it becomes difficult to know where one's data has been sold. Furthermore, since dataveillance is discreet, clients are very unlikely to know the exact nature of the data that has been collected or sold.[10] Education on tracking tools (such as cookies) presents a critical issue: if businesses or online services are unwilling to explain what cookies are, or to educate their users as to why they are being used, many may unwittingly accept them.[13]

The issue stemming from companies and other agencies that collect personal data and information is that they have now engaged in the practice of data brokering. Data brokers, such as Acxiom, collect users' information and are known for selling that information to third parties. While companies may disclose that they are collecting data or online activity from their users, the disclosure is usually not comprehensible to everyday users.[11] It is difficult for everyday people to spot this disclosure, since it is hidden in jargon and writing most often understood by lawyers.[11] This has become a new source of revenue for companies.
In terms of predictive policing, the proper use of crime data and the combination of offline practices and technology have also become challenges for police institutions. Too much reliance on results produced by big data may lead to subjective judgement by police. It may also reduce the amount of real-time, on-site communication between local police officers and residents in particular areas, thus decreasing the opportunity for the police to investigate and patrol local communities on a frequent basis.[14] Secondly, data security remains a huge dilemma, considering the access to crime data and the potential use of these data for negative purposes. Last but not least, discrimination against certain communities might develop from the findings of data analysis, which could lead to improper behaviour or overreaction in surveillance.

One of the major issues with dataveillance is the removal of human actors from the loop. Computer systems oversee the data and construct representations of individuals.[4] This allows for a greater risk of false representations, as they are based only on the data that has been surveilled. Computer systems can only use the data they have, and if this is not an accurate depiction of individuals or their situations, false representations can be created. Dataveillance is highly automated through computer systems which observe our interactions and activities.[4] Such highly automated systems and technology eliminate human understanding of our activities.

With such an increase in data collection and surveillance, many individuals are now attempting to reduce the concerns that have risen alongside it. Countersurveillance is perhaps the most significant concept focused on tactics to prevent dataveillance. There are various tools associated with countersurveillance, which disrupt the effectiveness and possibilities of dataveillance.
Privacy-enhancing technologies, otherwise known as PETs, have been utilized by individuals to reduce data collection and decrease the possibility for dataveillance.[15] PETs, such as ad blockers, attempt to prevent other actors from collecting users' data. An ad-blocking browser extension prevents the display of advertisements, which disrupts the collection of data about users' online interactions.[15] For businesses, this may limit their opportunity to provide online users with tailored advertisements.

Recently, the European Union required companies to indicate when a website uses cookies.[13] This rule has become basic practice for many online services and companies; however, education about tracking tools among the general public varies, which can limit the effectiveness of this sort of ruling.[13] Many companies are also launching new PETs initiatives within their products. For example, Mozilla's Firefox Focus is pre-enabled with customizable privacy features, which allows for better online privacy.[16] A few of the tools featured in Firefox Focus are also mimicked by other web browsers such as Apple's Safari. Some of the tools featured in these web browsers are the capabilities to block ads and remove cookie data and history. Private browsing, otherwise known as Incognito for Google Chrome users, allows users to browse the web without having their history or cookies saved. These tools aid in curbing dataveillance by disrupting the collection and analysis of users' data. While several other web browsers may not pre-enable these PETs within their software, users can download the same tools, like ad blockers, through their browser's web store such as the Google Chrome Web Store. Many of these extensions help enable better privacy tools.

Social networks, such as Facebook, have introduced new security measures to help users protect their online data.
Users can restrict visibility of their posts and other information on their account, leaving only their name and profile picture public. While this doesn't necessarily prevent data tracking, these tools have helped to keep users' data more private and less accessible for online criminals to exploit.
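At their core, the ad-blocking PETs discussed above work by matching outgoing requests against a filter list before the browser fetches them. A minimal sketch, assuming a hypothetical domain blocklist (real extensions use much richer filter syntaxes than plain domain matching):

```python
from urllib.parse import urlparse

# Hypothetical filter list; real blockers ship lists with thousands of entries.
BLOCKLIST = {"ads.example", "tracker.example"}

def is_blocked(url: str, blocklist: set[str] = BLOCKLIST) -> bool:
    """Block a request if its host, or any parent domain of it, is listed,
    so that e.g. cdn.ads.example is caught by the ads.example entry."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return any(".".join(parts[i:]) in blocklist for i in range(len(parts)))

def fetch_filtered(urls: list[str]) -> list[str]:
    """Return only the requests an ad-blocking extension would allow."""
    return [u for u in urls if not is_blocked(u)]
```

Blocking the request entirely, rather than merely hiding the rendered ad, is what disrupts dataveillance: the tracker's server never sees the visit, so no cookie is set and no data point is collected.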
https://en.wikipedia.org/wiki/Dataveillance