Mod, MOD or mods may refer to:
https://en.wikipedia.org/wiki/Mod_(disambiguation)
The language of mathematics has a wide vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this uses common English words, but with a specific non-obvious meaning when used in a mathematical sense. Some phrases, like "in general", appear below in more than one section. [The paper of Eilenberg and Mac Lane (1942)] introduced the very abstract idea of a 'category' — a subject then called 'general abstract nonsense'! [Grothendieck] raised algebraic geometry to a new level of abstraction... if certain mathematicians could console themselves for a time with the hope that all these complicated structures were 'abstract nonsense'... the later papers of Grothendieck and others showed that classical problems... which had resisted efforts of several generations of talented mathematicians, could be solved in terms of... complicated concepts. There are two canonical proofs that are always used to show non-mathematicians what a mathematical proof is like: The beauty of a mathematical theory is independent of the aesthetic qualities... of the theory's rigorous expositions. Some beautiful theories may never be given a presentation which matches their beauty.... Instances can also be found of mediocre theories of questionable beauty which are given brilliant, exciting expositions.... [Category theory] is rich in beautiful and insightful definitions and poor in elegant proofs.... [The theorems] remain clumsy and dull.... [Expositions of projective geometry] vied with one another in elegance of presentation and in cleverness of proof.... In retrospect, one wonders what all the fuss was about. Mathematicians may say that a theorem is beautiful when they really mean to say that it is enlightening. 
We acknowledge a theorem's beauty when we see how the theorem 'fits' in its place.... We say that a proof is beautiful when such a proof finally gives away the secret of the theorem.... Many of the results mentioned in this paper should be considered "folklore" in that they merely formally state ideas that are well-known to researchers in the area, but may not be obvious to beginners and to the best of my knowledge do not appear elsewhere in print. Since half a century we have seen arise a crowd of bizarre functions which seem to try to resemble as little as possible the honest functions which serve some purpose.... Nay more, from the logical point of view, it is these strange functions which are the most general.... to-day they are invented expressly to put at fault the reasonings of our fathers.... [The Dirichlet function] took on an enormous importance... as giving an incentive for the creation of new types of function whose properties departed completely from what intuitively seemed admissible. A celebrated example of such a so-called 'pathological' function... is the one provided by Weierstrass.... This function is continuous but not differentiable. Although ultimately every mathematical argument must meet a high standard of precision, mathematicians use descriptive but informal statements to discuss recurring themes or concepts with unwieldy formal statements. Note that many of the terms are completely rigorous in context. Norbert A'Campo of the University of Basel once asked Grothendieck about something related to the Platonic solids. Grothendieck advised caution. The Platonic solids are so beautiful and so exceptional, he said, that one cannot assume such exceptional beauty will hold in more general situations. The formal language of proof draws repeatedly from a small pool of ideas, many of which are invoked through various lexical shorthands in practice. 
Let V be a finite-dimensional vector space over k.... Let (e_i), 1 ≤ i ≤ n, be a basis for V.... There is an isomorphism of the polynomial algebra k[T_ij], 1 ≤ i,j ≤ n, onto the algebra Sym_k(V⊗V*).... It extends to an isomorphism of k[GL_n] to the localized algebra Sym_k(V⊗V*)_D, where D = det(e_i⊗e_j*).... We write k[GL(V)] for this last algebra. By transport of structure, we obtain a linear algebraic group GL(V) isomorphic to GL_n. Mathematicians have several phrases to describe proofs or proof techniques. These are often used as hints for filling in tedious details. This section features terms used across different areas in mathematics, or terms that do not typically appear in more specialized glossaries. For the terms used only in some specific areas of mathematics, see glossaries in Category:Glossaries of mathematics.
https://en.wikipedia.org/wiki/List_of_mathematical_jargon
Two mathematical objects a and b are called "equal up to an equivalence relation R". This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count. For example, "x is unique up to R" means that all objects x under consideration are in the same equivalence class with respect to the relation R. Moreover, the equivalence relation R is often designated rather implicitly by a generating condition or transformation. For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permuting) the other.[1] As another example, the statement "the solution to an indefinite integral is sin(x), up to addition of a constant" tacitly employs the equivalence relation R between functions, defined by f R g if the difference f − g is a constant function, and means that the solution and the function sin(x) are equal up to this R. In the picture, "there are 4 partitions up to rotation" means that the set P has 4 equivalence classes with respect to R defined by a R b if b can be obtained from a by rotation; one representative from each class is shown in the bottom left picture part. Equivalence relations are often used to disregard possible differences of objects, so "up to R" can be understood informally as "ignoring the same subtleties as R ignores". In the factorization example, "up to ordering" means "ignoring the particular ordering". Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section. In informal contexts, mathematicians often use the word modulo (or simply mod) for similar purposes, as in "modulo isomorphism". 
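The prime-factorization example can be made concrete. Below is a minimal Python sketch (the helper names are mine, for illustration): two factor lists are "equal up to ordering" exactly when they agree as multisets.

```python
from collections import Counter

def prime_factors(n):
    """Return the list of prime factors of n, with multiplicity."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def equal_up_to_ordering(xs, ys):
    """The relation R: two lists are related iff one is a permutation
    of the other, i.e. they agree as multisets."""
    return Counter(xs) == Counter(ys)

# 2*2*3 and 2*3*2 are different lists, but equal up to ordering.
print(prime_factors(12) == [2, 3, 2])                       # False
print(equal_up_to_ordering(prime_factors(12), [2, 3, 2]))   # True
```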
Objects that are distinct up to an equivalence relation defined by a group action, such as rotation, reflection, or permutation, can be counted using Burnside's lemma or its generalization, the Pólya enumeration theorem. Consider the seven Tetris pieces (I, J, L, O, S, T, Z), known mathematically as the tetrominoes. If you consider all the possible rotations of these pieces — for example, if you consider the "I" oriented vertically to be distinct from the "I" oriented horizontally — then you find there are 19 distinct possible shapes to be displayed on the screen. (These 19 are the so-called "fixed" tetrominoes.[2]) But if rotations are not considered distinct — so that we treat both "I vertically" and "I horizontally" indifferently as "I" — then there are only seven. We say that "there are seven tetrominoes, up to rotation". One could also say that "there are five tetrominoes, up to rotation and reflection", which accounts for the fact that L reflected gives J, and S reflected gives Z. In the eight queens puzzle, if the queens are considered to be distinct (e.g. if they are colored with eight different colors), then there are 3709440 distinct solutions. Normally, however, the queens are considered to be interchangeable, and one usually says "there are 3,709,440 / 8! = 92 unique solutions up to permutation of the queens", or that "there are 92 solutions modulo the names of the queens", signifying that two different arrangements of the queens are considered equivalent if the queens have been permuted, as long as the set of occupied squares remains the same. If, in addition to treating the queens as identical, rotations and reflections of the board were allowed, we would have only 12 distinct solutions "up to symmetry and the naming of the queens". For more, see Eight queens puzzle § Solutions. The regular n-gon, for a fixed n, is unique up to similarity. 
In other words, the "similarity" equivalence relation over the regular n-gons (for a fixed n) has only one equivalence class; it is impossible to produce two regular n-gons which are not similar to each other. In group theory, one may have a group G acting on a set X, in which case one might say that two elements of X are equivalent "up to the group action" if they lie in the same orbit. Another typical example is the statement that "there are two different groups of order 4 up to isomorphism", or "modulo isomorphism, there are two groups of order 4". This means that, if one considers isomorphic groups "equivalent", there are only two equivalence classes of groups of order 4. A hyperreal x and its standard part st(x) are equal up to an infinitesimal difference.
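The eight-queens count quoted above can be checked by direct arithmetic: quotienting the 3,709,440 colored solutions by the 8! permutations of identical queens leaves 92.

```python
import math

distinct_colored_solutions = 3_709_440       # queens treated as distinguishable
permutations_of_queens = math.factorial(8)   # 8! = 40320 relabelings

# Solutions up to permutation of the queens:
solutions_up_to_permutation = distinct_colored_solutions // permutations_of_queens
print(solutions_up_to_permutation)  # 92
```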
https://en.wikipedia.org/wiki/Up_to
The ampere-turn (symbol A⋅t) is the MKS (metre–kilogram–second) unit of magnetomotive force (MMF), represented by a direct current of one ampere flowing in a single-turn loop.[1] Turns refers to the winding number of an electrical conductor composing an electromagnetic coil. For example, a current of 2 A flowing through a coil of 10 turns produces an MMF of 20 A⋅t. The corresponding physical quantity is NI, the product of the number of turns, N, and the current, I; it has been used in industry, specifically in US-based coil-making industries.[citation needed] By maintaining the same current and increasing the number of loops or turns of the coil, the strength of the magnetic field increases, because each loop or turn of the coil sets up its own magnetic field. The magnetic field of each turn unites with the fields of the other loops to produce the field around the entire coil, making the total magnetic field stronger. The strength of the magnetic field is not linearly related to the ampere-turns when a magnetic material is used as a part of the system. Also, the material within the magnet carrying the magnetic flux "saturates" at some point, after which adding more ampere-turns has little effect. The ampere-turn corresponds to 4π/10 gilberts, the corresponding CGS unit. In Thomas Edison's laboratory, Francis Upton was the lead mathematician. Trained with Helmholtz in Germany, he used weber as the name of the unit of current, modified to ampere later.
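The NI product and the CGS conversion are simple enough to compute directly; here is a small Python sketch (the function names are mine, for illustration) reproducing the 2 A × 10 turns example.

```python
import math

def mmf_ampere_turns(current_amperes, turns):
    """Magnetomotive force F = N * I, in ampere-turns."""
    return turns * current_amperes

def ampere_turns_to_gilberts(ampere_turns):
    """1 A*t corresponds to 4*pi/10 gilberts (the CGS unit)."""
    return ampere_turns * 4 * math.pi / 10

print(mmf_ampere_turns(2, 10))          # 20, matching the example above
print(ampere_turns_to_gilberts(1))      # about 1.2566 gilberts
```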
https://en.wikipedia.org/wiki/Ampere-turn
The hertz (symbol: Hz) is the unit of frequency in the International System of Units (SI), often described as being equivalent to one event (or cycle) per second.[1][a] The hertz is an SI derived unit whose formal expression in terms of SI base units is 1/s or s⁻¹, meaning that one hertz is one per second or the reciprocal of one second.[2] It is used only in the case of periodic events. It is named after Heinrich Rudolf Hertz (1857–1894), the first person to provide conclusive proof of the existence of electromagnetic waves. For high frequencies, the unit is commonly expressed in multiples: kilohertz (kHz), megahertz (MHz), gigahertz (GHz), terahertz (THz). Some of the unit's most common uses are in the description of periodic waveforms and musical tones, particularly those used in radio- and audio-related applications. It is also used to describe the clock speeds at which computers and other electronics are driven. The unit is sometimes also used as a representation of the energy of a photon, via the Planck relation E = hν, where E is the photon's energy, ν is its frequency, and h is the Planck constant. The hertz is defined as one per second for periodic events. The International Committee for Weights and Measures defined the second as "the duration of 9192631770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom"[3][4] and then adds: "It follows that the hyperfine splitting in the ground state of the caesium 133 atom is exactly 9192631770 hertz, νhfs Cs = 9192631770 Hz." The dimension of the unit hertz is 1/time (T⁻¹). Expressed in base SI units, the unit is the reciprocal second (1/s). In English, "hertz" is also used as the plural form.[5] As an SI unit, Hz can be prefixed; commonly used multiples are kHz (kilohertz, 10³ Hz), MHz (megahertz, 10⁶ Hz), GHz (gigahertz, 10⁹ Hz) and THz (terahertz, 10¹² Hz). One hertz (i.e. 
one per second) simply means "one periodic event occurs per second" (where the event being counted may be a complete cycle); 100 Hz means "one hundred periodic events occur per second", and so on. The unit may be applied to any periodic event — for example, a clock might be said to tick at 1 Hz, or a human heart might be said to beat at 1.2 Hz. The occurrence rate of aperiodic or stochastic events is expressed in reciprocal seconds or inverse seconds (1/s or s⁻¹) in general or, in the specific case of radioactivity, in becquerels.[b] Whereas 1 Hz (one per second) specifically refers to one cycle (or periodic event) per second, 1 Bq (also one per second) specifically refers to one radionuclide event per second on average. Even though frequency, angular velocity, angular frequency and radioactivity all have the dimension T⁻¹, of these only frequency is expressed using the unit hertz.[7] Thus a disc rotating at 60 revolutions per minute (rpm) is said to have an angular velocity of 2π rad/s and a frequency of rotation of 1 Hz. The correspondence between a frequency f with the unit hertz and an angular velocity ω with the unit radians per second is ω = 2πf and f = ω/2π. The hertz is named after Heinrich Hertz. As with every SI unit named after a person, its symbol starts with an upper case letter (Hz), but when written in full, it follows the rules for capitalisation of a common noun; i.e., hertz becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case. The hertz is named after the German physicist Heinrich Hertz (1857–1894), who made important scientific contributions to the study of electromagnetism. 
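The frequency/angular-frequency correspondence is a single factor of 2π; a short Python sketch (helper names are mine) reproduces the 60 rpm disc example.

```python
import math

def omega_from_f(f_hz):
    """Angular frequency omega = 2*pi*f, in radians per second."""
    return 2 * math.pi * f_hz

def f_from_omega(omega_rad_per_s):
    """Frequency f = omega / (2*pi), in hertz."""
    return omega_rad_per_s / (2 * math.pi)

# A disc at 60 rpm rotates once per second: f = 1 Hz, omega = 2*pi rad/s.
f = 60 / 60
print(f)                  # 1.0
print(omega_from_f(f))    # 6.283185307179586 (= 2*pi)
```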
The name was established by the International Electrotechnical Commission (IEC) in 1935.[8] It was adopted by the General Conference on Weights and Measures (CGPM) (Conférence générale des poids et mesures) in 1960, replacing the previous name for the unit, "cycles per second" (cps), along with its related multiples, primarily "kilocycles per second" (kc/s) and "megacycles per second" (Mc/s), and occasionally "kilomegacycles per second" (kMc/s). The term "cycles per second" was largely replaced by "hertz" by the 1970s.[9][failed verification] In some usage, the "per second" was omitted, so that "megacycles" (Mc) was used as an abbreviation of "megacycles per second" (that is, megahertz (MHz)).[10] Sound is a traveling longitudinal wave, which is an oscillation of pressure. Humans perceive the frequency of a sound as its pitch. Each musical note corresponds to a particular frequency. An infant's ear is able to perceive frequencies ranging from 20 Hz to 20,000 Hz; the average adult human can hear sounds between 20 Hz and 16,000 Hz.[11] The range of ultrasound, infrasound and other physical vibrations such as molecular and atomic vibrations extends from a few femtohertz[12] into the terahertz range[c] and beyond.[12] Electromagnetic radiation is often described by its frequency — the number of oscillations of the perpendicular electric and magnetic fields per second — expressed in hertz. Radio frequency radiation is usually measured in kilohertz (kHz), megahertz (MHz), or gigahertz (GHz), with the latter known as microwaves. Light is electromagnetic radiation that is even higher in frequency, and has frequencies in the range of tens of terahertz (THz, infrared) to a few petahertz (PHz, ultraviolet), with the visible spectrum being 400–790 THz. Electromagnetic radiation with frequencies in the low terahertz range (intermediate between those of the highest normally usable radio frequencies and long-wave infrared light) is often called terahertz radiation. 
Even higher frequencies exist, such as those of X-rays and gamma rays, which can be measured in exahertz (EHz). For historical reasons, the frequencies of light and higher frequency electromagnetic radiation are more commonly specified in terms of their wavelengths or photon energies: for a more detailed treatment of this and the above frequency ranges, see Electromagnetic spectrum. Gravitational waves are also described in hertz. Current[when?] observations are conducted in the 30–7000 Hz range by laser interferometers like LIGO, and in the nanohertz (1–1000 nHz) range by pulsar timing arrays. Future space-based detectors are planned to fill in the gap, with LISA operating from 0.1–10 mHz (with some sensitivity from 10 μHz to 100 mHz), and DECIGO in the 0.1–10 Hz range. In computers, most central processing units (CPU) are labeled in terms of their clock rate expressed in megahertz (MHz) or gigahertz (GHz). This specification refers to the frequency of the CPU's master clock signal. This signal is nominally a square wave, which is an electrical voltage that switches between low and high logic levels at regular intervals. As the hertz has become the primary unit of measurement accepted by the general populace to determine the performance of a CPU, many experts have criticized this approach, which they claim is an easily manipulable benchmark. Some processors use multiple clock cycles to perform a single operation, while others can perform multiple operations in a single cycle.[13] For personal computers, CPU clock speeds have ranged from approximately 1 MHz in the late 1970s (Atari, Commodore, Apple computers) to up to 6 GHz in IBM Power microprocessors. Various computer buses, such as the front-side bus connecting the CPU and northbridge, also operate at various frequencies in the megahertz range. 
Higher frequencies than the International System of Units provides prefixes for are believed to occur naturally in the frequencies of the quantum-mechanical vibrations of massive particles, although these are not directly observable and must be inferred through other phenomena. By convention, these are typically not expressed in hertz, but in terms of the equivalent energy, which is proportional to the frequency by the factor of the Planck constant. The CJK Compatibility block in Unicode contains characters for common SI units for frequency. These are intended for compatibility with East Asian character encodings, and not for use in new documents (which would be expected to use Latin letters, e.g. "MHz").[14]
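The frequency-to-energy conversion mentioned above is the Planck relation E = hν. A short Python sketch (function name is mine), using a frequency from the visible range quoted earlier:

```python
# Planck relation E = h * nu: the energy equivalent of a frequency.
h = 6.62607015e-34  # Planck constant in J*s (exact by the 2019 SI definition)

def photon_energy(frequency_hz):
    """Energy in joules of a photon with the given frequency in hertz."""
    return h * frequency_hz

# A 540 THz photon, within the 400-790 THz visible spectrum:
print(photon_energy(540e12))  # about 3.58e-19 J
```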
https://en.wikipedia.org/wiki/Hertz
The cycle per second is a once-common English name for the unit of frequency now known as the hertz (Hz). Cycles per second may be denoted by c.p.s., c/s, or, ambiguously, just "cycles" (Cyc., Cy., C, or c). The term comes from repetitive phenomena such as sound waves having a frequency measurable as a number of oscillations, or cycles, per second.[1] With the organization of the International System of Units in 1960, the cycle per second was officially replaced by the hertz, or reciprocal second, "s⁻¹" or "1/s". Symbolically, "cycle per second" units are "cycle/second", while hertz is "Hz" or "s⁻¹".[2] For higher frequencies, kilocycles (kc), as an abbreviation of kilocycles per second, were often used on components or devices. Other higher units like megacycle (Mc) and less commonly kilomegacycle (kMc) were used before 1960[3] and in some later documents.[4] These have modern equivalents such as kilohertz (kHz), megahertz (MHz), and gigahertz (GHz). Following the introduction of the SI standard, use of these terms began to fall off in favor of the new unit, with hertz becoming the dominant convention in both academic and colloquial speech by the 1970s.[5] Cycle can also be a unit for measuring usage of reciprocating machines, especially presses, in which case cycle refers to one complete revolution of the mechanism being measured (i.e. the shaft of a reciprocating engine). Derived units include cycles per day (cpd) and cycles per year (cpy).
https://en.wikipedia.org/wiki/Cycle_per_second
The angular displacement (symbol θ, ϑ, or φ) – also called angle of rotation, rotational displacement, or rotary displacement – of a physical body is the angle (in units of radians, degrees, turns, etc.) through which the body rotates (revolves or spins) around a centre or axis of rotation. Angular displacement may be signed, indicating the sense of rotation (e.g., clockwise); it may also be greater (in absolute value) than a full turn. When a body rotates about its axis, the motion cannot simply be analyzed as a particle, since in circular motion it undergoes a changing velocity and acceleration at any time. When dealing with the rotation of a body, it becomes simpler to consider the body itself rigid. A body is generally considered rigid when the separations between all the particles remain constant throughout the body's motion, so that, for example, parts of its mass are not flying off. In a realistic sense, all things can be deformable; however, this impact is minimal and negligible. In the example illustrated to the right (or above in some mobile versions), a particle or body P is at a fixed distance r from the origin, O, rotating counterclockwise. It becomes important to then represent the position of particle P in terms of its polar coordinates (r, θ). In this particular example, the value of θ is changing, while the value of the radius remains the same. (In rectangular coordinates (x, y) both x and y vary with time.) As the particle moves along the circle, it travels an arc length s, which becomes related to the angular position through the relationship s = rθ. Angular displacement may be expressed in radians or degrees. 
Using radians provides a very simple relationship between the distance traveled around the circle (circular arc length s) and the distance r from the centre (radius): θ = s/r. For example, if a body rotates 360° around a circle of radius r, the angular displacement is given by the distance traveled around the circumference, which is 2πr, divided by the radius: θ = 2πr/r, which easily simplifies to θ = 2π. Therefore, 1 revolution is 2π radians. The above definition is part of the International System of Quantities (ISQ), formalized in the international standard ISO 80000-3 (Space and time),[1] and adopted in the International System of Units (SI).[2][3] Angular displacement may be signed, indicating the sense of rotation (e.g., clockwise);[1] it may also be greater (in absolute value) than a full turn. In the ISQ/SI, angular displacement is used to define the number of revolutions, N = θ/(2π rad), a ratio-type quantity of dimension one. In three dimensions, angular displacement is an entity with a direction and a magnitude. The direction specifies the axis of rotation, which always exists by virtue of Euler's rotation theorem; the magnitude specifies the rotation in radians about that axis (using the right-hand rule to determine direction). This entity is called an axis-angle. Despite having direction and magnitude, angular displacement is not a vector because it does not obey the commutative law for addition.[4] Nevertheless, when dealing with infinitesimal rotations, second-order infinitesimals can be discarded and in this case commutativity appears. Several ways to describe rotations exist, like rotation matrices or Euler angles. See charts on SO(3) for others. Given that any frame in the space can be described by a rotation matrix, the displacement among them can also be described by a rotation matrix. 
Given two such matrices A0 and Af, the angular displacement matrix between them can be obtained as ΔA = Af A0⁻¹. When this product is performed with a very small difference between the two frames, we obtain a matrix close to the identity. In the limit, we have an infinitesimal rotation matrix. An infinitesimal rotation matrix or differential rotation matrix is a matrix representing an infinitely small rotation. While a rotation matrix is an orthogonal matrix (Rᵀ = R⁻¹) representing an element of SO(n) (the special orthogonal group), the differential of a rotation is a skew-symmetric matrix (Aᵀ = −A) in the tangent space so(n) (the special orthogonal Lie algebra), which is not itself a rotation matrix. An infinitesimal rotation matrix has the form I + dθ A, where I is the identity matrix, dθ is vanishingly small, and A ∈ so(n). For example, an infinitesimal three-dimensional rotation about the x-axis is obtained by taking A = L_x, a basis element of so(3).
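The two defining properties can be checked numerically in pure Python: L_x is skew-symmetric, and I + dθ·L_x is orthogonal up to second order in dθ. This is a sketch with hand-rolled 3×3 matrix helpers (all names are mine).

```python
# Basis element of so(3): generator of rotations about the x-axis.
Lx = [[0, 0, 0],
      [0, 0, -1],
      [0, 1, 0]]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def scale(c, M):
    return [[c * x for x in row] for row in M]

# Skew-symmetry: Lx^T = -Lx.
print(transpose(Lx) == scale(-1, Lx))  # True

# R = I + dtheta * Lx satisfies R R^T = I up to O(dtheta^2).
dtheta = 1e-6
R = add(I3, scale(dtheta, Lx))
RRt = matmul(R, transpose(R))
deviation = max(abs(RRt[i][j] - I3[i][j]) for i in range(3) for j in range(3))
print(deviation < 1e-11)  # True: the error is of order dtheta**2 = 1e-12
```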
https://en.wikipedia.org/wiki/Angle_of_rotation
Revolutions per minute (abbreviated rpm, RPM, rev/min, r/min, or r⋅min⁻¹) is a unit of rotational speed (or rotational frequency) for rotating machines. One revolution per minute is equivalent to 1/60 hertz. ISO 80000-3:2019 defines a physical quantity called rotation (or number of revolutions), dimensionless, whose instantaneous rate of change is called rotational frequency (or rate of rotation), with units of reciprocal seconds (s⁻¹).[1] A related but distinct quantity for describing rotation is angular frequency (or angular speed, the magnitude of angular velocity), for which the SI unit is the radian per second (rad/s). Although they have the same dimensions (reciprocal time) and base unit (s⁻¹), the hertz (Hz) and radians per second (rad/s) are special names used to express two different but proportional ISQ quantities: frequency and angular frequency, respectively. The conversions between a frequency f and an angular frequency ω are ω = 2πf and f = ω/2π. Thus a disc rotating at 60 rpm is said to have an angular speed of 2π rad/s and a rotation frequency of 1 Hz. The International System of Units (SI) does not recognize rpm as a unit. It defines units of angular frequency and angular velocity as rad s⁻¹, and units of frequency as Hz, equal to s⁻¹.
https://en.wikipedia.org/wiki/Revolutions_per_minute
The repeating circle is an instrument for geodetic surveying, developed from the reflecting circle by Étienne Lenoir in 1784.[1] He invented it while an assistant of Jean-Charles de Borda, who later improved the instrument. It was notable as being the equal of the great theodolite created by the renowned instrument maker Jesse Ramsden. It was used to measure the meridian arc from Dunkirk to Barcelona by Jean Baptiste Delambre and Pierre Méchain (see: meridian arc of Delambre and Méchain). The repeating circle is made of two telescopes mounted on a shared axis with scales to measure the angle between the two. The instrument combines multiple measurements to increase accuracy with the following procedure: after one round of sightings, the angle on the instrument is double the angle of interest between the points. Repeating the procedure causes the instrument to show 4× the angle of interest, with further iterations increasing it to 6×, 8×, and so on. In this way, many measurements can be added together, allowing some of the random measurement errors to cancel out.[2] The repeating circle was used by César-François Cassini de Thury, assisted by Pierre Méchain, for the triangulation of the Anglo-French Survey. It would later be used for the arc measurement of Delambre and Méchain, as improvements in the measuring device designed by Borda and used for this survey also raised hopes for a more accurate determination of the length of the French meridian arc.[3] When the metre was chosen as an international unit of length, it was well known that, by measuring the latitude of two stations in Barcelona, Méchain had found that the difference between these latitudes was greater than predicted by direct measurement of distance by triangulation, and that he did not dare to admit this inaccuracy.[4][5] This was later explained by clearance in the central axis of the repeating circle causing wear, with the consequence that the zenith measurements contained significant systematic errors.[6]
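The error-reduction idea can be illustrated with a toy simulation: if after n repetitions the scale has accumulated 2n times the angle, and the scale reading carries a fixed random error, then dividing the final reading by 2n divides that error by 2n. This is an assumed, simplified model of the procedure (names and parameters are mine), not the full observing routine.

```python
import random

def measure_with_repeating_circle(true_angle_deg, repetitions,
                                  reading_error_deg=0.01, seed=1):
    """Toy model: the instrument accumulates 2*n copies of the angle on
    its scale; only the final reading carries the scale's random error,
    so that error is divided by 2*n when recovering the angle."""
    rng = random.Random(seed)
    accumulated = 2 * repetitions * true_angle_deg
    reading = accumulated + rng.gauss(0, reading_error_deg)
    return reading / (2 * repetitions)

estimate = measure_with_repeating_circle(30.0, repetitions=10)
print(abs(estimate - 30.0) < 0.01)  # True: well below a single reading error
```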
https://en.wikipedia.org/wiki/Repeating_circle
The spat (symbol sp[1]), from the Latin spatium ("space"), is a unit of solid angle.[2][3] 1 spat is equal to 4π steradians[1][3] or approximately 41253 square degrees of solid angle (sequence A125560 in the OEIS).[2] Thus it is the solid angle subtended by a complete sphere at its center.[2] The whole sphere contains ~148.510 million square arcminutes and ~534.638 billion square arcseconds.
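The quoted values follow directly from 4π steradians and the steradian-to-square-degree factor (180/π)²; a short Python check:

```python
import math

SPAT_SR = 4 * math.pi                 # steradians in a full sphere (1 spat)
SQ_DEG_PER_SR = (180 / math.pi) ** 2  # square degrees per steradian

spat_sq_deg = SPAT_SR * SQ_DEG_PER_SR
print(round(spat_sq_deg, 2))                       # 41252.96 square degrees
print(round(spat_sq_deg * 3600 / 1e6, 3))          # ~148.511 million sq arcmin
print(round(spat_sq_deg * 3600 ** 2 / 1e9, 3))     # ~534.638 billion sq arcsec
```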
https://en.wikipedia.org/wiki/Spat_(angular_unit)
In geometry, a solid angle (symbol: Ω) is a measure of the amount of the field of view from some particular point that a given object covers. That is, it is a measure of how large the object appears to an observer looking from that point. The point from which the object is viewed is called the apex of the solid angle, and the object is said to subtend its solid angle at that point. In the International System of Units (SI), a solid angle is expressed in a dimensionless unit called a steradian (symbol: sr), which is equal to one square radian, sr = rad². One steradian corresponds to one unit of area (of any shape) on the unit sphere surrounding the apex, so an object that blocks all rays from the apex would cover a number of steradians equal to the total surface area of the unit sphere, 4π. Solid angles can also be measured in squares of angular measures such as degrees, minutes, and seconds. A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Indeed, as viewed from any point on Earth, both objects have approximately the same solid angle (and therefore apparent size). This is evident during a solar eclipse. The magnitude of an object's solid angle in steradians is equal to the area of the segment of a unit sphere, centered at the apex, that the object covers. Giving the area of a segment of a unit sphere in steradians is analogous to giving the length of an arc of a unit circle in radians. Just as the magnitude of a plane angle in radians at the vertex of a circular sector is the ratio of the length of its arc to its radius, the magnitude of a solid angle in steradians is the ratio of the area covered on a sphere by an object to the square of the radius of the sphere. 
The formula for the magnitude of the solid angle in steradians is Ω = A/r², where A is the area (of any shape) on the surface of the sphere and r is the radius of the sphere. Solid angles are often used in astronomy, physics, and in particular astrophysics. The solid angle of an object that is very far away is roughly proportional to the ratio of area to squared distance. Here "area" means the area of the object when projected along the viewing direction. The solid angle of a sphere measured from any point in its interior is 4π sr. The solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. The solid angle subtended at the corner of a cube (an octant) or spanned by a spherical octant is π/2 sr, one-eighth of the solid angle of a sphere.[1] Solid angles can also be measured in square degrees (1 sr = (180/π)² square degrees), in square arc-minutes and square arc-seconds, or in fractions of the sphere (1 sr = 1/4π fractional area), also known as the spat (1 sp = 4π sr). In spherical coordinates there is a formula for the differential, dΩ = sin θ dθ dφ, where θ is the colatitude (angle from the North Pole) and φ is the longitude. The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral: Ω = ∬_S (r̂ · n̂)/r² dS = ∬_S sin θ dθ dφ, where r̂ = r⃗/r is the unit vector corresponding to r⃗, the position vector of an infinitesimal area of surface dS with respect to point P, and where n̂ represents the unit normal vector to dS. 
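The ratio Ω = A/r² can be checked against the full-sphere and cube-face values quoted above (helper name is mine):

```python
import math

def solid_angle_sr(area, radius):
    """Omega = A / r^2: spherical surface area over squared radius."""
    return area / radius ** 2

r = 2.0
sphere_area = 4 * math.pi * r ** 2

print(solid_angle_sr(sphere_area, r))      # 12.566... = 4*pi (full sphere)
print(solid_angle_sr(sphere_area / 6, r))  # 2.094... = 2*pi/3 (one cube face)
```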
Even if the projection on the unit sphere to the surface $S$ is not isomorphic, the multiple folds are correctly considered according to the surface orientation described by the sign of the scalar product $\hat{r} \cdot \hat{n}$. Thus one can approximate the solid angle subtended by a small facet having flat surface area $dS$, orientation $\hat{n}$, and distance $r$ from the viewer as $$d\Omega = 4\pi \left(\frac{dS}{A}\right) (\hat{r} \cdot \hat{n}),$$ where the surface area of a sphere is $A = 4\pi r^2$. The solid angle of a cone with its apex at the apex of the solid angle, and with apex angle $2\theta$, is the area of a spherical cap on a unit sphere: $$\Omega = 2\pi(1 - \cos\theta) = 4\pi \sin^2\frac{\theta}{2}.$$ For small $\theta$ such that $\cos\theta \approx 1 - \theta^2/2$, this reduces to $\pi\theta^2$, the area of a circle: as the cap height $h \to 0$, the cap approaches a flat disc of radius $\theta$. The above is found by computing the following double integral using the unit surface element in spherical coordinates: $$\begin{aligned}\int_0^{2\pi} \int_0^{\theta} \sin\theta' \, d\theta' \, d\phi &= \int_0^{2\pi} d\phi \int_0^{\theta} \sin\theta' \, d\theta' \\ &= 2\pi \int_0^{\theta} \sin\theta' \, d\theta' \\ &= 2\pi \left[-\cos\theta'\right]_0^{\theta} \\ &= 2\pi(1 - \cos\theta).\end{aligned}$$ This formula can also be derived without the use of calculus. Over 2200 years ago Archimedes proved that the surface area of a spherical cap is always equal to the area of a circle whose radius equals the distance from the rim of the spherical cap to the point where the cap's axis of symmetry intersects the cap.[3] For a cap with apex angle $2\theta$ on a sphere of radius $r$, this radius is $2r\sin\frac{\theta}{2}$.
Hence for a unit sphere the solid angle of the spherical cap is given as $$\Omega = 4\pi \sin^2\frac{\theta}{2} = 2\pi(1 - \cos\theta).$$ When $\theta = \pi/2$, the spherical cap becomes a hemisphere having a solid angle $2\pi$. The solid angle of the complement of the cone is $$4\pi - \Omega = 2\pi(1 + \cos\theta) = 4\pi \cos^2\frac{\theta}{2}.$$ This is also the solid angle of the part of the celestial sphere that an astronomical observer positioned at latitude $\theta$ can see as the Earth rotates. At the equator all of the celestial sphere is visible; at either pole, only one half. The solid angle subtended by a segment of a spherical cap cut by a plane at angle $\gamma$ from the cone's axis and passing through the cone's apex can be calculated by the formula[4] $$\Omega = 2\left[\arccos\left(\frac{\sin\gamma}{\sin\theta}\right) - \cos\theta \arccos\left(\frac{\tan\gamma}{\tan\theta}\right)\right].$$ For example, if $\gamma = -\theta$, then the formula reduces to the spherical cap formula above: the first term becomes $\pi$, and the second $\pi\cos\theta$. Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where $\vec{a}, \vec{b}, \vec{c}$ are the vector positions of the vertices A, B and C. Define the vertex angle $\theta_a$ to be the angle BOC, and define $\theta_b$, $\theta_c$ correspondingly. Let $\phi_{ab}$ be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC, and define $\phi_{ac}$, $\phi_{bc}$ correspondingly.
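As a hedged sketch (the function name is my own choice), the cap formula $\Omega = 2\pi(1-\cos\theta)$ and its small-angle limit $\pi\theta^2$ can be checked in Python:

```python
import math

def cone_solid_angle(theta):
    """Solid angle of a right circular cone with half-apex angle theta:
    the area of the corresponding cap on the unit sphere,
    Omega = 2*pi*(1 - cos(theta))."""
    return 2 * math.pi * (1 - math.cos(theta))

hemisphere = cone_solid_angle(math.pi / 2)  # cap becomes a hemisphere: 2*pi
tiny = cone_solid_angle(1e-4)               # ~ pi * theta^2 for small theta
```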
The solid angle $\Omega$ subtended by the triangular surface ABC is given by $$\Omega = (\phi_{ab} + \phi_{bc} + \phi_{ac}) - \pi.$$ This follows from the theory of spherical excess, and it leads to the fact that there is an analogous theorem to the theorem that "the sum of internal angles of a planar triangle is equal to $\pi$", for the sum of the four internal solid angles of a tetrahedron, as follows: $$\sum_{i=1}^{4} \Omega_i = 2\sum_{i=1}^{6} \phi_i - 4\pi,$$ where $\phi_i$ ranges over all six of the dihedral angles between any two planes that contain the tetrahedral faces OAB, OAC, OBC and ABC.[5] A useful formula for calculating the solid angle of the tetrahedron at the origin O that is purely a function of the vertex angles $\theta_a$, $\theta_b$, $\theta_c$ is given by L'Huilier's theorem[6][7] as $$\tan\left(\tfrac{1}{4}\Omega\right) = \sqrt{\tan\left(\tfrac{\theta_s}{2}\right)\tan\left(\tfrac{\theta_s-\theta_a}{2}\right)\tan\left(\tfrac{\theta_s-\theta_b}{2}\right)\tan\left(\tfrac{\theta_s-\theta_c}{2}\right)},$$ where $\theta_s = \frac{\theta_a + \theta_b + \theta_c}{2}$. Another interesting formula involves expressing the vertices as vectors in 3-dimensional space. Let $\vec{a}, \vec{b}, \vec{c}$ be the vector positions of the vertices A, B and C, and let $a$, $b$, and $c$ be the magnitude of each vector (the origin-point distance).
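A minimal Python sketch of L'Huilier's theorem (the function name is assumed, not from the article); for an octant, where all three vertex angles are right angles, it should recover $\Omega = \pi/2$:

```python
import math

def lhuilier_solid_angle(theta_a, theta_b, theta_c):
    """Solid angle at the apex of a tetrahedron from its three vertex
    angles, via L'Huilier's theorem:
    tan(Omega/4) = sqrt(tan(s/2) tan((s-a)/2) tan((s-b)/2) tan((s-c)/2))."""
    s = (theta_a + theta_b + theta_c) / 2  # half-sum of the vertex angles
    prod = (math.tan(s / 2)
            * math.tan((s - theta_a) / 2)
            * math.tan((s - theta_b) / 2)
            * math.tan((s - theta_c) / 2))
    return 4 * math.atan(math.sqrt(prod))

# Three mutually perpendicular edges: one eighth of the sphere, pi/2 sr.
octant = lhuilier_solid_angle(math.pi / 2, math.pi / 2, math.pi / 2)
```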
The solid angle $\Omega$ subtended by the triangular surface ABC is:[8][9] $$\tan\left(\tfrac{1}{2}\Omega\right) = \frac{\left|\vec{a}\ \vec{b}\ \vec{c}\right|}{abc + (\vec{a} \cdot \vec{b})c + (\vec{a} \cdot \vec{c})b + (\vec{b} \cdot \vec{c})a},$$ where $\left|\vec{a}\ \vec{b}\ \vec{c}\right| = \vec{a} \cdot (\vec{b} \times \vec{c})$ denotes the scalar triple product of the three vectors and $\vec{a} \cdot \vec{b}$ denotes the scalar product. Care must be taken here to avoid negative or incorrect solid angles. One source of potential errors is that the scalar triple product can be negative if $\vec{a}$, $\vec{b}$, $\vec{c}$ have the wrong winding. Computing the absolute value is a sufficient solution, since no other portion of the equation depends on the winding. The other pitfall arises when the scalar triple product is positive but the divisor is negative. In this case the arctangent returns a negative value that must be increased by $\pi$.
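Both pitfalls can be handled at once with a two-argument arctangent. The following Python sketch (function names are my own; verified here only on the octant case) implements the vector formula above:

```python
import math

def triple(a, b, c):
    # Scalar triple product a . (b x c).
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            + a[1] * (b[2] * c[0] - b[0] * c[2])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def tetra_solid_angle(a, b, c):
    """Solid angle at the origin subtended by triangle ABC.
    atan2 keeps the result positive even when the divisor is negative,
    so no manual +pi correction is needed."""
    la, lb, lc = (math.sqrt(dot(v, v)) for v in (a, b, c))
    num = abs(triple(a, b, c))  # absolute value fixes the winding
    den = la * lb * lc + dot(a, b) * lc + dot(a, c) * lb + dot(b, c) * la
    return 2 * math.atan2(num, den)

# Octant spanned by the three coordinate axes: one eighth of 4*pi.
octant = tetra_solid_angle((1, 0, 0), (0, 1, 0), (0, 0, 1))
```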
The solid angle of a four-sided right rectangular pyramid with apex angles $a$ and $b$ (dihedral angles measured to the opposite side faces of the pyramid) is $$\Omega = 4\arcsin\left(\sin\frac{a}{2}\sin\frac{b}{2}\right).$$ If both the side lengths ($\alpha$ and $\beta$) of the base of the pyramid and the distance ($d$) from the center of the base rectangle to the apex of the pyramid (the center of the sphere) are known, then the above equation can be manipulated to give $$\Omega = 4\arctan\frac{\alpha\beta}{2d\sqrt{4d^2 + \alpha^2 + \beta^2}}.$$ The solid angle of a right $n$-gonal pyramid, where the pyramid base is a regular $n$-sided polygon of circumradius $r$, with a pyramid height $h$ is $$\Omega = 2\pi - 2n\arctan\left(\frac{\tan\frac{\pi}{n}}{\sqrt{1 + \frac{r^2}{h^2}}}\right).$$ The solid angle of an arbitrary pyramid with an $n$-sided base defined by the sequence of unit vectors representing edges $s_1, s_2, \dots, s_n$ can be efficiently computed by:[4] $$\Omega = 2\pi - \arg\prod_{j=1}^{n} \left((s_{j-1} s_j)(s_j s_{j+1}) - (s_{j-1} s_{j+1}) + i\,[s_{j-1}\, s_j\, s_{j+1}]\right),$$ where parentheses (* *) denote a scalar product, square brackets [* * *] a scalar triple product, and $i$ is an imaginary unit. Indices are cycled: $s_0 = s_n$ and $s_1 = s_{n+1}$. The complex products add the phase associated with each vertex angle of the polygon. However, a multiple of $2\pi$ is lost in the branch cut of $\arg$ and must be kept track of separately. Also, the running product of complex phases must be scaled occasionally to avoid underflow in the limit of nearly parallel segments.
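As a sanity check (a Python sketch with an assumed function name), the $\alpha, \beta, d$ form should reproduce the cube-face result $2\pi/3$ when the base is a square of side $s$ viewed from distance $d = s/2$, as for a face of a cube seen from the cube's center:

```python
import math

def rect_pyramid_solid_angle(alpha, beta, d):
    """Solid angle at the apex of a right rectangular pyramid with base
    side lengths alpha, beta and apex at height d above the base center:
    Omega = 4*atan(alpha*beta / (2*d*sqrt(4*d^2 + alpha^2 + beta^2)))."""
    return 4 * math.atan(
        alpha * beta / (2 * d * math.sqrt(4 * d**2 + alpha**2 + beta**2)))

# A face of a cube of side s, seen from the cube's center (d = s/2),
# subtends one sixth of the full sphere: 2*pi/3 steradians.
s = 3.0
face = rect_pyramid_solid_angle(s, s, s / 2)
```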
The solid angle of a latitude-longitude rectangle on a globe is $$(\sin\phi_N - \sin\phi_S)(\theta_E - \theta_W)\ \mathrm{sr},$$ where $\phi_N$ and $\phi_S$ are north and south lines of latitude (measured from the equator in radians with angle increasing northward), and $\theta_E$ and $\theta_W$ are east and west lines of longitude (where the angle in radians increases eastward).[10] Mathematically, this represents an arc of angle $\phi_N - \phi_S$ swept around a sphere by $\theta_E - \theta_W$ radians. When longitude spans $2\pi$ radians and latitude spans $\pi$ radians, the solid angle is that of a sphere. A latitude-longitude rectangle should not be confused with the solid angle of a rectangular pyramid. All four sides of a rectangular pyramid intersect the sphere's surface in great circle arcs. With a latitude-longitude rectangle, only lines of longitude are great circle arcs; lines of latitude are not. By using the definition of angular diameter, the formula for the solid angle of a celestial object can be defined in terms of the radius of the object, $R$, and the distance from the observer to the object, $d$: $$\Omega = 2\pi\left(1 - \frac{\sqrt{d^2 - R^2}}{d}\right), \quad d \geq R.$$ By inputting the appropriate average values for the Sun and the Moon (in relation to Earth), the average solid angle of the Sun is 6.794×10⁻⁵ steradians and the average solid angle of the Moon is 6.418×10⁻⁵ steradians. In terms of the total celestial sphere, the Sun and the Moon subtend average fractional areas of 0.0005406% (5.406 ppm) and 0.0005107% (5.107 ppm), respectively. As these solid angles are about the same size, the Moon can cause both total and annular solar eclipses depending on the distance between the Earth and the Moon during the eclipse.
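A short Python sketch of the latitude-longitude formula (function name mine); a latitude span of $\pi$ and longitude span of $2\pi$ should recover the whole sphere, $4\pi$ sr:

```python
import math

def latlon_solid_angle(phi_n, phi_s, theta_e, theta_w):
    """Solid angle of a latitude-longitude rectangle, with latitudes in
    radians from the equator (increasing northward) and longitudes in
    radians (increasing eastward):
    (sin(phi_N) - sin(phi_S)) * (theta_E - theta_W)."""
    return (math.sin(phi_n) - math.sin(phi_s)) * (theta_e - theta_w)

# Pole-to-pole latitude and a full 2*pi of longitude: the whole sphere.
whole_sphere = latlon_solid_angle(math.pi / 2, -math.pi / 2, 2 * math.pi, 0.0)
```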
The solid angle subtended by the complete $(d-1)$-dimensional spherical surface of the unit sphere in $d$-dimensional Euclidean space can be defined in any number of dimensions $d$. One often needs this solid angle factor in calculations with spherical symmetry. It is given by the formula $$\Omega_d = \frac{2\pi^{d/2}}{\Gamma\left(\frac{d}{2}\right)},$$ where $\Gamma$ is the gamma function. When $d$ is an integer, the gamma function can be computed explicitly.[11] It follows that $$\Omega_d = \begin{cases} \dfrac{1}{\left(\frac{d}{2}-1\right)!} \, 2\pi^{d/2} & d \text{ even} \\[1ex] \dfrac{\left(\frac{1}{2}(d-1)\right)!}{(d-1)!} \, 2^d \pi^{\frac{1}{2}(d-1)} & d \text{ odd.} \end{cases}$$ This gives the expected results of $4\pi$ steradians for the 3D sphere bounded by a surface of area $4\pi r^2$ and $2\pi$ radians for the 2D circle bounded by a circumference of length $2\pi r$. It also gives the slightly less obvious 2 for the 1D case, in which the origin-centered 1D "sphere" is the interval $[-r, r]$, which is bounded by two limiting points. The counterpart to the vector formula in arbitrary dimension was derived by Aomoto[12][13] and independently by Ribando.[14] It expresses the solid angle as an infinite multivariate Taylor series: $$\Omega = \Omega_d \frac{\left|\det(V)\right|}{(4\pi)^{d/2}} \sum_{\vec{a} \in \mathbb{N}_0^{\binom{d}{2}}} \left[\frac{(-2)^{\sum_{i<j} a_{ij}}}{\prod_{i<j} a_{ij}!} \prod_i \Gamma\left(\frac{1 + \sum_{m \neq i} a_{im}}{2}\right)\right] \vec{\alpha}^{\vec{a}}.$$ Given $d$ unit vectors $\vec{v}_i$ defining the angle, let $V$ denote the matrix formed by combining them so the $i$th column is $\vec{v}_i$, and $\alpha_{ij} = \vec{v}_i \cdot \vec{v}_j = \alpha_{ji}$, $\alpha_{ii} = 1$.
The variables $\alpha_{ij}$, $1 \leq i < j \leq d$, form a multivariable $\vec{\alpha} = (\alpha_{12}, \dotsc, \alpha_{1d}, \alpha_{23}, \dotsc, \alpha_{d-1,d}) \in \mathbb{R}^{\binom{d}{2}}$. For a "congruent" integer multiexponent $\vec{a} = (a_{12}, \dotsc, a_{1d}, a_{23}, \dotsc, a_{d-1,d}) \in \mathbb{N}_0^{\binom{d}{2}}$, define $\vec{\alpha}^{\vec{a}} = \prod \alpha_{ij}^{a_{ij}}$. Note that here $\mathbb{N}_0$ denotes the non-negative integers, that is, the natural numbers beginning with 0. The notation $\alpha_{ji}$ for $j > i$ means the variable $\alpha_{ij}$, and similarly for the exponents $a_{ji}$. Hence, the term $\sum_{m \neq l} a_{lm}$ means the sum over all terms in $\vec{a}$ in which $l$ appears as either the first or second index. Where this series converges, it converges to the solid angle defined by the vectors.
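The total solid angle $\Omega_d = 2\pi^{d/2}/\Gamma(d/2)$ is easy to evaluate with the gamma function; a small Python sketch (naming is mine) reproduces the three cases mentioned above:

```python
import math

def total_solid_angle(d):
    """Surface 'area' of the unit (d-1)-sphere in d-dimensional space:
    Omega_d = 2 * pi^(d/2) / Gamma(d/2)."""
    return 2 * math.pi**(d / 2) / math.gamma(d / 2)

circle = total_solid_angle(2)   # 2*pi radians around a point in the plane
sphere = total_solid_angle(3)   # 4*pi steradians
segment = total_solid_angle(1)  # 2: the two endpoints of [-r, r]
```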
https://en.wikipedia.org/wiki/Solid_angle
The steradian (symbol: sr) or square radian[1][2] is the unit of solid angle in the International System of Units (SI). It is used in three-dimensional geometry, and is analogous to the radian, which quantifies planar angles. A solid angle in the form of a right circular cone can be projected onto a sphere, defining a spherical cap where the cone intersects the sphere. The magnitude of the solid angle expressed in steradians is defined as the quotient of the surface area of the spherical cap and the square of the sphere's radius. This is analogous to the way a plane angle projected onto a circle defines a circular arc on the circumference, whose length is proportional to the angle. Steradians can be used to measure a solid angle of any shape. The solid angle subtended is the same as that of a cone with the same projected area. A solid angle of one steradian subtends a cone aperture of approximately 1.144 radians or 65.54 degrees. In the SI, solid angle is considered to be a dimensionless quantity, the ratio of the area projected onto a surrounding sphere and the square of the sphere's radius. This is the number of square radians in the solid angle. This means that the SI steradian is the number of square radians in a solid angle equal to one square radian, which of course is the number one. It is useful to distinguish between dimensionless quantities of a different kind, such as the radian (in the SI, a ratio of quantities of dimension length), so the symbol sr is used. For example, radiant intensity can be measured in watts per steradian (W⋅sr⁻¹). The steradian was formerly an SI supplementary unit, but this category was abolished in 1995 and the steradian is now considered an SI derived unit. The name steradian is derived from the Greek στερεός stereos 'solid' + radian. A steradian can be defined as the solid angle subtended at the centre of a unit sphere by a unit area (of any shape) on its surface.
For a general sphere of radius $r$, any portion of its surface with area $A = r^2$ subtends one steradian at its centre.[3] A solid angle in the form of a circular cone is related to the area $A$ it cuts out of a sphere of radius $r$ by $\Omega = A/r^2$ sr. Because the surface area $A$ of a sphere is $4\pi r^2$, the definition implies that a sphere subtends $4\pi$ steradians (≈ 12.56637 sr) at its centre, or that a steradian subtends $1/(4\pi)$ ≈ 0.07958 of a sphere. By the same argument, the maximum solid angle that can be subtended at any point is $4\pi$ sr. The area of a spherical cap is $A = 2\pi r h$, where $h$ is the "height" of the cap. If $A = r^2$, then $\frac{h}{r} = \frac{1}{2\pi}$. From this, one can compute the cone aperture (a plane angle) $2\theta$ of the cross-section of a simple spherical cone whose solid angle equals one steradian, giving $\theta$ ≈ 0.572 rad = 32.77° and aperture $2\theta$ ≈ 1.144 rad = 65.54°. The solid angle of a spherical cone whose cross-section subtends the angle $2\theta$ is $\Omega = 2\pi(1 - \cos\theta)$. A steradian is also equal to $\frac{1}{4\pi}$ of a complete sphere (spat), to $\left(\frac{360°}{2\pi}\right)^2$ ≈ 3282.80635 square degrees, and to the spherical area of a polygon having an angle excess of 1 radian. Millisteradians (msr) and microsteradians (μsr) are occasionally used to describe light and particle beams.[4][5] Other multiples are rarely used.
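The quoted aperture of a one-steradian cone can be recovered by inverting $\Omega = 2\pi(1 - \cos\theta)$; a brief Python check (function name assumed, not from the article):

```python
import math

def cone_half_angle(omega):
    """Half-aperture theta of a spherical cone of solid angle omega,
    obtained by inverting omega = 2*pi*(1 - cos(theta))."""
    return math.acos(1 - omega / (2 * math.pi))

aperture = 2 * cone_half_angle(1.0)    # cone subtending exactly 1 sr
aperture_deg = math.degrees(aperture)  # ~65.54 degrees
```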
https://en.wikipedia.org/wiki/Steradian
In mathematics, the unit interval is the closed interval [0,1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology. In the literature, the term "unit interval" is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0,1], [0,1), and (0,1). However, the notation I is most commonly reserved for the closed interval [0,1]. The unit interval is a complete metric space, homeomorphic to the extended real number line. As a topological space, it is compact, contractible, path connected and locally path connected. The Hilbert cube is obtained by taking a topological product of countably many copies of the unit interval. In mathematical analysis, the unit interval is a one-dimensional analytical manifold whose boundary consists of the two points 0 and 1. Its standard orientation goes from 0 to 1. The unit interval is a totally ordered set and a complete lattice (every subset of the unit interval has a supremum and an infimum). The size or cardinality of a set is the number of elements it contains. The unit interval is a subset of the real numbers $\mathbb{R}$. However, it has the same size as the whole set: the cardinality of the continuum. Since the real numbers can be used to represent points along an infinitely long line, this implies that a line segment of length 1, which is a part of that line, has the same number of points as the whole line. Moreover, it has the same number of points as a square of area 1, as a cube of volume 1, and even as an unbounded n-dimensional Euclidean space $\mathbb{R}^n$ (see Space-filling curve). The number of elements (either real numbers or points) in all the above-mentioned sets is uncountable, as it is strictly greater than the number of natural numbers. The unit interval is a curve.
The open interval (0,1) is a subset of the positive real numbers and inherits an orientation from them. The orientation is reversed when the interval is entered from 1, such as in the integral $\int_1^x \frac{dt}{t}$ used to define the natural logarithm for $x$ in the interval, thus yielding negative values for the logarithm of such $x$. In fact, this integral is evaluated as a signed area, yielding negative area over the unit interval due to the reversed orientation there. The interval [−1,1], with length two, demarcated by the positive and negative units, occurs frequently, such as in the range of the trigonometric functions sine and cosine and the hyperbolic function tanh. This interval may be used for the domain of inverse functions. For instance, when θ is restricted to [−π/2, π/2], then $\sin\theta$ is in this interval and arcsine is defined there. Sometimes, the term "unit interval" is used to refer to objects that play a role in various branches of mathematics analogous to the role that [0,1] plays in homotopy theory. For example, in the theory of quivers, the (analogue of the) unit interval is the graph whose vertex set is $\{0,1\}$ and which contains a single edge e whose source is 0 and whose target is 1. One can then define a notion of homotopy between quiver homomorphisms analogous to the notion of homotopy between continuous maps. In logic, the unit interval [0,1] can be interpreted as a generalization of the Boolean domain {0,1}, in which case rather than only taking values 0 or 1, any value between and including 0 and 1 can be assumed. Algebraically, negation (NOT) is replaced with $1 - x$; conjunction (AND) is replaced with multiplication ($xy$); and disjunction (OR) is defined, per De Morgan's laws, as $1 - (1-x)(1-y)$. Interpreting these values as logical truth values yields a multi-valued logic, which forms the basis for fuzzy logic and probabilistic logic.
In these interpretations, a value is interpreted as the "degree" of truth – to what extent a proposition is true, or the probability that the proposition is true.
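A small Python sketch of these interval-valued connectives (function names are mine); on the Boolean corners {0, 1} they agree with classical NOT/AND/OR, while interior values behave like independent probabilities:

```python
def f_not(x):
    # Negation on [0, 1]: NOT(x) = 1 - x.
    return 1 - x

def f_and(x, y):
    # Conjunction on [0, 1]: AND(x, y) = x * y.
    return x * y

def f_or(x, y):
    # Disjunction via De Morgan: OR(x, y) = 1 - (1 - x)(1 - y).
    return 1 - (1 - x) * (1 - y)

corner = f_or(1, 0)     # like True OR False
blend = f_or(0.5, 0.5)  # "half-true or half-true"
```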
https://en.wikipedia.org/wiki/Unit_interval
Divine Proportions: Rational Trigonometry to Universal Geometry is a 2005 book by the mathematician Norman J. Wildberger on a proposed alternative approach to Euclidean geometry and trigonometry, called rational trigonometry. The book advocates replacing the usual basic quantities of trigonometry, Euclidean distance and angle measure, by squared distance and the square of the sine of the angle, respectively. This is logically equivalent to the standard development (as the replacement quantities can be expressed in terms of the standard ones and vice versa). The author claims his approach holds some advantages, such as avoiding the need for irrational numbers. The book was "essentially self-published"[1] by Wildberger through his publishing company Wild Egg. The formulas and theorems in the book are regarded as correct mathematics, but the claims about practical or pedagogical superiority are primarily promoted by Wildberger himself and have received mixed reviews. The main idea of Divine Proportions is to replace distances by the squared Euclidean distance, which Wildberger calls the quadrance, and to replace angle measures by the squares of their sines, which Wildberger calls the spread between two lines. Divine Proportions defines both of these concepts directly from the Cartesian coordinates of points that determine a line segment or a pair of crossing lines.
Defined in this way, they are rational functions of those coordinates, and can be calculated directly without the need to take the square roots or inverse trigonometric functions required when computing distances or angle measures.[1] For Wildberger, a finitist, this replacement has the purported advantage of avoiding the concepts of limits and actual infinity used in defining the real numbers, which Wildberger claims to be unfounded.[2][1] It also allows analogous concepts to be extended directly from the rational numbers to other number systems such as finite fields using the same formulas for quadrance and spread.[1] Additionally, this method avoids the ambiguity of the two supplementary angles formed by a pair of lines, as both angles have the same spread. This system is claimed to be more intuitive, and to extend more easily from two to three dimensions.[3] However, in exchange for these benefits, one loses the additivity of distances and angles: for instance, if a line segment is divided in two, its length is the sum of the lengths of the two pieces, but combining the quadrances of the pieces is more complicated and requires square roots.[1] Divine Proportions is divided into four parts. Part I presents an overview of the use of quadrance and spread to replace distance and angle, and makes the argument for their advantages. Part II formalizes the claims made in part I, and proves them rigorously.[1] Rather than defining lines as infinite sets of points, they are defined by their homogeneous coordinates, which may be used in formulas for testing the incidence of points and lines.
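To illustrate the point about rational computability (a sketch using the standard coordinate formulas for these quantities, not code from the book), quadrance and spread can be evaluated exactly over the rationals with Python's Fraction type, with no square roots or inverse trigonometric functions involved:

```python
from fractions import Fraction as F

def quadrance(p, q):
    """Quadrance: the squared Euclidean distance between points p and q."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

def spread(u, v):
    """Spread (sin^2 of the angle) between lines with direction vectors
    u and v: cross^2 / (|u|^2 * |v|^2), a rational function of the
    coordinates."""
    cross = u[0] * v[1] - u[1] * v[0]
    return F(cross**2, (u[0]**2 + u[1]**2) * (v[0]**2 + v[1]**2))

q = quadrance((F(0), F(0)), (F(1), F(2)))  # exactly 5, no square root
s_perp = spread((1, 0), (0, 1))            # perpendicular lines: spread 1
s_45 = spread((1, 0), (1, 1))              # lines at 45 degrees: spread 1/2
```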
Like the sine, the cosine and tangent are replaced with rational equivalents, called the "cross" and "twist", and Divine Proportions develops various analogues of trigonometric identities involving these quantities,[3] including versions of the Pythagorean theorem, law of sines and law of cosines.[4] Part III develops the geometry of triangles and conic sections using the tools developed in the two previous parts.[1] Well-known results such as Heron's formula for calculating the area of a triangle from its side lengths, or the inscribed angle theorem in the form that the angles subtended by a chord of a circle from other points on the circle are equal, are reformulated in terms of quadrance and spread, and thereby generalized to arbitrary fields of numbers.[3][5] Finally, Part IV considers practical applications in physics and surveying, and develops extensions to higher-dimensional Euclidean space and to polar coordinates.[1] Divine Proportions does not assume much in the way of mathematical background in its readers, but its many long formulas, frequent consideration of finite fields, and (after part I) emphasis on mathematical rigour are likely to be obstacles to a popular mathematics audience. Instead, it is mainly written for mathematics teachers and researchers. However, it may also be readable by mathematics students, and contains exercises making it possible to use as the basis for a mathematics course.[1][6] The feature of the book that was most positively received by reviewers was its work extending results in distance and angle geometry to finite fields.
Reviewer Laura Wiswell found this work impressive, and was charmed by the result that the smallest finite field containing a regular pentagon is $\mathbb{F}_{19}$.[1] Michael Henle calls the extension of triangle and conic section geometry to finite fields, in part III of the book, "an elegant theory of great generality",[4] and William Barker also writes approvingly of this aspect of the book, calling it "particularly novel" and possibly opening up new research directions.[6] Wiswell raises the question of how many of the detailed results presented without attribution in this work are actually novel.[1] In this light, Michael Henle notes that the use of squared Euclidean distance "has often been found convenient elsewhere";[4] for instance it is used in distance geometry, least squares statistics, and convex optimization. James Franklin points out that for spaces of three or more dimensions, modelled conventionally using linear algebra, the use of spread by Divine Proportions is not very different from standard methods involving dot products in place of trigonometric functions.[5] An advantage of Wildberger's methods noted by Henle is that, because they involve only simple algebra, the proofs are both easy to follow and easy for a computer to verify. However, he suggests that the book's claims of greater simplicity in its overall theory rest on a false comparison in which quadrance and spread are weighed not against the corresponding classical concepts of distances, angles, and sines, but against the much wider set of tools from classical trigonometry.
He also points out that, to a student with a scientific calculator, formulas that avoid square roots and trigonometric functions are a non-issue,[4] and Barker adds that the new formulas often involve a greater number of individual calculation steps.[6] Although multiple reviewers felt that a reduction in the amount of time needed to teach students trigonometry would be very welcome,[3][5][7] Paul Campbell is skeptical that these methods would actually speed learning.[7] Gerry Leversha keeps an open mind, writing that "It will be interesting to see some of the textbooks aimed at school pupils [that Wildberger] has promised to produce, and ... controlled experiments involving student guinea pigs."[3] However, these textbooks and experiments have not been published. Wiswell is unconvinced by the claim that conventional geometry has foundational flaws that these methods avoid.[1] While agreeing with Wiswell, Barker points out that there may be other mathematicians who share Wildberger's philosophical suspicions of the infinite, and that this work should be of great interest to them.[6] A final issue raised by multiple reviewers is inertia: supposing for the sake of argument that these methods are better, are they sufficiently better to make worthwhile the large individual effort of re-learning geometry and trigonometry in these terms, and the institutional effort of re-working the school curriculum to use them in place of classical geometry and trigonometry? Henle, Barker, and Leversha conclude that the book has not made its case for this,[3][4][6] but Sandra Arlinghaus sees this work as an opportunity for fields such as her mathematical geography "that have relatively little invested in traditional institutional rigidity" to demonstrate the promise of such a replacement.[8]
https://en.wikipedia.org/wiki/Twist_(rational_trigonometry)
The number τ (/ˈtaʊ, ˈtɔː, ˈtɒ/; spelled out as tau) is a mathematical constant that is the ratio of a circle's circumference to its radius. It is approximately equal to 6.28 and exactly equal to 2π. τ and π are both circle constants relating the circumference of a circle to its linear dimension: the radius in the case of τ; the diameter in the case of π. While π is used almost exclusively in mainstream mathematical education and practice, it has been proposed, most notably by Michael Hartl in 2010, that τ should be used instead. Hartl and other proponents argue that τ is the more natural circle constant and its use leads to conceptually simpler and more intuitive mathematical notation.[1] Critics have responded that the benefits of using τ over π are trivial and that given the ubiquity and historical significance of π a change is unlikely to occur.[2] The proposal did not initially gain widespread acceptance in the mathematical community, but awareness of τ has become more widespread,[3] having been added to several major programming languages and calculators. τ is commonly defined as the ratio of a circle's circumference $C$ to its radius $r$: $$\tau = \frac{C}{r}.$$ A circle is defined as a closed curve formed by the set of all points in a plane that are a given distance from a fixed point, where the given distance is called the radius. The distance around the circle is the circumference, and the ratio $\frac{C}{r}$ is constant regardless of the circle's size. Thus, τ denotes the fixed relationship between the circumference of any circle and the fundamental defining property of that circle, the radius. When radians are used as the unit of angular measure there are τ radians in one full turn of a circle, and the radian angle is aligned with the proportion of a full turn around the circle: $\frac{\tau}{8}$ rad is an eighth of a turn; $\frac{3\tau}{4}$ rad is three-quarters of a turn.
As τ is exactly equal to 2π, it shares many of the properties of π, including being both an irrational and transcendental number. The proposal to use the Greek letter τ as a circle constant representing 2π dates to Michael Hartl's 2010 publication, The Tau Manifesto,[a] although the symbol had been independently suggested earlier by Joseph Lindenburg (c. 1990), John Fisher (2004) and Peter Harremoës (2010).[5] Hartl offered two reasons for the choice of notation. First, τ is the number of radians in one turn, and both τ and turn begin with a /t/ sound. Second, τ visually resembles π, whose association with the circle constant is unavoidable. There had been a number of earlier proposals for a new circle constant equal to 2π, together with varying suggestions for its name and symbol. In 2001, Robert Palais of the University of Utah proposed that π was "wrong" as the fundamental circle constant, arguing instead that 2π was the proper value.[6] His proposal used a "π with three legs" symbol to denote the constant (set equal to 2π), and referred to angles as fractions of a "turn", so that a quarter of the new constant denoted a quarter turn. Palais stated that the word "turn" served as both the name of the new constant and a reference to the ordinary language meaning of turn.[7] In 2008, Robert P. Crease proposed defining a constant as the ratio of circumference to radius, an idea supported by John Horton Conway.
Crease used the Greek letter psi: $\psi = 2\pi$.[8] The same year, Thomas Colignatus proposed the uppercase Greek letter theta, Θ, to represent 2π due to its visual resemblance to a circle.[9] For a similar reason another proposal suggested the Phoenician and Hebrew letter teth, 𐤈 or ט (from which the letter theta was derived), due to its connection with wheels and circles in ancient cultures.[10][11] The symbol π was not originally defined as the ratio of circumference to diameter, and at times was used in representations of the 6.28... constant. Early works in circle geometry used the letter π to designate the perimeter (i.e., circumference) in different fractional representations of circle constants, and in 1697 David Gregory used π/ρ (pi over rho) to denote the perimeter divided by the radius (6.28...).[12][13] Subsequently π came to be used as a single symbol to represent the ratios in whole. Leonhard Euler initially used the single letter π to denote the constant 6.28... in his 1727 Essay Explaining the Properties of Air.[14][15] Euler would later use the letter π for 3.14... in his 1736 Mechanica[16] and 1748 Introductio in analysin infinitorum,[17] though defined as half the circumference of a circle of radius 1 rather than the ratio of circumference to diameter. Elsewhere in Mechanica, Euler instead used the letter π for one-fourth of the circumference of a unit circle, or 1.57... .[18][19] Usage of the letter π, sometimes for 3.14... and other times for 6.28..., became widespread, with the definition varying as late as 1761;[20] afterward, π was standardized as being equal to 3.14... .[21][22] Proponents argue that while use of τ in place of 2π does not change any of the underlying mathematics, it does lead to simpler and more intuitive notation in many areas.
Michael Hartl's Tau Manifesto[b] gives many examples of formulas that are asserted to be clearer where τ is used instead of π.[23][24][25] Hartl and Robert Palais[7] have argued that τ allows radian angles to be expressed more directly, and in a way that makes clear the link between radian measure and rotation around the unit circle. For instance, 3τ/4 rad can be easily interpreted as 3/4 of a turn around the unit circle, in contrast with the numerically equal 3π/2 rad, where the meaning could be obscured, particularly for children and students of mathematics. Critics have responded that a full rotation is not necessarily the correct or fundamental reference measure for angles, and two other possibilities, the right angle and the straight angle, each have historical precedent. Euclid used the right angle as the basic unit of angle, and David Butler has suggested that τ/4 = π/2 ≈ 1.57, which he denotes with the Greek letter η (eta), should be seen as the fundamental circle constant.[26][27] Hartl has argued that the periodic trigonometric functions are simplified using τ, as it aligns the function argument (radians) with the function period: sin θ repeats with period T = τ rad, reaches a maximum at T/4 = τ/4 rad and a minimum at 3T/4 = 3τ/4 rad. Critics have argued that the formula for the area of a circle is more complicated when restated as A = (1/2)τr². Hartl and others respond that the 1/2 factor is meaningful, arising from either integration or geometric proofs for the area of a circle as half the circumference times the radius. A common criticism of τ is that Euler's identity, e^(iπ) + 1 = 0, sometimes claimed to be "the most beautiful theorem in mathematics",[28] is made less elegant when rendered as e^(iτ/2) + 1 = 0.[29] Hartl has asserted that e^(iτ) = 1 (which he also called "Euler's identity") is more fundamental and meaningful.
John Conway noted[8] that Euler's identity is a specific case of the general formula for the nth roots of unity, e^(iτk/n) (k = 1, 2, ..., n), which he maintained is preferable and more economical than Euler's. The following table shows how various identities appear when τ = 2π is used instead of π.[30][6] For a more complete list, see List of formulae involving π. For example, the surface area of an n-ball, S_n(r) = 2πr V_{n−1}(r), becomes S_n(r) = τr V_{n−1}(r). τ has made numerous appearances in culture. It is celebrated annually on June 28, known as Tau Day.[31] Supporters of τ are called tauists.[25] τ has been covered in videos by Vi Hart,[32][33][34] Numberphile,[35][36][37] SciShow,[38] Steve Mould,[39][40][41] Khan Academy,[42] and 3Blue1Brown,[43][44] and it has appeared in the comics xkcd,[45][46] Saturday Morning Breakfast Cereal,[47][48][49] and Sally Forth.[50] The Massachusetts Institute of Technology usually announces admissions on March 14 at 6:28 p.m., which is on Pi Day at Tau Time.[51] Peter Harremoës has used τ in a mathematical research article which was granted the Editor's award of the year.[52] The following table documents various programming languages that have implemented the circle constant for converting between turns and radians. All of the languages below support the name "Tau" in some casing, but Processing also supports "TWO_PI" and Raku also supports the symbol "τ" for accessing the same value. The constant τ is made available in the Google calculator, the Desmos graphing calculator,[53] and the iPhone's Convert Angle option, which expresses the turn as τ.[54]
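Among the languages mentioned, Python exposes the constant as math.tau (since Python 3.6), defined as exactly 2·math.pi; a minimal check of the "fractions of a turn" reading:

```python
import math

# math.tau is defined as 2*pi, so radian angles read as fractions of a turn.
quarter_turn = math.tau / 4
print(quarter_turn == math.pi / 2)                      # → True

# 3/4 of a turn lands at the bottom of the unit circle.
print(math.isclose(math.sin(3 * math.tau / 4), -1.0))   # → True
```

Since math.pi is a double and doubling is exact in binary floating point, math.tau == 2 * math.pi holds exactly, not just approximately.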
https://en.wikipedia.org/wiki/Tau_(mathematics)
In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing propositional logic formulas as an exclusive-or (XOR) sum of conjunctions (AND terms), possibly together with the constant 1. Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM).[1] ANF is a canonical form, which means that two logically equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names; conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent. Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF. There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results. XOR (logical exclusive disjunction) is performed directly. NOT (logical negation) is XORing 1.[2] AND (logical conjunction) is distributed algebraically.[3] OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b)[4] (easier when both operands have purely true terms) or a ⊕ b ⊕ ab[5] (easier otherwise). Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF.
ANF is sometimes described in an equivalent way. There are only four functions with one argument: f(x) = 0, f(x) = 1, f(x) = x, and f(x) = 1 ⊕ x. To represent a function with multiple arguments one can use the following equality: f(x₁, x₂, ..., xₙ) = g(x₂, ..., xₙ) ⊕ x₁·h(x₂, ..., xₙ), where g(x₂, ..., xₙ) = f(0, x₂, ..., xₙ) and h(x₂, ..., xₙ) = f(0, x₂, ..., xₙ) ⊕ f(1, x₂, ..., xₙ). Since both g and h have fewer arguments than f, it follows that applying this process recursively will finish with functions of one variable. For example, applying it to f(x, y) = x ∨ y (logical or) yields the ANF 1 ⊕ (1 ⊕ x)(1 ⊕ y) = x ⊕ y ⊕ xy.
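The Boolean operations on ANF described above can be sketched in code. This is our own illustrative representation (not from the article): a formula is a set of terms, each term a frozenset of variable names, with the empty frozenset standing for the constant 1.

```python
from itertools import product

def xor(f, g):
    # XOR is performed directly: identical terms cancel in pairs,
    # which is exactly set symmetric difference.
    return f ^ g

def anf_not(f):
    # NOT is XOR with the constant 1 (the empty term).
    return xor(f, {frozenset()})

def anf_and(f, g):
    # AND is distributed algebraically over the XOR terms;
    # duplicate products cancel via symmetric difference.
    out = set()
    for t1, t2 in product(f, g):
        out ^= {t1 | t2}
    return out

def anf_or(f, g):
    # OR via a XOR b XOR ab.
    return xor(xor(f, g), anf_and(f, g))

x, y = {frozenset({'x'})}, {frozenset({'y'})}
# x OR y in ANF is x + y + xy:
print(anf_or(x, y) == {frozenset({'x'}), frozenset({'y'}), frozenset({'x', 'y'})})
```

The canonical-form claim from the text can be observed directly: a ∨ ¬a reduces to the same object as the constant 1, namely {frozenset()}.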
https://en.wikipedia.org/wiki/Ring_sum_normal_form
Rod calculus, or rod calculation, was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty, before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in the Song dynasty and Yuan dynasty, culminating in the invention of polynomial equations of up to four unknowns in the work of Zhu Shijie. The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm to 15 cm in length and 2 mm to 4 mm in diameter, sometimes from animal bones, or ivory and jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without a grid, the floor, or sand. In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal-bone counting rods stored in a silk pouch from a tomb in Qian Yang county in Shanxi province, dated to the first half of the Han dynasty (206 BC – 8 AD).[citation needed] In 1975 a bundle of bamboo counting rods was unearthed.[citation needed] The use of counting rods for rod calculus flourished in the Warring States period. Although no archaeological artefacts have been found earlier than the Western Han dynasty (the first half of the Han dynasty), archaeologists did unearth "software" artefacts of rod calculus dating back to the Warring States; since the rod calculus software must have gone along with rod calculus hardware, there is no doubt that rod calculus was already flourishing during the Warring States period more than 2,200 years ago. The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike.
Rod numerals are the only numeric system that uses different placement combinations of a single symbol to convey any number or fraction in the decimal system. For numbers in the units place, every vertical rod represents 1. Two vertical rods represent 2, and so on, up to 5 vertical rods, which represent 5. For numbers between 6 and 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row the same numbers in horizontal form. For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left, representing 100 times that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a sum of 231. When doing calculations, usually there was no grid on the surface. If the rod numerals two, three, and one were placed consecutively in vertical form, the result could be mistaken for 51 or 24, as shown in the second and third rows of the adjacent image. To avoid confusion, numbers in consecutive places are placed in alternating vertical and horizontal forms, with the units place in vertical form,[1] as shown in the bottom row on the right. In rod numerals, zeroes are represented by a space, which serves both as a number and a place-holder value. Unlike in Hindu–Arabic numerals, there is no specific symbol to represent zero.
Before the introduction of a written zero, in addition to a space to indicate no units, the character in the subsequent unit column would be rotated by 90° to reduce the ambiguity of a single zero.[2] For example, 107 (𝍠 𝍧) and 17 (𝍩𝍧) would be distinguished by rotation in addition to the space, though multiple zero units could lead to ambiguity, e.g. 1007 (𝍩 𝍧) and 10007 (𝍠 𝍧). In the adjacent image, the number zero is merely represented with a space. Song mathematicians used red to represent positive numbers and black for negative numbers. Another way is to add a slash to the last place to show that the number is negative.[3] The Mathematical Treatise of Sunzi used decimal fraction metrology. The unit of length was 1 chi, with 1 chi = 10 cun, 1 cun = 10 fen, 1 fen = 10 li, 1 li = 10 hao, 1 hao = 10 shi, 1 shi = 10 hu. The quantity 1 chi 2 cun 3 fen 4 li 5 hao 6 shi 7 hu is laid out on the counting board with the chi position marked as the unit of measurement. The Southern Song dynasty mathematician Qin Jiushao extended the use of decimal fractions beyond metrology. In his book Mathematical Treatise in Nine Sections, he formally expressed 1.1446154 days, marking the unit with the word "日" (day) underneath it.[4] Rod calculus works on the principle of addition. Unlike Arabic numerals, digits represented by counting rods have additive properties. The process of addition involves mechanically moving the rods without the need of memorising an addition table. This is the biggest difference from Arabic numerals, as one cannot mechanically put 1 and 2 together to form 3, or 2 and 3 together to form 5. The adjacent image presents the steps in adding 3748 to 289. The rods in the augend change throughout the addition, while the rods in the addend at the bottom "disappear". In situations in which no borrowing is needed, one only needs to take the number of rods in the subtrahend from the minuend; the result of the calculation is the difference. The adjacent image shows the steps in subtracting 23 from 54.
In situations in which borrowing is needed, such as 4231 − 789, one needs to use a more complicated procedure. The steps for this example are shown on the left. Sunzi Suanjing described in detail the algorithm of multiplication; on the left are the steps to calculate 38 × 76. The animation on the left shows the steps for calculating 309/7 = 44 1/7. The Sunzi algorithm for division was transmitted in toto by al-Khwarizmi to the Islamic world from Indian sources in 825 AD. Al-Khwarizmi's book was translated into Latin in the 13th century, and the Sunzi division algorithm later evolved into galley division in Europe. The division algorithm in Abu'l-Hasan al-Uqlidisi's 925 AD book Kitab al-Fusul fi al-Hisab al-Hindi and in the 11th-century Kushyar ibn Labban's Principles of Hindu Reckoning was identical to Sunzi's division algorithm. If there is a remainder in a place-value decimal rod calculus division, both the remainder and the divisor must be left in place, one on top of the other. In Liu Hui's notes to Jiuzhang suanshu (2nd century BCE), the number on top is called "shi" (实), while the one at the bottom is called "fa" (法). In Sunzi Suanjing, the number on top is called "zi" (子) or "fenzi" (lit., son of the fraction), and the one on the bottom is called "mu" (母) or "fenmu" (lit., mother of the fraction). Fenzi and fenmu are also the modern Chinese names for numerator and denominator, respectively. As shown on the right, 1 is the numerator remainder and 7 is the denominator divisor, forming the fraction 1/7. The quotient of the division 309/7 is 44 + 1/7. Liu Hui used many calculations with fractions in Haidao Suanjing. This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar in between, was transmitted to the Arab world in an 825 AD book by al-Khwarizmi via India, and was in use by the 10th-century Abu'l-Hasan al-Uqlidisi and in the 15th century in Jamshīd al-Kāshī's work "Arithmetic Key".
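The remainder layout described above corresponds directly to modern mixed-number arithmetic. A small Python check of the 309/7 example (our sketch, not the counting-board procedure):

```python
from fractions import Fraction

# 309 divided by 7 leaves quotient 44 and remainder 1 on the board,
# read as the mixed number 44 + 1/7.
quotient, remainder = divmod(309, 7)
print(quotient, remainder)                       # → 44 1
print(Fraction(309, 7) == 44 + Fraction(1, 7))   # → True
```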
Worked examples include 1/3 + 2/5, 8/9 − 1/5, and 3 1/3 × 5 2/5. The algorithm for finding the highest common factor of two numbers and the reduction of fractions was laid out in Jiuzhang suanshu. The highest common factor is found by successive division with remainders until the last two remainders are identical. The animation on the right illustrates the algorithm for finding the highest common factor of 32,450,625/59,056,400 and the reduction of the fraction. In this case the HCF is 25; dividing the numerator and denominator by 25 gives the reduced fraction 1,298,025/2,362,256. The calendarist and mathematician He Chengtian (何承天) used a fraction interpolation method, called "harmonisation of the divisor of the day" (调日法), to obtain a better approximate value than the old one by iteratively adding the numerators and denominators of a "weaker" fraction and a "stronger" fraction.[5] Zu Chongzhi's legendary π = 355/113 could be obtained with He Chengtian's method.[6] Chapter Eight, Rectangular Arrays, of Jiuzhang suanshu provided an algorithm for solving systems of linear equations by the method of elimination:[7] Problem 8-1: Suppose we have 3 bundles of top-quality cereal, 2 bundles of medium-quality cereal, and 1 bundle of low-quality cereal, with an accumulated weight of 39 dou; 2, 3 and 1 bundles of the respective cereals amounting to 34 dou; and 1, 2 and 3 bundles of the respective cereals totaling 26 dou. Find the quantities of top-, medium-, and low-quality cereal. In algebra, this problem can be expressed as three simultaneous equations with three unknowns. The problem was solved in Jiuzhang suanshu with counting rods laid out on a counting board in a tabular format similar to a 3×4 matrix. By the algorithm, the amount of one bundle of low-quality cereal = 99/36 = 2 3/4, from which the amounts of one bundle of top- and medium-quality cereal can be found easily. The algorithm for the extraction of the square root was described in Jiuzhang suanshu, and with minor differences in terminology in Sunzi Suanjing.
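The highest-common-factor reduction above can be checked with Python's standard library; successive division with remainders is the Euclidean algorithm, and fractions.Fraction performs the reduction automatically (the numbers are the article's own example):

```python
from fractions import Fraction

# Highest common factor by successive division with remainders
# (the Euclidean algorithm, in modern form).
def hcf(a, b):
    while b:
        a, b = b, a % b
    return a

print(hcf(32450625, 59056400))        # → 25, as in the article
print(Fraction(32450625, 59056400))   # → 1298025/2362256
```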
The animation shows the algorithm for rod calculus extraction of an approximation of the square root, √234567 ≈ 484 311/968, from chapter 2, problem 19 of Sunzi Suanjing. The North Song dynasty mathematician Jia Xian developed an additive-multiplicative algorithm for square root extraction, in which he replaced the traditional "doubling" of "fang fa" by adding the shang digit to the fang fa digit, with the same effect. Jiuzhang suanshu vol. IV, "shaoguang", provided an algorithm for the extraction of the cube root. 〔一九〕今有積一百八十六萬八百六十七尺。問為立方幾何?答曰:一百二十三尺。 Problem 19: We have a volume of 1860867 cubic chi; what is the length of a side? Answer: 123 chi. Jia Xian invented a method similar to a simplified form of the Horner scheme for the extraction of cube roots; the animation at right shows his algorithm for solving problem 19 in Jiuzhang suanshu vol. 4. Jia Xian also invented a Horner scheme for solving simple 4th-order equations. The South Song dynasty mathematician Qin Jiushao improved Jia Xian's Horner method to solve polynomial equations up to the 10th order; the equation to be solved was arranged bottom-up with counting rods on the counting board in tabular form. The Yuan dynasty mathematician Li Zhi developed rod calculus into Tian yuan shu. An example from Li Zhi's Ceyuan haijing vol. II, problem 14, is the equation of one unknown −x² − 680x + 96000 = 0. The mathematician Zhu Shijie further developed rod calculus to include polynomial equations of two to four unknowns.
For example, polynomials of three unknowns: Equation 1: −y − z − y²x − x + xyz = 0; Equation 2: −y − z + x − x² + xz = 0; Equation 3: y² − z² + x² = 0. After successive elimination of two unknowns, the polynomial equations in three unknowns were reduced to a polynomial equation in one unknown: x⁴ − 6x³ + 4x² + 6x − 5 = 0, which is solved by x = 5. The other three roots, 1 (a double root) and −1, are ignored.
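The reduced quartic can be checked by Horner's scheme, the evaluation order that Jia Xian's and Qin Jiushao's counting-board layouts mechanise. This is a modern sketch, not the rod procedure itself:

```python
# Evaluate a polynomial by Horner's scheme: repeatedly
# multiply the accumulator by x and add the next coefficient.
def horner(coeffs, x):
    acc = 0
    for c in coeffs:
        acc = acc * x + c
    return acc

# x^4 - 6x^3 + 4x^2 + 6x - 5, coefficients from highest degree down.
coeffs = [1, -6, 4, 6, -5]
print(horner(coeffs, 5))    # → 0, confirming x = 5 is a root
print(horner(coeffs, 1))    # → 0, the double root
print(horner(coeffs, -1))   # → 0, the remaining root
```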
https://en.wikipedia.org/wiki/Rod_calculus#Division
In mathematics, division by two, or halving, has also been called mediation or dimidiation.[1] The treatment of this as an operation distinct from multiplication and division by other numbers goes back to the ancient Egyptians, whose multiplication algorithm used division by two as one of its fundamental steps.[2] Some mathematicians as late as the sixteenth century continued to view halving as a separate operation,[3][4] and it often continues to be treated separately in modern computer programming.[5] Performing this operation is simple in decimal arithmetic, in the binary numeral system used in computer programming, and in other even-numbered bases. To divide an odd number N by 2, one can use ((N − 1) ÷ 2) + 0.5. For example, if N = 7, then ((7 − 1) ÷ 2) + 0.5 = 3.5, so 7 ÷ 2 = 3.5. In binary arithmetic, division by two can be performed by a bit shift operation that shifts the number one place to the right. This is a form of strength reduction optimization. For example, 1101001 in binary (the decimal number 105), shifted one place to the right, is 110100 (the decimal number 52): the lowest-order bit, a 1, is removed. Similarly, division by any power of two 2^k may be performed by right-shifting k positions. Because bit shifts are often much faster than division, replacing a division by a shift in this way can be a helpful step in program optimization.[5] However, for the sake of software portability and readability, it is often best to write programs using the division operation and trust the compiler to perform this replacement.[6] An example from Common Lisp: The above statements, however, are not always true when dealing with the division of signed binary numbers. Shifting right by 1 bit divides by two, always rounding down. However, in some languages, division of signed binary numbers rounds towards 0 (which, if the result is negative, means it rounds up). For example, Java is one such language: in Java, -3 / 2 evaluates to -1, whereas -3 >> 1 evaluates to -2.
So in this case, the compiler cannot optimize division by two by replacing it with a bit shift when the dividend could possibly be negative. In binary floating-point arithmetic, division by two can be performed by decreasing the exponent by one (as long as the result is not a subnormal number). Many programming languages provide functions that can be used to divide a floating-point number by a power of two. For example, the Java programming language provides the method java.lang.Math.scalb for scaling by a power of two,[7] and the C programming language provides the function ldexp for the same purpose.[8] The following algorithm is for decimal. However, it can be used as a model to construct an algorithm for taking half of any number N in any even base. Example: 1738/2 = ? Write 01738. We will now work on finding the result. Result: 0869. From the example one can see that 0 is even. If the last digit of N is odd, one should add 0.5 to the result.
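Python's integer division happens to floor (round towards negative infinity), like the arithmetic right shift, so there the shift replacement is safe even for negative dividends; this small check (ours, not from the article) contrasts that with the Java behaviour quoted above:

```python
# In Python, both // and >> round towards negative infinity,
# so n // 2 == n >> 1 for every integer n, including negatives.
print(105 >> 1)            # → 52 (binary 1101001 -> 110100)
print(-3 // 2, -3 >> 1)    # → -2 -2 (Java's -3 / 2 would be -1)
print((-3 // 2) == (-3 >> 1))   # → True
```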
https://en.wikipedia.org/wiki/Division_by_two
In arithmetic, the galley method, also known as the batello or scratch method, was the most widely used method of division prior to 1600. The names galea and batello refer to a boat, which the outline of the work was thought to resemble. An earlier version of this method was used as early as 825 by Al-Khwarizmi. The galley method is thought to be of Arab origin and is most effective when used on a sand abacus. However, Lam Lay Yong's research pointed out that the galley method of division originated in the 1st century AD in ancient China.[1] The galley method writes fewer figures than long division, and results in interesting shapes and pictures as it expands both above and below the initial lines. It was the preferred method of division for seventeen centuries, far longer than long division's four centuries. Examples of the galley method appear in the 1702 British-American cyphering book written by Thomas Prust (or Priest).[2] Set up the problem by writing the dividend and then a bar; the quotient will be written after the bar. Then multiply each digit of the divisor by the new digit of the quotient and subtract the result from the left-hand segment of the dividend. Where the subtrahend and the dividend segment differ, cross out the dividend digit and write, if necessary, the difference (remainder) in the next vertical empty space. Cross out the divisor digit used. The above is called the cross-out version and is the most common. An erasure version exists for situations where erasure is acceptable and there is no need to keep track of the intermediate steps; this is the method used with a sand abacus. Finally, there is a printers' method[citation needed] that uses neither erasure nor cross-outs. Only the top digit in each column of the dividend is active, with a zero used to denote an entirely inactive column.
Galley division was the favorite method of division among arithmeticians through the 18th century, and it is thought that it fell out of use due to the lack of cancelled types in printing. It is still taught in the Moorish schools of North Africa and other parts of the Middle East. Lam Lay Yong, a mathematics professor at the National University of Singapore, traced the origin of the galley method to the Sunzi Suanjing, written about 400 AD. The division described by Al-Khwarizmi in 825 was identical to the Sunzi algorithm for division.[3]
https://en.wikipedia.org/wiki/Galley_division
In mathematics and computer programming, the order of operations is a collection of rules that reflect conventions about which operations to perform first in order to evaluate a given mathematical expression. These rules are formalized with a ranking of the operations. The rank of an operation is called its precedence, and an operation with a higher precedence is performed before operations with lower precedence. Calculators generally perform operations with the same precedence from left to right,[1] but some programming languages and calculators adopt different conventions. For example, multiplication is granted a higher precedence than addition, and it has been this way since the introduction of modern algebraic notation.[2][3] Thus, in the expression 1 + 2 × 3, the multiplication is performed before addition, and the expression has the value 1 + (2 × 3) = 7, and not (1 + 2) × 3 = 9. When exponents were introduced in the 16th and 17th centuries, they were given precedence over both addition and multiplication and placed as a superscript to the right of their base.[2] Thus 3 + 5² = 28 and 3 × 5² = 75. These conventions exist to avoid notational ambiguity while allowing notation to remain brief.[4] Where it is desired to override the precedence conventions, or even simply to emphasize them, parentheses ( ) can be used. For example, (2 + 3) × 4 = 20 forces addition to precede multiplication, while (3 + 5)² = 64 forces addition to precede exponentiation. If multiple pairs of parentheses are required in a mathematical expression (such as in the case of nested parentheses), the parentheses may be replaced by other types of brackets to avoid confusion, as in [2 × (3 + 4)] − 5 = 9. These rules are meaningful only when the usual notation (called infix notation) is used. When functional or Polish notation is used for all operations, the order of operations results from the notation itself.
The order of operations, that is, the order in which the operations in an expression are usually performed, results from a convention adopted throughout mathematics, science, technology and many computer programming languages. It is summarized as:[2][5] (1) parentheses; (2) exponentiation and roots; (3) multiplication and division; (4) addition and subtraction. This means that to evaluate an expression, one first evaluates any sub-expression inside parentheses, working inside to outside if there is more than one set. Whether inside parentheses or not, the operation that is higher in the above list should be applied first. Operations of the same precedence are conventionally evaluated from left to right. If each division is replaced with multiplication by the reciprocal (multiplicative inverse), then the associative and commutative laws of multiplication allow the factors in each term to be multiplied together in any order. Sometimes multiplication and division are given equal precedence, or sometimes multiplication is given higher precedence than division; see § Mixed division and multiplication below. If each subtraction is replaced with addition of the opposite (additive inverse), then the associative and commutative laws of addition allow terms to be added in any order. The radical symbol √ is traditionally extended by a bar (called a vinculum) over the radicand, which avoids the need for parentheses around the radicand. Other functions use parentheses around the input to avoid ambiguity.[6][7][a] The parentheses can be omitted if the input is a single numerical variable or constant,[2] as in the case of sin x = sin(x) and sin π = sin(π).[a] Traditionally this convention extends to monomials; thus, sin 3x = sin(3x) and even sin (1/2)xy = sin((1/2)xy), but sin x + y = sin(x) + y, because x + y is not a monomial. However, this convention is not universally understood, and some authors prefer explicit parentheses.[b] Some calculators and programming languages require parentheses around function inputs, some do not.
Parentheses and alternate symbols of grouping can be used to override the usual order of operations or to make the intended order explicit, and grouped symbols can be treated as a single expression.[2] Multiplication comes before addition; parenthetical subexpressions are evaluated first; exponentiation comes before multiplication, and multiplication before subtraction. When an expression is written as a superscript, the superscript is considered to be grouped by its position above its base. The operand of a root symbol is determined by the overbar. A horizontal fractional line forms two grouped subexpressions, one above divided by the other below. Parentheses can be nested, and should be evaluated from the inside outward. For legibility, outer parentheses can be made larger than inner parentheses. Alternately, other grouping symbols, such as curly braces { } or square brackets [ ], are sometimes used along with parentheses ( ). There are differing conventions concerning the unary operation '−' (usually pronounced "minus"). In written or printed mathematics, the expression −3² is interpreted to mean −(3²) = −9.[2][8] In some applications and programming languages, notably Microsoft Excel, PlanMaker (and other spreadsheet applications) and the programming language bc, unary operations have a higher priority than binary operations; that is, the unary minus has higher precedence than exponentiation, so in those languages −3² will be interpreted as (−3)² = 9.[9] This does not apply to the binary minus operation '−'; for example in Microsoft Excel, while the formulas =-2^2, =(-2)^2 and =0+-2^2 return 4, the formulas =0-2^2 and =-(2^2) return −4. There is no universal convention for interpreting an expression containing both division denoted by '÷' and multiplication denoted by '×'.
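Python happens to follow the written-mathematics convention for unary minus described above (unlike Excel's formula language): exponentiation binds tighter than the leading minus. A quick check:

```python
# ** binds tighter than unary minus in Python, matching the
# written-math reading -(3**2) rather than Excel's (-3)**2.
print(-3 ** 2)      # → -9
print((-3) ** 2)    # → 9
```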
Proposed conventions include assigning the operations equal precedence and evaluating them from left to right, or equivalently treating division as multiplication by the reciprocal and then evaluating in any order;[10] evaluating all multiplications first, followed by divisions from left to right; or eschewing such expressions and instead always disambiguating them with explicit parentheses.[11] Beyond primary education, the symbol '÷' for division is seldom used, being replaced by algebraic fractions,[12] typically written vertically with the numerator stacked above the denominator, which makes grouping explicit and unambiguous, but sometimes written inline using the slash or solidus symbol '/'.[13] Multiplication denoted by juxtaposition (also known as implied multiplication) creates a visual unit and is often given higher precedence than most other operations. In academic literature, when inline fractions are combined with implied multiplication without explicit parentheses, the multiplication is conventionally interpreted as having higher precedence than division, so that e.g. 1/2n is interpreted to mean 1/(2·n) rather than (1/2)·n.[2][10][14][15] For instance, the manuscript submission instructions for the Physical Review journals directly state that multiplication has precedence over division,[16] and this is also the convention observed in physics textbooks such as the Course of Theoretical Physics by Landau and Lifshitz[c] and mathematics textbooks such as Concrete Mathematics by Graham, Knuth, and Patashnik.[17] However, some authors recommend against expressions such as a/bc, preferring the explicit use of parentheses: a/(bc).[3] More complicated cases are more ambiguous. For instance, the notation 1/2π(a + b) could plausibly mean either 1/[2π·(a + b)] or [1/(2π)]·(a + b).[18] Sometimes interpretation depends on context.
The Physical Review submission instructions recommend against expressions of the form a/b/c; the more explicit expressions (a/b)/c or a/(b/c) are unambiguous.[16] This ambiguity has been the subject of Internet memes such as "8 ÷ 2(2 + 2)", for which there are two conflicting interpretations: 8 ÷ [2 · (2 + 2)] = 1 and (8 ÷ 2) · (2 + 2) = 16.[15][19] Mathematics education researcher Hung-Hsi Wu points out that "one never gets a computation of this type in real life", and calls such contrived examples "a kind of Gotcha! parlor game designed to trap an unsuspecting person by phrasing it in terms of a set of unreasonably convoluted rules".[12] If exponentiation is indicated by stacked symbols using superscript notation, the usual rule is to work from the top down,[2][7] so a^b^c is interpreted as a^(b^c), which typically is not equal to (a^b)^c. This convention is useful because there is a property of exponentiation that (a^b)^c = a^(bc), so it is unnecessary to use serial exponentiation for that case. However, when exponentiation is represented by an explicit symbol such as a caret (^) or arrow (↑), there is no common standard. For example, Microsoft Excel and the computation programming language MATLAB evaluate a^b^c as (a^b)^c, but Google Search and Wolfram Alpha evaluate it as a^(b^c). Thus 4^3^2 is evaluated to 4,096 in the first case and to 262,144 in the second case.
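Python's ** operator follows the top-down (right-associative) convention, so it reproduces the Google/Wolfram reading of 4^3^2; both groupings can be checked directly:

```python
# ** is right-associative in Python: 4**3**2 parses as 4**(3**2).
print(4 ** 3 ** 2)     # → 262144, the a^(b^c) reading
print((4 ** 3) ** 2)   # → 4096, the Excel/MATLAB reading
```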
Mnemonic acronyms are often taught in primary schools to help students remember the order of operations.[20][21] The acronym PEMDAS, which stands for Parentheses, Exponents, Multiplication/Division, Addition/Subtraction,[22] is common in the United States[23] and France.[24] Sometimes the letters are expanded into the words of a mnemonic sentence such as "Please Excuse My Dear Aunt Sally".[25] The United Kingdom and other Commonwealth countries may use BODMAS (or sometimes BOMDAS), standing for Brackets, Of, Division/Multiplication, Addition/Subtraction, with "of" meaning fraction multiplication.[26][27] Sometimes the O is instead expanded as Order, meaning exponent or root,[27][28] or replaced by I for Indices in the alternative mnemonic BIDMAS.[27][29] In Canada and New Zealand BEDMAS is common.[30] In Germany, the convention is simply taught as Punktrechnung vor Strichrechnung, "dot operations before line operations", referring to the graphical shapes of the taught operator signs U+00B7 · MIDDLE DOT (multiplication) and U+2236 ∶ RATIO (division), versus U+002B + PLUS SIGN (addition) and U+2212 − MINUS SIGN (subtraction). These mnemonics may be misleading when written this way.[25] For example, misinterpreting any of the above rules to mean "addition first, subtraction afterward" would incorrectly evaluate the expression[25] a − b + c as a − (b + c), while the correct evaluation is (a − b) + c. These values are different when c ≠ 0.
Mnemonic acronyms have been criticized for not developing a conceptual understanding of the order of operations and for not addressing student questions about its purpose or flexibility.[31][32] Students learning the order of operations via mnemonic acronyms routinely make mistakes,[33] as do some pre-service teachers.[34] Even when students correctly learn the acronym, a disproportionate focus on memorization of trivia crowds out substantive mathematical content.[12] The acronym's procedural application does not match experts' intuitive understanding of mathematical notation: mathematical notation indicates groupings in ways other than parentheses or brackets, and a mathematical expression is a tree-like hierarchy rather than a linearly "ordered" structure; furthermore, there is no single order by which mathematical expressions must be simplified or evaluated and no universal canonical simplification for any particular expression, and experts fluently apply valid transformations and substitutions in whatever order is convenient, so learning a rigid procedure can lead students to a misleading and limiting understanding of mathematical notation.[35] Different calculators follow different orders of operations.[2] Many simple calculators without a stack implement chain input, working in button-press order without any priority given to different operations, and so give a different result from that given by more sophisticated calculators. For example, on a simple calculator, typing 1 + 2 × 3 = yields 9, while a more sophisticated calculator will use the standard priority, so typing 1 + 2 × 3 = yields 7. Calculators may associate exponents to the left or to the right. For example, the expression a^b^c is interpreted as a^(b^c) on the TI-92 and the TI-30XS MultiView in "Mathprint mode", whereas it is interpreted as (a^b)^c on the TI-30XII and the TI-30XS MultiView in "Classic mode".
An expression like 1/2x is interpreted as 1/(2x) by the TI-82,[3] as well as by many modern Casio calculators[36] (configurable on some, like the fx-9750GIII), but as (1/2)x by the TI-83 and every other TI calculator released since 1996,[37][3] as well as by all Hewlett-Packard calculators with algebraic notation. While the first interpretation may be expected by some users due to the nature of implied multiplication,[38] the latter is more in line with the rule that multiplication and division are of equal precedence.[3] When the user is unsure how a calculator will interpret an expression, parentheses can be used to remove the ambiguity.[3] The order of operations arose from the adaptation of infix notation in standard mathematical notation, which can be notationally ambiguous without such conventions, as opposed to postfix notation or prefix notation, which need no order of operations.[39][40] Hence, calculators utilizing reverse Polish notation (RPN), which use a stack to enter expressions in the correct order of precedence, do not need parentheses or any possibly model-specific order of execution.[25][22] Most programming languages use precedence levels that conform to the order commonly used in mathematics,[41] though others, such as APL, Smalltalk, Occam and Mary, have no operator precedence rules (in APL, evaluation is strictly right to left; in Smalltalk, it is strictly left to right). Furthermore, because many operators are not associative, the order within any single level is usually defined by grouping left to right, so that 16/4/4 is interpreted as (16/4)/4 = 1 rather than 16/(4/4) = 16; such operators are called "left associative". Exceptions exist; for example, languages with operators corresponding to the cons operation on lists usually make them group right to left ("right associative"), e.g. in Haskell, 1:2:3:4:[] == 1:(2:(3:(4:[]))) == [1,2,3,4].
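The point about postfix notation needing no precedence rules can be illustrated with a small evaluator; this is a sketch of my own (function name and token format are illustrative), not any particular calculator's implementation:

```python
# Minimal reverse Polish notation (RPN) evaluator: operands and operators
# arrive in an order that already encodes the grouping, so no precedence
# table or parentheses are needed.
def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda x, y: x + y,
        "-": lambda x, y: x - y,
        "*": lambda x, y: x * y,
        "/": lambda x, y: x / y,
    }
    for tok in tokens:
        if tok in ops:
            y = stack.pop()   # second operand is on top
            x = stack.pop()
            stack.append(ops[tok](x, y))
        else:
            stack.append(float(tok))
    return stack[0]

# Left-associative reading (16/4)/4 and the other grouping 16/(4/4)
# are simply two different token orders:
print(eval_rpn("16 4 / 4 /".split()))   # 1.0
print(eval_rpn("16 4 4 / /".split()))   # 16.0
```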
Dennis Ritchie, creator of the C language, said of the precedence in C (shared by programming languages that borrow those rules from C, for example C++, Perl and PHP) that it would have been preferable to move the bitwise operators above the comparison operators.[42] Many programmers have become accustomed to this order, but more recent popular languages like Python[43] and Ruby[44] do have this order reversed. The relative precedence levels of operators found in many C-style languages are as follows: Examples: (In Python, Ruby, PARI/GP and other popular languages, A & B == C is interpreted as (A & B) == C.) Source-to-source compilers that compile to multiple languages need to explicitly deal with the issue of different orders of operations across languages. Haxe, for example, standardizes the order and enforces it by inserting brackets where appropriate. The accuracy of software developers' knowledge about binary operator precedence has been found to closely follow the operators' frequency of occurrence in source code.[46] The order of operations emerged progressively over centuries. The rule that multiplication has precedence over addition was incorporated into the development of algebraic notation in the 1600s, since the distributive property implies this as a natural hierarchy. As recently as the 1920s, the historian of mathematics Florian Cajori identified disagreement about whether multiplication should have precedence over division, or whether they should be treated equally. The term "order of operations" and the "PEMDAS/BEDMAS" mnemonics were formalized only in the late 19th or early 20th century, as demand for standardized textbooks grew.
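The A & B == C example above can be checked in Python, where bitwise & binds tighter than == (values chosen for illustration):

```python
# In Python, & has higher precedence than ==, so A & B == C groups
# as (A & B) == C -- the order Ritchie said he would have preferred for C.
A, B, C = 0b1100, 0b1010, 0b1000

print(A & B == C)       # True: parsed as (A & B) == C, and 0b1100 & 0b1010 == 0b1000
# Under C's grammar the same source text groups as A & (B == C).
# Here B == C is False (0), so the C-style grouping yields 0:
print(A & (B == C))     # 0
```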
Ambiguity about issues such as whether implicit multiplication takes precedence over explicit multiplication and division in expressions such as a/2b, which could be interpreted as a/(2b) or (a/2) × b, implies that the conventions are not yet completely stable.[47][48] Chrystal's book was the canonical source in English on secondary-school algebra at the turn of the 20th century, and plausibly the source for many later descriptions of the order of operations. However, while Chrystal's book initially establishes a rigid rule for evaluating expressions involving '÷' and '×' symbols, it later consistently gives implicit multiplication higher precedence than division when writing inline fractions, without ever explicitly discussing the discrepancy between the formal rule and common practice.
https://en.wikipedia.org/wiki/Order_of_operations
In combinatorics, the rule of division is a counting principle. It states that there are n/d ways to do a task if it can be done using a procedure that can be carried out in n ways, and for each way w, exactly d of the n ways correspond to the way w. In a nutshell, the division rule is a common way to ignore "unimportant" differences when counting things.[1] In the terms of a set: "If the finite set A is the union of n pairwise disjoint subsets each with d elements, then n = |A|/d."[1] The rule of division formulated in terms of functions: "If f is a function from A to B, where A and B are finite sets, and for every value y ∈ B there are exactly d values x ∈ A such that f(x) = y (in which case we say that f is d-to-one), then |B| = |A|/d."[1] Example 1: How many different ways are there to seat four people around a circular table, where two seatings are considered the same when each person has the same left neighbor and the same right neighbor? Example 2: We have 6 coloured bricks in total, 4 of them red and 2 white; in how many ways can we arrange them?
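Both examples can be checked by brute force against the n/d formula; this is a sketch of my own (helper names are illustrative):

```python
from itertools import permutations

# Example 1: circular seatings of 4 people. Each of the 4! = 24 linear
# orderings is one of d = 4 rotations of the same circular seating,
# so the rule of division gives 24 / 4 = 6 distinct seatings.
linear = list(permutations(range(4)))

def canonical(seating):
    # rotate so that person 0 sits first; rotations of a seating agree here
    i = seating.index(0)
    return seating[i:] + seating[:i]

distinct = {canonical(s) for s in linear}
print(len(linear), len(distinct))        # 24 6
assert len(distinct) == len(linear) // 4

# Example 2: arrangements of 4 red (R) and 2 white (W) bricks. Each visible
# arrangement corresponds to d = 4! * 2! = 48 permutations of labelled
# bricks, so there are 6! / 48 = 15 arrangements.
arrangements = set(permutations("RRRRWW"))
print(len(arrangements))                 # 15
assert len(arrangements) == 720 // 48
```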
https://en.wikipedia.org/wiki/Rule_of_division_(combinatorics)
In analytic number theory and related branches of mathematics, a complex-valued arithmetic function χ : Z → C is a Dirichlet character of modulus m (where m is a positive integer) if for all integers a and b:[1] The simplest possible character, called the principal character, usually denoted χ_0 (see Notation below), exists for all moduli:[2] The German mathematician Peter Gustav Lejeune Dirichlet, for whom the character is named, introduced these functions in his 1837 paper on primes in arithmetic progressions.[3][4] Notation: φ(n) is Euler's totient function.[5] ζ_n is a complex primitive n-th root of unity. (Z/mZ)^× is the group of units mod m; it has order φ(m). The group of Dirichlet characters mod m is the character group of (Z/mZ)^×. p, p_k, etc. are prime numbers. (m, n) is a standard[6] abbreviation[7] for gcd(m, n). χ(a), χ′(a), χ_r(a), etc. are Dirichlet characters (χ is the lowercase Greek letter chi, for "character"). There is no standard notation for Dirichlet characters that includes the modulus. In many contexts (such as in the proof of Dirichlet's theorem) the modulus is fixed. In other contexts, such as this article, characters of different moduli appear. Where appropriate this article employs a variation of Conrey labeling (introduced by Brian Conrey and used by the LMFDB). In this labeling, characters for modulus m are denoted χ_{m,t}(a), where the index t is described in the section on the group of characters below.
In this labeling, χ_{m,_}(a) denotes an unspecified character and χ_{m,1}(a) denotes the principal character mod m. The word "character" is used several ways in mathematics. In this section it refers to a homomorphism from a group G (written multiplicatively) to the multiplicative group of the field of complex numbers. The set of characters of G is denoted Ĝ. If the product of two characters is defined by pointwise multiplication, ηθ(a) = η(a)θ(a), the identity by the trivial character η_0(a) = 1, and the inverse by complex inversion, η^(−1)(a) = η(a)^(−1), then Ĝ becomes an abelian group.[8] If A is a finite abelian group then[9] there is an isomorphism A ≅ Â, and the orthogonality relations hold:[10] The elements of the finite abelian group (Z/mZ)^× are the residue classes [a] = {x : x ≡ a (mod m)} where (a, m) = 1. A group character ρ : (Z/mZ)^× → C^× can be extended to a Dirichlet character χ : Z → C by defining χ(a) = ρ([a]) when (a, m) = 1 and χ(a) = 0 otherwise, and conversely, a Dirichlet character mod m defines a group character on (Z/mZ)^×. Paraphrasing Davenport,[11] Dirichlet characters can be regarded as a particular case of abelian group characters. But this article follows Dirichlet in giving a direct and constructive account of them.
This is partly for historical reasons, in that Dirichlet's work preceded by several decades the development of group theory, and partly for a mathematical reason, namely that the group in question has a simple and interesting structure which is obscured if one treats it as one treats the general abelian group. 4) Since gcd(1, m) = 1, property 2) says χ(1) ≠ 0, so it can be canceled from both sides of χ(1)χ(1) = χ(1 × 1) = χ(1), giving χ(1) = 1. 5) Property 3) is equivalent to: if a ≡ b (mod m) then χ(a) = χ(b). 6) Property 1) implies that, for any positive integer n, χ(a^n) = χ(a)^n. 7) Euler's theorem states that if (a, m) = 1 then a^φ(m) ≡ 1 (mod m). Therefore, χ(a)^φ(m) = χ(a^φ(m)) = χ(1) = 1. That is, the nonzero values of χ(a) are φ(m)-th roots of unity: χ(a) = ζ_φ(m)^r for some integer r which depends on χ, ζ, and a. This implies there are only a finite number of characters for a given modulus. 8) If χ and χ′ are two characters for the same modulus, so is their product χχ′, defined by pointwise multiplication: (χχ′)(a) = χ(a)χ′(a). The principal character is an identity: χχ_0 = χ. 9) Let a^(−1) denote the inverse of a in (Z/mZ)^×. Then χ(a)χ(a^(−1)) = χ(aa^(−1)) = χ(1) = 1. The complex conjugate of a root of unity is also its inverse (see here for details), so for (a, m) = 1, the conjugate character satisfies χ̄(a) = χ(a)^(−1) = χ(a^(−1)). Thus χ̄ is also a character for all integers a. 10) The multiplication and identity defined in 8) and the inversion defined in 9) turn the set of Dirichlet characters for a given modulus into a finite abelian group.
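The defining properties 1) – 3) can be checked concretely for a small example, the non-principal character mod 4; the function name below is mine:

```python
import math

# The non-principal Dirichlet character mod 4:
# chi(n) = 0 for even n, +1 for n = 1 (mod 4), -1 for n = 3 (mod 4).
def chi_4_3(n):
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

for a in range(40):
    assert chi_4_3(a) == chi_4_3(a + 4)                  # periodicity: chi(a + m) = chi(a)
    assert (chi_4_3(a) != 0) == (math.gcd(a, 4) == 1)    # chi(a) != 0 iff gcd(a, m) = 1
    for b in range(40):
        assert chi_4_3(a * b) == chi_4_3(a) * chi_4_3(b) # complete multiplicativity
print("chi_{4,3} satisfies the defining properties")
```

Note that its nonzero values ±1 are φ(4) = 2nd roots of unity, as property 7) requires.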
There are three different cases because the groups(Z/mZ)×{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }}have different structures depending on whetherm{\displaystyle m}is a power of 2, a power of an odd prime, or the product of prime powers.[14] Ifq=pk{\displaystyle q=p^{k}}is an odd number(Z/qZ)×{\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{\times }}is cyclic of orderϕ(q){\displaystyle \phi (q)}; a generator is called aprimitive rootmodq{\displaystyle q}.[15]Letgq{\displaystyle g_{q}}be a primitive root and for(a,q)=1{\displaystyle (a,q)=1}define the functionνq(a){\displaystyle \nu _{q}(a)}(theindexofa{\displaystyle a}) by For(ab,q)=1,a≡b(modq){\displaystyle (ab,q)=1,\;\;a\equiv b{\pmod {q}}}if and only ifνq(a)=νq(b).{\displaystyle \nu _{q}(a)=\nu _{q}(b).}Since Letωq=ζϕ(q){\displaystyle \omega _{q}=\zeta _{\phi (q)}}be a primitiveϕ(q){\displaystyle \phi (q)}-th root of unity. From property 7) above the possible values ofχ(gq){\displaystyle \chi (g_{q})}areωq,ωq2,...ωqϕ(q)=1.{\displaystyle \omega _{q},\omega _{q}^{2},...\omega _{q}^{\phi (q)}=1.}These distinct values give rise toϕ(q){\displaystyle \phi (q)}Dirichlet characters modq.{\displaystyle q.}For(r,q)=1{\displaystyle (r,q)=1}defineχq,r(a){\displaystyle \chi _{q,r}(a)}as Then for(rs,q)=1{\displaystyle (rs,q)=1}and alla{\displaystyle a}andb{\displaystyle b} 2 is a primitive root mod 3.   (ϕ(3)=2{\displaystyle \phi (3)=2}) so the values ofν3{\displaystyle \nu _{3}}are The nonzero values of the characters mod 3 are 2 is a primitive root mod 5.   (ϕ(5)=4{\displaystyle \phi (5)=4}) so the values ofν5{\displaystyle \nu _{5}}are The nonzero values of the characters mod 5 are 3 is a primitive root mod 7.   (ϕ(7)=6{\displaystyle \phi (7)=6}) so the values ofν7{\displaystyle \nu _{7}}are The nonzero values of the characters mod 7 are (ω=ζ6,ω3=−1{\displaystyle \omega =\zeta _{6},\;\;\omega ^{3}=-1}) 2 is a primitive root mod 9.   
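The index construction for an odd prime power can be checked numerically for q = 5 with primitive root g = 2; this is a sketch under the notation above (ν is the discrete log base 2, and ω a primitive 4th root of unity), with my own variable names:

```python
import cmath

# Characters mod 5 from the primitive root g = 2:
# nu(a) is the index of a (the discrete log base 2), and
# chi_{5,r}(a) = omega ** (nu(r) * nu(a)) with omega = e^(2*pi*i/4) = i.
m, g = 5, 2
nu = {pow(g, k, m): k for k in range(4)}    # nu: 1->0, 2->1, 4->2, 3->3
omega = cmath.exp(2j * cmath.pi / 4)

def chi(r, a):
    return 0 if a % m == 0 else omega ** (nu[r % m] * nu[a % m])

# Multiplicativity in both the label r and the argument a:
for r in (1, 2, 3, 4):
    for s in (1, 2, 3, 4):
        for a in (1, 2, 3, 4):
            assert abs(chi(r, a) * chi(s, a) - chi(r * s, a)) < 1e-12
            for b in (1, 2, 3, 4):
                assert abs(chi(r, a) * chi(r, b) - chi(r, a * b)) < 1e-12
print("the 4 = phi(5) characters mod 5 behave as claimed")
```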
(ϕ(9)=6{\displaystyle \phi (9)=6}) so the values ofν9{\displaystyle \nu _{9}}are The nonzero values of the characters mod 9 are (ω=ζ6,ω3=−1{\displaystyle \omega =\zeta _{6},\;\;\omega ^{3}=-1}) (Z/2Z)×{\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{\times }}is the trivial group with one element.(Z/4Z)×{\displaystyle (\mathbb {Z} /4\mathbb {Z} )^{\times }}is cyclic of order 2. For 8, 16, and higher powers of 2, there is no primitive root; the powers of 5 are the units≡1(mod4){\displaystyle \equiv 1{\pmod {4}}}and their negatives are the units≡3(mod4).{\displaystyle \equiv 3{\pmod {4}}.}[16]For example Letq=2k,k≥3{\displaystyle q=2^{k},\;\;k\geq 3}; then(Z/qZ)×{\displaystyle (\mathbb {Z} /q\mathbb {Z} )^{\times }}is the direct product of a cyclic group of order 2 (generated by −1) and a cyclic group of orderϕ(q)2{\displaystyle {\frac {\phi (q)}{2}}}(generated by 5). For odd numbersa{\displaystyle a}define the functionsν0{\displaystyle \nu _{0}}andνq{\displaystyle \nu _{q}}by For odda{\displaystyle a}andb,a≡b(modq){\displaystyle b,\;\;a\equiv b{\pmod {q}}}if and only ifν0(a)=ν0(b){\displaystyle \nu _{0}(a)=\nu _{0}(b)}andνq(a)=νq(b).{\displaystyle \nu _{q}(a)=\nu _{q}(b).}For odda{\displaystyle a}the value ofχ(a){\displaystyle \chi (a)}is determined by the values ofχ(−1){\displaystyle \chi (-1)}andχ(5).{\displaystyle \chi (5).} Letωq=ζϕ(q)2{\displaystyle \omega _{q}=\zeta _{\frac {\phi (q)}{2}}}be a primitiveϕ(q)2{\displaystyle {\frac {\phi (q)}{2}}}-th root of unity. 
The possible values ofχ((−1)ν0(a)5νq(a)){\displaystyle \chi ((-1)^{\nu _{0}(a)}5^{\nu _{q}(a)})}are±ωq,±ωq2,...±ωqϕ(q)2=±1.{\displaystyle \pm \omega _{q},\pm \omega _{q}^{2},...\pm \omega _{q}^{\frac {\phi (q)}{2}}=\pm 1.}These distinct values give rise toϕ(q){\displaystyle \phi (q)}Dirichlet characters modq.{\displaystyle q.}For oddr{\displaystyle r}defineχq,r(a){\displaystyle \chi _{q,r}(a)}by Then for oddr{\displaystyle r}ands{\displaystyle s}and alla{\displaystyle a}andb{\displaystyle b} The only character mod 2 is the principal characterχ2,1{\displaystyle \chi _{2,1}}. −1 is a primitive root mod 4 (ϕ(4)=2{\displaystyle \phi (4)=2}) The nonzero values of the characters mod 4 are −1 is and 5 generate the units mod 8 (ϕ(8)=4{\displaystyle \phi (8)=4}) The nonzero values of the characters mod 8 are −1 and 5 generate the units mod 16 (ϕ(16)=8{\displaystyle \phi (16)=8}) The nonzero values of the characters mod 16 are Letm=p1m1p2m2⋯pkmk=q1q2⋯qk{\displaystyle m=p_{1}^{m_{1}}p_{2}^{m_{2}}\cdots p_{k}^{m_{k}}=q_{1}q_{2}\cdots q_{k}}wherep1<p2<⋯<pk{\displaystyle p_{1}<p_{2}<\dots <p_{k}}be the factorization ofm{\displaystyle m}into prime powers. 
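The structure claim for q = 2^k, k ≥ 3 (no primitive root; −1 and 5 generate the units) can be verified for q = 16 with a quick sketch:

```python
# For q = 2^k, k >= 3, the unit group mod q is generated by -1 (order 2)
# and 5 (order phi(q)/2). Check q = 16, phi(16) = 8:
q = 16
units = {a for a in range(q) if a % 2 == 1}
generated = {(sign * pow(5, k, q)) % q for sign in (1, -1) for k in range(q // 4)}

print(sorted(generated))
assert generated == units                              # every odd residue is +-5^k mod 16
assert len(units) == 8                                 # phi(16) = 8
assert len({pow(5, k, q) for k in range(8)}) == 4      # 5 alone gives only half the units
```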
The group of units modm{\displaystyle m}is isomorphic to the direct product of the groups mod theqi{\displaystyle q_{i}}:[17] This means that 1) there is a one-to-one correspondence betweena∈(Z/mZ)×{\displaystyle a\in (\mathbb {Z} /m\mathbb {Z} )^{\times }}andk{\displaystyle k}-tuples(a1,a2,…,ak){\displaystyle (a_{1},a_{2},\dots ,a_{k})}whereai∈(Z/qiZ)×{\displaystyle a_{i}\in (\mathbb {Z} /q_{i}\mathbb {Z} )^{\times }}and 2) multiplication modm{\displaystyle m}corresponds to coordinate-wise multiplication ofk{\displaystyle k}-tuples: TheChinese remainder theorem(CRT) implies that theai{\displaystyle a_{i}}are simplyai≡a(modqi).{\displaystyle a_{i}\equiv a{\pmod {q_{i}}}.} There are subgroupsGi<(Z/mZ)×{\displaystyle G_{i}<(\mathbb {Z} /m\mathbb {Z} )^{\times }}such that[18] Then(Z/mZ)×≅G1×G2×...×Gk{\displaystyle (\mathbb {Z} /m\mathbb {Z} )^{\times }\cong G_{1}\times G_{2}\times ...\times G_{k}}and everya∈(Z/mZ)×{\displaystyle a\in (\mathbb {Z} /m\mathbb {Z} )^{\times }}corresponds to ak{\displaystyle k}-tuple(a1,a2,...ak){\displaystyle (a_{1},a_{2},...a_{k})}whereai∈Gi{\displaystyle a_{i}\in G_{i}}andai≡a(modqi).{\displaystyle a_{i}\equiv a{\pmod {q_{i}}}.}Everya∈(Z/mZ)×{\displaystyle a\in (\mathbb {Z} /m\mathbb {Z} )^{\times }}can be uniquely factored asa=a1a2...ak.{\displaystyle a=a_{1}a_{2}...a_{k}.}[19][20] Ifχm,_{\displaystyle \chi _{m,\_}}is a character modm,{\displaystyle m,}on the subgroupGi{\displaystyle G_{i}}it must be identical to someχqi,_{\displaystyle \chi _{q_{i},\_}}modqi{\displaystyle q_{i}}Then showing that every character modm{\displaystyle m}is the product of characters mod theqi{\displaystyle q_{i}}. 
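The CRT decomposition can be checked for m = 15: units mod 15 correspond bijectively to pairs (a mod 3, a mod 5), with coordinate-wise multiplication, which is why every character mod 15 factors through characters mod 3 and mod 5. A minimal sketch:

```python
from math import gcd

# CRT for m = 15 = 3 * 5: a unit a mod 15 corresponds to the pair
# (a mod 3, a mod 5), and multiplication mod 15 is coordinate-wise.
units15 = [a for a in range(15) if gcd(a, 15) == 1]
pairs = {a: (a % 3, a % 5) for a in units15}

assert len(units15) == 8                   # phi(15) = phi(3) * phi(5) = 2 * 4
assert len(set(pairs.values())) == 8       # the correspondence is one-to-one
for a in units15:
    for b in units15:
        ab = (a * b) % 15
        # product of pairs is the pair of the product
        assert pairs[ab] == ((pairs[a][0] * pairs[b][0]) % 3,
                             (pairs[a][1] * pairs[b][1]) % 5)
print("units mod 15 = units mod 3 x units mod 5")
```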
For(t,m)=1{\displaystyle (t,m)=1}define[21] Then for(rs,m)=1{\displaystyle (rs,m)=1}and alla{\displaystyle a}andb{\displaystyle b}[22] (Z/15Z)×≅(Z/3Z)××(Z/5Z)×.{\displaystyle (\mathbb {Z} /15\mathbb {Z} )^{\times }\cong (\mathbb {Z} /3\mathbb {Z} )^{\times }\times (\mathbb {Z} /5\mathbb {Z} )^{\times }.} The factorization of the characters mod 15 is The nonzero values of the characters mod 15 are (Z/24Z)×≅(Z/8Z)××(Z/3Z)×.{\displaystyle (\mathbb {Z} /24\mathbb {Z} )^{\times }\cong (\mathbb {Z} /8\mathbb {Z} )^{\times }\times (\mathbb {Z} /3\mathbb {Z} )^{\times }.}The factorization of the characters mod 24 is The nonzero values of the characters mod 24 are (Z/40Z)×≅(Z/8Z)××(Z/5Z)×.{\displaystyle (\mathbb {Z} /40\mathbb {Z} )^{\times }\cong (\mathbb {Z} /8\mathbb {Z} )^{\times }\times (\mathbb {Z} /5\mathbb {Z} )^{\times }.}The factorization of the characters mod 40 is The nonzero values of the characters mod 40 are Letm=p1k1p2k2⋯=q1q2⋯{\displaystyle m=p_{1}^{k_{1}}p_{2}^{k_{2}}\cdots =q_{1}q_{2}\cdots },p1<p2<…{\displaystyle p_{1}<p_{2}<\dots }be the factorization ofm{\displaystyle m}and assume(rs,m)=1.{\displaystyle (rs,m)=1.} There areϕ(m){\displaystyle \phi (m)}Dirichlet characters modm.{\displaystyle m.}They are denoted byχm,r,{\displaystyle \chi _{m,r},}whereχm,r=χm,s{\displaystyle \chi _{m,r}=\chi _{m,s}}is equivalent tor≡s(modm).{\displaystyle r\equiv s{\pmod {m}}.}The identityχm,r(a)χm,s(a)=χm,rs(a){\displaystyle \chi _{m,r}(a)\chi _{m,s}(a)=\chi _{m,rs}(a)\;}is an isomorphism(Z/mZ)×^≅(Z/mZ)×.{\displaystyle {\widehat {(\mathbb {Z} /m\mathbb {Z} )^{\times }}}\cong (\mathbb {Z} /m\mathbb {Z} )^{\times }.}[23] Each character modm{\displaystyle m}has a unique factorization as the product of characters mod the prime powers dividingm{\displaystyle m}: Ifm=m1m2,(m1,m2)=1{\displaystyle m=m_{1}m_{2},(m_{1},m_{2})=1}the productχm1,rχm2,s{\displaystyle \chi _{m_{1},r}\chi _{m_{2},s}}is a characterχm,t{\displaystyle \chi _{m,t}}wheret{\displaystyle t}is given 
by t ≡ r (mod m_1) and t ≡ s (mod m_2). Also,[24][25] χ_{m,r}(s) = χ_{m,s}(r). The two orthogonality relations are[26]: the sum of χ(a) over a complete residue system mod m equals φ(m) if χ = χ_0 and 0 otherwise, and the sum of χ(a) over all characters χ mod m equals φ(m) if a ≡ 1 (mod m) and 0 otherwise. The relations can also be written in a symmetric form. The first relation is easy to prove: if χ = χ_0 there are φ(m) non-zero summands, each equal to 1. If χ ≠ χ_0 there is[27] some a* with (a*, m) = 1 and χ(a*) ≠ 1; since a ↦ a*a permutes the residues coprime to m, χ(a*) times the sum equals the sum itself, which therefore vanishes. The second relation can be proven directly in the same way, but requires a lemma.[29] The second relation has an important corollary: if (a, m) = 1, define the function f_a(n) = (1/φ(m)) Σ_χ χ̄(a)χ(n), the sum running over all characters mod m. Then f_a = 1_{[a]}, the indicator function of the residue class [a] = {x : x ≡ a (mod m)}. It is basic in the proof of Dirichlet's theorem.[30][31] Any character mod a prime power is also a character mod every larger power. For example, mod 16,[32] χ_{16,3} has period 16, but χ_{16,9} has period 8 and χ_{16,15} has period 4: χ_{16,9} = χ_{8,5} and χ_{16,15} = χ_{8,7} = χ_{4,3}. We say that a character χ of modulus q has a quasiperiod of d if χ(m) = χ(n) for all m, n coprime to q satisfying m ≡ n (mod d).[33] For example, χ_{2,1}, the only Dirichlet character of modulus 2, has a quasiperiod of 1, but not a period of 1 (it has a period of 2, though).
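The first orthogonality relation can be verified numerically for m = 5, reusing the primitive-root construction of the characters mod 5 (variable names are mine):

```python
# First orthogonality relation mod 5: summing chi(a) over the units
# gives phi(5) = 4 for the principal character and 0 otherwise.
m, g = 5, 2
nu = {pow(g, k, m): k for k in range(4)}   # discrete log base 2
omega = 1j                                 # primitive 4th root of unity

def chi(r, a):
    return omega ** (nu[r] * nu[a])

for r in (1, 2, 3, 4):
    total = sum(chi(r, a) for a in (1, 2, 3, 4))
    expected = 4 if r == 1 else 0          # r = 1 labels the principal character
    assert abs(total - expected) < 1e-12
print("orthogonality verified mod 5")
```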
The smallest positive integer d for which χ is quasiperiodic is the conductor of χ.[34] So, for instance, χ_{2,1} has a conductor of 1. The conductor of χ_{16,3} is 16, the conductor of χ_{16,9} is 8, and that of χ_{16,15} and χ_{8,7} is 4. If the modulus and conductor are equal, the character is primitive; otherwise it is imprimitive. An imprimitive character is induced by the character for the smallest modulus: χ_{16,9} is induced from χ_{8,5}, and χ_{16,15} and χ_{8,7} are induced from χ_{4,3}. A related phenomenon can happen with a character mod the product of primes: its nonzero values may be periodic with a smaller period. For example, mod 15, the nonzero values of χ_{15,8} have period 15, but those of χ_{15,11} have period 3 and those of χ_{15,13} have period 5. This is easier to see by juxtaposing them with characters mod 3 and 5. If a character mod m = qr, (q, r) = 1, q > 1, r > 1 is defined as the product of a character mod q and the principal character mod r, its nonzero values are determined by the character mod q and have period q. The smallest period of the nonzero values is the conductor of the character. For example, the conductor of χ_{15,8} is 15, the conductor of χ_{15,11} is 3, and that of χ_{15,13} is 5. As in the prime-power case, if the conductor equals the modulus the character is primitive, otherwise imprimitive. If imprimitive, it is induced from the character with the smaller modulus.
For example, χ_{15,11} is induced from χ_{3,2} and χ_{15,13} is induced from χ_{5,3}. The principal character is not primitive.[35] The character χ_{m,r} = χ_{q_1,r} χ_{q_2,r} ... is primitive if and only if each of the factors is primitive.[36] Primitive characters often simplify (or make possible) formulas in the theories of L-functions[37] and modular forms. A character χ is even if χ(−1) = 1 and odd if χ(−1) = −1. This distinction appears in the functional equation of the Dirichlet L-function. The order of a character is its order as an element of the group of characters mod m, i.e. the smallest positive integer n such that χ^n = χ_0. Because of the isomorphism between the character group and (Z/mZ)^×, the order of χ_{m,r} is the same as the order of r in (Z/mZ)^×. The principal character has order 1; other real characters have order 2, and imaginary characters have order 3 or greater. By Lagrange's theorem the order of a character divides the order of the character group, which is φ(m). A character χ is real or quadratic if all of its values are real (they must be 0 or ±1); otherwise it is complex or imaginary.
χ is real if and only if χ² = χ_0; χ_{m,k} is real if and only if k² ≡ 1 (mod m); in particular, χ_{m,−1} is real and non-principal.[38] Dirichlet's original proof that L(1, χ) ≠ 0 (which was only valid for prime moduli) took two different forms depending on whether χ was real or not. His later proof, valid for all moduli, was based on his class number formula.[39][40] Real characters are Kronecker symbols;[41] for example, the principal character can be written[42] χ_{m,1} = (m²/•). If m = p_1^{k_1} p_2^{k_2} ..., p_1 < p_2 < ..., the principal character is[43] χ_{m,1} = (p_1² p_2² .../•). The real characters in the examples are:

Principal: χ_{16,1} = χ_{8,1} = χ_{4,1} = χ_{2,1} = (4/•); χ_{9,1} = χ_{3,1} = (9/•); χ_{5,1} = (25/•); χ_{7,1} = (49/•); χ_{15,1} = (225/•); χ_{24,1} = (36/•); χ_{40,1} = (100/•).

If the modulus is the absolute value of a fundamental discriminant there is a real primitive character (there are two if the modulus is a multiple of 8); otherwise if there are any primitive characters[36] they are imaginary.[44]

Primitive: χ_{3,2} = (−3/•); χ_{4,3} = (−4/•); χ_{5,4} = (5/•); χ_{7,6} = (−7/•); χ_{8,3} = (−8/•); χ_{8,5} = (8/•); χ_{15,14} = (−15/•); χ_{24,5} = (−24/•); χ_{24,11} = (24/•); χ_{40,19} = (−40/•); χ_{40,29} = (40/•).

Imprimitive: χ_{8,7} = χ_{4,3} = (−4/•); χ_{9,8} = χ_{3,2} = (−3/•); χ_{15,4} = χ_{5,4}χ_{3,1} = (45/•); χ_{15,11} = χ_{3,2}χ_{5,1} = (−75/•); χ_{16,7} = χ_{8,3} = (−8/•); χ_{16,9} = χ_{8,5} = (8/•); χ_{16,15} = χ_{4,3} = (−4/•); χ_{24,7} = χ_{8,7}χ_{3,1} = χ_{4,3}χ_{3,1} = (−36/•); χ_{24,13} = χ_{8,5}χ_{3,1} = (72/•); χ_{24,17} = χ_{3,2}χ_{8,1} = (−12/•); χ_{24,19} = χ_{8,3}χ_{3,1} = (−72/•); χ_{24,23} = χ_{8,7}χ_{3,2} = χ_{4,3}χ_{3,2} = (12/•); χ_{40,9} = χ_{5,4}χ_{8,1} = (20/•); χ_{40,11} = χ_{8,3}χ_{5,1} = (−200/•); χ_{40,21} = χ_{8,5}χ_{5,1} = (200/•); χ_{40,31} = χ_{8,7}χ_{5,1} = χ_{4,3}χ_{5,1} = (−100/•); χ_{40,39} = χ_{8,7}χ_{5,4} = χ_{4,3}χ_{5,4} = (−20/•).

The Dirichlet L-series for a character χ is L(s, χ) = Σ_{n≥1} χ(n) n^(−s). This series converges only for Re(s) > 1; it can be analytically continued to a meromorphic function. Dirichlet introduced the L-function along with the characters in his 1837 paper. Dirichlet characters appear several places in the theory of modular forms and functions; a typical example twists a modular form for a character χ mod M by a primitive character χ_1 mod N.[45] See theta series of a Dirichlet character for another example. The Gauss sum of a Dirichlet character modulo N is τ(χ) = Σ_{a=1}^{N} χ(a) e^(2πia/N). It appears in the functional equation of the Dirichlet L-function. If χ and ψ are Dirichlet characters mod a prime p their Jacobi sum is J(χ, ψ) = Σ_a χ(a) ψ(1 − a). Jacobi sums can be factored into products of Gauss sums. If χ is a Dirichlet character mod q and ζ = e^(2πi/q), the Kloosterman sum K(a, b, χ) is defined as[48] K(a, b, χ) = Σ_{(t,q)=1} χ(t) ζ^(at + bt^(−1)), where t^(−1) is the inverse of t mod q. If b = 0 it is a Gauss sum. It is not necessary to establish the defining properties 1) – 3) to show that a function is a Dirichlet character.
If X: Z → C is a function such that … then X(a) is one of the φ(m) characters mod m.[49]

A Dirichlet character is a completely multiplicative function f: N → C that satisfies a linear recurrence relation: that is, if

a1 f(n + b1) + ⋯ + ak f(n + bk) = 0

for all positive integers n, where a1, …, ak are not all zero and b1, …, bk are distinct, then f is a Dirichlet character.[50]

A Dirichlet character is a completely multiplicative function f: N → C satisfying the following three properties: a) f takes only finitely many values; b) f vanishes at only finitely many primes; c) there is an α ∈ C for which the remainder

|Σn≤x f(n) − αx|

is uniformly bounded as x → ∞. This equivalent definition of Dirichlet characters was conjectured by Chudakov[51] in 1956, and proved in 2017 by Klurman and Mangerel.[52]
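The defining properties 1) – 3) of a Dirichlet character (periodicity, support on the units, and complete multiplicativity) can be checked numerically. A minimal sketch, using the Legendre symbol modulo 7 as the character; the helper names `legendre` and `is_dirichlet_character` are illustrative, not from the article:

```python
from math import gcd

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1, p-1, or 0
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def is_dirichlet_character(chi, m, bound=200):
    for a in range(bound):
        if chi(a + m) != chi(a):                # periodicity with period m
            return False
        if (gcd(a, m) == 1) != (chi(a) != 0):   # nonzero exactly on units
            return False
        for b in range(bound):
            if chi(a * b) != chi(a) * chi(b):   # complete multiplicativity
                return False
    return True

p = 7
chi = lambda a: legendre(a % p, p)
print(is_dirichlet_character(chi, p))  # True
```

The same check applied to an arbitrary ±1-valued function would fail one of the three tests almost immediately.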
https://en.wikipedia.org/wiki/Dirichlet_character
In algebra and number theory, Wilson's theorem states that a natural number n > 1 is a prime number if and only if the product of all the positive integers less than n is one less than a multiple of n. That is (using the notations of modular arithmetic), the factorial (n − 1)! = 1 × 2 × 3 × ⋯ × (n − 1) satisfies

(n − 1)! ≡ −1 (mod n)

exactly when n is a prime number. In other words, any integer n > 1 is a prime number if, and only if, (n − 1)! + 1 is divisible by n.[1]

The theorem was first stated by Ibn al-Haytham c. 1000 AD.[2] Edward Waring announced the theorem in 1770 without proving it, crediting his student John Wilson for the discovery.[3] Lagrange gave the first proof in 1771.[4] There is evidence that Leibniz was also aware of the result a century earlier, but never published it.[5]

For each of the values of n from 2 to 30, the following table shows the number (n − 1)! and the remainder when (n − 1)! is divided by n. (In the notation of modular arithmetic, the remainder when m is divided by n is written m mod n.) The background color is blue for prime values of n, gold for composite values.

As a biconditional (if and only if) statement, the proof has two halves: to show that equality does not hold when n is composite, and to show that it does hold when n is prime.

Suppose that n is composite. Then it is divisible by some prime number q with 2 ≤ q < n. Because q divides n, there is an integer k such that n = qk. Suppose for the sake of contradiction that (n − 1)! were congruent to −1 modulo n.
Then (n − 1)! would also be congruent to −1 modulo q: indeed, if (n − 1)! ≡ −1 (mod n) then (n − 1)! = nm − 1 = (qk)m − 1 = q(km) − 1 for some integer m, and consequently (n − 1)! is one less than a multiple of q. On the other hand, since 2 ≤ q ≤ n − 1, one of the factors in the expanded product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 is q. Therefore (n − 1)! ≡ 0 (mod q). This is a contradiction; therefore it is not possible that (n − 1)! ≡ −1 (mod n) when n is composite.

In fact, more is true. With the sole exception of the case n = 4, where 3! = 6 ≡ 2 (mod 4), if n is composite then (n − 1)! is congruent to 0 modulo n. The proof can be divided into two cases: First, if n can be factored as the product of two unequal numbers, n = ab, where 2 ≤ a < b < n, then both a and b will appear as factors in the product (n − 1)! = (n − 1) × (n − 2) × ⋯ × 2 × 1 and so (n − 1)! is divisible by ab = n. If n has no such factorization, then it must be the square of some prime q larger than 2. But then 2q < q² = n, so both q and 2q will be factors of (n − 1)!, and so n divides (n − 1)! in this case, as well.
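Both halves of the theorem, and the stronger statement about composite n other than 4, can be verified directly over the range of the table above. A brief sketch (the helper names are illustrative):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def wilson_remainder(n):
    """(n-1)! mod n, reducing at every step to keep the numbers small."""
    r = 1
    for k in range(2, n):
        r = r * k % n
    return r

for n in range(2, 31):
    # Wilson: the remainder is n - 1 (i.e. -1 mod n) exactly for prime n
    assert (wilson_remainder(n) == n - 1) == is_prime(n)
    # stronger statement: for composite n other than 4 the remainder is 0
    if not is_prime(n) and n != 4:
        assert wilson_remainder(n) == 0
print("verified for n = 2..30")
```

Reducing modulo n at each multiplication avoids computing the huge integer (n − 1)! itself.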
The first two proofs below use the fact that the residue classes modulo a prime number form a finite field (specifically, a prime field).[6]

The result is trivial when p = 2, so assume p is an odd prime, p ≥ 3. Since the residue classes modulo p form a field, every non-zero residue a has a unique multiplicative inverse a⁻¹. Euclid's lemma implies[a] that the only values of a for which a ≡ a⁻¹ (mod p) are a ≡ ±1 (mod p). Therefore, with the exception of ±1, the factors in the expanded form of (p − 1)! can be arranged in disjoint pairs such that the product of each pair is congruent to 1 modulo p. This proves Wilson's theorem. For example, for p = 11, one has

10! = [(1·10)]·[(2·6)(3·4)(5·9)(7·8)] ≡ [−1]·[1·1·1·1] ≡ −1 (mod 11).

Again, the result is trivial for p = 2, so suppose p is an odd prime, p ≥ 3. Consider the polynomial

g(x) = (x − 1)(x − 2) ⋯ (x − (p − 1)).

g has degree p − 1, leading term x^(p−1), and constant term (p − 1)!. Its p − 1 roots are 1, 2, ..., p − 1. Now consider

h(x) = x^(p−1) − 1.

h also has degree p − 1 and leading term x^(p−1). Modulo p, Fermat's little theorem says it also has the same p − 1 roots, 1, 2, ..., p − 1. Finally, consider

f(x) = g(x) − h(x).

f has degree at most p − 2 (since the leading terms cancel), and modulo p also has the p − 1 roots 1, 2, ..., p − 1. But Lagrange's theorem says it cannot have more than p − 2 roots. Therefore, f must be identically zero (mod p), so its constant term is (p − 1)! + 1 ≡ 0 (mod p). This is Wilson's theorem.

It is possible to deduce Wilson's theorem from a particular application of the Sylow theorems. Let p be a prime.
One deduces immediately that the symmetric group Sp has exactly (p − 1)! elements of order p, namely the p-cycles Cp. On the other hand, each Sylow p-subgroup in Sp is a copy of Cp. Hence it follows that the number of Sylow p-subgroups is np = (p − 2)!. The third Sylow theorem implies

(p − 2)! ≡ 1 (mod p).

Multiplying both sides by (p − 1) gives

(p − 1)! ≡ p − 1 ≡ −1 (mod p),

that is, the result.

In practice, Wilson's theorem is useless as a primality test because computing (n − 1)! modulo n for large n is computationally complex.[7]

Using Wilson's theorem, for any odd prime p = 2m + 1, we can rearrange the left hand side of 1·2 ⋯ (p − 1) ≡ −1 (mod p) to obtain the equality 1·(p − 1)·2·(p − 2) ⋯ m·(p − m) ≡ 1·(−1)·2·(−2) ⋯ m·(−m) ≡ −1 (mod p). This becomes ∏j=1..m j² ≡ (−1)^(m+1) (mod p), or (m!)² ≡ (−1)^(m+1) (mod p). We can use this fact to prove part of a famous result: for any prime p such that p ≡ 1 (mod 4), the number −1 is a square (quadratic residue) mod p. For this, suppose p = 4k + 1 for some integer k. Then we can take m = 2k above, and we conclude that (m!)² is congruent to −1 (mod p).

Wilson's theorem has been used to construct formulas for primes, but they are too slow to have practical value. Wilson's theorem allows one to define the p-adic gamma function.

Gauss proved[8][9] that

∏k=1..m, gcd(k,m)=1 k ≡ −1 (mod m) if m = 4, p^α, or 2p^α, and ≡ 1 (mod m) otherwise,

where p represents an odd prime and α a positive integer.
That is, the product of the positive integers less than m and relatively prime to m is one less than a multiple of m when m is equal to 4, or a power of an odd prime, or twice a power of an odd prime; otherwise, the product is one more than a multiple of m. The values of m for which the product is −1 are precisely the ones where there is a primitive root modulo m.

Original: Inoltre egli intravide anche il teorema di Wilson, come risulta dall'enunciato seguente: "Productus continuorum usque ad numerum qui antepraecedit datum divisus per datum relinquit 1 (vel complementum ad unum?) si datus sit primitivus. Si datus sit derivativus relinquet numerum qui cum dato habeat communem mensuram unitate majorem." Egli non giunse pero a dimostrarlo.

Translation: In addition, he [Leibniz] also glimpsed Wilson's theorem, as shown in the following statement: "The product of all integers preceding the given integer, when divided by the given integer, leaves 1 (or the complement of 1?) if the given integer be prime. If the given integer be composite, it leaves a number which has a common factor with the given integer [which is] greater than one." However, he didn't succeed in proving it.

The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.
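Gauss's generalization, including the link to primitive roots, can be checked numerically. A sketch (the function names are illustrative; `has_primitive_root` brute-forces a generator of the unit group):

```python
from math import gcd

def unit_product(m):
    """Product of the positive integers < m coprime to m, reduced mod m."""
    r = 1
    for k in range(1, m):
        if gcd(k, m) == 1:
            r = r * k % m
    return r

def has_primitive_root(m):
    # true exactly for m = 1, 2, 4, p^a, 2*p^a (p an odd prime);
    # checked here by brute force: is some unit a generator?
    units = [a for a in range(1, m) if gcd(a, m) == 1]
    phi = len(units)
    for g in units:
        x, order = g, 1
        while x != 1:
            x = x * g % m
            order += 1
        if order == phi:
            return True
    return False

# the product is -1 mod m precisely when a primitive root modulo m exists
for m in range(3, 51):
    expected = m - 1 if has_primitive_root(m) else 1
    assert unit_product(m) == expected
print("Gauss's generalization verified for m = 3..50")
```

For instance, unit_product(9) = 1·2·4·5·7·8 mod 9 = 8 ≡ −1, while unit_product(8) = 1·3·5·7 mod 8 = 1.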
https://en.wikipedia.org/wiki/Wilson%27s_theorem#Gauss.27s_generalization
In number theory, given a positive integer n and an integer a coprime to n, the multiplicative order of a modulo n is the smallest positive integer k such that a^k ≡ 1 (mod n).[1] In other words, the multiplicative order of a modulo n is the order of a in the multiplicative group of the units in the ring of the integers modulo n. The order of a modulo n is sometimes written as ordn(a).[2]

The powers of 4 modulo 7 are as follows: The smallest positive integer k such that 4^k ≡ 1 (mod 7) is 3, so the order of 4 (mod 7) is 3.

Even without knowledge that we are working in the multiplicative group of integers modulo n, we can show that a actually has an order, by noting that the powers of a can only take a finite number of different values modulo n, so according to the pigeonhole principle there must be two powers, say s and t, with s > t without loss of generality, such that a^s ≡ a^t (mod n). Since a and n are coprime, a has an inverse element a⁻¹ and we can multiply both sides of the congruence by a^(−t), yielding a^(s−t) ≡ 1 (mod n).

The concept of multiplicative order is a special case of the order of group elements. The multiplicative order of a number a modulo n is the order of a in the multiplicative group whose elements are the residues modulo n of the numbers coprime to n, and whose group operation is multiplication modulo n. This is the group of units of the ring Zn; it has φ(n) elements, φ being Euler's totient function, and is denoted as U(n) or U(Zn).

As a consequence of Lagrange's theorem, the order of a (mod n) always divides φ(n). If the order of a is actually equal to φ(n), and therefore as large as possible, then a is called a primitive root modulo n. This means that the group U(n) is cyclic and the residue class of a generates it.

The order of a (mod n) also divides λ(n), a value of the Carmichael function, which is an even stronger statement than the divisibility by φ(n).
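The definition translates directly into a short computation. A sketch (function names illustrative), which also checks the Lagrange consequence that the order divides φ(n):

```python
from math import gcd

def multiplicative_order(a, n):
    """Smallest k >= 1 with a**k congruent to 1 mod n; needs gcd(a, n) == 1."""
    if gcd(a, n) != 1:
        raise ValueError("a must be coprime to n")
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

def phi(n):
    # Euler's totient by brute force, adequate for small n
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

print(multiplicative_order(4, 7))  # 3, as in the example above

# the order of every unit mod 7 divides phi(7) = 6
for a in (1, 2, 3, 4, 5, 6):
    assert phi(7) % multiplicative_order(a, 7) == 0
```

Here 3 (mod 7) has order 6 = φ(7), so 3 is a primitive root modulo 7, while 4 is not.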
https://en.wikipedia.org/wiki/Multiplicative_order
Innumber theory, akth root of unity modulonfor positiveintegersk,n≥ 2, is aroot of unityin the ring ofintegers modulon; that is, a solutionxto theequation(orcongruence)xk≡1(modn){\displaystyle x^{k}\equiv 1{\pmod {n}}}. Ifkis the smallest such exponent forx, thenxis called aprimitivekth root of unity modulon.[1]Seemodular arithmeticfor notation and terminology. The roots of unity modulonare exactly the integers that arecoprimewithn. In fact, these integers are roots of unity modulonbyEuler's theorem, and the other integers cannot be roots of unity modulon, because they arezero divisorsmodulon. Aprimitive root modulon, is a generator of the group ofunitsof the ring of integers modulon. There exist primitive roots modulonif and only ifλ(n)=φ(n),{\displaystyle \lambda (n)=\varphi (n),}whereλ{\displaystyle \lambda }andφ{\displaystyle \varphi }are respectively theCarmichael functionandEuler's totient function.[clarification needed] A root of unity modulonis a primitivekth root of unity modulonfor some divisorkofλ(n),{\displaystyle \lambda (n),}and, conversely, there are primitivekth roots of unity modulonif and only ifkis a divisor ofλ(n).{\displaystyle \lambda (n).} For the lack of a widely accepted symbol, we denote the number ofkth roots of unity modulonbyf(n,k){\displaystyle f(n,k)}. It satisfies a number of properties: Letn=7{\displaystyle n=7}andk=3{\displaystyle k=3}. In this case, there are three cube roots of unity (1, 2, and 4). Whenn=11{\displaystyle n=11}however, there is only one cube root of unity, the unit 1 itself. This behavior is quite different from the field of complex numbers where every nonzero number haskkth roots. For the lack of a widely accepted symbol, we denote the number of primitivekth roots of unity modulonbyg(n,k){\displaystyle g(n,k)}. It satisfies the following properties: Byfast exponentiation, one can check thatxk≡1(modn){\displaystyle x^{k}\equiv 1{\pmod {n}}}. 
If this is true, x is a kth root of unity modulo n, but not necessarily a primitive one. If it is not primitive, then there is some divisor ℓ of k with x^ℓ ≡ 1 (mod n). In order to exclude this possibility, one has only to check the few values ℓ = k/p, where p runs over the prime divisors of k. That is, x is a primitive kth root of unity modulo n if and only if x^k ≡ 1 (mod n) and x^(k/p) ≢ 1 (mod n) for every prime divisor p of k. For example, if n = 17, every positive integer less than 17 is a 16th root of unity modulo 17, and the integers that are primitive 16th roots of unity modulo 17 are exactly those such that x^(16/2) ≢ 1 (mod 17).

Among the primitive kth roots of unity, the primitive λ(n)th roots are most frequent. It is thus recommended to try some integers for being a primitive λ(n)th root, which will succeed quickly. For a primitive λ(n)th root x, the number x^(λ(n)/k) is a primitive kth root of unity. If k does not divide λ(n), then there will be no kth roots of unity at all.

Once a primitive kth root of unity x is obtained, every power x^ℓ is a kth root of unity, but not necessarily a primitive one. The power x^ℓ is a primitive kth root of unity if and only if k and ℓ are coprime. The proof is as follows: If x^ℓ is not primitive, then there exists a proper divisor m of k with (x^ℓ)^m ≡ 1 (mod n), and since k and ℓ are coprime, there exist integers a, b such that ak + bℓ = 1.
This yields

x^m ≡ (x^m)^(ak+bℓ) ≡ (x^k)^(ma) ((x^ℓ)^m)^b ≡ 1 (mod n),

which means that x is not a primitive kth root of unity, because there is the smaller exponent m. That is, by exponentiating x one can obtain φ(k) different primitive kth roots of unity, but these may not be all such roots. However, finding all of them is not so easy.

In what integer residue class rings does a primitive kth root of unity exist? It can be used to compute a discrete Fourier transform (more precisely a number theoretic transform) of a k-dimensional integer vector. In order to perform the inverse transform, one divides by k; for this, k must also be a unit modulo n.

A simple way to find such an n is to check for primitive kth roots with respect to the moduli in the arithmetic progression k + 1, 2k + 1, 3k + 1, … All of these moduli are coprime to k, and thus k is a unit. According to Dirichlet's theorem on arithmetic progressions there are infinitely many primes in the progression, and for a prime p it holds that λ(p) = p − 1. Thus if mk + 1 is prime, then λ(mk + 1) = mk, and thus there are primitive kth roots of unity modulo mk + 1. But the test for primes is too strong, and there may be other appropriate moduli.
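The search over the progression k + 1, 2k + 1, 3k + 1, … can be sketched as follows (the function name `find_modulus` is illustrative); stopping at the first prime guarantees primitive kth roots of unity modulo the result:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_modulus(k):
    """Smallest prime of the form m*k + 1; modulo such a prime p,
    primitive k-th roots of unity exist because k divides p - 1."""
    m = 1
    while True:
        p = m * k + 1
        if is_prime(p):
            return p
        m += 1

print(find_modulus(8))  # 17: the progression 9, 17, ... first hits a prime at 17
```

As the text notes, this test is stronger than necessary; composite moduli such as prime powers can also admit primitive kth roots.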
To find a modulus n such that there are primitive k1th, k2th, …, kmth roots of unity modulo n, the following theorem reduces the problem to a simpler one:

Backward direction: If there is a primitive lcm(k1, …, km)th root of unity modulo n, call it x, then x^(lcm(k1,…,km)/kl) is a primitive klth root of unity modulo n.

Forward direction: If there are primitive k1th, …, kmth roots of unity modulo n, then all exponents k1, …, km are divisors of λ(n). This implies lcm(k1, …, km) ∣ λ(n), and this in turn means there is a primitive lcm(k1, …, km)th root of unity modulo n.
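The primitivity criterion stated earlier (x^k ≡ 1 and x^(k/p) ≢ 1 for every prime p dividing k) can be sketched directly; the helper names are illustrative:

```python
def prime_factors(k):
    """Distinct prime divisors of k, by trial division."""
    ps, d = set(), 2
    while d * d <= k:
        while k % d == 0:
            ps.add(d)
            k //= d
        d += 1
    if k > 1:
        ps.add(k)
    return ps

def is_primitive_root_of_unity(x, k, n):
    # x^k must be 1 mod n, and x^(k/p) must not be 1 for any prime p | k
    if pow(x, k, n) != 1:
        return False
    return all(pow(x, k // p, n) != 1 for p in prime_factors(k))

# modulo 17 there are phi(16) = 8 primitive 16th roots of unity
roots = [x for x in range(1, 17) if is_primitive_root_of_unity(x, 16, 17)]
print(len(roots))  # 8
```

For n = 7 and k = 3 the same test singles out 2 and 4 among the three cube roots 1, 2, 4, matching the earlier example.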
https://en.wikipedia.org/wiki/Root_of_unity_modulo_n
In number theory, Artin's conjecture on primitive roots states that a given integer a that is neither a square number nor −1 is a primitive root modulo infinitely many primes p. The conjecture also ascribes an asymptotic density to these primes. This conjectural density equals Artin's constant or a rational multiple thereof.

The conjecture was made by Emil Artin to Helmut Hasse on September 27, 1927, according to the latter's diary. The conjecture is still unresolved as of 2025. In fact, there is no single value of a for which Artin's conjecture is proved.

Let a be an integer that is not a square number and not −1. Write a = a0b² with a0 square-free. Denote by S(a) the set of prime numbers p such that a is a primitive root modulo p. Then the conjecture states The positive integers satisfying these conditions are: The negative integers satisfying these conditions are:

Similar conjectural product formulas[1] exist for the density when a does not satisfy the above conditions. In these cases, the conjectural density is always a rational multiple of CArtin. If a is a square number or a = −1, then the density is 0; more generally, if a is a perfect pth power for a prime p, then the number needs to be multiplied by p(p − 2)/(p² − p − 1); if there is more than one such prime p, then the number needs to be multiplied by p(p − 2)/(p² − p − 1) for each such prime p. Similarly, if a0 is congruent to 1 mod 4, then the number needs to be multiplied by p(p − 1)/(p² − p − 1) for all prime factors p of a0.

For example, take a = 2. The conjecture is that the set of primes p for which 2 is a primitive root has the density CArtin. The set of such primes is (sequence A001122 in the OEIS) It has 38 elements smaller than 500, and there are 95 primes smaller than 500. The ratio (which conjecturally tends to CArtin) is 38/95 = 2/5 = 0.4.
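The 38-out-of-95 count above is easy to reproduce. A sketch (helper names illustrative); 2 is a primitive root modulo an odd prime p exactly when its multiplicative order is p − 1:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def order(a, p):
    """Multiplicative order of a modulo p (a coprime to p)."""
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

primes = [p for p in range(2, 500) if is_prime(p)]
# skip p = 2, where 2 is not a unit
artin = [p for p in primes if p > 2 and order(2, p) == p - 1]
print(len(artin), len(primes))  # 38 95
```

The list `artin` reproduces the beginning of OEIS sequence A001122: 3, 5, 11, 13, 19, 29, 37, …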
For a = 8 = 2³, which is a power of 2, the conjectured density is (3/5)C, and for a = 5, which is congruent to 1 mod 4, the density is (20/19)C.

In 1967, Christopher Hooley published a conditional proof for the conjecture, assuming certain cases of the generalized Riemann hypothesis.[2] Without the generalized Riemann hypothesis, there is no single value of a for which Artin's conjecture is proved. However, D. R. Heath-Brown proved in 1986 (Corollary 1) that at least one of 2, 3, or 5 is a primitive root modulo infinitely many primes p.[3] He also proved (Corollary 2) that there are at most two primes for which Artin's conjecture fails.

For an elliptic curve E given by y² = x³ + ax + b, Lang and Trotter gave a conjecture for rational points on E(Q) analogous to Artin's primitive root conjecture.[4] Specifically, they said there exists a constant CE for a given point of infinite order P in the set of rational points E(Q) such that the number N(P) of primes p ≤ x for which the reduction of the point P (mod p), denoted P̄, generates the whole set of points of E over Fp, denoted Ē(Fp), is given by N(P) ~ CE (x / log x).[5] Here we exclude the primes which divide the denominators of the coordinates of P.
Gupta and Murty proved the Lang and Trotter conjecture for E/Q with complex multiplication under the generalized Riemann hypothesis, for primes splitting in the relevant imaginary quadratic field.[6]

Krishnamurty posed the question of how often the period of the decimal expansion 1/p of a prime p is even. The claim is that the period of the expansion of a prime in base g is even if and only if g^((p−1)/2^j) ≢ 1 (mod p), where j ≥ 1 is the unique integer such that p ≡ 1 + 2^j (mod 2^(j+1)). The result was proven by Hasse in 1966.[4][7]
https://en.wikipedia.org/wiki/Artin%27s_conjecture_on_primitive_roots
In number theory, a rational reciprocity law is a reciprocity law involving residue symbols that are related by a factor of +1 or −1 rather than a general root of unity. As an example, there are rational biquadratic and octic reciprocity laws. Define the symbol (x|p)k to be +1 if x is a k-th power modulo the prime p and −1 otherwise. Let p and q be distinct primes congruent to 1 modulo 4, such that (p|q)2 = (q|p)2 = +1. Let p = a² + b² and q = A² + B² with aA odd. Then If in addition p and q are congruent to 1 modulo 8, let p = c² + 2d² and q = C² + 2D². Then
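The symbol (x|p)k can be computed with Euler's criterion. A sketch, assuming k divides p − 1 and gcd(x, p) = 1 (the function name is illustrative):

```python
def power_residue_symbol(x, p, k):
    """+1 if x is a k-th power modulo the odd prime p, else -1.
    Assumes k divides p - 1 and gcd(x, p) == 1 (Euler's criterion:
    x is a k-th power mod p iff x^((p-1)/k) is 1 mod p)."""
    assert (p - 1) % k == 0
    return 1 if pow(x, (p - 1) // k, p) == 1 else -1

print(power_residue_symbol(2, 7, 2))  # 1: indeed 3^2 = 9 ≡ 2 (mod 7)
print(power_residue_symbol(3, 7, 2))  # -1: 3 is not a square mod 7
```

For k = 2 this is exactly the Legendre symbol restricted to values ±1.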
https://en.wikipedia.org/wiki/Rational_reciprocity_law
In mathematics, modular arithmetic is a system of arithmetic operations for integers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.

A familiar example of modular arithmetic is the hour hand on a 12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in 7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 is congruent to 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12). Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero, thus 12 ≡ 0 (mod 12).

Given an integer m ≥ 1, called a modulus, two integers a and b are said to be congruent modulo m if m is a divisor of their difference; that is, if there is an integer k such that

a − b = km.

Congruence modulo m is a congruence relation, meaning that it is an equivalence relation that is compatible with addition, subtraction, and multiplication. Congruence modulo m is denoted by

a ≡ b (mod m).

The parentheses mean that (mod m) applies to the entire equation, not just to the right-hand side (here, b). This notation is not to be confused with the notation b mod m (without parentheses), which refers to the remainder of b when divided by m, known as the modulo operation: that is, b mod m denotes the unique integer r such that 0 ≤ r < m and r ≡ b (mod m). The congruence relation may be rewritten as

a = km + b,

explicitly showing its relationship with Euclidean division.
However, the b here need not be the remainder in the division of a by m. Rather, a ≡ b (mod m) asserts that a and b have the same remainder when divided by m. That is,

a = pm + r and b = qm + r,

where 0 ≤ r < m is the common remainder. We recover the previous relation (a − b = km) by subtracting these two expressions and setting k = p − q.

Because congruence modulo m is defined by divisibility by m, and because −1 is a unit in the ring of integers, a number is divisible by −m exactly if it is divisible by m. This means that every non-zero integer m may be taken as modulus.

In modulus 12, one can assert that 38 ≡ 14 (mod 12), because the difference is 38 − 14 = 24 = 2 × 12, a multiple of 12. Equivalently, 38 and 14 have the same remainder 2 when divided by 12. The definition of congruence also applies to negative values. For example:

The congruence relation satisfies all the conditions of an equivalence relation: If a1 ≡ b1 (mod m) and a2 ≡ b2 (mod m), or if a ≡ b (mod m), then:[1] If a ≡ b (mod m), then it is generally false that k^a ≡ k^b (mod m). However, the following is true: For cancellation of common terms, we have the following rules: The last rule can be used to move modular arithmetic into division. If b divides a, then (a/b) mod m = (a mod bm)/b.

The modular multiplicative inverse is defined by the following rules: The multiplicative inverse x ≡ a⁻¹ (mod m) may be efficiently computed by solving Bézout's equation ax + my = 1 for x, y, using the extended Euclidean algorithm. In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a that are not congruent to zero modulo p. Some of the more advanced properties of congruence relations are the following:

The congruence relation is an equivalence relation. The equivalence class modulo m of an integer a is the set of all integers of the form a + km, where k is any integer. It is called the congruence class or residue class of a modulo m, and may be denoted (a mod m), or as a̅ or [a] when the modulus m is known from the context.
Each residue class modulo m contains exactly one integer in the range 0, ..., |m| − 1. Thus, these |m| integers are representatives of their respective residue classes. It is generally easier to work with integers than with sets of integers; that is, one most often works with the representatives rather than with their residue classes. Consequently, (a mod m) generally denotes the unique integer r such that 0 ≤ r < m and r ≡ a (mod m); it is called the residue of a modulo m. In particular, (a mod m) = (b mod m) is equivalent to a ≡ b (mod m), and this explains why "=" is often used instead of "≡" in this context.

Each residue class modulo m may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2] (since this is the proper remainder which results from division). Any two members of different residue classes modulo m are incongruent modulo m. Furthermore, every integer belongs to one and only one residue class modulo m.[3]

The set of integers {0, 1, 2, ..., m − 1} is called the least residue system modulo m. Any set of m integers, no two of which are congruent modulo m, is called a complete residue system modulo m. The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo m.[4] For example, the least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 include: Some sets that are not complete residue systems modulo 4 are:

Given Euler's totient function φ(m), any set of φ(m) integers that are relatively prime to m and mutually incongruent under modulus m is called a reduced residue system modulo m.[5] The set {5, 15} from above, for example, is an instance of a reduced residue system modulo 4. Covering systems represent yet another type of residue system that may contain residues with varying moduli.
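The modular multiplicative inverse mentioned above is computed in practice by solving Bézout's equation with the extended Euclidean algorithm. A minimal sketch (function names illustrative):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    """x with a*x congruent to 1 mod m, or an error if gcd(a, m) != 1."""
    g, x, _ = extended_gcd(a % m, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

print(mod_inverse(3, 11))  # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
```

When m is prime, every a with 0 < a < m has such an inverse, matching the field property discussed below.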
In the context of this paragraph, the modulus m is almost always taken as positive. The set of all congruence classes modulo m is a ring called the ring of integers modulo m, and is denoted Z/mZ, Z/m, or Zm.[6] The ring Z/mZ is fundamental to various branches of mathematics (see § Applications below). (In some parts of number theory the notation Zm is avoided because it can be confused with the set of m-adic integers.)

For m > 0 one has

Z/mZ = { [0], [1], ..., [m − 1] }.

When m = 1, Z/mZ is the zero ring; when m = 0, Z/mZ is not an empty set; rather, it is isomorphic to Z, since [a] = {a}.

Addition, subtraction, and multiplication are defined on Z/mZ by the following rules: The properties given before imply that, with these operations, Z/mZ is a commutative ring. For example, in the ring Z/24Z, one has as in the arithmetic for the 24-hour clock.

The notation Z/mZ is used because this ring is the quotient ring of Z by the ideal mZ, the set formed by all multiples of m, i.e., all numbers km with k ∈ Z.

Under addition, Z/mZ is a cyclic group. All finite cyclic groups are isomorphic with Z/mZ for some m.[7]

The ring of integers modulo m is a field, i.e., every nonzero element has a multiplicative inverse, if and only if m is prime.
If m = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(m) = Fm with m elements, which is not isomorphic to Z/mZ; the latter fails to be a field because it has zero-divisors.

If m > 1, (Z/mZ)× denotes the multiplicative group of the integers modulo m that are invertible. It consists of the congruence classes [a], where a is coprime to m; these are precisely the classes possessing a multiplicative inverse. They form an abelian group under multiplication; its order is φ(m), where φ is Euler's totient function.

In pure mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts.

A very practical application is to calculate checksums within serial number identifiers. For example, International Standard Book Number (ISBN) uses modulo 11 (for 10-digit ISBN) or modulo 10 (for 13-digit ISBN) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs) use modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
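The ISBN-10 check is a mod-11 condition. In one common, equivalent formulation the weighted digit sum Σ i·d_i (weights 1 through 10, with 'X' standing for the digit 10) must vanish modulo 11. A sketch (function name illustrative):

```python
def isbn10_valid(isbn):
    """True when the weighted digit sum of a 10-digit ISBN is divisible by 11."""
    digits = [10 if c in 'Xx' else int(c)
              for c in isbn if c.isdigit() or c in 'Xx']
    if len(digits) != 10:
        return False
    # weights 1..10; any single-digit error changes the sum mod 11
    return sum((i + 1) * d for i, d in enumerate(digits)) % 11 == 0

print(isbn10_valid("0-306-40615-2"))  # True
print(isbn10_valid("0-306-40615-3"))  # False: the altered digit is detected
```

Because 11 is prime, every weight 1..10 is invertible mod 11, which is why any single-digit error (and any transposition of adjacent digits) is guaranteed to be caught.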
In cryptography, modular arithmetic directly underpins public-key systems such as RSA and Diffie–Hellman, provides the finite fields that underlie elliptic curves, and is used in a variety of symmetric-key algorithms including the Advanced Encryption Standard (AES), the International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation. In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra, and Gröbner basis algorithms over the integers and the rational numbers. As posted on Fidonet in the 1980s and archived at Rosetta Code, modular arithmetic was used to disprove Euler's sum of powers conjecture on a Sinclair QL microcomputer using just one-fourth of the integer precision used by a CDC 6600 supercomputer to disprove it two decades earlier via a brute-force search.[8] In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits modulo 2. The use of long division to turn a fraction into a repeating decimal in any base b is equivalent to modular multiplication of b modulo the denominator; for example, for decimal, b = 10. In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occur (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharp is considered the same as D-flat). The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand.
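The modular exponentiation used by RSA and Diffie–Hellman is typically computed by square-and-multiply, which keeps every intermediate value below m². A sketch (Python's built-in three-argument `pow` does the same thing):

```python
def mod_pow(base, exp, m):
    # Square-and-multiply: process the exponent bit by bit, reducing
    # mod m after every multiplication so values never exceed m*m.
    result = 1
    base %= m
    while exp > 0:
        if exp & 1:                      # current bit is set
            result = (result * base) % m
        base = (base * base) % m         # square for the next bit
        exp >>= 1
    return result

# Matches Python's built-in three-argument pow:
print(mod_pow(7, 128, 13) == pow(7, 128, 13))  # True
```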
It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9). Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date; in particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic. More generally, modular arithmetic also has applications in disciplines such as law (e.g., apportionment), economics (e.g., game theory), and other areas of the social sciences, where proportional division and allocation of resources play a central part in the analysis. Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination; for details see the linear congruence theorem. Algorithms such as Montgomery reduction also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo m, to be performed efficiently on large numbers. Some operations, like finding a discrete logarithm or a quadratic congruence, appear to be as hard as integer factorization and are thus a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate. Solving a system of non-linear modular arithmetic equations is NP-complete.[9]
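Casting out nines rests on the fact that 10 ≡ 1 (mod 9), so a number is congruent mod 9 to the sum of its decimal digits. A small sketch (the function name is illustrative) that checks a hand multiplication this way:

```python
def digit_sum_mod9(n: int) -> int:
    # For nonnegative n: since 10 ≡ 1 (mod 9), n is congruent mod 9
    # to the sum of its decimal digits.
    return sum(int(d) for d in str(n)) % 9

# Checking the multiplication 1234 * 5678 = 7006652 by casting out nines:
a, b = 1234, 5678
assert digit_sum_mod9(a * b) == (digit_sum_mod9(a) * digit_sum_mod9(b)) % 9
print("check passes")
```

The check can only detect errors, not confirm correctness: a wrong result that happens to agree mod 9 slips through.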
https://en.wikipedia.org/wiki/Complete_residue_system_modulo_m
In abstract algebra, a congruence relation (or simply congruence) is an equivalence relation on an algebraic structure (such as a group, ring, or vector space) that is compatible with the structure in the sense that algebraic operations done with equivalent elements will yield equivalent elements.[1] Every congruence relation has a corresponding quotient structure, whose elements are the equivalence classes (or congruence classes) for the relation.[2] The definition of a congruence depends on the type of algebraic structure under consideration. Particular definitions of congruence can be made for groups, rings, vector spaces, modules, semigroups, lattices, and so forth. The common theme is that a congruence is an equivalence relation on an algebraic object that is compatible with the algebraic structure, in the sense that the operations are well-defined on the equivalence classes. The general notion of a congruence relation can be formally defined in the context of universal algebra, a field which studies ideas common to all algebraic structures. In this setting, a relation R on a given algebraic structure is called compatible if, for each n-ary operation μ of the structure, a_i R b_i for i = 1, ..., n implies μ(a_1, ..., a_n) R μ(b_1, ..., b_n). A congruence relation on the structure is then defined as an equivalence relation that is also compatible.[3][4] The prototypical example of a congruence relation is congruence modulo n on the set of integers. For a given positive integer n, two integers a and b are called congruent modulo n, written a ≡ b (mod n), if a − b is divisible by n (or equivalently if a and b have the same remainder when divided by n). For example, 37 and 57 are congruent modulo 10, since 37 − 57 = −20 is a multiple of 10, or equivalently since both 37 and 57 have a remainder of 7 when divided by 10.
Congruence modulo n (for a fixed n) is compatible with both addition and multiplication on the integers. That is, if a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), then a1 + a2 ≡ b1 + b2 (mod n) and a1·a2 ≡ b1·b2 (mod n). The corresponding addition and multiplication of equivalence classes is known as modular arithmetic. From the point of view of abstract algebra, congruence modulo n is a congruence relation on the ring of integers, and arithmetic modulo n occurs on the corresponding quotient ring. For example, a group is an algebraic object consisting of a set together with a single binary operation, satisfying certain axioms. If G is a group with operation ∗, a congruence relation on G is an equivalence relation ≡ on the elements of G satisfying g1 ≡ g2 and h1 ≡ h2 implies g1 ∗ h1 ≡ g2 ∗ h2, for all g1, g2, h1, h2 ∈ G. For a congruence on a group, the equivalence class containing the identity element is always a normal subgroup, and the other equivalence classes are the other cosets of this subgroup. Together, these equivalence classes are the elements of a quotient group. When an algebraic structure includes more than one operation, congruence relations are required to be compatible with each operation. For example, a ring possesses both addition and multiplication, and a congruence relation on a ring must satisfy r1 + s1 ≡ r2 + s2 and r1·s1 ≡ r2·s2 whenever r1 ≡ r2 and s1 ≡ s2. For a congruence on a ring, the equivalence class containing 0 is always a two-sided ideal, and the two operations on the set of equivalence classes define the corresponding quotient ring. If f : A → B is a homomorphism between two algebraic structures (such as a homomorphism of groups, or a linear map between vector spaces), then the relation R defined by a1 R a2 if and only if f(a1) = f(a2) is a congruence relation on A.
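The compatibility conditions above are exactly what makes mod-n arithmetic well defined on classes: replacing either operand by a congruent integer never changes the class of the result. A brute-force check of this over a range of integers:

```python
# Well-definedness of arithmetic mod n: shifting a by +n and b by -2n
# moves each to another member of the same congruence class, and the
# class of the sum/product is unchanged.
n = 7
for a in range(-20, 20):
    for b in range(-20, 20):
        assert (a + b) % n == ((a + n) + (b - 2 * n)) % n
        assert (a * b) % n == ((a + n) * (b - 2 * n)) % n
print("compatibility holds")
```

For multiplication the algebra behind the check is (a + n)(b − 2n) = ab + n(b − 2a − 2n) ≡ ab (mod n).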
By the first isomorphism theorem, the image of A under f is a substructure of B isomorphic to the quotient of A by this congruence. On the other hand, the congruence relation R induces a unique homomorphism f : A → A/R, given by mapping each element to its equivalence class. Thus, there is a natural correspondence between the congruences and the homomorphisms of any given algebraic structure. In the particular case of groups, congruence relations can be described in elementary terms as follows: if G is a group (with identity element e and operation ∗) and ~ is a binary relation on G, then ~ is a congruence whenever:
1. a ~ a for all a in G (reflexivity);
2. a ~ b implies b ~ a (symmetry);
3. a ~ b and b ~ c imply a ~ c (transitivity);
4. a ~ b and c ~ d imply (a ∗ c) ~ (b ∗ d) (compatibility).
Conditions 1, 2, and 3 say that ~ is an equivalence relation. A congruence ~ is determined entirely by the set {a ∈ G | a ~ e} of those elements of G that are congruent to the identity element, and this set is a normal subgroup. Specifically, a ~ b if and only if b⁻¹ ∗ a ~ e. So instead of talking about congruences on groups, people usually speak in terms of normal subgroups of them; in fact, every congruence corresponds uniquely to some normal subgroup of G. A similar trick allows one to speak of kernels in ring theory as ideals instead of congruence relations, and in module theory as submodules instead of congruence relations. A more general situation where this trick is possible is with omega-groups (in the general sense allowing operators with multiple arity). But this cannot be done with, for example, monoids, so the study of congruence relations plays a more central role in monoid theory. The general notion of a congruence is particularly useful in universal algebra. An equivalent formulation in this context is the following:[4] a congruence relation on an algebra A is a subset of the direct product A × A that is both an equivalence relation on A and a subalgebra of A × A. The kernel of a homomorphism is always a congruence. Indeed, every congruence arises as a kernel.
For a given congruence ~ on A, the set A/~ of equivalence classes can be given the structure of an algebra in a natural fashion, the quotient algebra. The function that maps every element of A to its equivalence class is a homomorphism, and the kernel of this homomorphism is ~. The lattice Con(A) of all congruence relations on an algebra A is algebraic. John M. Howie described how semigroup theory illustrates congruence relations in universal algebra. In category theory, a congruence relation R on a category C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X, Y), such that the equivalence relations respect composition of morphisms. See Quotient category § Definition for details.
https://en.wikipedia.org/wiki/Congruence_relation
In mathematics, modular arithmetic is a system of arithmetic operations for integers, other than the usual ones from elementary arithmetic, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801. A familiar example of modular arithmetic is the hour hand on a 12-hour clock. If the hour hand points to 7 now, then 8 hours later it will point to 3. Ordinary addition would result in 7 + 8 = 15, but 15 reads as 3 on the clock face. This is because the hour hand makes one rotation every 12 hours and the hour number starts over when the hour hand passes 12. We say that 15 is congruent to 3 modulo 12, written 15 ≡ 3 (mod 12), so that 7 + 8 ≡ 3 (mod 12). Similarly, if one starts at 12 and waits 8 hours, the hour hand will be at 8. If one instead waited twice as long, 16 hours, the hour hand would be on 4. This can be written as 2 × 8 ≡ 4 (mod 12). Note that after a wait of exactly 12 hours, the hour hand will always be right where it was before, so 12 acts the same as zero; thus 12 ≡ 0 (mod 12). Given an integer m ≥ 1, called a modulus, two integers a and b are said to be congruent modulo m if m is a divisor of their difference; that is, if there is an integer k such that a − b = km. Congruence modulo m is a congruence relation, meaning that it is an equivalence relation that is compatible with addition, subtraction, and multiplication. Congruence modulo m is denoted a ≡ b (mod m). The parentheses mean that (mod m) applies to the entire equation, not just to the right-hand side (here, b). This notation is not to be confused with the notation b mod m (without parentheses), which refers to the remainder of b when divided by m, known as the modulo operation: that is, b mod m denotes the unique integer r such that 0 ≤ r < m and r ≡ b (mod m). The congruence relation may be rewritten as a = km + b, explicitly showing its relationship with Euclidean division.
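The definition above (m divides a − b) translates directly into code; a sketch with an illustrative helper name, checking the clock examples from the text:

```python
def congruent(a, b, m):
    # a ≡ b (mod m) exactly when m divides a - b.
    return (a - b) % m == 0

# Clock examples from the text: 15 ≡ 3, 7 + 8 ≡ 3, and 2 × 8 ≡ 4 (mod 12).
print(congruent(15, 3, 12), congruent(7 + 8, 3, 12), congruent(2 * 8, 4, 12))
```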
However, the b here need not be the remainder in the division of a by m. Rather, a ≡ b (mod m) asserts that a and b have the same remainder when divided by m. That is, a = pm + r and b = qm + r, where 0 ≤ r < m is the common remainder. We recover the previous relation (a − b = km) by subtracting these two expressions and setting k = p − q. Because congruence modulo m is defined by divisibility by m, and because −1 is a unit in the ring of integers, a number is divisible by −m exactly if it is divisible by m. This means that every non-zero integer m may be taken as modulus. In modulus 12, one can assert that 38 ≡ 14 (mod 12), because the difference is 38 − 14 = 24 = 2 × 12, a multiple of 12. Equivalently, 38 and 14 have the same remainder 2 when divided by 12. The definition of congruence also applies to negative values. For example, −8 ≡ 7 (mod 5), since −8 − 7 = −15 is a multiple of 5. The congruence relation satisfies all the conditions of an equivalence relation: reflexivity, symmetry, and transitivity. If a1 ≡ b1 (mod m) and a2 ≡ b2 (mod m), then a1 + a2 ≡ b1 + b2 (mod m) and a1·a2 ≡ b1·b2 (mod m); congruences may thus be added, subtracted, and multiplied term by term.[1] If a ≡ b (mod m), it is generally false that k^a ≡ k^b (mod m). However, the following is true: if c ≡ d (mod φ(m)), where φ is Euler's totient function, then a^c ≡ a^d (mod m), provided a is coprime with m. For cancellation of common terms, we have the following rules: if a + k ≡ b + k (mod m) for any integer k, then a ≡ b (mod m); if ka ≡ kb (mod m) and k is coprime with m, then a ≡ b (mod m); and if ka ≡ kb (mod km) with k ≠ 0, then a ≡ b (mod m). The last rule can be used to move modular arithmetic into division: if b divides a, then (a/b) mod m = (a mod bm)/b. The modular multiplicative inverse a⁻¹ of a modulo m is an integer satisfying a·a⁻¹ ≡ 1 (mod m); it exists if and only if a is coprime with m. The multiplicative inverse x ≡ a⁻¹ (mod m) may be efficiently computed by solving Bézout's equation ax + my = 1 for x, y, using the extended Euclidean algorithm. In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a not congruent to zero modulo p. Some of the more advanced properties of congruence relations are the following: the congruence relation is an equivalence relation. The equivalence class modulo m of an integer a is the set of all integers of the form a + km, where k is any integer. It is called the congruence class or residue class of a modulo m, and may be denoted (a mod m), or as a̅ or [a] when the modulus m is known from the context.
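The modular inverse described above can be computed by solving Bézout's equation ax + my = 1 with the extended Euclidean algorithm; a sketch with illustrative function names:

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def mod_inverse(a, m):
    # x with a*x ≡ 1 (mod m); exists iff gcd(a, m) = 1.
    g, x, _ = ext_gcd(a % m, m)
    if g != 1:
        raise ValueError("inverse exists only when gcd(a, m) = 1")
    return x % m

print(mod_inverse(3, 11))  # 4, since 3 * 4 = 12 ≡ 1 (mod 11)
```

Since Python 3.8, the built-in `pow(a, -1, m)` computes the same inverse.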
Each residue class modulo m contains exactly one integer in the range 0, ..., |m| − 1. Thus, these |m| integers are representatives of their respective residue classes. It is generally easier to work with integers than with sets of integers; that is, the representatives are most often considered, rather than their residue classes. Consequently, (a mod m) generally denotes the unique integer r such that 0 ≤ r < m and r ≡ a (mod m); it is called the residue of a modulo m. In particular, (a mod m) = (b mod m) is equivalent to a ≡ b (mod m), and this explains why "=" is often used instead of "≡" in this context. Each residue class modulo m may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class[2] (since this is the proper remainder which results from division). Any two members of different residue classes modulo m are incongruent modulo m. Furthermore, every integer belongs to one and only one residue class modulo m.[3] The set of integers {0, 1, 2, ..., m − 1} is called the least residue system modulo m. Any set of m integers, no two of which are congruent modulo m, is called a complete residue system modulo m. The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo m.[4] For example, the least residue system modulo 4 is {0, 1, 2, 3}; other complete residue systems modulo 4 include {1, 2, 3, 4} and {−2, −1, 0, 1}. Sets that are not complete residue systems modulo 4 include {−5, 0, 6, 22} (since 6 ≡ 22 (mod 4)) and {5, 15} (a set with only two members). Given Euler's totient function φ(m), any set of φ(m) integers that are relatively prime to m and mutually incongruent under modulus m is called a reduced residue system modulo m.[5] The set {5, 15} from above, for example, is an instance of a reduced residue system modulo 4. Covering systems represent yet another type of residue system that may contain residues with varying moduli.
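A complete residue system mod m is any set of m integers hitting each residue class exactly once, which is easy to test by reducing every element and comparing with 0..m−1. A sketch with an illustrative function name:

```python
def is_complete_residue_system(s, m):
    # Complete iff the m elements reduce to each of 0, 1, ..., m-1
    # exactly once.
    return sorted(x % m for x in s) == list(range(m))

print(is_complete_residue_system([0, 1, 2, 3], 4))    # True (least residues)
print(is_complete_residue_system([-6, 13, 4, 7], 4))  # True
print(is_complete_residue_system([1, 2, 3, 5], 4))    # False (1 ≡ 5 mod 4)
```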
https://en.wikipedia.org/wiki/Least_residue_system_modulo_m
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes, or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, for instance how irrational numbers can be approximated by fractions (Diophantine approximation). Number theory is one of the oldest branches of mathematics, alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after its original formulation, and Goldbach's conjecture, which has remained unsolved since the 18th century. The German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics."[1] Number theory was regarded as the example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers would be used as the basis for the creation of public-key cryptography algorithms. Number theory is the branch of mathematics that studies integers and their properties and relations.[2] The integers comprise a set that extends the set of natural numbers {1, 2, 3, ...} to include the number 0 and the negations of the natural numbers {−1, −2, −3, ...}.
Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers).[3][4] Number theory is closely related to arithmetic, and some authors use the terms as synonyms.[5] However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers.[6] In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships.[7] Traditionally, it is known as higher arithmetic.[8] By the early twentieth century, the term number theory had been widely adopted.[note 1] The term number means whole numbers, which refers to either the natural numbers or the integers.[9][10][11] Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs.[12] Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus.[13] Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers.[14] Further branches of number theory are probabilistic number theory,[15] combinatorial number theory,[16] computational number theory,[17] and applied number theory, which examines the application of number theory to science and technology.[18] The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers (a, b, c) such that a² + b² = c². The triples are too numerous and too large to have been obtained by brute force.
The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..."[19] The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity[20]

(½(x − 1/x))² + 1 = (½(x + 1/x))²,

which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a "table", for example, with a view to applications.[21] It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems.[22][note 2] The Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed.[23] Although other civilizations probably influenced Greek mathematics at the beginning,[24] all evidence of such borrowings appears relatively late,[25][26] and it is likely that Greek arithmētikḗ (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period.[27] In the case of number theory, this means largely Plato, Aristotle, and Euclid. Plato had a keen interest in mathematics, and distinguished clearly between arithmētikḗ and calculation (logistikē).
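The identity above is a polynomial equality, so it can be checked exactly with rational arithmetic; a quick verification sketch (the sample values of x are arbitrary):

```python
from fractions import Fraction

# Exact check of (1/2 (x - 1/x))^2 + 1 = (1/2 (x + 1/x))^2 for a few
# rational x; clearing denominators turns each instance into a
# Pythagorean triple.
for p, q in [(2, 1), (3, 2), (9, 4)]:
    x = Fraction(p, q)
    lhs = (Fraction(1, 2) * (x - 1 / x)) ** 2 + 1
    rhs = (Fraction(1, 2) * (x + 1 / x)) ** 2
    assert lhs == rhs
print("identity verified")
```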
Plato reports in his dialogue Theaetetus that Theodorus had proven that √3, √5, ..., √17 are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans,[28] and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean").[29] Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility.[30] He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it".[31] This is all that is needed to prove that √2 is irrational.[32] The Pythagoreans apparently gave great importance to the odd and the even.[33] The discovery that √2 is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result.[34][35] This forced a distinction between numbers (integers and the rationals, the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not). The Pythagorean tradition also spoke of so-called polygonal or figurate numbers.[36] While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries).
An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes.[37][38] The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution. Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmētikḗ in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six of the thirteen books of Diophantus's Arithmetica survive in the original Greek, and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form f(x, y) = z² or f(x, y, z) = w². In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought. The Chinese remainder theorem appears as an exercise[39] in the Sunzi Suanjing (between the third and fifth centuries).[40] (There is one important step glossed over in Sunzi's solution:[note 3] it is the problem that was later solved by Āryabhaṭa's kuṭṭaka – see below.)
The result was later generalized with a complete solution called Da-yan-shu (大衍術) in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections,[41] which was translated into English in the nineteenth century by the British missionary Alexander Wylie.[42] There is also some numerical mysticism in Chinese mathematics,[note 4] but, unlike that of the Pythagoreans, it seems to have led nowhere. While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry,[43] it seems to be the case that Indian mathematics is otherwise an autochthonous tradition;[44] in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century.[45] Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences n ≡ a₁ (mod m₁), n ≡ a₂ (mod m₂) could be solved by a method he called kuṭṭaka, or pulveriser;[46] this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India.[47] Āryabhaṭa seems to have had in mind applications to astronomical calculations.[43] Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations, in particular the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology.
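The pair of simultaneous congruences that kuṭṭaka solves can be illustrated with a naive search over one residue class; this is only a sketch of the problem (Āryabhaṭa's method instead uses an explicit Euclidean-style reduction, and the function name here is illustrative):

```python
def solve_pair(a1, m1, a2, m2):
    # Finds n with n ≡ a1 (mod m1) and n ≡ a2 (mod m2) for coprime
    # moduli, by stepping through the first residue class a1 + k*m1.
    for k in range(m2):
        n = a1 + k * m1
        if n % m2 == a2 % m2:
            return n
    raise ValueError("no solution (moduli not coprime?)")

print(solve_pair(2, 3, 3, 5))  # 8: 8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5)
```

By the Chinese remainder theorem, the solution is unique modulo m1·m2 when the moduli are coprime.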
A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century).[48] Indian mathematics remained largely unknown in Europe until the late eighteenth century;[49] Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke.[50] In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may[51] or may not[52] be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, al-Karajī's contemporary Ibn al-Haytham knew[53] what would later be called Wilson's theorem. Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus's Arithmetica.[54] Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead.
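Pell's equation x² − dy² = 1 can be solved today by the standard continued-fraction method (a later technique, not the chakravala itself). A sketch, assuming d is a positive nonsquare; the case d = 61 is Bhāskara II's famous example:

```python
import math

def pell(d):
    # Fundamental solution of x^2 - d*y^2 = 1 via the continued fraction
    # expansion of sqrt(d). Assumes d is a positive nonsquare integer.
    a0 = math.isqrt(d)
    m, q, a = 0, 1, a0
    h_prev, h = 1, a0      # convergent numerators
    k_prev, k = 0, 1       # convergent denominators
    while h * h - d * k * k != 1:
        # Standard recurrence for the continued fraction of sqrt(d).
        m = a * q - m
        q = (d - m * m) // q
        a = (a0 + m) // q
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
    return h, k

print(pell(61))  # -> (1766319049, 226153980)
```

The enormous fundamental solution for d = 61 illustrates why brute-force search is hopeless and why a systematic method such as the chakravala was a genuine achievement.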
Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes.[55] Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area.[56] Over his lifetime, Fermat made the following contributions to the field: The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur[note 7] Goldbach, pointed him towards some of Fermat's work on the subject.[67][68] This has been called the "rebirth" of modern number theory,[69] after Fermat's relative lack of success in getting his contemporaries' attention for the subject.[70] Euler's work on number theory includes the following:[71] Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhāskara II before them). He also studied quadratic forms in full generality (as opposed to mX² + nY²), including defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions.
He gave a full treatment of the equation ax² + by² + cz² = 0[82] and worked on quadratic forms along the lines later developed fully by Gauss.[83] In his old age, he was the first to prove Fermat's Last Theorem for n = 5 (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain).[84] Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics, including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. The Disquisitiones Arithmeticae (1801), which Gauss had completed three years before its publication, when he was 21, had an immense influence in the area of number theory and set its agenda for much of the 19th century. In this work Gauss proved the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests.[85] The last section of the Disquisitiones established a link between roots of unity and number theory: The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic.[86] In this way, Gauss arguably made forays towards Évariste Galois's work and the area of algebraic number theory. Starting early in the nineteenth century, the following developments gradually took place: Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below.
A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837),[88][89] whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable.[90] The first use of analytic ideas in number theory actually goes back to Euler (1730s),[91][92] who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point;[93] Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms).[94] The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize. Elementary number theory deals with the topics in number theory by means of basic methods in arithmetic.[4] Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic.[95][12] Other topics in elementary number theory are Diophantine equations, continued fractions, integer partitions, and Diophantine approximations.[96] Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as 2 × 3 = 6.[97] Divisibility is a property between two nonzero integers related to division.
An integer a is said to be divisible by a nonzero integer b if a is a multiple of b; that is, if there exists an integer q such that a = bq. An equivalent formulation is that b divides a, denoted with a vertical bar: b | a. Conversely, if this is not the case, then a is not divided evenly by b, leaving a remainder. Euclid's division lemma asserts that a and b can always be written as a = bq + r, where the remainder r satisfies 0 ≤ r < |b| and accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify whether a given integer is divisible by a fixed divisor. For instance, any integer is divisible by 3 if its decimal digit sum is divisible by 3.[98][9] A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest of such divisors. Two integers are said to be coprime, or relatively prime, if their greatest common divisor is 1; that is, if 1 is their only common positive divisor. The Euclidean algorithm computes the greatest common divisor of two integers a, b by repeatedly applying the division lemma and shifting the divisor and remainder after every step. The algorithm can be extended to solve a special case of linear Diophantine equations, ax + by = 1. A Diophantine equation is an equation with several unknowns and integer coefficients.
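The extension of the Euclidean algorithm just mentioned can be sketched as follows; it keeps track of coefficients x, y alongside the remainders, so that when the gcd is 1 it yields a solution of ax + by = 1 (the function name is ours):

```python
def extended_gcd(a, b):
    # Iterative extended Euclidean algorithm: maintains the invariant
    # a*old_x + b*old_y == old_r at every step.
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y  # (gcd, x, y)

g, x, y = extended_gcd(17, 5)
print(g, x, y)  # g == 1, so x and y solve 17*x + 5*y = 1
```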
Another kind of Diophantine equation is described by the Pythagorean theorem, x² + y² = z²; its all-integer solutions are called Pythagorean triples.[9][10] Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. Important number-theoretic functions include the divisor-counting function, the divisor summatory function and its modifications, and Euler's totient function. A prime number is an integer greater than 1 whose only positive divisors are 1 and the prime itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that there are infinitely many prime numbers, which comprise the set {2, 3, 5, 7, 11, ...}. The sieve of Eratosthenes was devised as an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers. The distribution of primes is unpredictable and is a major subject of study in number theory. Formulas that produce a partial sequence of primes, including Euler's prime-generating polynomials, have been developed; however, these cease to produce primes beyond a certain point.[99][3] Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is expressed in Euclid's lemma: if a prime divides a product of integers, then that prime divides at least one of the factors in the product. The unique factorization theorem is the fundamental theorem of arithmetic that relates to prime factorization. The theorem states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors.
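The sieve of Eratosthenes described above admits a direct implementation; a minimal sketch (the function name is ours):

```python
import math

def sieve(limit):
    # Sieve of Eratosthenes: repeatedly strike out the multiples of each
    # prime p, starting at p*p (smaller multiples were already struck out).
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, math.isqrt(limit) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(30))  # -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Only trial divisors up to √limit are needed, since any composite number up to the limit has a prime factor no larger than its square root.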
For example, 120 is expressed uniquely as 2 × 2 × 2 × 3 × 5, or simply 2³ × 3 × 5.[100][9] Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. A congruence of two integers a, b modulo n (a positive integer called the modulus) is an equivalence relation whereby n | (a − b) is true. Performing Euclidean division on both a and n, and on b and n, yields the same remainder. This is written as a ≡ b (mod n). In a manner analogous to the 12-hour clock, the sum of 4 and 9 is equal to 13, yet congruent to 1 modulo 12. A residue class modulo n is a set that contains all integers congruent to a specified r modulo n. For example, 6Z + 1 contains all multiples of 6 incremented by 1. Modular arithmetic provides a range of formulas for rapidly solving congruences of very large powers. An influential theorem is Fermat's little theorem, which states that if a prime p is coprime to some integer a, then a^(p−1) ≡ 1 (mod p). Euler's theorem extends this to assert that every integer a coprime to n satisfies the congruence a^φ(n) ≡ 1 (mod n), where Euler's totient function φ(n) counts all positive integers up to n that are coprime to n.
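Both theorems can be checked numerically with fast modular exponentiation (Python's built-in three-argument pow); the naive totient below is our own illustrative helper:

```python
from math import gcd

def totient(n):
    # Euler's totient: count of 1 <= k <= n with gcd(k, n) == 1 (naive version;
    # in practice one computes it from the prime factorization of n).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p not dividing a.
print(pow(2, 10, 11))           # -> 1, since 11 is prime and 11 does not divide 2
# Euler's theorem: a^φ(n) ≡ 1 (mod n) whenever gcd(a, n) == 1.
print(pow(3, totient(10), 10))  # -> 1, since φ(10) = 4 and gcd(3, 10) == 1
```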
Modular arithmetic also provides formulas that are used to solve congruences with unknowns, in a similar vein to equation solving in algebra, such as the Chinese remainder theorem.[101] More specifically, elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis.[96] For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg.[102] The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara, are often seen as quite enlightening but not elementary despite using Fourier analysis, not complex analysis. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics.[103] Analytic number theory may be defined in terms of its tools, as the study of the integers by means of tools from real and complex analysis, or in terms of its concerns, as the study within number theory of estimates on size and density, as opposed to identities. Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition.[note 8] Small sieves, for instance, use little analysis and yet still belong to analytic number theory.[note 9] The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties).
The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory.[105] One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject.[106] This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.[107] An algebraic number is any complex number that is a solution to some polynomial equation f(x) = 0 with rational coefficients; for example, every solution x of x⁵ + (11/2)x³ − 7x² + 9 = 0 is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or simply number fields. Algebraic number theory studies algebraic number fields.[108] It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form a + b√d, where a and b are rational numbers and d is a fixed rational number whose square root is not rational.) For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field.
However, neither Bhāskara nor Gauss knew of number fields as such. The grounds of the subject were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals and √−5, the number 6 can be factorised both as 6 = 2 · 3 and 6 = (1 + √−5)(1 − √−5); all of 2, 3, 1 + √−5 and 1 − √−5 are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws,[109] that is, generalizations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K. (For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group[note 10] Gal(L/K) of L over K is an abelian group—are relatively well understood. Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields.
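The double factorization of 6 mentioned above can be verified with the norm N(a + b√−5) = a² + 5b², which is multiplicative; the standard argument that the four factors are irreducible rests on no element having norm 2 or 3. A small numerical sketch of that check (helper names are ours):

```python
def norm(a, b):
    # Norm of a + b*sqrt(-5) in Z[sqrt(-5)]: N = a^2 + 5*b^2 (multiplicative).
    return a * a + 5 * b * b

# Both factorizations of 6 have norms multiplying to N(6) = 36.
print(norm(2, 0) * norm(3, 0))   # 4 * 9 = 36
print(norm(1, 1) * norm(1, -1))  # 6 * 6 = 36

# No element has norm 2 or 3: a^2 + 5b^2 >= 5 once b != 0, and squares alone
# never equal 2 or 3, so a small search box is exhaustive for these values.
small_norms = {norm(a, b) for a in range(-3, 4) for b in range(-2, 3)}
print(2 in small_norms, 3 in small_norms)  # False False
```

Since 2, 3 and 1 ± √−5 have norms 4, 9 and 6, any proper factorization of them would require a factor of norm 2 or 3, which does not exist; hence unique factorization fails in this ring.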
The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve or surface. Consider, for instance, the Pythagorean equation x² + y² = 1. One would like to know its rational solutions, namely (x, y) such that x and y are both rational. This is the same as asking for all integer solutions to a² + b² = c²; any solution to the latter equation gives us a solution x = a/c, y = b/c to the former. It is also the same as asking for all points with rational coordinates on the curve described by x² + y² = 1 (a circle of radius 1 centered on the origin). The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation f(x, y) = 0, where f is a polynomial in two variables) depends crucially on the genus of the curve.[note 11] A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial.
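The correspondence between integer solutions of a² + b² = c² and rational points on the unit circle can be made concrete with Euclid's classical parametrization of primitive Pythagorean triples (the function name and the bound parameter are ours):

```python
from math import gcd

def primitive_triples(bound):
    # Euclid's parametrization: for coprime m > n >= 1 of opposite parity,
    # (m^2 - n^2, 2*m*n, m^2 + n^2) is a primitive Pythagorean triple.
    triples = []
    for m in range(2, bound):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                triples.append((m * m - n * n, 2 * m * n, m * m + n * n))
    return triples

for a, b, c in primitive_triples(5):
    # Each triple (a, b, c) gives the rational point (a/c, b/c) on x^2 + y^2 = 1.
    print(a, b, c, a * a + b * b == c * c)
```

Since the parameters m, n range over infinitely many pairs, the unit circle has infinitely many rational points, a first instance of the genus-0 behavior mentioned above.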
There is also the closely linked area of Diophantine approximations: given a number x, determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call a/q (with gcd(a, q) = 1) a good approximation to x if |x − a/q| < 1/q^c, where c is large. This question is of special interest if x is an algebraic number. If x cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of geometric methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations. The areas below date from no earlier than the mid-twentieth century, even if they are based on older material. For example, although algorithms in number theory have a long history, the modern study of computability began only in the 1930s and 1940s, while computational complexity theory emerged in the 1970s. Probabilistic number theory starts with questions such as the following: Take an integer n at random between one and a million. How likely is it to be prime?
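Good rational approximations in the above sense come from continued-fraction convergents. A sketch computing the convergents of π from the well-known first coefficients of its continued fraction, [3; 7, 15, 1, 292, ...] (the function name is ours):

```python
from fractions import Fraction

def convergents(cf):
    # Build successive convergents h/k from continued-fraction coefficients
    # via the standard recurrence h_k = a_k*h_{k-1} + h_{k-2} (same for k).
    h_prev, h = 1, cf[0]
    k_prev, k = 0, 1
    result = [Fraction(h, k)]
    for a in cf[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        result.append(Fraction(h, k))
    return result

for c in convergents([3, 7, 15, 1, 292]):
    print(c)  # 3, 22/7, 333/106, 355/113, 103993/33102
```

The familiar approximations 22/7 and 355/113 appear as convergents; each convergent is closer to π than any fraction with a smaller denominator.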
(This is just another way of asking how many primes there are between one and a million.) How many prime divisors will n have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average? Much of probabilistic number theory can be seen as an important special case of the study of variables that are almost, but not quite, mutually independent. For example, the event that a random integer between one and a million be divisible by two and the event that it be divisible by three are almost independent, but not quite. It is sometimes said that probabilistic combinatorics uses the fact that whatever happens with probability greater than 0 must happen sometimes; one may say with equal justice that many applications of probabilistic number theory hinge on the fact that whatever is unusual must be rare. If certain algebraic objects (say, rational or integer solutions to certain equations) can be shown to be in the tail of certain sensibly defined distributions, it follows that there must be few of them; this is a very concrete non-probabilistic statement following from a probabilistic one. At times, a non-rigorous, probabilistic approach leads to a number of heuristic algorithms and open problems, notably Cramér's conjecture. Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set A contain many elements in arithmetic progression: a, a + b, a + 2b, a + 3b, ..., a + 10b? Is it possible to write large integers as sums of elements of A? These questions are characteristic of arithmetic combinatorics, a coalescing field that subsumes additive number theory (which concerns itself with certain very specific sets A of arithmetic significance, such as the primes or the squares), some of the geometry of numbers, as well as some rapidly developing new material.
Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term additive combinatorics is also used; however, the sets A being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of A + A and A · A may be compared. While the word algorithm goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period. An early case is that of what is now called the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in Elements, together with a proof of correctness. However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation ax + by = c, or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (fifth to sixth centuries) as an algorithm called kuṭṭaka ("pulveriser"), without a proof of correctness. There are two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known.
The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems. Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's tenth problem, that there is no Turing machine which can solve all Diophantine equations.[110] In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (These are necessarily Diophantine equations for which there are no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. It cannot be proven that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.) For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime-numbered gear teeth to distribute wear evenly.[111] In particular, number theorists such as the British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance.[112] The number theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application".
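The trapdoor idea behind RSA can be illustrated with a standard textbook toy example; the tiny primes below are for illustration only (real keys use primes hundreds of digits long), and the key point is that computing d requires φ(n), which in turn requires factoring n:

```python
# Toy RSA key generation and round trip (textbook parameters, insecure sizes).
p, q = 61, 53
n = p * q                  # 3233: the public modulus; factoring it breaks the scheme
phi = (p - 1) * (q - 1)    # 3120: easy to compute only if you know p and q
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

message = 65
cipher = pow(message, e, n)  # anyone can encrypt with (n, e)
plain = pow(cipher, d, n)    # only the holder of d can decrypt
print(plain == message)      # -> True
```

Encryption uses only the public pair (n, e); decryption needs d, and recovering d from (n, e) is believed to be as hard as factoring n.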
Such a view is no longer applicable to number theory.[113] This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms.[114] Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors.[115] These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations".[116] Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to the continuous in numerical analysis.[117] Number theory now has several modern applications spanning diverse areas such as: [...] the question "how was the tablet calculated?" does not have to have the same answer as the question "what problems does the tablet set?" The first can be answered most satisfactorily by reciprocal pairs, as first suggested half a century ago, and the second by some sort of right-triangle problems (Robson 2001, p. 202). Robson takes issue with the notion that the scribe who produced Plimpton 322 (who had to "work for a living", and would not have belonged to a "leisured middle class") could have been motivated by his own "idle curiosity" in the absence of a "market for new mathematics" (Robson 2001, pp. 199–200).[26] Now there are an unknown number of things. If we count by threes, there is a remainder 2; if we count by fives, there is a remainder 3; if we count by sevens, there is a remainder 2. Find the number of things. Answer: 23.
Method: If we count by threes and there is a remainder 2, put down 140. If we count by fives and there is a remainder 3, put down 63. If we count by sevens and there is a remainder 2, put down 30. Add them to obtain 233 and subtract 210 to get the answer. If we count by threes and there is a remainder 1, put down 70. If we count by fives and there is a remainder 1, put down 21. If we count by sevens and there is a remainder 1, put down 15. When [a number] exceeds 106, the result is obtained by subtracting 105. [36] Now there is a pregnant woman whose age is 29. If the gestation period is 9 months, determine the sex of the unborn child.Answer: Male. Method: Put down 49, add the gestation period and subtract the age. From the remainder take away 1 representing the heaven, 2 the earth, 3 the man, 4 the four seasons, 5 the five phases, 6 the six pitch-pipes, 7 the seven stars [of the Dipper], 8 the eight winds, and 9 the nine divisions [of China under Yu the Great]. If the remainder is odd, [the sex] is male and if the remainder is even, [the sex] is female. This is the last problem in Sunzi's otherwise matter-of-fact treatise. Two of the most popular introductions to the subject are: Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol 1981). Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are: Popular choices for a second textbook include:
https://en.wikipedia.org/wiki/Number_theory
Date windowing is a method by which dates with two-digit years are converted to and from dates with four-digit years.[1] The year at which the century changes is called the pivot year of the date window.[2] Date windowing was one of several techniques used to resolve the year 2000 problem in legacy computer systems.[3] For organizations and institutions with data that is only decades old, a "date windowing" solution was considered easier and more economical than the massive conversions and testing required when converting two-digit years into four-digit years.[3][4] There are three primary methods used to determine the date window: Information Builders's FOCUS "Century Aware" implementation[5] allowed the user to focus on field-specific and file-specific settings. This flexibility gave the best of all three major mechanisms: A school could have file RecentDonors set a field named BirthDate to use Those born 2031 are not likely to be donating before 2049, by which time those born 1931 would be 118 years old, and unlikely current donors. DEFCENT and YRTHRESH for a file containing present students and recent graduates would use different values. Below is a typical example of COBOL code that establishes a fixed date window, used to figure the century for ordinary business dates. The above code establishes a fixed date window of 1960 through 2059. It assumes that none of the receipt dates are before 1960, and should work until January 1, 2060.[3] Some systems have environment variables that set the fixed pivot year for the system.
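The COBOL listing referenced above is not reproduced in this extract. As an illustration only, a minimal Python sketch of the same fixed-window rule (1960 through 2059; the function name and pivot parameter are ours, not part of the original listing):

```python
def expand_year(yy, pivot=60):
    # Fixed date window: two-digit years at or above the pivot map to 19xx,
    # years below it map to 20xx. pivot=60 yields the 1960-2059 window
    # described in the text.
    return 1900 + yy if yy >= pivot else 2000 + yy

print(expand_year(60))  # -> 1960 (bottom of the window)
print(expand_year(99))  # -> 1999
print(expand_year(0))   # -> 2000
print(expand_year(59))  # -> 2059 (top of the window)
```

As the text notes, such a window simply defers the ambiguity: once genuine years past the top of the window appear in the data, the rule misclassifies them.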
Any year after the pivot year belongs to this century (the 21st), and any year before or equal to the pivot year belongs to the last century (the 20th).[6] Some products, such as Microsoft Excel 95, used a window of years 1920–2019, which had the potential to trigger a windowing bug recurring only 20 years after the year 2000 problem had been addressed.[7] The IBM i operating system uses a window of 1940–2039 for date formats with a two-digit year.[8] In the 7.5 release of the operating system, an option was added to use a window of 1970–2069 instead.[9]
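The fixed-window logic of the COBOL example described above (a 1960–2059 window) can be sketched in a few lines; the function name and the default pivot are ours, for illustration:

```python
def expand_year(yy, pivot=60):
    """Fixed date window: two-digit years at or above the pivot map to 19xx,
    years below it map to 20xx. pivot=60 gives the 1960-2059 window
    described in the text (an illustrative choice, not a standard default)."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    return 1900 + yy if yy >= pivot else 2000 + yy

assert expand_year(75) == 1975   # a 1975 receipt stays in the 20th century
assert expand_year(59) == 2059   # upper edge of the window
assert expand_year(60) == 1960   # lower edge of the window
```

Like the COBOL version, this breaks for data outside the window: a 1959 receipt would silently become 2059, which is why the pivot must be chosen per data set.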
https://en.wikipedia.org/wiki/Date_windowing
Lollipop sequence numbering is a numbering scheme used in routing protocols. In this scheme, sequence numbers start at a negative value, increase until they reach zero, then cycle through a finite set of positive numbers indefinitely. When a system is rebooted, the sequence restarts from a negative number. This allows recently rebooted systems to be distinguished from systems that have simply wrapped around their numbering space. The path can be visualized as a line with a circle at the end; hence a lollipop. Lollipop sequence numbering was originally believed to resolve the ambiguity problem in cyclic sequence numbering schemes, and was used in OSPF version 1 for this reason. Later work showed that this was not the case, as in the ARPANET sequence bug, and OSPF version 2 replaced it with a linear numbering space, with special rules for what happens when sequence numbers reach the end of the numbering space.[1]
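The stick-then-loop shape can be sketched as follows. This is a toy model, not any protocol's wire format: the constants `START` and `MOD` and the comparison rule in `newer` are illustrative assumptions, chosen only to show why post-reboot numbers are distinguishable from wrapped ones.

```python
START, MOD = -8, 16  # illustrative sizes for the "stick" and the "candy"

def advance(s):
    """Next sequence number: climb the linear stick, then cycle on the loop."""
    if s < 0:
        return s + 1            # still on the linear part after a reboot
    return (s + 1) % MOD        # wrapped around the circular part

def newer(a, b):
    """True if a is more recent than b (an illustrative comparison rule):
    negative (post-reboot) numbers are totally ordered and always older
    than loop numbers; loop numbers compare circularly."""
    if a < 0 or b < 0:
        return a > b
    return (a - b) % MOD < MOD // 2

s = START
for _ in range(-START):
    s = advance(s)
assert s == 0                   # the stick ends at zero, where the loop begins
assert newer(0, START)          # any loop value beats a fresh post-reboot value
```

The key property is visible in the last assertion: a rebooted node restarting at `START` is immediately recognizable as older than any value on the loop.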
https://en.wikipedia.org/wiki/Lollipop_sequence_numbering
In mathematical analysis and related areas of mathematics, a set is called bounded if all of its points are within a certain distance of each other. Conversely, a set which is not bounded is called unbounded. The word "bounded" makes no sense in a general topological space without a corresponding metric. Boundary is a distinct concept; for example, a circle (not to be confused with a disk) in isolation is a boundaryless bounded set, while the half plane is unbounded yet has a boundary. A bounded set is not necessarily a closed set, and vice versa. For example, the subset S of the 2-dimensional real space R² constrained by the two parabolic curves y = x² + 1 and y = x² − 1 in a Cartesian coordinate system is closed by the curves but not bounded (so unbounded). A set S of real numbers is called bounded from above if there exists some real number k (not necessarily in S) such that k ≥ s for all s in S. The number k is called an upper bound of S. The terms bounded from below and lower bound are similarly defined. A set S is bounded if it has both upper and lower bounds. Therefore, a set of real numbers is bounded if it is contained in a finite interval. A subset S of a metric space (M, d) is bounded if there exists r > 0 such that for all s and t in S, we have d(s, t) < r. The metric space (M, d) is a bounded metric space (or d is a bounded metric) if M is bounded as a subset of itself. In topological vector spaces, a different definition of bounded sets exists, sometimes called von Neumann boundedness. If the topology of the topological vector space is induced by a metric which is homogeneous, as in the case of a metric induced by the norm of a normed vector space, then the two definitions coincide. A set of real numbers is bounded if and only if it has an upper and a lower bound. This definition extends to subsets of any partially ordered set. Note that this more general concept of boundedness does not correspond to a notion of "size". A subset S of a partially ordered set P is called bounded above if there is an element k in P such that k ≥ s for all s in S.
The element k is called an upper bound of S. The concepts of bounded below and lower bound are defined similarly. (See also upper and lower bounds.) A subset S of a partially ordered set P is called bounded if it has both an upper and a lower bound, or equivalently, if it is contained in an interval. Note that this is not just a property of the set S but also of S as a subset of P. A bounded poset P (that is, bounded by itself, not as a subset) is one that has a least element and a greatest element. Note that this concept of boundedness has nothing to do with finite size, and that a subset S of a bounded poset P, with as its order the restriction of the order on P, is not necessarily a bounded poset. A subset S of Rⁿ is bounded with respect to the Euclidean distance if and only if it is bounded as a subset of Rⁿ with the product order. However, S may be bounded as a subset of Rⁿ with the lexicographical order, but not with respect to the Euclidean distance. A class of ordinal numbers is said to be unbounded, or cofinal, when given any ordinal, there is always some element of the class greater than it. Thus in this case "unbounded" does not mean unbounded by itself but unbounded as a subclass of the class of all ordinal numbers.
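The metric-space definition ("some r > 0 with d(s, t) < r for all s, t in S") is directly checkable on a finite sample of points; a minimal sketch with the Euclidean metric (the helper name `is_bounded` is ours):

```python
import math

def is_bounded(points, r):
    """Metric-space boundedness witness: True if every pair of points in the
    finite sample is within distance r (Euclidean distance here)."""
    return all(math.dist(s, t) < r for s in points for t in points)

# Points on the unit circle: any two are at most 2 apart, so r = 2.1 works.
circle_sample = [(math.cos(t), math.sin(t)) for t in range(8)]
assert is_bounded(circle_sample, 2.1)

# Two points 5 apart show that no single r = 2.1 bounds this set.
assert not is_bounded([(0.0, 0.0), (5.0, 0.0)], 2.1)
```

For an infinite set such as the region between the two parabolas above, no finite r works, which is exactly what makes that closed set unbounded.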
https://en.wikipedia.org/wiki/Bounded_set
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series. The Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods, in some sense inspired by the Padé theory, typically replace them. Since a Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis. The reason the Padé approximant tends to be a better approximation than a truncated Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, it can be interpreted as the "incomplete two-point Padé approximation", in which the ordinary Padé approximation improves on the method of truncating a Taylor series.
Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function {\displaystyle R(x)={\frac {\sum _{j=0}^{m}a_{j}x^{j}}{1+\sum _{k=1}^{n}b_{k}x^{k}}}={\frac {a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{m}x^{m}}{1+b_{1}x+b_{2}x^{2}+\dots +b_{n}x^{n}}},} which agrees with f(x) to the highest possible order, which amounts to {\displaystyle {\begin{aligned}f(0)&=R(0),\\f'(0)&=R'(0),\\f''(0)&=R''(0),\\&\mathrel {\;\vdots } \\f^{(m+n)}(0)&=R^{(m+n)}(0).\end{aligned}}} Equivalently, if R(x) is expanded in a Maclaurin series (Taylor series at 0), its first m + n terms would equal the first m + n terms of f(x), and thus {\displaystyle f(x)-R(x)=c_{m+n+1}x^{m+n+1}+c_{m+n+2}x^{m+n+2}+\cdots } When it exists, the Padé approximant is unique as a formal power series for the given m and n.[1] The Padé approximant defined above is also denoted as {\displaystyle [m/n]_{f}(x).} For given x, Padé approximants can be computed by Wynn's epsilon algorithm[2] and also by other sequence transformations[3] from the partial sums {\displaystyle T_{N}(x)=c_{0}+c_{1}x+c_{2}x^{2}+\cdots +c_{N}x^{N}} of the Taylor series of f, i.e., we have {\displaystyle c_{k}={\frac {f^{(k)}(0)}{k!}}.} f can also be a formal power series, and hence Padé approximants can also be applied to the summation of divergent series.
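For the [1/1] case the matching conditions can be solved by hand: writing (a₀ + a₁x) = (1 + b₁x)(c₀ + c₁x + c₂x² + …) and equating coefficients through order 2 gives a₀ = c₀, a₁ = c₁ + b₁c₀ and b₁ = −c₂/c₁. A minimal sketch with exact arithmetic (the helper name `pade_1_1` is ours, assuming c₁ ≠ 0):

```python
from fractions import Fraction as F

def pade_1_1(c):
    """[1/1] Padé approximant from Maclaurin coefficients c = [c0, c1, c2].

    Matching (a0 + a1 x) = (1 + b1 x)(c0 + c1 x + c2 x^2) through order 2:
        a0 = c0,   a1 = c1 + b1*c0,   0 = c2 + b1*c1  =>  b1 = -c2/c1.
    Assumes c1 != 0.
    """
    b1 = -c[2] / c[1]
    return c[0], c[1] + b1 * c[0], b1   # (a0, a1, b1)

# exp(x) = 1 + x + x^2/2 + ...  gives  [1/1] = (1 + x/2) / (1 - x/2).
a0, a1, b1 = pade_1_1([F(1), F(1), F(1, 2)])
assert (a0, a1, b1) == (F(1), F(1, 2), F(-1, 2))
```

The resulting (1 + x/2)/(1 − x/2) is the classical [1/1] approximant of the exponential function, already noticeably better than the degree-2 Taylor polynomial on moderate x.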
One way to compute a Padé approximant is via theextended Euclidean algorithmfor thepolynomial greatest common divisor.[4]The relationR(x)=P(x)/Q(x)=Tm+n(x)modxm+n+1{\displaystyle R(x)=P(x)/Q(x)=T_{m+n}(x){\bmod {x}}^{m+n+1}}is equivalent to the existence of some factorK(x){\displaystyle K(x)}such thatP(x)=Q(x)Tm+n(x)+K(x)xm+n+1,{\displaystyle P(x)=Q(x)T_{m+n}(x)+K(x)x^{m+n+1},}which can be interpreted as theBézout identityof one step in the computation of the extended greatest common divisor of the polynomialsTm+n(x){\displaystyle T_{m+n}(x)}andxm+n+1{\displaystyle x^{m+n+1}}. Recall that, to compute the greatest common divisor of two polynomialspandq, one computes via long division the remainder sequencer0=p,r1=q,rk−1=qkrk+rk+1,{\displaystyle r_{0}=p,\;r_{1}=q,\quad r_{k-1}=q_{k}r_{k}+r_{k+1},}k= 1, 2, 3, ...withdeg⁡rk+1<deg⁡rk{\displaystyle \deg r_{k+1}<\deg r_{k}\,}, untilrk+1=0{\displaystyle r_{k+1}=0}. For the Bézout identities of the extended greatest common divisor one computes simultaneously the two polynomial sequencesu0=1,v0=0,u1=0,v1=1,uk+1=uk−1−qkuk,vk+1=vk−1−qkvk{\displaystyle u_{0}=1,\;v_{0}=0,\quad u_{1}=0,\;v_{1}=1,\quad u_{k+1}=u_{k-1}-q_{k}u_{k},\;v_{k+1}=v_{k-1}-q_{k}v_{k}}to obtain in each step the Bézout identityrk(x)=uk(x)p(x)+vk(x)q(x).{\displaystyle r_{k}(x)=u_{k}(x)p(x)+v_{k}(x)q(x).} For the[m/n]approximant, one thus carries out the extended Euclidean algorithm forr0=xm+n+1,r1=Tm+n(x){\displaystyle r_{0}=x^{m+n+1},\;r_{1}=T_{m+n}(x)}and stops it at the last instant thatvk{\displaystyle v_{k}}has degreenor smaller. Then the polynomialsP=rk,Q=vk{\displaystyle P=r_{k},\;Q=v_{k}}give the[m/n]Padé approximant. If one were to compute all steps of the extended greatest common divisor computation, one would obtain an anti-diagonal of thePadé table. 
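The extended-Euclidean procedure above can be sketched with exact rational coefficients. This is a minimal illustration, not production code: polynomials are lists of `Fraction`s in ascending degree, and we assume Q(0) ≠ 0 so the result can be normalized. Run on T₂(x) = 1 + x + x²/2 (the exponential), it reproduces the [1/1] approximant (1 + x/2)/(1 − x/2).

```python
from fractions import Fraction as F

def deg(p):
    """Degree, ignoring trailing zeros (zero polynomial has degree -1)."""
    while p and p[-1] == 0:
        p = p[:-1]
    return len(p) - 1

def sub(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p)); q = q + [F(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def divmod_poly(p, q):
    """Long division: returns (quotient, remainder) with deg r < deg q."""
    p = p[:]; quo = [F(0)] * (max(deg(p) - deg(q), 0) + 1)
    while deg(p) >= deg(q):
        d = deg(p) - deg(q)
        c = p[deg(p)] / q[deg(q)]
        quo[d] = c
        p = sub(p, mul([F(0)] * d + [c], q))
    return quo, p

def pade_ext_euclid(T, m, n):
    """[m/n] Padé of the series with Maclaurin polynomial T (degree m+n):
    run the extended Euclidean algorithm on r0 = x^(m+n+1), r1 = T and stop
    at the last step where the cofactor v has degree <= n."""
    r0 = [F(0)] * (m + n + 1) + [F(1)]        # x^(m+n+1)
    r1, v0, v1 = T[:], [F(0)], [F(1)]
    while True:
        q, r2 = divmod_poly(r0, r1)
        v2 = sub(v0, mul(q, v1))
        if deg(v2) > n:
            break
        r0, r1, v0, v1 = r1, r2, v1, v2
    c = v1[0]                                  # normalize so Q(0) = 1
    return [a / c for a in r1[:m + 1]], [b / c for b in v1[:n + 1]]

# T2(x) = 1 + x + x^2/2  ->  P = 1 + x/2, Q = 1 - x/2.
P, Q = pade_ext_euclid([F(1), F(1), F(1, 2)], 1, 1)
assert P == [F(1), F(1, 2)] and Q == [F(1), F(-1, 2)]
```

Continuing the loop past the stopping point would, as the text notes, walk an anti-diagonal of the Padé table.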
To study the resummation of adivergent series, say∑z=1∞f(z),{\displaystyle \sum _{z=1}^{\infty }f(z),}it can be useful to introduce the Padé or simply rational zeta function asζR(s)=∑z=1∞R(z)zs,{\displaystyle \zeta _{R}(s)=\sum _{z=1}^{\infty }{\frac {R(z)}{z^{s}}},}whereR(x)=[m/n]f(x){\displaystyle R(x)=[m/n]_{f}(x)}is the Padé approximation of order(m,n)of the functionf(x). Thezeta regularizationvalue ats= 0is taken to be the sum of the divergent series. The functional equation for this Padé zeta function is∑j=0najζR(s−j)=∑j=0mbjζ0(s−j),{\displaystyle \sum _{j=0}^{n}a_{j}\zeta _{R}(s-j)=\sum _{j=0}^{m}b_{j}\zeta _{0}(s-j),}whereajandbjare the coefficients in the Padé approximation. The subscript '0' means that the Padé is of order [0/0] and hence, we have theRiemann zeta function. Padé approximants can be used to extract critical points and exponents of functions.[5][6]In thermodynamics, if a functionf(x)behaves in a non-analytic way near a pointx=rlikef(x)∼|x−r|p{\displaystyle f(x)\sim |x-r|^{p}}, one callsx=ra critical point andpthe associated critical exponent off. If sufficient terms of the series expansion offare known, one can approximately extract the critical points and the critical exponents from respectively the poles and residues of the Padé approximants[n/n+1]g(x){\displaystyle [n/n+1]_{g}(x)}, whereg=f′/f{\displaystyle g=f'/f}. A Padé approximant approximates a function in one variable. An approximant in two variables is called a Chisholm approximant (afterJ. S. R. Chisholm),[7]in multiple variables a Canterbury approximant (after Graves-Morris at the University of Kent).[8] The conventional Padé approximation is determined to reproduce the Maclaurin expansion up to a given order. Therefore, the approximation at the value apart from the expansion point may be poor. 
This is avoided by the 2-point Padé approximation, which is a type of multipoint summation method.[9]Atx=0{\displaystyle x=0}, consider a case that a functionf(x){\displaystyle f(x)}which is expressed by asymptotic behaviorf0(x){\displaystyle f_{0}(x)}:f∼f0(x)+o(f0(x)),x→0,{\displaystyle f\sim f_{0}(x)+o{\big (}f_{0}(x){\big )},\quad x\to 0,}and atx→∞{\displaystyle x\to \infty }, additional asymptotic behaviorf∞(x){\displaystyle f_{\infty }(x)}:f(x)∼f∞(x)+o(f∞(x)),x→∞.{\displaystyle f(x)\sim f_{\infty }(x)+o{\big (}f_{\infty }(x){\big )},\quad x\to \infty .} By selecting the major behavior off0(x),f∞(x){\displaystyle f_{0}(x),f_{\infty }(x)}, approximate functionsF(x){\displaystyle F(x)}such that simultaneously reproduce asymptotic behavior by developing the Padé approximation can be found in various cases. As a result, at the pointx→∞{\displaystyle x\to \infty }, where the accuracy of the approximation may be the worst in the ordinary Padé approximation, good accuracy of the 2-point Padé approximant is guaranteed. Therefore, the 2-point Padé approximant can be a method that gives a good approximation globally forx=0∼∞{\displaystyle x=0\sim \infty }. In cases wheref0(x),f∞(x){\displaystyle f_{0}(x),f_{\infty }(x)}are expressed by polynomials or series of negative powers, exponential function, logarithmic function orxln⁡x{\displaystyle x\ln x}, we can apply 2-point Padé approximant tof(x){\displaystyle f(x)}. There is a method of using this to give an approximate solution of a differential equation with high accuracy.[9]Also, for the nontrivial zeros of the Riemann zeta function, the first nontrivial zero can be estimated with some accuracy from the asymptotic behavior on the real axis.[9] A further extension of the 2-point Padé approximant is the multi-point Padé approximant.[9]This method treats singularity pointsx=xj(j=1,2,3,…,N){\displaystyle x=x_{j}(j=1,2,3,\dots ,N)}of a functionf(x){\displaystyle f(x)}which is to be approximated. 
Consider the case when the singularities of a function are expressed with index n_j by {\displaystyle f(x)\sim {\frac {A_{j}}{(x-x_{j})^{n_{j}}}},\quad x\to x_{j}.} Besides the 2-point Padé approximant, which includes information at x = 0 and x → ∞, this method approximates so as to reduce the divergence at x ∼ x_j. As a result, since the information on the peculiarity of the function is captured, the approximation of a function f(x) can be performed with higher accuracy.
https://en.wikipedia.org/wiki/Pad%C3%A9_approximant
In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor series are named after Brook Taylor, who introduced them in 1715. A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered, after Colin Maclaurin, who made extensive use of this special case of Taylor series in the 18th century. The partial sum formed by the first n + 1 terms of a Taylor series is a polynomial of degree n that is called the nth Taylor polynomial of the function. Taylor polynomials are approximations of a function, which generally become more accurate as n increases. Taylor's theorem gives quantitative estimates on the error introduced by the use of such approximations. If the Taylor series of a function is convergent, its sum is the limit of the infinite sequence of the Taylor polynomials. A function may differ from the sum of its Taylor series, even if its Taylor series is convergent. A function is analytic at a point x if it is equal to the sum of its Taylor series in some open interval (or open disk in the complex plane) containing x. This implies that the function is analytic at every point of the interval (or disk). The Taylor series of a real or complex-valued function f(x), that is infinitely differentiable at a real or complex number a, is the power series {\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots =\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}.} Here, n! denotes the factorial of n, and f^{(n)}(a) denotes the nth derivative of f evaluated at the point a. The derivative of order zero of f is defined to be f itself, and (x − a)⁰ and 0! are both defined to be 1.
This series can be written by usingsigma notation, as in the right side formula.[1]Witha= 0, the Maclaurin series takes the form:[2]f(0)+f′(0)1!x+f″(0)2!x2+⋯=∑n=0∞f(n)(0)n!xn.{\displaystyle f(0)+{\frac {f'(0)}{1!}}x+{\frac {f''(0)}{2!}}x^{2}+\cdots =\sum _{n=0}^{\infty }{\frac {f^{(n)}(0)}{n!}}x^{n}.} The Taylor series of anypolynomialis the polynomial itself. The Maclaurin series of⁠1/1 −x⁠is thegeometric series 1+x+x2+x3+⋯.{\displaystyle 1+x+x^{2}+x^{3}+\cdots .} So, by substitutingxfor1 −x, the Taylor series of⁠1/x⁠ata= 1is 1−(x−1)+(x−1)2−(x−1)3+⋯.{\displaystyle 1-(x-1)+(x-1)^{2}-(x-1)^{3}+\cdots .} By integrating the above Maclaurin series, we find the Maclaurin series ofln(1 −x), wherelndenotes thenatural logarithm: −x−12x2−13x3−14x4−⋯.{\displaystyle -x-{\tfrac {1}{2}}x^{2}-{\tfrac {1}{3}}x^{3}-{\tfrac {1}{4}}x^{4}-\cdots .} The corresponding Taylor series oflnxata= 1is (x−1)−12(x−1)2+13(x−1)3−14(x−1)4+⋯,{\displaystyle (x-1)-{\tfrac {1}{2}}(x-1)^{2}+{\tfrac {1}{3}}(x-1)^{3}-{\tfrac {1}{4}}(x-1)^{4}+\cdots ,} and more generally, the corresponding Taylor series oflnxat an arbitrary nonzero pointais: ln⁡a+1a(x−a)−1a2(x−a)22+⋯.{\displaystyle \ln a+{\frac {1}{a}}(x-a)-{\frac {1}{a^{2}}}{\frac {\left(x-a\right)^{2}}{2}}+\cdots .} The Maclaurin series of theexponential functionexis ∑n=0∞xnn!=x00!+x11!+x22!+x33!+x44!+x55!+⋯=1+x+x22+x36+x424+x5120+⋯.{\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}&={\frac {x^{0}}{0!}}+{\frac {x^{1}}{1!}}+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+{\frac {x^{5}}{5!}}+\cdots \\&=1+x+{\frac {x^{2}}{2}}+{\frac {x^{3}}{6}}+{\frac {x^{4}}{24}}+{\frac {x^{5}}{120}}+\cdots .\end{aligned}}} The above expansion holds because the derivative ofexwith respect toxis alsoex, ande0equals 1. This leaves the terms(x− 0)nin the numerator andn!in the denominator of each term in the infinite sum. 
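The Mercator series for ln(1 + x) above converges for |x| < 1, and since it is alternating for x > 0, the error of a partial sum is bounded by the first omitted term. A quick numerical check against the library logarithm (the helper name `ln1p_series` is ours):

```python
import math

def ln1p_series(x, terms):
    """Partial sum of the Mercator series ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

x = 0.5
# With 40 terms the first omitted term is 0.5**41 / 41 ~ 1e-14.
assert abs(ln1p_series(x, 40) - math.log1p(x)) < 1e-12
```

At x close to 1 the same series still converges, but far more slowly, which is one practical reason to recenter the series (or use a Padé approximant instead).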
Theancient Greek philosopherZeno of Eleaconsidered the problem of summing an infinite series to achieve a finite result, but rejected it as an impossibility;[3]the result wasZeno's paradox. Later,Aristotleproposed a philosophical resolution of the paradox, but the mathematical content was apparently unresolved until taken up byArchimedes, as it had been prior to Aristotle by the Presocratic AtomistDemocritus. It was through Archimedes'smethod of exhaustionthat an infinite number of progressive subdivisions could be performed to achieve a finite result.[4]Liu Huiindependently employed a similar method a few centuries later.[5] In the 14th century, the earliest examples of specific Taylor series (but not the general method) were given by Indian mathematicianMadhava of Sangamagrama.[6]Though no record of his work survives, writings of his followers in theKerala school of astronomy and mathematicssuggest that he found the Taylor series for thetrigonometric functionsofsine,cosine, andarctangent(seeMadhava series). During the following two centuries his followers developed further series expansions and rational approximations. In late 1670,James Gregorywas shown in a letter fromJohn Collinsseveral Maclaurin series(sin⁡x,{\textstyle \sin x,}cos⁡x,{\textstyle \cos x,}arcsin⁡x,{\textstyle \arcsin x,}andxcot⁡x{\textstyle x\cot x})derived byIsaac Newton, and told that Newton had developed a general method for expanding functions in series. Newton had in fact used a cumbersome method involving long division of series and term-by-term integration, but Gregory did not know it and set out to discover a general method for himself. 
In early 1671 Gregory discovered something like the general Maclaurin series and sent a letter to Collins including series forarctan⁡x,{\textstyle \arctan x,}tan⁡x,{\textstyle \tan x,}sec⁡x,{\textstyle \sec x,}lnsec⁡x{\textstyle \ln \,\sec x}(the integral oftan{\displaystyle \tan }),lntan⁡12(12π+x){\textstyle \ln \,\tan {\tfrac {1}{2}}{{\bigl (}{\tfrac {1}{2}}\pi +x{\bigr )}}}(theintegral ofsec, the inverseGudermannian function),arcsec⁡(2ex),{\textstyle \operatorname {arcsec} {\bigl (}{\sqrt {2}}e^{x}{\bigr )},}and2arctan⁡ex−12π{\textstyle 2\arctan e^{x}-{\tfrac {1}{2}}\pi }(the Gudermannian function). However, thinking that he had merely redeveloped a method by Newton, Gregory never described how he obtained these series, and it can only be inferred that he understood the general method by examining scratch work he had scribbled on the back of another letter from 1671.[7] In 1691–1692, Isaac Newton wrote down an explicit statement of the Taylor and Maclaurin series in an unpublished version of his workDe Quadratura Curvarum. However, this work was never completed and the relevant sections were omitted from the portions published in 1704 under the titleTractatus de Quadratura Curvarum. It was not until 1715 that a general method for constructing these series for all functions for which they exist was finally published byBrook Taylor,[8]after whom the series are now named. The Maclaurin series was named afterColin Maclaurin, a Scottish mathematician, who published a special case of the Taylor result in the mid-18th century. Iff(x)is given by a convergent power series in an open disk centred atbin the complex plane (or an interval in the real line), it is said to beanalyticin this region. 
Thus forxin this region,fis given by a convergent power series f(x)=∑n=0∞an(x−b)n.{\displaystyle f(x)=\sum _{n=0}^{\infty }a_{n}(x-b)^{n}.} Differentiating byxthe above formulantimes, then settingx=bgives: f(n)(b)n!=an{\displaystyle {\frac {f^{(n)}(b)}{n!}}=a_{n}} and so the power series expansion agrees with the Taylor series. Thus a function is analytic in an open disk centered atbif and only if its Taylor series converges to the value of the function at each point of the disk. Iff(x)is equal to the sum of its Taylor series for allxin the complex plane, it is calledentire. The polynomials,exponential functionex, and thetrigonometric functionssine and cosine, are examples of entire functions. Examples of functions that are not entire include thesquare root, thelogarithm, thetrigonometric functiontangent, and its inverse,arctan. For these functions the Taylor series do notconvergeifxis far fromb. That is, the Taylor seriesdivergesatxif the distance betweenxandbis larger than theradius of convergence. The Taylor series can be used to calculate the value of an entire function at every point, if the value of the function, and of all of its derivatives, are known at a single point. Uses of the Taylor series for analytic functions include: Pictured is an accurate approximation ofsinxaround the pointx= 0. The pink curve is a polynomial of degree seven: sin⁡x≈x−x33!+x55!−x77!.{\displaystyle \sin {x}\approx x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-{\frac {x^{7}}{7!}}.\!} The error in this approximation is no more than|x|9/ 9!. For a full cycle centered at the origin (−π <x< π) the error is less than 0.08215. In particular, for−1 <x< 1, the error is less than 0.000003. In contrast, also shown is a picture of the natural logarithm functionln(1 +x)and some of its Taylor polynomials arounda= 0. These approximations converge to the function only in the region−1 <x≤ 1; outside of this region the higher-degree Taylor polynomials areworseapproximations for the function. 
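The error bounds quoted for the degree-seven sine polynomial can be verified directly: since the ninth derivative of sin is bounded by 1, Taylor's theorem gives a remainder of at most |x|⁹/9!. A minimal check (the helper name `sin7` is ours):

```python
import math

def sin7(x):
    """Degree-7 Taylor polynomial of sin about 0."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Taylor's theorem: |sin x - sin7(x)| <= |x|^9 / 9!  (9! = 362880).
for x in [i / 10 for i in range(-31, 32)]:       # a full cycle, roughly (-pi, pi)
    bound = abs(x) ** 9 / math.factorial(9) + 1e-15  # tiny slack for rounding
    assert abs(sin7(x) - math.sin(x)) <= bound

# The text's figure for the full cycle: error below 0.08215.
assert abs(sin7(3.1) - math.sin(3.1)) < 0.08215
```

At |x| = π the bound |x|⁹/9! ≈ 0.082, matching the 0.08215 figure in the text; near the origin the same bound collapses to the 0.000003 quoted for |x| < 1.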
The error incurred in approximating a function by its nth-degree Taylor polynomial is called the remainder or residual and is denoted by the function R_n(x). Taylor's theorem can be used to obtain a bound on the size of the remainder. In general, Taylor series need not be convergent at all. In fact, the set of functions with a convergent Taylor series is a meager set in the Fréchet space of smooth functions. Even if the Taylor series of a function f does converge, its limit need not equal the value of the function f(x). For example, the function {\displaystyle f(x)={\begin{cases}e^{-1/x^{2}}&{\text{if }}x\neq 0\\[3mu]0&{\text{if }}x=0\end{cases}}} is infinitely differentiable at x = 0, and has all derivatives zero there. Consequently, the Taylor series of f(x) about x = 0 is identically zero. However, f(x) is not the zero function, so it does not equal its Taylor series around the origin. Thus, f(x) is an example of a non-analytic smooth function. In real analysis, this example shows that there are infinitely differentiable functions f(x) whose Taylor series are not equal to f(x) even if they converge. By contrast, the holomorphic functions studied in complex analysis always possess a convergent Taylor series, and even the Taylor series of meromorphic functions, which might have singularities, never converge to a value different from the function itself. The complex function e^{-1/z^2}, however, does not approach 0 when z approaches 0 along the imaginary axis, so it is not continuous in the complex plane and its Taylor series is undefined at 0. More generally, every sequence of real or complex numbers can appear as coefficients in the Taylor series of an infinitely differentiable function defined on the real line, a consequence of Borel's lemma. As a result, the radius of convergence of a Taylor series can be zero.
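The flat-at-zero behavior of the counterexample e^{−1/x²} is easy to see numerically: the function is strictly positive off the origin, yet it vanishes faster than any power of x, which is why every Taylor coefficient at 0 is zero.

```python
import math

def f(x):
    """The classic non-analytic smooth function: e^(-1/x^2) for x != 0, else 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

assert f(0) == 0.0
assert f(0.5) > 0.0          # the function disagrees with its (zero) Maclaurin series

# Vanishing faster than any power: at x = 0.1, f(x) = e^(-100) ~ 3.7e-44,
# so even f(x) / x^10 is still astronomically small there.
assert 0 < f(0.1) < 1e-40
assert f(0.1) / 0.1**10 < 1e-30
```

Because f(x)/xⁿ → 0 as x → 0 for every n, each difference quotient defining f⁽ⁿ⁾(0) is 0, so the Maclaurin series is identically zero while f is not.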
There are even infinitely differentiable functions defined on the real line whose Taylor series have a radius of convergence 0 everywhere.[9] A function cannot be written as a Taylor series centred at asingularity; in these cases, one can often still achieve a series expansion if one allows also negative powers of the variablex; seeLaurent series. For example,f(x) =e−1/x2can be written as a Laurent series. The generalization of the Taylor series does converge to the value of the function itself for anyboundedcontinuous functionon(0,∞), and this can be done by using the calculus offinite differences. Specifically, the following theorem, due toEinar Hille, that for anyt> 0,[10] limh→0+∑n=0∞tnn!Δhnf(a)hn=f(a+t).{\displaystyle \lim _{h\to 0^{+}}\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}{\frac {\Delta _{h}^{n}f(a)}{h^{n}}}=f(a+t).} HereΔnhis thenth finite difference operator with step sizeh. The series is precisely the Taylor series, except that divided differences appear in place of differentiation: the series is formally similar to theNewton series. When the functionfis analytic ata, the terms in the series converge to the terms of the Taylor series, and in this sense generalizes the usual Taylor series. In general, for any infinite sequenceai, the following power series identity holds: ∑n=0∞unn!Δnai=e−u∑j=0∞ujj!ai+j.{\displaystyle \sum _{n=0}^{\infty }{\frac {u^{n}}{n!}}\Delta ^{n}a_{i}=e^{-u}\sum _{j=0}^{\infty }{\frac {u^{j}}{j!}}a_{i+j}.} So in particular, f(a+t)=limh→0+e−t/h∑j=0∞f(a+jh)(t/h)jj!.{\displaystyle f(a+t)=\lim _{h\to 0^{+}}e^{-t/h}\sum _{j=0}^{\infty }f(a+jh){\frac {(t/h)^{j}}{j!}}.} The series on the right is theexpected valueoff(a+X), whereXis aPoisson-distributedrandom variablethat takes the valuejhwith probabilitye−t/h·⁠(t/h)j/j!⁠. 
Hence, f(a+t)=limh→0+∫−∞∞f(a+x)dPt/h,h(x).{\displaystyle f(a+t)=\lim _{h\to 0^{+}}\int _{-\infty }^{\infty }f(a+x)dP_{t/h,h}(x).} Thelaw of large numbersimplies that the identity holds.[11] Several important Maclaurin series expansions follow. All these expansions are valid for complex argumentsx. Theexponential functionex{\displaystyle e^{x}}(with basee) has Maclaurin series[12] ex=∑n=0∞xnn!=1+x+x22!+x33!+⋯.{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+\cdots .}It converges for allx. The exponentialgenerating functionof theBell numbersis the exponential function of the predecessor of the exponential function: exp⁡(exp⁡x−1)=∑n=0∞Bnn!xn{\displaystyle \exp(\exp {x}-1)=\sum _{n=0}^{\infty }{\frac {B_{n}}{n!}}x^{n}} Thenatural logarithm(with basee) has Maclaurin series[13] ln⁡(1−x)=−∑n=1∞xnn=−x−x22−x33−⋯,ln⁡(1+x)=∑n=1∞(−1)n+1xnn=x−x22+x33−⋯.{\displaystyle {\begin{aligned}\ln(1-x)&=-\sum _{n=1}^{\infty }{\frac {x^{n}}{n}}=-x-{\frac {x^{2}}{2}}-{\frac {x^{3}}{3}}-\cdots ,\\\ln(1+x)&=\sum _{n=1}^{\infty }(-1)^{n+1}{\frac {x^{n}}{n}}=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}-\cdots .\end{aligned}}} The last series is known asMercator series, named afterNicholas Mercator(since it was published in his 1668 treatiseLogarithmotechnia).[14]Both of these series converge for|x|<1{\displaystyle |x|<1}. (In addition, the series forln(1 −x)converges forx= −1, and the series forln(1 +x)converges forx= 1.)[13] Thegeometric seriesand its derivatives have Maclaurin series 11−x=∑n=0∞xn1(1−x)2=∑n=1∞nxn−11(1−x)3=∑n=2∞(n−1)n2xn−2.{\displaystyle {\begin{aligned}{\frac {1}{1-x}}&=\sum _{n=0}^{\infty }x^{n}\\{\frac {1}{(1-x)^{2}}}&=\sum _{n=1}^{\infty }nx^{n-1}\\{\frac {1}{(1-x)^{3}}}&=\sum _{n=2}^{\infty }{\frac {(n-1)n}{2}}x^{n-2}.\end{aligned}}} All are convergent for|x|<1{\displaystyle |x|<1}. These are special cases of thebinomial seriesgiven in the next section. 
Thebinomial seriesis the power series (1+x)α=∑n=0∞(αn)xn{\displaystyle (1+x)^{\alpha }=\sum _{n=0}^{\infty }{\binom {\alpha }{n}}x^{n}} whose coefficients are the generalizedbinomial coefficients[15] (αn)=∏k=1nα−k+1k=α(α−1)⋯(α−n+1)n!.{\displaystyle {\binom {\alpha }{n}}=\prod _{k=1}^{n}{\frac {\alpha -k+1}{k}}={\frac {\alpha (\alpha -1)\cdots (\alpha -n+1)}{n!}}.} (Ifn= 0, this product is anempty productand has value 1.) It converges for|x|<1{\displaystyle |x|<1}for any real or complex numberα. Whenα= −1, this is essentially the infinite geometric series mentioned in the previous section. The special casesα=⁠1/2⁠andα= −⁠1/2⁠give thesquare rootfunction and itsinverse:[16] (1+x)12=1+12x−18x2+116x3−5128x4+7256x5−⋯=∑n=0∞(−1)n−1(2n)!4n(n!)2(2n−1)xn,(1+x)−12=1−12x+38x2−516x3+35128x4−63256x5+⋯=∑n=0∞(−1)n(2n)!4n(n!)2xn.{\displaystyle {\begin{aligned}(1+x)^{\frac {1}{2}}&=1+{\frac {1}{2}}x-{\frac {1}{8}}x^{2}+{\frac {1}{16}}x^{3}-{\frac {5}{128}}x^{4}+{\frac {7}{256}}x^{5}-\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n-1}(2n)!}{4^{n}(n!)^{2}(2n-1)}}x^{n},\\(1+x)^{-{\frac {1}{2}}}&=1-{\frac {1}{2}}x+{\frac {3}{8}}x^{2}-{\frac {5}{16}}x^{3}+{\frac {35}{128}}x^{4}-{\frac {63}{256}}x^{5}+\cdots &=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}}}x^{n}.\end{aligned}}} When only thelinear termis retained, this simplifies to thebinomial approximation. 
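The square-root special case α = 1/2 can be checked numerically using the recurrence C(α, n+1) = C(α, n)·(α − n)/(n + 1) for the generalized binomial coefficients (the helper name `sqrt_series` is ours):

```python
import math

def sqrt_series(x, terms):
    """Partial sum of the binomial series (1+x)^(1/2) = sum_n C(1/2, n) x^n,
    valid for |x| < 1. Coefficients are built up by the recurrence
    C(1/2, n+1) = C(1/2, n) * (1/2 - n) / (n + 1)."""
    total, coeff = 0.0, 1.0          # C(1/2, 0) = 1
    for n in range(terms):
        total += coeff * x ** n
        coeff *= (0.5 - n) / (n + 1)
    return total

x = 0.3
assert abs(sqrt_series(x, 60) - math.sqrt(1 + x)) < 1e-12
```

Keeping only the n = 0 and n = 1 terms recovers the binomial approximation (1 + x)^{1/2} ≈ 1 + x/2 mentioned above.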
The usualtrigonometric functionsand their inverses have the following Maclaurin series:[17] sin⁡x=∑n=0∞(−1)n(2n+1)!x2n+1=x−x33!+x55!−⋯for allxcos⁡x=∑n=0∞(−1)n(2n)!x2n=1−x22!+x44!−⋯for allxtan⁡x=∑n=1∞B2n(−4)n(1−4n)(2n)!x2n−1=x+x33+2x515+⋯for|x|<π2sec⁡x=∑n=0∞(−1)nE2n(2n)!x2n=1+x22+5x424+⋯for|x|<π2arcsin⁡x=∑n=0∞(2n)!4n(n!)2(2n+1)x2n+1=x+x36+3x540+⋯for|x|≤1arccos⁡x=π2−arcsin⁡x=π2−∑n=0∞(2n)!4n(n!)2(2n+1)x2n+1=π2−x−x36−3x540−⋯for|x|≤1arctan⁡x=∑n=0∞(−1)n2n+1x2n+1=x−x33+x55−⋯for|x|≤1,x≠±i{\displaystyle {\begin{aligned}\sin x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)!}}x^{2n+1}&&=x-{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}-\cdots &&{\text{for all }}x\\[6pt]\cos x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n)!}}x^{2n}&&=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots &&{\text{for all }}x\\[6pt]\tan x&=\sum _{n=1}^{\infty }{\frac {B_{2n}(-4)^{n}\left(1-4^{n}\right)}{(2n)!}}x^{2n-1}&&=x+{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\sec x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}E_{2n}}{(2n)!}}x^{2n}&&=1+{\frac {x^{2}}{2}}+{\frac {5x^{4}}{24}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\arcsin x&=\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x+{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}+\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arccos x&={\frac {\pi }{2}}-\arcsin x\\&={\frac {\pi }{2}}-\sum _{n=0}^{\infty }{\frac {(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&={\frac {\pi }{2}}-x-{\frac {x^{3}}{6}}-{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\arctan x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}x^{2n+1}&&=x-{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}-\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm i\end{aligned}}} All angles are expressed inradians. The numbersBkappearing in the expansions oftanxare theBernoulli numbers. 
TheEkin the expansion ofsecxareEuler numbers.[18] Thehyperbolic functionshave Maclaurin series closely related to the series for the corresponding trigonometric functions:[19] sinh⁡x=∑n=0∞x2n+1(2n+1)!=x+x33!+x55!+⋯for allxcosh⁡x=∑n=0∞x2n(2n)!=1+x22!+x44!+⋯for allxtanh⁡x=∑n=1∞B2n4n(4n−1)(2n)!x2n−1=x−x33+2x515−17x7315+⋯for|x|<π2arsinh⁡x=∑n=0∞(−1)n(2n)!4n(n!)2(2n+1)x2n+1=x−x36+3x540−⋯for|x|≤1artanh⁡x=∑n=0∞x2n+12n+1=x+x33+x55+⋯for|x|≤1,x≠±1{\displaystyle {\begin{aligned}\sinh x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{(2n+1)!}}&&=x+{\frac {x^{3}}{3!}}+{\frac {x^{5}}{5!}}+\cdots &&{\text{for all }}x\\[6pt]\cosh x&=\sum _{n=0}^{\infty }{\frac {x^{2n}}{(2n)!}}&&=1+{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}+\cdots &&{\text{for all }}x\\[6pt]\tanh x&=\sum _{n=1}^{\infty }{\frac {B_{2n}4^{n}\left(4^{n}-1\right)}{(2n)!}}x^{2n-1}&&=x-{\frac {x^{3}}{3}}+{\frac {2x^{5}}{15}}-{\frac {17x^{7}}{315}}+\cdots &&{\text{for }}|x|<{\frac {\pi }{2}}\\[6pt]\operatorname {arsinh} x&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}(2n)!}{4^{n}(n!)^{2}(2n+1)}}x^{2n+1}&&=x-{\frac {x^{3}}{6}}+{\frac {3x^{5}}{40}}-\cdots &&{\text{for }}|x|\leq 1\\[6pt]\operatorname {artanh} x&=\sum _{n=0}^{\infty }{\frac {x^{2n+1}}{2n+1}}&&=x+{\frac {x^{3}}{3}}+{\frac {x^{5}}{5}}+\cdots &&{\text{for }}|x|\leq 1,\ x\neq \pm 1\end{aligned}}} The numbersBkappearing in the series fortanhxare theBernoulli numbers.[19] Thepolylogarithmshave these defining identities: Li2(x)=∑n=1∞1n2xnLi3(x)=∑n=1∞1n3xn{\displaystyle {\begin{aligned}{\text{Li}}_{2}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{2}}}x^{n}\\{\text{Li}}_{3}(x)&=\sum _{n=1}^{\infty }{\frac {1}{n^{3}}}x^{n}\end{aligned}}} TheLegendre chi functionsare defined as follows: χ2(x)=∑n=0∞1(2n+1)2x2n+1χ3(x)=∑n=0∞1(2n+1)3x2n+1{\displaystyle {\begin{aligned}\chi _{2}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{2}}}x^{2n+1}\\\chi _{3}(x)&=\sum _{n=0}^{\infty }{\frac {1}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}} And the formulas presented below are calledinverse tangent integrals: 
{\displaystyle {\begin{aligned}{\text{Ti}}_{2}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{2}}}x^{2n+1}\\{\text{Ti}}_{3}(x)&=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{(2n+1)^{3}}}x^{2n+1}\end{aligned}}}

In statistical thermodynamics these formulas are of great importance.

The complete elliptic integrals of the first kind K and of the second kind E can be defined as follows:

{\displaystyle {\begin{aligned}{\frac {2}{\pi }}K(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{16^{n}(n!)^{4}}}x^{2n}\\{\frac {2}{\pi }}E(x)&=\sum _{n=0}^{\infty }{\frac {[(2n)!]^{2}}{(1-2n)16^{n}(n!)^{4}}}x^{2n}\end{aligned}}}

The Jacobi theta functions describe the world of the elliptic modular functions, and they have these Taylor series:

{\displaystyle {\begin{aligned}\vartheta _{00}(x)&=1+2\sum _{n=1}^{\infty }x^{n^{2}}\\\vartheta _{01}(x)&=1+2\sum _{n=1}^{\infty }(-1)^{n}x^{n^{2}}\end{aligned}}}

The regular partition number sequence P(n) has the following generating function:

{\displaystyle \vartheta _{00}(x)^{-1/6}\vartheta _{01}(x)^{-2/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{-1/24}=\sum _{n=0}^{\infty }P(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{k}}}}

The strict partition number sequence Q(n) has the following generating function:

{\displaystyle \vartheta _{00}(x)^{1/6}\vartheta _{01}(x)^{-1/3}{\biggl [}{\frac {\vartheta _{00}(x)^{4}-\vartheta _{01}(x)^{4}}{16\,x}}{\biggr ]}^{1/24}=\sum _{n=0}^{\infty }Q(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{1-x^{2k-1}}}}

Several methods exist for the calculation of Taylor series of a large number of functions.
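The product forms of the two partition generating functions above can be expanded directly. The sketch below (not from the original article) multiplies in one geometric-series factor 1/(1 - x^k) at a time, which amounts to the classic dynamic-programming recurrence for partition counts; Q(n) uses the odd-part product exactly as written above:

```python
def series_coeffs_product(N, parts):
    # Coefficients up to degree N of prod_{k in parts} 1/(1 - x^k).
    # Multiplying by 1/(1 - x^k) means c[n] += c[n - k] for increasing n.
    c = [0] * (N + 1)
    c[0] = 1
    for k in parts:
        for n in range(k, N + 1):
            c[n] += c[n - k]
    return c

N = 10
P = series_coeffs_product(N, range(1, N + 1))     # prod 1/(1 - x^k): P(n)
Q = series_coeffs_product(N, range(1, N + 1, 2))  # prod 1/(1 - x^(2k-1)): Q(n)
print(P)  # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
print(Q)  # [1, 1, 1, 2, 2, 3, 4, 5, 6, 8, 10]
```

For example, P(10) = 42 counts all partitions of 10, while Q(10) = 10 counts its partitions into odd parts (equivalently, by Euler's theorem, into distinct parts).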
One can attempt to use the definition of the Taylor series, though this often requires generalizing the form of the coefficients according to a readily apparent pattern. Alternatively, one can use manipulations such as substitution, multiplication or division, and addition or subtraction of standard Taylor series to construct the Taylor series of a function, by virtue of Taylor series being power series. In some cases, one can also derive the Taylor series by repeatedly applying integration by parts. Particularly convenient is the use of computer algebra systems to calculate Taylor series.

In order to compute the 7th degree Maclaurin polynomial for the function

{\displaystyle f(x)=\ln(\cos x),\quad x\in {\bigl (}{-{\tfrac {\pi }{2}}},{\tfrac {\pi }{2}}{\bigr )},}

one may first rewrite the function as

{\displaystyle f(x)={\ln }{\bigl (}1+(\cos x-1){\bigr )},}

the composition of the two functions {\displaystyle x\mapsto \ln(1+x)} and {\displaystyle x\mapsto \cos x-1.} The Taylor series for the natural logarithm is (using big O notation)

{\displaystyle \ln(1+x)=x-{\frac {x^{2}}{2}}+{\frac {x^{3}}{3}}+O{\left(x^{4}\right)}}

and for the cosine function

{\displaystyle \cos x-1=-{\frac {x^{2}}{2}}+{\frac {x^{4}}{24}}-{\frac {x^{6}}{720}}+O{\left(x^{8}\right)}.}

The first several terms from the second series can be substituted into each term of the first series.
Because the first term in the second series has degree 2, three terms of the first series suffice to give a 7th-degree polynomial:

{\displaystyle {\begin{aligned}f(x)&=\ln {\bigl (}1+(\cos x-1){\bigr )}\\&=(\cos x-1)-{\tfrac {1}{2}}(\cos x-1)^{2}+{\tfrac {1}{3}}(\cos x-1)^{3}+O{\left((\cos x-1)^{4}\right)}\\&=-{\frac {x^{2}}{2}}-{\frac {x^{4}}{12}}-{\frac {x^{6}}{45}}+O{\left(x^{8}\right)}.\end{aligned}}}

Since the cosine is an even function, the coefficients for all the odd powers are zero.

Suppose we want the Taylor series at 0 of the function

{\displaystyle g(x)={\frac {e^{x}}{\cos x}}.}

The Taylor series for the exponential function is

{\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots ,}

and the series for cosine is

{\displaystyle \cos x=1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots .}

Assume the series for their quotient is

{\displaystyle {\frac {e^{x}}{\cos x}}=c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots }

Multiplying both sides by the denominator {\displaystyle \cos x} and then expanding it as a series yields

{\displaystyle {\begin{aligned}e^{x}&=\left(c_{0}+c_{1}x+c_{2}x^{2}+c_{3}x^{3}+c_{4}x^{4}+\cdots \right)\left(1-{\frac {x^{2}}{2!}}+{\frac {x^{4}}{4!}}-\cdots \right)\\[5mu]&=c_{0}+c_{1}x+\left(c_{2}-{\frac {c_{0}}{2}}\right)x^{2}+\left(c_{3}-{\frac {c_{1}}{2}}\right)x^{3}+\left(c_{4}-{\frac {c_{2}}{2}}+{\frac {c_{0}}{4!}}\right)x^{4}+\cdots \end{aligned}}}

Comparing the coefficients of {\displaystyle g(x)\cos x} with the coefficients of {\displaystyle e^{x},}

{\displaystyle c_{0}=1,\ \ c_{1}=1,\ \ c_{2}-{\tfrac {1}{2}}c_{0}={\tfrac {1}{2}},\ \ c_{3}-{\tfrac {1}{2}}c_{1}={\tfrac {1}{6}},\ \ c_{4}-{\tfrac {1}{2}}c_{2}+{\tfrac {1}{24}}c_{0}={\tfrac {1}{24}},\ \ldots .}

The coefficients c_i of the series for g(x) can thus be computed one at a time, amounting to long division of the series for e^x and cos x:

{\displaystyle {\frac {e^{x}}{\cos x}}=1+x+x^{2}+{\tfrac {2}{3}}x^{3}+{\tfrac {1}{2}}x^{4}+\cdots .}

Here we employ a method called "indirect expansion" to expand the given function. This method uses the known Taylor expansion of the exponential function. In order to expand (1 + x)e^x as a Taylor series in x, we use the known Taylor series of the function e^x:

{\displaystyle e^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\cdots .}

Thus,

{\displaystyle {\begin{aligned}(1+x)e^{x}&=e^{x}+xe^{x}=\sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=0}^{\infty }{\frac {x^{n+1}}{n!}}\\&=1+\sum _{n=1}^{\infty }{\frac {x^{n}}{n!}}+\sum _{n=1}^{\infty }{\frac {x^{n}}{(n-1)!}}=1+\sum _{n=1}^{\infty }\left({\frac {1}{n!}}+{\frac {1}{(n-1)!}}\right)x^{n}\\&=1+\sum _{n=1}^{\infty }{\frac {n+1}{n!}}x^{n}\\&=\sum _{n=0}^{\infty }{\frac {n+1}{n!}}x^{n}.\end{aligned}}}

Classically, algebraic functions are defined by an algebraic equation, and transcendental functions (including those discussed above) are defined by some property that holds for them, such as a differential equation. For example, the exponential function is the function which is equal to its own derivative everywhere and assumes the value 1 at the origin. However, one may equally well define an analytic function by its Taylor series.
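The series long division worked above for e^x / cos x can be checked with exact rational arithmetic. The sketch below (not from the original article) applies the recurrence c_n = e_n - sum of c_k cos_{n-k} for k < n, which is valid because the constant term of cos x is 1:

```python
from fractions import Fraction
from math import factorial

N = 5  # compute c_0 .. c_4
e_coeffs = [Fraction(1, factorial(n)) for n in range(N)]  # e^x coefficients
cos_coeffs = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0 else Fraction(0)
              for n in range(N)]                          # cos x coefficients

# Long division of power series, one coefficient at a time.
c = []
for n in range(N):
    c.append(e_coeffs[n] - sum(c[k] * cos_coeffs[n - k] for k in range(n)))
print([str(t) for t in c])  # ['1', '1', '1', '2/3', '1/2']
```

The result reproduces 1 + x + x^2 + (2/3)x^3 + (1/2)x^4 from the text.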
Taylor series are used to define functions and "operators" in diverse areas of mathematics. In particular, this is true in areas where the classical definitions of functions break down. For example, using Taylor series, one may extend analytic functions to sets of matrices and operators, such as the matrix exponential or matrix logarithm. In other areas, such as formal analysis, it is more convenient to work directly with the power series themselves. Thus one may define a solution of a differential equation as a power series which, one hopes to prove, is the Taylor series of the desired solution.

The Taylor series may also be generalized to functions of more than one variable with[20]

{\displaystyle {\begin{aligned}T(x_{1},\ldots ,x_{d})&=\sum _{n_{1}=0}^{\infty }\cdots \sum _{n_{d}=0}^{\infty }{\frac {(x_{1}-a_{1})^{n_{1}}\cdots (x_{d}-a_{d})^{n_{d}}}{n_{1}!\cdots n_{d}!}}\,\left({\frac {\partial ^{n_{1}+\cdots +n_{d}}f}{\partial x_{1}^{n_{1}}\cdots \partial x_{d}^{n_{d}}}}\right)(a_{1},\ldots ,a_{d})\\&=f(a_{1},\ldots ,a_{d})+\sum _{j=1}^{d}{\frac {\partial f(a_{1},\ldots ,a_{d})}{\partial x_{j}}}(x_{j}-a_{j})+{\frac {1}{2!}}\sum _{j=1}^{d}\sum _{k=1}^{d}{\frac {\partial ^{2}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}}}(x_{j}-a_{j})(x_{k}-a_{k})\\&\qquad \qquad +{\frac {1}{3!}}\sum _{j=1}^{d}\sum _{k=1}^{d}\sum _{l=1}^{d}{\frac {\partial ^{3}f(a_{1},\ldots ,a_{d})}{\partial x_{j}\partial x_{k}\partial x_{l}}}(x_{j}-a_{j})(x_{k}-a_{k})(x_{l}-a_{l})+\cdots \end{aligned}}}

For example, for a function f(x, y) that depends on two variables, x and y, the Taylor series to second order about the point (a, b) is

{\displaystyle f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)+{\frac {1}{2!}}{\Big (}(x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b){\Big )}}

where the subscripts denote the respective partial derivatives.

A second-order Taylor series expansion of a scalar-valued function of more than one variable can be written compactly as

{\displaystyle T(\mathbf {x} )=f(\mathbf {a} )+(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}Df(\mathbf {a} )+{\frac {1}{2!}}(\mathbf {x} -\mathbf {a} )^{\mathsf {T}}\left\{D^{2}f(\mathbf {a} )\right\}(\mathbf {x} -\mathbf {a} )+\cdots ,}

where Df(a) is the gradient of f evaluated at x = a and D^2 f(a) is the Hessian matrix. Applying the multi-index notation, the Taylor series for several variables becomes

{\displaystyle T(\mathbf {x} )=\sum _{|\alpha |\geq 0}{\frac {(\mathbf {x} -\mathbf {a} )^{\alpha }}{\alpha !}}\left({\mathrm {\partial } ^{\alpha }}f\right)(\mathbf {a} ),}

which is to be understood as a still more abbreviated multi-index version of the first equation of this paragraph, with a full analogy to the single variable case.
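The compact gradient/Hessian form can be checked numerically. The sketch below (an illustration, not from the original article) uses the sample function f(x, y) = e^x ln(1 + y) about a = (0, 0), whose gradient (0, 1) and Hessian [[0, 1], [1, -1]] at the origin are computed by hand:

```python
import math

def f(x, y):
    return math.exp(x) * math.log(1 + y)

def T(x, y):
    # T(x) = f(a) + g . (x - a) + 0.5 * (x - a)^T H (x - a), with a = (0, 0)
    g = (0.0, 1.0)                     # gradient of f at the origin
    H = ((0.0, 1.0), (1.0, -1.0))      # Hessian of f at the origin
    lin = g[0] * x + g[1] * y
    quad = 0.5 * (H[0][0] * x * x + 2 * H[0][1] * x * y + H[1][1] * y * y)
    return f(0.0, 0.0) + lin + quad

x, y = 0.05, 0.03
print(abs(f(x, y) - T(x, y)) < 1e-4)  # True: error is third order in the step
```

Near the origin the discrepancy is governed by the omitted third-order terms, so it shrinks cubically as the evaluation point approaches (0, 0).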
In order to compute a second-order Taylor series expansion around the point (a, b) = (0, 0) of the function

{\displaystyle f(x,y)=e^{x}\ln(1+y),}

one first computes all the necessary partial derivatives:

{\displaystyle {\begin{aligned}f_{x}&=e^{x}\ln(1+y)\\[6pt]f_{y}&={\frac {e^{x}}{1+y}}\\[6pt]f_{xx}&=e^{x}\ln(1+y)\\[6pt]f_{yy}&=-{\frac {e^{x}}{(1+y)^{2}}}\\[6pt]f_{xy}&=f_{yx}={\frac {e^{x}}{1+y}}.\end{aligned}}}

Evaluating these derivatives at the origin gives the Taylor coefficients

{\displaystyle {\begin{aligned}f_{x}(0,0)&=0\\f_{y}(0,0)&=1\\f_{xx}(0,0)&=0\\f_{yy}(0,0)&=-1\\f_{xy}(0,0)&=f_{yx}(0,0)=1.\end{aligned}}}

Substituting these values into the general formula

{\displaystyle {\begin{aligned}T(x,y)=&f(a,b)+(x-a)f_{x}(a,b)+(y-b)f_{y}(a,b)\\&{}+{\frac {1}{2!}}\left((x-a)^{2}f_{xx}(a,b)+2(x-a)(y-b)f_{xy}(a,b)+(y-b)^{2}f_{yy}(a,b)\right)+\cdots \end{aligned}}}

produces

{\displaystyle {\begin{aligned}T(x,y)&=0+0(x-0)+1(y-0)+{\frac {1}{2}}{\big (}0(x-0)^{2}+2(x-0)(y-0)+(-1)(y-0)^{2}{\big )}+\cdots \\&=y+xy-{\tfrac {1}{2}}y^{2}+\cdots \end{aligned}}}

Since ln(1 + y) is analytic in |y| < 1, we have

{\displaystyle e^{x}\ln(1+y)=y+xy-{\tfrac {1}{2}}y^{2}+\cdots ,\qquad |y|<1.}

The trigonometric Fourier series enables one to express a periodic function (or a function defined on a closed interval [a, b]) as an infinite sum of trigonometric functions (sines and cosines). In this sense, the Fourier series is analogous to the Taylor series, since the latter allows one to express a function as an infinite sum of powers. Nevertheless, the two series differ from each other in several relevant issues:
https://en.wikipedia.org/wiki/Taylor_series
In mathematics, a rational function is any function that can be defined by a rational fraction, which is an algebraic fraction such that both the numerator and the denominator are polynomials. The coefficients of the polynomials need not be rational numbers; they may be taken in any field K. In this case, one speaks of a rational function and a rational fraction over K. The values of the variables may be taken in any field L containing K. Then the domain of the function is the set of the values of the variables for which the denominator is not zero, and the codomain is L.

The set of rational functions over a field K is a field, the field of fractions of the ring of the polynomial functions over K.

A function f is called a rational function if it can be written in the form[1]

f(x) = P(x) / Q(x),

where P and Q are polynomial functions of x and Q is not the zero function. The domain of f is the set of all values of x for which the denominator Q(x) is not zero.

However, if P and Q have a non-constant polynomial greatest common divisor R, then setting P = P_1 R and Q = Q_1 R produces a rational function f_1 = P_1 / Q_1, which may have a larger domain than f, and is equal to f on the domain of f. It is a common usage to identify f and f_1, that is, to extend "by continuity" the domain of f to that of f_1. Indeed, one can define a rational fraction as an equivalence class of fractions of polynomials, where two fractions A(x)/B(x) and C(x)/D(x) are considered equivalent if A(x)D(x) = B(x)C(x).
In this case P(x)/Q(x) is equivalent to P_1(x)/Q_1(x).

A proper rational function is a rational function in which the degree of P(x) is less than the degree of Q(x) and both are real polynomials, named by analogy to a proper fraction in Q.[2]

In complex analysis, a rational function is the ratio of two polynomials with complex coefficients, where Q is not the zero polynomial and P and Q have no common factor (this avoids f taking the indeterminate value 0/0). The domain of f is the set of complex numbers such that Q(z) ≠ 0. Every rational function can be naturally extended to a function whose domain and range are the whole Riemann sphere (complex projective line). A complex rational function with degree one is a Möbius transformation.

Rational functions are representative examples of meromorphic functions.[3] Iteration of rational functions on the Riemann sphere (i.e. a rational mapping) creates discrete dynamical systems.[4]

There are several non-equivalent definitions of the degree of a rational function. Most commonly, the degree of a rational function is the maximum of the degrees of its constituent polynomials P and Q, when the fraction is reduced to lowest terms. If the degree of f is d, then the equation f(z) = w has d distinct solutions in z except for certain values of w, called critical values, where two or more solutions coincide or where some solution is rejected at infinity (that is, when the degree of the equation decreases after having cleared the denominator).

The degree of the graph of a rational function is not the degree as defined above: it is the maximum of the degree of the numerator and one plus the degree of the denominator.
In some contexts, such as in asymptotic analysis, the degree of a rational function is the difference between the degrees of the numerator and the denominator.[5]: §13.6.1[6]: Chapter IV

In network synthesis and network analysis, a rational function of degree two (that is, the ratio of two polynomials of degree at most two) is often called a biquadratic function.[7]

The rational function is not defined at It is asymptotic to x/2 as x → ∞.

The rational function is defined for all real numbers, but not for all complex numbers, since if x were a square root of −1 (i.e. the imaginary unit or its negative), then formal evaluation would lead to division by zero, which is undefined.

A constant function such as f(x) = π is a rational function since constants are polynomials. The function itself is rational, even though the value of f(x) is irrational for all x.

Every polynomial function f(x) = P(x) is a rational function with Q(x) = 1. A function that cannot be written in this form, such as f(x) = sin(x), is not a rational function. However, the adjective "irrational" is not generally used for functions.

Every Laurent polynomial can be written as a rational function, while the converse is not necessarily true, i.e., the ring of Laurent polynomials is a subring of the rational functions.

The rational function f(x) = x/x is equal to 1 for all x except 0, where there is a removable singularity. The sum, product, or quotient (excepting division by the zero polynomial) of two rational functions is itself a rational function. However, the process of reduction to standard form may inadvertently result in the removal of such singularities unless care is taken. Using the definition of rational functions as equivalence classes gets around this, since x/x is equivalent to 1/1.
The coefficients of a Taylor series of any rational function satisfy a linear recurrence relation, which can be found by equating the rational function to a Taylor series with indeterminate coefficients, and collecting like terms after clearing the denominator. For example, Multiplying through by the denominator and distributing, After adjusting the indices of the sums to get the same powers of x, we get Combining like terms gives Since this holds true for all x in the radius of convergence of the original Taylor series, we can compute as follows. Since the constant term on the left must equal the constant term on the right it follows that Then, since there are no powers of x on the left, all of the coefficients on the right must be zero, from which it follows that Conversely, any sequence that satisfies a linear recurrence determines a rational function when used as the coefficients of a Taylor series. This is useful in solving such recurrences, since by using partial fraction decomposition we can write any proper rational function as a sum of factors of the form 1/(ax + b) and expand these as geometric series, giving an explicit formula for the Taylor coefficients; this is the method of generating functions.

In abstract algebra the concept of a polynomial is extended to include formal expressions in which the coefficients of the polynomial can be taken from any field. In this setting, given a field F and some indeterminate X, a rational expression (also known as a rational fraction or, in algebraic geometry, a rational function) is any element of the field of fractions of the polynomial ring F[X]. Any rational expression can be written as the quotient of two polynomials P/Q with Q ≠ 0, although this representation isn't unique. P/Q is equivalent to R/S, for polynomials P, Q, R, and S, when PS = QR. However, since F[X] is a unique factorization domain, there is a unique representation for any rational expression P/Q with P and Q polynomials of lowest degree and Q chosen to be monic.
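The worked example's formulas did not survive extraction, so the sketch below illustrates the recurrence idea with a stand-in example of my own choosing: clearing the denominator of num(x)/den(x) gives sum over k of den[k] * a_{n-k} = num_n, which determines the Taylor coefficients a_n one at a time. For x / (1 - x - x^2) the recurrence is a_n = a_{n-1} + a_{n-2}, the Fibonacci numbers:

```python
from fractions import Fraction

def taylor_coeffs_rational(num, den, N):
    """Taylor coefficients at 0 of num(x)/den(x), given coefficient lists
    (den[0] must be nonzero). Clearing the denominator yields the linear
    recurrence sum_k den[k] * a_{n-k} = num_n, solved for a_n."""
    a = []
    for n in range(N):
        num_n = Fraction(num[n]) if n < len(num) else Fraction(0)
        s = num_n - sum(Fraction(den[k]) * a[n - k]
                        for k in range(1, min(n, len(den) - 1) + 1))
        a.append(s / den[0])
    return a

# x / (1 - x - x^2): coefficients are the Fibonacci numbers
coeffs = taylor_coeffs_rational([0, 1], [1, -1, -1], 10)
print([int(c) for c in coeffs])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Running the recurrence in the other direction is exactly the generating-function method mentioned in the text: a sequence satisfying a linear recurrence is encoded by a rational function.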
This is similar to how a fraction of integers can always be written uniquely in lowest terms by canceling out common factors. The field of rational expressions is denoted F(X). This field is said to be generated (as a field) over F by (a transcendental element) X, because F(X) does not contain any proper subfield containing both F and the element X.

Like polynomials, rational expressions can also be generalized to n indeterminates X_1, ..., X_n, by taking the field of fractions of F[X_1, ..., X_n], which is denoted by F(X_1, ..., X_n).

An extended version of the abstract idea of rational function is used in algebraic geometry. There the function field of an algebraic variety V is formed as the field of fractions of the coordinate ring of V (more accurately said, of a Zariski-dense affine open set in V). Its elements f are considered as regular functions in the sense of algebraic geometry on non-empty open sets U, and also may be seen as morphisms to the projective line.

Rational functions are used in numerical analysis for interpolation and approximation of functions, for example the Padé approximants introduced by Henri Padé. Approximations in terms of rational functions are well suited for computer algebra systems and other numerical software. Like polynomials, they can be evaluated straightforwardly, and at the same time they express more diverse behavior than polynomials.
Rational functions are used to approximate or model more complex equations in science and engineering, including fields and forces in physics, spectroscopy in analytical chemistry, enzyme kinetics in biochemistry, electronic circuitry, aerodynamics, medicine concentrations in vivo, wave functions for atoms and molecules, optics and photography to improve image resolution, and acoustics and sound.

In signal processing, the Laplace transform (for continuous systems) or the z-transform (for discrete-time systems) of the impulse response of commonly used linear time-invariant systems (filters) with infinite impulse response are rational functions over the complex numbers.
https://en.wikipedia.org/wiki/Rational_function
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms: the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.

The term "analysis of algorithms" was coined by Donald Knuth.[1] Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end.[2] For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.
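The logarithmic step count of binary search can be observed directly by instrumenting the loop. A minimal sketch (not from the original article), counting one three-way comparison per iteration:

```python
import math

def binary_search_steps(sorted_list, target):
    """Iterative binary search; returns (found, number of loop iterations)."""
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return True, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, steps

n = 1_000_000
found, steps = binary_search_steps(list(range(n)), -1)  # worst case: absent element
# The iteration count never exceeds floor(log2(n)) + 1, i.e. 20 for n = 10^6.
print(found, steps <= math.floor(math.log2(n)) + 1)
```

Even an unsuccessful search over a million elements takes at most 20 comparisons, which is the O(log n) behavior described above.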
Exact (not asymptotic) measures of efficiency can sometimes be computed, but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. a Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.

Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.

Two cost models are generally used:[3][4][5][6][7] The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.

A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that could be used in practice, and therefore there are algorithms that are faster than what would naively be thought possible.[8]

Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases.
Run-time efficiency is a topic of great interest in computer science: a program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.

Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms. Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following: Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns).
Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate.

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if, beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^2). Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average case; for example, the worst-case scenario for quicksort is O(n^2), but the average-case run-time is O(n log n).

Assuming the run-time follows a power rule, t ≈ k n^a, the coefficient a can be found[9] by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^a, so that a = log(t2/t1) / log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line), but it could still serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table: it is clearly seen that the first algorithm exhibits a linear order of growth, indeed following the power rule.
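The slope formula above is a one-liner. A minimal sketch (with hypothetical timing numbers chosen for illustration, not taken from the article's table):

```python
import math

def empirical_order(t1, n1, t2, n2):
    # Slope of the log-log plot: a = log(t2/t1) / log(n2/n1)
    return math.log(t2 / t1) / math.log(n2 / n1)

# Hypothetical timings for a linear-time algorithm: t = k * n
a_lin = empirical_order(1000, 250, 4000, 1000)
# Hypothetical timings for a quadratic algorithm: t = k * n^2
a_quad = empirical_order(1000, 250, 16000, 1000)
print(round(a_lin, 3), round(a_quad, 3))  # 1.0 2.0
```

A run-time that quadruples when n quadruples gives slope 1 (linear); one that grows sixteenfold gives slope 2 (quadratic), matching the local order of growth described in the text.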
The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.

The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode: A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth.

In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is: The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times,[10] which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: the inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression: which can be factored[11] as The total time required to run the inner loop test can be evaluated similarly: which can be factored as Therefore, the total run-time for this algorithm is: which reduces to As a rule of thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n^2 is the highest-order term, so one can conclude that f(n) = O(n^2). Formally this can be proven as follows:

Prove that

{\displaystyle \left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},\ n\geq n_{0}}

Let k be a constant greater than or equal to each of T1, ..., T7. Then

{\displaystyle {\begin{aligned}&T_{6}(n^{2}+n)+T_{5}(n^{2}+3n)+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq k(n^{2}+n)+k(n^{2}+3n)+kn+5k\\={}&2kn^{2}+5kn+5k\leq 2kn^{2}+5kn^{2}+5kn^{2}\ ({\text{for }}n\geq 1)=12kn^{2}\end{aligned}}}

Therefore

{\displaystyle \left[{\frac {1}{2}}(n^{2}+n)\right]T_{6}+\left[{\frac {1}{2}}(n^{2}+3n)\right]T_{5}+(n+1)T_{4}+T_{1}+T_{2}+T_{3}+T_{7}\leq cn^{2},n\geq n_{0}{\text{ for }}c=12k,n_{0}=1}

A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps.
This would mean that the algorithm's run-time breaks down as follows:[12]

4 + \sum_{i=1}^{n} i \le 4 + \sum_{i=1}^{n} n = 4 + n^2 \le 5n^2 \ (\text{for } n \ge 1) = O(n^2).

The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.

Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.

Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.
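The memory-management pseudocode is not reproduced here, but a policy of the kind that produces O(2^n) growth (each unit increase in file size doubling the allocation) can be sketched as follows; the function and the doubling rule are illustrative assumptions, not the original pseudocode:

```python
def memory_needed(n):
    # Hypothetical doubling policy: each unit increase in file size n
    # doubles the memory allocated, giving O(2^n) growth.
    mem = 1
    for _ in range(n):
        mem *= 2
    return mem
```

Even a modest file size of n = 20 already demands over a million units of memory under such a policy, illustrating why exponential growth is usually unmanageable.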
This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor, e.g., one may have K > k log log n so long as K/k > 6 and n < 2^(2^6) = 2^64.

For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity n log n), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity n^2) for small data, as the simpler algorithm is faster on small data.
https://en.wikipedia.org/wiki/Analysis_of_algorithms
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.[1] The term benchmark is also commonly utilized for the purposes of elaborately designed benchmarking programs themselves.

Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating point operation performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems (DBMS). Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures. Benchmarking as a part of continuous integration is called Continuous Benchmarking.[2]

As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth.

Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this by specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device.
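The mechanics of running standard trials against an operation can be shown with a tiny synthetic benchmark harness; the function names and the choice of reporting the minimum are illustrative conventions, not a standard:

```python
import statistics
import time

def benchmark(fn, *args, trials=5, reps=1000):
    # Time fn(*args) over several trials and report per-call seconds.
    # The minimum is often the most stable figure, since system noise
    # can only add time to a run, never remove it.
    per_call = []
    for _ in range(trials):
        start = time.perf_counter()
        for _ in range(reps):
            fn(*args)
        per_call.append((time.perf_counter() - start) / reps)
    return {"min": min(per_call), "median": statistics.median(per_call)}
```

For example, `benchmark(sorted, list(range(100)))` gives a rough per-call cost of sorting a small list; real benchmark suites add warm-up runs, statistical significance tests, and controlled environments.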
Benchmarks are particularly important in CPU design, giving processor architects the ability to measure and make tradeoffs in microarchitectural decisions. For example, if a benchmark extracts the key algorithms of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller snippet on a cycle-accurate simulator can give clues on how to improve performance. Prior to 2000, computer and microprocessor architects used SPEC to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact.

Computer manufacturers are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when RISC and VLIW architectures emphasized the importance of compiler technology as it related to performance. Benchmarks are now regularly used by compiler companies to improve not only their own benchmark scores, but real application performance.

CPUs that have many execution units — such as a superscalar CPU, a VLIW CPU, or a reconfigurable computing CPU — typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU.

Given the large number of benchmarks available, a manufacturer can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark.
Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They also have been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing.

Ideally benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.

Features of benchmarking software may include recording/exporting the course of performance to a spreadsheet file, visualization such as drawing line graphs or color-coded tiles, and pausing the process to be able to resume without having to start over. Software can have additional features specific to its purpose, for example, disk benchmarking software may be able to optionally start measuring the disk speed within a specified range of the disk rather than the full disk, measure random access reading speed and latency, have a "quick scan" feature which measures the speed through samples of specified intervals and sizes, and allow specifying a data block size, meaning the number of requested bytes per read request.[3]

Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions. Interpretation of benchmarking data is also extraordinarily difficult. Here is a partial list of common challenges:

There are seven vital characteristics for benchmarks.[6] These key properties are:
https://en.wikipedia.org/wiki/Benchmark_(computing)
In computer science, best, worst, and average cases of a given algorithm express what the resource usage is at least, at most and on average, respectively. Usually the resource being considered is running time, i.e. time complexity, but could also be memory or some other resource. Best case is the function which performs the minimum number of steps on input data of n elements. Worst case is the function which performs the maximum number of steps on input data of size n. Average case is the function which performs an average number of steps on input data of n elements.

In real-time computing, the worst-case execution time is often of particular concern since it is important to know how much time might be needed in the worst case to guarantee that the algorithm will always finish on time.

Average performance and worst-case performance are the most used in algorithm analysis. Less widely found is best-case performance, but it does have uses: for example, where the best cases of individual tasks are known, they can be used to improve the accuracy of an overall worst-case analysis. Computer scientists use probabilistic analysis techniques, especially expected value, to determine expected running times.

The terms are used in other contexts; for example the worst- and best-case outcome of an epidemic, worst-case temperature to which an electronic circuit element is exposed, etc. Where components of specified tolerance are used, devices must be designed to work properly with the worst-case combination of tolerances and external conditions.

The term best-case performance is used in computer science to describe an algorithm's behavior under optimal conditions. For example, the best case for a simple linear search on a list occurs when the desired element is the first element of the list. Development and choice of algorithms is rarely based on best-case performance: most academic and commercial enterprises are more interested in improving average-case complexity and worst-case performance.
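The linear search example above can be made concrete with a version that counts comparisons; the comparison counter is added here purely for illustration:

```python
def linear_search(items, target):
    # Returns (index, comparisons).  Best case: the target is the
    # first element, so one comparison.  Worst case: the target is
    # last or absent, so len(items) comparisons.
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons
```

Averaged over all positions of a present target, the expected number of comparisons is about (n + 1) / 2, sitting between the best and worst cases.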
Algorithms may also be trivially modified to have good best-case running time by hard-coding solutions to a finite set of inputs, making the measure almost meaningless.[1]

Worst-case performance analysis and average-case performance analysis have some similarities, but in practice usually require different tools and approaches. Determining what typical input means is difficult, and often that average input has properties which make it difficult to characterise mathematically (consider, for instance, algorithms that are designed to operate on strings of text). Similarly, even when a sensible description of a particular "average case" (which will probably only be applicable for some uses of the algorithm) is possible, they tend to result in more difficult analysis of equations.[2]

Worst-case analysis gives a safe analysis (the worst case is never underestimated), but one which can be overly pessimistic, since there may be no (realistic) input that would take this many steps.

In some situations it may be necessary to use a pessimistic analysis in order to guarantee safety. Often however, a pessimistic analysis may be too pessimistic, so an analysis that gets closer to the real value but may be optimistic (perhaps with some known low probability of failure) can be a much more practical approach. One modern approach in academic theory to bridge the gap between worst-case and average-case analysis is called smoothed analysis.

When analyzing algorithms which often take a small time to complete, but periodically require a much larger time, amortized analysis can be used to determine the worst-case running time over a (possibly infinite) series of operations. This amortized cost can be much closer to the average cost, while still providing a guaranteed upper limit on the running time. So e.g. online algorithms are frequently based on amortized analysis.
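A standard illustration of amortized analysis is the dynamic array that doubles its capacity when full: an individual append is occasionally O(n), yet the total copying over n appends stays below 2n, so the amortized cost per append is O(1). A sketch that just counts element copies:

```python
def total_copies(n):
    # Simulate n appends to a dynamic array that doubles its capacity
    # when full, counting how many element copies the resizes cause.
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size     # move existing elements to a new array
            capacity *= 2
        size += 1
    return copies
```

The copies sum to 1 + 2 + 4 + ⋯ < 2n, which is the guaranteed upper limit the amortized argument provides.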
The worst-case analysis is related to the worst-case complexity.[3]

Many algorithms with bad worst-case performance have good average-case performance. For problems we want to solve, this is a good thing: we can hope that the particular instances we care about are average. For cryptography, this is very bad: we want typical instances of a cryptographic problem to be hard. Here methods like random self-reducibility can be used for some specific problems to show that the worst case is no harder than the average case, or, equivalently, that the average case is no easier than the worst case.

On the other hand, some data structures like hash tables have very poor worst-case behaviors, but a well written hash table of sufficient size will statistically never give the worst case; the average number of operations performed follows an exponential decay curve, and so the run time of an operation is statistically bounded.
https://en.wikipedia.org/wiki/Best,_worst_and_average_case
An optimizing compiler is a compiler designed to generate code that is optimized in aspects such as minimizing program execution time, memory usage, storage size, and power consumption.[1] Optimization is generally implemented as a sequence of optimizing transformations, a.k.a. compiler optimizations – algorithms that transform code to produce semantically equivalent code optimized for some aspect.

Optimization is limited by a number of factors. Theoretical analysis indicates that some optimization problems are NP-complete, or even undecidable.[2] Also, producing perfectly optimal code is not possible since optimizing for one aspect often degrades performance for another. Optimization is a collection of heuristic methods for improving resource usage in typical programs.[3]: 585

Scope describes how much of the input code is considered to apply optimizations.

Local scope optimizations use information local to a basic block.[4] Since basic blocks contain no control flow statements, these optimizations require minimal analysis, reducing time and storage requirements. However, no information is retained across jumps.

Global scope optimizations, also known as intra-procedural optimizations, operate on individual functions.[4] This gives them more information to work with but often makes expensive computations necessary. Worst-case assumptions need to be made when function calls occur or global variables are accessed because little information about them is available.

Peephole optimizations are usually performed late in the compilation process after machine code has been generated. This optimization examines a few adjacent instructions (similar to "looking through a peephole" at the code) to see whether they can be replaced by a single instruction or a shorter sequence of instructions.[3]: 554 For instance, a multiplication of a value by two might be more efficiently executed by left-shifting the value or by adding the value to itself (this example is also an instance of strength reduction).
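A toy peephole pass over symbolic instructions shows the multiply-by-power-of-two rewrite mentioned above; the instruction tuples and opcode names are invented for illustration, not taken from any real compiler:

```python
def peephole(instrs):
    # Each instruction is (op, dst, src, operand).  Multiplication by
    # a positive power of two is rewritten as a left shift, an
    # instance of strength reduction performed through the "peephole".
    out = []
    for op, dst, src, operand in instrs:
        if (op == "mul" and isinstance(operand, int) and operand > 0
                and operand & (operand - 1) == 0):
            # 2^k  ->  shift left by k
            out.append(("shl", dst, src, operand.bit_length() - 1))
        else:
            out.append((op, dst, src, operand))
    return out
```

Real peephole optimizers match many such patterns over short instruction windows and must also check that flags and side effects are preserved.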
Interprocedural optimizations analyze all of a program's source code. The more information available, the more effective the optimizations can be. The information can be used for various optimizations, including function inlining, where a call to a function is replaced by a copy of the function body.

Link-time optimization (LTO), or whole-program optimization, is a more general class of interprocedural optimization. During LTO, the compiler has visibility across translation units which allows it to perform more aggressive optimizations like cross-module inlining and devirtualization.

Machine code optimization involves using an object code optimizer to analyze the program after all machine code has been linked. Techniques such as macro compression, which conserves space by condensing common instruction sequences, become more effective when the entire executable task image is available for analysis.[5]

Most high-level programming languages share common programming constructs and abstractions, such as branching constructs (if, switch), looping constructs (for, while), and encapsulation constructs (structures, objects). Thus, similar optimization techniques can be used across languages. However, certain language features make some optimizations difficult. For instance, pointers in C and C++ make array optimization difficult; see alias analysis. However, languages such as PL/I that also support pointers implement optimizations for arrays. Conversely, some language features make certain optimizations easier. For example, in some languages, functions are not permitted to have side effects. Therefore, if a program makes several calls to the same function with the same arguments, the compiler can infer that the function's result only needs to be computed once. In languages where functions are allowed to have side effects, the compiler can restrict such optimization to functions that it can determine have no side effects.
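The effect described for side-effect-free functions can be mimicked at run time with memoization: once a function is known to be pure, repeated calls with the same arguments need not recompute the result. A sketch using Python's functools.lru_cache (the compiler performs the equivalent rewrite statically; this merely illustrates the idea):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def pure_fn(x):
    # A side-effect-free function: its result depends only on x,
    # so a repeated call with the same argument can reuse the
    # previously computed value instead of recomputing it.
    return x * x + 1

pure_fn(10)
pure_fn(10)  # second call is served from the cache
```

A compiler doing this transformation must first prove purity, which is exactly why languages that forbid side effects make the optimization easy.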
Many optimizations that operate on abstract programming concepts (loops, objects, structures) are independent of the machine targeted by the compiler, but many of the most effective optimizations are those that best exploit special features of the target platform. Examples are instructions that do several things at once, such as decrement register and branch if not zero.

The following is an instance of a local machine-dependent optimization. To set a register to 0, the obvious way is to use the constant '0' in an instruction that sets a register value to a constant. A less obvious way is to XOR a register with itself or subtract it from itself. It is up to the compiler to know which instruction variant to use. On many RISC machines, both instructions would be equally appropriate, since they would both be the same length and take the same time. On many other microprocessors such as the Intel x86 family, it turns out that the XOR variant is shorter and probably faster, as there will be no need to decode an immediate operand, nor use the internal "immediate operand register"; the same applies on IBM System/360 and successors for the subtract variant.[6] A potential problem with this is that XOR or subtract may introduce a data dependency on the previous value of the register, causing a pipeline stall, which occurs when the processor must delay execution of an instruction because it depends on the result of a previous instruction. However, processors often treat the XOR of a register with itself or the subtract of a register from itself as a special case that does not cause stalls.

Optimization includes the following, sometimes conflicting themes.

Loop optimization acts on the statements that make up a loop, such as a for loop, for example loop-invariant code motion.
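Loop-invariant code motion can be shown with a before/after pair; a compiler performs this transformation automatically, and the Python functions below merely illustrate the rewrite:

```python
def before(a, x, y):
    # The product x * y does not change inside the loop, yet it is
    # recomputed on every iteration.
    for i in range(len(a)):
        a[i] = a[i] + x * y
    return a

def after(a, x, y):
    # After loop-invariant code motion: the invariant expression is
    # hoisted out of the loop and computed once.
    t = x * y
    for i in range(len(a)):
        a[i] = a[i] + t
    return a
```

Both versions compute the same result; the hoisted version simply performs one multiplication instead of one per iteration.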
Loop optimizations can have a significant impact because many programs spend a large percentage of their time inside loops.[3]: 596 Some optimization techniques primarily designed to operate on loops include:

Prescient store optimizations allow store operations to occur earlier than would otherwise be permitted in the context of threads and locks. The process needs some way of knowing ahead of time what value will be stored by the assignment that it should have followed. The purpose of this relaxation is to allow compiler optimization to perform certain kinds of code rearrangements that preserve the semantics of properly synchronized programs.[9]

Data-flow optimizations, based on data-flow analysis, primarily depend on how certain properties of data are propagated by control edges in the control-flow graph. Some of these include:

These optimizations are intended to be done after transforming the program into a special form called Static Single Assignment, in which every variable is assigned in only one place. Although some function without SSA, they are most effective with SSA. Many optimizations listed in other sections also benefit with no special changes, such as register allocation.

Although many of these also apply to non-functional languages, they either originate in or are particularly critical in functional languages such as Lisp and ML.

Interprocedural optimization works on the entire program, across procedure and file boundaries. It works tightly with intraprocedural counterparts, carried out with the cooperation of a local part and a global part. Typical interprocedural optimizations are procedure inlining, interprocedural dead-code elimination, interprocedural constant propagation, and procedure reordering. As usual, the compiler needs to perform interprocedural analysis before its actual optimizations. Interprocedural analyses include alias analysis, array access analysis, and the construction of a call graph.
Interprocedural optimization is common in modern commercial compilers from SGI, Intel, Microsoft, and Sun Microsystems. For a long time, the open source GCC was criticized for a lack of powerful interprocedural analysis and optimizations, though this is now improving.[16] Another open-source compiler with full analysis and optimization infrastructure is Open64.

Due to the extra time and space required by interprocedural analysis, most compilers do not perform it by default. Users must use compiler options explicitly to tell the compiler to enable interprocedural analysis and other expensive optimizations.

There can be a wide range of optimizations that a compiler can perform, ranging from simple and straightforward optimizations that take little compilation time to elaborate and complex optimizations that involve considerable amounts of compilation time.[3]: 15 Accordingly, compilers often provide options to their control command or procedure to allow the compiler user to choose how much optimization to request; for instance, the IBM FORTRAN H compiler allowed the user to specify no optimization, optimization at the registers level only, or full optimization.[3]: 737 By the 2000s, it was common for compilers, such as Clang, to have several compiler command options that could affect a variety of optimization choices, starting with the familiar -O2 switch.[17]

An approach to isolating optimization is the use of so-called post-pass optimizers (some commercial versions of which date back to mainframe software of the late 1970s).[18] These tools take the executable output by an optimizing compiler and optimize it even further. Post-pass optimizers usually work on the assembly language or machine code level (in contrast with compilers that optimize intermediate representations of programs).
One such example is the Portable C Compiler (PCC) of the 1980s, which had an optional pass that would perform post-optimizations on the generated assembly code.[3]: 736

Another consideration is that optimization algorithms are complicated and, especially when being used to compile large, complex programming languages, can contain bugs that introduce errors in the generated code or cause internal errors during compilation. Compiler errors of any kind can be disconcerting to the user, but especially so in this case, since it may not be clear that the optimization logic is at fault.[19] In the case of internal errors, the problem can be partially ameliorated by a "fail-safe" programming technique in which the optimization logic in the compiler is coded such that a failure is trapped, a warning message issued, and the rest of the compilation proceeds to successful completion.[20]

Early compilers of the 1960s were often primarily concerned with simply compiling code correctly or efficiently, such that compile times were a major concern. One notable early optimizing compiler was the IBM FORTRAN H compiler of the late 1960s.[3]: 737 Another of the earliest and important optimizing compilers, one that pioneered several advanced techniques, was that for BLISS (1970), which was described in The Design of an Optimizing Compiler (1975).[3]: 740, 779 By the late 1980s, optimizing compilers were sufficiently effective that programming in assembly language declined. This co-evolved with the development of RISC chips and advanced processor features such as superscalar processors, out-of-order execution, and speculative execution, which were designed to be targeted by optimizing compilers rather than by human-written assembly code.[citation needed]
https://en.wikipedia.org/wiki/Compiler_optimization
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved:

The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. This way the performance can be

Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:

The word performance in computer performance means the same thing that performance means in other contexts, that is, it means "How well is the computer doing the work it is supposed to do?"[1]

Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.

Performance engineering within systems engineering encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle which ensures that a solution will be designed, implemented, and operationally supported to meet the performance requirements defined for the solution. Performance engineering continuously deals with trade-offs between types of performance. Occasionally a CPU designer can find a way to make a CPU with better overall performance by improving one of the aspects of performance, presented below, without sacrificing the CPU's performance in other areas. For example, building the CPU out of better, faster transistors. However, sometimes pushing one type of performance to an extreme leads to a CPU with worse overall performance, because other important aspects were sacrificed to get one impressive-looking number, for example, the chip's clock rate (see the megahertz myth).
Application Performance Engineering (APE) is a specific methodology within performance engineering designed to meet the challenges associated with application performance in increasingly distributed mobile, cloud and terrestrial IT environments. It includes the roles, skills, activities, practices, tools and deliverables applied at every phase of the application lifecycle that ensure an application will be designed, implemented and operationally supported to meet non-functional performance requirements.

Computer performance metrics (things to measure) include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speed up. CPU benchmarks are available.[2]

Availability of a system is typically measured as a factor of its reliability - as reliability increases, so does availability (that is, less downtime). Availability of a system may also be increased by the strategy of focusing on increasing testability and maintainability and not on reliability. Improving maintainability is generally easier than reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, it is likely to dominate the availability (prediction uncertainty) problem, even while maintainability levels are very high.

Response time is the total amount of time it takes to respond to a request for service. In computing, that service can be any unit of work from a simple disk IO to loading a complex web page. The response time is the sum of three numbers:[3]

Most consumers pick a computer architecture (normally Intel IA-32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
Some system designers building parallel computers pick CPUs based on the speed per dollar.

Channel capacity is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel. By the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[4][5]

Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel capacity and provides a mathematical model by which one can compute it. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[6]

Latency is a time delay between the cause and the effect of some physical change in the system being observed. Latency is a result of the limited velocity with which any physical interaction can take place. This velocity is always lower than or equal to the speed of light. Therefore, every physical system that has non-zero spatial dimensions will experience some sort of latency.

The precise definition of latency depends on the system being observed and the nature of stimulation. In communications, the lower limit of latency is determined by the medium being used for communications. In reliable two-way communication systems, latency limits the maximum rate that information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. In the field of human-machine interaction, perceptible latency (delay between what the user commands and when the computer provides the results) has a strong effect on user satisfaction and usability.

Computers run sets of instructions called a process. In operating systems, the execution of the process can be postponed if other processes are also executing.
In addition, the operating system can schedule when to perform the action that the process is commanding. For example, suppose a process commands that a computer card's voltage output be set high-low-high-low and so on at a rate of 1000 Hz. The operating system may choose to adjust the scheduling of each transition (high-low or low-high) based on an internal clock. The latency is the delay between the process instruction commanding the transition and the hardware actually transitioning the voltage from high to low or low to high.

System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and when it has a deterministic response.

In computer networking, bandwidth is a measurement of bit-rate of available or consumed data communication resources, expressed in bits per second or multiples of it (bit/s, kbit/s, Mbit/s, Gbit/s, etc.). Bandwidth sometimes defines the net bit rate (aka peak bit rate, information rate, or physical layer useful bit rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital communication system. For example, bandwidth tests measure the maximum throughput of a computer network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.

In general terms, throughput is the rate of production or the rate at which something can be processed. In communication networks, throughput is essentially synonymous with digital bandwidth consumption.
In wireless networks or cellular communication networks, the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog bandwidth and some measure of the system coverage area. In integrated circuits, often a block in a data flow diagram has a single input and a single output, and operates on discrete packets of information. Examples of such blocks are FFT modules or binary multipliers. Because the units of throughput are the reciprocal of the unit for propagation delay, which is 'seconds per message' or 'seconds per output', throughput can be used to relate a computational device performing a dedicated function such as an ASIC or embedded processor to a communications channel, simplifying system analysis.

Scalability is the ability of a system, network, or process to handle a growing amount of work in a capable manner or its ability to be enlarged to accommodate that growth.

The amount of electric power used by the computer (power consumption). This becomes especially important for systems with limited power sources such as solar, batteries, and human power. System designers building parallel computers, such as Google's hardware, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.[7] For spaceflight computers, the processing speed per watt ratio is a more useful performance criterion than raw processing speed due to limited on-board resources of power.[8]

Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed to use, this extra processing imposes computational or other costs through decompression; this situation is far from being a free lunch. Data compression is subject to a space–time complexity trade-off.
This is an important performance feature of mobile systems, from the smartphones you keep in your pocket to the portable embedded systems in a spacecraft.

The effect of computing on the environment, during manufacturing and recycling as well as during use. Measurements are taken with the objectives of reducing waste, reducing hazardous materials, and minimizing a computer's ecological footprint.

The number of transistors on an integrated circuit (IC). Transistor count is the most common measure of IC complexity.

Because there are so many programs to test a CPU on all aspects of performance, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation and the Certification Mark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).

In software engineering, performance testing is, in general, conducted to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate, or verify other quality attributes of the system, such as scalability, reliability, and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design, and architecture of a system.

In software engineering, profiling ("program profiling", "software profiling") is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. The most common use of profiling information is to aid program optimization. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). A number of different techniques may be used by profilers, such as event-based, statistical, instrumented, and simulation methods.
Performance tuning is the improvement of system performance. This is typically a computer application, but the same methods can be applied to economic markets, bureaucracies or other complex systems. The motivation for such activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept a higher load is called scalability, and modifying a system to handle a higher load is synonymous with performance tuning. Systematic tuning follows these steps:

Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects. The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see splash screen) or a file progress dialog box. However, it satisfies some human needs: it appears faster to the user as well as provides a visual cue to let them know the system is handling their request. In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance.

The total amount of time (t) required to execute a particular benchmark program is t = N × C / f, where N is the number of instructions actually executed, C is the average number of clock cycles per instruction (CPI), and f is the clock frequency. Even on one machine, a different compiler or the same compiler with different compiler optimization switches can change N and CPI; the benchmark executes faster if the new compiler can improve N or C without making the other worse. Often there is a trade-off between them: is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark? A CPU designer is often required to implement a particular instruction set, and so cannot change N.
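The relation being described is t = N × C / f. As a worked illustration with made-up numbers: a benchmark that executes N = 2 × 10⁹ instructions at an average of C = 1.5 cycles per instruction on an f = 3 GHz clock takes

```latex
t \;=\; \frac{N \cdot C}{f} \;=\; \frac{(2\times 10^{9})\,(1.5)}{3\times 10^{9}\ \text{Hz}} \;=\; 1\ \text{second.}
```

Halving C (say, via a better compiler or microarchitecture) halves t, exactly as raising f by the same factor would.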
Sometimes a designer focuses on improving performance by making significant improvements in f (with techniques such as deeper pipelines and faster caches), while (hopefully) not sacrificing too much C, leading to a speed-demon CPU design. Sometimes a designer focuses on improving performance by making significant improvements in CPI (with techniques such as out-of-order execution, superscalar CPUs, larger caches, caches with improved hit rates, improved branch prediction, speculative execution, etc.), while (hopefully) not sacrificing too much clock frequency, leading to a brainiac CPU design.[10] For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speed-demon techniques.[9]
https://en.wikipedia.org/wiki/Computer_performance
In computer science, empirical algorithmics (or experimental algorithmics) is the practice of using empirical methods to study the behavior of algorithms. The practice combines algorithm development and experimentation: algorithms are not just designed, but also implemented and tested in a variety of situations. In this process, an initial design of an algorithm is analyzed so that the algorithm may be developed in a stepwise manner.[1]

Methods from empirical algorithmics complement theoretical methods for the analysis of algorithms.[2] Through the principled application of empirical methods, particularly from statistics, it is often possible to obtain insights into the behavior of algorithms such as high-performance heuristic algorithms for hard combinatorial problems that are (currently) inaccessible to theoretical analysis.[3] Empirical methods can also be used to achieve substantial improvements in algorithmic efficiency.[4]

American computer scientist Catherine McGeoch identifies two main branches of empirical algorithmics: the first (known as empirical analysis) deals with the analysis and characterization of the behavior of algorithms, and the second (known as algorithm design or algorithm engineering) is focused on empirical methods for improving the performance of algorithms.[5] The former often relies on techniques and tools from statistics, while the latter is based on approaches from statistics, machine learning and optimization. Dynamic analysis tools, typically performance profilers, are commonly used when applying empirical methods for the selection and refinement of algorithms of various types for use in various contexts.[6][7][8]

Research in empirical algorithmics is published in several journals, including the ACM Journal on Experimental Algorithmics (JEA) and the Journal of Artificial Intelligence Research (JAIR). Besides Catherine McGeoch, well-known researchers in empirical algorithmics include Bernard Moret, Giuseppe F. Italiano, Holger H. Hoos, David S.
Johnson, and Roberto Battiti.[9]

In the absence of empirical algorithmics, analyzing the complexity of an algorithm can involve various theoretical methods applicable to various situations in which the algorithm may be used.[10] Memory and cache considerations are often significant factors to be considered in the theoretical choice of a complex algorithm, or the approach to its optimization, for a given purpose.[11][12] Performance profiling is a dynamic program analysis technique typically used for finding and analyzing bottlenecks in an entire application's code[13][14][15] or for analyzing an entire application to identify poorly performing code.[16] A profiler can reveal the code most relevant to an application's performance issues.[17]

A profiler may help to determine when to choose one algorithm over another in a particular situation.[18] When an individual algorithm is profiled, as with complexity analysis, memory and cache considerations are often more significant than instruction counts or clock cycles; however, the profiler's findings can be considered in light of how the algorithm accesses data rather than the number of instructions it uses.[19] Profiling may provide intuitive insight into an algorithm's behavior[20] by revealing performance findings as a visual representation.[21] Performance profiling has been applied, for example, during the development of algorithms for matching wildcards.
Early algorithms for matching wildcards, such as Rich Salz's wildmat algorithm,[22] typically relied on recursion, a technique criticized on grounds of performance.[23] The Krauss matching wildcards algorithm was developed based on an attempt to formulate a non-recursive alternative using test cases,[24] followed by optimizations suggested via performance profiling,[25] resulting in a new algorithmic strategy conceived in light of the profiling along with other considerations.[26] Profilers that collect data at the level of basic blocks[27] or that rely on hardware assistance[28] provide results that can be accurate enough to assist software developers in optimizing algorithms for a particular computer or situation.[29] Performance profiling can aid developer understanding of the characteristics of complex algorithms applied in complex situations, such as coevolutionary algorithms applied to arbitrary test-based problems, and may help lead to design improvements.[30]
https://en.wikipedia.org/wiki/Empirical_algorithmics
In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources.[1] In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or draw less power.

Although the term "optimization" is derived from "optimum",[2] achieving a truly optimal system is rare in practice; the pursuit of one is referred to as superoptimization. Optimization typically focuses on improving a system with respect to a specific quality metric rather than making it universally optimal. This often leads to trade-offs, where enhancing one metric may come at the expense of another. One popular example is the space-time tradeoff: reducing a program's execution time by increasing its memory consumption. Conversely, in scenarios where memory is limited, engineers might prioritize a slower algorithm to conserve space. There is rarely a single design that can excel in all situations, requiring engineers to prioritize attributes most relevant to the application at hand.

Furthermore, achieving absolute optimization often demands disproportionate effort relative to the benefits gained. Consequently, optimization processes usually stop once sufficient improvements are achieved, without striving for perfection. Fortunately, significant gains often occur early in the optimization process, making it practical to stop before reaching diminishing returns.

Optimization can occur at a number of levels. Typically the higher levels have greater impact, and are harder to change later on in a project, requiring significant changes or a complete rewrite if they need to be changed. Thus optimization can typically proceed via refinement from higher to lower, with initial gains being larger and achieved with less work, and later gains being smaller and requiring more work.
However, in some cases overall performance depends on performance of very low-level portions of a program, and small changes at a late stage or early consideration of low-level details can have outsized impact. Typically some consideration is given to efficiency throughout a project – though this varies significantly – but major optimization is often considered a refinement to be done late, if ever. On longer-running projects there are typically cycles of optimization, where improving one area reveals limitations in another, and these are typically curtailed when performance is acceptable or gains become too small or costly.

As performance is part of the specification of a program – a program that is unusably slow is not fit for purpose: a video game running at 60 frames per second is acceptable, but 6 frames per second is unacceptably choppy – performance is a consideration from the start, to ensure that the system is able to deliver sufficient performance, and early prototypes need to have roughly acceptable performance for there to be confidence that the final system will (with optimization) achieve acceptable performance. This is sometimes omitted in the belief that optimization can always be done later, resulting in prototype systems that are far too slow – often by an order of magnitude or more – and systems that ultimately are failures because they architecturally cannot achieve their performance goals, such as the Intel 432 (1981); or ones that take years of work to achieve acceptable performance, such as Java (1995), which only achieved acceptable performance with HotSpot (1999). The degree to which performance changes between prototype and production system, and how amenable it is to optimization, can be a significant source of uncertainty and risk.

At the highest level, the design may be optimized to make best use of the available resources, given goals, constraints, and expected use/load.
The architectural design of a system overwhelmingly affects its performance. For example, a system that is network latency-bound (where network latency is the main constraint on overall performance) would be optimized to minimize network trips, ideally making a single request (or no requests, as in a push protocol) rather than multiple roundtrips. Choice of design depends on the goals: when designing a compiler, if fast compilation is the key priority, a one-pass compiler is faster than a multi-pass compiler (assuming the same work), but if speed of output code is the goal, a slower multi-pass compiler fulfills the goal better, even though it takes longer itself. Choice of platform and programming language occur at this level, and changing them frequently requires a complete rewrite, though a modular system may allow rewrite of only some component – for example, for a Python program one may rewrite performance-critical sections in C. In a distributed system, choice of architecture (client-server, peer-to-peer, etc.) occurs at the design level, and may be difficult to change, particularly if all components cannot be replaced in sync (e.g., old clients).

Given an overall design, a good choice of efficient algorithms and data structures, and efficient implementation of these algorithms and data structures, comes next. After design, the choice of algorithms and data structures affects efficiency more than any other aspect of the program. Generally data structures are more difficult to change than algorithms, as a data structure assumption and its performance assumptions are used throughout the program, though this can be minimized by the use of abstract data types in function definitions, and keeping the concrete data structure definitions restricted to a few places. For algorithms, this primarily consists of ensuring that algorithms are constant O(1), logarithmic O(log n), linear O(n), or in some cases log-linear O(n log n) in the input (both in space and time).
Algorithms with quadratic complexity O(n²) fail to scale, and even linear algorithms cause problems if repeatedly called, and are typically replaced with constant or logarithmic ones if possible. Beyond asymptotic order of growth, the constant factors matter: an asymptotically slower algorithm may be faster or smaller (because simpler) than an asymptotically faster algorithm when they are both faced with small input, which may be the case that occurs in reality. Often a hybrid algorithm will provide the best performance, due to this tradeoff changing with size.

A general technique to improve performance is to avoid work. A good example is the use of a fast path for common cases, improving performance by avoiding unnecessary work: for example, using a simple text layout algorithm for Latin text, and only switching to a complex layout algorithm for complex scripts, such as Devanagari. Another important technique is caching, particularly memoization, which avoids redundant computations. Because of the importance of caching, there are often many levels of caching in a system, which can cause problems from memory use, and correctness issues from stale caches.

Beyond general algorithms and their implementation on an abstract machine, concrete source-code-level choices can make a significant difference. For example, on early C compilers, while(1) was slower than for(;;) for an unconditional loop, because while(1) evaluated 1 and then had a conditional jump which tested if it was true, while for(;;) had an unconditional jump. Some optimizations (such as this one) can nowadays be performed by optimizing compilers.
This depends on the source language, the target machine language, and the compiler, and can be both difficult to understand or predict and changes over time; this is a key place where understanding of compilers and machine code can improve performance. Loop-invariant code motion and return value optimization are examples of optimizations that reduce the need for auxiliary variables and can even result in faster performance by avoiding round-about optimizations.

Between the source and compile level, directives and build flags can be used to tune performance options in the source code and compiler respectively, such as using preprocessor defines to disable unneeded software features, optimizing for specific processor models or hardware capabilities, or predicting branching, for instance. Source-based software distribution systems such as BSD's Ports and Gentoo's Portage can take advantage of this form of optimization. Use of an optimizing compiler tends to ensure that the executable program is optimized at least as much as the compiler can predict.

At the lowest level, writing code using an assembly language designed for a particular hardware platform can produce the most efficient and compact code if the programmer takes advantage of the full repertoire of machine instructions. Many operating systems used on embedded systems have been traditionally written in assembler code for this reason. Programs (other than very small programs) are seldom written from start to finish in assembly due to the time and cost involved. Most are compiled down from a high-level language to assembly and hand-optimized from there. When efficiency and size are less important, large parts may be written in a high-level language. With more modern optimizing compilers and the greater complexity of recent CPUs, it is harder to write more efficient code than what the compiler generates, and few projects need this "ultimate" optimization step. Much of the code written today is intended to run on as many machines as possible.
As a consequence, programmers and compilers don't always take advantage of the more efficient instructions provided by newer CPUs or quirks of older models. Additionally, assembly code tuned for a particular processor without using such instructions might still be suboptimal on a different processor, expecting a different tuning of the code. Typically today, rather than writing in assembly language, programmers will use a disassembler to analyze the output of a compiler and change the high-level source code so that it can be compiled more efficiently, or understand why it is inefficient.

Just-in-time compilers can produce customized machine code based on run-time data, at the cost of compilation overhead. This technique dates to the earliest regular expression engines, and has become widespread with Java HotSpot and V8 for JavaScript. In some cases adaptive optimization may be able to perform run-time optimization exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors. Profile-guided optimization is an ahead-of-time (AOT) compilation optimization technique based on run-time profiles, and is similar to a static "average case" analog of the dynamic technique of adaptive optimization. Self-modifying code can alter itself in response to run-time conditions in order to optimize code; this was more common in assembly language programs.

Some CPU designs can perform some optimizations at run time. Some examples include out-of-order execution, speculative execution, instruction pipelines, and branch predictors. Compilers can help the program take advantage of these CPU features, for example through instruction scheduling. Code optimization can also be broadly categorized as platform-dependent and platform-independent techniques.
While the latter are effective on most or all platforms, platform-dependent techniques use specific properties of one platform, or rely on parameters depending on the single platform or even on the single processor. Writing or producing different versions of the same code for different processors might therefore be needed. For instance, in the case of compile-level optimization, platform-independent techniques are generic techniques (such as loop unrolling, reduction in function calls, memory-efficient routines, reduction in conditions, etc.) that impact most CPU architectures in a similar way. A great example of platform-independent optimization has been shown with the inner for loop, where it was observed that a loop with an inner for loop performs more computations per unit time than a loop without it or one with an inner while loop.[3] Generally, these serve to reduce the total instruction path length required to complete the program and/or reduce total memory usage during the process. On the other hand, platform-dependent techniques involve instruction scheduling, instruction-level parallelism, data-level parallelism, and cache optimization techniques (i.e., parameters that differ among various platforms), and the optimal instruction scheduling might be different even on different processors of the same architecture.

Computational tasks can be performed in several different ways with varying efficiency; rewriting a computation in a more efficient form with equivalent functionality is known as strength reduction. For example, consider the following C code snippet whose intention is to obtain the sum of all integers from 1 to N. This code can (assuming no arithmetic overflow) be rewritten using a mathematical formula. The optimization, sometimes performed automatically by an optimizing compiler, is to select a method (algorithm) that is more computationally efficient, while retaining the same functionality. See algorithmic efficiency for a discussion of some of these techniques.
However, a significant improvement in performance can often be achieved by removing extraneous functionality. Optimization is not always an obvious or intuitive process. In the example above, the "optimized" version might actually be slower than the original version if N were sufficiently small and the particular hardware happens to be much faster at performing addition and looping operations than multiplication and division. In some cases, however, optimization relies on using more elaborate algorithms, making use of "special cases" and special "tricks" and performing complex trade-offs. A "fully optimized" program might be more difficult to comprehend and hence may contain more faults than unoptimized versions. Beyond eliminating obvious antipatterns, some code-level optimizations decrease maintainability.

Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a trade-off – where one factor is optimized at the expense of others. For example, increasing the size of a cache improves run-time performance, but also increases memory consumption. Other common trade-offs include code clarity and conciseness.

There are instances where the programmer performing the optimization must decide to make the software better for some operations but at the cost of making other operations less efficient. These trade-offs may sometimes be of a non-technical nature – such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but comes perhaps with the burden of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations.

Optimization may include finding a bottleneck in a system – a component that is the limiting factor on performance.
In terms of code, this will often be a hot spot – a critical part of the code that is the primary consumer of the needed resource – though it can be another factor, such as I/O latency or network bandwidth. In computer science, resource consumption often follows a form of power law distribution, and the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations.[4] In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).

More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data – the setup, initialization time, and constant factors of the more complex algorithm can outweigh the benefit, and thus a hybrid algorithm or adaptive algorithm may be faster than any single algorithm. A performance profiler can be used to narrow down decisions about which functionality fits which conditions.[5]

In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor, due to the latency of each disk read. Caching the result is similarly effective, though also requiring larger memory use.

Optimization can reduce readability and add code that is used only to improve performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.

Donald Knuth made the following two statements on optimization: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%"[6] (He also attributed the quote to Tony Hoare several years later,[7] although this might have been an error as Hoare disclaims having coined the phrase.[8])

"In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering"[6]

"Premature optimization" is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.

When deciding whether to optimize a specific part of the program, Amdahl's Law should always be considered: the impact on the overall program depends very much on how much time is actually spent in that specific part, which is not always clear from looking at the code without a performance analysis.

A better approach is therefore to design first, code from the design and then profile/benchmark the resulting code to see which parts should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization. In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.

Modern compilers and operating systems are so efficient that the intended performance increases often fail to materialize. As an example, caching data at the application level that is again cached at the operating system level does not yield improvements in execution. Even so, it is a rare case when the programmer will remove failed optimizations from production code.
It is also true that advances in hardware will more often than not obviate any potential improvements, yet the obscuring code will persist into the future long after its purpose has been negated.

Optimization during code development using macros takes on different forms in different languages. In some procedural languages, such as C and C++, macros are implemented using token substitution. Nowadays, inline functions can be used as a type-safe alternative in many cases. In both cases, the inlined function body can then undergo further compile-time optimizations by the compiler, including constant folding, which may move some computations to compile time.

In many functional programming languages, macros are implemented using parse-time substitution of parse trees/abstract syntax trees, which it is claimed makes them safer to use. Since in many cases interpretation is used, that is one way to ensure that such computations are only performed at parse time, and sometimes the only way. Lisp originated this style of macro,[citation needed] and such macros are often called "Lisp-like macros". A similar effect can be achieved by using template metaprogramming in C++. In both cases, work is moved to compile time. The difference between C macros on one side, and Lisp-like macros and C++ template metaprogramming on the other side, is that the latter tools allow performing arbitrary computations at compile time/parse time, while expansion of C macros does not perform any computation, and relies on the optimizer's ability to perform it. Additionally, C macros do not directly support recursion or iteration, so they are not Turing complete. As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.

Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations.
Usually, the most powerful optimization is to find a superior algorithm. Optimizing a whole system is usually undertaken by programmers because it is too complex for automated optimizers. In this situation, programmers or system administrators explicitly change code so that the overall system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations. Since many parameters influence the program performance, the program optimization space is large. Meta-heuristics and machine learning are used to address the complexity of program optimization.[9] Use a profiler (or performance analyzer) to find the sections of the program that are taking the most resources – the bottleneck. Programmers sometimes believe they have a clear idea of where the bottleneck is, but intuition is frequently wrong.[citation needed] Optimizing an unimportant piece of code will typically do little to help the overall performance. When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program. More often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine. After the programmer is reasonably sure that the best algorithm is selected, code optimization can start. Loops can be unrolled (for lower loop overhead, although this can often lead to lower speed if it overloads the CPU cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on. (See the algorithmic efficiency article for these and other techniques.)
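As a minimal sketch of tailoring an algorithm to known input structure (the function name and scenario here are hypothetical, not from the source): when a list is known to be sorted except for a handful of freshly appended items, merging the stragglers back into the sorted prefix avoids re-sorting everything from scratch.

```python
import bisect

def resort_mostly_sorted(items, num_appended):
    """Merge the last `num_appended` unsorted items into an
    already-sorted prefix, instead of re-sorting the whole list."""
    sorted_part = items[:len(items) - num_appended]
    for x in items[len(items) - num_appended:]:
        bisect.insort(sorted_part, x)  # binary search + insert
    return sorted_part

data = list(range(10_000)) + [42, 7, 9_999_999]
assert resort_mostly_sorted(data, 3) == sorted(data)
```

Whether this beats the generic sort in practice is exactly the kind of question that should be settled by profiling, not assumed.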
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler. Rewriting sections "pays off" in these circumstances because of a general "rule of thumb" known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So, putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed – if the correct part(s) can be located. Manual optimization sometimes has the side effect of undermining readability. Thus code optimizations should be carefully documented (preferably using in-line comments), and their effect on future development evaluated. The program that performs an automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers can often tailor the generated code to specific processors. Today, automated optimizations are almost exclusively limited to compiler optimization. However, because compiler optimizations are usually limited to a fixed set of rather general optimizations, there is considerable demand for optimizers which can accept descriptions of problem- and language-specific optimizations, allowing an engineer to specify custom optimizations. Tools that accept descriptions of optimizations are called program transformation systems and are beginning to be applied to real software systems such as C++. Some high-level languages (Eiffel, Esterel) optimize their programs by using an intermediate language.
Grid computing or distributed computing aims to optimize the whole system, by moving tasks from computers with high usage to computers with idle time. Sometimes, the time taken to carry out the optimization may itself be an issue. Optimizing existing code usually does not add new features, and worse, it might add new bugs to previously working code (as any change might). Because manually optimized code might sometimes have less "readability" than unoptimized code, optimization might impact its maintainability as well. Optimization comes at a price and it is important to be sure that the investment is worthwhile. An automatic optimizer (or optimizing compiler, a program that performs code optimization) may itself have to be optimized, either to further improve the efficiency of its target programs or else to speed up its own operation. A compilation performed with optimization "turned on" usually takes longer, although this is usually only a problem when programs are quite large. In particular, for just-in-time compilers the performance of the run-time compile component, executing together with its target code, is the key to improving overall execution speed.
https://en.wikipedia.org/wiki/Program_optimization
In software engineering, profiling (program profiling, software profiling) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing...
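As a concrete sketch of instrumented profiling, Python's standard-library cProfile records per-function call counts and cumulative times; the deliberately slow function below is illustrative, not from the source.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # A deliberately quadratic hotspot for the profiler to find.
    total = 0
    for i in range(n):
        total += sum(range(i))
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(500)
profiler.disable()

# Report the functions sorted by cumulative time spent in them.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

The report makes the hotspot obvious without any guessing, which is exactly the workflow the surrounding text recommends: measure first, then optimize.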
The output of a profiler may take several forms. A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious.[1] A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions[2] or various loads.[3] Profiling results can be ingested by a compiler that provides profile-guided optimization.[4] Profiling results can be used to guide the design and optimization of an individual algorithm; the Krauss matching wildcards algorithm is an example.[5] Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications.[6] Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer intervals to detect "hot spots" in executing code.[citation needed] This was an early example of sampling (see below). In early 1974 instruction-set simulators permitted full trace and other performance-monitoring features.[citation needed] Profiler-driven program analysis on Unix dates back to 1973,[7] when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis.[8] In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM[9] (Analysis Tools with OM). The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique – modifying a program to analyze itself – is known as "instrumentation".
In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999.[10] Flat profilers compute the average call times, from the calls, and do not break down the call times based on the callee or the context. Call-graph profilers[8] show the call times and frequencies of the functions, and also the call chains involved, based on the callee. In some tools full context is not preserved. Input-sensitive profilers[11][12][13] add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values. They generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are also programs themselves, analyze target programs by collecting information on the target program's execution. Based on their data granularity, which depends upon how profilers collect information, they are classified as event-based or statistical profilers. Profilers interrupt program execution to collect information. Those interrupts can limit time measurement resolution, which implies that timing results should be taken with a grain of salt. Basic block profilers report a number of machine clock cycles devoted to executing each line of code, or timing based on adding those together; the timings reported per basic block may not reflect a difference between cache hits and misses.[14][15] Event-based profilers are available for many programming languages. Statistical profilers operate by sampling. A sampling profiler probes the target program's call stack at regular intervals using operating system interrupts. Sampling profiles are typically less numerically accurate and specific, providing only a statistical approximation, but allow the target program to run at near full speed. "The actual amount of error is usually more than one sampling period.
In fact, if a value is n times the sampling period, the expected error in it is the square root of n sampling periods."[16] In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program and thus don't have as many side effects (such as on memory caches or instruction decoding pipelines). Also, since they don't affect the execution speed as much, they can detect issues that would otherwise be hidden. They are also relatively immune to over-evaluating the cost of small, frequently called routines or 'tight' loops. They can show the relative amount of time spent in user mode versus interruptible kernel mode such as system call processing. Unfortunately, running kernel code to handle the interrupts incurs a minor loss of CPU cycles from the target program, diverts cache usage, and cannot distinguish the various tasks occurring in uninterruptible kernel code (microsecond-range activity) from user code. Dedicated hardware can do better: ARM Cortex-M3 and some recent MIPS processors' JTAG interfaces have a PCSAMPLE register, which samples the program counter in a truly undetectable manner, allowing non-intrusive collection of a flat profile. Some commonly used[17] statistical profilers for Java/managed code are SmartBear Software's AQtime[18] and Microsoft's CLR Profiler.[19] Those profilers also support native code profiling, along with Apple Inc.'s Shark (OS X),[20] OProfile (Linux),[21] Intel VTune and Parallel Amplifier (part of Intel Parallel Studio), and Oracle Performance Analyzer,[22] among others. Instrumentation, by contrast, effectively adds instructions to the target program to collect the required information. Note that instrumenting a program can cause performance changes, and may in some cases lead to inaccurate results and/or heisenbugs.
The effect will depend on what information is being collected, on the level of timing details reported, and on whether basic block profiling is used in conjunction with instrumentation.[23] For example, adding code to count every procedure/routine call will probably have less effect than counting how many times each statement is obeyed. A few computers have special hardware to collect information; in this case the impact on the program is minimal. Instrumentation is key to determining the level of control and amount of time resolution available to the profilers.
https://en.wikipedia.org/wiki/Profiling_(computer_programming)
In mathematics, Gödel's speed-up theorem, proved by Gödel (1936), shows that there are theorems whose proofs can be drastically shortened by working in more powerful axiomatic systems. Kurt Gödel showed how to find explicit examples of statements in formal systems that are provable in that system but whose shortest proof is unimaginably long. For example, the statement "This statement cannot be proved in Peano arithmetic in fewer than a googolplex symbols" is provable in Peano arithmetic (PA), but the shortest proof has at least a googolplex symbols, by an argument similar to the proof of Gödel's first incompleteness theorem: if PA is consistent, then it cannot prove the statement in fewer than a googolplex symbols, because the existence of such a proof would itself be a theorem of PA, a contradiction. But simply enumerating all strings of length up to a googolplex and checking that each such string is not a proof (in PA) of the statement yields a proof of the statement (which is necessarily longer than a googolplex symbols). The statement has a short proof in a more powerful system: in fact the proof given in the previous paragraph is a proof in the system of Peano arithmetic plus the statement "Peano arithmetic is consistent" (which, per the incompleteness theorem, cannot be proved in Peano arithmetic). In this argument, Peano arithmetic can be replaced by any more powerful consistent system, and a googolplex can be replaced by any number that can be described concisely in the system. Harvey Friedman found some explicit natural examples of this phenomenon, giving some explicit statements in Peano arithmetic and other formal systems whose shortest proofs are ridiculously long (Smoryński 1982). For example, he gave a statement about finite trees that is provable in Peano arithmetic, but whose shortest proof has length at least A(1000), where A(0) = 1 and A(n+1) = 2^A(n). The statement is a special case of Kruskal's theorem and has a short proof in second-order arithmetic.
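For a sense of scale, the function A above is an iterated-exponential tower; a few lines of Python (a sketch of the definition, nothing more) show how quickly it explodes:

```python
def A(n):
    """A(0) = 1 and A(n+1) = 2**A(n): a tower of n twos topped by 1."""
    value = 1
    for _ in range(n):
        value = 2 ** value
    return value

print([A(k) for k in range(5)])  # → [1, 2, 4, 16, 65536]
# A(5) = 2**65536 already has 19,729 decimal digits; A(1000) is far
# beyond anything that could physically be written down.
```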
If one takes Peano arithmetic together with the negation of the statement above, one obtains an inconsistent theory whose shortest known contradiction is correspondingly long.
https://en.wikipedia.org/wiki/G%C3%B6del%27s_speed-up_theorem
In computational complexity theory, the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize. In its general form the theorem states that NSPACE(s(n)) = co-NSPACE(s(n)) for any function s(n) ≥ log n. The result is equivalently stated as NL = co-NL; although this is the special case when s(n) = log n, it implies the general theorem by a standard padding argument.[1] The result solved the second LBA problem. In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP. The principle used to prove the theorem has become known as inductive counting. It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON.[2] We prove here that NL = co-NL. The theorem is obtained from this special case by a padding argument. The st-connectivity problem asks, given a digraph G and two vertices s and t, whether there is a directed path from s to t in G. This problem is NL-complete, therefore its complement st-non-connectivity is co-NL-complete. It suffices to show that st-non-connectivity is in NL. This proves co-NL ⊆ NL, and by complementation, NL ⊆ co-NL. We fix a digraph G, a source vertex s, and a target vertex t. We denote by R_k the set of vertices which are reachable from s in at most k steps. Note that if t is reachable from s, it is reachable in at most n−1 steps, where n is the number of vertices; therefore we are reduced to testing whether t ∉ R_{n−1}.
We remark that R_0 = {s}, and R_{k+1} is the set of vertices v which are either in R_k, or the target of an edge w → v where w is in R_k. This immediately gives an algorithm to decide t ∈ R_n, by successively computing R_1, …, R_n. However, this algorithm uses too much space to solve the problem in NL, since storing a set R_k requires one bit per vertex. The crucial idea of the proof is that instead of computing R_{k+1} from R_k, it is possible to compute the size of R_{k+1} from the size of R_k, with the help of non-determinism. We iterate over vertices and increment a counter for each vertex that is found to belong to R_{k+1}. The problem is how to determine whether v ∈ R_{k+1} for a given vertex v, when we only have the size of R_k available. To this end, we iterate over vertices w, and for each w, we non-deterministically guess whether w ∈ R_k. If we guess w ∈ R_k, and v = w or there is an edge w → v, then we determine that v belongs to R_{k+1}. If this fails for all vertices w, then v does not belong to R_{k+1}. Thus, the computation that determines whether v belongs to R_{k+1} splits into branches for the different guesses of which vertices belong to R_k. A mechanism is needed to make all of these branches abort (reject immediately), except the one where all the guesses were correct. For this, when we have made a "yes-guess" that w ∈ R_k, we check this guess, by non-deterministically looking for a path from s to w of length at most k. If this check fails, we abort the current branch. If it succeeds, we increment a counter of "yes-guesses". On the other hand, we do not check the "no-guesses" that w ∉ R_k (this would require solving st-non-connectivity, which is precisely the problem that we are solving in the first place). However, at the end of the loop over w, we check that the counter of "yes-guesses" matches the size of R_k, which we know. If there is a mismatch, we abort. Otherwise, all the "yes-guesses" were correct, and there was exactly the right number of them, thus all "no-guesses" were correct as well. This concludes the computation of the size of R_{k+1} from the size of R_k.
Iteratively, we compute the sizes of R_1, R_2, …, R_{n−2}. Finally, we check whether t ∈ R_{n−1}, which is possible from the size of R_{n−2} by the sub-algorithm that is used inside the computation of the size of R_{k+1}. As a corollary, in the same article, Immerman proved that, using descriptive complexity's equality between NL and FO(Transitive Closure), the logarithmic hierarchy, i.e. the languages decided by an alternating Turing machine in logarithmic space with a bounded number of alternations, is the same class as NL.
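The counting argument above can be mirrored in a short deterministic sketch. In the real NL machine the "yes-guess" check is a nondeterministically guessed path and the counter is what certifies the branch; here, as a stand-in, the check is a direct bounded-length reachability test, and the stage sizes are computed only to mirror the structure of the proof (so this sketch illustrates the logic, not the logarithmic space bound).

```python
def reachable_within(edges, s, w, k):
    # Deterministic stand-in for verifying a "yes-guess":
    # is there a path from s to w of length at most k?
    frontier = {s}
    for _ in range(k):
        frontier = frontier | {v for (u, v) in edges if u in frontier}
    return w in frontier

def st_non_connectivity(edges, n, s, t):
    # Inductive counting: compute |R_0|, |R_1|, ..., where R_k is the
    # set of vertices reachable from s in at most k steps, using at
    # each stage only membership tests against the previous stage.
    size = 1  # |R_0| = |{s}|
    for k in range(n - 1):
        size = sum(
            1 for v in range(n)
            if any(reachable_within(edges, s, w, k)
                   and (w == v or (w, v) in edges)
                   for w in range(n))
        )
    # Accept exactly when t is NOT in R_{n-1}.
    return not reachable_within(edges, s, t, n - 1)

assert st_non_connectivity({(0, 1), (1, 2)}, 4, 0, 3)      # no path 0→3
assert not st_non_connectivity({(0, 1), (1, 2)}, 4, 0, 2)  # path 0→1→2
```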
https://en.wikipedia.org/wiki/Immerman%E2%80%93Szelepcs%C3%A9nyi_theorem
In mathematics, specifically in algebraic geometry, the fiber product of schemes is a fundamental construction. It has many interpretations and special cases. For example, the fiber product describes how an algebraic variety over one field determines a variety over a bigger field, or the pullback of a family of varieties, or a fiber of a family of varieties. Base change is a closely related notion. The category of schemes is a broad setting for algebraic geometry. A fruitful philosophy (known as Grothendieck's relative point of view) is that much of algebraic geometry should be developed for a morphism of schemes X → Y (called a scheme X over Y), rather than for a single scheme X. For example, rather than simply studying algebraic curves, one can study families of curves over any base scheme Y. Indeed, the two approaches enrich each other. In particular, a scheme over a commutative ring R means a scheme X together with a morphism X → Spec(R). The older notion of an algebraic variety over a field k is equivalent to a scheme over k with certain properties. (There are different conventions for exactly which schemes should be called "varieties". One standard choice is that a variety over a field k means an integral separated scheme of finite type over k.[1]) In general, a morphism of schemes X → Y can be imagined as a family of schemes parametrized by the points of Y. Given a morphism from some other scheme Z to Y, there should be a "pullback" family of schemes over Z. This is exactly the fiber product X ×_Y Z → Z. Formally: it is a useful property of the category of schemes that the fiber product always exists.[2] That is, for any morphisms of schemes X → Y and Z → Y, there is a scheme X ×_Y Z with morphisms to X and Z, making the diagram commutative, and which is universal with that property. That is, for any scheme W with morphisms to X and Z whose compositions to Y are equal, there is a unique morphism from W to X ×_Y Z that makes the diagram commute.
As always with universal properties, this condition determines the scheme X ×_Y Z up to a unique isomorphism, if it exists. The proof that fiber products of schemes always do exist reduces the problem to the tensor product of commutative rings (cf. gluing schemes). In particular, when X, Y, and Z are all affine schemes, so X = Spec(A), Y = Spec(B), and Z = Spec(C) for some commutative rings A, B, C, the fiber product is the affine scheme Spec(A ⊗_B C). The morphism X ×_Y Z → Z is called the base change or pullback of the morphism X → Y via the morphism Z → Y. In some cases, the fiber product of schemes has a right adjoint, the restriction of scalars. Some important properties P of morphisms of schemes are preserved under arbitrary base change. That is, if X → Y has property P and Z → Y is any morphism of schemes, then the base change X ×_Y Z → Z has property P. For example, flat morphisms, smooth morphisms, proper morphisms, and many other classes of morphisms are preserved under arbitrary base change.[5] The word descent refers to the reverse question: if the pulled-back morphism X ×_Y Z → Z has some property P, must the original morphism X → Y have property P? Clearly this is impossible in general: for example, Z might be the empty scheme, in which case the pulled-back morphism loses all information about the original morphism. But if the morphism Z → Y is flat and surjective (also called faithfully flat) and quasi-compact, then many properties do descend from Z to Y. Properties that descend include flatness, smoothness, properness, and many other classes of morphisms.[6] These results form part of Grothendieck's theory of faithfully flat descent. Example: for any field extension k ⊂ E, the morphism Spec(E) → Spec(k) is faithfully flat and quasi-compact. So the descent results mentioned imply that a scheme X over k is smooth over k if and only if the base change X_E is smooth over E. The same goes for properness and many other properties.
https://en.wikipedia.org/wiki/Fiber_product_of_schemes#Base_change_and_descent
In mathematics, Grothendieck's six operations, named after Alexander Grothendieck, is a formalism in homological algebra, also known as the six-functor formalism.[1] It originally sprang from the relations in étale cohomology that arise from a morphism of schemes f : X → Y. The basic insight was that many of the elementary facts relating cohomology on X and Y were formal consequences of a small number of axioms. These axioms hold in many cases completely unrelated to the original context, and therefore the formal consequences also hold. The six operations formalism has since been shown to apply to contexts such as D-modules on algebraic varieties, sheaves on locally compact topological spaces, and motives. The operations are six functors. Usually these are functors between derived categories and so are actually left and right derived functors. The functors f^* and f_* form an adjoint functor pair, as do f_! and f^!.[2] Similarly, the internal tensor product is left adjoint to internal Hom. Let f : X → Y be a morphism of schemes. The morphism f induces several functors. Specifically, it gives adjoint functors f^* and f_* between the categories of sheaves on X and Y, and it gives the functor f_! of direct image with proper support. In the derived category, Rf_! admits a right adjoint f^!. Finally, when working with abelian sheaves, there is a tensor product functor ⊗ and an internal Hom functor, and these are adjoint. The six operations are the corresponding functors on the derived category: Lf^*, Rf_*, Rf_!, f^!, ⊗^L, and RHom. Suppose that we restrict ourselves to a category of ℓ-adic torsion sheaves, where ℓ is coprime to the characteristic of X and of Y.
In SGA 4 III, Grothendieck and Artin proved that if f is smooth of relative dimension d, then Lf^* is isomorphic to f^!(−d)[−2d], where (−d) denotes the dth inverse Tate twist and [−2d] denotes a shift in degree by −2d. Furthermore, suppose that f is separated and of finite type. If g : Y′ → Y is another morphism of schemes, if X′ denotes the base change of X by g, and if f′ and g′ denote the base changes of f and g by g and f, respectively, then there exist natural base-change isomorphisms relating the pullback functors along g to the pushforward functors along f. Again assuming that f is separated and of finite type, for any objects M in the derived category of X and N in the derived category of Y, there exist natural isomorphisms (the projection formula). If i is a closed immersion of Z into S with complementary open immersion j, then there is a distinguished triangle in the derived category, where the first two maps are the counit and unit, respectively, of the adjunctions. If Z and S are regular, then there is an isomorphism, where 1_Z and 1_S are the units of the tensor product operations (which vary depending on which category of ℓ-adic torsion sheaves is under consideration). If S is regular and g : X → S, and if K is an invertible object in the derived category on S with respect to ⊗^L, then define D_X to be the functor RHom(—, g^!K). Then, for objects M and M′ in the derived category on X, the canonical biduality maps M → D_X(D_X(M)) are isomorphisms. Finally, if f : X → Y is a morphism of S-schemes, and if M and N are objects in the derived categories of X and Y, then there are natural isomorphisms intertwining the duality functors with the four functors f^*, f_*, f_!, f^!.
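For reference, in the standard formulation the base-change and projection-formula isomorphisms alluded to above read as follows (notation as in the text: X′ = X ×_Y Y′, with f′ : X′ → Y′ and g′ : X′ → X; this is the usual form found in the literature, supplied here because the displayed formulas were lost in extraction):

```latex
% Base change (f separated of finite type):
g^{*}\,Rf_{!} \;\cong\; Rf'_{!}\,g'^{*},
\qquad
g^{!}\,Rf_{*} \;\cong\; Rf'_{*}\,g'^{!}.

% Projection formula, for M on X and N on Y:
Rf_{!}\bigl(M \otimes^{L} Lf^{*}N\bigr) \;\cong\; \bigl(Rf_{!}M\bigr) \otimes^{L} N.
```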
https://en.wikipedia.org/wiki/Six_operations
In mathematics, the tensor product of two fields is their tensor product as algebras over a common subfield. If no subfield is explicitly specified, the two fields must have the same characteristic and the common subfield is their prime subfield. The tensor product of two fields is sometimes a field, and often a direct product of fields; in some cases, it can contain non-zero nilpotent elements. The tensor product of two fields expresses in a single structure the different ways to embed the two fields in a common extension field. First, one defines the notion of the compositum of fields. This construction occurs frequently in field theory. The idea behind the compositum is to make the smallest field containing two other fields. In order to formally define the compositum, one must first specify a tower of fields. Let k be a field and L and K be two extensions of k. The compositum, denoted K.L, is defined to be K.L = k(K ∪ L), where the right-hand side denotes the extension generated by K and L. This assumes some field containing both K and L. Either one starts in a situation where an ambient field is easy to identify (for example if K and L are both subfields of the complex numbers), or one proves a result that allows one to place both K and L (as isomorphic copies) in some large enough field. In many cases one can identify K.L as a vector space tensor product, taken over the field N that is the intersection of K and L. For example, if one adjoins √2 to the rational field ℚ to get K, and √3 to get L, it is true that the field M obtained as K.L inside the complex numbers ℂ is (up to isomorphism) K ⊗_ℚ L as a vector space over ℚ. (This type of result can be verified, in general, by using the ramification theory of algebraic number theory.) Subfields K and L of M are linearly disjoint (over a subfield N) when in this way the natural N-linear map of K ⊗_N L to K.L is injective.[1] Naturally enough this isn't always the case, for example when K = L.
When the degrees are finite, injectivity is equivalent here to bijectivity. Hence, when K and L are linearly disjoint finite-degree extension fields over N, K.L ≅ K ⊗_N L, as with the aforementioned extensions of the rationals. A significant case in the theory of cyclotomic fields is that for the nth roots of unity, for n a composite number, the subfields generated by the p^k-th roots of unity for prime powers p^k dividing n are linearly disjoint for distinct p.[2] To get a general theory, one needs to consider a ring structure on K ⊗_N L. One can define the product (a ⊗ b)(c ⊗ d) to be ac ⊗ bd (see Tensor product of algebras). This formula is multilinear over N in each variable, and so defines a ring structure on the tensor product, making K ⊗_N L into a commutative N-algebra, called the tensor product of fields. The structure of the ring can be analysed by considering all ways of embedding both K and L in some field extension of N. The construction here assumes the common subfield N, but does not assume a priori that K and L are subfields of some field M (thus getting round the caveats about constructing a compositum field). Whenever one embeds K and L in such a field M, say using embeddings α of K and β of L, there results a ring homomorphism γ from K ⊗_N L into M defined by γ(a ⊗ b) = α(a)β(b). The kernel of γ will be a prime ideal of the tensor product; and conversely any prime ideal of the tensor product will give a homomorphism of N-algebras to an integral domain (inside a field of fractions) and so provides embeddings of K and L in some field as extensions of (a copy of) N. In this way one can analyse the structure of K ⊗_N L: there may in principle be a non-zero nilradical (intersection of all prime ideals) – and after taking the quotient by that one can speak of the product of all embeddings of K and L in various M, over N.
In case K and L are finite extensions of N, the situation is particularly simple, since the tensor product is of finite dimension as an N-algebra (and thus an Artinian ring). One can then say that if R is the radical, one has (K ⊗_N L)/R as a direct product of finitely many fields. Each such field is a representative of an equivalence class of (essentially distinct) field embeddings for K and L in some extension M. To give an explicit example, consider the fields K = ℚ[x]/(x² − 2) and L = ℚ[y]/(y² − 2). Clearly ℚ(√2) ≅ K ≅ L are isomorphic but technically unequal fields, with their (set-theoretic) intersection being the prime field N = ℚ. Their tensor product is not a field, but a 4-dimensional ℚ-algebra. Furthermore this algebra is isomorphic to a direct sum of fields via the map induced by 1 ↦ (1, 1), z ↦ (√2, −√2). Morally Ñ = ℚ(√2) should be considered the largest common subfield up to isomorphism of K and L via the isomorphisms Ñ = ℚ(√2) ≅ K ≅ L. When one performs the tensor product over this better candidate for the largest common subfield, we actually get a (rather trivial) field, isomorphic to ℚ(√2). For another example, if K is generated over ℚ by the cube root of 2, then K ⊗_ℚ K is the sum of (a copy of) K, and a splitting field of x³ − 2, of degree 6 over ℚ. One can prove this by calculating the dimension of the tensor product over ℚ as 9, and observing that the splitting field does contain two (indeed three) copies of K, and is the compositum of two of them. That incidentally shows that R = {0} in this case.
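The 4-dimensional example can be checked by direct computation: writing u = √2 ⊗ 1 and v = 1 ⊗ √2 in K ⊗_ℚ K (so u² = v² = 2), the elements e± = (2 ± uv)/4 are orthogonal idempotents summing to 1, which splits the algebra into two field factors. A short sketch using exact rational coordinates over the basis {1, u, v, uv} (not from the source; the basis bookkeeping is our own) verifies this:

```python
from fractions import Fraction as F

# Elements of K ⊗_Q K, K = Q(√2), as coordinates over the basis
# {1, u, v, uv} with u = √2⊗1, v = 1⊗√2, so u² = v² = 2, (uv)² = 4.

def mul(a, b):
    (a1, au, av, auv), (b1, bu, bv, buv) = a, b
    return (
        a1*b1 + 2*au*bu + 2*av*bv + 4*auv*buv,  # coefficient of 1
        a1*bu + au*b1 + 2*av*buv + 2*auv*bv,    # coefficient of u
        a1*bv + av*b1 + 2*au*buv + 2*auv*bu,    # coefficient of v
        a1*buv + auv*b1 + au*bv + av*bu,        # coefficient of uv
    )

one     = (F(1), F(0), F(0), F(0))
e_plus  = (F(1, 2), F(0), F(0), F(1, 4))   # (2 + uv)/4
e_minus = (F(1, 2), F(0), F(0), F(-1, 4))  # (2 - uv)/4

assert mul(e_plus, e_plus) == e_plus            # idempotent
assert mul(e_minus, e_minus) == e_minus         # idempotent
assert mul(e_plus, e_minus) == (F(0),) * 4      # orthogonal
assert tuple(p + m for p, m in zip(e_plus, e_minus)) == one
```

Each factor e±·(K ⊗_ℚ K) is a field isomorphic to ℚ(√2), matching the direct-sum decomposition described above.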
An example leading to a non-zero nilpotent: let P(X) = X^p − T, with K the field of rational functions in the indeterminate T over the finite field with p elements (see Separable polynomial: the point here is that P is not separable). If L is the field extension K(T^{1/p}) (the splitting field of P) then L/K is an example of a purely inseparable field extension. In L ⊗_K L the element T^{1/p} ⊗ 1 − 1 ⊗ T^{1/p} is nilpotent: by taking its pth power one gets 0 by using K-linearity. In algebraic number theory, tensor products of fields are (implicitly, often) a basic tool. If K is an extension of ℚ of finite degree n, K ⊗_ℚ ℝ is always a product of fields isomorphic to ℝ or ℂ. The totally real number fields are those for which only real fields occur: in general there are r₁ real and r₂ complex fields, with r₁ + 2r₂ = n, as one sees by counting dimensions. The field factors are in 1–1 correspondence with the real embeddings, and pairs of complex-conjugate embeddings, described in the classical literature. This idea applies also to K ⊗_ℚ ℚ_p, where ℚ_p is the field of p-adic numbers. This is a product of finite extensions of ℚ_p, in 1–1 correspondence with the completions of K for extensions of the p-adic metric on ℚ. This gives a general picture, and indeed a way of developing Galois theory (along lines exploited in Grothendieck's Galois theory). It can be shown that for separable extensions the radical is always {0}; therefore the Galois theory case is the semisimple one, of products of fields alone.
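The count r₁ + 2r₂ = n can be illustrated numerically: the factors ℝ and ℂ of K ⊗_ℚ ℝ correspond to the real roots and the conjugate pairs of complex roots of a defining polynomial of K. The sketch below is illustrative only (the helper names are not standard API); it uses a simple Durand–Kerner iteration to find all complex roots of a monic polynomial:

```python
def poly_roots(coeffs, iters=200):
    """Durand-Kerner iteration for all complex roots of the monic
    polynomial z^n + coeffs[0]*z^(n-1) + ... + coeffs[-1]."""
    n = len(coeffs)

    def p(z):
        result = 1
        for c in coeffs:           # Horner evaluation of the monic poly
            result = result * z + c
        return result

    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # standard starting guesses
    for _ in range(iters):
        for i in range(n):
            den = 1
            for j in range(n):
                if j != i:
                    den *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / den
    return roots

def signature(coeffs, tol=1e-8):
    """(r1, r2): number of real roots and of conjugate pairs, matching
    the factors R and C of K ⊗_Q R for K defined by this polynomial."""
    roots = poly_roots(coeffs)
    r1 = sum(1 for z in roots if abs(z.imag) < tol)
    return r1, (len(roots) - r1) // 2

# K = Q(cbrt(2)), minimal polynomial x³ − 2: one real embedding, one
# conjugate pair, so K ⊗ R ≅ R × C and r1 + 2·r2 = 1 + 2 = 3 = n.
assert signature([0.0, 0.0, -2.0]) == (1, 1)
# K = Q(√2), x² − 2: two real embeddings, K ⊗ R ≅ R × R.
assert signature([0.0, -2.0]) == (2, 0)
```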
https://en.wikipedia.org/wiki/Tensor_product_of_fields
In mathematics, the tensor-hom adjunction is the statement that the tensor product functor − ⊗ X and the hom-functor Hom(X, −) form an adjoint pair: Hom(Y ⊗ X, Z) ≅ Hom(Y, Hom(X, Z)). This is made more precise below. The order of terms in the phrase "tensor-hom adjunction" reflects their relationship: tensor is the left adjoint, while hom is the right adjoint. Say R and S are (possibly noncommutative) rings, and consider the right module categories (an analogous statement holds for left modules) C = Mod-S and D = Mod-R. Fix an (R, S)-bimodule X and define functors F: D → C and G: C → D by F(Y) = Y ⊗_R X for Y in D and G(Z) = Hom_S(X, Z) for Z in C. Then F is left adjoint to G. This means there is a natural isomorphism Hom_S(Y ⊗_R X, Z) ≅ Hom_R(Y, Hom_S(X, Z)). This is actually an isomorphism of abelian groups. More precisely, if Y is an (A, R)-bimodule and Z is a (B, S)-bimodule, then this is an isomorphism of (B, A)-bimodules. This is one of the motivating examples of the structure in a closed bicategory.[1] Like all adjunctions, the tensor-hom adjunction can be described by its counit and unit natural transformations. Using the notation from the previous section, the counit ε: FG → 1_C has components given by evaluation: for φ in Hom_S(X, Z) and x in X, ε_Z(φ ⊗ x) = φ(x). The components of the unit η: 1_D → GF are defined as follows: for y in Y, η_Y(y) is a right S-module homomorphism given by η_Y(y)(x) = y ⊗ x for x in X. The counit and unit equations can now be explicitly verified.
For Y in D, the composite ε_{F(Y)} ∘ F(η_Y) is given on simple tensors of Y ⊗ X by ε_{F(Y)}(η_Y(y) ⊗ x) = η_Y(y)(x) = y ⊗ x, so it is the identity on F(Y). Likewise, for φ in Hom_S(X, Z), the composite G(ε_Z) ∘ η_{G(Z)} sends φ to the right S-module homomorphism defined by x ↦ ε_Z(φ ⊗ x) = φ(x), and therefore G(ε_Z)(η_{G(Z)}(φ)) = φ. The Hom functor Hom(X, −) commutes with arbitrary limits, while the tensor product functor − ⊗ X commutes with arbitrary colimits that exist in their domain category. However, in general, Hom(X, −) fails to commute with colimits, and − ⊗ X fails to commute with limits; this failure occurs even among finite limits or colimits. This failure to preserve short exact sequences motivates the definition of the Ext functor and the Tor functor. We can illustrate the tensor-hom adjunction in the category of finite sets and functions. Given a set N, its Hom functor takes any set A to the set of functions from N to A. The isomorphism class of this set of functions is the natural number A^N. Similarly, the tensor product − ⊗ N takes a set A to its cartesian product with N. Its isomorphism class is thus the natural number A·N. This allows us to interpret the isomorphism of hom-sets that universally characterizes the tensor-hom adjunction as the categorification of the remarkably basic law of exponents: a^{m·n} = (a^n)^m.
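In the finite-set illustration the adjunction is ordinary currying: functions Y × X → Z correspond bijectively to functions Y → (X → Z). A minimal sketch:

```python
# Hom(Y ⊗ X, Z) ≅ Hom(Y, Hom(X, Z)) for sets is currying/uncurrying.
def curry(f):
    """Turn a two-argument function (y, x) -> z into y -> (x -> z)."""
    return lambda y: lambda x: f(y, x)

def uncurry(g):
    """Inverse direction: y -> (x -> z) back to (y, x) -> z."""
    return lambda y, x: g(y)(x)

add = lambda y, x: y + x
assert curry(add)(2)(3) == 5          # apply one argument at a time
assert uncurry(curry(add))(2, 3) == 5 # round trip is the identity
```

The two directions of the bijection are mutually inverse, which is exactly the content of the counit and unit equations in this toy setting.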
https://en.wikipedia.org/wiki/Tensor-hom_adjunction
In number theory, Chen's theorem states that every sufficiently large even number can be written as the sum of either two primes, or a prime and a semiprime (the product of two primes). It is a weakened form of Goldbach's conjecture, which states that every even number is the sum of two primes. The theorem was first stated by Chinese mathematician Chen Jingrun in 1966,[1] with further details of the proof in 1973.[2] His original proof was much simplified by P. M. Ross in 1975.[3] Chen's theorem is a significant step towards Goldbach's conjecture, and a celebrated application of sieve methods. Chen's theorem represents the strengthening of a previous result due to Alfréd Rényi, who in 1947 had shown there exists a finite K such that any even number can be written as the sum of a prime number and the product of at most K primes.[4][5] Chen's 1973 paper stated two results with nearly identical proofs.[2]: 158 His Theorem I, on the Goldbach conjecture, was stated above. His Theorem II is a result on the twin prime conjecture. It states that if h is a positive even integer, there are infinitely many primes p such that p + h is either prime or the product of two primes. In 2002, Ying Chun Cai proved a refinement of Chen's theorem.[6] In 2025, Daniel R. Johnston, Matteo Bordignon, and Valeriia Starichkova provided an explicit version of Chen's theorem,[7] which refined upon an earlier result by Tomohiro Yamada.[8] Also in 2024, Bordignon and Starichkova[9] showed that the bound can be lowered to e^{e^{14}} ≈ 2.5·10^{522284} assuming the Generalized Riemann hypothesis (GRH) for Dirichlet L-functions. In 2019, Huixi Li gave a version of Chen's theorem for odd numbers. In particular, Li proved that every sufficiently large odd integer N can be represented as[10] N = p + 2a, where p is prime and a has at most 2 prime factors. Here, the factor of 2 is necessitated since every prime (except for 2) is odd, causing N − p to be even.
Li's result can be viewed as an approximation to Lemoine's conjecture.
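Although the theorem only concerns sufficiently large even numbers, the decomposition is easy to exhibit for small ones by brute force. The following sketch (illustrative, not a proof) finds, for each even n, a prime p such that n − p has at most two prime factors:

```python
def num_prime_factors(n):
    """Count prime factors of n with multiplicity (big Omega)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def is_prime(n):
    return n > 1 and num_prime_factors(n) == 1

def chen_decomposition(n):
    """Return (p, q) with n = p + q, p prime, q prime or semiprime,
    or None if no such decomposition exists."""
    for p in range(2, n - 1):
        if is_prime(p) and 1 <= num_prime_factors(n - p) <= 2:
            return p, n - p
    return None

# Every even number in this small range admits such a decomposition
# (the theorem itself only guarantees it for sufficiently large n):
assert all(chen_decomposition(n) is not None for n in range(4, 200, 2))
```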
https://en.wikipedia.org/wiki/Chen%27s_theorem
In number theory, a sphenic number (from Greek: σφήνα, 'wedge') is a positive integer that is the product of three distinct prime numbers. Because there are infinitely many prime numbers, there are also infinitely many sphenic numbers. A sphenic number is a product pqr where p, q, and r are three distinct prime numbers. In other words, the sphenic numbers are the square-free 3-almost primes. The smallest sphenic number is 30 = 2 × 3 × 5, the product of the smallest three primes. The largest known sphenic number at any time can be obtained by multiplying together the three largest known primes. All sphenic numbers have exactly eight divisors. If we express the sphenic number as n = p·q·r, where p, q, and r are distinct primes, then the set of divisors of n will be {1, p, q, r, pq, pr, qr, n}. The converse does not hold. For example, 24 is not a sphenic number, but it has exactly eight divisors. All sphenic numbers are by definition squarefree, because the prime factors must be distinct. The Möbius function of any sphenic number is −1. The cyclotomic polynomials Φ_n(x), taken over all sphenic numbers n, may contain arbitrarily large coefficients[1] (for n a product of two primes the coefficients are ±1 or 0). Any multiple of a sphenic number (except by 1) is not sphenic. This is easily provable, since the multiplication at a minimum either adds another prime factor or raises an existing factor to a higher power. The first case of two consecutive sphenic integers is 230 = 2×5×23 and 231 = 3×7×11. The first case of three is 1309 = 7×11×17, 1310 = 2×5×131, and 1311 = 3×19×23. There is no case of more than three, because every fourth consecutive positive integer is divisible by 4 = 2×2 and therefore not squarefree. The numbers 2013 (3×11×61), 2014 (2×19×53), and 2015 (5×13×31) are all sphenic. The next three consecutive sphenic years will be 2665 (5×13×41), 2666 (2×31×43) and 2667 (3×7×127) (sequence A165936 in the OEIS).
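The defining property and several of the facts above (smallest sphenic number 30, exactly eight divisors, the consecutive runs) are easy to verify by brute force; a sketch:

```python
def prime_factorization(n):
    """Map each prime factor of n to its exponent."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_sphenic(n):
    """True iff n is a product of three distinct primes."""
    f = prime_factorization(n)
    return len(f) == 3 and all(e == 1 for e in f.values())

sphenic = [n for n in range(2, 250) if is_sphenic(n)]
assert sphenic[0] == 30                                # 2 × 3 × 5
assert is_sphenic(230) and is_sphenic(231)             # first consecutive pair
assert all(is_sphenic(y) for y in (2013, 2014, 2015))  # three sphenic years
# Every sphenic number has exactly eight divisors:
assert all(sum(1 for d in range(1, n + 1) if n % d == 0) == 8
           for n in sphenic)
```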
https://en.wikipedia.org/wiki/Sphenic_number
In number theory, the parity problem refers to a limitation in sieve theory that prevents sieves from giving good estimates in many kinds of prime-counting problems. The problem was identified and named by Atle Selberg in 1949. Beginning around 1996, John Friedlander and Henryk Iwaniec developed some parity-sensitive sieves that make the parity problem less of an obstacle. Terence Tao gave this "rough" statement of the problem:[1] Parity problem. If A is a set whose elements are all products of an odd number of primes (or are all products of an even number of primes), then (without injecting additional ingredients), sieve theory is unable to provide non-trivial lower bounds on the size of A. Also, any upper bounds must be off from the truth by a factor of 2 or more. This problem is significant because it may explain why it is difficult for sieves to "detect primes," in other words to give a non-trivial lower bound for the number of primes with some property. For example, in a sense Chen's theorem is very close to a solution of the twin prime conjecture, since it says that there are infinitely many primes p such that p + 2 is either prime or the product of two primes (a semiprime). The parity problem suggests that, because the case of interest has an odd number of prime factors (namely 1), it won't be possible to separate out the two cases using sieves. This example is due to Selberg and is given as an exercise with hints by Cojocaru & Murty.[2]: 133–134 The problem is to estimate separately the number of numbers ≤ x with no prime divisors ≤ x^{1/2} that have an even (or an odd) number of prime factors. It can be shown that, no matter what the choice of weights in a Brun- or Selberg-type sieve, the upper bound obtained will be at least (2 + o(1)) x / ln x for both problems. But in fact the set with an even number of factors is empty and so has size 0. The set with an odd number of factors is just the primes between x^{1/2} and x, so by the prime number theorem its size is (1 + o(1)) x / ln x.
Thus these sieve methods are unable to give a useful upper bound for the first set, and overestimate the upper bound on the second set by a factor of 2. Beginning around 1996, John Friedlander and Henryk Iwaniec developed some new sieve techniques to "break" the parity problem.[3][4] One of the triumphs of these new methods is the Friedlander–Iwaniec theorem, which states that there are infinitely many primes of the form a² + b⁴. Glyn Harman relates the parity problem to the distinction between Type I and Type II information in a sieve.[5] In 2007, Anatolii Alexeevitch Karatsuba discovered an imbalance between the numbers in an arithmetic progression with given parities of the number of prime factors. His papers[6][7] were published after his death. Let ℕ be the set of natural numbers (positive integers), that is, the numbers 1, 2, 3, …. The set of primes, that is, of integers n ∈ ℕ, n > 1, that have just two distinct divisors (namely, n and 1), is denoted by ℙ, ℙ = {2, 3, 5, 7, 11, …} ⊂ ℕ. Every natural number n ∈ ℕ, n > 1, can be represented as a product of primes (not necessarily distinct), that is n = p₁p₂…p_k, where p₁, p₂, …, p_k ∈ ℙ, and such a representation is unique up to the order of factors. If we form two sets, the first consisting of positive integers having an even number of prime factors, the second consisting of positive integers having an odd number of prime factors, in their canonical representation, then the two sets are approximately the same size.
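Selberg's example can be checked numerically for a small cutoff: among the integers ≤ x with no prime factor ≤ √x, the set with an even number of prime factors is empty, and the set with an odd number consists exactly of the primes in (√x, x]. A sketch (helper names are illustrative):

```python
import math

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def big_omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count = 0
    while n > 1:
        n //= smallest_prime_factor(n)
        count += 1
    return count

x = 10_000
# Integers ≤ x whose prime factors all exceed √x:
rough = [n for n in range(2, x + 1)
         if smallest_prime_factor(n) > math.isqrt(x)]
even_set = [n for n in rough if big_omega(n) % 2 == 0]
odd_set = [n for n in rough if big_omega(n) % 2 == 1]
# A product of two primes > √x already exceeds x, so the even set is
# empty, while the odd set is exactly the primes in (√x, x]:
assert even_set == []
assert all(big_omega(n) == 1 for n in odd_set)
```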
If, however, we limit our two sets to those positive integers whose canonical representation contains no primes in an arithmetic progression, for example 6m + 1, m = 1, 2, …, or the progression km + l, 1 ≤ l < k, (l, k) = 1, m = 0, 1, 2, …, then of these positive integers, those with an even number of prime factors will tend to be fewer than those with an odd number of prime factors. Karatsuba discovered this property. He also found a formula for this phenomenon, a formula for the difference in cardinalities of the sets of natural numbers with odd and even numbers of prime factors, when these factors comply with certain restrictions. In all cases, since the sets involved are infinite, by "larger" and "smaller" we mean the limit of the ratio of the sets as an upper bound on the primes goes to infinity. In the case of primes contained in an arithmetic progression, Karatsuba proved that this limit is infinite. We restate the Karatsuba phenomenon using mathematical terminology. Let ℕ₀ and ℕ₁ be the subsets of ℕ such that n ∈ ℕ₀ if n contains an even number of prime factors, and n ∈ ℕ₁ if n contains an odd number of prime factors. Intuitively, the sizes of the two sets ℕ₀ and ℕ₁ are approximately the same.
More precisely, for all x ≥ 1, we define n₀(x) and n₁(x), where n₀(x) is the cardinality of the set of all numbers n from ℕ₀ such that n ≤ x, and n₁(x) is the cardinality of the set of all numbers n from ℕ₁ such that n ≤ x. The asymptotic behavior of n₀(x) and n₁(x) was derived by E. Landau:[8] n₀(x) ~ x/2 and n₁(x) ~ x/2 as x → +∞, that is, n₀(x) and n₁(x) are asymptotically equal. Further, n₀(x) − n₁(x) = O(x e^{−c√(ln x)}) for a positive constant c, so that the difference between the cardinalities of the two sets is small. On the other hand, let k ≥ 2 be a natural number, and let l₁, l₂, …, l_r be a sequence of natural numbers, 1 ≤ r < φ(k), such that 1 ≤ l_j < k, (l_j, k) = 1, and the l_j are pairwise different modulo k, j = 1, 2, …, r. Let 𝔸 be the set of primes belonging to the progressions kn + l_j, j ≤ r (for r = φ(k), 𝔸 would be the set of all primes not dividing k). We denote by ℕ* the set of natural numbers which do not contain prime factors from 𝔸, by ℕ₀* the subset of numbers from ℕ* with an even number of prime factors, and by ℕ₁* the subset of numbers from ℕ* with an odd number of prime factors. We define the corresponding counting functions n₀*(x) and n₁*(x) analogously. Karatsuba proved that for x → +∞ an asymptotic formula for the difference n₁*(x) − n₀*(x) is valid, where C is a positive constant.
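Landau's asymptotic balance is easy to observe empirically: counting the integers up to x with an even versus an odd number of prime factors (with multiplicity), the two counts stay close to x/2, and their difference (the summatory Liouville function) is small relative to x. A quick illustrative sketch:

```python
def big_omega(n):
    """Prime factors of n counted with multiplicity; big_omega(1) = 0."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

x = 10_000
n0 = sum(1 for n in range(1, x + 1) if big_omega(n) % 2 == 0)
n1 = x - n0
# Both counts are close to x/2, and |n0 - n1| is small compared to x:
assert abs(n0 - x / 2) < 0.05 * x
assert abs(n0 - n1) < 0.05 * x
```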
He also showed that it is possible to prove analogous theorems for other sets of natural numbers, for example, for numbers which are representable as the sum of two squares, and that sets of natural numbers all of whose factors belong to 𝔸 display analogous asymptotic behavior. The Karatsuba theorem was generalized to the case when 𝔸 is a certain unbounded set of primes. The Karatsuba phenomenon is illustrated by the following example. Consider the natural numbers whose canonical representation does not include primes belonging to the progression 6m + 1, m = 1, 2, …. Among these numbers, those with an even number of prime factors are then outnumbered, in the limiting sense above, by those with an odd number of prime factors.
https://en.wikipedia.org/wiki/Parity_problem_(sieve_theory)
This is a list of notable backup software that performs data backups. Archivers, transfer protocols, and version control systems are often used for backups, but only software focused on backup is listed here. See Comparison of backup software for features.
https://en.wikipedia.org/wiki/List_of_backup_software
This is a comparison of online backup services. Online backup is a special kind of online storage service; however, various products that are designed for file storage may not have features or characteristics that others designed for backup have. Online backup usually requires a backup client program; a browser-only online storage service is usually not considered a valid online backup service. Online folder-sync services can be used for backup purposes, but some may not provide a safe online backup: if a file is accidentally corrupted or deleted locally, it depends on the versioning features of the folder-sync service whether this file will still be retrievable, whether changes can be undone, and whether files can be undeleted.
https://en.wikipedia.org/wiki/Comparison_of_online_backup_services
Feature comparison of backup software. For a more general comparison see List of backup software.
https://en.wikipedia.org/wiki/Comparison_of_backup_software
The subject of computer backups is rife with jargon and highly specialized terminology. This page is a glossary of backup terms that aims to clarify the meaning of such jargon and terminology. Terms covered include: 3-2-1 rule (or 3-2-1 backup strategy), backup policy, backup rotation scheme, backup site, backup software, backup window, copy backup, daily backup, data salvaging/recovery, differential backup, disaster recovery, disk cloning, disk image, full backup, hot backup, incremental backup, media spanning, multiplexing, multistreaming, normal backup, near store, open file backup, remote store, restore time, retention time, site-to-site backup, synthetic backup, tape library, trusted paper key, and virtual tape library (VTL).
https://en.wikipedia.org/wiki/Glossary_of_backup_terms
VMware Infrastructure is a collection of virtualization products from VMware. Virtualization is an abstraction layer that decouples hardware from operating systems. The VMware Infrastructure suite allows enterprises to optimize and manage their IT infrastructure through virtualization as an integrated offering. The core product families are vSphere, vSAN and NSX for on-premises virtualization.[1] VMware Cloud Foundation (VCF) is an infrastructure platform for hybrid cloud management.[1] The VMware Infrastructure suite is designed to span a large range of deployment types to provide maximum flexibility and scalability. Users can supplement this software bundle by purchasing optional products, such as VMotion, as well as distributed services such as high availability (HA), distributed resource scheduler (DRS), or consolidated backup. VMware Inc. released VMware Infrastructure 3 in June 2006. The suite came in three "editions": Starter, Standard and Enterprise. Known limitations in VMware Infrastructure 3 may constrain the design of data centers.[2] As of June 2008, limitations in VMware Infrastructure version 3.5 included, for example, a maximum volume size of 64 TB, no more than 6 SCSI controllers per virtual machine, and a maximum of 10 remote consoles to a virtual machine. It is also not possible to connect Fibre Channel tape drives, which hinders the ability to do backups using these drives. VMware renamed their product VMware vSphere for release 4, and marketed it for cloud computing.
https://en.wikipedia.org/wiki/Virtual_backup_appliance
Data masking or data obfuscation is the process of modifying sensitive data in such a way that it is of no or little value to unauthorized intruders while still being usable by software or authorized personnel. Data masking can also be referred to as anonymization or tokenization, depending on the context. The main reason to mask data is to protect information that is classified as personally identifiable information, or mission-critical data. However, the data must remain usable for the purposes of undertaking valid test cycles. It must also look real and appear consistent. It is more common to have masking applied to data that is represented outside of a corporate production system, in other words, where data is needed for the purpose of application development, building program extensions and conducting various test cycles. It is common practice in enterprise computing to take data from the production systems to fill the data component required for these non-production environments. However, this practice is not always restricted to non-production environments. In some organizations, data that appears on terminal screens to call center operators may have masking dynamically applied based on user security permissions (e.g. preventing call center operators from viewing credit card numbers in billing systems). The primary concern from a corporate governance perspective[1] is that personnel conducting work in these non-production environments are not always security cleared to operate with the information contained in the production data. This practice represents a security hole where data can be copied by unauthorized personnel, and security measures associated with standard production-level controls can be easily bypassed. This represents an access point for a data security breach.
Data involved in any data masking or obfuscation must remain meaningful at several levels. Substitution is one of the most effective methods of applying data masking while preserving the authentic look and feel of the data records. It allows the masking to be performed in such a manner that another authentic-looking value can be substituted for the existing value.[2] There are several data field types where this approach provides optimal benefit in disguising whether the overall data subset is a masked data set. For example, if dealing with source data which contains customer records, a real-life surname or first name can be randomly substituted from a supplied or customised look-up file. If the first pass of the substitution applies a male first name to all first names, then a second pass would need to apply a female first name to all first names where gender equals "F". Using this approach we could easily maintain the gender mix within the data structure and apply anonymity to the data records, but also maintain a realistic-looking database, which could not easily be identified as a database consisting of masked data. This substitution method needs to be applied to many of the fields that are in database structures across the world, such as telephone numbers, zip codes and postcodes, as well as credit card numbers and other card-type numbers like Social Security numbers and Medicare numbers, where these numbers actually need to conform to a checksum test such as the Luhn algorithm. In most cases, the substitution files will need to be fairly extensive, so having large substitution datasets as well as the ability to apply customized data substitution sets should be a key element of the evaluation criteria for any data masking solution. The shuffling method is a very common form of data obfuscation. It is similar to the substitution method, but it derives the substitution set from the same column of data that is being masked.
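The gender-preserving substitution pass described above can be sketched as follows (the lookup tables and field names here are hypothetical; real deployments use large, customised substitution files):

```python
import random

# Hypothetical substitution pools standing in for look-up files.
MALE_NAMES = ["James", "Robert", "Michael"]
FEMALE_NAMES = ["Mary", "Patricia", "Jennifer"]

def substitute_first_name(record, rng=random):
    """Replace the first name with a random one of matching gender,
    preserving the gender mix of the data set; other fields untouched."""
    masked = dict(record)
    pool = FEMALE_NAMES if record["gender"] == "F" else MALE_NAMES
    masked["first_name"] = rng.choice(pool)
    return masked

row = {"first_name": "Lynne", "gender": "F", "balance": 1200}
masked = substitute_first_name(row)
assert masked["gender"] == "F" and masked["first_name"] in FEMALE_NAMES
assert masked["balance"] == row["balance"]  # non-sensitive data preserved
```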
In very simple terms, the data is randomly shuffled within the column.[3] However, if used in isolation, anyone with any knowledge of the original data can then apply a "what if" scenario to the data set and piece back together a real identity. The shuffling method is also open to being reversed if the shuffling algorithm can be deciphered.[citation needed] Data shuffling overcomes reservations about using perturbed or modified confidential data, because it retains all the desirable properties of perturbation while performing better than other masking techniques in both data utility and disclosure risk.[3] Shuffling, however, has some real strengths in certain areas. If, for instance, the end-of-year figures for financial information are in a test database, one can mask the names of the suppliers and then shuffle the values of the accounts throughout the masked database. It is highly unlikely that anyone, even someone with intimate knowledge of the original data, could derive a true data record back to its original values. The numeric variance method is very useful for applying to financial and date-driven information fields. Effectively, a method utilising this manner of masking can still leave a meaningful range in a financial data set such as payroll. If the variance applied is around +/- 10%, then it is still a very meaningful data set in terms of the ranges of salaries that are paid to the recipients. The same also applies to date information. If the overall data set needs to retain demographic and actuarial data integrity, then applying a random numeric variance of +/- 120 days to date fields would preserve the date distribution, but it would still prevent traceability back to a known entity based on their known actual date of birth or a known date value for whatever record is being masked. Encryption is often the most complex approach to solving the data masking problem.
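The shuffling and numeric-variance methods can be sketched together (the helper names are illustrative; a real masking tool would operate inside the database):

```python
import random

def shuffle_column(rows, column, rng=None):
    """Mask a column by shuffling its values among the rows: the
    substitution set is drawn from the column itself."""
    rng = rng or random.Random()
    values = [r[column] for r in rows]
    rng.shuffle(values)
    return [dict(r, **{column: v}) for r, v in zip(rows, values)]

def numeric_variance(value, pct=0.10, rng=None):
    """Perturb a numeric value by a random +/- pct variance, keeping
    the masked figure in a realistic range."""
    rng = rng or random.Random()
    return value * (1 + rng.uniform(-pct, pct))

rows = [{"supplier": s, "amount": a}
        for s, a in [("Acme", 100), ("Globex", 250), ("Initech", 40)]]
masked = shuffle_column(rows, "amount")
# The multiset of amounts is preserved; only their assignment changes:
assert sorted(r["amount"] for r in masked) == [40, 100, 250]
salary = numeric_variance(50_000)     # +/- 10% of a 50,000 salary
assert 45_000 <= salary <= 55_000
```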
The encryption algorithm often requires that a "key" be applied to view the data based on user rights. This often sounds like the best solution, but in practice the key may then be given out to personnel without the proper rights to view the data. This then defeats the purpose of the masking exercise. Old databases may then get copied with the original credentials of the supplied key, and the same uncontrolled problem lives on. Recently, the problem of encrypting data while preserving the properties of the entities has gained recognition and newly acquired interest among vendors and academia. The new challenge gave birth to algorithms performing format-preserving encryption. These are based on the accepted Advanced Encryption Standard (AES) algorithmic mode recognized by NIST.[4] Sometimes a very simplistic approach to masking is adopted through applying a null value to a particular field. The null value approach is really only useful to prevent visibility of the data element. In almost all cases, it lessens the degree of data integrity that is maintained in the masked data set. It is not a realistic value and will then fail any application logic validation that may have been applied in the front-end software that is in the system under test. It also highlights to anyone that wishes to reverse-engineer any of the identity data that data masking has been applied to some degree on the data set. Character scrambling, or masking out of certain fields, is another simplistic yet very effective method of preventing sensitive information from being viewed. It is really an extension of the previous method of nulling out, but there is a greater emphasis on keeping the data real and not fully masked altogether. This is commonly applied to credit card data in production systems. For instance, an operator at a call centre might bill an item to a customer's credit card. They then quote a billing reference to the card with the last 4 digits of XXXX XXXX XXXX 6789.
As an operator, they can only see the last 4 digits of the card number, but once the billing system passes the customer's details for charging, the full number is revealed to the payment gateway systems. This system is not very effective for test systems, but it is very useful for the billing scenario detailed above. It is also commonly known as a dynamic data masking method.[5][6] Additional rules can also be factored into any masking solution regardless of how the masking methods are constructed. Product-agnostic white papers[7] are a good source of information for exploring some of the more common complex requirements for enterprise masking solutions, which include row internal synchronization rules, table internal synchronization rules and table-to-table synchronization rules.[8] Data masking is tightly coupled with building test data. Two major types of data masking are static and on-the-fly data masking. Static data masking is usually performed on the golden copy of the database, but can also be applied to values in other sources, including files. In DB environments, production database administrators will typically load table backups to a separate environment, reduce the dataset to a subset that holds the data necessary for a particular round of testing (a technique called "subsetting"), apply data masking rules while data is in stasis, apply necessary code changes from source control, and push data to the desired environment.[9] Deterministic masking is the process of replacing a value in a column with the same value, whether in the same row, the same table, the same database/schema or between instances/servers/database types. Example: A database has multiple tables, each with a column that has first names.
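The masking-out of a card number to its last four digits, as in the call-centre scenario above, can be sketched as:

```python
def mask_card_number(card_number, visible=4, mask_char="X"):
    """Show only the last `visible` digits of a card number, masking
    the rest while preserving the original digit grouping."""
    digits_seen = 0
    out = []
    for ch in reversed(card_number):
        if ch.isdigit():
            out.append(ch if digits_seen < visible else mask_char)
            digits_seen += 1
        else:
            out.append(ch)  # keep spaces/dashes so the format looks real
    return "".join(reversed(out))

# Hypothetical card number, for illustration only:
assert mask_card_number("4556 7375 8689 6789") == "XXXX XXXX XXXX 6789"
```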
With deterministic masking, the first name will always be replaced with the same value – "Lynne" will always become "Denise" – wherever "Lynne" may be in the database.[10] There are also alternatives to static data masking that rely on stochastic perturbations of the data that preserve some of the statistical properties of the original data. Examples of statistical data obfuscation methods include differential privacy[11] and the DataSifter method.[12] On-the-fly data masking[13] happens in the process of transferring data from environment to environment without the data touching the disk on its way. The same technique is applied in "dynamic data masking", but one record at a time. This type of data masking is most useful for environments that do continuous deployments as well as for heavily integrated applications. Organizations that employ continuous deployment or continuous delivery practices do not have the time necessary to create a backup and load it to the golden copy of the database. Thus, continuously sending smaller subsets (deltas) of masked testing data from production is important. In heavily integrated applications, developers get feeds from other production systems at the very onset of development, and masking of these feeds is either overlooked or not budgeted until later, making organizations non-compliant. Having on-the-fly data masking in place becomes essential. Dynamic data masking is similar to on-the-fly data masking, but it differs in the sense that on-the-fly data masking is about copying data from one source to another so that the latter can be shared. Dynamic data masking happens at runtime, dynamically, and on demand, so that there need not be a second data source in which to store the masked data. Dynamic data masking enables several scenarios, many of which revolve around strict privacy regulations, e.g. those of the Singapore Monetary Authority or the privacy regulations in Europe.
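Deterministic masking can be implemented, for example, with a keyed hash selecting from a substitution pool, so that a given input maps to the same substitute in every row, table and database (the pool and secret below are hypothetical):

```python
import hashlib

# Hypothetical replacement pool; a real solution would use large,
# field-appropriate substitution sets.
REPLACEMENTS = ["Denise", "Carol", "Nadia", "Petra"]

def deterministic_mask(value, secret="per-project-secret"):
    """Pick a substitute by keyed hash, so the same original value
    always yields the same masked value across the whole estate."""
    digest = hashlib.sha256((secret + value).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(REPLACEMENTS)
    return REPLACEMENTS[index]

# The same input always yields the same masked output:
assert deterministic_mask("Lynne") == deterministic_mask("Lynne")
assert deterministic_mask("Lynne") in REPLACEMENTS
```

Keying the hash with a secret prevents an attacker from precomputing the mapping from real names to substitutes; changing the secret changes the whole mapping.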
Dynamic data masking is attribute-based and policy-driven. Policies include: Dynamic data masking can also be used to encrypt or decrypt values on the fly, especially when using format-preserving encryption. Several standards have emerged in recent years to implement dynamic data filtering and masking. For instance, XACML policies can be used to mask data inside databases. There are six possible technologies to apply dynamic data masking: In recent years, organizations develop their new applications in the cloud more and more often, regardless of whether the final applications will be hosted in the cloud or on-premises. Current cloud solutions allow organizations to use infrastructure as a service, platform as a service, and software as a service. There are various modes of creating test data and moving it from on-premises databases to the cloud, or between different environments within the cloud. Dynamic data masking becomes even more critical in the cloud when customers need to protect PII data while relying on cloud providers to administer their databases. Data masking invariably becomes part of these processes in the systems development life cycle (SDLC), as the development environments' service-level agreements (SLAs) are usually not as stringent as the production environments' SLAs, regardless of whether the application is hosted in the cloud or on-premises.
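The operator scenario described earlier — revealing only the last four digits of a card number — can be illustrated with a simple masking rule. This is a sketch of the rule itself; real dynamic data masking engines apply such rules in the database or proxy layer at query time, not in application code:

```python
import re

def mask_card(card_number: str) -> str:
    # Dynamic-masking style rule: strip formatting, then reveal only the
    # last four digits and replace the rest with a masking character.
    digits = re.sub(r"\D", "", card_number)
    return "*" * (len(digits) - 4) + digits[-4:]

# A 16-digit card number keeps only its final four digits.
print(mask_card("4111 1111 1111 1234"))  # → ************1234
```

In a policy-driven system, whether this rule fires would depend on attributes of the requester (e.g. "operator" vs. "payment gateway"), which is what makes the masking dynamic.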
https://en.wikipedia.org/wiki/Data_masking
Defense in depth is a concept used in information security in which multiple layers of security controls (defenses) are placed throughout an information technology (IT) system. Its intent is to provide redundancy in the event a security control fails or a vulnerability is exploited, covering aspects of personnel, procedural, technical and physical security for the duration of the system's life cycle. The idea behind the defense in depth approach is to defend a system against any particular attack using several independent methods.[1] It is a layering tactic, conceived[2] by the National Security Agency (NSA) as a comprehensive approach to information and electronic security.[3][4] An insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security, and application security forming the outermost layers of the onion.[5] Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.[6] Defense in depth can be divided into three areas: physical, technical, and administrative.[7] Physical controls[3] are anything that physically limits or prevents access to IT systems. Examples of physical defensive security are fences, guards, dogs, and CCTV systems. Technical controls are hardware or software whose purpose is to protect systems and resources. Examples of technical controls would be disk encryption, file integrity software, and authentication. Hardware technical controls differ from physical controls in that they prevent access to the contents of a system, but not to the physical systems themselves. Administrative controls are the organization's policies and procedures. Their purpose is to ensure that there is proper guidance available in regard to security and that regulations are met.
They include things such as hiring practices, data handling procedures, and security requirements. Using more than one of the following layers constitutes an example of defense in depth.
https://en.wikipedia.org/wiki/Defense_in_depth_(computing)
Information security is the practice of protecting information by mitigating information risks. It is part of information risk management.[1] It typically involves preventing or reducing the probability of unauthorized or inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g., electronic or physical, tangible (e.g., paperwork), or intangible (e.g., knowledge).[2][3] Information security's primary focus is the balanced protection of data confidentiality, integrity, and availability (also known as the "CIA" triad)[4][5] while maintaining a focus on efficient policy implementation, all without hampering organizational productivity.[6] This is largely achieved through a structured risk management process.[7] To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on passwords, antivirus software, firewalls, encryption software, legal liability, security awareness and training, and so forth.[8] This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred, and destroyed.[9] While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized,[10][11] with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system).
IT security specialists are almost always found in any major enterprise or establishment due to the nature and value of the data within larger businesses.[12] They are responsible for keeping all of the technology within the company secure from malicious attacks that often attempt to acquire critical private information or gain control of the internal systems.[13][14] There are many specialist roles in information security, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics.[15] Information security standards are techniques generally outlined in published materials that attempt to protect the information of a user or organization.[16] This environment includes users themselves, networks, devices, all software, processes, information in storage or transit, applications, services, and systems that can be connected directly or indirectly to networks. The principal objective is to reduce the risks, including preventing or mitigating attacks. These published materials consist of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies. Various definitions of information security are suggested below, summarized from different sources: Information security threats come in many different forms.[27] Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion.[28][29] Viruses,[30] worms, phishing attacks, and Trojan horses are a few common examples of software attacks.
The theft of intellectual property has also been an extensive issue for many businesses.[31] Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering.[32][33] Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers.[34] Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property to its owner, as with ransomware.[35] One of the most functional precautions against these attacks is to conduct periodic user awareness training.[36] Governments, military, corporations, financial institutions, hospitals, non-profit organizations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status.[37] Should confidential information about a business's customers or finances or new product line fall into the hands of a competitor or hacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation.[38] From a business perspective, information security must be balanced against cost; the Gordon–Loeb model provides a mathematical economic approach for addressing this concern.[39] For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures.[40] Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering.[41] Julius Caesar is credited with the invention of the Caesar cipher c.
50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands.[42] However, for the most part protection was achieved through the application of procedural handling controls.[43][44] Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, and guarded and stored in a secure environment or strongbox.[45] As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653[46]). In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to the degree of sensitivity.[47] For example, the British government codified this, to some extent, with the publication of the Official Secrets Act in 1889.[48] Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust.[49] A public-interest defense was soon added to defend disclosures in the interest of the state.[50] A similar law, the Indian Official Secrets Act, was passed in India in 1889; it was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies.[51] A newer version, passed in 1923, extended to all matters of confidential or secret information for governance.[52] By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code-making and code-breaking sections in diplomatic and military headquarters.[53] Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information.[54] The establishment of computer security inaugurated the history of information security.
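The Caesar cipher mentioned above is simple enough to reconstruct from its description: each letter is shifted a fixed number of positions through the alphabet, and shifting back by the same amount decrypts. A minimal sketch:

```python
def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping within the alphabet;
    # non-letters (spaces, punctuation) pass through unchanged.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

# Encrypt with a shift of 3; decrypt by shifting back by the same amount.
print(caesar("Attack at dawn", 3))   # → Dwwdfn dw gdzq
```

With only 25 possible shifts, the cipher falls to trial of every key, which is why it survives today only as a teaching example.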
The need for such appeared during World War II.[55] The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls.[56] An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored, as increasingly complex safes and storage facilities were developed.[57] The Enigma machine, which was employed by the Germans to encrypt the data of warfare and was successfully decrypted by Alan Turing, can be regarded as a striking example of creating and using secured information.[58] Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570[58]). Various mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth between computer centers. As such, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Larry Roberts; it would later evolve into what is known as the internet.[59] In 1973, important elements of ARPANET security were found by internet pioneer Robert Metcalfe to have many flaws, such as the "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access.
Hackers had effortless access to ARPANET, as phone numbers were known by the public.[60] Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity".[60] The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption.[61] The availability of smaller, more powerful, and less expensive computing equipment brought electronic data processing within the reach of small business and home users.[62] The establishment of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite in the early 1980s enabled different types of computers to communicate.[63] These computers quickly became interconnected through the internet.[64] The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit.[65] The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.[66] The "CIA triad" of confidentiality, integrity, and availability is at the heart of information security.[67] The concept was introduced in the Anderson Report in 1972 and later repeated in The Protection of Information in Computer Systems. The abbreviation was coined by Steve Lipner around 1986.[68] Debate continues about whether or not this triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and
privacy.[4] Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts.[69] In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes."[70] While similar to "privacy", the two words are not interchangeable. Rather, confidentiality is a component of privacy that is implemented to protect data from unauthorized viewers.[71] Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals.[72] In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle.[73] This means that data cannot be modified in an unauthorized or undetected manner.[74] This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing.[75] Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats.[76] Multi-purpose and multi-user computer systems aim to compartmentalize data and processing such that no user or process can adversely impact another; the controls may not succeed, however, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches.[77] More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity.
As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance.[78] For any information system to serve its purpose, the information must be available when it is needed.[79] This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly.[80] High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades.[81] Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.[82] In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program.[citation needed] Ultimately, end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that its stakeholders expect.[83] This can involve topics such as proxy configurations, outside web access, the ability to access shared drives and the ability to send emails.[84] Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management.[85] A successful information security team involves many different key roles meshing and aligning for the "CIA" triad to be provided effectively.[86] In addition to the classic CIA triad of security goals, some organisations may want to include security goals like authenticity, accountability, non-repudiation, and reliability. In law, non-repudiation implies one's intention to fulfill their obligations to a contract.
It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction.[87] It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology.[88] It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and that thus only the sender could have sent the message and nobody else could have altered it in transit (data integrity).[89] The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that his signing key has been compromised.[90] The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are prerequisites for non-repudiation).[91] First published in 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks[92] proposed the nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment.[93] Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security[69] proposed 33 principles. In 1998, Donn Parker proposed an alternative model for the classic "CIA" triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility.
The merits of the Parkerian Hexad are a subject of debate amongst security professionals.[94] In 2011, The Open Group published the information security management standard O-ISM3.[95] This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset).[96] A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm.[97] The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact.[98] In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property).[99] The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures,[100] if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[101] There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely.
The business environment is constantly changing and new threats and vulnerabilities emerge every day.[102] Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.[103] Furthermore, these processes have limitations, as security breaches are generally rare and emerge in a specific context which may not be easily duplicated.[104] Thus, any process and countermeasure should itself be evaluated for vulnerabilities.[105] It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk".[106] A risk assessment is carried out by a team of people who have knowledge of specific areas of the business.[107] Membership of the team may vary over time as different parts of the business are assessed.[108] The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis. Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human.[109] The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment: In broad terms, the risk management process consists of:[110][111] For any given risk, management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business.[118] Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business.[119] The reality of some risks may be disputed.
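A subjective qualitative analysis of the kind mentioned above is often organized as a likelihood-by-impact matrix. The level names, numeric weights, and rating thresholds below are illustrative assumptions, not part of any standard:

```python
# Illustrative qualitative risk matrix: likelihood x impact -> overall rating.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    # Multiply ordinal levels; higher products mean more urgent risks.
    return LEVELS[likelihood] * LEVELS[impact]

def rating(score: int) -> str:
    # Arbitrary example thresholds for bucketing the product back into levels.
    return "high" if score >= 6 else "medium" if score >= 3 else "low"

# A likely threat against a critical asset ranks highest.
assert rating(risk_score("high", "high")) == "high"
# An unlikely threat with moderate impact stays low priority.
assert rating(risk_score("low", "medium")) == "low"
```

Quantitative analysis replaces these ordinal levels with dollar figures and observed frequencies, but the shape of the calculation — likelihood combined with impact — is the same.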
In such cases leadership may choose to deny the risk.[120] Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels.[121] Control selection should follow, and be based on, the risk assessment.[122] Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas.[123] Organizations can implement additional controls according to their own requirements.[124] ISO/IEC 27002 offers a guideline for organizational information security standards.[125] Defense in depth is a fundamental security philosophy that relies on overlapping security systems designed to maintain protection even if individual components fail. Rather than depending on a single security measure, it combines multiple layers of security controls both in the cloud and at network endpoints. This approach includes combinations like firewalls with intrusion-detection systems, email filtering services with desktop anti-virus, and cloud-based security alongside traditional network defenses.[126] The concept can be implemented through three distinct layers of administrative, logical, and physical controls,[127] or visualized as an onion model with data at the core, surrounded by people, network security, host-based security, and application security layers.[128] The strategy emphasizes that security involves not just technology, but also people and processes working together, with real-time monitoring and response being crucial components.[126] An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information.[129] Not all information is equal and so not all information requires the same degree of protection.[130] This requires information to be assigned a security classification.[131] The first step
in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy.[132] The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.[133] Some factors that influence which classification should be assigned to information include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete.[134] Laws and other regulatory requirements are also important considerations when classifying information.[135] The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serve as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed.[136] The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:[133] All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification.[139] The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are followed correctly.[140] Access to protected information must be restricted to people who are authorized to access the information.[141] The computer programs, and in many cases the computers that process the information, must also be authorized.[142] This requires that mechanisms be in place to control the access to protected
information.[142] The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be.[143] The foundation on which access control mechanisms are built starts with identification and authentication.[144] Access control is generally considered in three steps: identification, authentication, and authorization.[145][72] Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe", they are making a claim of who they are.[146] However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe.[147] Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to".[148] Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity.[149] The bank teller asks to see a photo ID, so he hands the teller his driver's license.[150] The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe.[151] If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be.
Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to.[152] There are three different types of information that can be used for authentication:[153][154] Strong authentication requires providing more than one type of authentication information (two-factor authentication).[160] The username is the most common form of identification on computer systems today and the password is the most common form of authentication.[161] Usernames and passwords have served their purpose, but they are increasingly inadequate.[162] Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as time-based one-time password algorithms.[163] After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change).[164] This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures.[165] The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.[166] Different computing systems are equipped with different kinds of access control mechanisms.
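The time-based one-time password algorithms mentioned above (standardized as TOTP in RFC 6238, built on HOTP from RFC 4226) derive a short code from a shared secret and the current time; both sides compute the code independently and the server accepts a match within a small window of time steps. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6) -> str:
    # TOTP is HOTP applied to a counter derived from Unix time (RFC 6238).
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks an
    # offset, from which 31 bits are taken and reduced to `digits` digits.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The prover and verifier share only the secret and a reasonably
# synchronized clock; the code changes every `step` seconds.
code = totp(b"12345678901234567890")
```

This matches the published RFC 6238 test vectors (for the ASCII secret "12345678901234567890" at T=59, the six-digit code is 287082).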
Some may even offer a choice of different access control mechanisms.[167] The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches.[72] The non-discretionary approach consolidates all access control under a centralized administration.[168] Access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform.[169][170] The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources.[168] In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.[141] Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems;[171] Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers.[172] To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions.[173] The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail.[174] Also, the need-to-know principle needs to be in effect when talking about access control.
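Role-based access control, listed above among common mechanisms, can be reduced to a sketch: a mapping from roles to permitted actions, consulted at authorization time. The role and permission names here are invented for illustration:

```python
# Hypothetical role-to-permission mapping; real systems store this in a
# directory or database and support role hierarchies.
ROLE_PERMISSIONS = {
    "analyst": {"report:view"},
    "admin": {"report:view", "report:delete", "user:create"},
}

def is_authorized(role: str, permission: str) -> bool:
    # Authorization succeeds only if the role's permission set contains
    # the requested action; unknown roles get an empty set (deny by default).
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("admin", "report:delete")
assert not is_authorized("analyst", "report:delete")
assert not is_authorized("guest", "report:view")  # unknown role is denied
```

Note the deny-by-default posture: any role or permission not explicitly granted is refused, which also reflects the need-to-know principle discussed next.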
This principle gives a person access rights only to what they need to perform their job functions.[175] This principle is used in the government when dealing with different clearances.[176] Even though two employees in different departments have a top-secret clearance, they must have a need to know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to.[177] Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidentiality area of the triad.[178] Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption.[179] Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption.[180] Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage.[72] Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications.[181] Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications.[182] Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP.
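Two of the applications listed above, message digests and message authentication, can be demonstrated with Python's standard library. This is a sketch with a hypothetical message and demo key, not a production key-management scheme.

```python
import hashlib
import hmac

message = b"wire transfer: $100 to account 42"

# A message digest detects modification: any change to the message
# produces a completely different hash value.
digest = hashlib.sha256(message).hexdigest()

# An HMAC additionally binds the digest to a shared secret key, so only
# a holder of the key can produce a valid tag (message authentication).
key = b"shared-secret-key"  # demo value; real keys must be protected
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The verifier recomputes the tag; compare_digest avoids timing leaks.
recomputed = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, recomputed))  # True

tampered = message.replace(b"$100", b"$999")
bad = hmac.new(key, tampered, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, bad))  # False
```

A plain digest protects only integrity (anyone can recompute it); the keyed HMAC also provides authentication, since producing a valid tag requires the secret key.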
Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange.[183] Software applications such as GnuPG or PGP can be used to encrypt data files and email.[184] Cryptography can introduce security problems when it is not implemented correctly.[185] Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography.[186] The length and strength of the encryption key is also an important consideration.[187] A key that is weak or too short will produce weak encryption.[187] The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information.[188] They must be protected from unauthorized disclosure and destruction, and they must be available when needed.[citation needed] Public key infrastructure (PKI) solutions address many of the problems that surround key management.[72] U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems.[189] In the field of information security, Harris[190] offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees."[191] And, "[Due diligence are the] continual activities that make sure the protection mechanisms are continually maintained and operational."[192] Attention should be paid to two important points in these definitions.[193][194] First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts.[195][196] Second, in due diligence, there are continual activities; this means that people are actually doing things to
monitor and maintain the protection mechanisms, and these activities are ongoing.[197] Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA)[198] provides principles and practices for evaluating risk.[199] It considers all parties that could be affected by those risks.[200] DoCRA helps evaluate whether safeguards are appropriate in protecting others from harm while presenting a reasonable burden.[201] With increased data breach litigation, companies must balance security controls, compliance, and their mission.[202] Computer security incident management is a specialized form of incident management focused on monitoring, detecting, and responding to security events on computers and networks in a predictable way.[203] Organizations implement this through incident response plans (IRPs) that are activated when security breaches are detected.[204] These plans typically involve an incident response team (IRT) with specialized skills in areas like penetration testing, computer forensics, and network security.[205] Change management is a formal process for directing and controlling alterations to the information processing environment.[206][207] This includes alterations to desktop computers, the network, servers, and software.[208] The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made.[209] It is not the objective of change management to prevent or hinder necessary changes from being implemented.[210][211] Any change to the information processing environment introduces an element of risk.[212] Even apparently simple changes can have unexpected effects.[213] One of management's many responsibilities is the management of risk.[214][215] Change management is a tool for managing the risks introduced by changes to the information processing
environment.[216] Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented.[217] Not every change needs to be managed.[218][219] Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment.[220] Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management.[221] However, relocating user file shares or upgrading the email server poses a much higher level of risk to the processing environment, and such changes are not a normal everyday activity.[222] The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system.[223] Change management is usually overseen by a change review board composed of representatives from key business areas,[224] security, networking, systems administration, database administration, application development, desktop support, and the help desk.[225] The tasks of the change review board can be facilitated with the use of an automated workflow application.[226] The responsibility of the change review board is to ensure the organization's documented change management procedures are followed.[227] The change management process is as follows:[228] Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment.[260] Good change management procedures improve the overall quality and success of changes as they are implemented.[261] This is accomplished through planning, peer review, documentation, and communication.[262] ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps[263] (Full book
summary),[264] and ITIL all provide valuable guidance on implementing an efficient and effective change management program for information security.[265] Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects.[266][267] BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual.[268] The BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function.[269] It encompasses: Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster.[279] A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure.[280] Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedures, and lastly implementing the plan.[281] Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have a significant effect on data processing and information security.[282][283] Important industry sector regulations have also been included when they have a significant impact on information security.[282] The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and
activities to earn and maintain various industry Information Technology (IT) certifications in an effort to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT industry recognized knowledge, skills, and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through ICS2.org's CISSP, etc.[318] Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways.[319] Cultural concepts can help different segments of the organization work effectively, or work against effectiveness, towards information security within an organization. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations:[320] Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests.[322] Research shows information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, the authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[323]
https://en.wikipedia.org/wiki/Information_security_policies
Raz-Lee Security, Inc. is an international organization that provides data security solutions for IBM's Power i servers. The company's clients include Fiat, Agfa, Teva Pharmaceuticals, Avnet, Dun & Bradstreet, and the Israel branch of the American insurance company American International Group (AIG), among others.[2][3] Founded in 1983,[4] the company was formerly headquartered in Herzliya, Israel.[2] By 1992, 33% of the company's products were being sold in the United States,[5] and from 1994 to 1996 its business there, where it had offices in Nanuet, New York, dramatically increased, though its primary purchasers were still spread through Europe, the Middle East, and Africa.[2] As of 2009, the company is headquartered in Nanuet, with a research and development facility in Israel.[3] The company also has offices in Israel and Italy and maintains a technical support and US account management center in San Francisco.[6]
https://en.wikipedia.org/wiki/Raz-Lee
Enterprise architecture (EA) is a business function concerned with the structures and behaviours of a business, especially business roles and processes that create and use business data. The international definition according to the Federation of Enterprise Architecture Professional Organizations is "a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy. Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes."[1] The United States Federal Government is an example of an organization that practices EA, in this case with its Capital Planning and Investment Control processes.[2] Companies such as Independence Blue Cross, Intel, Volkswagen AG,[3] and InterContinental Hotels Group also use EA to improve their business architectures as well as to improve business performance and productivity. Additionally, the Federal Enterprise Architecture's reference guide aids federal agencies in the development of their architectures.[4] As a discipline, EA "proactively and holistically lead[s] enterprise responses to disruptive forces by identifying and analyzing the execution of change" towards organizational goals. EA gives business and IT leaders recommendations for policy adjustments and provides best strategies to support and enable business development and change within the information systems the business depends on. EA provides a guide for decision making towards these objectives.[5] The National Computing Centre's EA best practice guidance states that an EA typically "takes the form of a comprehensive set of cohesive models that describe the structure and functions of an enterprise.
The individual models in an EA are arranged in a logical manner that provides an ever-increasing level of detail about the enterprise."[6] Important players within EA include enterprise architects and solutions architects. Enterprise architects are at the top level of the architect hierarchy, meaning they have more responsibilities than solutions architects. While solutions architects focus on their own relevant solutions, enterprise architects focus on solutions for, and the impact on, the whole organization. Enterprise architects oversee many solution architects and business functions. As practitioners of EA, enterprise architects support an organization's strategic vision by acting to align people, process, and technology decisions with actionable goals and objectives that result in quantifiable improvements toward achieving that vision. The practice of EA "analyzes areas of common activity within or between organizations, where information and other resources are exchanged to guide future states from an integrated viewpoint of strategy, business, and technology."[7] The term enterprise can be defined as an organizational unit, organization, or collection of organizations that share a set of common goals and collaborate to provide specific products or services to customers.[8] In that sense, the term enterprise covers various types of organizations, regardless of their size, ownership model, operational model, or geographical distribution. It includes those organizations' complete sociotechnical system,[9] including people, information, processes, and technologies. Enterprise as a sociotechnical system defines the scope of EA.
The term architecture refers to the fundamental concepts or properties of a system in its environment, embodied in its elements, relationships, and the principles of its design and evolution.[10] A methodology for developing and using architecture to guide the transformation of a business from a baseline state to a target state, sometimes through several transition states, is usually known as an enterprise architecture framework. A framework provides a structured collection of processes, techniques, artifact descriptions, reference models, and guidance for the production and use of an enterprise-specific architecture description.[citation needed] Paramount to changing the EA is the identification of a sponsor. The sponsor's mission, vision, strategy, and governance framework define all roles, responsibilities, and relationships involved in the anticipated transformation. Changes considered by enterprise architects typically include innovations in the structure or processes of an organization; innovations in the use of information systems or technologies; the integration and/or standardization of business processes; and improvement of the quality and timeliness of business information.[citation needed] According to the standard ISO/IEC/IEEE 42010,[10] the product used to describe the architecture of a system is called an architectural description. In practice, an architectural description contains a variety of lists, tables, and diagrams. These are models known as views. In the case of EA, these models describe the logical business functions or capabilities, business processes, human roles and actors, the physical organization structure, data flows and data stores, business applications and platform applications, hardware, and communications infrastructure.
The first use of the term "enterprise architecture" is often incorrectly attributed to John Zachman's 1987 A framework for information systems architecture.[11] The first publication to use it was instead a National Institute of Standards (NIST) Special Publication[12] on the challenges of information system integration.[citation needed] The NIST article describes EA as consisting of several levels. Business unit architecture is the top level and might be a total corporate entity or a sub-unit. It establishes for the whole organization the necessary frameworks for "satisfying both internal information needs" as well as the needs of external entities, which include cooperating organizations, customers, and federal agencies. The lower levels of the EA that provide information to higher levels are more attentive to detail on behalf of their superiors. In addition to this structure, business unit architecture establishes standards, policies, and procedures that either enhance or stymie the organization's mission.[12] The main difference between these two definitions is that Zachman's concept was the creation of individual information systems optimized for business, while NIST's described the management of all information systems within a business unit. The definitions in both publications, however, agreed that due to the "increasing size and complexity of the [i]mplementations of [i]nformation systems... logical construct[s] (or architecture) for defining and controlling the interfaces and... [i]ntegration of all the components of a system" are necessary. Zachman in particular urged a "strategic planning methodology."[11] Within the field of enterprise architecture, there are three overarching schools: Enterprise IT Design, Enterprise Integrating, and Enterprise Ecosystem Adaption.
Which school one subscribes to will impact how one sees EA's purpose and scope, as well as the means of achieving it, the skills needed to conduct it, and the locus of responsibility for conducting it.[13] Under Enterprise IT Design, the main purpose of EA is to guide the process of planning and designing an enterprise's IT/IS capabilities to meet the desired organizational objectives, often through greater alignment between IT/IS and business concerns. Architecture proposals and decisions are limited to the IT/IS aspects of the enterprise; other aspects serve only as inputs. The Enterprise Integrating school holds that the purpose of EA is to create greater coherency between the various concerns of an enterprise (HR, IT, Operations, etc.), including the link between strategy formulation and execution. Architecture proposals and decisions here encompass all aspects of the enterprise. The Enterprise Ecosystem Adaption school states that the purpose of EA is to foster and maintain the learning capabilities of enterprises so they may be sustainable. Consequently, a great deal of emphasis is put on improving the capabilities of the enterprise to improve itself, to innovate, and to coevolve with its environment. Typically, proposals and decisions encompass both the enterprise and its environment.
The benefits of EA are achieved through its direct and indirect contributions to organizational goals.[14] Notable benefits include support in areas related to the design and re-design of organizational structures during mergers, acquisitions, or general organizational change;[15][16][17][18] enforcement of discipline and business process standardization, and enablement of process consolidation, reuse, and integration;[19][20] support for investment decision-making and work prioritization;[16][21][17] enhancement of collaboration and communication between project stakeholders, and contribution to efficient project scoping and to defining more complete and consistent project deliverables;[18][19] and an increase in the timeliness of requirements elicitation and the accuracy of requirement definitions through publishing of the EA documentation.[22] Other benefits include contribution to optimal system designs and efficient resource allocation during system development and testing;[16][17] enforcement of discipline and standardization of IT planning activities and contribution to a reduction in time for technology-related decision making;[17][20] reduction of system implementation and operational costs, and minimization of duplicated infrastructure services across business units;[20][23] reduction in IT complexity, consolidation of data and applications, and improvement of interoperability of systems;[19][20][23] more open and responsive IT as reflected through increased accessibility of data for regulatory compliance and increased transparency of infrastructure changes;[20][24] and a reduction of business risks from system failures and security breaches.
EA also helps reduce risks of project delivery.[20][25] Establishing EA as an accepted, recognized, functionally integrated, and fully involved concept at operational and tactical levels is one of the biggest challenges facing enterprise architects today and one of the main reasons why many EA initiatives fail.[26] A key concern about EA has been the difficulty in arriving at metrics of success, because of the broad-brush and often opaque nature of EA projects.[27] Additionally, there have been a number of reports, including those written by Ivar Jacobson,[28] Gartner,[29] Erasmus University Rotterdam and IDS Scheer,[30] Dion Hinchcliffe,[31] and Stanley Gaver,[32] that argue that the frequent failure of EA initiatives makes the concept not worth the effort and that the methodology will fade out quickly. According to the Federation of Enterprise Architecture Professional Organizations (FEAPO), EA interacts with a wide array of other disciplines commonly found in business settings, such as performance engineering and management, process engineering and management, IT and enterprise portfolio management, governance and compliance, IT strategic planning, risk analysis, information management, metadata management, organization development, design thinking, systems thinking, and user experience design.[1][33][34][35] The EA of an organization is too complex and extensive to document in its entirety, so knowledge management techniques provide a way to explore and analyze these hidden, tacit, or implicit areas. In return, EA provides a way of documenting the components of an organization and their interaction in a systemic and holistic way that complements knowledge management.[36] In various venues,[37] EA has been discussed as having a relationship with Service Oriented Architecture (SOA), a particular style of application integration.
Research points to EA promoting the use of SOA as an enterprise-wide integration pattern.[38][39] The broad reach of EA has resulted in this business role being included in the information technology governance processes of many organizations. Analyst firm Real Story Group suggested that EA and the emerging concept of the digital workplace are "two sides to the same coin."[40] The Cutter Consortium described EA as an information- and knowledge-based discipline.[41]
https://en.wikipedia.org/wiki/Enterprise_architecture
Enterprise architecture planning (EAP) in enterprise architecture is the planning process of defining architectures for the use of information in support of the business and the plan for implementing those architectures.[2] One of the earlier professional practitioners in the field of system architecture, Steven H. Spewak, in 1992 defined Enterprise Architecture Planning (EAP) as "the process of defining architectures for the use of information in support of the business and the plan for implementing those architectures."[3] Spewak's approach to EAP is similar to that taken by the DOE, in that the business mission is the primary driver. That is followed by the data required to satisfy the mission, followed by the applications that are built using that data, and finally by the technology to implement the applications.[1] This hierarchy of activity is represented in the figure above, in which the layers are implemented in order, from top to bottom. Based on the Business Systems Planning (BSP) approach developed by John Zachman, EAP takes a data-centric approach to architecture planning to provide data quality, access to data, adaptability to changing requirements, data interoperability and sharing, and cost containment. This view counters the more traditional view that applications should be defined before data needs are determined or provided for.[1] EAP defines the blueprint for subsequent design and implementation, and it places the planning/defining stages into a framework. It does not explain how to define the top two rows of the Zachman Framework in detail; for the sake of the planning exercise, it abbreviates the analysis. The Zachman Framework provides the broad context for the description of the architecture layers, while EAP focuses on planning and managing the process of establishing the business alignment of the architectures.[2] EAP is planning that focuses on the development of matrices for comparing and analyzing data, applications, and technology.
Most important, EAP produces an implementation plan. Within the Federal Enterprise Architecture, EAP will be completed enterprise segment by enterprise segment. The results of these efforts may be of government-wide value; therefore, as each segment completes EAP, the results will be published on the ArchitecturePlus web site.[2] The Enterprise Architecture Planning model consists of four levels: The Enterprise Architecture Planning (EAP) methodology is beneficial to understanding the further definition of the Federal Enterprise Architecture Framework at level IV. EAP is a how-to approach for creating the top two rows of the Zachman Framework, Planner and Owner. The design of systems begins in the third row, outside the scope of EAP.[2] EAP focuses on defining what data, applications, and technology architectures are appropriate for and support the overall enterprise. Exhibit 6 shows the seven components (or steps) of EAP for defining these architectures and the related migration plan. The seven components are in the shape of a wedding cake, with each layer representing a different focus of each major task (or step).[2] The effectiveness of the EAP methodology has been questioned since the late 1980s and early 1990s:
https://en.wikipedia.org/wiki/Enterprise_architecture_planning
Genuine progress indicator (GPI) is a metric that has been suggested to replace, or supplement, gross domestic product (GDP).[1] The GPI is designed to take fuller account of the well-being of a nation, only a part of which pertains to the size of the nation's economy, by incorporating environmental and social factors which are not measured by GDP. For instance, some models of GPI decrease in value when the poverty rate increases.[2] The GPI separates the concept of societal progress from economic growth. The GPI is used in ecological economics, "green" economics, sustainability, and more inclusive types of economics. It factors in the environmental and carbon footprints that businesses produce or eliminate, including in the forms of resource depletion, pollution, and long-term environmental damage.[2] GDP is increased twice when pollution is created, since it increases once upon creation (as a side-effect of some valuable process) and again when the pollution is cleaned up; in contrast, GPI counts the initial pollution as a loss rather than a gain, generally equal to the amount it will cost to clean up later plus the cost of any negative impact the pollution will have in the meantime. While quantifying the costs and benefits of these environmental and social externalities is a difficult task, "Earthster-type databases could bring more precision and currency to GPI's metrics."[2] It has been noted that such data may also be embraced by those who attempt to "internalize externalities" by making companies pay the costs of the pollution they create (rather than having the government or society at large bear those costs) "by taxing their goods proportionally to their negative ecological and social impacts".[2] GPI is an attempt to measure whether the environmental impact and social costs of economic production and consumption in a country are negative or positive factors in overall health and well-being.
By accounting for the costs borne by the society as a whole to repair or control pollution and poverty, GPI balances GDP spending against external costs. GPI advocates claim that it can more reliably measure economic progress, as it distinguishes between the overall "shift in the 'value basis' of a product, adding its ecological impacts into the equation".[2]: Ch. 10.3 Comparatively speaking, the relationship between GDP and GPI is analogous to the relationship between the gross profit of a company and the net profit; the net profit is the gross profit minus the costs incurred, while the GPI is the GDP (the value of all goods and services produced) minus the environmental and social costs. Accordingly, the GPI will be zero if the financial costs of poverty and pollution equal the financial gains in production of goods and services, all other factors being constant. Some economists[who?] assess progress in an economy's welfare by comparing its gross domestic product over time — that is, by adding up the annual dollar value of all goods and services produced within a country over successive years. However, GDP was not intended to be used for such a purpose.[3] It is prone to productivism or consumerism, over-valuing production and consumption of goods, and not reflecting improvement in human well-being. It also does not distinguish between money spent for new production and money spent to repair negative outcomes from previous expenditure. For example, it would treat as equivalent one million dollars spent to build new homes and one million dollars spent in aid relief to those whose homes have been destroyed, despite these expenditures arguably not representing the same kind of progress.
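The net-profit analogy above amounts to simple subtraction. A toy calculation makes the relationship concrete; all figures here are made up for illustration, not real national accounts:

```python
# Toy GPI calculation: GPI = GDP minus environmental and social costs,
# just as net profit = gross profit minus costs incurred.
# All figures are hypothetical, in billions of dollars.
gdp = 1000.0                 # value of all goods and services produced
environmental_costs = 120.0  # e.g. pollution cleanup, resource depletion
social_costs = 80.0          # e.g. costs of poverty, crime

gpi = gdp - environmental_costs - social_costs
print(gpi)  # 800.0

# If costs grow until they equal output, GPI falls to zero even though
# GDP itself is unchanged — the "zero GPI" case described above.
print(gdp - 600.0 - 400.0)  # 0.0
```

The same subtraction also captures the pollution double-count: the cleanup spending that raises GDP appears on the cost side of the GPI calculation instead.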
This is relevant, for example, when considering the true costs of development that destroys wetlands and hence exacerbates flood damages. Simon Kuznets, the inventor of the concept of GDP, noted in his first report to the US Congress in 1934: "the welfare of a nation can scarcely be inferred from a measure of national income."[3] In 1962, he also wrote: "Distinctions must be kept in mind between quantity and quality of growth, between costs and returns, and between the short and long run... Goals for more growth should specify more growth of what and for what."[4] Some[who?] have argued that an adequate measure must also take into account ecological yield and the ability of nature to provide services, and that these things are part of a more inclusive ideal of progress, which transcends the traditional focus on raw industrial production. The need for a GPI to supplement indicators such as GDP was highlighted by analyses of uneconomic growth in the 1980s, notably that of Marilyn Waring, who studied biases in the UN System of National Accounts.[citation needed] By the early 1990s, there was a consensus in human development theory and ecological economics that growth in money supply was actually reflective of a loss of well-being: that shortfalls in essential natural and social services were being paid for in cash and that this was expanding the economy but degrading life.[citation needed] The matter remains controversial and is a main issue between advocates of green economics and neoclassical economics. Neoclassical economists understand the limitations of GDP for measuring human well-being but nevertheless regard GDP as an important, though imperfect, measure of economic output and would be wary of too close an identification of GDP growth with aggregate human welfare. However, GDP tends to be reported as synonymous with economic progress by journalists and politicians, and the GPI seeks to correct this shorthand by providing a more encompassing measure.
Some economists, notably Herman Daly, John B. Cobb[5] and Philip Lawn,[6] have asserted that a country's growth, increased goods production, and expanding services have both costs and benefits, not just the benefits that contribute to GDP. They assert that, in some situations, expanded production facilities damage the health, culture, and welfare of people. Growth that was in excess of sustainable norms (e.g., of ecological yield) had to be considered uneconomic.

According to the "threshold hypothesis", developed by Manfred Max-Neef, "when macroeconomic systems expand beyond a certain size, the additional benefits of growth are exceeded by the attendant costs" (Max-Neef, 1995). This hypothesis is borne out in data comparing GDP/capita with GPI/capita from 17 countries. The data demonstrate that, while GDP does increase overall well-being to a point, beyond $7,000 GDP/capita the increase in GPI is reduced or remains stagnant.[7] Similar trends can be seen when comparing GDP to life satisfaction, as well as in a Gallup Poll published in 2008.[8]

According to Lawn's model, the "costs" of economic activity include the following potential harmful effects:[9]

Analysis by Robert Costanza, also around 1995, of nature's services and their value showed that a great deal of degradation of nature's ability to clear waste, prevent erosion, pollinate crops, etc., was being done in the name of monetary profit opportunity: this was adding to GDP but causing a great deal of long-term risk in the form of mudslides, reduced yields, lost species, water pollution, etc. Such effects have been very marked in areas that suffered serious deforestation, notably Haiti, Indonesia, and some coastal mangrove regions of India and South America.
Some of the worst land abuses, for instance, have been shrimp farming operations that destroyed mangroves, evicted families, and left coastal lands salted and useless for agriculture, but generated a significant cash profit for those who were able to control the export market in shrimp. This has become a signal example to those who contest the idea that GDP growth is necessarily desirable.

GPI systems generally try to take account of these problems by incorporating sustainability: whether a country's economic activity over a year has left the country with a better or worse future possibility of repeating at least the same level of economic activity in the long run. For example, agricultural activity that uses replenishing water resources, such as river runoff, would score a higher GPI than the same level of agricultural activity that drastically lowers the water table by pumping irrigation water from wells.

Hicks (1946) pointed out that the practical purpose of calculating income is to indicate the maximum amount that people can produce and consume without undermining their capacity to produce and consume the same amount in the future. From a national income perspective, it is necessary to answer the following question: "Can a nation's entire GDP be consumed without undermining its ability to produce and consume the same GDP in the future?" This question is largely ignored in contemporary economics but fits under the idea of sustainability.
The best-known[dubious–discuss] attempts to apply the concepts of GPI to legislative decisions are probably the GPI Atlantic,[10] an index (not an indicator) invented by Ronald Colman for Atlantic Canada, who explicitly avoids aggregating the results obtained through research into a single number, alleging that doing so keeps decision makers in the dark; the Alberta GPI,[11] created by ecological economist Mark Anielski to measure the long-term economic, social and environmental sustainability of the province of Alberta; and the "environmental and sustainable development indicators" used by the Government of Canada to measure its own progress toward achieving well-being goals.

The Canadian Environmental Sustainability Indicators program is an effort to justify state services in GPI terms.[citation needed] It assigns the Commissioner of the Environment and Sustainable Development, an officer in the Auditor-General of Canada's office, to perform the analysis and report to the House of Commons. However, Canada continues to state its overall budgetary targets in terms of reducing its debt to GDP ratio, which implies that GDP increase and debt reduction in some combination are its main priorities.

In the European Union (EU), the Metropole efforts and the London Health Observatory methods are equivalents focused mostly on urban lifestyle. The EU and Canadian efforts are among the most advanced in any of the G8 or OECD nations,[citation needed] but there are parallel efforts to measure quality of life or standard of living in health (not strictly wealth) terms in all developed nations. This has also been a recent focus of the labour movement.
The calculation of GPI, presented in simplified form, is the following:

GPI = A + B - C - D + I

A is income-weighted private consumption
B is the value of non-market services generating welfare
C is the private defensive cost of natural deterioration
D is the cost of deterioration of nature and natural resources
I is the increase in capital stock and balance of international trade

The GPI indicator is based on the concept of sustainable income, presented by economist John Hicks (1948). Sustainable income is the amount a person or an economy can consume during one period without decreasing his or her consumption during the next period. In the same manner, GPI depicts the state of welfare in the society by taking into account the ability to maintain welfare at least at the same level in the future.

The Genuine Progress Indicator is measured by 26 indicators, which can be divided into three main categories: economic, environmental, and social. Some regions, nations, or states may adjust the verbiage slightly to accommodate their particular scenario.[12] For example, the GPI template uses the phrase "Carbon Dioxide Emissions Damage", whereas the state of Maryland uses "Cost of Climate Change"[13] because it also accounts for other greenhouse gases (GHG) such as methane and nitrous oxide.

Non-profit organizations and universities have measured the GPI of Vermont, Maryland, Colorado, Ohio, and Utah. These efforts have incited government action in some states. As of 2014, Vermont, Maryland, Washington and Hawai'i have passed state government initiatives to consider GPI[15] in budgeting decisions, with a focus on long-term costs and benefits. Hawai'i's GPI spans the years from 2000 to 2020 and will be updated annually.[16]

In 2009, the state of Maryland formed a coalition of representatives from several state government departments in search of a metric that would factor social well-being into the more traditional gross product indicators of the economy.
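As a worked illustration, the simplified formula above can be expressed directly in code. The figures used below are invented for illustration and do not correspond to any real national accounts:

```python
def gpi(a: float, b: float, c: float, d: float, i: float) -> float:
    """Simplified GPI: income-weighted private consumption (A), plus the
    value of non-market welfare-generating services (B), minus private
    defensive costs of natural deterioration (C) and the cost of
    deterioration of nature and natural resources (D), plus the increase
    in capital stock and balance of international trade (I)."""
    return a + b - c - d + i

# Hypothetical figures (e.g., billions of dollars), not real data:
print(gpi(a=100.0, b=20.0, c=10.0, d=15.0, i=5.0))  # 100.0
```

The point of the decomposition is visible here: two economies with the same consumption A can have very different GPI values once defensive and environmental costs C and D are subtracted.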
The metric would help determine the sustainability of growth and economic progress against social and environmental factors typically left out of national indicators. The GPI was chosen as a comprehensive measure of sustainability because it has a well-accepted scientific methodology that can be adopted by other states and compared over time.[17] Maryland's GPI trends are comparable to those of other states and nations that have measured their GPI, in that gross state product (GSP) and GPI have diverged over the past four decades, with GSP increasing more rapidly than GPI. While economic elements of GPI have increased overall (with a significant drop-off during the Great Recession), social well-being has stagnated, with any values added being cancelled out by costs deducted, and environmental indicators, while improving slightly, are always counted as costs. Combined, these elements bring the GPI below GSP.[18] However, Maryland's GPI did increase by two points from 2010 to 2011.[19]

The calculation methodology of GPI was first developed and published in 1995 by Redefining Progress and applied to US data from 1950 to 1994.[20] The original work on the GPI in 1995 was a modification of the 1994 version of the Index of Sustainable Economic Welfare in Daly and Cobb. Results showed that GDP increased substantially from 1950 to 1994. Over the same period, the GPI stagnated. Thus, according to GPI theory, economic growth in the US, i.e., the growth of GDP, did not increase the welfare of the people over that 44-year period.

So far, GPI time series have been calculated for the US and Australia, as well as for several of their states. In addition, GPI has been calculated for Austria, Canada, Chile, France, Finland, Italy, the Netherlands, Scotland, and the rest of the UK.

The GPI time series from 1945 to 2011 for Finland has been calculated by Statistics Finland. The calculation closely followed the US methodology.
The results show that in the 1970s and 1980s economic growth, as measured by GDP, clearly increased welfare, as measured by the GPI. After the economic recession of the early 1990s, GDP continued to grow, but the GPI stayed at a lower level. This indicates a widening gap between the trends of GDP and GPI that began in the early 1990s. In the 1990s and 2000s, the growth of GDP did not benefit the average Finn. Measured by GPI, sustainable economic welfare actually decreased, due to environmental hazards that accumulated in the environment. The Finnish GPI time series[21] has been updated by Dr Jukka Hoffrén at Statistics Finland.

Within the EU's Interreg IV C FRESH project (Forwarding Regional Environmental Sustainable Hierarchies), GPI time series were calculated for the Päijät-Häme, Kainuu and South Ostrobothnia (Etelä-Pohjanmaa) regions in 2009–2010.[22] During 2011, these calculations were complemented with GPI calculations for the Lapland, Northern Ostrobothnia (Pohjois-Pohjanmaa) and Central Ostrobothnia (Keski-Pohjanmaa) regions.

GPI considers some types of production to have a negative impact on the ability to continue some types of production. GDP measures the entirety of production at a given time. GDP is relatively straightforward to measure compared to GPI. Competing measures like GPI attempt to define well-being, which is arguably impossible to define. Therefore, opponents of GPI claim that GPI cannot function as a measure of the goals of a diverse, plural society. Supporters of GDP as a measure of societal well-being claim that competing measures such as GPI are more vulnerable to political manipulation.[23]

Finnish economists Mika Maliranta and Niku Määttänen write that the problem with alternative development indexes is their attempt to combine things that are incommensurable. It is hard to say what exactly they indicate, and it is difficult to make decisions based on them.
They can be compared to an indicator that shows the mean of a car's velocity and the amount of fuel left. They add that it indeed seems as if the economy has to grow in order for the people to even remain as happy as they are at present. In Japan, for example, the degree of happiness expressed by the citizens in polls has been declining since the early 1990s, the period when Japan's economic growth stagnated.[24]
https://en.wikipedia.org/wiki/Genuine_progress_indicator
A digital identity is data stored on computer systems relating to an individual, organization, application, or device. For individuals, it involves the collection of personal data that is essential for facilitating automated access to digital services, confirming one's identity on the internet, and allowing digital systems to manage interactions between different parties. It is a component of a person's social identity in the digital realm, often referred to as their online identity.

Digital identities are composed of the full range of data produced by a person's activities on the internet, which may include usernames and passwords, search histories, dates of birth, social security numbers, and records of online purchases. When such personal information is accessible in the public domain, it can be used by others to piece together a person's offline identity. Furthermore, this information can be compiled to construct a "data double"—a comprehensive profile created from a person's scattered digital footprints across various platforms. These profiles are instrumental in enabling personalized experiences on the internet and within different digital services.[1][2]

Should the exchange of personal data for online content and services become a practice of the past, an alternative transactional model must emerge. As the internet becomes more attuned to privacy concerns, media publishers, application developers, and online retailers are re-evaluating their strategies, sometimes reinventing their business models completely. Increasingly, the trend is shifting towards monetizing online offerings directly, with users being asked to pay for access through subscriptions and other forms of payment, moving away from reliance on collecting personal data.[3]

Navigating the legal and societal implications of digital identity is intricate and fraught with challenges.
Misrepresenting one's legal identity in the digital realm can pose numerous threats to a society increasingly reliant on digital interactions, opening doors for various illicit activities. Criminals, fraudsters, and terrorists could exploit these vulnerabilities to perpetrate crimes that can affect the virtual domain, the physical world, or both.[4]

A critical problem in cyberspace is knowing who one is interacting with. Using only static identifiers such as passwords and email, there is no way to precisely determine the identity of a person in cyberspace, because this information can be stolen or used by many individuals acting as one. Digital identity based on dynamic entity relationships captured from behavioral history across multiple websites and mobile apps can verify and authenticate identity with up to 95% accuracy.[citation needed]

By comparing a set of entity relationships between a new event (e.g., login) and past events, a pattern of convergence can verify or authenticate the identity as legitimate, whereas divergence indicates an attempt to mask an identity. Data used for digital identity is generally encrypted using a one-way hash, thereby avoiding privacy concerns. Because it is based on behavioral history, a digital identity is very hard to fake or steal.

A digital identity may also be referred to as a digital subject or digital entity. Digital identities are the digital representation of a set of claims made by one party about itself or another person, group, thing, or concept. A digital twin,[5] also commonly known as a data double or virtual twin, is a secondary version of the original user's data, used both as a way to observe what that user does on the internet and to customize a more personalized internet experience.[citation needed] Due to the collection of personal data, there have been many social, political, and legal controversies surrounding data doubles.
The attributes of a digital identity are acquired and contain information about a user, such as medical history, purchasing behavior, bank balance, age, and so on. Preferences retain a user's choices, such as a favorite brand of shoes or a preferred currency. Traits are features of the user that are inherent, such as eye color, nationality, and place of birth. Although attributes of a user can change easily, traits change slowly, if at all. A digital identity also has entity relationships derived from the devices, environment, and locations from which an individual is active on the Internet. Some of these include facial recognition, fingerprints, photos, and many more personal attributes and preferences.[6]

Digital identities can be issued through digital certificates. These certificates contain data associated with a user and are issued with legal guarantees by recognized certification authorities.

In order to assign a digital representation to an entity, the attributing party must trust that the claim of an attribute (such as name, location, role as an employee, or age) is correct and associated with the person or thing presenting the attribute. Conversely, the individual claiming an attribute may grant only selective access to its information (e.g., proving identity in a bar, or PayPal authentication for payment at a website). In this way, digital identity is better understood as a particular viewpoint within a mutually agreed relationship than as an objective property.[citation needed]

Authentication is the assurance of the identity of one entity to another. It is a key aspect of digital trust. In general, business-to-business authentication is designed for security, whereas user-to-business authentication is designed for simplicity.
Authentication techniques include the presentation of a unique object such as a bank credit card, the provision of confidential information such as a password or the answer to a pre-arranged question, the confirmation of ownership of an email address, and more robust but costly techniques using encryption. Physical authentication techniques include iris scanning, fingerprinting, and voice recognition; these techniques are called biometrics. The use of both static identifiers (e.g., username and password) and personal unique attributes (e.g., biometrics) is called multi-factor authentication and is more secure than the use of one component alone.[citation needed]

While technological progress in authentication continues to evolve, these systems do not prevent aliases from being used. The introduction of strong authentication[citation needed] for online payment transactions within the European Union now links a verified person to an account, where such a person has been identified in accordance with statutory requirements prior to the account being opened. Verifying a person opening an account online typically requires a form of device binding to the credentials being used. This verifies that the device that stands in for a person on the Internet is actually the individual's device and not the device of someone simply claiming to be the individual.

The concept of reliance authentication makes use of pre-existing accounts to piggyback further services upon those accounts, provided that the original source is reliable. The concept of reliability comes from various anti-money laundering and counter-terrorism funding legislation in the US,[7] EU28,[8] Australia,[9] Singapore and New Zealand,[10] where second parties may place reliance on the customer due diligence process of the first party, where the first party is, say, a financial institution. An example of reliance authentication is PayPal's verification method.
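One widely used second factor in multi-factor authentication is a time-based one-time password (TOTP), as generated by authenticator apps. A simplified sketch of the standard construction (RFC 4226 HOTP under RFC 6238 TOTP), using only standard-library primitives and not representing any particular vendor's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks an offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, step: int = 30, at=None) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second counter."""
    now = time.time() if at is None else at
    return hotp(secret, int(now) // step)

# The well-known RFC 4226 test secret; real deployments use
# a random per-user secret shared at enrollment.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Because both the server and the user's device derive the code from the same shared secret and the current time, a stolen static password alone is not enough to authenticate.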
Authorization is the determination by an entity that controls resources that the authenticated party may access those resources. Authorization depends on authentication, because authorization requires that the critical attribute (i.e., the attribute that determines the authorizer's decision) be verified.[citation needed] For example, authorization on a credit card gives access to the resources owned by Amazon, e.g., Amazon sends one a product. Authorization of an employee will provide that employee with access to network resources, such as printers, files, or software. For example, a database management system might be designed so as to provide certain specified individuals with the ability to retrieve information from a database but not the ability to change data stored in the database, while giving other individuals the ability to change data.[citation needed]

Consider a person who rents a car and checks into a hotel with a credit card. The car rental and hotel company may request authorization of enough credit to cover an accident or profligate spending on room service. Thus a card may later be refused when trying to purchase an activity such as a balloon trip. Though there is adequate credit to pay for the rental, the hotel, and the balloon trip, there is an insufficient amount to also cover the authorizations. The actual charges are authorized after leaving the hotel and returning the car, which may be too late for the balloon trip.

Valid online authorization requires analysis of information related to the digital event, including device and environmental variables. These are generally derived from the data exchanged between a device and a business server over the Internet.[11]

Digital identity requires digital identifiers—strings or tokens that are unique within a given scope (globally or locally within a specific domain, community, directory, application, etc.).
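The read-versus-write database example above amounts to a small access-control table consulted after authentication succeeds. A minimal sketch, with role names and actions invented for illustration:

```python
# Hypothetical role-to-permission mapping, mirroring the database example:
# some users may retrieve data but not change it; others may do both.
PERMISSIONS = {
    "reader": {"retrieve"},
    "editor": {"retrieve", "change"},
}

def is_authorized(role: str, action: str) -> bool:
    """Authorization check: does this (already authenticated) role
    include the requested action? Unknown roles get no access."""
    return action in PERMISSIONS.get(role, set())

print(is_authorized("reader", "change"))  # False
```

The separation matters: authentication establishes who the party is, while this lookup decides what that party may do, which is why the check takes a verified role rather than raw credentials.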
Identifiers may be classified as omnidirectional or unidirectional.[12] Omnidirectional identifiers are public and easily discoverable, whereas unidirectional identifiers are intended to be private and used only in the context of a specific identity relationship.

Identifiers may also be classified as resolvable or non-resolvable. Resolvable identifiers, such as a domain name or email address, may be easily dereferenced into the entity they represent, or into current state data providing relevant attributes of that entity. Non-resolvable identifiers, such as a person's real name or the name of a subject or topic, can be compared for equivalence but are not otherwise machine-understandable.

There are many different schemes and formats for digital identifiers. Uniform Resource Identifier (URI) and the internationalized version, Internationalized Resource Identifier (IRI), are the standard for identifiers for websites on the World Wide Web. OpenID and Light-weight Identity are two web authentication protocols that use standard HTTP URIs (often called URLs). A Uniform Resource Name is a persistent, location-independent identifier assigned within a defined namespace.

Digital object architecture[13] is a means of managing digital information in a network environment. In digital object architecture, a digital object has a machine- and platform-independent structure that allows it to be identified, accessed and protected, as appropriate. A digital object may incorporate not only informational elements, i.e., a digitized version of a paper, movie or sound recording, but also the unique identifier of the digital object and other metadata about it. The metadata may include restrictions on access to digital objects, notices of ownership, and identifiers for licensing agreements, if appropriate.

The Handle System is a general-purpose distributed information system that provides efficient, extensible, and secure identifier and resolution services for use on networks such as the internet.
It includes an open set of protocols, a namespace, and a reference implementation of the protocols. The protocols enable a distributed computer system to store identifiers, known as handles, of arbitrary resources and to resolve those handles into the information necessary to locate, access, contact, authenticate, or otherwise make use of the resources. This information can be changed as needed to reflect the current state of the identified resource without changing its identifier, thus allowing the name of the item to persist over changes of location and other related state information. The original version of the Handle System technology was developed with support from the Defense Advanced Research Projects Agency.

A new OASIS standard for abstract, structured identifiers, XRI (Extensible Resource Identifiers), adds new features to URIs and IRIs that are especially useful for digital identity systems. OpenID also supports XRIs, which are the basis for i-names.

Risk-based authentication is an application of digital identity whereby multiple entity relationships from the device (e.g., operating system), environment (e.g., DNS server) and data entered by a user for any given transaction are evaluated for correlation with events from known behaviors for the same identity.[14] Analyses are performed based on quantifiable metrics, such as transaction velocity, locale settings (or attempts to obfuscate them), and user-input data (such as ship-to address). Correlation and deviation are mapped to tolerances and scored, then aggregated across multiple entities to compute a transaction risk score, which assesses the risk posed to an organization.

Digital identity attributes exist within the context of ontologies.
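The risk-based scoring described above can be sketched as a weighted comparison of a new event against an identity's known behavior. The attribute names, weights, and example values here are invented for illustration; real systems score many more signals and use tolerances rather than strict equality:

```python
# Hypothetical per-attribute weights: how suspicious a deviation is.
WEIGHTS = {"operating_system": 1.0, "dns_server": 2.0, "ship_to": 3.0}

def risk_score(event: dict, known_behavior: dict) -> float:
    """Sum the weights of attributes where the new event diverges
    from the identity's historical behavior: convergence keeps the
    score low, divergence raises it."""
    return sum(
        weight
        for attr, weight in WEIGHTS.items()
        if event.get(attr) != known_behavior.get(attr)
    )

history = {"operating_system": "linux", "dns_server": "10.0.0.1", "ship_to": "home"}
print(risk_score({"operating_system": "windows", "dns_server": "10.0.0.1",
                  "ship_to": "abroad"}, history))  # 4.0
```

An organization would then compare the aggregated score against a tolerance threshold to decide whether to allow the transaction, demand step-up authentication, or block it.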
The development of digital identity network solutions that can interoperate taxonomically diverse representations of digital identity is a contemporary challenge. Free-tagging has recently emerged as an effective way of circumventing this challenge (to date, primarily with application to the identity of digital entities such as bookmarks and photos) by effectively flattening identity attributes into a single, unstructured layer. However, the organic integration of the benefits of both structured and fluid approaches to identity attribute management remains elusive.

Identity relationships within a digital network may include multiple identity entities. However, in a decentralized network like the Internet, such extended identity relationships effectively require both the existence of independent trust relationships between each pair of entities in the relationship and a means of reliably integrating the paired relationships into larger relational units. And if identity relationships are to reach beyond the context of a single, federated ontology of identity (see Taxonomies of identity above), identity attributes must somehow be matched across diverse ontologies. The development of network approaches that can embody such integrated "compound" trust relationships is currently a topic of much debate in the blogosphere.

Integrated compound trust relationships allow, for example, entity A to accept an assertion or claim about entity B by entity C. C thus vouches for an aspect of B's identity to A.

A key feature of "compound" trust relationships is the possibility of selective disclosure from one entity to another of locally relevant information. As an illustration of the potential application of selective disclosure, suppose a certain Diana wished to book a hire car without disclosing irrelevant personal information (using a notional digital identity network that supports compound trust relationships).
As an adult UK resident with a current driving licence, Diana might have the UK's Driver and Vehicle Licensing Agency vouch for her driving qualification, age, and nationality to a car-rental company without having her name or contact details disclosed. Similarly, Diana's bank might assert just her banking details to the rental company. Selective disclosure allows for appropriate privacy of information within a network of identity relationships.

A classic form of networked digital identity based on international standards is the "White Pages". An electronic white pages links various devices, like computers and telephones, to an individual or organization. Various attributes, such as X.509v3 digital certificates for secure cryptographic communications, are captured under a schema and published in an LDAP or X.500 directory. Changes to the LDAP standard are managed by working groups in the IETF, and changes in X.500 are managed by the ISO. The ITU did significant analysis of gaps in digital identity interoperability via the FGidm (focus group on identity management).

Implementations of X.500 [2005] and LDAPv3 have occurred worldwide but are primarily located in major data centers with administrative policy boundaries regarding the sharing of personal information. Since combined X.500 [2005] and LDAPv3 directories can hold millions of unique objects for rapid access, they are expected to play a continued role for large-scale secure identity access services. LDAPv3 can act as a lightweight standalone server or, in the original design, as a TCP/IP-based Lightweight Directory Access Protocol compatible with making queries to an X.500 mesh of servers which can run the native OSI protocol.
This will be done by scaling individual servers into larger groupings that represent defined "administrative domains" (such as the country-level digital object), which can add value not present in the original "White Pages" that was used to look up phone numbers and email addresses, largely now available through non-authoritative search engines.

The ability to leverage and extend a networked digital identity is made more practicable by the expression of the level of trust associated with the given identity through a common Identity Assurance Framework.

The term 'digital identity' is utilized within the academic field of digital rhetoric to refer to identity as a 'rhetorical construction'.[15] Digital rhetoric explores how identities are formed, negotiated, influenced, or challenged within the ever-evolving digital environments. Understanding different rhetorical situations in digital spaces is complex but crucial for effective communication, as scholars argue that the ability to evaluate such situations is necessary for constructing appropriate identities in varying rhetorical contexts.[16][17][18] Furthermore, it is important to recognize that physical and digital identities are intertwined, and the visual elements in online spaces shape the representation of one's physical identity.[19] As Bay suggests, "what we do online now requires more continuity—or at least fluidity—between our online and offline selves".[19]

Regarding the positioning of digital identity in rhetoric, scholars pay close attention to how issues of race, gender, agency, and power manifest in digital spaces.
While some radical theorists initially posited that cyberspace would liberate individuals from their bodies and blur the lines between humans and technology,[20] others theorized that this 'disembodied' communication could potentially free society from discrimination based on race, sex, gender, sexuality, or class.[21] Moreover, the construction of digital identity is intricately tied to the network. This is evident in the practices of reputation management companies, which aim to create a positive online identity to increase visibility in various search engines.[15]

Clare Sullivan presents the grounds for digital identity as an emerging legal concept. The UK's Identity Cards Act 2006 confirms Sullivan's argument and unfolds a new legal concept involving database identity and transaction identity. Database identity is the collection of data that is registered about an individual within the databases of the scheme, and transaction identity is a set of information that defines the individual's identity for transactional purposes. Although there is reliance on the verification of identity, none of the processes used is entirely trustworthy. The consequences of digital identity abuse and fraud are potentially serious, since the person may be held legally responsible.[22]

Corporations are recognizing the power of the internet to tailor their online presence to each individual customer. Purchase suggestions, personalized adverts, and other tailored marketing strategies are a great success for businesses. Such tailoring, however, depends on the ability to connect attributes and preferences to the identity of the visitor.

For technology to enable direct value transfer of rights and non-bearer assets, human agency must be conveyed, including the authorization, authentication, and identification of the buyer and/or seller, as well as “proof of life,” without a third party. A solution to confirm legal identities resulted from the financial crisis of 2008.
The Global LEI System would be able to provide every registered business in the world with an LEI. The Legal Entity Identifier (LEI) provides businesses with permanent identification worldwide for legal identities. The LEI[23] is:

Digital death is the phenomenon of people continuing to have Internet accounts after their deaths. This results in several ethical issues concerning how the information stored about the deceased person may be used, stored, or given to family members. It may also result in confusion due to automated social media features such as birthday reminders, as well as uncertainty about the deceased person's willingness to pass their personal information to a third party. Many social media platforms do not have clear policies about digital death. Many companies secure digital identities after death or legally pass them on to the deceased person's family. Some companies will also provide options for digital identity erasure after death.

Facebook/Meta is a clear-cut example of a company that provides digital options after death. Descendants or friends of the deceased individual can let Facebook know about the death and have all of their previous digital activity removed. Digital activity includes, but is not limited to, messages, photos, posts, comments, reactions, stories, and archived history. Furthermore, the entire Facebook account will be deleted upon request.[24]

There are proponents of treating self-determination and freedom of expression of digital identity as a new human right. Some have speculated that digital identities could become a new form of legal entity.[25] As technology develops, so does the sophistication of certain digital identities; many believe that there should be further legal developments regulating online presence and data collection.
Several writers have pointed out the tension between services that use digital identity on the one hand and user privacy on the other.[1][2][3][4][5] Services that gather and store data linked to a digital identity, which in turn can be linked to a user's real identity, can learn a great deal about individuals. GDPR is one regulatory attempt to address this concern, introduced by the European Union (EU) in 2018 to protect the privacy and personal data of EU citizens. GDPR applies to all companies, regardless of location, that handle the data of users within the EU. Any company that collects, stores, and operates on data from EU citizens must disclose key details about the management of that data to the individuals concerned. EU citizens can also request that certain aspects of their collected data be deleted.[26] To help enforce GDPR, the EU applies penalties to companies that operate on data from EU citizens but fail to follow the regulation.[27] Many systems provide privacy-related mitigations when analyzing data linked to digital identities. One common mitigation is data anonymization, such as hashing user identifiers with a cryptographic hash function. Another popular technique is adding statistical noise to a data set to reduce identifiability, as in differential privacy. Although a digital identity allows consumers to transact from anywhere and to manage various ID cards more easily, it also poses a potential single point of compromise that malicious hackers can exploit to steal all of that personal information.[6] Hence, several different account authentication methods have been created to protect users. These methods typically require an initial setup by the user to enable the security features for subsequent logins.
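The two mitigations mentioned above can be sketched briefly. This is a minimal illustration, not a production anonymization pipeline: the function names and the salt are hypothetical, and the second function implements the standard Laplace mechanism for a counting query with sensitivity 1.

```python
import hashlib
import math
import random

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a user identifier with a salted SHA-256 hash, so records
    can be joined for analysis without storing the raw identifier."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()

def noisy_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Return the count plus Laplace noise with scale 1/epsilon
    (the differential-privacy mechanism for a count with sensitivity 1)."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Note that salted hashing is pseudonymization rather than full anonymization: if the salt leaks, identifiers can be re-derived by brute force, which is why regulators treat hashed identifiers as still potentially personal data.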
Although many facets of digital identity are universal, owing in part to the ubiquity of the Internet, some regional variations exist due to the specific laws, practices, and government services in place. For example, digital identity services can validate driving licences, passports, and other physical documents online to help improve the quality of a digital identity. In addition, strict policies against money laundering mean that some services, such as money transfers, require a stricter level of digital identity validation. Digital identity in the national sense can mean a combination of single sign-on and/or validation of assertions by trusted authorities (generally the government).[citation needed] Countries or regions with official or unofficial digital identity systems include: Countries or regions with proposed digital identity systems include:
https://en.wikipedia.org/wiki/Digital_Identity
Attribute-based access control (ABAC), also known as policy-based access control for IAM, defines an access control paradigm whereby a subject's authorization to perform a set of operations is determined by evaluating attributes associated with the subject, object, requested operations, and, in some cases, environment attributes.[1] ABAC is a highly adaptable method of implementing access control policies that can be customized using a wide range of attributes, making it suitable for use in distributed or rapidly changing environments. The only limitations on the policies that can be implemented with ABAC are the capabilities of the computational language and the availability of relevant attributes.[2] ABAC policy rules are generated as Boolean functions of the subject's attributes, the object's attributes, and the environment attributes.[3] Unlike role-based access control (RBAC), which defines roles that carry a specific set of privileges and to which subjects are assigned, ABAC can express complex rule sets that evaluate many different attributes. By defining consistent subject and object attributes in security policies, ABAC eliminates the explicit authorizations for individual subjects needed in non-ABAC access methods, reducing the complexity of managing access lists and groups. Attribute values can be set-valued or atomic-valued. Set-valued attributes contain more than one atomic value; examples are role and project. Atomic-valued attributes contain only one atomic value; examples are clearance and sensitivity.
Attributes can be compared to static values or to one another, thus enabling relation-based access control.[citation needed] Although the concept itself has existed for many years, ABAC is considered a "next generation" authorization model because it provides dynamic, context-aware, and risk-intelligent access control to resources. Access control policies can include specific attributes from many different information systems to resolve an authorization decision and achieve efficient regulatory compliance, allowing enterprises flexibility in their implementations based on their existing infrastructures. Attribute-based access control is sometimes referred to as policy-based access control (PBAC) or claims-based access control (CBAC), the latter being a Microsoft-specific term. The key standards that implement ABAC are XACML and ALFA (XACML).[4] ABAC comes with a recommended architecture. Attributes can be about anything and anyone, and tend to fall into four categories. Policies are statements that bring together attributes to express what is and is not allowed. Policies in ABAC can grant or deny access; they can be local or global, and can be written so that they override other policies. With ABAC you can have an unlimited number of policies catering to many different scenarios and technologies.[7] Historically, access control models have included mandatory access control (MAC), discretionary access control (DAC), and, more recently, role-based access control (RBAC). These access control models are user-centric and do not take into account additional parameters such as resource information, the relationship between the user (the requesting entity) and the resource, and dynamic information such as the time of day or the user's IP address.
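The statement that ABAC rules are Boolean functions of subject, object, and environment attributes can be made concrete with a small sketch. The attribute names below (department, hour) are hypothetical examples, not from any particular standard:

```python
def abac_permit(subject: dict, action: str, resource: dict, env: dict) -> bool:
    """One ABAC policy rule as a Boolean function of attributes:
    permit if the subject's department matches the resource's owning
    department, the action is 'view', and the request occurs during
    business hours (an environment attribute)."""
    return (
        subject.get("department") == resource.get("department")
        and action == "view"
        and 9 <= env.get("hour", 0) < 17
    )
```

In a real deployment such rules would be expressed in a policy language like XACML or ALFA and evaluated by a policy decision point, but the underlying evaluation is the same Boolean combination of attribute comparisons.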
ABAC tries to address this by defining access control based on attributes that describe the requesting entity (the user), the targeted object or resource, the desired action (view, edit, delete), and environmental or contextual information. This is why access control is said to be attribute-based. There are three main implementations of ABAC: XACML, the eXtensible Access Control Markup Language, defines an architecture (shared with ALFA and NGAC), a policy language, and a request/response scheme. It does not handle attribute management (user attribute assignment, object attribute assignment, environment attribute assignment), which is left to traditional IAM tools, databases, and directories. Companies, including every branch of the United States military, have started using ABAC. At a basic level, ABAC protects data with 'IF/THEN/AND' rules rather than assigning data to users. The US Department of Commerce has made this a mandatory practice, and adoption is spreading through several governmental and military agencies.[8] The concept of ABAC can be applied at any level of the technology stack and across an enterprise infrastructure. For example, ABAC can be used at the firewall, server, application, database, and data layers. The use of attributes brings additional context with which to evaluate the legitimacy of any request for access and to inform the decision to grant or deny it. An important consideration when evaluating ABAC solutions is understanding their potential performance overhead and impact on the user experience: the more granular the controls, the higher the expected overhead. ABAC can be used to apply attribute-based, fine-grained authorization to API methods or functions. For instance, a banking API may expose an approveTransaction(transId) method, and ABAC can be used to secure the call.
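A policy guarding the approveTransaction call described above might combine a role attribute, a monetary limit, and a separation-of-duties check. This is a hedged sketch: the attribute names and thresholds are illustrative, not part of any banking standard.

```python
def can_approve(subject: dict, transaction: dict) -> bool:
    """Permit approveTransaction only when the caller holds the
    'approver' role, the amount is within their approval limit, and
    they did not initiate the transaction (separation of duties)."""
    return (
        "approver" in subject.get("roles", ())
        and transaction["amount"] <= subject.get("approval_limit", 0)
        and transaction["initiator"] != subject["id"]
    )
```

The API gateway or policy enforcement point would call such a function before forwarding the request; a deny result typically maps to an HTTP 403 response.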
With ABAC, a policy author can express such a rule directly, and the authorization flow follows from it. One of the key benefits of ABAC is that authorization policies and attributes can be defined in a technology-neutral way: policies defined for APIs or databases can be reused in the application space. The same process and flow as described in the API section applies here too. Security for databases has long been specific to the database vendors: Oracle VPD, IBM FGAC, and Microsoft RLS are all means of achieving fine-grained, ABAC-like security. Data security typically goes one step further than database security and applies control directly to the data element; this is often referred to as data-centric security. On traditional relational databases, ABAC policies can control access to data at the table, column, field, cell, and sub-cell levels, using logical controls with filtering conditions and masking based on attributes. Attributes can be data-, user-, session-, or tool-based, delivering the greatest flexibility in dynamically granting or denying access to a specific data element. On big data and distributed file systems such as Hadoop, ABAC applied at the data layer controls access to folders, sub-folders, files, sub-files, and other granular objects. Policies similar to those used previously can be applied when retrieving data from data lakes.[9][10] As of Windows Server 2012, Microsoft has implemented an ABAC approach to controlling access to files and folders. This is achieved through dynamic access control (DAC)[11] and the Security Descriptor Definition Language (SDDL). SDDL can be seen as an ABAC language, as it uses metadata of the user (claims) and of the file or folder to control access.
https://en.wikipedia.org/wiki/Attribute-based_access_control
A federated identity in information technology is the means of linking a person's electronic identity and attributes, stored across multiple distinct identity management systems.[1] Federated identity is related to single sign-on (SSO), in which a user's single authentication ticket, or token, is trusted across multiple IT systems or even organizations.[2][3] SSO is a subset of federated identity management, as it relates only to authentication and is understood at the level of technical interoperability; it would not be possible without some sort of federation.[4] In information technology (IT), federated identity management (FIdM) amounts to having a common set of policies, practices, and protocols in place to manage identity and trust in IT users and devices across organizations.[5] Single sign-on (SSO) systems allow a single user authentication process across multiple IT systems or even organizations. Centralized identity management solutions were created to help deal with user and data security where the user and the systems they accessed were within the same network, or at least the same "domain of control". Increasingly, however, users are accessing external systems that are fundamentally outside their domain of control, and external users are accessing internal systems. This increasingly common separation of the user from the systems requiring access is an inevitable by-product of the decentralization brought about by the integration of the Internet into every aspect of personal and business life.
Evolving identity management challenges, and especially the challenges associated with cross-company, cross-domain access, have given rise to a new approach to identity management, known now as "federated identity management".[6] FIdM, or the "federation" of identity, describes the technologies, standards and use-cases which serve to enable the portability of identity information across otherwise autonomous security domains. The ultimate goal of identity federation is to enable users of one domain to securely access data or systems of another domain seamlessly, and without the need for completely redundant user administration. Identity federation comes in many flavors, including "user-controlled" or "user-centric" scenarios, as well as enterprise-controlled orbusiness-to-businessscenarios. Federation is enabled through the use of open industry standards and/or openly published specifications, such that multiple parties can achieve interoperability for common use-cases. Typical use-cases involve things such as cross-domain, web-based single sign-on, cross-domain user account provisioning, cross-domain entitlement management and cross-domain user attribute exchange. Use of identity federation standards can reduce cost by eliminating the need to scale one-off or proprietary solutions. It can increase security and lower risk by enabling an organization to identify and authenticate a user once, and then use that identity information across multiple systems, including external partner websites. It can improve privacy compliance by allowing the user to control what information is shared, or by limiting the amount of information shared. And lastly, it can drastically improve the end-user experience by eliminating the need for new account registration through automatic "federated provisioning" or the need to redundantly login through cross-domain single sign-on. The notion of identity federation is extremely broad, and also evolving. 
It could involve user-to-user and user-to-application as well as application-to-application use-case scenarios at both the browser tier and the web services orservice-oriented architecture(SOA) tier. It can involve high-trust, high-security scenarios as well as low-trust, low-security scenarios. The levels of identity assurance that may be required for a given scenario are also being standardized through a common and openIdentity Assurance Framework. It can involve user-centric use-cases, as well as enterprise-centric use-cases. The term "identity federation" is by design a generic term, and is not bound to any one specific protocol, technology, implementation or company. Identity federations may be bi-lateral relationships or multilateral relationships. In the latter case, the multilateral federation frequently occurs in a vertical market, such as in law enforcement (such as the National Identity Exchange Federation - NIEF[7]), and research and education (such as InCommon).[8]If the identity federation is bilateral, the two parties can exchange the necessary metadata (assertion signing keys, etc.) to implement the relationship. In a multilateral federation, the metadata exchange among participants is a more complex issue. It can be handled in a hub-and-spoke exchange or by the distribution of a metadata aggregate by a federated operator. 
One thing that is consistent, however, is that "federation" describes methods of identity portability achieved in an open, often standards-based manner, meaning anyone adhering to the open specification or standard can achieve the full spectrum of use-cases and interoperability.[9] Identity federation can be accomplished in any number of ways, some of which involve formal Internet standards, such as the OASIS Security Assertion Markup Language (SAML) specification, and some of which may involve open-source technologies and/or other openly published specifications (e.g. Information Cards, OpenID, the Higgins trust framework, or Novell's Bandit project). Technologies used for federated identity include SAML (Security Assertion Markup Language), OAuth, OpenID, security tokens (Simple Web Tokens, JSON Web Tokens, and SAML assertions), Web Service Specifications, and Windows Identity Foundation.[10] In the United States, the National Institute of Standards and Technology (NIST), through the National Cybersecurity Center of Excellence, published a building-block white paper on this topic in December 2016.[11] The Federal Risk and Authorization Management Program (FedRAMP) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP enables agencies to rapidly move from old, insecure legacy IT to mission-enabling, secure, and cost-effective cloud-based IT.[12] Digital identity platforms that allow users to log onto third-party websites, applications, mobile devices, and gaming systems with their existing identity, i.e. that enable social login, include the services listed below, which encourage other websites to adopt their social login features. Related are federated identity login providers.
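The core exchange in a federation, an identity provider issuing a signed, expiring assertion that a service provider verifies, can be sketched in a few lines. This is a deliberately simplified illustration with a hypothetical shared secret; real federations use SAML or OpenID Connect, typically with asymmetric signing keys rather than a shared HMAC key.

```python
import base64
import hashlib
import hmac
import json
import time
from typing import Optional

# Hypothetical secret exchanged out-of-band between the IdP and the SP.
SHARED_KEY = b"idp-sp-shared-secret"

def issue_assertion(subject: str, ttl: int = 300) -> str:
    """IdP side: sign a short-lived claim that `subject` authenticated."""
    claims = json.dumps({"sub": subject, "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(claims).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_assertion(token: str) -> Optional[str]:
    """SP side: accept the subject only if signature and expiry hold."""
    body_b64, sig_b64 = token.split(".")
    claims = base64.urlsafe_b64decode(body_b64)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
        return None
    payload = json.loads(claims)
    if payload["exp"] < time.time():
        return None
    return payload["sub"]
```

The structure mirrors a JSON Web Token with an HS256 signature: the service provider never sees the user's credentials, only a verifiable statement from the identity provider.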
https://en.wikipedia.org/wiki/Federated_identity
Identity driven networking (IDN) is the process of applying network controls to network device access based on the identity of the individual or group of individuals responsible for or operating the device.[1] Individuals are identified, and the network is tuned to respond to their presence by context. The OSI model provides a method to deliver network traffic, not only to the system but to the application that requested or is listening for data. These applications can operate either as a system-based user-daemon process or as a user application such as a web browser. Internet security is built around the idea that the ability to request or respond to requests should be subject to some degree of authentication, validation, authorization, and policy enforcement. Identity driven networking endeavors to resolve user-based and system-based policy into a single management paradigm. Since the internet comprises a vast range of devices and applications, there are also many boundaries, and therefore many ideas on how to resolve connectivity to users within those boundaries. An endeavor to overlay the system with an identity framework must first decide what an identity is, determine it, and only then use existing controls to decide what is intended with this new information. A digital identity represents the connectedness between the real identity and some projection of it, and may incorporate references to devices as well as resources and policies. In some systems, policies provide the entitlements that an identity can claim at any particular point in time and space. For example, a person may be entitled to some privileges during work from their workplace that are denied from home out of hours. Before a user gets to the network, there is usually some form of machine authentication, which verifies and configures the system for some basic level of access. Short of mapping a user to a MAC address prior to or during this process (as in 802.1X), it is not simple to have users authenticate at this point.
It is more usual for a user to attempt to authenticate once the system processes (daemons) are started, and this may well require the network configuration to have already been performed. It follows that, in principle, the network identity of a device should be established before permitting network connectivity, for example by using digital certificates in place of hardware addresses, which are trivial to spoof as device identifiers. Furthermore, a consistent identity model has to account for typical network devices such as routers and switches, which cannot depend on user identity, since no distinctive user is associated with the device. Absent this capability in practice, however, strong identity is not asserted at the network level. The first task when seeking to apply identity driven network controls is some form of authentication, if not at the device level then further up the stack. Since the first piece of infrastructure placed on a network is often a network operating system (NOS), there will often be an identity authority that controls the resources the NOS contains (usually printers and file shares), along with procedures to authenticate users onto it. Incorporating some form of single sign-on means that the flow-on effect to other controls can be seamless. Many network capabilities can be made to rely on authentication technologies for the provisioning of an access control policy. For instance, packet filtering (firewalls), content-control software, quota management systems, and quality of service (QoS) systems are all controls that can be made dependent upon authentication.
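The idea of making packet filtering dependent on authenticated identity can be sketched as a lookup from the user's group to the ports their traffic may reach. The group names and port sets below are hypothetical, standing in for the policy an IDN controller would push to a firewall after authentication:

```python
# Per-identity network policy: once a user authenticates, the network
# layer consults their group to decide which destination ports to allow.
GROUP_PORTS = {
    "engineering": {22, 443},  # ssh and https
    "finance": {443},          # https only
}

def permit_packet(user_group: str, dst_port: int) -> bool:
    """Return True if the authenticated user's group allows this port;
    unknown or unauthenticated identities get no access."""
    return dst_port in GROUP_PORTS.get(user_group, set())
```

Real implementations push equivalent rules into firewall or switch ACLs (for example via RADIUS attributes after an 802.1X exchange) rather than evaluating them per packet in application code.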
https://en.wikipedia.org/wiki/Identity_driven_networking
Identity and access management (IAM or IdAM), or identity management (IdM), is a framework of policies and technologies to ensure that the right users (those that are part of the ecosystem connected to or within an enterprise) have the appropriate access to technology resources. IAM systems fall under the overarching umbrellas of IT security and data management. Identity and access management systems not only identify, authenticate, and control access for the individuals who will be utilizing IT resources, but also for the hardware and applications employees need to access.[1] The terms "identity management" (IdM) and "identity and access management" are used interchangeably in the field.[2] Identity-management systems, products, applications, and platforms manage identifying and ancillary data about entities that include individuals, computer-related hardware, and software applications. IdM covers issues such as how users gain an identity, the roles and sometimes the permissions that identity grants, the protection of that identity, and the technologies supporting that protection (e.g., network protocols, digital certificates, passwords, etc.). Identity management (ID management), or identity and access management (IAM), comprises the organizational and technical processes for first registering and authorizing access rights in the configuration phase, and then, in the operation phase, for identifying, authenticating, and controlling individuals or groups of people who have access to applications, systems, or networks based on previously authorized access rights. Identity management (IdM) is the task of controlling information about users on computers. Such information includes information that authenticates the identity of a user, and information that describes the data and actions they are authorized to access and/or perform. It also includes the management of descriptive information about the user and how and by whom that information can be accessed and modified.
In addition to users, managed entities typically include hardware and network resources and even applications.[3]The diagram below shows the relationship between the configuration and operation phases of IAM, as well as the distinction between identity management and access management. Access controlis the enforcement of access rights defined as part ofaccess authorization. Digital identityis an entity's online presence, encompassing personal identifying information (PII) and ancillary information. SeeOECD[4]andNIST[5]guidelines on protecting PII.[6]It can be interpreted as the codification of identity names and attributes of a physical instance in a way that facilitates processing. In the real-world context of engineering online systems, identity management can involve five basic functions: A general model ofidentitycan be constructed from a small set of axioms, for example that all identities in a givennamespaceare unique, or that such identities bear a specific relationship to corresponding entities in the real world. Such an axiomatic model expresses "pure identity" in the sense that the model is not constrained by a specific application context. In general, an entity (real or virtual) can have multiple identities and each identity can encompass multiple attributes, some of which are unique within a given name space. The diagram below illustrates the conceptual relationship between identities and entities, as well as between identities and their attributes. In most theoretical and all practical models ofdigital identity, a given identity object consists of a finite set ofproperties(attribute values). These properties record information about the object, either for purposes external to the model or to operate the model, for example in classification and retrieval. A "pure identity" model is strictly not concerned with the externalsemanticsof these properties. 
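The "pure identity" model described above, identities unique within a namespace, each carrying a finite set of attribute values with no semantics attached, can be sketched as a small registry. The class and method names here are illustrative, not from any standard:

```python
class IdentityRegistry:
    """A minimal 'pure identity' model: each identity is unique within
    its namespace and carries a finite set of attribute values; the
    model attaches no meaning to what those attributes represent."""

    def __init__(self):
        self._store = {}  # (namespace, name) -> attribute dict

    def register(self, namespace: str, name: str, **attributes) -> None:
        key = (namespace, name)
        if key in self._store:  # enforce the uniqueness axiom
            raise ValueError(f"{name!r} already exists in {namespace!r}")
        self._store[key] = dict(attributes)

    def attributes(self, namespace: str, name: str) -> dict:
        """Retrieve a copy of the identity's stored properties."""
        return dict(self._store[(namespace, name)])
```

The same real-world entity can hold distinct identities in different namespaces, each with its own attribute set, which matches the observation that an entity can have multiple identities and each identity multiple attributes.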
The most common departure from "pure identity" in practice occurs with properties intended to assure some aspect of identity, for example adigital signatureorsoftware tokenwhich the model may use internally to verify some aspect of the identity in satisfaction of an external purpose. To the extent that the model expresses such semantics internally, it is not a pure model. Contrast this situation with properties that might be externally used for purposes ofinformation securitysuch as managing access or entitlement, but which are simply stored, maintained and retrieved, without special treatment by the model. The absence of external semantics within the model qualifies it as a "pure identity" model. Identity management can thus be defined as a set of operations on a given identity model, or more generally, as a set of capabilities with reference to it. In practice, identity management often expands to express how model content is to beprovisionedandreconciledamong multiple identity models. The process of reconciling accounts may also be referred to as de-provisioning.[7] User access enables users to assume a specific digital identity across applications, which enables access controls to be assigned and evaluated against this identity. The use of a single identity for a given user across multiple systems eases tasks for administrators and users. It simplifies access monitoring and verification and allows the organizations to minimize excessive privileges granted to one user. Ensuring user access security is crucial in this process, as it involves protecting the integrity and confidentiality of user credentials and preventing unauthorized access. Implementing robust authentication mechanisms, such as multi-factor authentication (MFA), regular security audits, and strict access controls, helps safeguard user identities and sensitive data. User access can be tracked from initiation to termination of user access. 
When organizations deploy an identity management process or system, their motivation is normally not primarily to manage a set of identities, but rather to grant appropriate access rights to those entities via their identities. In other words, access management is normally the motivation for identity management, and the two sets of processes are consequently closely related.[8] Organizations continue to add services for both internal users and customers. Many such services require identity management in order to be provided properly. Increasingly, identity management has been partitioned from application functions so that a single identity can serve many or even all of an organization's activities.[8] For internal use, identity management is evolving to control access to all digital assets, including devices, network equipment, servers, portals, content, applications, and/or products. Services often require access to extensive information about a user, including address books, preferences, entitlements, and contact information. Since much of this information is subject to privacy and/or confidentiality requirements, controlling access to it is vital.[9] Identity federation comprises one or more systems that share user access and allow users to log in based on authentication against one of the systems participating in the federation. This trust between several systems is often known as a "circle of trust". In this setup, one system acts as the identity provider (IdP) and the other systems act as service providers (SPs). When a user needs to access a service controlled by an SP, they first authenticate against the IdP. Upon successful authentication, the IdP sends a secure "assertion" to the SP. "SAML assertions, specified using a markup language intended for describing security assertions, can be used by a verifier to make a statement to a relying party about the identity of a claimant.
SAML assertions may optionally be digitally signed."[10] In addition to creation, deletion, modification of user identity data either assisted or self-service, identity management controls ancillary entity data for use by applications, such as contact information or location. Putting personal information onto computer networks necessarily raisesprivacyconcerns. Absent proper protections, the data may be used to implement asurveillance society.[12] Social webandonline social networkingservices make heavy use of identity management. Helping users decide how to manage access to their personal information has become an issue of broad concern.[13][14] Research related to the management of identity covers disciplines such as technology, social sciences, humanities and the law.[15] Decentralized identity management is identity management based ondecentralized identifiers(DIDs).[16] Within theSeventh Research Framework Programmeof the European Union from 2007 to 2013, several new projects related to Identity Management started. 
The PICOS project investigates and develops a state-of-the-art platform for providing trust, privacy, and identity management in mobile communities.[17] SWIFT focuses on extending identity functions and federation to the network while addressing usability and privacy concerns, and leverages identity technology as a key to integrating service and transport infrastructures for the benefit of users and providers.[18] Ongoing projects include Future of Identity in the Information Society (FIDIS),[19] GUIDE,[20] and PRIME.[21] Academic journals that publish articles related to identity management include specialized titles, while less specialized journals also publish on the topic and, for instance, run special issues on identity. ISO (and more specifically ISO/IEC JTC 1, SC27 IT Security techniques, WG5 Identity Access Management and Privacy techniques) is conducting standardization work for identity management (ISO 2009), such as the elaboration of a framework for identity management, including the definition of identity-related terms. The published standards and current work items include the following: In each organization there is normally a role or department responsible for managing the schema of digital identities of its staff and its own objects, which are represented by object identities or object identifiers (OID).[23] The organizational policies, processes, and procedures related to the oversight of identity management are sometimes referred to as Identity Governance and Administration (IGA). How effectively and appropriately such tools are used falls within the scope of broader governance, risk management, and compliance regimes. Since 2016, identity and access management professionals have had their own professional organization, IDPro.
In 2018 the committee initiated the publication of An Annotated Bibliography, listing a number of important publications, books, presentations, and videos.[24] An identity-management system refers to an information system, or to a set of technologies, that can be used for enterprise or cross-network identity management. The following terms are used in relation to "identity-management system":[25] Identity management, otherwise known as identity and access management (IAM), is an identity security framework that works to authenticate and authorize user access to resources such as applications, data, systems, and cloud platforms. It seeks to ensure that only the right people are provisioned with the right tools, and for the right reasons. As our digital ecosystem continues to advance, so does the world of identity management. "Identity management" and "access and identity management" (or AIM) are terms used interchangeably under the title of identity management, while identity management itself falls under the umbrella of IT security[26] and information privacy[27][28] and privacy risk,[29] as well as usability and e-inclusion studies.[30][31]
https://en.wikipedia.org/wiki/Identity_management_system
A data infrastructure is a digital infrastructure promoting data sharing and consumption. Similarly to other infrastructures, it is a structure needed for the operation of a society as well as the services and facilities necessary for an economy to function, the data economy in this case. There is an intense discussion at the international level on e-infrastructures and data infrastructure serving scientific work. The European Strategy Forum on Research Infrastructures (ESFRI) presented the first European roadmap for large-scale research infrastructures.[1] These are modeled as layered hardware and software systems which support sharing of a wide spectrum of resources, spanning from networks, storage, computing resources, and system-level middleware software, to structured information within collections, archives, and databases. The e-Infrastructure Reflection Group (e-IRG) has proposed a similar vision. In particular, it envisions e-infrastructures where the principles of global collaboration and shared resources are intended to encompass the sharing needs of all research activities.[2] In the framework of the Joint Information Systems Committee (JISC) e-infrastructure programme, e-infrastructures are defined in terms of integration of networks, grids, data centers and collaborative environments, and are intended to include supporting operation centers, service registries, credential delegation services, certificate authorities, training and help desk services.[3] The Cyberinfrastructure programme launched by the US National Science Foundation (NSF) plans to develop new research environments in which advanced computational, collaborative, data acquisition and management services are made available to researchers connected through high-performance networks.[4] More recently, the vision for "global research data infrastructures" has been drawn by identifying a number of recommendations for developers of future research infrastructures.[5] This vision document highlighted the open issues
affecting data infrastructure development – both technical and organizational – and identified future research directions. Besides these initiatives targeting "generic" infrastructures, there are others oriented to specific domains; e.g., the European Commission promotes the INSPIRE initiative for an e-infrastructure oriented to the sharing of content and service resources of European countries in the ambit of geospatial datasets.[6]
https://en.wikipedia.org/wiki/Data_infrastructure
Information science[1][2][3] is an academic field which is primarily concerned with analysis, collection, classification, manipulation, storage, retrieval, movement, dissemination, and protection of information.[4] Practitioners within and outside the field study the application and the usage of knowledge in organizations, in addition to the interaction between people, organizations, and any existing information systems, with the aim of creating, replacing, improving, or understanding information systems. Historically, information science has evolved as a transdisciplinary field, both drawing from and contributing to diverse domains.[5] Information science methodologies are applied across numerous domains, reflecting the discipline's versatility and relevance. Key application areas include: The interdisciplinary nature of information science continues to expand as new technological developments and social practices emerge, creating innovative research frontiers that bridge traditional disciplinary boundaries. Information science focuses on understanding problems from the perspective of the stakeholders involved and then applying information and other technologies as needed. In other words, it tackles systemic problems first rather than individual pieces of technology within that system. In this respect, one can see information science as a response to technological determinism, the belief that technology "develops by its own laws, that it realizes its own potential, limited only by the material resources available and the creativity of its developers. It must therefore be regarded as an autonomous system controlling and ultimately permeating all other subsystems of society."[7] Many universities have entire colleges, departments or schools devoted to the study of information science, while numerous information-science scholars work in disciplines such as communication, healthcare, computer science, law, and sociology.
Several institutions have formed an I-School Caucus (see List of I-Schools), but numerous others besides these also have comprehensive information specializations. Within information science, current issues as of 2013 include: The first known usage of the term "information science" was in 1955.[8] An early definition of information science (going back to 1968, the year when the American Documentation Institute renamed itself as the American Society for Information Science and Technology) states: Some authors use informatics as a synonym for information science. This is especially true when related to the concept developed by A. I. Mikhailov and other Soviet authors in the mid-1960s. The Mikhailov school saw informatics as a discipline related to the study of scientific information.[10] Informatics is difficult to define precisely because of the rapidly evolving and interdisciplinary nature of the field. Definitions reliant on the nature of the tools used for deriving meaningful information from data are emerging in informatics academic programs.[11] Regional differences and international terminology complicate the problem. Some people note that much of what is called "informatics" today was once called "information science" – at least in fields such as medical informatics. For example, when library scientists also began to use the phrase "information science" to refer to their work, the term "informatics" emerged: Another term discussed as a synonym for "information studies" is "information systems". Brian Campbell Vickery's Information Systems (1973) placed information systems within IS.[12] Ellis, Allen & Wilson (1999), on the other hand, provided a bibliometric investigation describing the relation between two different fields: "information science" and "information systems".[13] Philosophy of information studies conceptual issues arising at the intersection of psychology, computer science, information technology, and philosophy.
It includes the investigation of the conceptual nature and basic principles of information, including its dynamics, utilisation and sciences, as well as the elaboration and application of information-theoretic and computational methodologies to its philosophical problems.[14] Robert Hammarberg pointed out that there is no coherent distinction between information and data: "an Information Processing System (IPS) cannot process data except in terms of whatever representational language is inherent to it, [so] data could not even be apprehended by an IPS without becoming representational in nature, and thus losing their status of being raw, brute, facts."[15] In computer science and information science, an ontology formally represents knowledge as a set of concepts within a domain, and the relationships between those concepts. It can be used to reason about the entities within that domain and may be used to describe the domain. More specifically, an ontology is a model for describing the world that consists of a set of types, properties, and relationship types. Exactly what is provided around these varies, but they are the essentials of an ontology. There is also generally an expectation that there be a close resemblance between the real world and the features of the model in an ontology.[16] In theory, an ontology is a "formal, explicit specification of a shared conceptualisation".[17] An ontology renders shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations.[18] Ontologies are the structural frameworks for organizing information and are used in artificial intelligence, the Semantic Web, systems engineering, software engineering, biomedical informatics, library science, enterprise bookmarking, and information architecture as a form of knowledge representation about the world or some part of it. The creation of domain ontologies is also essential to the definition and use of an enterprise architecture framework.
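The "types, properties, and relationship types" model of an ontology can be illustrated with a toy example: knowledge held as subject–relation–object triples, over which a trivial reasoner infers class membership by following "is_a" links. The domain terms below are invented purely for illustration:

```python
# A toy ontology as (subject, relation, object) triples.
TRIPLES = {
    ("Journal", "is_a", "Periodical"),
    ("Periodical", "is_a", "Document"),
    ("Journal", "has_property", "ISSN"),
}

def superclasses(cls: str) -> set:
    """Follow is_a links transitively to reason about class membership."""
    found, frontier = set(), {cls}
    while frontier:
        nxt = {o for (s, r, o) in TRIPLES if r == "is_a" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

print(superclasses("Journal"))  # {'Periodical', 'Document'}
```

Even this minimal sketch shows the two ingredients the text names: a vocabulary of concepts and relations, and the ability to reason over them (here, inferring that a Journal is also a Document).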
Authors such as Ingwersen[3] argue that informatology has problems defining its own boundaries with other disciplines. According to Popper, "Information science operates busily on an ocean of commonsense practical applications, which increasingly involve the computer ... and on commonsense views of language, of communication, of knowledge and Information, computer science is in little better state".[19] Other authors, such as Furner, deny that information science is a true science.[20] An information scientist is an individual, usually with a relevant subject degree or high level of subject knowledge, who provides focused information to scientific and technical research staff in industry or to subject faculty and students in academia. The industry information specialist/scientist and the academic information subject specialist/librarian have, in general, similar subject background training, but the academic position holder will be required to hold a second advanced degree – e.g. a Master of Library Science (MLS) or Master of Arts (MA) – in information and library studies in addition to a subject master's. The title also applies to an individual carrying out research in information science. A systems analyst works on creating, designing, and improving information systems for a specific need. Often systems analysts work with one or more businesses to evaluate and implement organizational processes and techniques for accessing information in order to improve efficiency and productivity within the organization(s). An information professional is an individual who preserves, organizes, and disseminates information. Information professionals are skilled in the organization and retrieval of recorded knowledge. Traditionally, their work has been with print materials, but these skills are increasingly used with electronic, visual, audio, and digital materials.
Information professionals work in a variety of public, private, non-profit, and academic institutions. Information professionals can also be found within organisational and industrial contexts, performing roles that include system design and development and systems analysis. Information science, in studying the collection, classification, manipulation, storage, retrieval and dissemination of information, has origins in the common stock of human knowledge. Information analysis has been carried out by scholars at least as early as the time of the Assyrian Empire with the emergence of cultural depositories, what are today known as libraries and archives.[21] Institutionally, information science emerged in the 19th century along with many other social science disciplines. As a science, however, it finds its institutional roots in the history of science, beginning with the publication of the first issues of Philosophical Transactions, generally considered the first scientific journal, in 1665 by the Royal Society. The institutionalization of science occurred throughout the 18th century. In 1731, Benjamin Franklin established the Library Company of Philadelphia, the first library owned by a group of public citizens, which quickly expanded beyond the realm of books and became a center of scientific experimentation, and which hosted public exhibitions of scientific experiments.[22] Benjamin Franklin invested a town in Massachusetts with a collection of books that the town voted to make available to all free of charge, forming the first public library of the United States.[23] The Academie de Chirurgia (Paris) published Memoires pour les Chirurgiens, generally considered to be the first medical journal, in 1736. The American Philosophical Society, patterned on the Royal Society (London), was founded in Philadelphia in 1743. As numerous other scientific journals and societies were founded, Alois Senefelder developed the concept of lithography for use in mass printing work in Germany in 1796.
By the 19th century the first signs of information science emerged as separate and distinct from other sciences and social sciences, but in conjunction with communication and computation. In 1801, Joseph Marie Jacquard invented a punched card system to control operations of the cloth weaving loom in France. It was the first use of a "memory storage of patterns" system.[24] As chemistry journals emerged throughout the 1820s and 1830s,[25] Charles Babbage developed his "difference engine", the first step towards the modern computer, in 1822 and his "analytical engine" by 1834. By 1843 Richard Hoe had developed the rotary press, and in 1844 Samuel Morse sent the first public telegraph message. By 1848 William F. Poole had begun the Index to Periodical Literature, the first general periodical literature index in the US. In 1854 George Boole published An Investigation into Laws of Thought..., which lays the foundations for Boolean algebra, later used in information retrieval.[26] In 1860 a congress was held at the Karlsruhe Technische Hochschule to discuss the feasibility of establishing a systematic and rational nomenclature for chemistry. The congress did not reach any conclusive results, but several key participants returned home with Stanislao Cannizzaro's outline (1858), which ultimately convinced them of the validity of his scheme for calculating atomic weights.[27] By 1865, the Smithsonian Institution began a catalog of current scientific papers, which became the International Catalogue of Scientific Papers in 1902.[28] The following year the Royal Society began publication of its Catalogue of Papers in London. In 1868, Christopher Sholes, Carlos Glidden, and S. W. Soule produced the first practical typewriter.
By 1872 Lord Kelvin had devised an analogue computer to predict the tides, and by 1875 Frank Stephen Baldwin was granted the first US patent for a practical calculating machine that performs four arithmetic functions.[25] Alexander Graham Bell and Thomas Edison invented the telephone and phonograph in 1876 and 1877 respectively, and the American Library Association was founded in Philadelphia. In 1879 Index Medicus was first issued by the Library of the Surgeon General, U.S. Army, with John Shaw Billings as librarian; later the library issued the Index Catalogue, which achieved an international reputation as the most complete catalog of medical literature.[29] The discipline of documentation science, which marks the earliest theoretical foundations of modern information science, emerged in the late part of the 19th century in Europe, together with several more scientific indexes whose purpose was to organize scholarly literature. Many information science historians cite Paul Otlet and Henri La Fontaine as the fathers of information science, with the founding of the International Institute of Bibliography (IIB) in 1895.[30] A second generation of European documentalists emerged after the Second World War, most notably Suzanne Briet.[31] However, "information science" as a term was not popularly used in academia until the latter part of the 20th century.[32] Documentalists emphasized the utilitarian integration of technology and technique toward specific social goals.
According to Ronald Day, "As an organized system of techniques and technologies, documentation was understood as a player in the historical development of global organization in modernity – indeed, a major player inasmuch as that organization was dependent on the organization and transmission of information."[32] Otlet and La Fontaine (who won the Nobel Prize in 1913) not only envisioned later technical innovations but also projected a global vision for information and information technologies that speaks directly to postwar visions of a global "information society". Otlet and La Fontaine established numerous organizations dedicated to standardization, bibliography, international associations, and consequently, international cooperation. These organizations were fundamental for ensuring international production in commerce, information, communication and modern economic development, and they later found their global form in such institutions as the League of Nations and the United Nations. Otlet designed the Universal Decimal Classification, based on Melville Dewey's decimal classification system.[32] Although he lived decades before computers and networks emerged, what he discussed prefigured what ultimately became the World Wide Web. His vision of a great network of knowledge focused on documents and included the notions of hyperlinks, search engines, remote access, and social networks. Otlet not only imagined that all the world's knowledge should be interlinked and made available remotely to anyone, but he also proceeded to build a structured document collection. This collection involved standardized paper sheets and cards filed in custom-designed cabinets according to a hierarchical index (which culled information worldwide from diverse sources) and a commercial information retrieval service (which answered written requests by copying relevant information from index cards).
Users of this service were even warned if their query was likely to produce more than 50 results per search.[32] By 1937 documentation had formally been institutionalized, as evidenced by the founding of the American Documentation Institute (ADI), later called the American Society for Information Science and Technology. With the 1950s came increasing awareness of the potential of automatic devices for literature searching and information storage and retrieval. As these concepts grew in magnitude and potential, so did the variety of information science interests. By the 1960s and 70s, there was a move from batch processing to online modes, and from mainframes to mini- and microcomputers. Additionally, traditional boundaries among disciplines began to fade and many information science scholars joined with other programs. They further made themselves multidisciplinary by incorporating disciplines in the sciences, humanities and social sciences, as well as other professional programs such as law and medicine, in their curricula. Among the individuals who had distinct opportunities to facilitate interdisciplinary activity targeted at scientific communication was Foster E. Mohrhardt, director of the National Agricultural Library from 1954 to 1968.[33] By the 1980s, large databases, such as Grateful Med at the National Library of Medicine, and user-oriented services such as Dialog and CompuServe, were for the first time accessible by individuals from their personal computers. The 1980s also saw the emergence of numerous special interest groups to respond to the changes. By the end of the decade, special interest groups were available involving non-print media, social sciences, energy and the environment, and community information systems.
Today, information science largely examines the technical bases, social consequences, and theoretical understanding of online databases, the widespread use of databases in government, industry, and education, and the development of the Internet and World Wide Web.[34] Dissemination has historically been interpreted as unilateral communication of information. With the advent of the internet and the explosion in popularity of online communities, social media has changed the information landscape in many respects and "creates both new modes of communication and new types of information",[35] changing the interpretation of the definition of dissemination. The nature of social networks allows for faster diffusion of information than through organizational sources.[36] The internet has changed the way we view, use, create, and store information; now it is time to re-evaluate the way we share and spread it. Social media networks provide an open information environment for the mass of people who have limited time or access to traditional outlets of information diffusion;[36] this is an "increasingly mobile and social world [that] demands... new types of information skills".[35] Social media integration as an access point is a very useful and mutually beneficial tool for users and providers. All major news providers have visibility and an access point through networks such as Facebook and Twitter, maximizing the breadth of their audience. Through social media people are directed to, or provided with, information by people they know. The ability to "share, like, and comment on... content"[37] increases the reach farther and wider than traditional methods. People like to interact with information, and they enjoy including the people they know in their circle of knowledge. Sharing through social media has become so influential that publishers must "play nice" if they desire to succeed.
It is often mutually beneficial for publishers and Facebook to "share, promote and uncover new content"[37] to improve both user bases' experiences. The impact of popular opinion can spread in far-reaching ways. Social media allows interaction through simple-to-learn and easy-to-access tools; The Wall Street Journal offers an app through Facebook, and The Washington Post goes a step further and offers an independent social app that was downloaded by 19.5 million users in six months,[37] showing how interested people are in this new way of being provided with information. The connections and networks sustained through social media help information providers learn what is important to people. The connections people have throughout the world enable the exchange of information at an unprecedented rate. It is for this reason that these networks have been recognized for the potential they provide. "Most news media monitor Twitter for breaking news",[36] and news anchors frequently ask the audience to tweet pictures of events.[37] The users and viewers of the shared information have earned "opinion-making and agenda-setting power".[36] This channel has been recognized for the usefulness of providing targeted information based on public demand. The following areas are some of those that information science investigates and develops. Information access is an area of research at the intersection of informatics, information science, information security, language technology, and computer science. The objectives of information access research are to automate the processing of large and unwieldy amounts of information and to simplify users' access to it. Information access also concerns assigning privileges and restricting access by unauthorized users; the extent of access should be defined by the level of clearance granted for the information. Applicable technologies include information retrieval, text mining, text editing, machine translation, and text categorisation.
In discussion, information access is often defined as concerning the ensuring of free and closed, or public, access to information, and is brought up in discussions on copyright, patent law, and the public domain. Public libraries need resources to provide knowledge of information assurance. Information architecture (IA) is the art and science of organizing and labelling websites, intranets, online communities and software to support usability.[38] It is an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.[39] Typically it involves a model or concept of information which is used and applied to activities that require explicit details of complex information systems. These activities include library systems and database development. Information management (IM) is the collection and management of information from one or more sources and the distribution of that information to one or more audiences. This sometimes involves those who have a stake in, or a right to, that information. Management means the organization of and control over the structure, processing and delivery of information. Throughout the 1970s this was largely limited to files, file maintenance, and the life-cycle management of paper-based files, other media and records. With the proliferation of information technology starting in the 1970s, the job of information management took on a new light and also began to include the field of data maintenance. Information retrieval (IR) is the area of study concerned with searching for documents, for information within documents, and for metadata about documents, as well as with searching structured storage, relational databases, and the World Wide Web. Automated information retrieval systems are used to reduce what has been called "information overload".
Many universities and public libraries use IR systems to provide access to books, journals and other documents. Web search engines are the most visible IR applications. An information retrieval process begins when a user enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection. Instead, several objects may match the query, perhaps with different degrees of relevancy. An object is an entity that is represented by information in a database. User queries are matched against the database information. Depending on the application, the data objects may be, for example, text documents, images,[40] audio,[41] mind maps[42] or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented in the system by document surrogates or metadata. Most IR systems compute a numeric score for how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may then be iterated if the user wishes to refine the query.[43] Information seeking is the process or activity of attempting to obtain information in both human and technological contexts. Information seeking is related to, but different from, information retrieval (IR). Much library and information science (LIS) research has focused on the information-seeking practices of practitioners within various fields of professional work. Studies have been carried out into the information-seeking behaviors of librarians,[44] academics,[45] medical professionals,[46] engineers[47] and lawyers[48] (among others).
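The query-matching, scoring, and ranking loop described above can be sketched in a few lines. This toy example scores each document by the number of query terms it shares with the query (a crude stand-in for the term-weighting schemes real IR systems use) and returns matching documents best score first; the documents and identifiers are invented for the example:

```python
# A tiny in-memory "database" of document surrogates.
DOCS = {
    "d1": "information retrieval ranks documents by relevance",
    "d2": "social media changed information dissemination",
    "d3": "gardening tips for spring",
}

def search(query: str):
    """Match a query against every document, score, and rank."""
    terms = set(query.lower().split())
    scores = {
        doc_id: len(terms & set(text.lower().split()))
        for doc_id, text in DOCS.items()
    }
    # Keep only documents that match at all, best score first.
    return sorted(
        (d for d, s in scores.items() if s > 0),
        key=lambda d: scores[d],
        reverse=True,
    )

print(search("information relevance"))  # ['d1', 'd2']
```

Note that, as the text says, the query does not identify a single object: several documents match with different degrees of relevancy, and the ranking is what the user ultimately sees.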
Much of this research has drawn on the work done by Leckie, Pettigrew (now Fisher) and Sylvain, who in 1996 conducted an extensive review of the LIS literature (as well as the literature of other academic fields) on professionals' information seeking. The authors proposed an analytic model of professionals' information-seeking behaviour, intended to be generalizable across the professions, thus providing a platform for future research in the area. The model was intended to "prompt new insights... and give rise to more refined and applicable theories of information seeking" (Leckie, Pettigrew & Sylvain 1996, p. 188). The model has been adapted by Wilkinson (2001), who proposes a model of the information seeking of lawyers. Recent studies on this topic address the concept of information gathering, which "provides a broader perspective that adheres better to professionals' work-related reality and desired skills"[49] (Solomon & Bronstein 2021). An information society is a society where the creation, distribution, diffusion, use, integration and manipulation of information is a significant economic, political, and cultural activity. The aim of an information society is to gain competitive advantage internationally through using IT in a creative and productive way. The knowledge economy is its economic counterpart, whereby wealth is created through the economic exploitation of understanding. People who have the means to partake in this form of society are sometimes called digital citizens. Basically, an information society is the means of getting information from one place to another (Wark 1997, p. 22). As technology has become more advanced over time, so too has the way we have adapted to sharing this information with each other. Information society theory discusses the role of information and information technology in society, the question of which key concepts should be used for characterizing contemporary society, and how to define such concepts.
It has become a specific branch of contemporary sociology. Knowledge representation (KR) is an area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. The KR can be made independent of the underlying knowledge model or knowledge base system (KBS), such as a semantic network.[50] Knowledge representation research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. A symbol vocabulary and a system of logic are combined to enable inferences about elements in the KR to create new KR sentences. Logic is used to supply formal semantics of how reasoning functions should be applied to the symbols in the KR system. Logic is also used to define how operators can process and reshape the knowledge. Examples of operators and operations include negation, conjunction, adverbs, adjectives, quantifiers and modal operators. The logic supplies the interpretation theory. These elements – symbols, operators, and interpretation theory – are what give sequences of symbols meaning within a KR.
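The idea of deriving new knowledge elements from symbols plus rules of inference can be illustrated with a minimal forward-chaining sketch: facts and if-then rules over symbols, with a loop that keeps applying rules until no new fact appears. The symbols and rules below are invented for the example, and this is only one of many inference strategies a KR system might use:

```python
# Invented facts and if-then rules over symbolic sentences.
facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"has_wings(tweety)"}, "can_fly(tweety)"),
]

# Forward chaining: apply every rule whose premises hold, repeating
# until a fixed point is reached (no rule adds a new fact).
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['bird(tweety)', 'can_fly(tweety)', 'has_wings(tweety)']
```

The loop is the "creating new elements of knowledge" step in miniature: starting from one asserted fact, two new KR sentences are inferred.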
https://en.wikipedia.org/wiki/Information_science
Information technology infrastructure is defined broadly as a set of information technology (IT) components that are the foundation of an IT service; typically physical components (computer and networking hardware and facilities), but also various software and network components.[1][2] According to the ITIL Foundation Course Glossary, IT infrastructure can also be defined as "All of the hardware, software, networks, facilities, etc., that are required to develop, test, deliver, monitor, control or support IT services. The term IT infrastructure includes all of the Information Technology but not the associated People, Processes and documentation."[3] In IT infrastructure, the above technological components contribute to and drive business functions. Leaders and managers within the IT field are responsible for ensuring that both the physical hardware and the software networks and resources are working optimally. IT infrastructure can be looked at as the foundation of an organization's technology systems, thereby playing an integral part in driving its success.[4] All organizations that rely on technology to do their business can benefit from having a robust, interconnected IT infrastructure. With the current speed of technological change and the competitive nature of business, IT leaders have to ensure that their IT infrastructure is designed so that changes can be made quickly and without impacting business continuity.[5] While companies traditionally relied on physical data centers or colocation facilities to support their IT infrastructure, cloud hosting has become more popular because it is easier to manage and scale. IT infrastructure can be managed by the company itself or outsourced to another company that has consulting expertise to develop robust infrastructures for an organization.[6] With advances in online outreach availability, it has become easier for end users to access technology.
As a result, IT infrastructures have become more complex, and it is therefore harder for managers to oversee end-to-end operations. To mitigate this issue, strong IT infrastructures require employees with varying skill sets. The fields of IT management and IT service management rely on IT infrastructure, and the ITIL framework was developed as a set of best practices with regard to IT infrastructure. The ITIL framework helps companies remain responsive to technological market demands. Technology can often be thought of as an innovative product which can incur high production costs, but the ITIL framework helps address these issues and allows a company to be more cost-effective, which helps IT managers keep the IT infrastructure functioning.[7] Even though IT infrastructure has been around for over 60 years, there have been incredible advances in technology in the past 15 years.[8] The primary components of an IT infrastructure are the physical systems such as hardware, storage, routers and switches, and the building itself, but also networks and software.[9] In addition to these components, there is the need for IT infrastructure security. Security keeps the network and its devices safe in order to maintain integrity within the overall infrastructure of the organization.[10] Of the layers of the OSI networking model, the first three are directly involved with IT infrastructure. The physical layer serves as the fundamental layer for hardware. The second and third layers (Data Link and Network) are essential for communication to and from hardware devices; without them, networking – and therefore, in a sense, the internet itself – would not be possible.[11] It is important to emphasize that fiber optics play a crucial role in a network infrastructure. Fiber optics[12] serve as the primary means for connecting network equipment and establishing connections between buildings.
Different types of technological tasks may require a tailored approach to the infrastructure, which can be achieved through a traditional, cloud, or hyper-converged IT infrastructure.[13]

Many functioning parts contribute to the health of an IT infrastructure. To contribute positively to the organization, employees can acquire abilities that benefit the company, including key technical skills such as cloud, network, and data administration, and soft skills such as collaboration and communication.[14][15]

As data storage and management become more digitized, IT infrastructure is moving towards the cloud. Infrastructure as a service (IaaS) provides the ability to host on a server and is a platform for cloud computing.[16]
https://en.wikipedia.org/wiki/IT_infrastructure
Genetic privacy involves the concept of personal privacy concerning the storing, repurposing, provision to third parties, and displaying of information pertaining to one's genetic information.[1][2] This concept also encompasses privacy regarding the ability to identify specific individuals by their genetic sequence, and the potential to gain information on specific characteristics about a person via portions of their genetic information, such as their propensity for specific diseases or their immediate or distant ancestry.[3]

With the public release of genome sequence information of participants in large-scale research studies, questions regarding participant privacy have been raised. In some cases, it has been shown that previously anonymous participants in large-scale genetic studies that released gene sequence information can be identified.[4]

Genetic privacy concerns also arise in the context of criminal law, because the government can sometimes overcome criminal suspects' genetic privacy interests and obtain their DNA samples.[5] Because genetic information is shared between family members, this raises privacy concerns for relatives as well.[6]

As concerns and issues of genetic privacy are raised, regulations and policies have been developed in the United States at both the federal and state levels.[7][8]

In the majority of cases, an individual's genetic sequence is considered unique to that individual.
One notable exception to this rule in humans is identical twins, who have nearly identical genome sequences at birth.[9] In the remainder of cases, one's genetic fingerprint is considered specific to a particular person and is regularly used to identify individuals, for instance to establish innocence or guilt in legal proceedings via DNA profiling.[10] Specific gene variants in one's genetic code, known as alleles, have been shown to have strong predictive effects on the occurrence of diseases, such as the BRCA1 and BRCA2 mutant genes in breast cancer and ovarian cancer, or the PSEN1, PSEN2, and APP genes in early-onset Alzheimer's disease.[11][12][13] Additionally, gene sequences are passed down with a regular pattern of inheritance between generations, and can therefore reveal one's ancestry via genealogical DNA testing. With knowledge of the sequences of one's biological relatives, traits can also be compared to determine relationships between individuals, or the lack thereof, as is often done in DNA paternity testing.
As such, one's genetic code can be used to infer many characteristics about an individual, including many potentially sensitive subjects.[14]

Common specimen types for direct-to-consumer genetic testing are cheek swabs and saliva samples.[15] One of the most popular reasons for at-home genetic testing is to obtain information on an individual's ancestry via genealogical DNA testing, which is offered by many companies such as 23andMe, AncestryDNA, Family Tree DNA, and MyHeritage.[16] Other tests are also available which provide consumers with information on genes that influence the risk of specific diseases, such as the risk of developing late-onset Alzheimer's disease or celiac disease.[17]

Studies have shown that genomic data is not immune to adversary attacks.[3][18][19] A study conducted in 2013 revealed vulnerabilities in the security of public databases that contain genetic data.[4] As a result, research subjects could sometimes be identified by their DNA alone.[20] Although reports of premeditated breaches outside of experimental research are disputed, researchers suggest the liability is still important to study.[21]

While accessible genomic data has been pivotal in advancing biomedical research, it also escalates the possibility of exposing sensitive information.[3][18][19][21][22] A common practice in genomic medicine to protect patient anonymity involves removing patient identifiers.[3][18][19][23] However, de-identified data is not subject to the same protections afforded to research subjects.[23][19] Furthermore, there is an increasing ability to re-identify patients and their genetic relatives from their genetic data.[3][18][19][22] One study demonstrated re-identification by piecing together genomic data from short tandem repeats (e.g. CODIS), SNP allele frequencies (e.g. ancestry testing), and whole-genome sequencing.[18] The authors also hypothesized that a patient's genetic information, ancestry testing, and social media could be used to identify relatives.[18] Other studies have echoed the
risks associated with linking genomic information with public data such as social media, voter registries, web searches, and personal demographics,[3] or with controlled data such as personal medical records.[19]

There is also controversy regarding the responsibility a DNA testing company has to ensure that leaks and breaches do not happen.[24] Determining who legally owns the genomic data, the company or the individual, is of legal concern. There have been published examples of personal genome information being exploited, as well as of indirect identification of family members.[18][25] Additional privacy concerns, related to, e.g., genetic discrimination, loss of anonymity, and psychological impacts, have been increasingly pointed out by the academic community[25][26] as well as by government agencies.[17]

Additionally, for criminal justice and privacy advocates, the use of genetic information to identify suspects in criminal investigations is worrisome under the United States Fourth Amendment, especially when an indirect genetic link connects an individual to crime scene evidence.[27] Since 2018, law enforcement officials have been harnessing the power of genetic data to revisit cold cases with DNA evidence.[28] Suspects discovered through this process are not directly identified by the input of their DNA into established criminal databases, like CODIS.
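The re-identification risk described in the studies above is essentially a record-linkage attack: joining a "de-identified" dataset to a public dataset on shared quasi-identifiers. A minimal hypothetical sketch of the idea follows; all names, fields, and records are fabricated for illustration and are not drawn from any cited study.

```python
# Toy record-linkage sketch: re-identifying a "de-identified" genomic
# dataset by joining it to a public registry on quasi-identifiers
# (birth year and 3-digit ZIP prefix). All records are fabricated.

deidentified_genomes = [
    {"sample_id": "S1", "birth_year": 1975, "zip3": "974", "variant": "BRCA1+"},
    {"sample_id": "S2", "birth_year": 1982, "zip3": "100", "variant": "APOE-e4"},
]

public_registry = [
    {"name": "Alice Example", "birth_year": 1975, "zip3": "974"},
    {"name": "Bob Example", "birth_year": 1990, "zip3": "303"},
]

def link_records(genomes, registry):
    """Join the two datasets on the (birth_year, zip3) quasi-identifiers."""
    index = {}
    for person in registry:
        index.setdefault((person["birth_year"], person["zip3"]), []).append(person)
    matches = []
    for g in genomes:
        for person in index.get((g["birth_year"], g["zip3"]), []):
            matches.append((g["sample_id"], person["name"]))
    return matches

print(link_records(deidentified_genomes, public_registry))
# → [('S1', 'Alice Example')]: a unique join re-identifies the "anonymous" sample.
```

The point of the sketch is that removing direct identifiers (names, addresses) does not help once the remaining attributes are unique enough to join against an outside dataset, which is why the cited work treats voter registries, social media, and genealogy databases as linkage risks.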
Instead, suspects are identified through familial genetic sleuthing by law enforcement, who submit crime scene DNA evidence to genetic database services that link users whose DNA similarity indicates a family connection.[28][29] Officers can then track the newly identified suspect in person, waiting to collect discarded trash that might carry DNA in order to confirm the match.[28] Despite the privacy concerns of suspects and their relatives, this procedure is likely to survive Fourth Amendment scrutiny.[6] Much like donors of biological samples in genetic research,[30][31] criminal suspects do not retain property rights in abandoned waste; they can no longer assert an expectation of privacy in the discarded DNA used to confirm law enforcement suspicions, thereby eliminating their Fourth Amendment protection in that DNA.[6] Additionally, the genetic privacy of relatives is likely irrelevant under current case law, since Fourth Amendment protection is “personal” to criminal defendants.[6]

In a systematic review of perspectives toward genetic privacy, researchers highlighted some of the concerns individuals hold regarding their genetic information, such as the potential dangers and effects on themselves and family members.[21] Academics note that participating in biomedical research or genetic testing has implications beyond the participant; it can also reveal information about genetic relatives.[18][20][21][25] The review also found that people expressed concerns about which body controls their information and whether their genetic information could be used against them.[21]

Additionally, the American Society of Human Genetics has expressed concerns about genetic tests in children.[32] The society infers that testing could lead to negative consequences for the child. For example, if a child's likelihood of adoption were influenced by genetic testing, the child might suffer from self-esteem issues.
A child's well-being might also suffer due to paternity testing or custody battles that require this type of information.[14]

When access to genetic information is regulated, it can prevent insurance companies and employers from reaching such data. This could avoid discrimination, which oftentimes leaves an individual whose information has been breached without a job or without insurance.[14]

In the United States, biomedical research involving human subjects is governed by a baseline standard of ethics known as the Common Rule, which aims to protect subjects' privacy by requiring "identifiers" such as names or addresses to be removed from collected data.[33] A 2012 report by the Presidential Commission for the Study of Bioethical Issues stated, however, that "what constitutes 'identifiable' and 'de-identified' data is fluid and that evolving technologies and the increasing accessibility of data could allow de-identified data to become re-identified".[33] In fact, research has already shown that it is "possible to discover a study participant's identity by cross-referencing research data about him and his DNA sequence … [with] genetic genealogy and public-records databases".[34] This has led to calls for policy-makers to establish consistent guidelines and best practices for the accessibility and usage of individual genomic data collected by researchers.[35]

Privacy protections for genetic research participants were strengthened by provisions of the 21st Century Cures Act (H.R. 34), passed on 7 December 2016, for which the American Society of Human Genetics (ASHG) commended Congress, Senator Warren, and Senator Enzi.[8][36][37]

The Genetic Information Nondiscrimination Act of 2008 (GINA) protects the genetic privacy of the public, including research participants.
The passage of GINA makes it illegal for health insurers or employers to request or require genetic information about an individual or their family members, and further prohibits the discriminatory use of such information.[38] This protection does not extend to other forms of insurance, such as life insurance.[38]

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) also provides some genetic privacy protections. HIPAA defines health information to include genetic information,[39] which places restrictions on whom health providers can share the information with.[40]

Three kinds of laws are frequently associated with genetic privacy: those relating to informed consent and property rights, those preventing insurance discrimination, and those prohibiting employment discrimination.[41][42] According to the National Human Genome Research Institute, forty-one states had enacted genetic privacy laws as of January 2020.[41] However, those laws vary in the scope of protection offered; while some "apply broadly to any person", others apply "narrowly to certain entities such as insurers, employers, or researchers."[41]

Arizona, for example, falls in the former category and offers broad protection. Arizona's genetic privacy statutes currently focus on the need for informed consent to create, store, or release genetic testing results,[43][44] but a pending bill would amend the state's genetic privacy law framework to grant exclusive property rights in genetic information derived from genetic testing to all persons tested.[45] By expanding privacy rights to include property rights, the bill would grant persons who undergo genetic testing greater control over their genetic information. Arizona also prohibits insurance and employment discrimination on the basis of genetic testing results.[46][47]

New York State also has strong legislative measures protecting individuals from genetic discrimination.
Section 79-l of the New York Civil Rights Law places strict restrictions on the usage of genetic data. The statute also outlines the proper conditions for consenting to genetic data collection or usage.[48]

California similarly offers a broad range of protection for genetic privacy, but it stops short of granting individuals property rights in their genetic information. While currently enacted legislation focuses on prohibiting genetic discrimination in employment[49] and insurance,[50] a piece of pending legislation would extend genetic privacy rights to provide individuals with greater control over genetic information obtained through direct-to-consumer testing services like 23andMe.[51]

Florida passed House Bill 1189, a DNA privacy law that prohibits insurers from using genetic data, in July 2020.[7]

On the other hand, Mississippi offers few genetic privacy protections beyond those required by the federal government. In the Mississippi Employment Fairness Act, the legislature recognized the applicability of the Genetic Information Nondiscrimination Act,[52] which "prohibit[s] discrimination on the basis of genetic information with respect to health insurance and employment."[53][54]

To balance data sharing with the need to protect the privacy of research subjects, geneticists are considering moving more data behind controlled-access barriers, authorizing trusted users to access the data from many studies rather than "having to obtain it piecemeal from different studies".[4][20]

In October 2005, IBM became the world's first major corporation to establish a genetics privacy policy.
Its policy prohibits the use of employees' genetic information in employment decisions.[55]

According to a 2014 study by Yaniv Erlich and Arvind Narayanan, genetic privacy breaching techniques fall into three categories.[56] However, more recent studies have indicated new avenues for breaching genetic privacy. According to a 2022 study by Zhiyu Wan et al., safeguards for genetic privacy fall into two categories.[59]
https://en.wikipedia.org/wiki/Genetic_privacy
Pirate Party is a label adopted by various political parties worldwide that share a set of values and policies focused on civil rights in the digital age.[1][2][3][4] The fundamental principles of Pirate Parties include freedom of information, freedom of the press, freedom of expression, digital rights, and internet freedom. The first Pirate Party, initially named "Piratpartiet", was founded in Sweden in 2006 by Rick Falkvinge, and the movement has since expanded to over 60 countries.

Central to their vision is the defense of free access to and sharing of knowledge, and opposition to intellectual monopolies. They therefore advocate for copyright and patent law reform, aiming to make these laws more flexible and fairer, foster innovation, and balance creators' rights with public access to knowledge. Specifically, they support shorter copyright terms and promote open access to scientific literature and educational resources.

Pirate parties are strong proponents of free and open-source software development. They recognize its inherent benefits: freedom of use, modification, and distribution; transparency to avoid unfair practices; global collaboration; innovation and cost reduction; and enhanced security through code verifiability. Net neutrality represents another key pillar: they advocate for equal access to the internet and oppose any attempts to restrict or prioritize internet traffic. They promote universal internet access, digital inclusion, and STEM and cybersecurity education to address the digital divide. Equally crucial in their programs are public and private investments in R&D, tech startups, digital infrastructure, Internet infrastructure, smart city technologies to optimize urban infrastructures, and robust cybersecurity measures to protect these systems from cyberattacks. Some Pirate parties also support universal basic income as a response to the economic challenges posed by advanced automation.
Pirate Parties advocate for a more equitable and inclusive platform economy based on commons-based peer production and collaborative consumption principles. These parties conceptualize technological innovations as elements of the global digital commons that should be freely accessible to all people worldwide. Unlike many conventional political positions, Pirate Parties oppose concepts of cyber sovereignty and digital protectionism, instead promoting unrestricted information flow across international borders and the systematic reduction of digital barriers between nations. Simultaneously, they work to diminish the concentrated influence of both corporate entities and state authorities that function as digital monopolies. The core Pirate Party position maintains that the internet must be preserved as an open public space devoid of unnecessary restrictions, where individuals can freely access, create, distribute, and share content without experiencing coercion or intimidation. This position reflects their fundamental commitment to digital freedom and the democratization of information technologies.

A significant concern for Pirate Parties is the growing threat of disinformation, infodemics, and manipulation in cyberspace. They advocate for media literacy and information literacy programs and transparent content moderation policies that combat false information while preserving freedom of expression. Recognizing how algorithmic echo chambers contribute to social polarization, they support technologies and policies that expose users to diverse viewpoints and promote critical thinking skills, viewing these as essential safeguards for democratic discourse in the digital age.

In terms of governance, Pirate Parties support the implementation of open e-government to enhance transparency, reduce costs, and increase the efficiency of decision-making processes. They propose a hybrid democratic model that integrates direct digital democracy (e-democracy) mechanisms with representative democratic institutions.
This decentralized and participatory governance, known as collaborative e-democracy, aims to distribute participation and decision-making among citizens through digital tools, allowing them to directly influence public policies (e-participation). It also incorporates forms of AI-assisted governance, secure and transparent electronic voting systems, data-driven decision-making processes, evidence-based policies, technology assessments, and anti-corruption measures to strengthen democratic processes and prevent manipulation and fraud.

Furthermore, these parties strongly defend open-source, decentralized, and privacy-enhancing technologies, such as blockchain, cryptocurrencies as an alternative to state currency (fiat money), peer-to-peer networks, instant messaging with end-to-end encryption, virtual private networks, and private and anonymous browsers. They consider these essential tools to protect personal data, individual privacy, and information security (both online and offline) against mass surveillance, data collection without consent, content censorship without due process, forced decryption, internet throttling or blocking, backdoor requirements in encryption, discriminatory algorithmic practices, unauthorized access to personal data, and the concentration of power in Big Tech.[5][6][7][8][9][10] Ultimately, protecting individual freedom is at the core of their political agenda, seen as a bulwark against the growing power of corporations and governments in controlling information and digital autonomy. This aligns closely with cyber-libertarian values and principles.[11]

The reference to historical piracy was strategically constructed by Pirate Parties through a process of cultural and political resignification. Initially, the term pirate was adopted provocatively and ironically in response to accusations from the entertainment industry against digital file sharing. Subsequently, this identity was elaborated more deeply to create a coherent political narrative.
The members transformed what was initially a pejorative label into a symbol of cultural resistance, recalling the tradition of "pirates" as rebels against established powers. They drew parallels with the pirate radio of the 1960s and 70s (such as Radio Caroline in the North Sea), which challenged state radio monopolies by broadcasting pop music from international waters. They recovered historical elements of the pirate republics of the 17th and 18th centuries, such as Nassau, emphasizing aspects of democratic self-government, practices of equitable distribution of plunder, and challenges to colonial powers. They highlighted how some pirate crews adopted codes that provided for forms of direct democracy, compensation for the wounded, and limitation of the captain's powers. The adoption of the pirate flag (Jolly Roger) was reinterpreted as a symbol of freedom of information and resistance to knowledge monopolies. This narrative was particularly effective because it allowed Pirate Parties to present themselves not simply as supporters of online piracy, but as heirs to a long tradition of resistance to monopolistic power, connecting the struggle for digital freedom to a romanticized historical tradition of challenging authority.

Rather than completely rejecting the traditional left–right political spectrum, Pirate Parties operate on a distinct political axis that political scientists might call authoritarian–anarchist or centralized–distributed in the digital and technological spheres. They therefore tend to combine libertarian and anarchist elements on digital issues with progressive (from the American point of view) positions on social issues.[12]

The first Pirate Party to be established was the Pirate Party of Sweden (Swedish: Piratpartiet), whose website was launched on 1 January 2006 by Rick Falkvinge.
Falkvinge was inspired to found the party after finding that Swedish politicians were generally unresponsive to Sweden's debate over changes to copyright law in 2005.[13]

The United States Pirate Party was founded on 6 June 2006 by University of Georgia graduate student Brent Allison. The party's concerns were abolishing the Digital Millennium Copyright Act, reducing the length of copyright from 95 years after publication or 70 years after the author's death to 14 years, and the expiry of patents that do not result in significant progress after four years, as opposed to 20 years. However, Allison stepped down as leader three days after founding the party.[14]

The Pirate Party of Austria (German: Piratenpartei Österreichs) was founded in July 2006 in the run-up to the 2006 Austrian legislative election by Florian Hufsky and Jürgen "Juxi" Leitner.[15]

The Pirate Party of Finland was founded in 2008 and entered the official registry of Finnish political parties in 2009.

The Pirate Party of the Czech Republic (Czech: Česká pirátská strana) was founded on 19 April 2009 by Jiří Kadeřávek.

The 2009 European Parliament election took place between 4 and 7 June 2009, and various Pirate Parties stood candidates. The greatest success came in Sweden, where the Pirate Party of Sweden won 7.1% of the vote and had Christian Engström elected as the first ever Pirate Party Member of the European Parliament (MEP).[16][17] Following the introduction of the Treaty of Lisbon, the Pirate Party of Sweden was afforded another MEP in 2011, Amelia Andersdotter.

On 30 July 2009, the Pirate Party UK was registered with the Electoral Commission.
Its first party leader was Andrew Robinson, and its treasurer was Eric Priezkalns.[18][19][20]

In April 2010, an international organisation to encourage cooperation and unity between Pirate Parties, Pirate Parties International, was founded in Belgium.[21]

In the 2011 Berlin state election to the Abgeordnetenhaus of Berlin, the Pirate Party of Berlin (a state chapter of Pirate Party Germany) won 8.9% of the vote, corresponding to 15 seats.[22][23] John Naughton, writing for The Guardian, argued that the Pirate Party of Berlin's success could not be replicated by the Pirate Party UK, as the UK does not use a proportional representation electoral system.[24]

In the 2013 Icelandic parliamentary election, the Icelandic Pirate Party won 5.1% of the vote, returning three Pirate Party Members of Parliament: Birgitta Jónsdóttir for the Southwest Constituency, Helgi Hrafn Gunnarsson for Reykjavik Constituency North, and Jón Þór Ólafsson for Reykjavik Constituency South.[25][26] Birgitta had previously been an MP for the Citizens' Movement (from 2009 to 2013), representing Reykjavik Constituency South. As of 2015, it was the largest political party in Iceland, with 23.9% of the vote.[27]

The 2014 European Parliament election took place between 22 and 24 May. Felix Reda was at the top of the list for Pirate Party Germany, and was elected as the party received 1.5% of the vote.
Other notable results include the Czech Pirate Party, which received 4.8% of the vote, only 0.2 percentage points short of winning a seat; the Pirate Party of Luxembourg, which received 4.2% of the vote; and the Pirate Party of Sweden, which received 2.2% of the vote and lost both of its MEPs.[28] Reda had previously worked as an assistant in the office of former Pirate Party MEP Amelia Andersdotter.[29] On 11 June 2014, Reda was elected vice-president of the Greens/EFA group in the European Parliament[30] and was given the job of copyright reform rapporteur.[31]

The Icelandic Pirate Party was leading the national polls in March 2015, with 23.9%. The Independence Party polled 23.4%, only 0.5% behind the Pirate Party. According to the poll, the Pirate Party would win 16 seats in the Althing.[32][33] In April 2016, in the wake of the Panama Papers scandal, polls showed the Icelandic Pirate Party at 43% and the Independence Party at 21.6%,[34] although the Pirate Party eventually won 15% of the vote and 10 seats in the 29 October 2016 parliamentary election.

In April 2017, a group of students at the University of California, Berkeley formed a Pirate Party to participate in the Associated Students of the University of California senate elections, winning the only third-party seat.[35]

The Czech Pirate Party entered the Chamber of Deputies of the Czech Parliament for the first time after the election held on 20 and 21 October 2017, with 10.8% of the vote.

The Czech Pirate Party, after finishing in second place with 17.1% of the vote in the 2018 Prague municipal election held on 5 and 6 October 2018, formed a coalition with Prague Together and United Forces for Prague (TOP 09, Mayors and Independents, KDU-ČSL, the Liberal-Environmental Party, and SNK European Democrats). The Czech Pirate Party's representative, Zdeněk Hřib, was selected to be Mayor of Prague. This was probably the first time a pirate party member became the mayor of a major world city.
At the 2019 European Parliament election, three Czech Pirate MEPs and one German Pirate MEP were elected and joined the Greens–European Free Alliance, the group in the European Parliament that had previously included the Swedish Pirate MEPs and Germany's Julia Reda.

Some campaigns have included demands for the reform of copyright and patent laws.[36] In 2010, Swedish MEP Christian Engström called for supporters of amendments to the Data Retention Directive to withdraw their signatures, citing a misleading campaign.[37]

Pirate Parties International (PPI) is the umbrella organization of the national Pirate Parties. Since 2006, the organization has existed as a loose union[38] of the national parties. Since October 2009, Pirate Parties International has had the status of a non-governmental organization (Feitelijke vereniging) based in Belgium. The organization was officially founded at a conference held from 16 to 18 April 2010 in Brussels, when the organization's statutes were adopted by the 22 national pirate parties represented at the event.[39]

The European Pirate Party (PPEU) is a European political alliance founded in March 2014 which consists of various pirate parties within European countries.[40] It is not currently registered as a European political party.[41]

The French-speaking Pirate Parties are organized in the Parti Pirate Francophone. Current members are the pirate parties in Belgium, Côte d'Ivoire, France, Canada, and Switzerland.[42]

The following representatives of the Pirate Party movement have been elected to a national or supranational legislature.
Since the 2021 Czech legislative election, the following four MPs are in office: The following served as MPs during the 2017–2021 term: Since the 2024 Czech Senate election, the party had one senator, but she left the Pirates in 2025; she remains a supporter of the Pirates.[49] The following are former senators: Since the 2024 EU elections, the party has one MEP: The following are former MEPs: Since the 2024 EU elections, the party does not have any national elected representatives. The former MEPs are as follows: Since the 2024 parliamentary election, the party does not have any national elected representatives. The former MPs are as follows:

Outside Sweden, pirate parties have been started in over 40 countries,[51] inspired by the Swedish initiative.
https://en.wikipedia.org/wiki/Pirate_Party
Privacy (UK: /ˈprɪvəsi/, US: /ˈpraɪ-/)[1][2] is the ability of an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively. The domain of privacy partially overlaps with security, which can include the concepts of appropriate use and protection of information. Privacy may also take the form of bodily integrity.

Throughout history, there have been various conceptions of privacy. Most cultures acknowledge the right of individuals to keep aspects of their personal lives out of the public domain. The right to be free from unauthorized invasions of privacy by governments, corporations, or individuals is enshrined in the privacy laws of many countries and, in some instances, their constitutions.

With the rise of technology, the debate regarding privacy has expanded from a bodily sense to include a digital sense. In most countries, the right to digital privacy is considered an extension of the original right to privacy, and many countries have passed acts that further protect digital privacy from public and private entities. There are multiple techniques to invade privacy, which may be employed by corporations or governments for profit or for political reasons. Conversely, in order to protect privacy, people may employ encryption or anonymity measures.

The word privacy is derived from the Latin word and concept of ‘privatus’, which referred to things set apart from what is public; personal and belonging to oneself, and not to the state.[3] Literally, ‘privatus’ is the past participle of the Latin verb ‘privare’, meaning ‘to deprive’.[4]

The concept of privacy has been explored and discussed by numerous philosophers throughout history, and has historical roots in ancient Greek philosophical discussions.
The most well-known of these was Aristotle's distinction between two spheres of life: the public sphere of the polis, associated with political life, and the private sphere of the oikos, associated with domestic life.[5] Privacy is valued along with other basic necessities of life in the Jewish deuterocanonical Book of Sirach.[6] Islam's holy text, the Qur'an, states the following regarding privacy: ‘Do not spy on one another’ (49:12); ‘Do not enter any houses except your own homes unless you are sure of their occupants' consent’ (24:27).[7]

English philosopher John Locke's (1632–1704) writings on natural rights and the social contract laid the groundwork for modern conceptions of individual rights, including the right to privacy. In his Second Treatise of Civil Government (1689), Locke argued that a man is entitled to his own self through the natural rights of life, liberty, and property.[8] He believed that the government was responsible for protecting these rights so that individuals were guaranteed private spaces in which to practice personal activities.[9]

In the political sphere, philosophers have held differing views on the right of private judgment. German philosopher Georg Wilhelm Friedrich Hegel (1770–1831) made the distinction between Moralität, which refers to an individual's private judgment, and Sittlichkeit, pertaining to one's rights and obligations as defined by an existing corporate order. On the contrary, Jeremy Bentham (1748–1832), an English philosopher, interpreted law as an invasion of privacy. His theory of utilitarianism argued that legal actions should be judged by the extent of their contribution to human wellbeing, or necessary utility.[10]

Hegel's notions were modified by the prominent 19th-century English philosopher John Stuart Mill. Mill's essay On Liberty (1859) argued for the importance of protecting individual liberty against the tyranny of the majority and the interference of the state.
His views emphasized the right of privacy as essential for personal development and self-expression.[11] Discussions surrounding surveillance coincided with philosophical ideas on privacy. Jeremy Bentham anticipated the phenomenon now known as the Panoptic effect through his 1791 architectural design for a prison called the Panopticon: surveillance experienced as a general awareness of being watched that could never be confirmed at any particular moment.[12] French philosopher Michel Foucault (1926–1984) concluded that the mere possibility of surveillance in the Panopticon meant a prisoner had no choice but to conform to the prison's rules.[12] As technology has advanced, the ways in which privacy is protected and violated have changed with it. In the case of some technologies, such as the printing press or the Internet, the increased ability to share information can lead to new ways in which privacy can be breached. It is generally agreed that the first publication advocating privacy in the United States was the 1890 article by Samuel Warren and Louis Brandeis, "The Right to Privacy",[13] and that it was written mainly in response to the increase in newspapers and photographs made possible by printing technologies.[14] In 1949, George Orwell's 1984 was published. A classic dystopian novel, 1984 describes the life of Winston Smith in the year 1984 in Oceania, a totalitarian state. The all-controlling Party, led by Big Brother, maintains power through mass surveillance and limits on freedom of speech and thought.
George Orwell provides commentary on the negative effects of totalitarianism, particularly on privacy and censorship.[15] Parallels have been drawn between 1984 and modern censorship and privacy, a notable example being that large social media companies, rather than the government, are able to monitor a user's data and decide what is allowed to be said online through their censorship policies, ultimately for monetary purposes.[16] In the 1960s, people began to consider how changes in technology were bringing changes in the concept of privacy.[17] Vance Packard's The Naked Society was a popular book on privacy from that era and led US discourse on privacy at that time.[17] In addition, Alan Westin's Privacy and Freedom shifted the debate regarding privacy beyond a physical sense, that is, how the government controls a person's body (e.g. Roe v. Wade) and activities such as wiretapping and photography, toward a digital sense. As important records became digitized, Westin argued that personal data was becoming too accessible and that a person should have complete jurisdiction over their data, laying the foundation for the modern discussion of privacy.[18] New technologies can also create new ways to gather private information. In 2001, the legal case Kyllo v. United States (533 U.S. 27) determined that the use of thermal imaging devices that can reveal previously unknown information without a warrant constitutes a violation of privacy. In 2019, amid a corporate rivalry in competing voice-recognition software, Apple and Amazon required employees to listen to recordings of users' intimate moments and faithfully transcribe the contents.[19] Police and citizens often conflict over the degree to which the police can intrude on a citizen's digital privacy. For instance, in 2012, in United States v. Jones (565 U.S. 400), the Supreme Court ruled unanimously that warrantless tracking infringes the Fourth Amendment; Antoine Jones had been arrested on drug charges after police placed a GPS tracker on his car without a warrant.
The Supreme Court also reasoned that there is some "reasonable expectation of privacy" in transportation, since a reasonable expectation of privacy had already been established under Griswold v. Connecticut (1965). The Supreme Court further clarified that the Fourth Amendment pertains not only to physical instances of intrusion but also to digital ones, and thus United States v. Jones became a landmark case.[20] In 2014, the Supreme Court ruled unanimously in Riley v. California (573 U.S. 373) that searching a citizen's phone without a warrant is an unreasonable search and a violation of the Fourth Amendment; David Leon Riley had been arrested after he was pulled over for driving on expired license tags and police searched his phone, discovering that he was tied to a shooting. The Supreme Court concluded that cell phones contain personal information different from trivial items, and further stated that information stored on the cloud was not necessarily a form of evidence. Riley v. California became a landmark case, protecting citizens' digital privacy when confronted with the police.[21] A notable recent instance of the conflict between law enforcement and a citizen over digital privacy is the 2018 case Carpenter v. United States (585 U.S. ____). In this case, the FBI used cell phone records obtained without a warrant to arrest Timothy Ivory Carpenter on multiple charges, and the Supreme Court ruled that the warrantless search of cell phone records violated the Fourth Amendment, citing that the Fourth Amendment protects "reasonable expectations of privacy" and that information sent to third parties can still fall under such expectations.[22] Beyond law enforcement, many interactions between the government and citizens have been revealed either lawfully or unlawfully, specifically through whistleblowers.
One notable example is Edward Snowden, who disclosed multiple mass surveillance operations of the National Security Agency (NSA). These disclosures revealed that the NSA continues to breach the security of millions of people, mainly through mass surveillance programs: collecting great amounts of data through third-party private companies, hacking into embassies and the networks of other countries, and various other breaches of data. The revelations prompted a culture shock and stirred international debate about digital privacy.[23] The Internet and technologies built on it enable new forms of social interaction at increasingly faster speeds and larger scales. Because the computer networks which underlie the Internet introduce such a wide range of novel security concerns, the discussion of privacy on the Internet is often conflated with security.[24] Indeed, many entities such as corporations involved in the surveillance economy inculcate a security-focused conceptualization of privacy which reduces their obligations to uphold privacy to a matter of regulatory compliance,[25] while at the same time lobbying to minimize those regulatory requirements.[26] The Internet's effect on privacy includes all of the ways that computational technology and the entities that control it can subvert the privacy expectations of their users.[27][28] In particular, the right to be forgotten is motivated both by the computational ability to store and search through massive amounts of data and by the subverted expectations of users who share information online without expecting it to be stored and retained indefinitely.
Phenomena such as revenge porn and deepfakes are not merely individual harms, because they require both the ability to obtain images without someone's consent and the social and economic infrastructure to disseminate that content widely.[28] Therefore, privacy advocacy groups such as the Cyber Civil Rights Initiative and the Electronic Frontier Foundation argue that addressing the new privacy harms introduced by the Internet requires both technological improvements to encryption and anonymity and societal efforts such as legal regulations to restrict corporate and government power.[29][30] While the Internet began as a government and academic effort up through the 1980s, private corporations began to enclose the hardware and software of the Internet in the 1990s, and now most Internet infrastructure is owned and managed by for-profit corporations.[31] As a result, the ability of governments to protect their citizens' privacy is largely restricted to industrial policy, instituting controls on corporations that handle communications or personal data.[32][33] Privacy regulations are often further constrained to protect only specific demographics, such as children,[34] or specific industries, such as credit card bureaus.[35] Several online social network sites (OSNs) are among the top 10 most visited websites globally. Facebook, for example, as of August 2015 was the largest social-networking site, with nearly 2.7 billion[36] members, who upload over 4.75 billion pieces of content daily.
While Twitter is significantly smaller, with 316 million registered users, the US Library of Congress recently announced that it will be acquiring and permanently storing the entire archive of public Twitter posts since 2006.[27] A review and evaluation of scholarly work regarding the current state of the value of individuals' privacy in online social networking shows the following results: "first, adults seem to be more concerned about potential privacy threats than younger users; second, policy makers should be alarmed by a large part of users who underestimate risks of their information privacy on OSNs; third, in the case of using OSNs and its services, traditional one-dimensional privacy approaches fall short".[37] This is exacerbated by deanonymization research indicating that personal traits such as sexual orientation, race, religious and political views, personality, or intelligence can be inferred based on a wide variety of digital footprints, such as samples of text, browsing logs, or Facebook Likes.[38] Intrusions of social media privacy are known to affect employment in the United States. Microsoft reports that 75 percent of U.S. recruiters and human-resource professionals now do online research about candidates, often using information provided by search engines, social-networking sites, photo/video-sharing sites, personal web sites and blogs, and Twitter. They also report that 70 percent of U.S. recruiters have rejected candidates based on internet information. This has created a need for many candidates to control various online privacy settings in addition to controlling their online reputations, the conjunction of which has led to legal suits against both social media sites and US employers.[27] Selfies are popular today.
A search for photos with the hashtag #selfie retrieves over 23 million results on Instagram and 51 million with the hashtag #me.[39] However, due to modern corporate and governmental surveillance, this may pose a risk to privacy.[40] In a research study with a sample of 3,763 participants, researchers found that among users posting selfies on social media, women generally have greater concerns over privacy than men, and that users' privacy concerns inversely predict their selfie behavior and activity.[41] An invasion of someone's privacy may be widely and quickly disseminated over the Internet. When social media sites and other online communities fail to invest in content moderation, an invasion of privacy can expose people to a much greater volume and degree of harassment than would otherwise be possible. Revenge porn may lead to misogynist or homophobic harassment, such as in the suicide of Amanda Todd and the suicide of Tyler Clementi. When someone's physical location or other sensitive information is leaked over the Internet via doxxing, harassment may escalate to direct physical harm such as stalking or swatting. Despite the way breaches of privacy can magnify online harassment, online harassment is often used as a justification to curtail freedom of speech, by removing the expectation of privacy via anonymity, or by enabling law enforcement to invade privacy without a search warrant. In the wake of Amanda Todd's death, the Canadian parliament proposed a motion purporting to stop bullying, but Todd's mother herself gave testimony to parliament rejecting the bill due to its provisions for warrantless breaches of privacy, stating "I don't want to see our children victimized again by losing privacy rights."[42][43][44] Even where such laws have been passed despite privacy concerns, they have not demonstrated a reduction in online harassment.
When the Korea Communications Commission introduced a registration system for online commenters in 2007, it reported that malicious comments decreased by only 0.9%, and in 2011 the system was repealed.[45] A subsequent analysis found that the users who posted the most comments actually increased their number of "aggressive expressions" when forced to use their real names.[46] In the US, while federal law only prohibits online harassment based on protected characteristics such as gender and race,[47] individual states have expanded the definition of harassment to further curtail speech: Florida's definition of online harassment includes "any use of data or computer software" that "Has the effect of substantially disrupting the orderly operation of a school."[48] Increasingly, mobile devices facilitate location tracking, which creates user privacy problems. A user's location and preferences constitute personal information, and their improper use violates that user's privacy. A recent MIT study by de Montjoye et al. showed that four spatio-temporal points, constituting approximate places and times, are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; therefore, even coarse or blurred datasets confer little privacy protection.[49] Several methods to protect user privacy in location-based services have been proposed, including the use of anonymizing servers and blurring of information. Methods to quantify privacy have also been proposed, to calculate the equilibrium between the benefit of obtaining accurate location information and the risks of breaching an individual's privacy.[50] There have been scandals regarding location privacy. One instance was the scandal concerning AccuWeather, in which it was revealed that AccuWeather was selling locational data.
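The intuition behind the de Montjoye et al. uniqueness result can be illustrated with a toy simulation. This sketch uses entirely synthetic traces, not the study's mobility data, and every parameter (number of places, hours, users, and trace length) is an arbitrary assumption chosen only for illustration:

```python
import random

# Toy illustration of spatio-temporal uniqueness (synthetic data,
# not the study's dataset). Each user has a trace: a set of
# (place, hour) points. We measure how often a few points sampled
# from one user's trace match no other user's trace.
random.seed(0)

PLACES, HOURS, USERS, TRACE_LEN = 50, 24, 1000, 20  # arbitrary assumptions

traces = [
    {(random.randrange(PLACES), random.randrange(HOURS)) for _ in range(TRACE_LEN)}
    for _ in range(USERS)
]

def unique_fraction(k):
    """Fraction of users uniquely identified by k points from their own trace."""
    unique = 0
    for trace in traces:
        sample = set(random.sample(sorted(trace), min(k, len(trace))))
        # A user is unique if no *other* trace also contains all sampled points.
        matches = sum(1 for t in traces if sample <= t)
        if matches == 1:
            unique += 1
    return unique / USERS

print(unique_fraction(1), unique_fraction(4))
```

Even in this tiny synthetic world, a single point is typically shared with many other users, while four points pin down almost every user uniquely, mirroring the study's finding that a handful of coarse spatio-temporal points suffices for re-identification.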
AccuWeather sold users' location data, even when users had opted out of location tracking within the app, to Reveal Mobile, a company that monetizes data related to a user's location.[51] Other international cases are similar. In 2017, a leaky API inside the McDelivery App exposed the private data, including home addresses, of 2.2 million users.[52] In the wake of these types of scandals, many large American technology companies such as Google, Apple, and Facebook have been subjected to hearings and pressure under the U.S. legislative system. In 2011, US Senator Al Franken wrote an open letter to Steve Jobs, noting the ability of iPhones and iPads to record and store users' locations in unencrypted files.[53][54] Apple claimed this was an unintentional software bug, but Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."[55] In 2021, the U.S. state of Arizona found in a court case that Google had misled its users and stored users' locations regardless of their location settings.[56] The Internet has become a significant medium for advertising, with digital marketing making up approximately half of global ad spending in 2019.[57] While websites are still able to sell advertising space without tracking, including via contextual advertising, digital ad brokers such as Facebook and Google have instead encouraged the practice of behavioral advertising, providing code snippets used by website owners to track their users via HTTP cookies. This tracking data is also sold to other third parties as part of the mass surveillance industry.
Since the introduction of mobile phones, data brokers have also been planted within apps, resulting in a $350 billion digital industry especially focused on mobile devices.[58] Digital privacy has become the main source of concern for many mobile users, especially with the rise of privacy scandals such as the Facebook–Cambridge Analytica data scandal.[58] Apple has drawn reactions for features that prohibit advertisers from tracking a user's data without their consent.[59] Google attempted to introduce an alternative to cookies named FLoC, which it claimed reduced the privacy harms, but it later retracted the proposal due to antitrust probes and analyses that contradicted its claims of privacy.[60][61][62] The ability to do online inquiries about individuals has expanded dramatically over the last decade. Importantly, directly observed behavior, such as browsing logs, search queries, or contents of a public Facebook profile, can be automatically processed to infer secondary information about an individual, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality.[63] In Australia, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 made a distinction between collecting the contents of messages sent between users and the metadata surrounding those messages.
Most countries give citizens rights to privacy in their constitutions.[17] Representative examples include the Constitution of Brazil, which says "the privacy, private life, honor and image of people are inviolable"; the Constitution of South Africa, which says that "everyone has a right to privacy"; and the Constitution of the Republic of Korea, which says "the privacy of no citizen shall be infringed."[17] The Italian Constitution also defines the right to privacy.[64] In most countries whose constitutions do not explicitly describe privacy rights, court decisions have interpreted their constitutions to intend to give privacy rights.[17] Many countries have broad privacy laws outside their constitutions, including Australia's Privacy Act 1988, Argentina's Law for the Protection of Personal Data of 2000, Canada's 2000 Personal Information Protection and Electronic Documents Act, and Japan's 2003 Personal Information Protection Law.[17] Beyond national privacy laws, there are international privacy agreements.[65] The United Nations Universal Declaration of Human Rights says "No one shall be subjected to arbitrary interference with [their] privacy, family, home or correspondence, nor to attacks upon [their] honor and reputation."[17] The Organisation for Economic Co-operation and Development published its Privacy Guidelines in 1980.
The European Union's 1995 Data Protection Directive guides privacy protection in Europe.[17] The 2004 Privacy Framework by the Asia-Pacific Economic Cooperation is a privacy protection agreement for the members of that organization.[17] Approaches to privacy can, broadly, be divided into two categories: free market or consumer protection.[66] One example of the free market approach is found in the voluntary OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.[67] The principles reflected in the guidelines, free of legislative interference, are analyzed in an article putting them into perspective with concepts of the GDPR put into law later in the European Union.[68] In a consumer protection approach, in contrast, it is claimed that individuals may not have the time or knowledge to make informed choices, or may not have reasonable alternatives available. In support of this view, Jensen and Potts showed that most privacy policies are above the reading level of the average person.[69] The Privacy Act 1988 is administered by the Office of the Australian Information Commissioner. The initial introduction of privacy law in 1988 extended to the public sector, specifically to Federal government departments, under the Information Privacy Principles. State government agencies can also be subject to state-based privacy legislation.
This built upon the already existing privacy requirements that applied to telecommunications providers (under Part 13 of the Telecommunications Act 1997), and confidentiality requirements that already applied to banking, legal, and patient/doctor relationships.[70] In 2008 the Australian Law Reform Commission (ALRC) conducted a review of Australian privacy law and produced a report titled "For Your Information".[71] Recommendations were taken up and implemented by the Australian Government via the Privacy Amendment (Enhancing Privacy Protection) Bill 2012.[72] In 2015, the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015 was passed, to some controversy over its human rights implications and the role of media. Canada is a federal state whose provinces and territories abide by the common law, save the province of Quebec, whose legal tradition is the civil law. Privacy in Canada was first addressed through the Privacy Act,[73] a 1985 piece of legislation applicable to personal information held by government institutions. The provinces and territories would later follow suit with their own legislation.
Generally, the purposes of said legislation are to provide individuals rights to access personal information; to have inaccurate personal information corrected; and to prevent unauthorized collection, use, and disclosure of personal information.[74] In terms of regulating personal information in the private sector, the federal Personal Information Protection and Electronic Documents Act[75] ("PIPEDA") is enforceable in all jurisdictions unless a substantially similar provision has been enacted at the provincial level.[76] However, inter-provincial or international information transfers still engage PIPEDA.[76] PIPEDA has gone through two overhaul efforts, in 2021 and 2023, with the involvement of the Office of the Privacy Commissioner and Canadian academics.[77] In the absence of a statutory private right of action absent an OPC investigation, the common law torts of intrusion upon seclusion and public disclosure of private facts, as well as the Civil Code of Quebec, may be invoked for an infringement or violation of privacy.[78][79] Privacy is also protected under ss. 7 and 8 of the Canadian Charter of Rights and Freedoms,[80] which is typically applied in the criminal law context.[81] In Quebec, individuals' privacy is safeguarded by articles 3 and 35 to 41 of the Civil Code of Quebec[82] as well as by s.
5 of the Charter of human rights and freedoms.[83] In 2016, the European Union passed the General Data Protection Regulation (GDPR), which was intended to reduce the misuse of personal data and enhance individual privacy by requiring companies to receive consent before acquiring personal information from users.[84] Although there are comprehensive regulations for data protection in the European Union, one study finds that, despite the laws, there is a lack of enforcement, in that no institution feels responsible to control the parties involved and enforce their laws.[85] The European Union also champions the Right to be Forgotten concept in support of its adoption by other countries.[86] The Aadhaar project, introduced in 2009, resulted in all 1.2 billion Indians being associated with a 12-digit biometric-secured number. Proponents argue that Aadhaar has benefited the poor in India by providing them with a form of identity and preventing fraud and waste of resources, as the government previously could not allocate resources to their intended recipients because of ID issues. With the rise of Aadhaar, India has debated whether Aadhaar violates an individual's privacy and whether any organization should have access to an individual's digital profile, as the Aadhaar card became associated with other economic sectors, allowing for the tracking of individuals by both public and private bodies.[87] Aadhaar databases have suffered security attacks, and the project was also met with mistrust regarding the safety of the social protection infrastructures.[88] In 2017, when Aadhaar was challenged, the Indian Supreme Court declared privacy a human right but postponed the decision regarding the constitutionality of Aadhaar to another bench.[89] In September 2018, the Indian Supreme Court determined that the Aadhaar project did not violate the legal right to privacy.[90] In the United Kingdom, it is not possible to bring an
action for invasion of privacy. An action may be brought under another tort (usually breach of confidence), and privacy must then be considered under EC law. In the UK, it is sometimes a defence that disclosure of private information was in the public interest.[91] There is, however, the Information Commissioner's Office (ICO), an independent public body set up to promote access to official information and protect personal information. It does this by promoting good practice, ruling on eligible complaints, giving information to individuals and organisations, and taking action when the law is broken. The relevant UK laws include: Data Protection Act 1998; Freedom of Information Act 2000; Environmental Information Regulations 2004; Privacy and Electronic Communications Regulations 2003. The ICO has also provided a "Personal Information Toolkit" online which explains in more detail the various ways of protecting privacy online.[92] In the United States, more systematic treatises of privacy did not appear until the 1890s, with the development of privacy law in America.[93] Although the US Constitution does not explicitly include the right to privacy, individual as well as locational privacy may be implicitly granted by the Constitution under the 4th Amendment.[94] The Supreme Court of the United States has found that other guarantees have penumbras that implicitly grant a right to privacy against government intrusion, for example in Griswold v. Connecticut and Roe v. Wade. Dobbs v. Jackson Women's Health Organization later overruled Roe v. Wade, with Supreme Court Justice Clarence Thomas characterizing Griswold's penumbral argument as having a "facial absurdity",[95] casting doubt on the validity of a constitutional right to privacy in the United States and of previous decisions relying on it.[96] In the United States, the right of freedom of speech granted in the First Amendment has limited the effects of lawsuits for breach of privacy.
Privacy is regulated in the US by the Privacy Act of 1974 and various state laws. The Privacy Act of 1974 only applies to federal agencies in the executive branch of the federal government.[97] Certain privacy rights have been established in the United States via legislation such as the Children's Online Privacy Protection Act (COPPA),[98] the Gramm–Leach–Bliley Act (GLB), and the Health Insurance Portability and Accountability Act (HIPAA).[99] Unlike the EU and most EU member states, the US does not recognize the right to privacy of non-US citizens. The UN's Special Rapporteur on the right to privacy, Joseph A. Cannataci, criticized this distinction.[100] The theory of contextual integrity,[101] developed by Helen Nissenbaum, defines privacy as an appropriate information flow, where appropriateness, in turn, is defined as conformance with legitimate informational norms specific to social contexts. In 1890, the United States jurists Samuel D. Warren and Louis Brandeis wrote "The Right to Privacy", an article in which they argued for the "right to be let alone", using that phrase as a definition of privacy.[102] This concept relies on the theory of natural rights and focuses on protecting individuals.
The article was a response to recent technological developments, such as photography, and sensationalist journalism, also known as yellow journalism.[103] There is extensive commentary over the meaning of being "let alone", and among other ways, it has been interpreted to mean the right of a person to choose seclusion from the attention of others if they wish to do so, and the right to be immune from scrutiny or being observed in private settings, such as one's own home.[102] Although this early vague legal concept did not describe privacy in a way that made it easy to design broad legal protections of privacy, it strengthened the notion of privacy rights for individuals and began a legacy of discussion on those rights in the US.[102] Limited access refers to a person's ability to participate in society without having other individuals and organizations collect information about them.[104] Various theorists have imagined privacy as a system for limiting access to one's personal information.[104] Edwin Lawrence Godkin wrote in the late 19th century that "nothing is better worthy of legal protection than private life, or, in other words, the right of every man to keep his affairs to himself, and to decide for himself to what extent they shall be the subject of public observation and discussion."[104][105] Adopting an approach similar to the one presented by Ruth Gavison[106] nine years earlier,[107] Sissela Bok said that privacy is "the condition of being protected from unwanted access by others—either physical access, personal information, or attention."[104][108] Control over one's personal information is the concept that "privacy is the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others."
Generally, a person who has consensually formed an interpersonal relationship with another person is not considered "protected" by privacy rights with respect to the person they are in the relationship with.[109][110] Charles Fried said that "Privacy is not simply an absence of information about us in the minds of others; rather it is the control we have over information about ourselves." Nevertheless, in the era of big data, control over information is under pressure.[111][112] Alan Westin defined four states, or experiences, of privacy: solitude, intimacy, anonymity, and reserve. Solitude is a physical separation from others.[113] Intimacy is a "close, relaxed, and frank relationship between two or more individuals" that results from the seclusion of a pair or small group of individuals.[113] Anonymity is the "desire of individuals for times of 'public privacy.'"[113] Lastly, reserve is the "creation of a psychological barrier against unwanted intrusion"; this creation of a psychological barrier requires others to respect an individual's need or desire to restrict communication of information concerning themself.[113] In addition to the psychological barrier of reserve, Kirsty Hughes identified three more kinds of privacy barriers: physical, behavioral, and normative. Physical barriers, such as walls and doors, prevent others from accessing and experiencing the individual.[114] (In this sense, "accessing" an individual includes accessing personal information about them.)[114] Behavioral barriers communicate to others, verbally through language or non-verbally through personal space, body language, or clothing, that an individual does not want the other person to access or experience them.[114] Lastly, normative barriers, such as laws and social norms, restrain others from attempting to access or experience an individual.[114] Psychologist Carl A.
Johnson has identified the psychological concept of "personal control" as closely tied to privacy. His concept was developed as a process containing four stages and two behavioural outcome relationships, with one's outcomes depending on situational as well as personal factors.[115] Privacy is described as "behaviors falling at specific locations on these two dimensions".[116] Johnson examined the following four stages to categorize where people exercise personal control: outcome choice control is the selection between various outcomes; behaviour selection control is the selection between behavioural strategies to apply to attain selected outcomes; outcome effectance describes the fulfillment of selected behaviour to achieve chosen outcomes; and outcome realization control is the personal interpretation of one's achieved outcome. The relationship between the two factors, primary and secondary control, is defined as the two-dimensional phenomenon through which one reaches personal control: primary control describes behaviour directly causing outcomes, while secondary control is behaviour indirectly causing outcomes.[117] Johnson explores the concept that privacy is a behaviour that has secondary control over outcomes. Lorenzo Magnani expands on this concept by highlighting how privacy is essential in maintaining personal control over one's identity and consciousness.[118] He argues that consciousness is partly formed by external representations of ourselves, such as narratives and data, which are stored outside the body. However, much of our consciousness consists of internal representations that remain private and are rarely externalized. This internal privacy, which Magnani refers to as a form of "information property" or "moral capital," is crucial for preserving free choice and personal agency. According to Magnani,[119] when too much of our identity and data is externalized and subjected to scrutiny, it can lead to a loss of personal control, dignity, and responsibility.
The protection of privacy, therefore, safeguards our ability to develop and pursue personal projects in our own way, free from intrusive external forces. Acknowledging other conceptions of privacy while arguing that the fundamental concern of privacy is behavior selection control, Johnson converses with other interpretations, including those of Maxine Wolfe and Robert S. Laufer, and Irwin Altman. He clarifies the continuous relationship between privacy and personal control, where outlined behaviours not only depend on privacy, but the conception of one's privacy also depends on his defined behavioural outcome relationships.[120] Privacy is sometimes defined as an option to have secrecy. Richard Posner said that privacy is the right of people to "conceal information about themselves that others might use to their disadvantage".[121][122] In various legal contexts, when privacy is described as secrecy, a conclusion is reached: if privacy is secrecy, then rights to privacy do not apply to any information which is already publicly disclosed.[123] When privacy-as-secrecy is discussed, it is usually imagined to be a selective kind of secrecy in which individuals keep some information secret and private while they choose to make other information public and not private.[123] Privacy may be understood as a necessary precondition for the development and preservation of personhood.
Jeffrey Reiman defined privacy in terms of a recognition of one's ownership of their physical and mental reality and a moral right to self-determination.[124] Through the "social ritual" of privacy, or the social practice of respecting an individual's privacy barriers, the social group communicates to developing children that they have exclusive moral rights to their bodies—in other words, moral ownership of their body.[124] This entails control over both active (physical) and cognitive appropriation, the former being control over one's movements and actions and the latter being control over who can experience one's physical existence and when.[124] Alternatively, Stanley Benn defined privacy in terms of a recognition of oneself as a subject with agency—as an individual with the capacity to choose.[125] Privacy is required to exercise choice.[125] Overt observation makes the individual aware of himself or herself as an object with a "determinate character" and "limited probabilities."[125] Covert observation, on the other hand, changes the conditions in which the individual is exercising choice without his or her knowledge and consent.[125] In addition, privacy may be viewed as a state that enables autonomy, a concept closely connected to that of personhood.
According to Joseph Kufer, an autonomous self-concept entails a conception of oneself as a "purposeful, self-determining, responsible agent" and an awareness of one's capacity to control the boundary between self and other—that is, to control who can access and experience him or her and to what extent.[126] Furthermore, others must acknowledge and respect the self's boundaries—in other words, they must respect the individual's privacy.[126] The studies of psychologists such as Jean Piaget and Victor Tausk show that, as children learn that they can control who can access and experience them and to what extent, they develop an autonomous self-concept.[126] In addition, studies of adults in particular institutions, such as Erving Goffman's study of "total institutions" such as prisons and mental institutions,[127] suggest that systemic and routinized deprivations or violations of privacy deteriorate one's sense of autonomy over time.[126] Privacy may be understood as a prerequisite for the development of a sense of self-identity. Privacy barriers, in particular, are instrumental in this process. According to Irwin Altman, such barriers "define and limit the boundaries of the self" and thus "serve to help define [the self]."[128] This control primarily entails the ability to regulate contact with others.[128] Control over the "permeability" of the self's boundaries enables one to control what constitutes the self and thus to define what is the self.[128] In addition, privacy may be seen as a state that fosters personal growth, a process integral to the development of self-identity.
Hyman Gross suggested that, without privacy—solitude, anonymity, and temporary releases from social roles—individuals would be unable to freely express themselves and to engage in self-discovery and self-criticism.[126] Such self-discovery and self-criticism contributes to one's understanding of oneself and shapes one's sense of identity.[126] In a way analogous to how the personhood theory imagines privacy as some essential part of being an individual, the intimacy theory imagines privacy to be an essential part of the way that humans form strengthened or intimate relationships with other humans.[129] Because part of human relationships includes individuals volunteering to self-disclose most if not all personal information, this is one area in which privacy does not apply.[129] James Rachels advanced this notion by writing that privacy matters because "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people."[129][130] Protecting intimacy is at the core of the concept of sexual privacy, which law professor Danielle Citron argues should be protected as a unique form of privacy.[131] Physical privacy could be defined as preventing "intrusions into one's physical space or solitude."[132] An example of the legal basis for the right to physical privacy is the U.S. Fourth Amendment, which guarantees "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures".[133] Physical privacy may be a matter of cultural sensitivity, personal dignity, and/or shyness.
There may also be concerns about safety if, for example, one is wary of becoming the victim of crime or stalking.[134] Various measures can protect one's physical privacy, including preventing others from watching (even through recorded images) one's intimate behaviours or intimate parts, and preventing unauthorized access to one's personal possessions or places. Examples of possible efforts used to avoid the former, especially for modesty reasons, are clothes, walls, fences, privacy screens, cathedral glass, window coverings, etc. Government agencies, corporations, groups/societies and other organizations may desire to keep their activities or secrets from being revealed to other organizations or individuals, adopting various security practices and controls in order to keep private information confidential. Organizations may seek legal protection for their secrets. For example, a government administration may be able to invoke executive privilege[135] or declare certain information to be classified, or a corporation might attempt to protect valuable proprietary information as trade secrets.[133] Privacy self-synchronization is a hypothesized mode by which the stakeholders of an enterprise privacy program spontaneously contribute collaboratively to the program's maximum success. The stakeholders may be customers, employees, managers, executives, suppliers, partners or investors. When self-synchronization is reached, the model states that the personal interests of individuals toward their privacy are in balance with the business interests of enterprises who collect and use the personal information of those individuals.[136] David Flaherty believes networked computer databases pose threats to privacy. He develops 'data protection' as an aspect of privacy, which involves "the collection, use, and dissemination of personal information". This concept forms the foundation for fair information practices used by governments globally.
Flaherty forwards an idea of privacy as information control: "[i]ndividuals want to be left alone and to exercise some control over how information about them is used".[137] Richard Posner and Lawrence Lessig focus on the economic aspects of personal information control. Posner criticizes privacy for concealing information, which reduces market efficiency. For Posner, employment is selling oneself in the labour market, which he believes is like selling a product. Any 'defect' in the 'product' that is not reported is fraud.[138] For Lessig, privacy breaches online can be regulated through code and law. Lessig claims "the protection of privacy would be stronger if people conceived of the right as a property right",[139] and that "individuals should be able to control information about themselves".[140] There have been attempts to establish privacy as one of the fundamental human rights, whose social value is an essential component in the functioning of democratic societies.[141] Priscilla Regan believes that individual concepts of privacy have failed philosophically and in policy. She supports a social value of privacy with three dimensions: shared perceptions, public values, and collective components. Shared ideas about privacy allow freedom of conscience and diversity in thought; public values guarantee democratic participation, including freedoms of speech and association, and limit government power; and collective elements describe privacy as a collective good that cannot be divided. Regan's goal is to strengthen privacy claims in policy making: "if we did recognize the collective or public-good value of privacy, as well as the common and public value of privacy, those advocating privacy protections would have a stronger basis upon which to argue for its protection".[142] Leslie Regan Shade argues that the human right to privacy is necessary for meaningful democratic participation, and ensures human dignity and autonomy.
Privacy depends on norms for how information is distributed, and on whether this is appropriate; violations of privacy depend on context. The human right to privacy has precedent in the United Nations Declaration of Human Rights: "Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers."[143] Shade believes that privacy must be approached from a people-centered perspective, and not through the marketplace.[144] Eliza Watt of Westminster Law School, University of Westminster in London, UK, proposes applying the International Human Rights Law (IHRL) concept of "virtual control" as an approach to deal with extraterritorial mass surveillance by state intelligence agencies. Watt envisions the "virtual control" test, understood as remote control over the individual's right to privacy of communications, where privacy is recognized under Article 17 of the ICCPR. This, she contends, may help to close the normative gap that is being exploited by nation states.[145] The privacy paradox is a phenomenon in which online users state that they are concerned about their privacy but behave as if they were not.[146] While this term was coined as early as 1998,[147] it was not used in its current popular sense until the year 2000.[148][146] Susan B. Barnes similarly used the term privacy paradox to refer to the ambiguous boundary between private and public space on social media.[149] When compared to adults, young people tend to disclose more information on social media. However, this does not mean that they are not concerned about their privacy. Susan B. Barnes gave a case in her article: in a television interview about Facebook, a student addressed her concerns about disclosing personal information online.
However, when the reporter asked to see her Facebook page, she had put her home address, phone numbers, and pictures of her young son on the page. The privacy paradox has been studied and scripted in different research settings. Several studies have shown this inconsistency between privacy attitudes and behavior among online users.[150] However, by now an increasing number of studies have also shown that there are significant and at times large correlations between privacy concerns and information sharing behavior,[151] which speaks against the privacy paradox. A meta-analysis of 166 studies published on the topic reported an overall small but significant relation between privacy concerns and information sharing or use of privacy protection measures.[152] So although there are several individual instances or anecdotes where behavior appears paradoxical, on average privacy concerns and privacy behaviors seem to be related, and several findings question the general existence of the privacy paradox.[153] However, the relationship between concerns and behavior is likely only small, and there are several arguments that can explain why that is the case. According to the attitude-behavior gap, attitudes and behaviors are in general and in most cases not closely related.[154] A main explanation for the partial mismatch in the context of privacy specifically is that users lack awareness of the risks and the degree of protection.[155] Users may underestimate the harm of disclosing information online.[28] On the other hand, some researchers argue that the mismatch comes from lack of technology literacy and from the design of sites.[156] For example, users may not know how to change their default settings even though they care about their privacy. Psychologists Sonja Utz and Nicole C. Krämer particularly pointed out that the privacy paradox can occur when users must trade off between their privacy concerns and impression management.[157] A study conducted by Susanne Barth and Menno D.T.
de Jong demonstrates that decision making takes place on an irrational level, especially when it comes to mobile computing. Mobile applications in particular are often built in a way that spurs fast, automatic decision making without assessment of risk factors. Protection measures against these unconscious mechanisms are often difficult to access while downloading and installing apps. Even with mechanisms in place to protect user privacy, users may not have the knowledge or experience to enable these mechanisms.[158] Users of mobile applications generally have very little knowledge of how their personal data are used. When they decide which application to download, they typically are not able to effectively interpret the information provided by application vendors regarding the collection and use of personal data.[159] Other research finds that this lack of interpretability means users are much more likely to be swayed by cost, functionality, design, ratings, reviews and number of downloads than by requested permissions for usage of their personal data.[160] The willingness to incur a privacy risk is suspected to be driven by a complex array of factors including risk attitudes, personal value for private information, and general attitudes to privacy (which are typically measured using surveys).[161] One experiment aiming to determine the monetary value of several types of personal information indicated relatively low evaluations of personal information.[159] Despite claims that ascertaining the value of data requires a "stock market for personal information",[162] surveillance capitalism and the mass surveillance industry regularly place price tags on this form of data as it is shared between corporations and governments.
Users are not always given the tools to live up to their professed privacy concerns, and they are sometimes willing to trade private information for convenience, functionality, or financial gain, even when the gains are very small.[163] One study suggests that people think their browser history is worth the equivalent of a cheap meal.[164] Another finds that attitudes to privacy risk do not appear to depend on whether it is already under threat or not.[161] The methodology of user empowerment describes how to provide users with sufficient context to make privacy-informed decisions. It is suggested by Andréa Belliger and David J. Krieger that the privacy paradox should not be considered a paradox, but more of a privacy dilemma, for services that cannot exist without the user sharing private data.[164] However, the general public is typically not given the choice whether to share private data or not,[19][56] making it difficult to verify any claim that a service truly cannot exist without sharing private data. The privacy calculus model posits that two factors determine privacy behavior, namely privacy concerns (or perceived risks) and expected benefits.[165][166] By now, the privacy calculus has been supported by several studies.[167][168] As with other conceptions of privacy, there are various ways to discuss what kinds of processes or actions remove, challenge, lessen, or attack privacy. In 1960 legal scholar William Prosser created the following list of activities which can be remedied with privacy protection:[169][170] From 2004 to 2008, building from this and other historical precedents, Daniel J.
Solove presented another classification of actions which are harmful to privacy, including collection of information which is already somewhat public, processing of information, sharing information, and invading personal space to get private information.[171] In the context of harming privacy, information collection means gathering whatever information can be obtained by doing something to obtain it.[171] Examples include surveillance and interrogation.[171] Another example is facial recognition, through which consumers and marketers collect information in the business context; this practice has recently raised privacy concerns and remains the subject of ongoing research.[172] Companies like Google and Meta collect vast amounts of personal data from their users through various services and platforms. This data includes browsing habits, search history, location information, and even personal communications. These companies then analyze and aggregate this data to create detailed user profiles, which are sold to advertisers and other third parties. This practice is often done without explicit user consent, leading to an invasion of privacy as individuals have little control over how their information is used. The sale of personal data can result in targeted advertising, manipulation, and even potential security risks, as sensitive information can be exploited by malicious actors.
This commercial exploitation of personal data undermines user trust and raises significant ethical and legal concerns regarding data protection and privacy rights.[173] Privacy may not be harmed when a piece of information is available on its own; the harm can come when that information is collected as a set, then processed together in such a way that the collective reporting of pieces of information encroaches on privacy.[174] Actions in this category which can lessen privacy include the following:[174] Count not him among your friends who will retail your privacies to the world. Information dissemination is an attack on privacy when information which was shared in confidence is shared or threatened to be shared in a way that harms the subject of the information.[174] There are various examples of this.[174] Breach of confidentiality is when one entity promises to keep a person's information private, then breaks that promise.[174] Disclosure is making information about a person more accessible in a way that harms the subject of the information, regardless of how the information was collected or the intent of making it available.[174] Exposure is a special type of disclosure in which the information disclosed is emotional to the subject or taboo to share, such as revealing their private life experiences, their nudity, or perhaps private body functions.[174] Increased accessibility means advertising the availability of information without actually distributing it, as in the case of doxing.[174] Blackmail is making a threat to share information, perhaps as part of an effort to coerce someone.[174] Appropriation is an attack on the personhood of someone, and can include using the value of someone's reputation or likeness to advance interests which are not those of the person being appropriated.[174] Distortion is the creation of misleading information or lies about a person.[174] Invasion of privacy, a subset of expectation of privacy, is a different concept from the
collecting, aggregating, and disseminating of information, because those three are a misuse of available data, whereas invasion is an attack on the right of individuals to keep personal secrets.[174] An invasion is an attack in which information, whether intended to be public or not, is captured in a way that insults the personal dignity and right to private space of the person whose data is taken.[174] An intrusion is any unwanted entry into a person's private personal space and solitude for any reason, regardless of whether data is taken during that breach of space.[174] Decisional interference is when an entity somehow injects itself into the personal decision-making process of another person, perhaps to influence that person's private decisions but in any case doing so in a way that disrupts the private personal thoughts that a person has.[174] Similarly to actions which reduce privacy, there are multiple angles of privacy and multiple techniques to improve them to varying extents. When actions are taken at an organizational level, they may be referred to as cybersecurity. Individuals can encrypt e-mails by enabling one of two encryption protocols: S/MIME, which is built into e-mail clients such as Apple Mail and Outlook and is thus most common, or PGP.[175] The Signal messaging app, which encrypts messages so that only the recipient can read them, is notable for being available on many mobile devices and for implementing a form of perfect forward secrecy.[176] Signal has received praise from whistleblower Edward Snowden.[177] Encryption and other privacy-based security measures are also used in some cryptocurrencies such as Monero and ZCash.[178][179] Anonymizing proxies or anonymizing networks like I2P and Tor can be used to prevent Internet service providers (ISPs) from knowing which sites one visits and with whom one communicates, by hiding IP addresses and location, but do not necessarily protect a user from third-party data mining.
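As a toy illustration of the principle behind the encrypted messaging described above (only the holder of the key can read the message), here is a one-time pad in Python. This is only a minimal sketch of symmetric encryption with a shared secret; it is not how S/MIME, PGP, or Signal work internally (those use public-key cryptography and, in Signal's case, the Double Ratchet protocol).

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each message byte with a key byte (Vernam cipher / one-time pad)."""
    # The key must be random, secret, as long as the message, and never reused.
    assert len(key) == len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))   # shared with the recipient out-of-band
ciphertext = otp_encrypt(message, key)    # unreadable without the key
recovered = otp_encrypt(ciphertext, key)  # XOR is its own inverse
```

Anyone intercepting `ciphertext` without `key` learns nothing but the message length, which is the confidentiality property the encrypted-mail and messaging tools above provide with far more practical key management.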
Anonymizing proxies are built into a user's device, whereas for a Virtual Private Network (VPN) users must download software.[180] Using a VPN hides all data and connections that are exchanged between servers and a user's computer, resulting in the online data of the user being unshared and secure, providing a barrier between the user and their ISP; this is especially important when a user is connected to public Wi-Fi. However, users should understand that all their data then flows through the VPN's servers rather than the ISP's. Users should decide for themselves whether to use an anonymizing proxy or a VPN. In a less technical sense, using incognito mode or private browsing mode will prevent a user's computer from saving history, Internet files, and cookies, but the ISP will still have access to the user's search history. Using anonymous search engines will not share a user's history or clicks, and will obstruct ad trackers.[181] Concrete solutions for resolving paradoxical behavior still do not exist. Many efforts are focused on processes of decision making, like restricting data access permissions during application installation, but this would not completely bridge the gap between user intention and behavior. Susanne Barth and Menno D.T. de Jong believe that for users to make more conscious decisions on privacy matters, the design needs to be more user-oriented.[158] That being said, delivering on privacy protections is difficult due to the complexity of online consent processes, for example.[182] In a social sense, simply limiting the amount of personal information that users post on social media could increase their security, which in turn makes it harder for criminals to perform identity theft.[181] Moreover, creating a set of complex passwords and using two-factor authentication can make users less susceptible to their accounts being compromised when various data leaks occur.
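The two-factor authentication mentioned above commonly uses time-based one-time passwords (TOTP, RFC 6238): the server and the user's authenticator app derive the same short-lived code from a shared secret, so a stolen password alone is not enough. A minimal standard-library sketch:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-based one-time password (RFC 6238): HMAC-SHA1 over a time counter."""
    counter = int((time.time() if now is None else now) // timestep)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 SHA-1 test secret and time T=59 reproduce the published vector
# (94287082, here truncated to the usual 6 digits).
code = totp(b"12345678901234567890", now=59)  # -> "287082"
```

Because the code changes every 30 seconds, a leaked database of passwords does not by itself grant account access, which is the protection against data leaks the paragraph above refers to.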
Furthermore, users should protect their digital privacy by using anti-virus software, which can block harmful software such as pop-ups that scan for personal information on a user's computer.[183] Although there are laws that promote the protection of users, in some countries, like the U.S., there is no federal digital privacy law, and privacy settings are essentially limited by the state of currently enacted privacy laws. To further their privacy, users can start conversing with representatives, letting representatives know that privacy is a main concern, which in turn increases the likelihood of further privacy laws being enacted.[184] David Attenborough, a biologist and natural historian, affirmed that gorillas "value their privacy" while discussing a brief escape by a gorilla in London Zoo.[185] Lack of privacy in public spaces, caused by overcrowding, increases health issues in animals, including heart disease and high blood pressure. Also, the stress from overcrowding is connected to an increase in infant mortality rates and maternal stress. The lack of privacy that comes with overcrowding is connected to other issues in animals, causing their relationships with others to diminish. How they present themselves to others of their species is a necessity in their life, and overcrowding causes these relationships to become disordered.[186] For example, David Attenborough claims that gorillas' right to privacy is violated when they are looked at through glass enclosures. They are aware that they are being looked at, and therefore do not have control over how much the onlookers can see of them. Gorillas and other animals may be in the enclosures for safety reasons; however, Attenborough states that this is not an excuse for them to be constantly watched by unnecessary eyes. Also, animals will start hiding in unobserved spaces.[186] Animals in zoos have been found to exhibit harmful or different behaviours due to the presence of visitors watching them:[187]
https://en.wikipedia.org/wiki/Privacy
Privacy-enhancing technologies (PET) are technologies that embody fundamental data protection principles by minimizing personal data use, maximizing data security, and empowering individuals. PETs allow online users to protect the privacy of their personally identifiable information (PII), which is often provided to and handled by services or applications. PETs use techniques to minimize an information system's possession of personal data without losing functionality.[1] Generally speaking, PETs can be categorized as either hard or soft privacy technologies.[2] The objective of PETs is to protect personal data and assure technology users of two key privacy points: their own information is kept confidential, and management of data protection is a priority to the organizations who hold responsibility for any PII. PETs allow users to take one or more of the following actions related to personal data that is sent to and used by online service providers, merchants or other users (this control is known as self-determination). PETs aim to minimize personal data collected and used by service providers and merchants, use pseudonyms or anonymous data credentials to provide anonymity, and strive to achieve informed consent about giving personal data to online service providers and merchants.[3] In privacy negotiations, consumers and service providers establish, maintain, and refine privacy policies as individualized agreements through the ongoing choice among service alternatives, therefore providing the possibility to negotiate the terms and conditions of giving personal data to online service providers and merchants (data handling/privacy policy negotiation).
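The pseudonyms mentioned above can be sketched with a keyed hash: each record carries a stable pseudonym that supports linkage across records while the raw identifier is withheld. The key and field names below are illustrative assumptions, not part of any particular PET standard.

```python
import hashlib
import hmac

# Assumed: a secret key held only by the data controller. Using an HMAC
# rather than a plain hash prevents dictionary attacks by anyone without it.
PSEUDONYM_KEY = b"controller-held secret key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    mac = hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256)
    return mac.hexdigest()[:16]

record = {"user": pseudonymize("alice@example.com"), "purchase": "book"}
# The same identifier always maps to the same pseudonym, so records stay
# linkable for the service without exposing the e-mail address itself.
same_user = pseudonymize("alice@example.com") == record["user"]
```

Note that pseudonymization is weaker than anonymization: the controller can still re-identify users with the key, which is why the harder PETs below assume no trusted third party at all.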
Within private negotiations, the transaction partners may additionally bundle the personal information collection and processing schemes with monetary or non-monetary rewards.[4] PETs provide the possibility to remotely audit the enforcement of these terms and conditions at the online service providers and merchants (assurance), allow users to log, archive and look up past transfers of their personal data, including what data has been transferred, when, to whom and under what conditions, and facilitate the use of their legal rights of data inspection, correction and deletion. PETs also provide the opportunity for consumers or people who want privacy protection to hide their personal identities. The process involves masking one's personal information and replacing that information with pseudo-data or an anonymous identity. Privacy-enhancing technologies can be distinguished based on their assumptions.[2] Soft privacy technologies are used where it can be assumed that a third party can be trusted for the processing of data. This model is based on compliance, consent, control and auditing.[2] Example technologies are access control, differential privacy, and tunnel encryption (SSL/TLS). An example of soft privacy technologies is increased transparency and access. Transparency involves granting people sufficient details about the rationale used in automated decision-making processes. Additionally, the effort to grant users access is considered soft privacy technology. Individuals are usually unaware of their right of access or they face difficulties in access, such as a lack of a clear automated process.[5] With hard privacy technologies, no single entity can violate the privacy of the user. The assumption here is that third parties cannot be trusted. Data protection goals include data minimization and the reduction of trust in third parties.[2] Examples of such technologies include onion routing, the secret ballot, and VPNs[6] used for democratic elections.
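Differential privacy, listed above among the soft privacy technologies, can be illustrated with the Laplace mechanism: calibrated random noise is added to an aggregate query so that the presence or absence of any single individual barely changes the released answer. The dataset, query, and epsilon below are illustrative.

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, b) sample is the difference of two Exponential(1/b) samples.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon: float = 1.0) -> float:
    """Counting query (sensitivity 1) released under epsilon-differential privacy.
    Smaller epsilon means more noise and stronger privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 61, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)  # true count is 3
```

The analyst learns roughly how many people are 40 or older, but cannot confidently infer whether any particular person is in the dataset, which is the trust model the "soft" category assumes.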
PETs have evolved since their first appearance in the 1980s, and at intervals review articles have been published on the state of privacy technology. PETs for privacy-preserving data processing facilitate data processing or the production of statistics while preserving the privacy of the individuals providing the raw data, or of specific raw data elements; PETs for privacy-preserving data analytics are a subset of these that are specifically designed for the publishing of statistical data. Examples of privacy-enhancing technologies that are being researched or developed include[20] limited disclosure technology, anonymous credentials, negotiation and enforcement of data handling conditions, and data transaction logs. Limited disclosure technology provides a way of protecting individuals' privacy by allowing them to share only enough personal information with service providers to complete an interaction or transaction. This technology is also designed to limit tracking and correlation of users' interactions with these third parties. Limited disclosure uses cryptographic techniques and allows users to retrieve data that is vetted by a provider, to transmit that data to a relying party, and to have these relying parties trust the authenticity and integrity of the data.[21] Anonymous credentials are asserted properties or rights of the credential holder that don't reveal the true identity of the holder; the only information revealed is what the holder of the credential is willing to disclose. The assertion can be issued by the user himself/herself, by the provider of the online service or by a third party (another service provider, a government agency, etc.). For example: online car rental.
The car rental agency doesn't need to know the true identity of the customer. It only needs to make sure that the customer is over 23 (as an example), that the customer has a driver's license and health insurance (i.e. for accidents, etc.), and that the customer is paying. Thus there is no real need to know the customer's name, address, or any other personal information. Anonymous credentials allow both parties to be comfortable: the customer reveals only as much data as the car rental agency needs to provide its service (data minimization), and the car rental agency can still verify its requirements and get its money. When ordering a car online, the user, instead of providing the classical name, address and credit card number, provides credentials issued to pseudonyms (i.e. not to the real name of the customer). Negotiation and enforcement of data handling conditions: before ordering a product or service online, the user and the online service provider or merchant negotiate the type of personal data that is to be transferred to the service provider. This includes the conditions that shall apply to the handling of the personal data, such as whether or not it may be sent to third parties (profile selling) and under what conditions (e.g. only while informing the user), or at what time in the future it shall be deleted (if at all). After the transfer of personal data has taken place, the agreed-upon data handling conditions are technically enforced by the infrastructure of the service provider, which is capable of managing and processing data handling obligations. Moreover, this enforcement can be remotely audited by the user, for example by verifying chains of certification based on trusted computing modules or by verifying privacy seals/labels issued by third-party auditing organizations (e.g. data protection agencies).
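The pseudonymous-credential idea in the car rental example can be sketched in miniature. Real anonymous credential systems rely on blind signatures and zero-knowledge proofs; the toy code below (all names are hypothetical, and HMAC stands in for a real signature scheme) only illustrates the data-minimization pattern: an issuer attests to attributes of a pseudonym, and a relying party verifies the attestation without learning the customer's identity.

```python
import hashlib
import hmac
import json

def pseudonym(user_secret: bytes, relying_party: str) -> str:
    """Derive a per-service pseudonym: stable for one service,
    unlinkable across services without the user's secret."""
    return hmac.new(user_secret, relying_party.encode(), hashlib.sha256).hexdigest()[:16]

def issue_credential(issuer_key: bytes, holder: str, claims: dict) -> dict:
    """Toy issuer: attests claims (e.g. age_over_23) about a pseudonym.
    Real systems use public-key or blind signatures, so the verifier
    need not share the issuer's secret key as it does here."""
    payload = json.dumps({"holder": holder, "claims": claims}, sort_keys=True)
    tag = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}

def verify_credential(issuer_key: bytes, cred: dict) -> bool:
    """Relying party checks authenticity and integrity of the claims."""
    expected = hmac.new(issuer_key, cred["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"])
```

In the rental scenario, the agency would verify a credential asserting only `{"age_over_23": true, "has_license": true}` for the customer's pseudonym, never seeing a name or address.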
Thus, instead of having to rely on the mere promises of service providers not to abuse personal data, users can be more confident that the service provider adheres to the negotiated data handling conditions.[22] Lastly, the data transaction log allows users to log the personal data they send to service providers, when they send it, and under what conditions. These logs are stored and allow users to determine what data they have sent to whom, or to establish what data a specific service provider holds. This leads to more transparency, which is a prerequisite for being in control.
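The data transaction log described above can be sketched as a simple client-side structure (the class and field names are my own invention, not a standardized format):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TransferRecord:
    """One logged transfer of personal data to a service provider."""
    recipient: str
    data_items: list
    conditions: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DataTransactionLog:
    """User-side log: what was sent, to whom, when, under what conditions."""

    def __init__(self):
        self._records = []

    def record(self, recipient: str, data_items, conditions: str) -> TransferRecord:
        rec = TransferRecord(recipient, list(data_items), conditions)
        self._records.append(rec)
        return rec

    def sent_to(self, recipient: str) -> list:
        """Which data items a given provider holds, according to the log."""
        items = set()
        for rec in self._records:
            if rec.recipient == recipient:
                items.update(rec.data_items)
        return sorted(items)
```

A user could query `sent_to("shop.example")` to see exactly which personal data that merchant has received, supporting later requests for inspection, correction, or deletion.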
https://en.wikipedia.org/wiki/Privacy_enhancing_technologies
Web literacy refers to the skills and competencies needed for reading, writing, and participating on the web.[1] It has been described as "both content and activity", meaning that web users should not just learn about the web but also how to make their own website.[2] In the late 1990s, literacy researchers began to explore the differences between printed text and network-enabled devices with screens. This research was largely focused on two areas: the credibility of information that can be found on the World Wide Web[3] and the difference that hypertext makes to reading and writing.[4] These skills were included in definitions of information literacy and in a SCONUL position paper in 1999.[5] This paper became the '7 Pillars of Information Literacy', which was last updated in 2011.[6] The Mozilla Foundation is a non-profit organization that aims to promote openness, innovation, and participation on the Internet. It has created a Web Literacy Map[1] in consultation with a community of stakeholders from formal and informal education, as well as industry.[7][1] Work on what was originally entitled a Web Literacy 'Standard' began in early 2013. Version 1.0 was launched at the Mozilla Festival later that year.[8] Going forward, 'standard' was seen as problematic and against the ethos of what the Mozilla community was trying to achieve.[9] Version 1.1 of the Web Literacy Map was released in early 2014[10] and underpins the Mozilla Foundation's Webmaker resources section, where learners and mentors can find activities that help teach related areas. Although the Web Literacy Map is a list of strands, skills, and competencies, it is most commonly represented as a competency grid. The Mozilla community finalized version 1.5 of the Web Literacy Map at the end of March 2015.[11] This involved small changes to the competencies layer and a comprehensive review of the skills they contain.[12] Its strands cover navigating the web, creating the web, and participating on the web.
https://en.wikipedia.org/wiki/Web_literacy
Information and communications technology (ICT) is an extensional term for information technology (IT) that stresses the role of unified communications[1] and the integration of telecommunications (telephone lines and wireless signals) and computers, as well as the necessary enterprise software, middleware, storage, and audiovisual systems, that enable users to access, store, transmit, understand and manipulate information. ICT is also used to refer to the convergence of audiovisual and telephone networks with computer networks through a single cabling or link system. There are large economic incentives to merge the telephone networks with the computer network system using a single unified system of cabling, signal distribution, and management. ICT is an umbrella term that includes any communication device, encompassing radio, television, cell phones, computer and network hardware, and satellite systems, as well as the various services and appliances associated with them, such as video conferencing and distance learning. ICT also includes analog technology, such as paper communication, and any mode that transmits communication.[2] ICT is a broad subject and the concepts are evolving.[3] It covers any product that will store, retrieve, manipulate, process, transmit, or receive information electronically in a digital form (e.g., personal computers including smartphones, digital television, email, or robots).
Skills Framework for the Information Age is one of many models for describing and managing competencies for ICT professionals in the 21st century.[4] The phrase "information and communication technologies" has been used by academic researchers since the 1980s.[5] The abbreviation "ICT" became popular after it was used in a report to the UK government by Dennis Stevenson in 1997,[6] and then in the revised National Curriculum for England, Wales and Northern Ireland in 2000. However, in 2012, the Royal Society recommended that the use of the term "ICT" should be discontinued in British schools "as it has attracted too many negative connotations".[7] From 2014, the National Curriculum has used the word computing, which reflects the addition of computer programming to the curriculum.[8] Variations of the phrase have spread worldwide. The United Nations has created a "United Nations Information and Communication Technologies Task Force" and an internal "Office of Information and Communications Technology".[9] The money spent on IT worldwide has been estimated at US$3.8 trillion[10] in 2017 and has been growing at less than 5% per year since 2009. The estimated 2018 growth of the entire ICT sector is 5%. The biggest growth, of 16%, is expected in the area of new technologies (IoT, robotics, AR/VR, and AI).[11] The 2014 IT budget of the US federal government was nearly $82 billion.[12] IT costs, as a percentage of corporate revenue, have grown 50% since 2002, putting a strain on IT budgets.
When looking at current companies' IT budgets, 75% are recurrent costs, used to "keep the lights on" in the IT department, and 25% are the cost of new initiatives for technology development.[13] The average IT budget has the following breakdown:[13] The estimated amount of money spent on IT in 2022 is just over US$6 trillion.[14] The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 to 15.8 in 1993, over 54.5 in 2000, 295 (optimally compressed) exabytes in 2007, and some 5 zettabytes in 2014.[15][16] This is the informational equivalent of 1.25 stacks of CD-ROMs from the Earth to the Moon in 2007, and of 4,500 stacks of printed books from the Earth to the Sun in 2014. The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986, 715 (optimally compressed) exabytes in 1993, 1.2 (optimally compressed) zettabytes in 2000, and 1.9 zettabytes in 2007.[15] The world's effective capacity to exchange information through two-way telecommunication networks was 281 petabytes of (optimally compressed) information in 1986, 471 petabytes in 1993, 2.2 (optimally compressed) exabytes in 2000, 65 (optimally compressed) exabytes in 2007,[15] and some 100 exabytes in 2014.[17] The world's technological capacity to compute information with humanly guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986 to 6.4 × 10^12 MIPS in 2007.[15] The following is a list of OECD countries by share of the ICT sector in total value added in 2013.[18] The ICT Development Index ranks and compares the level of ICT use and access across countries around the world.[19] In 2014 the ITU (International Telecommunication Union) released the latest rankings of the IDI, with Denmark attaining the top spot, followed by South Korea.
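The storage and computing figures above imply steep compound annual growth rates, which can be checked directly (a quick back-of-the-envelope calculation on the cited numbers, not a figure from the reports themselves):

```python
import math

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# Storage capacity: 2.6 EB (1986) -> 295 EB (2007), roughly 25% per year.
storage_growth = cagr(2.6, 295, 2007 - 1986)

# General-purpose computing: 3.0e8 MIPS (1986) -> 6.4e12 MIPS (2007),
# roughly 61% per year.
compute_growth = cagr(3.0e8, 6.4e12, 2007 - 1986)
```

The contrast between the two rates reflects how computing capacity grew even faster than storage over the same two decades.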
The top 30 countries in the rankings include most high-income countries where the quality of life is higher than average, including countries from Europe and other regions such as "Australia, Bahrain, Canada, Japan, Macao (China), New Zealand, Singapore, and the United States; almost all countries surveyed improved their IDI ranking this year."[20] On 21 December 2001, the United Nations General Assembly approved Resolution 56/183, endorsing the holding of the World Summit on the Information Society (WSIS) to discuss the opportunities and challenges facing today's information society.[21] According to this resolution, the General Assembly related the Summit to the United Nations Millennium Declaration's goal of implementing ICT to achieve the Millennium Development Goals. It also emphasized a multi-stakeholder approach to achieve these goals, using all stakeholders including civil society and the private sector, in addition to governments. To help anchor and expand ICT to every habitable part of the world, "2015 is the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000."[22] There is evidence that, to be effective in education, ICT must be fully integrated into the pedagogy. Specifically, when teaching literacy and math, using ICT in combination with Writing to Learn[23][24] produces better results than traditional methods alone or ICT alone.[25] The United Nations Educational, Scientific and Cultural Organization (UNESCO), a division of the United Nations, has made integrating ICT into education part of its efforts to ensure equity and access to education. The following, taken directly from a UNESCO publication on educational ICT, explains the organization's position on the initiative.
Information and Communication Technology can contribute to universal access to education, equity in education, the delivery of quality learning and teaching, teachers' professional development and more efficient education management, governance, and administration. UNESCO takes a holistic and comprehensive approach to promote ICT in education. Access, inclusion, and quality are among the main challenges they can address. The Organization's Intersectoral Platform for ICT in education focuses on these issues through the joint work of three of its sectors: Communication & Information, Education and Science.[26] Despite the power of computers to enhance and reform teaching and learning practices, improper implementation is a widespread issue beyond the reach of increased funding and technological advances, and there is little evidence that teachers and tutors are properly integrating ICT into everyday learning.[27] Intrinsic barriers, such as a belief in more traditional teaching practices and individual attitudes towards computers in education, as well as teachers' own comfort with computers and their ability to use them, all result in varying effectiveness in the integration of ICT in the classroom.[28] School environments play an important role in facilitating language learning. However, language and literacy barriers are obstacles preventing refugees from accessing and attending school, especially outside camp settings.[29] Mobile-assisted language learning apps are key tools for language learning. Mobile solutions can provide support for refugees' language and literacy challenges in three main areas: literacy development, foreign language learning, and translation. Mobile technology is relevant because communicative practice is a key asset for refugees and immigrants as they immerse themselves in a new language and a new society.
Well-designed mobile language learning activities connect refugees with mainstream cultures, helping them learn in authentic contexts.[29] ICT has been employed as an educational enhancement inSub-Saharan Africasince the 1960s. Beginning with television and radio, it extended the reach of education from the classroom to the living room, and to geographical areas that had been beyond the reach of the traditional classroom. As the technology evolved and became more widely used, efforts in Sub-Saharan Africa were also expanded. In the 1990s a massive effort to push computer hardware and software into schools was undertaken, with the goal of familiarizing both students and teachers with computers in the classroom. Since then, multiple projects have endeavoured to continue the expansion of ICT's reach in the region, including theOne Laptop Per Child(OLPC) project, which by 2015 had distributed over 2.4 million laptops to nearly two million students and teachers.[30] The inclusion of ICT in the classroom, often referred to asM-Learning, has expanded the reach of educators and improved their ability to track student progress in Sub-Saharan Africa. In particular, the mobile phone has been most important in this effort. Mobile phone use is widespread, and mobile networks cover a wider area than internet networks in the region. The devices are familiar to student, teacher, and parent, and allow increased communication and access to educational materials. In addition to benefits for students, M-learning also offers the opportunity for better teacher training, which leads to a more consistent curriculum across the educational service area. In 2011, UNESCO started a yearly symposium called Mobile Learning Week with the purpose of gathering stakeholders to discuss the M-learning initiative.[30] Implementation is not without its challenges. 
While mobile phone and internet use are increasing much more rapidly in Sub-Saharan Africa than in other developing countries, progress is still slow compared to the rest of the developed world, with smartphone penetration only expected to reach 20% by 2017.[30] Additionally, there are gender, social, and geo-political barriers to educational access, and the severity of these barriers varies greatly by country. Overall, 29.6 million children in Sub-Saharan Africa were not in school in the year 2012, owing not just to the geographical divide, but also to political instability, the importance of social origins, social structure, and gender inequality. Once in school, students also face barriers to quality education, such as teacher competency, training and preparedness, access to educational materials, and lack of information management.[30] In modern society, ICT is ever-present, with over three billion people having access to the Internet.[31] With approximately 8 out of 10 Internet users owning a smartphone, information and data are increasing by leaps and bounds.[32] This rapid growth, especially in developing countries, has led ICT to become a keystone of everyday life, in which life without some facet of technology renders most clerical, work, and routine tasks dysfunctional. The most recent authoritative data, released in 2014, shows "that Internet use continues to grow steadily, at 6.6% globally in 2014 (3.3% in developed countries, 8.7% in the developing world); the number of Internet users in developing countries has doubled in five years (2009–2014), with two-thirds of all people online now living in the developing world."[20] However, hurdles are still large. "Of the 4.3 billion people not yet using the Internet, 90% live in developing countries.
In the world's 42 Least Connected Countries (LCCs), which are home to 2.5 billion people, access to ICTs remains largely out of reach, particularly for these countries' large rural populations."[33] ICT has yet to penetrate the remote areas of some countries, with many developing countries lacking any type of Internet access. This also includes the availability of telephone lines, particularly of cellular coverage, and other forms of electronic transmission of data. The latest "Measuring the Information Society Report" cautiously stated that the increase in the aforementioned cellular data coverage is partly ostensible, as "many users have multiple subscriptions, with global growth figures sometimes translating into little real improvement in the level of connectivity of those at the very bottom of the pyramid; an estimated 450 million people worldwide live in places which are still out of reach of mobile cellular service."[31] Favourably, the gap between access to the Internet and mobile coverage has decreased substantially in the last fifteen years, in which "2015 was the deadline for achievements of the UN Millennium Development Goals (MDGs), which global leaders agreed upon in the year 2000, and the new data show ICT progress and highlight remaining gaps."[22] ICT continues to take on new forms, with nanotechnology set to usher in a new wave of ICT electronics and gadgets. ICT's newest additions to the modern electronic world include smartwatches such as the Apple Watch, smart wristbands such as the Nike+ FuelBand, and smart TVs such as Google TV. With desktops soon becoming part of a bygone era and laptops becoming the preferred method of computing, ICT continues to insinuate itself into, and alter, the ever-changing globe. Information communication technologies play a role in facilitating accelerated pluralism in new social movements today.
The internet, according to Bruce Bimber, is "accelerating the process of issue group formation and action";[34] he coined the term accelerated pluralism to explain this new phenomenon. ICTs are tools for "enabling social movement leaders and empowering dictators",[35] in effect promoting societal change. ICTs can be used to garner grassroots support for a cause, because the internet allows for political discourse and direct interventions with state policy,[36] and they can change the way complaints from the populace are handled by governments. Furthermore, ICTs in a household are associated with women rejecting justifications for intimate partner violence. According to a study published in 2017, this is likely because "access to ICTs exposes women to different ways of life and different notions about women's role in society and the household, especially in culturally conservative regions where traditional gender expectations contrast observed alternatives."[37] Applications of ICTs in science, research and development, and academia include: Scholar Mark Warschauer defines a "models of access" framework for analyzing ICT accessibility.
In the second chapter of his book, Technology and Social Inclusion: Rethinking the Digital Divide, he describes three models of access to ICTs: devices, conduits, and literacy.[40] Devices and conduits are the most common descriptors of access to ICTs, but they are insufficient for meaningful access without the third model, literacy.[40] Combined, these three models roughly incorporate all twelve criteria of "Real Access" to ICT use, as conceptualized by the non-profit organization Bridges.org in 2005:[41] The most straightforward model of access for ICT in Mark Warschauer's theory is devices.[40] In this model, access is defined most simply as the ownership of a device such as a phone or computer.[40] Warschauer identifies many flaws with this model, including its inability to account for additional costs of ownership such as software, access to telecommunications, knowledge gaps surrounding computer use, and the role of government regulation in some countries.[40] Therefore, Warschauer argues that considering only devices understates the magnitude of digital inequality. For example, the Pew Research Center notes that 96% of Americans own a smartphone,[42] although most scholars in this field would contend that comprehensive access to ICT in the United States is likely much lower than that. A conduit requires a connection to a supply line, which for ICT could be a telephone line or Internet line. Accessing the supply requires investment in the proper infrastructure from a commercial company or local government, and recurring payments from the user once the line is set up. For this reason, conduits usually divide people based on their geographic location. As a Pew Research Center poll reports, Americans in rural areas are 12% less likely to have broadband access than other Americans, thereby making them less likely to own the devices.[43] Additionally, these costs can be prohibitive for lower-income families seeking access to ICTs.
These difficulties have led to a shift toward mobile technology; fewer people are purchasing broadband connections and are instead relying on their smartphones for Internet access, which can be found for free at public places such as libraries.[44] Indeed, smartphones are on the rise, with 37% of Americans using smartphones as their primary medium for internet access[44] and 96% of Americans owning a smartphone.[42] In 1981, Sylvia Scribner and Michael Cole studied a tribe in Liberia, the Vai people, who have their own local script. Since about half of those literate in Vai have never had formal schooling, Scribner and Cole were able to test more than 1,000 subjects to measure the mental capabilities of literates over non-literates.[45] This research, which they laid out in their book The Psychology of Literacy,[45] allowed them to study whether the literacy divide exists at the individual level. Warschauer applied their literacy research to ICT literacy as part of his model of ICT access. Scribner and Cole found no generalizable cognitive benefits from Vai literacy; instead, individual differences on cognitive tasks were due to other factors, like schooling or living environment.[45] The results suggested that there is "no single construct of literacy that divides people into two cognitive camps; [...] rather, there are gradations and types of literacies, with a range of benefits closely related to the specific functions of literacy practices."[40] Furthermore, literacy and social development are intertwined, and the literacy divide does not exist on the individual level. Warschauer draws on Scribner and Cole's research to argue that ICT literacy functions similarly to literacy acquisition, as both require resources rather than a narrow cognitive skill. Conclusions about literacy serve as the basis for a theory of the digital divide and ICT access, as detailed below: There is not just one type of ICT access, but many types.
The meaning and value of access varies in particular social contexts. Access exists in gradations rather than in a bipolar opposition. Computer and Internet use brings no automatic benefit outside of its particular functions. ICT use is a social practice, involving access to physical artifacts, content, skills, and social support. And acquisition of ICT access is a matter not only of education but also of power.[40] Therefore, Warschauer concludes that access to ICT cannot rest on devices or conduits alone; it must also engage physical, digital, human, and social resources.[40] Each of these categories of resources has an iterative relation with ICT use. If ICT is used well, it can promote these resources, but if it is used poorly, it can contribute to a cycle of underdevelopment and exclusion.[45] In the early 21st century a rapid development of ICT services and electronic devices took place, in which the number of internet servers multiplied by a factor of 1,000 to 395 million, and it is still increasing. This increase can be explained by Moore's law, which in this context implies that the development of ICT increases every year by 16–20%, so it will double every four to five years.[46] Alongside this development and the high investment in increasing demand for ICT-capable products came a high environmental impact: by 2008, software and hardware development and production were already causing the same amount of CO2 emissions as global air travel.[46] There are two sides of ICT: the positive environmental possibilities and the shadow side. On the positive side, studies have shown that, for instance, in the OECD countries a 1% increase in ICT capital causes a 0.235% reduction in energy use.[47] On the other side, the more digitization happens, the more energy is consumed: for OECD countries a 1% increase in internet users causes a rise of 0.026% in electricity consumption per capita, and for emerging countries the impact is more than four times as high.
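The growth-rate claim above, that 16–20% annual growth implies a doubling every four to five years, is the standard doubling-time calculation, ln 2 / ln(1 + r), which can be checked directly:

```python
import math

def doubling_time(annual_growth: float) -> float:
    """Years for a quantity to double at a constant annual growth rate.

    Solves (1 + r)^t = 2 for t, giving t = ln 2 / ln(1 + r).
    """
    return math.log(2) / math.log(1 + annual_growth)

# 16% annual growth -> about 4.7 years; 20% -> about 3.8 years,
# consistent with the "four to five years" figure in the text.
low = doubling_time(0.16)
high = doubling_time(0.20)
```

The same formula underlies the familiar "rule of 72" approximation (72 divided by the percentage growth rate).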
Current scientific forecasts project an increase to 30,700 TWh by 2030, which is 20 times more than in 2010.[47] To tackle the environmental issues of ICT, the EU Commission plans proper monitoring and reporting of the GHG emissions of different ICT platforms, countries, and infrastructure in general. Further, the establishment of international norms for reporting and compliance is promoted to foster transparency in this sector.[48] Moreover, scientists suggest making more ICT investments to exploit the potential of ICT to alleviate CO2 emissions in general, and implementing more effective coordination of ICT, energy, and growth policies.[49] Consequently, applying the principle of the Coase theorem makes sense: it recommends making investments where the marginal avoidance costs of emissions are lowest, hence in the developing countries with comparatively lower technological standards and policies than high-tech countries. With these measures, ICT can reduce environmental damage from economic growth and energy consumption by facilitating communication and infrastructure. ICTs could also be used to address environmental issues, including climate change, in various ways, including ways beyond education.[50][51][52]
https://en.wikipedia.org/wiki/Information_and_communications_technology
The following outline is provided as an overview of and topical guide to information technology: Information technology (IT) – a microelectronics-based combination of computing and telecommunications technology to treat information, including the acquisition, processing, storage and dissemination of vocal, pictorial, textual and numerical information. It is defined by the Information Technology Association of America (ITAA) as "the study, design, development, implementation, support or management of computer-based information systems, particularly toward software applications and computer hardware." There are different names for this at different periods or in different fields. Third-party commercial organizations and vendor-neutral interest groups sponsor certifications. General certification of software practitioners has struggled. The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. Today, the IEEE is certifying software professionals, but only about 500 people had passed the exam by March 2005.
https://en.wikipedia.org/wiki/Outline_of_information_technology
Aknowledge societygenerates, shares, and makes available to all members of thesocietyknowledge that may be used to improve thehuman condition.[1]A knowledge society differs from aninformation societyin that the former serves to transform information into resources that allow society to take effective action, while the latter only creates and disseminates theraw data.[2]The capacity to gather and analyze information has existed throughouthuman history. However, the idea of the present-day knowledge society is based on the vast increase in data creation andinformation disseminationthat results from theinnovationofinformation technologies.[3]TheUNESCOWorld Report addresses the definition, content and future of knowledge societies.[4] The growth ofInformation and communication technology(ICT) has significantly increased the world's capacity for creation of raw data and the speed at which it is produced. The advent of theInternetdelivered unheard-of quantities ofinformationto people. Theevolutionof the internet from Web 1.0 toWeb 2.0offered individuals tools to connect with each otherworldwideas well as become content users and producers. Innovation indigital technologiesandmobile devicesoffers individuals a means to connect anywhere anytime where digital technologies are accessible. Tools of ICT have the potential to transformeducation,training,employmentand access to life-sustaining resources for all members of society.[5] However, this capacity for individuals to produce and use data on a global scale does not necessarily result inknowledgecreation. Contemporary media delivers seemingly endless amounts of information and yet, the information alone does not create knowledge. For knowledge creation to take place, reflection is required to createawareness, meaning and understanding. 
The improvement of human circumstances requires critical analysis of information to develop the knowledge that assists humankind.[2]Absent reflection andcritical thinking, information can actually become "non-knowledge", that which is false or inaccurate.[6]The anticipated Semantic Web 3.0 and Ubiquitous Web 4.0 will move both information and knowledge creation forward in their capacities to use intelligence to digitally create meaning independent of user-driven ICT.[7][8] Thesocial theoryof a knowledge society explains how knowledge is fundamental to thepolitics,economics, and culture ofmodern society.Associated ideas include theknowledge economycreated by economists and thelearning societycreated by educators.[3]Knowledge is a commodity to be traded for economic prosperity. In a knowledge society,individuals,communities, and organizations produce knowledge-intensive work.Peter Druckerviewed knowledge as a keyeconomic resourceand coined the termknowledge workerin 1969.[9]Fast-forward to the present day, and in this knowledge-intensive environment, knowledge begets knowledge, new competencies develop, and the result is innovation.[10] A knowledge society promoteshuman rightsand offers equal, inclusive, anduniversal access to all knowledgecreation. The UNESCO World Report establishes four principles that are essential for development of an equitable knowledge society:[4] However, they acknowledge that thedigital divideis an obstacle to achievement of genuine knowledge societies. Access to the internet is available to 39 percent of the world's population.[11]This statistic represents growth as well as a continued gap. 
Among the many challenges that contribute to a global digital divide are issues regarding economic resources, geography, age, gender, language, education, social and cultural background, employment and disabilities.[4] To reduce the span of the digital divide, leaders and policymakers worldwide must first develop an understanding of knowledge societies and, second, create and deploy initiatives that will universally benefit all populations. The public expects politicians and public institutions to act rationally and rely on relevant knowledge for decision-making. Yet, in many cases, there are no definitive answers for some of the issues that impact humankind. Science is no longer viewed as the provider of unquestionable knowledge, and sometimes raises more uncertainty in its search for knowledge. The very advancement of knowledge creates increased ignorance or non-knowledge.[12] This means that public policy must learn to manage doubt, probability, risk and uncertainty while making the best decisions possible.[6] To confront the uncertainty that comes from an increase in both knowledge and the resulting lack of knowledge, members of a society disagree and make decisions using justification and observation of consequences.[6] Public policy may operate with the intent to prevent the worst possible outcome, rather than to find the perfect solution. Democratization of expert knowledge occurs when a knowledge society produces and relies on more experts. Expert knowledge is no longer exclusive to certain individuals, professionals or organizations. If, in a knowledge society, knowledge is a public good to which all people have access, any individual may also serve as a creator of knowledge and receive credit as an expert. Since politicians rely on expert knowledge for decision-making, a layperson who lacks specialized knowledge might hold a view that serves as expertise to the political process.
As technologies are deployed to improve global information access, the role of education will continue to grow and change. Education is viewed as a basic human right.[4] For a society where reading and counting are requisites for daily living, skills in reading, writing, and basic arithmetic are critical for future learning. However, in a knowledge society, education is not restricted to school. The advent of ICT allows learners to seek information and develop knowledge at any time and any place where access is available and unrestricted. In these circumstances, the skill of learning to learn is one of the most important tools to help people acquire formal and informal education.[4] In a knowledge society supported by ICT, the ability to locate, classify and sort information is essential. Equipped with this skill, the use of ICT becomes an active rather than a passive endeavor and integral to literacy and lifelong learning.[4]

One marker of a knowledge society is continuous innovation that demands lifelong learning, knowledge development, and knowledge sharing. The institution of education will need to become responsive to changing demands. Education professionals will need to learn along with everyone else, and as leaders of changing designs in learning, they will serve as a bridge between technology and teaching.[5] The ability to individually reflect on personal learning requirements and seek knowledge in whatever method is appropriate characterizes lifelong learning. One model that supports this type of learning is the W. Edwards Deming Plan-do-check-act cycle,[5] which promotes continuous improvement. Educational professionals will need to prepare learners to be accountable for their own lifelong learning.
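The Plan-do-check-act cycle mentioned above is a simple iterative loop. A minimal sketch in Python may make the shape of the cycle concrete; the callback functions, the quality-score scale, and the 80% attainment rate below are illustrative assumptions, not part of Deming's formulation:

```python
# Hedged sketch of a Plan-Do-Check-Act (PDCA) improvement loop.
# The concrete plan/do/check/act callables are hypothetical placeholders.

def pdca(plan, do, check, act, state, cycles=5):
    """Run a fixed number of PDCA cycles, returning the improved state."""
    for _ in range(cycles):
        target = plan(state)            # Plan: set a goal from the current state
        outcome = do(state, target)     # Do: carry out the planned change
        gap = check(target, outcome)    # Check: compare outcome against the goal
        state = act(state, outcome, gap)  # Act: adopt the result and go again
    return state

# Toy usage: iteratively improving a quality score toward 100.
final = pdca(
    plan=lambda s: min(100, s + 10),    # aim 10 points higher each cycle
    do=lambda s, t: s + 0.8 * (t - s),  # assume 80% of the planned gain is achieved
    check=lambda t, o: t - o,           # measure the shortfall
    act=lambda s, o, gap: o,            # the improved level becomes the new baseline
    state=50,
)
print(round(final, 1))  # prints 90.0
```

The point of the sketch is that "Act" feeds the checked result back into the next "Plan", which is what makes the cycle a continuous-improvement loop rather than a one-shot project plan.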
https://en.wikipedia.org/wiki/Knowledge_society
ITIL (previously and also known as Information Technology Infrastructure Library) is a framework with a set of practices (previously processes) for IT activities such as IT service management (ITSM) and IT asset management (ITAM) that focus on aligning IT services with the needs of the business.[1] ITIL describes best practices, including processes, procedures, tasks, and checklists which are neither organization-specific nor technology-specific. It is designed to allow organizations to establish a baseline and can be used to demonstrate compliance and to measure improvements. There is no formal independent third-party compliance assessment available to demonstrate ITIL compliance in an organization. Certification in ITIL is only available to individuals and not organizations. Since 2021, the ITIL trademark has been owned by PeopleCert.[2]

Responding to growing dependence on IT, the UK Government's Central Computer and Telecommunications Agency (CCTA) in the 1980s developed a set of recommendations designed to standardize IT management practices across government functions, built around a process model-based view of controlling and managing operations, often credited to W. Edwards Deming and his plan-do-check-act (PDCA) cycle.[3]

ITIL 4 contains seven guiding principles. ITIL 4 consists of 34 practices grouped into 3 categories. ITIL 4 certification can be obtained by different roles in IT management. Certification starts with ITIL 4 Foundation, followed by one of two branches.[11]
https://en.wikipedia.org/wiki/Infrastructure_Management_Services
https://en.wikipedia.org/wiki/ITIL_v3
Microsoft Operations Framework (MOF) 4.0 is a series of guides aimed at helping information technology (IT) professionals establish and implement reliable, cost-effective services. MOF 4.0 was created to provide guidance across the entire IT life cycle. Completed in early 2008, MOF 4.0 integrates community-generated processes; governance, risk, and compliance activities; management reviews; and Microsoft Solutions Framework (MSF) best practices. The guidance in the Microsoft Operations Framework encompasses all of the activities and processes involved in managing an IT service: its conception, development, operation, maintenance, and, ultimately, its retirement.

The Plan Phase focuses on ensuring that, from its inception, a requested IT service is reliable, policy-compliant, cost-effective, and adaptable to changing business needs. The Deliver Phase concerns the envisioning, planning, building, stabilization, and deployment of requested services. The Operate Phase deals with the efficient operation, monitoring, and support of deployed services in line with agreed service level agreement (SLA) targets. The Manage Layer helps users establish an integrated approach to IT service management activities through the use of risk management, change management, and controls. It also provides guidance relating to accountabilities and role types.

Service Management Functions: MOF organizes IT activities and processes into Service Management Functions (SMFs), which provide operational guidance for capabilities within the service management environment. Each SMF is anchored within a related lifecycle phase and contains a unique set of goals and outcomes supporting the objectives of that phase.

Management Reviews: An IT service's readiness to move from one phase to the next is confirmed by management reviews, which ensure that goals are achieved in an appropriate fashion and that IT's goals are aligned with the goals of the organization.
Governance, Risk, and Compliance: The interrelated disciplines of governance, risk, and compliance (GRC) represent a cornerstone of MOF 4.0. IT governance is a senior management-level activity that clarifies who holds the authority to make decisions, determines accountability for actions and responsibility for outcomes, and addresses how expected performance will be evaluated. Risk represents possible adverse impacts on reaching goals and can arise from actions taken or not taken. Compliance is a process that ensures individuals are aware of regulations, policies, and procedures that must be followed as a result of senior management's decisions.
https://en.wikipedia.org/wiki/Microsoft_Operations_Framework
Information security management (ISM) defines and manages controls that an organization needs to implement to ensure that it is sensibly protecting the confidentiality, availability, and integrity of assets from threats and vulnerabilities. The core of ISM includes information risk management, a process that involves the assessment of the risks an organization must deal with in the management and protection of assets, as well as the dissemination of the risks to all appropriate stakeholders.[1] This requires proper asset identification and valuation steps, including evaluating the value of confidentiality, integrity, availability, and replacement of assets.[2] As part of information security management, an organization may implement an information security management system and other best practices found in the ISO/IEC 27001, ISO/IEC 27002, and ISO/IEC 27035 standards on information security.[3][4]

Managing information security in essence means managing and mitigating the various threats and vulnerabilities to assets, while at the same time balancing the management effort expended on potential threats and vulnerabilities by gauging the probability of them actually occurring.[1][5][6] A meteorite crashing into a server room is certainly a threat, for example, but an information security officer will likely put little effort into preparing for such a threat, just as people do not need to start preparing for the end of the world merely because a global seed bank exists.[7]

After appropriate asset identification and valuation have occurred,[2] risk management and mitigation of risks to those assets involves the analysis of the following issues:[5][6][8] Once a threat and/or vulnerability has been identified and assessed as having sufficient impact/likelihood on information assets, a mitigation plan can be enacted. The mitigation method chosen largely depends on which of the seven information technology (IT) domains the threat and/or vulnerability resides in.
The threat of user apathy toward security policies (the user domain) will require a much different mitigation plan than the one used to limit the threat of unauthorized probing and scanning of a network (the LAN-to-WAN domain).[8]

An information security management system (ISMS) represents the collation of all the interrelated/interacting information security elements of an organization, so as to ensure that policies, procedures, and objectives can be created, implemented, communicated, and evaluated to better guarantee the organization's overall information security. This system is typically influenced by an organization's needs, objectives, security requirements, size, and processes.[9] An ISMS includes and supports risk management and mitigation strategies. Additionally, an organization's adoption of an ISMS indicates that it is systematically identifying, assessing, and managing information security risks and "will be capable of successfully addressing information confidentiality, integrity, and availability requirements."[10] However, the human factors associated with ISMS development, implementation, and practice (the user domain[8]) must also be considered to best ensure the ISMS's ultimate success.[11]

Implementing effective information security management (including risk management and mitigation) requires a management strategy that takes note of the following:[12] Without sufficient budgetary consideration for all of the above, in addition to the money allotted to standard regulatory, IT, privacy, and security issues, an information security management plan/system cannot fully succeed. Standards that are available to assist organizations with implementing the appropriate programs and controls to mitigate threats and vulnerabilities include the ISO/IEC 27000 family of standards, the ITIL framework, the COBIT framework, and O-ISM3 2.0.
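The probability-weighted view of threats described earlier (the meteorite example) can be sketched as a simple semi-quantitative scoring exercise. The threat descriptions, the 1-5 scales, and the mitigation threshold below are illustrative assumptions, not values prescribed by any of the standards named in this section:

```python
# Hedged sketch: rank threats by likelihood x impact and plan mitigations
# only for those above a chosen threshold. All values are invented examples.

def risk_score(likelihood, impact):
    """Simple semi-quantitative risk score, each factor on a 1-5 scale."""
    return likelihood * impact

threats = [
    # (description, likelihood 1-5, impact 1-5)
    ("Phishing against staff accounts", 4, 4),
    ("Unpatched server exploited",      3, 5),
    ("Meteorite strikes server room",   1, 5),  # severe but so unlikely it ranks low
]

MITIGATION_THRESHOLD = 8  # plan mitigations only for risks scoring at least this

ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)
to_mitigate = [name for name, lik, imp in ranked
               if risk_score(lik, imp) >= MITIGATION_THRESHOLD]
print(to_mitigate)  # the meteorite scenario falls below the threshold
```

The design choice here mirrors the text: effort is balanced against probability, so a catastrophic but vanishingly unlikely event is consciously deprioritized rather than treated like an everyday threat.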
The ISO/IEC 27000 family includes some of the best-known standards governing information security management, and its ISMS guidance is based on global expert opinion. The standards lay out best-practice requirements for "establishing, implementing, deploying, monitoring, reviewing, maintaining, updating, and improving information security management systems."[3][4] ITIL acts as a collection of concepts, policies, and best practices for the effective management of information technology infrastructure, service, and security, differing from ISO/IEC 27001 in only a few ways.[13][14] COBIT, developed by ISACA, is a framework for helping information security personnel develop and implement strategies for information management and governance while minimizing negative impacts and controlling information security and risk management,[4][13][15] and O-ISM3 2.0 is The Open Group's technology-neutral information security model for enterprises.[16]
https://en.wikipedia.org/wiki/Information_security_management_system
COBIT (Control Objectives for Information and Related Technologies) is a framework created by ISACA for information technology (IT) management and IT governance.[1] The framework is business focused and defines a set of generic processes for the management of IT, with each process defined together with process inputs and outputs, key process activities, process objectives, performance measures and an elementary maturity model.[1] Business and IT goals are linked and measured to create responsibilities of business and IT teams. Five process domains are identified: Evaluate, Direct and Monitor (EDM); Align, Plan and Organize (APO); Build, Acquire and Implement (BAI); Deliver, Service and Support (DSS); and Monitor, Evaluate and Assess (MEA).[2] The COBIT framework ties in with COSO, ITIL,[3] BiSL, ISO 27000, CMMI, TOGAF and PMBOK.[1] The framework helps companies comply with legal requirements, become more agile, and increase earnings.[4]

The standard meets practitioners' needs while remaining independent of specific manufacturers, technologies and platforms. COBIT can be used both to audit a company's IT system and to design one: in the first case, it helps determine the degree to which the system under study conforms to best practice; in the second, it helps design a system that is nearly ideal in its characteristics.
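The five COBIT process domains named above can be captured as a small lookup table; the table contents come straight from the list in the text, while the helper function is an illustrative convenience, not part of COBIT itself:

```python
# The five COBIT process domains, keyed by their standard abbreviations.
COBIT_DOMAINS = {
    "EDM": "Evaluate, Direct and Monitor",
    "APO": "Align, Plan and Organize",
    "BAI": "Build, Acquire and Implement",
    "DSS": "Deliver, Service and Support",
    "MEA": "Monitor, Evaluate and Assess",
}

def domain_name(code: str) -> str:
    """Expand a COBIT domain abbreviation to its full name (illustrative helper)."""
    return COBIT_DOMAINS[code.upper()]

print(domain_name("dss"))  # prints Deliver, Service and Support
```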
COBIT initially stood for "Control Objectives for Information and Related Technologies," though before the release of the framework people spoke of "CobiT" as "Control Objectives for IT"[5] or "Control Objectives for Information and Related Technology."[6] ISACA first released COBIT in 1996, originally as a set of control objectives to help the financial audit community better maneuver in IT-related environments.[1][7] Seeing value in expanding the framework beyond just the auditing realm, ISACA released a broader version 2 in 1998 and expanded it even further by adding management guidelines in 2000's version 3. The development of both the AS 8015: Australian Standard for Corporate Governance of Information and Communication Technology in January 2005[8] and the more international draft standard ISO/IEC DIS 29382 (which soon after became ISO/IEC 38500) in January 2007[9] increased awareness of the need for more information and communication technology (ICT) governance components. ISACA subsequently added related components/frameworks with versions 4 and 4.1 in 2005 and 2007 respectively, "addressing the IT-related business processes and responsibilities in value creation (Val IT) and risk management (Risk IT)."[1][7] COBIT 5 (2012) is based on COBIT 4.1, the Val IT 2.0 and Risk IT frameworks, and draws on ISACA's IT Assurance Framework (ITAF) and the Business Model for Information Security (BMIS).[10][11] ISACA currently offers certification tracks on COBIT 2019 (COBIT Foundations, COBIT Design & Implementation, and Implementing the NIST Cybersecurity Framework Using COBIT 2019)[12] as well as certification in the previous version (COBIT 5).[13][14]
https://en.wikipedia.org/wiki/COBIT
The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, which funded the research. The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes. The model's aim is to improve existing software development processes, but it can also be applied to other processes. In 2006, the Software Engineering Institute at Carnegie Mellon University developed the Capability Maturity Model Integration, which has largely superseded the CMM and addresses some of its drawbacks.[1]

The Capability Maturity Model was originally developed as a tool for objectively assessing the ability of government contractors' processes to implement a contracted software project. The model is based on the process maturity framework first described in IEEE Software[2] and, later, in the 1989 book Managing the Software Process by Watts Humphrey. It was later published as an article in 1993[3] and as a book by the same authors in 1994.[4] Though the model comes from the field of software development, it is also used as a model to aid in business processes generally, and has also been used extensively worldwide in government offices, commerce, and industry.[5][6]

In the 1980s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or "best practice" approaches defined. As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and the ambitions for project scale and complexity exceeded the market capability to deliver adequate products within a planned budget.
Individuals such as Edward Yourdon,[7] Larry Constantine, Gerald Weinberg,[8] Tom DeMarco,[9] and David Parnas began to publish articles and books with research results in an attempt to professionalize software-development processes.[5][10] In the 1980s, several US military projects involving software subcontractors ran over budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the Software Engineering Institute (SEI).

The first application of a staged maturity model to IT was not by CMU/SEI, but rather by Richard L. Nolan, who, in 1973, published the stages of growth model for IT organizations.[11] Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.[12] Active development of the model by the SEI began in 1986, when Humphrey joined the Software Engineering Institute at Carnegie Mellon University in Pittsburgh, Pennsylvania, after retiring from IBM. At the request of the U.S. Air Force, he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts. The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book Quality Is Free.[13] Humphrey's approach differed because of his unique insight that organizations mature their processes in stages, based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software-development practices within an organization, rather than measuring the maturity of each separate development process independently.
The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance. Watts Humphrey's Capability Maturity Model (CMM) was published in 1988[14] and as a book in 1989, in Managing the Software Process.[15] Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute. The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being published in July 1993.[3] The CMM was published as a book[4] in 1994 by the same authors: Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.

The CMM's application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models for software development processes; thus the CMMI model has superseded the CMM model, though the CMM model continues to be a general theoretical process capability model used in the public domain.[16][17]

In 2016, responsibility for CMMI was transferred to the Information Systems Audit and Control Association (ISACA). ISACA subsequently released CMMI v2.0 in 2021. It was upgraded again to CMMI v3.0 in 2023. CMMI now places a greater emphasis on process architecture, which is typically realized as a process diagram. Copies of CMMI are now available only by subscription. The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project.
Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT service management processes) in IS/IT (and other) organizations.

A maturity model can be viewed as a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce required outcomes. A maturity model can be used as a benchmark for comparison and as an aid to understanding, for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes. The model involves five aspects.

There are five levels defined along the continuum of the model and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief."[18] Within each of these maturity levels are Key Process Areas which characterize that level, and for each such area there are five factors: goals, commitment, ability, measurement, and verification. These are not necessarily unique to the CMM, representing as they do the stages that organizations must go through on the way to becoming mature. The model provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next; skipping levels is neither allowed nor feasible. Between 2008 and 2019, about 12% of appraisals given were at maturity levels 4 and 5.[19][20] The model was originally intended to evaluate the ability of government contractors to perform a software project.
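The staged structure described above (five ordered levels, climbed one at a time, with each Key Process Area described by the same five factors) can be sketched as a small data model. The level names below are the standard CMM labels; the helper function is an illustrative assumption, not part of the CMM itself:

```python
# Illustrative model of the CMM's five staged maturity levels.
from enum import IntEnum

class MaturityLevel(IntEnum):
    INITIAL = 1
    REPEATABLE = 2
    DEFINED = 3
    MANAGED = 4
    OPTIMIZING = 5

def next_level(current: MaturityLevel) -> MaturityLevel:
    """Levels must be climbed one at a time: skipping is neither allowed nor feasible."""
    if current is MaturityLevel.OPTIMIZING:
        return current  # already at the top of the continuum
    return MaturityLevel(current + 1)

# Each Key Process Area at a level is described by the same five factors.
KPA_FACTORS = ("goals", "commitment", "ability", "measurement", "verification")

print(next_level(MaturityLevel.INITIAL).name)  # prints REPEATABLE
```

Encoding the levels as an ordered enumeration makes the "no skipping" rule explicit: progress is always `current + 1`, never a jump.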
The CMM has been used for that purpose and may be suited to it, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development. The software process framework documented is intended to guide those wishing to assess an organization's or project's consistency with the Key Process Areas. For each maturity level there are five checklist types:
https://en.wikipedia.org/wiki/Capability_Maturity_Model
The Information Services Procurement Library (ISPL) is a best practice library for the management of Information Technology related acquisition processes (derived from Euromethod). It helps both the customer and the supplier organization to achieve the desired quality using the corresponding amount of time and money by providing methods and best practices for risk management, contract management, and planning. ISPL focuses on the relationship between the customer and supplier organizations: it helps construct the request for proposal, it helps construct the contract and delivery plan according to the project situation and risks, and it helps monitor the delivery phase. ISPL is unique among Information Technology methods: where most other methods and frameworks focus on development (e.g. DSDM, RUP), ISPL focuses purely on the procurement of information services. The target audience for ISPL consists of procurement managers, acquisition managers, programme managers, contract managers, facilities managers, service level managers, and project managers in the IT (Information Technology) area. Because of ISPL's focus on procurement, it is very suitable to be used with ITIL (for IT Service Management) and PRINCE2 (for Project Management).[1]

There are four main benefits of using ISPL. These benefits are elaborated in the paragraphs below.

The customer can take advantage of the competitive market: ISPL helps the customer organisation to construct the Request for Proposal (RFP); it even provides the customer organisation with a table of contents. A very important part of the ISPL request for proposal is the elaboration of the supplier evaluation approach. The complete transparency of the supplier evaluation approach encourages the candidate supplier organisations to issue truly competitive proposals. As a result, the customer can take full advantage of the competitive market.
Proposals of suppliers become comparable: ISPL specifies the table of contents of the candidate supplier organisation's proposal. It also supplies the customer and candidate supplier organisations with one clear terminology. This makes the proposals of the candidate suppliers easy to compare.

The use of a strategy that fits the situation: ISPL provides the user with an extensive risk management process. Based on best practices, it helps management to design a delivery strategy that fits the situation of both the customer and the supplier organisations. This is a benefit for both the customer and the supplier organisations, because selecting a suboptimal strategy obviously brings along higher costs. The chapter #Service Planning describes the situation and risk analysis and the design of a service delivery strategy.

Contract as a control instrument: ISPL helps the customer and supplier organisations to set up a contract that specifies all critical aspects of the procurement. For example, the list of requirements, the budget and the delivery plan are all fixed in the contract. This provides both the customer and supplier organisations with a very powerful control instrument. One of the benefits for the customer organisation is that the supplier organisation will be highly motivated to meet all the deadlines, because otherwise it is in breach of contract. One of the benefits for the supplier organisation is that it becomes much harder for the customer organisation to change the requirements, which would slow down the development process. In short, with ISPL there is better control of costs and time-scales.

ISPL was developed and published in 1999 by a consortium of five European companies: EXIN and ID Research (ORDINA) from the Netherlands, FAST from Germany, SEMA from France and TIEKE from Finland. The development of ISPL was part of the SPRITE-S2 programme that was launched in 1998 by the European Commission.
ISPL is derived from Euromethod and based on research into about 200 real-life acquisitions. Although ISPL is a best practice library, it does not only consist of books. The structure of ISPL is displayed in Figure 1. In this figure the books are represented by squares and the other tools are represented by circles. The basis is formed by four practical books, the IS Procurement Management Essentials. Additionally, a specific book addresses public procurement. Plug-ins to the IS Procurement Management Essentials are provided for specific needs and situations. Currently, three plug-ins are available for which there is a large market potential. A fourth plug-in, on the acquisition of product software, is under construction. Follow the links below to get more information on the different parts of ISPL.

Figure 2 illustrates the meta-process model of ISPL. This model can be used to easily link and compare ISPL to other Information Technology methods and frameworks. Each process is described below with a pointer to more information.

Define requirements: The definition of requirements is out of the scope of ISPL. For more information on defining requirements, see the entry on Requirements management processes.

Specify deliverables: Both the customer and supplier organisations have to specify the information and services they want to receive from the other party. More information on the specification of deliverables can be found in the chapter on #Specifying Deliverables.

Situation analysis, identify risks, select strategy: The customer organisation has to conduct a situation analysis to be able to identify critical risks and mitigate them using an appropriate delivery strategy. More information can be found in the chapter #Managing Risks and Planning Deliveries.

Make decisions: During the execution of the delivery plan, the customer and supplier make decisions at each decision point. This is the #Contract Monitoring phase.
This chapter provides a high-level description of the ISPL acquisition process from a customer-supplier-interaction point of view. Figure 3 illustrates the sequence of customer and supplier processes during the acquisition. The following paragraphs link the processes in Figure 3 to the theoretical information that is present in this entry and thus provide a link from practice to theory.

Make RFP: To construct a request for proposal, the customer first needs to describe the acquisition goal and other requirements, and perform a situation and risk analysis. Using the risk analysis, a delivery plan has to be constructed. More information on all of these activities can be found in the chapter #Managing Risks and Planning Deliveries and the paragraph #Acquisition initiation in the chapter #Managing Acquisition Processes. The request for proposal is a #Tendering deliverable.

Make proposal: The supplier company writes a proposal in which it clarifies how it can fulfil the acquisition goal. More information on proposals can be found in the paragraph #Tendering deliverable.

Select: The customer selects a supplier. The selection activity is an important step in the #Tendering process, part of the #Procurement process.

Negotiate contract: Customer and supplier negotiate the contract. Usually this means that the delivery plan is refined to a more detailed level. The information in the chapter #Managing Risks and Planning Deliveries can be used to update the delivery plan.

Make decisions: For each contract, the delivery plan is executed. Customer and supplier make decisions at each decision point. For an individual contract, the final decision is whether #Contract Completion is reached. The deliverables used in this phase are of the #Decision point deliverable type. The final decision of the complete acquisition process is the #Acquisition completion.

The ISPL Acquisition Process is the actual process of obtaining a system or service to achieve a goal contributing to business objectives and/or needs.
It is one of the most important parts of the ISPL method. This chapter is a summary of [2]. Figure 4 displays a model of the ISPL acquisition process. The acquisition process consists of three sequential process steps. These individual process steps are discussed in more detail in the paragraphs below.

In ISPL the terms target domain and service domain are used frequently. Target domain: the part of the customer organization that is affected by the service. Service domain: the service organization that delivers the service, i.e. the supplier.

The first process to be executed by the customer contract authority within an acquisition is the acquisition initiation process. It consists of two sequential process steps: acquisition goal definition and acquisition planning. The final result of the process is an acquisition plan reflecting an acquisition strategy, along with a clear understanding of the systems and services requirements defining the acquisition goal. The acquisition initiation process is illustrated in Figure 5. A more detailed description of the Acquisition Initiation phase can be found in the separate entry on that phase.

The result of the acquisition goal definition step is a sufficiently clear understanding of the requirements for the systems and services that are the goal of the acquisition, and of the costs and benefits for the business and its various stakeholders. This process step consists of four activities. The input of business needs is ideally provided by Requirements management processes.

The goal of the acquisition planning phase is to define an acquisition strategy adapted to the situation, plan the main decision points of the acquisition, and establish the acquisition organisation. The acquisition planning phase consists of several activities; steps two to four are discussed in detail in the chapter #Managing Risks and Planning Deliveries. Both the acquisition goal definition and acquisition planning are activities within the phase of Acquisition Initiation.
A more thorough description of this phase can be found here. The procurement step of the ISPL acquisition process embodies the obtaining of one single contract. Note that the acquisition itself can contain multiple procurements. Such a contract consists of one or more projects or ongoing services. The procurement step consists of three sequential processes: The aim of the tendering process is to select a supplier and a proposal for the considered services and systems, and to agree with the chosen supplier on a contract in which both parties’ deliveries and responsibilities are defined. A very important aspect of such a contract is the planning of the decision points. The tendering process consists of four different steps: This process aims to monitor the services defined in the contract. It has to ensure that the deliverables and services conform to the requirements in the contract. The most important activity of the contract monitoring process is the execution of the decision points. In the decision point execution the customer organization makes judgments and decisions based on the status of the procurement at any given time. The aim of this process is to ensure that all outstanding technical and commercial requirements in the procurement contract (written and signed in the tendering phase) have been met. This is the formal completion of all the contracts of the acquisition. The acquisition manager (often the customer company's contract authority) has to verify the successful conclusion of all contracts and the achievement of the acquisition goal. The acquisition completion process consists of four activities: The following paragraphs elaborate on the different activities of acquisition completion. The acquisition manager has to check the completion of the different contracts by checking whether each individual contract's completion phase has been reached and the required reports have been written.
If necessary the acquisition manager has to trigger the contract completion process for contracts that need it. Unfortunately, when all individual contracts’ goals are reached, this does not automatically mean that the acquisition goal is achieved. In this activity of the acquisition completion, the acquisition manager has to verify that the customer company's business objectives have been met and that no parts of the acquisition have been overlooked. Based on the assessment of the contract completions and the achievement of the acquisition goal, the decision whether the acquisition is complete is made. This decision is generally made by the acquisition manager together with representatives of all the organizational authorities concerned with the acquisition. When one or more contracts are not completed or the acquisition goal is not reached, this decision requires the involvement of the customer company's organizational authorities. The acquisition manager has to write an acquisition report. This report's purpose is to record all decisions made during the acquisition process, the level of acquisition completion, and lessons to be learned for future acquisitions. Interesting data to be included in the acquisition report: The idea behind the extensive acquisition report is that an enterprise can only learn from its faults and experiences when these are well documented and available in relevant future situations. ISPL's form of risk management is purely based on heuristics. By describing and analyzing the situation, critical risks can be identified and mitigated by selecting actions and an appropriate strategy. ISPL provides heuristics that link risks and situational factors to mitigating actions and strategies. This chapter is a summary of [3] and also includes information from [4]. The process of managing risks and planning deliveries is illustrated in Figure 6. All the different steps in the process are described in detail in the paragraphs below.
The first step of the process is describing the service that is to be procured. It is important to note the differences between a project and an ongoing service: both types of services are described differently, and ISPL provides the user with two separate sets of guidelines. There are two steps in describing an ongoing service, each of which is briefly elaborated in the following paragraphs. ISPL proposes two methods for identifying the type of service: These methods can be used successively, where the ISO-LCP standard is used to identify the different process types and public domain service packages are used to refine the ISO-LCP processes. An example of a publicly available description of service packages is the one present in ITIL. Note that in ITIL, service packages are referred to as processes. More information on the ISO-LCP standard can be found in the Wikipedia entry on ISO/IEC 12207. The service can be described in more detail by its service properties. All service properties can be divided into three groups: ISPL provides methods for describing all three groups of properties. Projects are described by an initial and a final state, i.e. the current and the desired situation. This is done by specifying the operation items (the actual parts of the information service to be procured) and the descriptive items (documentation). A short outline of documenting the initial and final state is given in the paragraphs below. More information on describing operation and descriptive items is given in the chapter #Specifying Deliverables. For each component of the information service, the contents and quality of the operation items have to be described. Secondly, an assessment has to be made of which already present descriptions of future operational items are relevant to use in the project. All the operational items that will be in use at the final state have to be documented.
When describing these operation items the focus should be on: Not only the operational items have to be described: descriptions of the documentation of the information service that is to be procured are also necessary. The client has to describe the profiles of the documentation needed for maintenance on, and further development of, the information service. The service planning process consists of three sub-processes: In practice, these three sub-processes are followed by a process of risk monitoring that can give input to the situation and risk analysis process. The terms complexity and uncertainty are used quite often in this chapter. Complexity: in the context of ISPL, complexity can be regarded as the difficulty encountered in managing the available knowledge. Uncertainty: in the context of ISPL, uncertainty can be regarded as the lack of available knowledge. The analysis sub-process is divided into a situation analysis followed by a risk analysis. The situation of both the customer and supplier organizations strongly influences the success of the information service acquisition process. The situation analysis consists of identifying situational factors and their values. A situational factor value indicates the factor's relative contribution to the overall complexity or uncertainty of the service to be delivered. For both complexity and uncertainty it can have one of the following values: high, medium or low. The ISPL method provides a set of tables that aid the user in determining the situational factor values. An example of a part of such a table can be found in Table 1. The risk management strategy depends on the situation, as displayed in Figure 7. When all the situational factor values are determined it is possible to determine the overall complexity and uncertainty of the service. These two factors can be used by the manager for the design of the service delivery strategy.
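The step from per-factor values to an overall complexity or uncertainty level can be sketched in code. This is purely illustrative: ISPL's own tables govern how factor values combine, and both the averaging rule and the factor names below are assumptions made for this example.

```python
# Illustrative sketch only: ISPL's actual tables govern how situational
# factor values combine. The averaging rule and the factor names below
# are assumptions made for this example.
LEVELS = {"low": 0, "medium": 1, "high": 2}
NAMES = {v: k for k, v in LEVELS.items()}

def overall(factor_values):
    """Collapse per-factor levels ('low'/'medium'/'high') into one level."""
    scores = [LEVELS[v] for v in factor_values]
    return NAMES[round(sum(scores) / len(scores))]

complexity_factors = {
    "heterogeneity of information": "high",  # assumed factor
    "number of actors involved": "medium",   # assumed factor
    "size of distribution": "low",           # assumed factor
}
print(overall(complexity_factors.values()))  # → medium
```

A real assessment would weight factors differently and treat complexity and uncertainty separately, as the tables in ISPL do.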
In the #Situation analysis, complexity and uncertainty values have been attributed to each of the situational factors. In the risk analysis, these values are used to identify the possible risks and their probability. Examples of possible risks for the customer business are unpredictable or increased costs for the business, delays in system delivery, and poor quality of service or system. Examples of possible risks affecting the quality of the service to be delivered are demotivation of service actors, unclear requirements, and uncertain interfaces with other services or systems. ISPL provides tables that map situational factors to risks. An example of such a table can be found in Table 2. For each of the risks found, both the probability and the impact, i.e. the consequence, are assessed. The product of these two values is called the risk exposure. The risk exposure value is used to identify the risks that are critical to service delivery. The critical risks influence the #Delivery strategy design and the #Decision point planning. This process uses the service description, the situation analysis and the risk analysis as inputs to define an optimal service delivery strategy. The resulting service delivery strategy consists of three elements: ISPL provides heuristics on how to mitigate risks and change the individual situational factors that cause them. An excerpt of these heuristics can be found in Table 3. The service execution approach determines how the service is executed. For projects, the service execution approach is referred to as the development approach. It consists of a description approach, a construction approach and an installation approach. ISPL provides the user with heuristics on what type of description, construction and installation approach best fits the situational factors and critical risks found in the #Situation analysis and #Risk analysis.
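The risk exposure computation described above (probability times impact, with critical risks filtered out) can be sketched as follows. The 1-5 scales and the criticality threshold are illustrative assumptions, not values prescribed by ISPL; the risk names are taken from the examples above.

```python
# Sketch of risk exposure = probability x impact, as described above.
# The 1-5 scales and the threshold of 12 are illustrative assumptions,
# not values prescribed by ISPL.
risks = {
    "unclear requirements": {"probability": 4, "impact": 5},
    "delays in system delivery": {"probability": 3, "impact": 4},
    "demotivation of service actors": {"probability": 2, "impact": 2},
}

CRITICAL_THRESHOLD = 12  # assumed cut-off for "critical to service delivery"

def exposure(risk):
    return risk["probability"] * risk["impact"]

critical = sorted(
    (name for name, r in risks.items() if exposure(r) >= CRITICAL_THRESHOLD),
    key=lambda name: -exposure(risks[name]),
)
print(critical)  # → ['unclear requirements', 'delays in system delivery']
```

The risks that clear the threshold are the ones that would feed into the delivery strategy design and decision point planning.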
As an example of these heuristics, ISPL advises using an evolutionary construction and installation approach when both overall complexity and uncertainty are high. The selection of a service control approach is based on the situational factors and the overall complexity and uncertainty found in the #Situation analysis. ISPL provides heuristics on which types of control are most suitable for various situational factors. In total there are six types of control: development, quality and configuration control, each in a formal and a frequent variant. After defining a complete service strategy, the consistency between the chosen strategy options has to be checked and possibly some choices have to be adjusted. The impact of the chosen strategy should also be analysed by checking that all critical risks have been addressed. It is also possible that some strategies cause new critical risks. The goal of decision point planning is to determine a sequence of decision points and give a clear description of each decision point, using the #Delivery strategy design as input. The sequence and contents of the decision points should reflect the chosen service delivery strategy. The decision point planning is made in the following order: The basic sequence of decision points is based on the #Delivery strategy design. ISPL provides heuristics on what basic sequences should be used for different delivery strategy options. The basic sequence found is then adapted to the list of risk-mitigating actions from the #Delivery strategy design. ISPL specifies which information elements should be in a decision point, for instance the purpose and the pre-conditions. For every decision point it has to be determined which deliverables are required. The general rule is that the deliverables must contain the right amount of information to be able to make the decision, but no more than that: too much information is costly and blurs the focus of the decision makers.
More information about this topic can be found in the chapter on #Specifying Deliverables. Risk monitoring takes place after the #Decision point planning phase. Its results can serve as input for the #Situation and risk analysis phase of service planning if necessary. It involves the tracking, controlling and monitoring of risks and risk-mitigating actions. ISPL focuses on the relationship between customer and supplier, so it is very important that the communication between both parties is clear and unambiguous. A deliverable is a product that is exchanged between the supplier and customer organisations (both ways). Deliverables are exchanged in all phases of the acquisition process: e.g. the request for proposal during the tendering phase, the intermediate versions during the execution of the delivery plan, and the declaration of contract completion at the end of each contract. ISPL provides guidance for the specification of all deliverables needed in the acquisition process. This chapter is a summary of [5]. ISPL divides deliverables into various types, each with a defined set of properties. These properties characterise the knowledge that is captured by each type of deliverable. Figure 8 illustrates the different types of deliverables. Contract domain deliverables are used to define and control all contracts in a procurement. There are two types of contract domain deliverables: the tendering deliverable and the decision point deliverable. In the following paragraphs each of these types is discussed in detail. Tendering deliverables are used in the tendering process to place requirements on all of the services within a procurement. There are four types of tendering deliverables: ISPL provides the customer and supplier organisations with templates for each of the different types. It is important to note that all of the tendering deliverables include the delivery plan, in which all deliveries and decision points are fixed.
Decision point deliverables support decision making during the execution of the delivery plan in the #Contract Monitoring phase. There are two types of decision point deliverables: Each of these types has its own, more specialized subtype. The contract status report is a subtype of the decision point proposal that describes the current status of a contract. The contract completion report, also a subtype of the decision point proposal, records whether the contract has successfully achieved its business goals. ISPL provides the table of contents of each of the decision point deliverable types and subtypes. Service domain deliverables describe the service domain (see the paragraph #Target domain and service domain for more information). They are delivered by both customer and supplier organisations to plan and control services. There are two types of service domain deliverables: service plans and service reports. A service plan provides information on how to meet identified goals in terms of service levels, deliverables, schedules, resources and costs. It can for instance give guidance on how to reach targets. ISPL describes the different properties that can be included in the service plan. In contrast to the service plan, the service report controls the service status by reviewing service levels and results. It records the service's productivity and proposes corrective actions. ISPL gives guidance on which information should be included for each property that is described in the service report. The target domain is described using target domain deliverables. There are two different types of target domain deliverables: operational items and descriptive items. Both will be discussed in the following paragraphs. An operational item is a delivered system or system component that is or will be installed as part of the acquisition. Operational items contribute to the functioning of the target domain. Some examples of different operational item types are listed below.
In practice, descriptions of operational items are often already present. It is important to make the distinction between functional and quality properties. A descriptive item captures knowledge about the target domain. Descriptive items can be used to describe business organisations, business processes, information services, et cetera. ISPL gives guidance on describing descriptive items by providing a descriptive item profile in which all properties of the descriptive item can easily be ordered.
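The deliverable taxonomy described in this chapter can be summarized as a nested mapping from domains to types to subtypes. This is a restatement of the text above as a reading aid, not an ISPL artifact:

```python
# Restatement of the deliverable taxonomy described above as a nested
# mapping (domain -> types -> subtypes). A reading aid, not ISPL itself.
DELIVERABLE_TYPES = {
    "contract domain": {
        "tendering deliverable": [],
        "decision point deliverable": [
            "contract status report",
            "contract completion report",
        ],
    },
    "service domain": {
        "service plan": [],
        "service report": [],
    },
    "target domain": {
        "operational item": [],
        "descriptive item": [],
    },
}

def domain_of(deliverable_type):
    """Return the domain a deliverable type belongs to, or None."""
    for domain, types in DELIVERABLE_TYPES.items():
        if deliverable_type in types:
            return domain
    return None
```

For example, `domain_of("service plan")` returns `"service domain"`, mirroring the chapter structure above.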
https://en.wikipedia.org/wiki/ISPL
Technology education[1] is the study of technology, in which students "learn about the processes and knowledge related to technology".[2] As a field of study, it covers the human ability to shape and change the physical world to meet needs by manipulating materials and tools with techniques. It addresses the disconnect between the wide usage of technologies and the lack of knowledge about their technical components and how to fix them.[3] This emergent discipline seeks to contribute to learners' overall scientific and technological literacy,[4] and technacy. Technology education should not be confused with educational technology. Educational technology focuses on a narrower subset of technology use that revolves around the use of technology in and for education, as opposed to technology education's focus on technology's use in general.[5] Technology education is an offshoot of the Industrial Arts tradition in the United States and of craft teaching or vocational education in other countries.[4] In 1980, through what was called the "Futuring Project", the name "industrial arts education" was changed to "technology education" in New York State; the goal of this movement was to increase students' technological literacy.[6] Since the nature of technology education is significantly different from its predecessor, Industrial Arts teachers underwent in-service education in the mid-1980s, while a Technology Training Network was also established by the New York State Education Department (NYSED).[4] In Sweden, technology as a new subject emerged from the tradition of crafts subjects, while in countries like Taiwan and Australia its elements are discernible in historical vocational programs.[7] In the 21st century, Mars suit design was utilized as a topic for technology education.[8] Technical education is entirely different from general education. TeachThought, a private entity, described technology education as being in the "status of childhood and bold experimentation".[9] A survey of
teachers across the United States by an independent market research company found that 86 percent of teacher-respondents agree that technology must be used in the classroom, 96 percent say it promotes student engagement, and 89 percent agree technology improves student outcomes.[10] Technology is present in many education systems. As of July 2018, American public schools provide one desktop computer for every five students and spend over $3 billion annually on digital content.[11] In school year 2015–2016, the government conducted more state-standardized testing for elementary and middle levels through digital platforms than through the traditional pen-and-paper method.[12] The digital revolution offers fresh learning prospects: students can learn online even when they are not inside the classroom. Advancement in technology entails new approaches to combining present and future technological improvements and incorporating these innovations into the public education system.[13] Incorporating technology into everyday learning creates a new environment of personalized and blended learning. Students are able to complete work based on their own needs, have the versatility of individualized study, and the overall learning experience evolves. The technology space in education is vast, and it advances and changes rapidly.[14] In the United Kingdom, computer technology helped elevate standards in different schools to confront various challenges.[15] The UK adopted the "flipped classroom" concept after it became popular in the United States; the idea is to reverse conventional teaching methods through the delivery of instruction online and outside of traditional classrooms.[16] In Europe, the European Commission espoused a Digital Education Plan in January 2018.
The program consists of 11 initiatives that support the utilization of technology and digital capabilities in education development.[17] The Commission also adopted an action plan called the Staff Working Document,[18] which details its strategy for implementing digital education. This plan includes three priorities, formulating measures to assist European Union member states in tackling all related concerns.[19] The whole framework will support the European Qualifications Framework for Lifelong Learning[20] and the European Classification of Skills, Competences, Qualifications, and Occupations.[21] In East Asia, the World Bank co-sponsored a yearly two-day international symposium[22] in October 2017 with South Korea's Ministry of Education, Science, and Technology to support education and ICT concerns for industry practitioners and senior policymakers. Participants plan and discuss issues in the use of new technologies for schools within the region.[23]
https://en.wikipedia.org/wiki/Tech_ed
Risk management is the identification, evaluation, and prioritization of risks,[1] followed by the minimization, monitoring, and control of the impact or probability of those risks occurring.[2] Risks can come from various sources (i.e., threats) including uncertainty in international markets, political instability, dangers of project failures (at any phase in design, development, production, or sustaining of life-cycles), legal liabilities, credit risk, accidents, natural causes and disasters, deliberate attack from an adversary, or events of uncertain or unpredictable root cause.[3] There are two types of events: negative events are classified as risks, while positive events are classified as opportunities. Risk management standards have been developed by various institutions, including the Project Management Institute, the National Institute of Standards and Technology, actuarial societies, and the International Organization for Standardization.[4][5][6] Methods, definitions and goals vary widely according to whether the risk management method is in the context of project management, security, engineering, industrial processes, financial portfolios, actuarial assessments, or public health and safety. Certain risk management standards have been criticized for producing no measurable improvement in risk, even as confidence in estimates and decisions seems to increase.[2] Strategies to manage threats (uncertainties with negative consequences) typically include avoiding the threat, reducing the negative effect or probability of the threat, transferring all or part of the threat to another party, and even retaining some or all of the potential or actual consequences of a particular threat.
The opposite of these strategies can be used to respond to opportunities (uncertain future states with benefits).[7] As a professional role, a risk manager[8] will "oversee the organization's comprehensive insurance and risk management program, assessing and identifying risks that could impede the reputation, safety, security, or financial success of the organization", and then develop plans to minimize and/or mitigate any negative (financial) outcomes. Risk analysts[9] support the technical side of the organization's risk management approach: once risk data has been compiled and evaluated, analysts share their findings with their managers, who use those insights to decide among possible solutions. See also Chief Risk Officer, internal audit, and Financial risk management § Corporate finance. Risk is defined as the possibility that an event will occur that adversely affects the achievement of an objective. Uncertainty, therefore, is a key aspect of risk.[10] Risk management has appeared in scientific and management literature since the 1920s.[11] It became a formal science in the 1950s, when articles and books with "risk management" in the title also appear in library searches.[12] Most research was initially related to finance and insurance.[13][14] One popular standard clarifying vocabulary used in risk management is ISO Guide 31073:2022, "Risk management — Vocabulary".[4] Ideally in risk management, a prioritization process is followed,[15] whereby the risks with the greatest loss (or impact) and the greatest probability of occurring are handled first, and risks with lower probability of occurrence and lower loss are handled in descending order. In practice the process of assessing overall risk can be tricky, and an organisation has to balance resources used to mitigate between risks with a higher probability but lower loss and risks with higher loss but lower probability. Opportunity cost represents a unique challenge for risk managers.
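The prioritization rule described above (handle the greatest expected loss first) can be sketched as a simple ordering by probability times loss. The risk names and figures below are illustrative assumptions:

```python
# Sketch of the prioritization described above: order risks by expected
# loss (probability x loss) and handle them in descending order. The
# names and figures are illustrative assumptions.
risks = [
    ("server outage", 0.30, 10_000),  # (name, probability, loss)
    ("data breach", 0.02, 500_000),
    ("vendor lock-in", 0.50, 5_000),
]

by_priority = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
print([name for name, _p, _loss in by_priority])
# → ['data breach', 'server outage', 'vendor lock-in']
```

Note how the low-probability, high-loss "data breach" outranks the higher-probability risks once expected loss is the ordering key, which is exactly the balancing act between probability and loss that the text describes.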
It can be difficult to determine when to put resources toward risk management and when to use those resources elsewhere. Again, ideal risk management optimises resource usage (spending, manpower, etc.) while also minimizing the negative effects of risks. Opportunities first appear in academic research or management books in the 1990s. The first draft of the Project Management Body of Knowledge (PMBoK) from 1987 does not mention opportunities at all. Modern project management schools recognize the importance of opportunities. Opportunities have been included in project management literature since the 1990s, e.g. in the PMBoK, and became a significant part of project risk management in the 2000s,[16] when articles titled "opportunity management" also begin to appear in library searches. Opportunity management thus became an important part of risk management. Modern risk management theory deals with any type of external event, positive or negative. Positive risks are called opportunities. Similarly to risks, opportunities have specific mitigation strategies: exploit, share, enhance, ignore. In practice, risks are considered "usually negative". Risk-related research and practice focus significantly more on threats than on opportunities.
This can lead to negative phenomena such as target fixation.[17] For the most part, these methods consist of the following elements, performed, more or less, in the following order: The risk management knowledge area, as defined by the Project Management Body of Knowledge (PMBoK), consists of the following processes: The International Organization for Standardization (ISO) identifies the following principles for risk management:[5] Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and management must be fundamentally different for the two types of risk.[19] Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and management is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild; this must be avoided if risk assessment and management are to be valid and reliable, according to Mandelbrot. According to the standard ISO 31000, "Risk management – Guidelines", the process of risk management consists of several steps as follows:[5] This involves: After establishing the context, the next step in the process of managing risk is to identify potential risks. Risks are about events that, when triggered, cause problems or benefits. Hence, risk identification can start with the source of problems and those of competitors (benefit), or with the problem's consequences. Some examples of risk sources are: stakeholders of a project, employees of a company, or the weather over an airport.
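Mandelbrot's mild/wild distinction above can be made concrete by comparing tail probabilities: a normal tail decays super-exponentially, a Pareto tail only polynomially. The tail index alpha = 1.5 and the evaluation point are illustrative choices, not figures from the cited work.

```python
import math

# Illustration of the mild-vs-wild distinction described above: the tail
# of a normal ("mild") distribution decays far faster than the
# polynomial tail of a Pareto ("wild") distribution. alpha = 1.5 and the
# evaluation point x = 10 are illustrative choices.
def normal_tail(x):
    """P(Z > x) for a standard normal variable."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pareto_tail(x, alpha=1.5):
    """P(X > x) for a Pareto variable with minimum 1 and tail index alpha."""
    return x ** -alpha if x >= 1 else 1.0

# Ten "units" out, the wild tail is heavier by many orders of magnitude:
print(pareto_tail(10) / normal_tail(10) > 1e18)  # → True
```

This is why extreme events that a normal model treats as practically impossible remain quite plausible under a fat-tailed model, and why assuming mildness for a wild risk is the error Mandelbrot warns against.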
When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. For example: stakeholders withdrawing during a project may endanger funding of the project; confidential information may be stolen by employees even within a closed network; lightning striking an aircraft during takeoff may make all people on board immediate casualties. The chosen method of identifying risks may depend on culture, industry practice and compliance. The identification methods are formed by templates or the development of templates for identifying source, problem or event. Common risk identification methods are: Once risks have been identified, they must then be assessed as to their potential severity of impact (generally a negative impact, such as damage or loss) and the probability of occurrence.[25] These quantities can be either simple to measure, in the case of the value of a lost building, or impossible to know for sure, in the case of an unlikely event whose probability of occurrence is unknown. Therefore, in the assessment process it is critical to make the best educated decisions in order to properly prioritize the implementation of the risk management plan. Even a short-term positive improvement can have long-term negative impacts. Take the "turnpike" example: a highway is widened to allow more traffic, more traffic capacity leads to greater development in the surrounding areas, and over time traffic increases to fill the available capacity. Turnpikes thereby need to be expanded in a seemingly endless cycle. There are many other engineering examples where expanded capacity (to do any function) is soon filled by increased demand. Since expansion comes at a cost, the resulting growth could become unsustainable without forecasting and management.
The fundamental difficulty in risk assessment is determining the rate of occurrence, since statistical information is not available on all kinds of past incidents and is particularly scanty in the case of catastrophic events, simply because of their infrequency. Furthermore, evaluating the severity of the consequences (impact) is often quite difficult for intangible assets. Asset valuation is another question that needs to be addressed. Thus, best educated opinions and available statistics are the primary sources of information. Nevertheless, risk assessment should present information to the organization's senior executives in such a way that the primary risks are easy to understand and risk management decisions can be prioritized within overall company goals. Thus, there have been several theories and attempts to quantify risks. Numerous different risk formulae exist, but perhaps the most widely accepted formula for risk quantification is: "Rate (or probability) of occurrence multiplied by the impact of the event equals risk magnitude." Risk mitigation measures are usually formulated according to one or more of the following major risk options: Later research[26] has shown that the financial benefits of risk management are less dependent on the formula used than on the frequency and manner in which risk assessment is performed. In business it is imperative to be able to present the findings of risk assessments in financial, market, or schedule terms. Robert Courtney Jr. (IBM, 1970) proposed a formula for presenting risks in financial terms. The Courtney formula was accepted as the official risk analysis method for US governmental agencies. The formula proposes the calculation of ALE (annualized loss expectancy) and compares the expected loss value to the security control implementation costs (cost-benefit analysis). Planning for risk management uses four essential techniques.
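The Courtney-style comparison just described weighs the annualized loss expectancy against the cost of a control. The rates and amounts below are illustrative assumptions, not figures from Courtney's work:

```python
# Sketch of a Courtney-style cost-benefit comparison as described above:
# annualized loss expectancy (ALE) versus the cost of a security
# control. All rates and amounts are illustrative assumptions.
def ale(annual_rate_of_occurrence, single_loss_expectancy):
    """ALE = expected events per year x expected loss per event."""
    return annual_rate_of_occurrence * single_loss_expectancy

ale_without_control = ale(0.5, 80_000)  # assume one incident every two years
ale_with_control = ale(0.25, 80_000)    # assume the control halves the rate
control_cost = 15_000                   # assumed annual cost of the control

# The control pays off when the ALE reduction exceeds its cost:
net_benefit = (ale_without_control - ale_with_control) - control_cost
print(net_benefit)  # → 5000.0
```

A positive net benefit argues for implementing the control; a negative one suggests the control costs more than the expected loss it prevents.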
Under the acceptance technique, the business intentionally assumes risks without financial protections in the hope that possible gains will exceed prospective losses. The transfer approach shields the business from losses by shifting risks to a third party, frequently in exchange for a fee, while the third party benefits from the project. By choosing not to participate in high-risk ventures, the avoidance strategy avoids losses but also loses out on possibilities. Finally, the reduction approach lowers risks by implementing strategies like insurance, which provides protection for a variety of asset classes and guarantees reimbursement in the event of losses.[27] Once risks have been identified and assessed, all techniques to manage the risk fall into one or more of these four major categories:[28] Ideal use of these risk control strategies may not be possible. Some of them may involve trade-offs that are not acceptable to the organization or person making the risk management decisions. Another source, the US Department of Defense's Defense Acquisition University, calls these categories ACAT, for Avoid, Control, Accept, or Transfer. This use of the ACAT acronym is reminiscent of another ACAT (for Acquisition Category) used in US defense industry procurements, in which risk management figures prominently in decision making and planning. Similarly to risks, opportunities have specific mitigation strategies: exploit, share, enhance, ignore. This includes not performing an activity that could present risk. Refusing to purchase a property or business to avoid legal liability is one such example; avoiding airplane flights for fear of hijacking is another. Avoidance may seem like the answer to all risks, but avoiding risks also means losing out on the potential gain that accepting (retaining) the risk may have allowed. Not entering a business to avoid the risk of loss also avoids the possibility of earning profits.
Increasing risk regulation in hospitals has led to avoidance of treating higher-risk conditions in favor of patients presenting with lower risk.[29] Risk reduction or "optimization" involves reducing the severity of the loss or the likelihood of the loss occurring. For example, sprinklers are designed to put out a fire to reduce the risk of loss by fire. This method may cause a greater loss by water damage and therefore may not be suitable. Halon fire suppression systems may mitigate that risk, but the cost may be prohibitive as a strategy. Acknowledging that risks can be positive or negative, optimizing risks means finding a balance between negative risk and the benefit of the operation or activity, and between risk reduction and effort applied. By effectively applying Health, Safety and Environment (HSE) management standards, organizations can achieve tolerable levels of residual risk.[30] Modern software development methodologies reduce risk by developing and delivering software incrementally. Early methodologies suffered from the fact that they only delivered software in the final phase of development; any problems encountered in earlier phases meant costly rework and often jeopardized the whole project. By developing in iterations, software projects can limit effort wasted to a single iteration. Outsourcing could be an example of a risk sharing strategy if the outsourcer can demonstrate higher capability at managing or reducing risks.[31] For example, a company may outsource only its software development, the manufacturing of hard goods, or customer support needs to another company, while handling the business management itself. This way, the company can concentrate more on business development without having to worry as much about the manufacturing process, managing the development team, or finding a physical location for a center. Implementing controls can also be an option in reducing risk.
Controls can either detect causes of unwanted events before the consequences occur during use of the product, or detect root causes of unwanted failures that the team can then avoid. Controls may focus on management or decision-making processes. All these may help to make better decisions concerning risk.[32] Risk sharing is briefly defined as "sharing with another party the burden of loss or the benefit of gain, from a risk, and the measures to reduce a risk." The term 'risk transfer' is often used in place of risk sharing in the mistaken belief that you can transfer a risk to a third party through insurance or outsourcing. In practice, if the insurance company or contractor goes bankrupt or ends up in court, the original risk is likely to still revert to the first party. As such, in the terminology of practitioners and scholars alike, the purchase of an insurance contract is often described as a "transfer of risk." However, technically speaking, the buyer of the contract generally retains legal responsibility for the losses "transferred", meaning that insurance may be described more accurately as a post-event compensatory mechanism. For example, a personal injuries insurance policy does not transfer the risk of a car accident to the insurance company. The risk still lies with the policyholder, namely the person who has been in the accident. The insurance policy simply provides that if an accident (the event) occurs involving the policyholder, then some compensation may be payable to the policyholder that is commensurate with the suffering or damage. Methods of managing risk fall into multiple categories. Risk-retention pools technically retain the risk for the group, but spreading it over the whole group involves transfer among individual members of the group. This differs from traditional insurance in that no premium is exchanged between members of the group upfront; instead, losses are assessed to all members of the group.
Risk retention involves accepting the loss, or benefit of gain, from a risk when the incident occurs. True self-insurance falls in this category. Risk retention is a viable strategy for small risks where the cost of insuring against the risk would be greater over time than the total losses sustained. All risks that are not avoided or transferred are retained by default. This includes risks that are so large or catastrophic that either they cannot be insured against or the premiums would be infeasible. War is an example, since most property and risks are not insured against war, so the loss attributed to war is retained by the insured. Any amount of potential loss (risk) over the amount insured is likewise retained risk. Retention may also be acceptable if the chance of a very large loss is small, or if the cost to insure for greater coverage amounts is so great that it would hinder the goals of the organization too much. Select appropriate controls or countermeasures to mitigate each risk. Risk mitigation needs to be approved by the appropriate level of management. For instance, a risk concerning the image of the organization should have a top management decision behind it, whereas IT management would have the authority to decide on computer virus risks. The risk management plan should propose applicable and effective security controls for managing the risks. For example, an observed high risk of computer viruses could be mitigated by acquiring and implementing antivirus software. A good risk management plan should contain a schedule for control implementation and responsible persons for those actions.
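The retention rule described above (retain a small risk when insuring it would cost more over time than the losses sustained) reduces to a comparison of premium against expected loss. A minimal sketch with hypothetical figures; the function name is an assumption for illustration:

```python
# Illustrative sketch of the retention decision described above; the
# function name and figures are hypothetical, not from any standard.

def should_retain(annual_premium: float,
                  loss_probability: float,
                  loss_amount: float) -> bool:
    """Retain the risk when the premium exceeds the expected annual loss."""
    expected_annual_loss = loss_probability * loss_amount
    return annual_premium > expected_annual_loss

# A $500/year policy against a 1% chance of a $20,000 loss
# (expected loss $200/year): retention looks preferable.
print(should_retain(500, 0.01, 20_000))
```

Note that this expected-value comparison ignores risk aversion and the catastrophic cases discussed above, where retention happens by default rather than by choice.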
There are four basic steps in a risk management plan: threat assessment, vulnerability assessment, impact assessment, and risk mitigation strategy development.[33] According to ISO/IEC 27001, the stage immediately after completion of the risk assessment phase consists of preparing a Risk Treatment Plan, which should document the decisions about how each of the identified risks should be handled. Mitigation of risks often means selection of security controls, which should be documented in a Statement of Applicability, which identifies which particular control objectives and controls from the standard have been selected, and why. Implementation follows all of the planned methods for mitigating the effect of the risks: purchase insurance policies for the risks that it has been decided to transfer to an insurer, avoid all risks that can be avoided without sacrificing the entity's goals, reduce others, and retain the rest. Initial risk management plans will never be perfect. Practice, experience, and actual loss results will necessitate changes in the plan and contribute information to allow possibly different decisions to be made in dealing with the risks being faced. Risk analysis results and management plans should be updated periodically, both to evaluate whether the previously selected controls are still applicable and effective, and to account for possible changes in risk levels in the business environment. Enterprise risk management (ERM) defines risk as those possible events or circumstances that can have negative influences on the enterprise in question, where the impact can be on the very existence, the resources (human and capital), the products and services, or the customers of the enterprise, as well as external impacts on society, markets, or the environment. There are various defined frameworks here, where every probable risk can have a pre-formulated plan to deal with its possible consequences (to ensure contingency if the risk becomes a liability).
Managers thus analyze and monitor both the internal and external environment facing the enterprise, addressing business risk generally, and any impact on the enterprise achieving its strategic goals. ERM thus overlaps various other disciplines - operational risk management, financial risk management, etc. - but is differentiated by its strategic and long-term focus.[34] ERM systems usually focus on safeguarding reputation, acknowledging its significant role in comprehensive risk management strategies.[35] As applied to finance, risk management concerns the techniques and practices for measuring, monitoring and controlling the market and credit risk (and operational risk) on a firm's balance sheet, due to a bank's credit and trading exposure, or regarding a fund manager's portfolio value; for an overview see Finance § Risk management. The concept of "contractual risk management" emphasises the use of risk management techniques in contract deployment, i.e. managing the risks which are accepted through entry into a contract.
Norwegian academic Petri Keskitalo defines "contractual risk management" as "a practical, proactive and systematical contracting method that uses contract planning and governance to manage risks connected to business activities".[36] In an article by Samuel Greengard published in 2010, two US legal cases are mentioned which emphasise the importance of having a strategy for dealing with risk.[37] Greengard recommends using industry-standard contract language as much as possible to reduce risk, relying on clauses which have been in use and subject to established court interpretation over a number of years.[37] Customs risk management is concerned with the risks which arise within the context of international trade and have a bearing on safety and security, including the risk that illicit drugs and counterfeit goods can pass across borders and the risk that shipments and their contents are incorrectly declared.[40] The European Union has adopted a Customs Risk Management Framework (CRMF) applicable across the union and throughout its member states, whose aims include establishing a common level of customs control protection and a balance between the objectives of safe customs control and the facilitation of legitimate trade.[41] Two events which prompted the European Commission to review customs risk management policy in 2012-13 were the September 11 attacks of 2001 and the 2010 transatlantic aircraft bomb plot involving packages being sent from Yemen to the United States, referred to by the Commission as "the October 2010 (Yemen) incident".[42] Enterprise security risk management (ESRM) is a security program management approach that links security activities to an enterprise's mission and business goals through risk management methods. The security leader's role in ESRM is to manage risks of harm to enterprise assets in partnership with the business leaders whose assets are exposed to those risks.
ESRM involves educating business leaders on the realistic impacts of identified risks, presenting potential strategies to mitigate those impacts, then enacting the option chosen by the business in line with accepted levels of business risk tolerance.[43] For medical devices, risk management is a process for identifying, evaluating and mitigating risks associated with harm to people and damage to property or the environment. Risk management is an integral part of medical device design and development, production processes and evaluation of field experience, and is applicable to all types of medical devices. The evidence of its application is required by most regulatory bodies such as the US FDA. The management of risks for medical devices is described by the International Organization for Standardization (ISO) in ISO 14971:2019, Medical Devices—The application of risk management to medical devices, a product safety standard. The standard provides a process framework and associated requirements for management responsibilities, risk analysis and evaluation, risk controls and lifecycle risk management. Guidance on the application of the standard is available via ISO/TR 24971:2020. The European version of the risk management standard was updated in 2009 and again in 2012 to refer to the Medical Devices Directive (MDD) and Active Implantable Medical Device Directive (AIMDD) revision in 2007, as well as the In Vitro Medical Device Directive (IVDD). The requirements of EN 14971:2012 are nearly identical to ISO 14971:2007. The differences include three "(informative)" Z Annexes that refer to the new MDD, AIMDD, and IVDD. These annexes indicate content deviations that include the requirement for risks to be reduced as far as possible, and the requirement that risks be mitigated by design and not by labeling on the medical device (i.e., labeling can no longer be used to mitigate risk).
Typical risk analysis and evaluation techniques adopted by the medical device industry include hazard analysis, fault tree analysis (FTA), failure mode and effects analysis (FMEA), hazard and operability study (HAZOP), and risk traceability analysis for ensuring risk controls are implemented and effective (i.e. tracing identified risks to product requirements, design specifications, verification and validation results, etc.). FTA analysis requires diagramming software. FMEA analysis can be done using a spreadsheet program. There are also integrated medical device risk management solutions. Through a draft guidance, the FDA has introduced another method named "Safety Assurance Case" for medical device safety assurance analysis. The safety assurance case is a structured argument about systems, appropriate for scientists and engineers and supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given environment. With the guidance, a safety assurance case is expected for safety-critical devices (e.g. infusion devices) as part of the pre-market clearance submission, e.g. 510(k). In 2013, the FDA introduced another draft guidance expecting medical device manufacturers to submit cybersecurity risk analysis information. Project risk management must be considered at the different phases of acquisition. At the beginning of a project, the advancement of technical developments, or threats presented by a competitor's projects, may prompt a risk or threat assessment and subsequent evaluation of alternatives (see Analysis of Alternatives). Once a decision is made and the project begun, more familiar project management applications can be used.[44][45][46] Megaprojects (sometimes also called "major programs") are large-scale investment projects, typically costing more than $1 billion per project.
Megaprojects include major bridges, tunnels, highways, railways, airports, seaports, power plants, dams, wastewater projects, coastal flood protection schemes, oil and natural gas extraction projects, public buildings, information technology systems, aerospace projects, and defense systems. Megaprojects have been shown to be particularly risky in terms of finance, safety, and social and environmental impacts. Risk management is therefore particularly pertinent for megaprojects, and special methods and special education have been developed for such risk management.[47] It is important to assess risk in regard to natural disasters like floods, earthquakes, and so on. Outcomes of natural disaster risk assessment are valuable when considering future repair costs, business interruption losses and other downtime, effects on the environment, insurance costs, and the proposed costs of reducing the risk.[48][49] The Sendai Framework for Disaster Risk Reduction is a 2015 international accord that has set goals and targets for disaster risk reduction in response to natural disasters.[50] There are regular International Disaster and Risk Conferences in Davos to deal with integral risk management. Several tools can be used to assess risk and risk management of natural disasters and other climate events, including geospatial modeling, a key component of land change science. This modeling requires an understanding of geographic distributions of people as well as an ability to calculate the likelihood of a natural disaster occurring. The management of risks to persons and property in wilderness and remote natural areas has developed with increases in outdoor recreation participation and decreased social tolerance for loss.
Organizations providing commercial wilderness experiences can now align with national and international consensus standards for training and equipment such as ANSI/NASBLA 101-2017 (boating),[51] UIAA 152 (ice climbing tools),[52] and European Norm 13089:2015 + A1:2015 (mountaineering equipment).[53][54] The Association for Experiential Education offers accreditation for wilderness adventure programs.[55] The Wilderness Risk Management Conference provides access to best practices, and specialist organizations provide wilderness risk management consulting and training.[56] The text Outdoor Safety – Risk Management for Outdoor Leaders,[57] published by the New Zealand Mountain Safety Council, provides a view of wilderness risk management from the New Zealand perspective, recognizing the value of national outdoor safety legislation and devoting considerable attention to the roles of judgment and decision-making processes in wilderness risk management. One popular model for risk assessment is the Risk Assessment and Safety Management (RASM) Model developed by Rick Curtis, author of The Backpacker's Field Manual.[58] The formula for the RASM Model is: Risk = Probability of Accident × Severity of Consequences. The RASM Model weighs negative risk (the potential for loss) against positive risk (the potential for growth). IT risk is a risk related to information technology. This is a relatively new term, reflecting an increasing awareness that information security is simply one facet of a multitude of risks that are relevant to IT and the real-world processes it supports. "Cybersecurity is tied closely to the advancement of technology. It lags only long enough for incentives like black markets to evolve and new exploits to be discovered. There is no end in sight for the advancement of technology, so we can expect the same from cybersecurity."[59] ISACA's Risk IT framework ties IT risk to enterprise risk management.
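The RASM formula quoted above can be sketched directly. The 1-10 rating scales and example hazards here are illustrative assumptions, not part of the model's definition:

```python
# Minimal sketch of the RASM formula: Risk = Probability of Accident x
# Severity of Consequences. The 1-10 scales below are assumptions.

def rasm_risk(probability_of_accident: float,
              severity_of_consequences: float) -> float:
    """Risk = Probability of Accident x Severity of Consequences."""
    return probability_of_accident * severity_of_consequences

# Rating both factors on a 1-10 scale, a rare-but-severe hazard can
# outscore a frequent-but-trivial one:
print(rasm_risk(4, 8))  # e.g. a river crossing: unlikely, severe -> 32
print(rasm_risk(9, 1))  # e.g. a muddy trail: likely, trivial -> 9
```

In the full model, a comparable positive-risk score (the potential for growth) would be weighed against this negative-risk score.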
Duty of Care Risk Analysis (DoCRA) evaluates risks and their safeguards and considers the interests of all parties potentially affected by those risks.[60] The Verizon Data Breach Investigations Report (DBIR) shows how organizations can leverage the VERIS Community Database (VCDB) to estimate risk. Using the HALOCK methodology within CIS RAM and data from VCDB, professionals can determine threat likelihood for their industries. IT risk management includes "incident handling", an action plan for dealing with intrusions, cyber-theft, denial of service, fire, floods, and other security-related events. According to the SANS Institute, it is a six-step process: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned.[61] Operational risk management (ORM) is the oversight of operational risk, including the risk of loss resulting from inadequate or failed internal processes and systems, human factors, or external events. Given the nature of operations, ORM is typically a "continual" process, and will include ongoing risk assessment, risk decision making, and the implementation of risk controls. For the offshore oil and gas industry, operational risk management is regulated by the safety case regime in many countries. Hazard identification and risk assessment tools and techniques are described in the international standard ISO 17776:2000, and organisations such as the IADC (International Association of Drilling Contractors) publish guidelines for Health, Safety and Environment (HSE) Case development which are based on the ISO standard. Further, diagrammatic representations of hazardous events are often expected by governmental regulators as part of risk management in safety case submissions; these are known as bow-tie diagrams (see Network theory in risk assessment). The technique is also used by organisations and regulators in the mining, aviation, health, defence, industrial and finance sectors.
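The six SANS incident-handling steps cited above form a strict sequence. A minimal sketch; the enum and helper names are illustrative assumptions, not SANS artifacts:

```python
# The six SANS incident-handling steps cited above, modeled as an
# ordered enumeration; class and helper names are assumptions.

from enum import Enum

class IncidentPhase(Enum):
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    LESSONS_LEARNED = 6

def next_phase(phase: IncidentPhase):
    """Advance through the process in order; None after the final step."""
    if phase is IncidentPhase.LESSONS_LEARNED:
        return None
    return IncidentPhase(phase.value + 1)

print(next_phase(IncidentPhase.CONTAINMENT).name)  # ERADICATION
```

Encoding the order makes it easy for tooling (a ticketing system, say) to refuse out-of-sequence transitions such as jumping from Identification straight to Recovery.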
The principles and tools for quality risk management are increasingly being applied to different aspects of pharmaceutical quality systems. These aspects include development, manufacturing, distribution, inspection, and submission/review processes throughout the lifecycle of drug substances, drug products, and biological and biotechnological products (including the use of raw materials, solvents, excipients, and packaging and labeling materials). Risk management is also applied to the assessment of microbiological contamination in relation to pharmaceutical products and cleanroom manufacturing environments.[62] Supply chain risk management (SCRM) aims at maintaining supply chain continuity in the event of scenarios or incidents which could interrupt normal business and hence profitability. Risks to the supply chain range from the everyday to the exceptional, from unpredictable natural events (such as tsunamis and pandemics) to counterfeit products, and reach across quality, security, resiliency and product integrity. Mitigation of these risks can involve various elements of the business including logistics and cybersecurity, as well as the areas of finance and operations. Travel risk management is concerned with how organisations assess the risks to their staff when travelling, especially when travelling overseas. In the field of international standards, ISO 31030:2021 addresses good practice in travel risk management.[63] The Global Business Travel Association's education and research arm, the GBTA Foundation,
found in 2015 that most businesses covered by their research employed travel risk management protocols aimed at ensuring the safety and well-being of their business travelers.[64] Six key principles of travel risk awareness put forward by the association are preparation, awareness of surroundings and people, keeping a low profile, adopting an unpredictable routine, communications, and layers of protection.[65] Traveler tracking using mobile tracking and messaging technologies had by 2015 become a widely used aspect of travel risk management.[64] Risk communication is a complex cross-disciplinary academic field that is part of risk management and related to fields like crisis communication. The goal is to make sure that targeted audiences understand how risks affect them or their communities by appealing to their values.[66][67] Risk communication is particularly important in disaster preparedness,[68] public health,[69] and preparation for major global catastrophic risks.[68] For example, the impacts of climate change and climate risk affect every part of society, so communicating that risk is an important climate communication practice, in order for societies to plan for climate adaptation.[70] Similarly, in pandemic prevention, understanding of risk helps communities stop the spread of disease and improve responses.[71] Risk communication deals with possible risks and aims to raise awareness of those risks to encourage or persuade changes in behavior to relieve threats in the long term. Crisis communication, on the other hand, is aimed at raising awareness of a specific type of threat, its magnitude, outcomes, and the specific behaviors to adopt to reduce the threat.[72] Risk communication in food safety is part of the risk analysis framework. Together with risk assessment and risk management, risk communication aims to reduce foodborne illnesses.
Food safety risk communication is an obligatory activity for food safety authorities[73] in countries which have adopted the Agreement on the Application of Sanitary and Phytosanitary Measures.
https://en.wikipedia.org/wiki/Risk_management
A massive open online course (MOOC /muːk/) or an open online course is an online course aimed at unlimited participation and open access via the Web.[1] In addition to traditional course materials, such as filmed lectures, readings, and problem sets, many MOOCs provide interactive courses with user forums or social media discussions to support community interactions among students, professors, and teaching assistants (TAs), as well as immediate feedback to quick quizzes and assignments. MOOCs are a widely researched development in distance education,[2] first introduced in 2008,[3] that emerged as a popular mode of learning in 2012, a year called the "Year of the MOOC".[4][5][6] Early MOOCs (cMOOCs: connectivist MOOCs) often emphasized open-access features, such as open licensing of content, structure and learning goals, to promote the reuse and remixing of resources. Some later MOOCs (xMOOCs: extended MOOCs) use closed licenses for their course materials while maintaining free access for students.[7][8][9][10] Before the digital age, distance learning appeared in the form of correspondence courses in the 1890s–1920s, and later in radio and television broadcasts of courses and early forms of e-learning. Typically fewer than five percent of the students would complete a course.[11] For example, the Stanford Honors Cooperative Program, established in 1954, eventually offered video classes on-site at companies, at night, leading to a fully accredited master's degree. This program was controversial because the companies paid double the normal tuition paid by full-time students.[12] The 2000s saw changes in online or e-learning and distance education, with increasing online presence, open learning opportunities, and the development of MOOCs.[13] By 2010, audiences for the most popular college courses, such as "Justice" with Michael J.
Sandel and "Human Anatomy" with Marian Diamond, were reaching millions.[14] The first MOOCs emerged from the open educational resources (OER) movement, which was sparked by the MIT OpenCourseWare project.[15] The OER movement was motivated by the work of researchers who pointed out that class size and learning outcomes had no established connection; here, Daniel Barwick's work is the most often-cited example.[16][17] Within the OER movement, Wikiversity was founded in 2006, and the first open course on the platform was organised in 2007. A ten-week course with more than 70 students was used to test the idea of making Wikiversity an open and free platform for education in the tradition of Scandinavian free adult education, the folk high school, and the free school movement.[18] The term MOOC was coined in 2008 by Dave Cormier of the University of Prince Edward Island in response to a course called Connectivism and Connective Knowledge (also known as CCK08). CCK08, which was led by George Siemens of Athabasca University and Stephen Downes of the National Research Council, consisted of 25 tuition-paying students in Extended Education at the University of Manitoba, as well as over 2200 online students from the general public who paid nothing.[19] All course content was available through RSS feeds, and online students could participate through collaborative tools, including blog posts, threaded discussions in Moodle, and Second Life meetings.[20][21][22] Stephen Downes considers these so-called cMOOCs to be more "creative and dynamic" than the current xMOOCs, which he believes "resemble television shows or digital textbooks".[19] Other cMOOCs were then developed; for example, Jim Groom from the University of Mary Washington and Michael Branson Smith of York College, City University of New York hosted MOOCs through several universities, starting with 2011's 'Digital Storytelling' (ds106) MOOC.[23] MOOCs from private, non-profit institutions emphasized prominent faculty members and expanded existing distance learning
offerings (e.g., podcasts) into free and open online courses.[24] Alongside the development of these open courses, other e-learning platforms emerged – such as Khan Academy, Peer-to-Peer University (P2PU), Udemy, and Alison – which are viewed as similar to MOOCs and work outside the university system or emphasize individual self-paced lessons.[25][26][27][28][29] As MOOCs developed over time, multiple conceptions of the platform emerged, of which two main types can be distinguished: those that emphasize a connectivist philosophy, and those that resemble more traditional courses. To distinguish the two, several early adopters of the platform proposed the terms "cMOOC" and "xMOOC".[31][32] cMOOCs are based on principles from connectivist pedagogy indicating that material should be aggregated (rather than pre-selected), remixable, re-purposable, and feeding forward (i.e. evolving materials should be targeted at future learning).[33][34][35][36] cMOOC instructional design approaches attempt to connect learners to each other to answer questions or collaborate on joint projects. This may include emphasizing collaborative development of the MOOC.[37] Andrew Ravenscroft of London Metropolitan University claimed that connectivist MOOCs better support collaborative dialogue and knowledge building.[38][39] xMOOCs have a much more traditional course structure, characterized by a specified aim of completing the course and obtaining certification of knowledge of the subject matter. They typically present a clearly specified syllabus of recorded lectures and self-test problems. However, some providers require paid subscriptions for acquiring graded materials and certificates.
They employ elements of the original MOOC, but are, in effect, branded IT platforms that offer content distribution partnerships to institutions.[32] The instructor is the expert provider of knowledge, and student interactions are usually limited to asking for assistance and advising each other on difficult points. According to The New York Times, 2012 became "the year of the MOOC" as several well-financed providers, associated with top universities, emerged, including Coursera, Udacity, and edX.[4] During a presentation at SXSWedu in early 2013, Instructure CEO Josh Coates suggested that MOOCs were in the midst of a hype cycle, with expectations undergoing wild swings.[41] Dennis Yang, president of MOOC provider Udemy, later made the same point in an article for The Huffington Post.[42] Many universities scrambled to join in the "next big thing", as did more established online education service providers such as Blackboard Inc, in what has been called a "stampede". Dozens of universities in Canada, Mexico, Europe and Asia announced partnerships with the large American MOOC providers.[43][4] By early 2013, questions emerged about whether academia was "MOOC'd out".[40][44] This trend was later confirmed in continuing analysis.[45] The industry has an unusual structure, consisting of linked groups including MOOC providers, the larger non-profit sector, universities, related companies and venture capitalists. The Chronicle of Higher Education lists the major providers as the non-profits Khan Academy and edX, and the for-profits Udacity and Coursera.[46] The larger non-profit organizations include the Bill & Melinda Gates Foundation, the MacArthur Foundation, the National Science Foundation, and the American Council on Education.
University pioneers include Stanford, Harvard, MIT, the University of Pennsylvania, Caltech, the University of Texas at Austin, the University of California at Berkeley, and San Jose State University.[46] Related companies investing in MOOCs include Google and educational publisher Pearson PLC. Venture capitalists include Kleiner Perkins Caufield & Byers, New Enterprise Associates and Andreessen Horowitz.[46] In the fall of 2011, Stanford University launched three courses.[47] The first of those courses was Introduction Into AI, launched by Sebastian Thrun and Peter Norvig. Enrollment quickly reached 160,000 students. The announcement was followed within weeks by the launch of two more MOOCs, by Andrew Ng and Jennifer Widom. Following the publicity and high enrollment numbers of these courses, Thrun started a company he named Udacity, and Daphne Koller and Andrew Ng launched Coursera.[48] In January 2013, Udacity launched its first MOOCs-for-credit, in collaboration with San Jose State University. In May 2013, the company announced the first entirely MOOC-based master's degree, a collaboration between Udacity, AT&T and the Georgia Institute of Technology, costing $7,000, a fraction of its normal tuition.[49] Concerned about the commercialization of online education, in 2012 MIT created the not-for-profit MITx.[50] The inaugural course, 6.002x, launched in March 2012. Harvard joined the group, renamed edX, that spring, and the University of California, Berkeley joined in the summer. The initiative then added the University of Texas System, Wellesley College and Georgetown University. In September 2013, edX announced a partnership with Google to develop MOOC.org, a site for non-xConsortium groups to build and host courses. Google was to work on the core platform development with edX partners, and in addition, Google and edX would collaborate on research into how students learn and how technology can transform learning and teaching.
MOOC.org was to adopt Google's infrastructure.[51] The Chinese Tsinghua University MOOC platform XuetangX.com (launched October 2013) uses the Open edX platform.[52] Before 2013, each MOOC tended to develop its own delivery platform. In April 2013, edX joined with Stanford University, which previously had its own platform called Class2Go, to work on XBlock SDK, a joint open-source platform. It is available to the public under the AGPL open source license, which requires that all improvements to the platform be publicly posted and made available under the same license.[53] Stanford Vice Provost John Mitchell said that the goal was to provide the "Linux of online learning".[54] This is unlike companies such as Coursera that have developed their own platforms.[55] By November 2013, edX offered 94 courses from 29 institutions around the world. During its first 13 months of operation (ending March 2013), Coursera offered about 325 courses, with 30% in the sciences, 28% in arts and humanities, 23% in information technology, 13% in business and 6% in mathematics.[56] Udacity offered 26 courses. The number of courses offered has since increased dramatically: as of January 2016, edX offered 820 courses, Coursera 1,580 and Udacity more than 120. According to FutureLearn, the British Council's Understanding IELTS: Techniques for English Language Tests had an enrollment of over 440,000 students.[57] Early cMOOCs such as CCK08 and ds106 used innovative pedagogy (connectivism), with distributed learning materials rather than a video-lecture format, and a focus on education and learning, and digital storytelling, respectively.[19][20][21][22][23] Following the 2011 launch of three Stanford xMOOCs, including Introduction to AI by Sebastian Thrun and Peter Norvig,[47] a number of other innovative courses emerged. As of May 2014, more than 900 MOOCs were offered by US universities and colleges.
As of February 2013, dozens of universities had affiliated with MOOCs, including many international institutions.[43][58] In addition, some organisations operate their own MOOCs – including Google's Power Search. A range of courses have emerged; "There was a real question of whether this would work for humanities and social science", said Ng. However, psychology and philosophy courses are among Coursera's most popular. Student feedback and completion rates suggest that they are as successful as math and science courses,[59] even though the corresponding completion rates are lower.[10] In January 2012, the University of Helsinki launched a Finnish MOOC in programming. The MOOC is used as a way to offer high schools the opportunity to provide programming courses for their students, even where no local premises or faculty exist to organize such courses.[60] The course has been offered recurrently, and the top-performing students are admitted to a BSc and MSc program in Computer Science at the University of Helsinki.[60][61] At a meeting on e-learning and MOOCs, Jaakko Kurhila, Head of Studies at the University of Helsinki's Department of Computer Science, claimed that there had been over 8,000 participants in their MOOCs altogether.[62] On 18 June 2012, Ali Lemus from Galileo University[63] launched the first Latin American MOOC, titled "Desarrollando Aplicaciones para iPhone y iPad" ("Developing Applications for iPhone and iPad").[64] This MOOC was a Spanish remix of Stanford University's popular "CS 193P iPhone Application Development" and had 5,380 students enrolled.
The technology used to host the MOOC was the Galileo Educational System (GES) platform, which is based on the .LRN project.[65] "Gender Through Comic Books" was a course taught by Ball State University's Christina Blanch on Instructure's Canvas Network, a MOOC platform launched in November 2012.[66] The course used examples from comic books to teach academic concepts about gender and perceptions.[67] In November 2012, the University of Miami launched its first high school MOOC as part of Global Academy, its online high school. The course became available for high school students preparing for the SAT Subject Test in biology.[68] During the Spring 2013 semester, Cathy Davidson and Dan Ariely taught "Surprise Endings: Social Science and Literature", a SPOC taught in person at Duke University and also run as a MOOC, with students from Duke running the online discussions.[4] In the summer of 2013 in the UK, Physiopedia ran its first MOOC, on professional ethics, in collaboration with the University of the Western Cape in South Africa.[69] This was followed by a second course in 2014, Physiotherapy Management of Spinal Cord Injuries, which was accredited by the World Confederation of Physical Therapy and attracted approximately 4,000 participants with a 40% completion rate.[70][71] Physiopedia is the first provider of physiotherapy/physical therapy MOOCs, accessible to participants worldwide.[72] In March 2013, Coursolve piloted a crowdsourced business strategy course for 100 organizations with the University of Virginia.[73] A data science MOOC began in May 2013.[74] In May 2013, Coursera announced free e-books for some courses in partnership with Chegg, an online textbook-rental company. Students would use Chegg's e-reader, which limits copying and printing, and could use the book only while enrolled in the class.[75] In June 2013, the University of North Carolina at Chapel Hill launched Skynet University,[76] which offers MOOCs on introductory astronomy.
Participants gain access to the university's global network of robotic telescopes, including those in the Chilean Andes and Australia. In July 2013 the University of Tasmania launched Understanding Dementia. The course had a completion rate of 39%[77] and was recognized in the journal Nature.[78] Startup Veduca[79] launched the first MOOCs in Brazil, in partnership with the University of São Paulo, in June 2013. The first two courses were Basic Physics, taught by Vanderlei Salvador Bagnato, and Probability and Statistics, taught by Melvin Cymbalista and André Leme Fleury.[80] In the first two weeks following the launch at the Polytechnic School of the University of São Paulo, more than 10,000 students enrolled.[81][82] Startup Wedubox (a finalist at MassChallenge 2013)[83] launched the first MOOC in finance and the third in Latin America. The MOOC, created by Jorge Borrero (MBA, Universidad de la Sabana) under the title "WACC and the cost of capital", reached 2,500 students by December 2013, only two months after launch.[84] In January 2014, Georgia Institute of Technology partnered with Udacity and AT&T to launch their Online Master of Science in Computer Science (OMSCS). Priced at $7,000, OMSCS was the first "MOOD" (massive online open degree), a master's degree in computer science.[85][86][87] In September 2014, the high street retailer Marks & Spencer partnered with the University of Leeds to construct a MOOC business course "which will use case studies from the Company Archive alongside research from the University to show how innovation and people are key to business success".
The course was offered on the UK-based MOOC platform FutureLearn.[88] On 16 March 2015, the University of Cape Town launched its first MOOC, Medicine and the Arts, on the UK-led platform FutureLearn.[89] In July 2015, OpenClassrooms, jointly with IESA Multimedia, launched the first MOOC-based bachelor's degree in multimedia project management recognized by the French state.[90] In January 2018, Brown University opened its first "game-ified" course on edX, Fantastic Places, Unhuman Humans: Exploring Humanity Through Literature, by Professor James Egan. It featured a storyline in which the learner helps Leila, a lost humanoid wandering different worlds, playing mini-games to advance through the course.[91] The Pacific Open Learning Health Net, set up by the WHO in 2003, developed an online learning platform in 2004–05 for the continuing development of health professionals. Courses were originally delivered via Moodle, but looked more like other MOOCs by 2012.[92] By June 2012, more than 1.5 million people had registered for classes through Coursera, Udacity or edX.[93][94] As of 2013, the range of registered students appeared to be broad, diverse and non-traditional, but concentrated among English speakers in rich countries. By March 2013, Coursera alone had registered about 2.8 million learners.[56] By October 2013, Coursera enrollment continued to surge, surpassing 5 million, while edX had independently reached 1.3 million.[59] India's first online course was rolled out in 2003, making it potentially the first Asian MOOC, under the aegis of the National Programme on Technology Enhanced Learning (NPTEL), instituted by the Ministry of Human Resource Development (MHRD, later the Ministry of Education) and the Indian Institutes of Technology (IITs). In the words of Prof.
Thangaraj[95] from IIT Madras, the prime mover of this initiative, the motivation for these MOOCs was "...a huge number of people in India, students particularly, who have a strong analytical and problem-solving background. Not all of them get into IITs or the top institutions. What happens to those guys?..". The courses aimed to provide high-quality lectures from Indian faculty, complementing the mostly European and American offerings. Today most of them combine video lectures, online and in-person exams, and certification. The offering currently comprises approximately 3,000 courses. The courses are free if one does not want a certificate, i.e. in audit mode; for certification the platform charges approximately ₹1,000 (about US$12). A course billed as "Asia's first MOOC", given by the Hong Kong University of Science and Technology through Coursera starting in April 2013, registered 17,000 students. About 60% were from "rich countries", with many of the rest from middle-income countries in Asia, South Africa, Brazil or Mexico.
Fewer students enrolled from areas with more limited access to the internet, and students from the People's Republic of China may have been discouraged by Chinese government policies.[96] Koller stated in May 2013 that the majority of people taking Coursera courses had already earned college degrees.[97] A Stanford University study of a more general group of students – "active learners", anybody who participated beyond just registering – found that 64% of high school active learners were male, and 88% were male in undergraduate- and graduate-level courses.[98] A study from Stanford University's Learning Analytics group identified four types of students: auditors, who watched videos throughout the course but took few quizzes or exams; completers, who viewed most lectures and took part in most assessments; disengaged learners, who quickly dropped the course; and sampling learners, who might only occasionally watch lectures.[98] They also estimated the percentage of students in each group.[99] Jonathan Haber focused on questions of what students are learning and on student demographics. About half the students taking US courses are from other countries and do not speak English as their first language. He found some courses to be meaningful, especially for reading comprehension. Video lectures followed by multiple-choice questions can be challenging, since the questions are often the "right questions". Smaller discussion boards paradoxically offer the best conversations. Larger discussions can be "really, really thoughtful and really, really misguided", with long discussions becoming rehashes or "the same old stale left/right debate".[100] MIT and Stanford University offered their initial MOOCs in computer science and electrical engineering. Because engineering courses need prerequisites, upper-level engineering courses were nearly absent from the MOOC list at the outset.
By 2015, several universities were offering undergraduate and advanced-level engineering courses.[101][102][103] In 2013, the Chronicle of Higher Education surveyed 103 professors who had taught MOOCs. "Typically a professor spent over 100 hours on his MOOC before it even started, by recording online lecture videos and doing other preparation", though some instructors' pre-class preparation was "a few dozen hours". The professors then spent 8–10 hours per week on the course, including participation in discussion forums.[104] The medians were: 33,000 enrollees; 2,600 passing; and one teaching assistant helping with the class. 74% of the classes used automated grading, and 34% used peer grading. 97% of the instructors used original videos, 75% used open educational resources and 27% used other resources. 9% of the classes required a physical textbook and 5% required an e-book.[104][105] Unlike traditional courses, MOOCs require additional skills, provided by videographers, instructional designers, IT specialists and platform specialists. Georgia Tech professor Karen Head reports that 19 people work on their MOOCs and that more are needed.[106] The platforms have availability requirements similar to media/content-sharing websites, due to the large number of enrollees. MOOCs typically use cloud computing and are often created with authoring systems. Authoring tools for the creation of MOOCs are specialized packages of educational software, such as Elicitus, IMC Content Studio and Lectora, that are easy to use and support e-learning standards like SCORM and AICC. Despite their potential to support learning and education, MOOCs face a major concern over attrition and dropout rates. Even though the number of learners who enroll in a course tends to be in the thousands, only a very small portion of them complete it.
According to the visualizations and analysis conducted by Katy Jordan (2015),[107] the investigated MOOCs have a typical enrollment of 25,000, though enrollment has reached values up to about 230,000. Jordan reports that the average completion rate for such MOOCs is approximately 15%. Early data from Coursera suggest a completion rate of 7–9%.[108] Coffrin et al. (2012)[109] report completion rates that are even lower (between 3% and 5%), and observe a consistent and noticeable weekly decline in the number of students participating in a course. Others[108][110][111][112][113] have also shown attrition rates similar to Coffrin's. One example is the course Bioelectricity in the fall of 2012 at Duke University, where 12,725 students enrolled, but only 7,761 ever watched a video, 3,658 attempted a quiz, 345 attempted the final exam, and 313 passed, earning a certificate.[114][115] Students paying $50 for a feature designed to prevent cheating on exams have completion rates of about 70%.[116] Yang et al. (2013)[117] suggest that even though a large proportion of students drop out early for a variety of reasons, a significant proportion remain in the course and drop out later, so that attrition happens over time. Before analyzing factors related to attrition and dropout, it is important to keep in mind that the average completion rate for MOOCs is not a good indicator on its own. Completion rate does not reflect every student's experience, because different students have diverse purposes.[118] For example, Khe Foon Hew (2016)[118] indicates that some students take part in MOOCs just out of interest or to find the extrinsic value of a course; they drop the course if it does not satisfy their purpose. Nevertheless, completion rate is objective enough to reflect student engagement.
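The Bioelectricity figures above form a steep enrollment funnel. As a quick illustrative sketch (the numbers come from the paragraph above; the stage labels and code are mine, not from any of the cited studies), the stage-to-stage retention and overall completion rate can be computed directly:

```python
# Enrollment funnel for Duke's Bioelectricity MOOC (fall 2012),
# using the figures quoted in the text above.
stages = [
    ("enrolled", 12725),
    ("watched a video", 7761),
    ("attempted a quiz", 3658),
    ("attempted the final exam", 345),
    ("passed / earned a certificate", 313),
]

# Retention from each stage to the next.
for (prev_name, prev_n), (name, n) in zip(stages, stages[1:]):
    print(f"{name}: {n} ({n / prev_n:.1%} of those who {prev_name})")

# Overall completion rate: certificates earned over enrollments.
overall = stages[-1][1] / stages[0][1]
print(f"overall completion rate: {overall:.1%}")  # about 2.5%
```

The funnel view makes clear why single-number completion rates can mislead: most attrition happens before any assessment is attempted, consistent with Yang et al.'s observation that dropout accumulates over time.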
Much research has investigated why students drop out of MOOC courses and what factors contribute to dropping out. For example, Carolyn Rosé et al. (2014)[119] investigated how three social factors predict attrition among students who participated in the course discussion forum. The authors found that students who serve as authorities in the community seem more committed to it and thus less inclined to drop out of the course. In addition, students who actively participated in the course from the first week were 35% less likely to drop out, compared with the average population. Lastly, an analysis of attrition patterns in a sub-community showed that attrition was related to the engagement of particular students with one another. One interpretation of this finding, according to Rosé et al. (2014),[119] is that while participating in MOOCs, students create virtual cohorts who progress and engage with the material in similar ways. Thus, if students start dropping out, other students may follow, as they perceive the environment as less supportive or engaging without their peers. Other studies explore how motivation and self-regulated learning relate to MOOC dropout and attrition. Carson (2002)[120] investigated characteristics of self-directed learning in students of grades 8–12 who took online courses through a statewide online program. Two of the hypotheses the study explored were whether there exist distinct underlying classes (categories) of self-regulated learners, and whether membership in these classes was associated with significantly different outcomes such as online course completion, final online grade, or GPA. The results show that there are different latent classes of self-regulated learning within the population of online students, designated as high, moderate, and low self-directed learning.
In addition, the results support the hypothesis that the self-directed learning class a student belongs to is associated with significantly different course completion rates and course achievement (measured by completion of the online courses, the final online course grade and cumulative GPA). In other words, course completion and self-directed learning were found to be significantly related. One online survey published a "top ten" list of reasons for dropping out of a MOOC.[121] The list included reasons such as the course requiring too much time, or being too difficult or too basic. Reasons related to poor course design included "lecture fatigue" from courses that were just lecture videos, lack of a proper introduction to course technology and format, clunky technology and abuse on discussion boards. Hidden costs were cited, including required readings from expensive textbooks written by the instructor, which also significantly limited students' access to learning material.[10] Other non-completers were "just shopping around" when they registered, or were participating for knowledge rather than a credential. Other reasons for the poor completion rates include the workload, length and difficulty of a course.[10] In addition, some participants engage only peripherally ("lurk"). For example, one of the first MOOCs in 2008 had 2,200 registered members, of whom 150 actively interacted at various times.[122] Beyond the factors that cause low completion rates, inequality in access to knowledge, shaped by individual characteristics, also strongly influences completion rates: MOOCs are not as equitable as often assumed. Russian researchers T. V. Semenova and L. M. Rudakova (2016) note that although MOOCs are designed to reduce unequal access to knowledge, this does not mean every individual enjoys the same completion rate.
From their research, three main factors cause this inequality: level of education, prior experience with MOOCs, and gender. Their survey shows that 18% of highly educated students completed the course, while only 3% of less-educated students did; put differently, 84–88% of students who completed the course were highly educated. Moreover, among students who completed the course, 65–80% had at least one prior experience with an online learning platform, compared with 6–31% of those who had none. Gender also influences the completion rate: in general, 6–7% more men than women complete the course, a gap the authors attribute to household responsibilities that fall disproportionately on women in many countries and compete with time for learning.[123] The effectiveness of MOOCs is an open question, as completion rates are substantially lower than in traditional online education courses.[124][125] Alraimi et al. (2015) explained in their research model a substantial percentage of the variance in the intention to continue using MOOCs, which is significantly influenced by perceived reputation, perceived openness, perceived usefulness, and perceived user satisfaction. Perceived reputation and perceived openness were the strongest predictors and had not previously been examined in the context of MOOCs. However, research indicates that completion rates are not the right metric for measuring the success of MOOCs; alternate metrics have been proposed to measure the effectiveness of MOOCs and online learning.[125] Many MOOCs use video lectures, employing the old form of teaching (lecturing) with a new technology.[124][128] Thrun testified before the President's Council of Advisors on Science and Technology (PCAST) that MOOC "courses are 'designed to be challenges,' not lectures, and the amount of data generated from these assessments can be evaluated 'massively using machine learning' at work behind the scenes.
This approach, he said, dispels 'the medieval set of myths' guiding teacher efficacy and student outcomes, and replaces it with evidence-based, 'modern, data-driven' educational methodologies that may be the instruments responsible for a 'fundamental transformation of education' itself".[129] Some view the videos and other material produced by MOOCs as the next form of the textbook. "MOOC is the new textbook", according to David Finegold of Rutgers University.[130] A study of edX student habits found that certificate-earning students generally stopped watching videos longer than 6 to 9 minutes. They viewed the first 4.4 minutes (median) of 12- to 15-minute videos.[131] Some traditional schools blend online and offline learning in what are sometimes called flipped classrooms: students watch lectures online at home and work on projects and interact with faculty while in class. Such hybrids can even improve student performance in traditional in-person classes. One fall 2012 test by San Jose State and edX found that incorporating content from an online course into a for-credit campus-based course increased pass rates to 91%, from as low as 55% without the online component. "We do not recommend selecting an online-only experience over a blended learning experience", says Coursera's Andrew Ng.[59] Because of massive enrollments, MOOCs require instructional design that facilitates large-scale feedback and interaction.
There are two basic approaches: so-called connectivist MOOCs rely on distributed peer interaction and collaboration, while broadcast MOOCs rely more on automated feedback through objective online assessments.[134] This marks a key distinction between cMOOCs, where the 'c' stands for 'connectivist', and xMOOCs, where the 'x' stands for extended (as in TEDx, edX) and indicates that the MOOC is designed as an addition to something else (university courses, for example).[135] Assessment can be the most difficult activity to conduct online, and online assessments can be quite different from their brick-and-mortar counterparts.[132] Special attention has been devoted to proctoring and cheating.[136] Peer review is often based upon sample answers or rubrics, which guide the grader on how many points to award different answers. These rubrics cannot be as complex for peer grading as for teaching assistants. Students are expected to learn via grading others[137] and to become more engaged with the course.[10] Exams may be proctored at regional testing centers. Other methods, including "eavesdropping technologies worthy of the C.I.A.", allow testing at home or office by using webcams, or by monitoring mouse clicks and typing styles.[136] Special techniques such as adaptive testing may be used, where the test tailors itself to the student's previous answers, giving harder or easier questions accordingly. "The most important thing that helps students succeed in an online course is interpersonal interaction and support", says Shanna Smith Jaggars, assistant director of Columbia University's Community College Research Center. Her research compared online-only and face-to-face learning in studies of community-college students and faculty in Virginia and Washington state.
Among her findings: in Virginia, 32% of students failed or withdrew from for-credit online courses, compared with 19% for equivalent in-person courses.[59] Assigning mentors to students is another interaction-enhancing technique.[59] In 2013 Harvard offered a popular class, The Ancient Greek Hero, instructed by Gregory Nagy and taken by thousands of Harvard students over prior decades. It appealed to alumni to volunteer as online mentors and discussion group managers. About 10 former teaching fellows also volunteered. The task of the volunteers, which required 3–5 hours per week, was to focus on online class discussion. The edX course registered 27,000 students.[138] Research by Kop and Fournier[139] highlighted the lack of social presence and the high level of autonomy required as major challenges. Techniques for maintaining a connection with students include adding audio comments on assignments instead of writing them, participating with students in the discussion forums, asking brief questions in the middle of a lecture, posting weekly video updates about the course, and sending congratulatory emails on prior accomplishments to students who are slightly behind.[59] Grading by peer review has had mixed results. In one example, three fellow students grade one assignment for each assignment they submit. The grading key or rubric tends to focus the grading, but discourages more creative writing.[100] A. J. Jacobs, in an op-ed in The New York Times, graded his overall experience in 11 MOOC classes as a "B".[140] He rated his professors a "B+", despite "a couple of clunkers", even comparing them to pop stars and "A-list celebrity professors". Nevertheless, he rated teacher-to-student interaction a "D", since he had almost no contact with the professors. The highest-rated ("A") aspect of Jacobs' experience was the ability to watch videos at any time. Student-to-student interaction and assignments both received a "B−".
Study groups that did not meet, trolls on message boards, and the relative slowness of online versus in-person conversations lowered that rating. Assignments included multiple-choice quizzes and exams as well as essays and projects. He found the multiple-choice tests stressful and the peer-graded essays painful. He completed only 2 of the 11 classes.[140][141] When searching for a course, results are usually organized by "most popular" or by a topical scheme. Courses planned for synchronous learning follow an exact organizational scheme, called a chronological scheme.[142] Courses planned for asynchronous learning are also presented chronologically, but the order in which the information is learned follows a hybrid scheme. This can make course content harder to understand and complete, because it is not based on an existing mental model.[142] MOOCs are widely seen as a major part of a larger disruptive innovation taking place in higher education.[143][144][145] In particular, the many services offered under traditional university business models are predicted to become unbundled and sold to students individually or in newly formed bundles.[146][147] These services include research, curriculum design, content generation (such as textbooks), teaching, assessment and certification (such as granting degrees) and student placement.
MOOCs threaten existing business models by potentially selling teaching, assessment, or placement separately from the current package of services.[143][148][149] Former President Barack Obama cited recent developments, including the online learning innovations at Carnegie Mellon University, Arizona State University and the Georgia Institute of Technology, as having the potential to reduce the rising costs of higher education.[150] James Mazoue, Director of Online Programs at Wayne State University, describes one possible innovation: The next disruptor will likely mark a tipping point: an entirely free online curriculum leading to a degree from an accredited institution. With this new business model, students might still have to pay to certify their credentials, but not for the process leading to their acquisition. If free access to a degree-granting curriculum were to occur, the business model of higher education would dramatically and irreversibly change.[151] But how universities will benefit from "giving our product away free online" is unclear.[152] No one's got the model that's going to work yet. I expect all the current ventures to fail, because the expectations are too high. People think something will catch on like wildfire. But more likely, it's maybe a decade later that somebody figures out how to do it and make money. Principles of openness inform the creation, structure and operation of MOOCs, though the extent to which practices of open design in educational technology[153] are applied varies. In the freemium business model, the basic product – the course content – is given away free. "Charging for content would be a tragedy", said Andrew Ng. But "premium" services such as certification or placement are charged a fee – though financial aid is given in some cases.[56] Course developers could charge licensing fees to educational institutions that use their materials. Introductory or "gateway" courses and some remedial courses may earn the most fees.
Free introductory courses may attract new students to follow-on fee-charging classes. Blended courses supplement MOOC material with face-to-face instruction. Providers can charge employers for recruiting their students. Students may be able to pay to take a proctored exam to earn transfer credit at a degree-granting university, or to obtain certificates of completion.[152] Udemy allows teachers to sell online courses, with the course creators keeping 70–85% of the proceeds and their intellectual property rights.[156] Coursera found that students who paid $30 to $90 were substantially more likely to finish the course. The fee was ostensibly for the company's identity-verification program, which confirms that they took and passed a course.[59] In February 2013, the American Council on Education (ACE) recommended that its members provide transfer credit for a few MOOC courses, though even the universities that deliver the courses had said that they would not.[159] The University of Wisconsin offered multiple competency-based bachelor's and master's degrees starting in fall 2013, the first public university to do so on a system-wide basis. The university encouraged students to take online courses such as MOOCs and complete assessment tests at the university to receive credit.[160] As of 2013 few students had applied for college credit for MOOC classes;[161] Colorado State University–Global Campus received no applications in the year after it offered the option.[160] Academic Partnerships is a company that helps public universities move their courses online. According to its chairman, Randy Best, "We started it, frankly, as a campaign to grow enrollment.
But 72 to 84 percent of those who did the first course came back and paid to take the second course."[162] While Coursera takes a larger cut of any revenue generated – but requires no minimum payment – the not-for-profit edX has a minimum required payment from course providers, but takes a smaller cut of revenues, tied to the amount of support required for each course.[163] MOOCs are regarded by many as an important tool to widen access to higher education (HE) for millions of people, including those in the developing world, and ultimately to enhance their quality of life.[2] MOOCs may be regarded as contributing to the democratisation of HE, not only locally or regionally but globally. MOOCs can help democratise content and make knowledge reachable for everyone. Students are able to access complete courses offered by universities all over the world, something previously unattainable. With the availability of affordable technologies, MOOCs increase access to an extraordinary number of courses offered by world-renowned institutions and teachers.[164] The costs of tertiary education continue to increase because institutions tend to bundle too many services. With MOOCs, some of these services can be transferred to other suitable players in the public or private sector. MOOCs are designed for large numbers of participants, can be accessed by anyone anywhere with an Internet connection, are open to everyone without entry qualifications, and offer a complete course experience online for free.[165][164] MOOCs can be seen as a form of open education offered free through online platforms. The (initial) philosophy of MOOCs is to open up quality higher education to a wider audience. As such, MOOCs are an important tool for achieving Goal 4 of the 2030 Agenda for Sustainable Development.[164] Lectures, videos, and tests in MOOCs can often be accessed at any time, unlike scheduled class sessions.
By allowing learners to complete their coursework in their own time, MOOCs provide flexibility based on learners' personal schedules.[166][164] The learning environments of MOOCs make it easier for learners across the globe to work together on common goals. Instead of having to physically meet one another, online collaboration creates partnerships among learners. While time zones may affect the hours at which learners communicate, projects, assignments, and more can be completed to incorporate the skills and resources that different learners offer, no matter where they are located.[166][164] Distance and collaboration can benefit learners who may have struggled with traditionally more individual learning goals, including learning how to write.[167] The MOOC Guide[168] suggests five possible challenges for MOOCs. These general challenges in effective MOOC development are accompanied by criticism from journalists and academics. Robert Zemsky (2014) argues that MOOCs have passed their peak: "They came; they conquered very little; and now they face substantially diminished prospects."[169] Others have pointed to a backlash arising from the tiny completion rates.[170] Kris Olds (2012) argues that the "territorial" dimensions of MOOCs[171] have received insufficient discussion or data-backed analysis, namely: 1. the true geographical diversity of those who enroll in and complete courses; 2. the implications of courses scaling across country borders, and potential difficulties with relevance and knowledge transfer; and 3. the need for territory-specific study of locally relevant issues and needs. Other features associated with early MOOCs, such as open licensing of content, open structure and learning goals, and community-centeredness, may not be present in all MOOC projects.[7] Effects on the structure of higher education were lamented, for example, by Moshe Y. Vardi, who finds an "absence of serious pedagogy in MOOCs", and indeed in all of higher education.
He criticized the format of "short, unsophisticated video chunks, interleaved with online quizzes, and accompanied by social networking." An underlying reason is simple cost-cutting pressure, which could hamstring the higher education industry.[172] The changes predicted from MOOCs generated objections in some quarters. The San Jose State University philosophy faculty wrote in an open letter to Harvard University professor and MOOC teacher Michael Sandel: Should one-size-fits-all vendor-designed blended courses become the norm, we fear two classes of universities will be created: one, well-funded colleges and universities in which privileged students get their own real professor; the other, financially stressed private and public universities in which students watch a bunch of video-taped lectures.[173] Cary Nelson, former president of the American Association of University Professors, claimed that MOOCs are not a reliable means of supplying credentials, stating that "It’s fine to put lectures online, but this plan only degrades degree programs if it plans to substitute for them." Sandra Schroeder, chair of the Higher Education Program and Policy Council for the American Federation of Teachers, expressed concern that "These students are not likely to succeed without the structure of a strong and sequenced academic program."[174] With a 60% majority, the Amherst College faculty rejected the opportunity to work with edX based on a perceived incompatibility with their seminar-style classes and personalized feedback. Some were concerned about issues such as the "information dispensing" teaching model of lectures followed by exams, the use of multiple-choice exams, and peer grading. The Duke University faculty took a similar stance in the spring of 2013.
The effect of MOOCs on second- and third-tier institutions, and the risk of creating a professorial "star system", were among other concerns.[133] At least one alternative to MOOCs has advocates: Distributed Open Collaborative Courses (DOCCs) challenge the roles of the instructor, hierarchy, money, and massiveness. DOCC recognizes that the pursuit of knowledge may be achieved better by not using a centralized singular syllabus, and that expertise is distributed throughout all the participants rather than residing with one or two individuals.[175] Another alternative to MOOCs is the self-paced online course (SPOC), which provides a high degree of flexibility. Students can decide on their own pace and with which session they would like to begin their studies. According to a report by Class Central founder Dhawal Shah, more than 800 self-paced courses were available in 2015.[176] Although the purpose of MOOCs is ultimately to educate more people, recent criticisms include accessibility and a Westernized curriculum that lead to a failure to reach the same audiences marginalised by traditional methods.[177] MOOCs have been criticized for a perceived lack of academic rigor as well as the monetization strategies adopted by providers. In MOOCs: A University Qualification in 24 Hours?, Michael Shea writes, "By offering courses that are near-impossible to fail and charging upfront fees for worthless certificates, Coursera is simply running a high-tech version of the kind of scams that have been run by correspondence colleges for decades."[178] Language of instruction is one of the major barriers that English language learners (ELLs) face in MOOCs. By recent estimates, almost 75% of MOOC courses are presented in English, even though native English speakers are a minority of the world's population.[179] This issue is mitigated somewhat by the increasing popularity of English as a global language, which therefore has more second-language speakers than any other language in the world.
This barrier has encouraged content developers and other MOOC stakeholders to develop content in other popular languages to increase MOOC access. However, research studies show that some ELLs prefer to take MOOCs in English, despite the language challenges, because doing so promotes their goals of economic, social, and geographic mobility.[180] This emphasizes the need not only to provide MOOC content in other languages, but also to develop English language interventions for ELLs who participate in English MOOCs. Areas that ELLs particularly struggle with in English MOOCs include MOOC content without corresponding visual supporting materials[181] (e.g., an instructor narrating instruction without text support in the background) and hesitation to participate in MOOC discussion forums.[182] Active participation in MOOC discussion forums has been found to improve students' grades and engagement and to lower dropout rates;[183] however, ELLs are more likely to be spectators than active contributors in discussion forums.[182] Research studies show a "complex mix of affective, socio-cultural, and educational factors" that inhibit their active participation in discussion forums.[184] As expected, English as the language of communication poses both linguistic and cultural challenges for ELLs, and they may not be confident in their English language communication abilities.[185] Discussion forums may also be an uncomfortable means of communication, especially for ELLs from Confucian cultures, where disagreement and arguing one's points are often viewed as confrontational and harmony is promoted.[186] Therefore, while ELLs may be perceived as being uninterested in participating, research studies show that they do not show the same hesitation in face-to-face discourse.[187][188] Finally, ELLs may come from high power distance cultures,[189] where teachers are regarded as authority figures and back-and-forth conversation between teachers and students is
not a cultural norm.[187][188] As a result, discussion forums with active participation from instructors may cause discomfort and deter participation by students from such cultures. Open Culture, which is not affiliated with Stanford University, was founded in 2006 by Dan Coleman, Director and Associate Dean of Stanford University's Continuing Education Program. It aggregates and curates free MOOCs as well as free cultural and educational media.[190][191][192][193][194][195][196][197] C. Berman, of the University of Illinois at Urbana-Champaign, found the website difficult to navigate, with links "hidden" in articles, and described the lists on the right side as clunky and long.[198]
https://en.wikipedia.org/wiki/Massive_open_online_course
Karlstad University (Swedish: Karlstads universitet) is a state university in Karlstad, Sweden. It was originally established as the Karlstad campus of the University of Gothenburg in 1967; this campus became an independent university college in 1977 and was granted full university status in 1999 by the Government of Sweden. The university has about 40 educational programmes, 30 programme extensions and 900 courses within the humanities, social studies, science, technology, teaching, health care and the arts. It has approximately 16,000 students and 1,200 employees.[1] Its university press is named Karlstad University Press. The current Rector is Jerker Moodysson. CTF Service Research Center (Swedish: Centrum för tjänsteforskning) at Karlstad University is one of the world's leading research centers focusing on service management and value creation through service. On March 26, 2009, the Faculty of Economics, Communication and IT formed Karlstad Business School (Swedish: Handelshögskolan vid Karlstads universitet) as a brand for its educational programmes in business-related areas. Karlstad University has two a cappella groups, Sällskapet CMB and Söt Likör. Many students live at the nearby student accommodation facilities called Unionen and Campus Futurum. The motto of the university is Sapere aude (Dare to know). Institutions of higher education issuing teaching degrees are obliged to have a board with responsibility for the teacher education programmes. The Faculty Board for Teacher Education is also responsible for educational research. Karlstad University also houses a business school with a focus on a service perspective. Karlstad Business School has seven disciplines and a deep interest in economics and business. CTF Service Research Center conducts world-leading research with a focus on value creation through service. The Ingesund School of Music is part of Karlstad University and the Department of Artistic Studies.
It is situated in the Arvika area in mid-Sweden. The school offers music teacher education, music studies, and sound engineering. Karlstad University has a new and environmentally friendly way of heating and cooling the university's buildings. The initiative is one of the largest on a campus in Europe, making Karlstad University almost self-sufficient in heating and cooling. With the new plant, Karlstad University will produce virtually all its heat and cooling locally, via 270 boreholes drilled 200 meters into the ground. The environmental benefits are many: among other things, carbon dioxide emissions are radically reduced, and energy consumption for heating and cooling buildings can fall by about 70 percent. The investment means that the current district heating is replaced and that heat can instead be supplied by heat pump technology. During the summer, heat is stored in the ground and then extracted during the winter. "There is no other campus in Europe with a comprehensive geo-energy plant of this size," says Birgitta Hohlfält, Regional Director of Akademiska Hus Väst.[2][3] 59°24′21″N 13°34′54″E (59.40583°N, 13.58167°E)
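The arithmetic behind a saving of roughly 70 percent can be sketched with a heat pump's coefficient of performance (COP): the campus only purchases the electricity that drives the pump, while the rest of the delivered heat comes from the boreholes. The numbers below are illustrative assumptions, not figures for the Karlstad plant.

```python
# Back-of-envelope sketch of why heat-pump plus borehole storage cuts
# purchased energy. All numbers are illustrative assumptions.
annual_heat_demand_mwh = 10_000      # assumed campus heat demand per year
cop = 3.5                            # assumed heat-pump coefficient of performance

district_heating_purchased = annual_heat_demand_mwh      # heat bought directly
heat_pump_electricity = annual_heat_demand_mwh / cop     # only the drive electricity is bought

reduction = 1 - heat_pump_electricity / district_heating_purchased
print(f"purchased energy falls by about {reduction:.0%}")   # ~71% with these assumptions
```

With a COP of 3.5, each unit of purchased electricity delivers 3.5 units of heat, so purchased energy falls by 1 − 1/3.5 ≈ 71%, consistent with the "about 70 percent" figure quoted above.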
https://en.wikipedia.org/wiki/Karlstad_University
A cypherpunk is one who advocates the widespread use of strong cryptography and privacy-enhancing technologies as a means of effecting social and political change. The cypherpunk movement originated in the late 1980s and gained traction with the establishment of the "Cypherpunks" electronic mailing list in 1992, where informal groups of activists, technologists, and cryptographers discussed strategies to enhance individual privacy and resist state or corporate surveillance. Deeply libertarian in philosophy, the movement is rooted in principles of decentralization, individual autonomy, and freedom from centralized authority.[1][2] Its influence on society extends to the development of technologies that have reshaped global finance, communication, and privacy practices, such as the creation of Bitcoin and other cryptocurrencies, which embody cypherpunk ideals of decentralized and censorship-resistant money. The movement has also contributed to the mainstreaming of encryption in everyday technologies, such as secure messaging apps and privacy-focused web browsers. Until about the 1970s, cryptography was mainly practiced in secret by military or spy agencies. However, that changed when two publications brought it into public awareness: the first publicly available work on public-key cryptography, by Whitfield Diffie and Martin Hellman,[3] and the US government publication of the Data Encryption Standard (DES), a block cipher which became very widely used. The technical roots of cypherpunk ideas have been traced back to work by cryptographer David Chaum on topics such as anonymous digital cash and pseudonymous reputation systems, described in his paper "Security without Identification: Transaction Systems to Make Big Brother Obsolete" (1985).[4] In the late 1980s, these ideas coalesced into something like a movement.[4] In late 1992, Eric Hughes, Timothy C.
May, and John Gilmore founded a small group that met monthly at Gilmore's company Cygnus Solutions in the San Francisco Bay Area and was humorously termed cypherpunks by Jude Milhon at one of the first meetings, a coinage derived from cipher and cyberpunk.[5] In November 2006, the word was added to the Oxford English Dictionary.[6] The Cypherpunks mailing list was started in 1992, and by 1994 it had 700 subscribers.[5] At its peak, it was a very active forum with technical discussions ranging over mathematics, cryptography, computer science, political and philosophical discussion, personal arguments and attacks, and so on, with some spam thrown in. An email from John Gilmore reports an average of 30 messages a day from December 1, 1996, to March 1, 1999, and suggests that the number was probably higher earlier.[7] The number of subscribers is estimated to have reached 2,000 in 1997.[5] In early 1997, Jim Choate and Igor Chudov set up the Cypherpunks Distributed Remailer,[8] a network of independent mailing list nodes intended to eliminate the single point of failure inherent in a centralized list architecture. At its peak, the Cypherpunks Distributed Remailer included at least seven nodes.[9] By mid-2005, al-qaeda.net ran the only remaining node.[10] In mid-2013, following a brief outage, the al-qaeda.net node's list software was changed from Majordomo to GNU Mailman,[11] and subsequently the node was renamed cpunks.org.[12] The CDR architecture is now defunct, though the list administrator stated in 2013 that he was exploring a way to integrate this functionality with the new mailing list software.[11] For a time, the cypherpunks mailing list was a popular tool with mailbombers,[13] who would subscribe a victim to the mailing list in order to cause a deluge of messages to be sent to him or her. (This was usually done as a prank, in contrast to the style of terrorist referred to as a mailbomber.) This prompted the mailing list sysops to institute a reply-to-subscribe system.
Approximately two hundred messages a day was typical for the mailing list, divided between personal arguments and attacks, political discussion, technical discussion, and early spam.[14][15] The cypherpunks mailing list had extensive discussions of the public policy issues related to cryptography and of the politics and philosophy of concepts such as anonymity, pseudonyms, reputation, and privacy. These discussions continue both on the remaining node and elsewhere, as the list has become increasingly moribund. Events such as the GURPS Cyberpunk raid[16] lent weight to the idea that private individuals needed to take steps to protect their privacy. In its heyday, the list discussed public policy issues related to cryptography, as well as more practical nuts-and-bolts mathematical, computational, technological, and cryptographic matters. The list had a range of viewpoints, and there was probably no completely unanimous agreement on anything. The general attitude, though, definitely put personal privacy and personal liberty above all other considerations.[17] The list was discussing questions about privacy, government monitoring, corporate control of information, and related issues in the early 1990s that did not become major topics for broader discussion until at least ten years later. Some list participants were highly radical on these issues. Those wishing to understand the context of the list might refer to the history of cryptography; in the early 1990s, the US government considered cryptography software a munition for export purposes (PGP source code was published as a paper book to bypass these regulations and demonstrate their futility). In 1992, a deal between the NSA and the SPA allowed export of cryptography based on 40-bit RC2 and RC4, which was considered relatively weak (and especially after SSL was created, there were many contests to break it).
The US government had also tried to subvert cryptography through schemes such as Skipjack and key escrow. It was also not widely known at the time that all communications were logged by government agencies (which would later be revealed during the NSA and AT&T scandals), though this was taken as an obvious axiom by list members.[18] The original cypherpunk mailing list, and the first list spin-off, coderpunks, were originally hosted on John Gilmore's toad.com, but after a falling out with the sysop over moderation, the list was migrated to several cross-linked mail servers in what was called the "distributed mailing list".[19][20] The coderpunks list, open by invitation only, existed for a time. Coderpunks took up more technical matters and had less discussion of public policy implications. There are several lists today that can trace their lineage directly to the original Cypherpunks list: the cryptography list (cryptography@metzdowd.com), the financial cryptography list (fc-announce@ifca.ai), and a small group of closed (invitation-only) lists as well. Toad.com continued to run with the existing subscriber list (those that didn't unsubscribe) and was mirrored on the new distributed mailing list, but messages from the distributed list didn't appear on toad.com.[21] As the list faded in popularity, so too did the number of cross-linked subscription nodes. To some extent, the cryptography list[22] acts as a successor to cypherpunks; it has many of the same people and continues some of the same discussions. However, it is a moderated list, considerably less zany and somewhat more technical.
A number of current systems in use trace to the mailing list, including Pretty Good Privacy, /dev/random in the Linux kernel (the actual code has been completely reimplemented several times since then), and today's anonymous remailers. The basic ideas can be found in A Cypherpunk's Manifesto (Eric Hughes, 1993): "Privacy is necessary for an open society in the electronic age. ... We cannot expect governments, corporations, or other large, faceless organizations to grant us privacy ... We must defend our own privacy if we expect to have any. ... Cypherpunks write code. We know that someone has to write software to defend privacy, and ... we're going to write it."[23] Some are or were senior people at major hi-tech companies, and others are well-known researchers (see list with affiliations below). The first mass media discussion of cypherpunks was in a 1993 Wired article by Steven Levy titled Crypto Rebels: The people in this room hope for a world where an individual's informational footprints—everything from an opinion on abortion to the medical record of an actual abortion—can be traced only if the individual involved chooses to reveal them; a world where coherent messages shoot around the globe by network and microwave, but intruders and feds trying to pluck them out of the vapor find only gibberish; a world where the tools of prying are transformed into the instruments of privacy. There is only one way this vision will materialize, and that is by widespread use of cryptography. Is this technologically possible? Definitely. The obstacles are political—some of the most powerful forces in government are devoted to the control of these tools. In short, there is a war going on between those who would liberate crypto and those who would suppress it. The seemingly innocuous bunch strewn around this conference room represents the vanguard of the pro-crypto forces.
Though the battleground seems remote, the stakes are not: The outcome of this struggle may determine the amount of freedom our society will grant us in the 21st century. To the Cypherpunks, freedom is an issue worth some risk.[24] The three masked men on the cover of that edition of Wired were prominent cypherpunks Tim May, Eric Hughes and John Gilmore. Later, Levy wrote a book, Crypto: How the Code Rebels Beat the Government – Saving Privacy in the Digital Age,[25] covering the crypto wars of the 1990s in detail. "Code Rebels" in the title is almost synonymous with cypherpunks. The term cypherpunk is mildly ambiguous. In most contexts it means anyone advocating cryptography as a tool for social change, social impact and expression. However, it can also be used to mean a participant in the Cypherpunks electronic mailing list described above. The two meanings obviously overlap, but they are by no means synonymous. Documents exemplifying cypherpunk ideas include Timothy C. May's The Crypto Anarchist Manifesto (1992),[26] The Cyphernomicon (1994),[27] and A Cypherpunk's Manifesto.[23] A very basic cypherpunk issue is privacy in communications and data retention. John Gilmore said he wanted "a guarantee -- with physics and mathematics, not with laws -- that we can give ourselves real privacy of personal communications."[28] Such guarantees require strong cryptography, so cypherpunks are fundamentally opposed to government policies attempting to control the usage or export of cryptography, which remained an issue throughout the late 1990s. The Cypherpunk Manifesto stated, "Cypherpunks deplore regulations on cryptography, for encryption is fundamentally a private act."[23] This was a central issue for many cypherpunks. Most were passionately opposed to various government attempts to limit cryptography: export laws, promotion of limited-key-length ciphers, and especially escrowed encryption. The questions of anonymity, pseudonymity and reputation were also extensively discussed.
Arguably, the possibility of anonymous speech and publication is vital for an open society and genuine freedom of speech; this is the position of most cypherpunks.[29] In general, cypherpunks opposed censorship and monitoring by government and police. In particular, the US government's Clipper chip scheme for escrowed encryption of telephone conversations (encryption supposedly secure against most attackers, but breakable by government) was seen as anathema by many on the list. It was an issue that provoked strong opposition and brought many new recruits to the cypherpunk ranks. List participant Matt Blaze found a serious flaw[30] in the scheme, helping to hasten its demise. Steven Schear first suggested the warrant canary in 2002 to thwart the secrecy provisions of court orders and national security letters.[31] As of 2013, warrant canaries were gaining commercial acceptance.[32] An important set of discussions concerns the use of cryptography in the presence of oppressive authorities. As a result, cypherpunks have discussed and improved steganographic methods that hide the use of crypto itself, or that allow interrogators to believe that they have forcibly extracted hidden information from a subject. For instance, Rubberhose was a tool that partitioned and intermixed secret data on a drive with fake secret data, each accessed via a different password. Interrogators, having extracted a password, are led to believe that they have indeed unlocked the desired secrets, whereas in reality the actual data is still hidden; even its presence is hidden. Likewise, cypherpunks have also discussed under what conditions encryption may be used without being noticed by network monitoring systems installed by oppressive regimes.
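The Rubberhose idea of deniable storage can be illustrated with a toy sketch: a container holds several password-keyed slots plus random decoys, each password unlocks only its own slot, and an unused slot is indistinguishable from random fill. Everything here is an assumption for illustration (the XOR "cipher", slot layout, and names are invented; Rubberhose itself worked quite differently, at the disk level).

```python
# Toy sketch of deniable storage: several password-keyed slots in one
# container, indistinguishable from the random decoy slots around them.
# The XOR keystream is a toy cipher for illustration only, not security.
import hashlib
import hmac
import secrets

SLOT = 64  # fixed slot size in bytes (hypothetical)

def slot_key(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def seal(data: bytes, password: str) -> bytes:
    """Encrypt one slot and prepend an HMAC tag so its owner can find it."""
    key = slot_key(password)
    pad = hashlib.sha256(key + b"pad").digest() * 2          # 64-byte keystream
    body = bytes(a ^ b for a, b in zip(data.ljust(SLOT, b"\0"), pad))
    return hmac.new(key, body, "sha256").digest() + body     # 32-byte tag + body

def make_container(data_by_password: dict, total_slots: int) -> list:
    slots = [seal(d, pw) for pw, d in data_by_password.items()]
    while len(slots) < total_slots:                          # decoys: pure noise
        slots.append(secrets.token_bytes(32 + SLOT))
    return slots

def unlock(container: list, password: str):
    """A password reveals its own slot; every other slot stays opaque."""
    key = slot_key(password)
    for blob in container:
        tag, body = blob[:32], blob[32:]
        if hmac.compare_digest(tag, hmac.new(key, body, "sha256").digest()):
            pad = hashlib.sha256(key + b"pad").digest() * 2
            return bytes(a ^ b for a, b in zip(body, pad)).rstrip(b"\0")
    return None  # wrong password: nothing to distinguish from decoys

container = make_container({"alpha": b"meeting notes", "duress": b"harmless diary"}, 4)
```

Under coercion, revealing the "duress" password unlocks only the decoy data, and nothing in the container proves that any further real slots exist.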
As the Manifesto says, "Cypherpunks write code";[23] the notion that good ideas need to be implemented, not just discussed, is very much part of the culture of the mailing list. John Gilmore, whose site hosted the original cypherpunks mailing list, wrote: "We are literally in a race between our ability to build and deploy technology, and their ability to build and deploy laws and treaties. Neither side is likely to back down or wise up until it has definitively lost the race."[33] Anonymous remailers such as the Mixmaster Remailer were almost entirely a cypherpunk development.[34] Other cypherpunk-related projects include PGP for email privacy,[35] FreeS/WAN for opportunistic encryption of the whole net, Off-the-Record Messaging for privacy in Internet chat, and the Tor project for anonymous web surfing. In 1998, the Electronic Frontier Foundation, with assistance from the mailing list, built a $200,000 machine that could brute-force a Data Encryption Standard key in a few days.[36] The project demonstrated that DES was, without question, insecure and obsolete, in sharp contrast to the US government's recommendation of the algorithm. Cypherpunks also participated, along with other experts, in several reports on cryptographic matters. One such paper was "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security".[37] It suggested 75 bits was the minimum key size to allow an existing cipher to be considered secure and kept in service. At the time, the Data Encryption Standard with 56-bit keys was still a US government standard, mandatory for some applications. Other papers were critical analyses of government schemes: "The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption"[38] evaluated escrowed encryption proposals, and "Comments on the Carnivore System Technical Review"[39] looked at an FBI scheme for monitoring email.
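The layered ("onion") encryption behind mix remailers like Mixmaster, and later Tor, can be sketched in a few lines: the sender wraps the message once per hop, and each remailer peels exactly one layer, learning only the next hop. The stream cipher, hop names, and keys below are illustrative assumptions; real remailers use proper hybrid encryption, padding, and batching.

```python
# Toy sketch of layered mix-remailer encryption: each hop peels one layer
# and learns only where to forward next. SHA-256-in-counter-mode XOR is a
# toy cipher for illustration only, not real cryptography.
import hashlib
import json

def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap(message: bytes, hops):
    """hops: list of (remailer_name, shared_key), first hop to last."""
    packet, next_hop = message, "recipient"
    for name, key in reversed(hops):           # innermost layer first
        layer = json.dumps({"next": next_hop, "payload": packet.hex()}).encode()
        packet, next_hop = keystream_xor(key, layer), name
    return packet                              # handed to the first remailer

def peel(packet: bytes, key: bytes):
    """One remailer removes its layer: next-hop name plus inner packet."""
    layer = json.loads(keystream_xor(key, packet))
    return layer["next"], bytes.fromhex(layer["payload"])

hops = [("mix-a", b"key shared with mix-a"), ("mix-b", b"key shared with mix-b")]
packet = wrap(b"meet at noon", hops)
hop1, packet = peel(packet, hops[0][1])   # mix-a learns only "mix-b"
hop2, plain = peel(packet, hops[1][1])    # mix-b learns only "recipient"
```

Because each hop sees only its own layer, no single remailer can link the sender to the final recipient, which is the property that made remailers attractive to the list.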
Cypherpunks provided significant input to the 1996 National Research Council report on encryption policy, Cryptography's Role in Securing the Information Society (CRISIS).[40] This report, commissioned by the U.S. Congress in 1993, was developed via extensive hearings across the nation from all interested stakeholders, by a committee of talented people. It recommended a gradual relaxation of the existing U.S. government restrictions on encryption. Like many such study reports, its conclusions were largely ignored by policy-makers. Later events, such as the final rulings in the cypherpunks' lawsuits, forced a more complete relaxation of the unconstitutional controls on encryption software. Cypherpunks have filed a number of lawsuits, mostly suits against the US government alleging that some government action is unconstitutional. Phil Karn sued the State Department in 1994 over cryptography export controls[41] after it ruled that, while the book Applied Cryptography[42] could legally be exported, a floppy disk containing a verbatim copy of code printed in the book was legally a munition and required an export permit, which it refused to grant. Karn also appeared before both House and Senate committees looking at cryptography issues. Daniel J. Bernstein, supported by the EFF, also sued over the export restrictions, arguing that preventing publication of cryptographic source code is an unconstitutional restriction on freedom of speech. He won, effectively overturning the export law. See Bernstein v. United States for details.
Peter Junger also sued on similar grounds, and won.[43] Cypherpunks encouraged civil disobedience, in particular against US law on the export of cryptography. Until 1997, cryptographic code was legally a munition and fell under ITAR, and the key length restrictions in the EAR were not removed until 2000.[44] In 1995, Adam Back wrote a version of the RSA algorithm for public-key cryptography in three lines of Perl[45][46] and suggested people use it as an email signature file. Vince Cate put up a web page that invited anyone to become an international arms trafficker; every time someone clicked on the form, an export-restricted item (originally PGP, later a copy of Back's program) would be mailed from a US server to one in Anguilla.[47][48][49] In Neal Stephenson's novel Cryptonomicon, many characters are on the "Secret Admirers" mailing list. This is fairly obviously based on the cypherpunks list, and several well-known cypherpunks are mentioned in the acknowledgements. Much of the plot revolves around cypherpunk ideas; the leading characters are building a data haven which will allow anonymous financial transactions, and the book is full of cryptography. But according to the author,[50] the book's title is, in spite of its similarity, not based on the Cyphernomicon,[27] an online cypherpunk FAQ document. Cypherpunk achievements would later also be used in the Canadian e-wallet, the MintChip, and in the creation of bitcoin. The movement was an inspiration for CryptoParty decades later, to such an extent that A Cypherpunk's Manifesto is quoted at the header of its wiki,[51] and Eric Hughes delivered the keynote address at the Amsterdam CryptoParty on 27 August 2012. Cypherpunks list participants included many notable computer industry figures. Most were list regulars, although not all would call themselves "cypherpunks".[52] The following is a list of noteworthy cypherpunks and their achievements: * indicates someone mentioned in the acknowledgements of Stephenson's Cryptonomicon.
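The point of Back's three-line Perl signature was that "munitions-grade" RSA fits in a few lines of code, because textbook RSA is just modular exponentiation. A similarly compact sketch in Python (with toy parameters chosen for readability; real RSA requires enormous primes and padding):

```python
# Textbook RSA in a few lines, in the spirit of Back's Perl signature.
# Toy parameters for illustration only; never use small primes in practice.
p, q, e = 61, 53, 17                 # toy primes and public exponent
n, phi = p * q, (p - 1) * (q - 1)    # modulus and Euler's totient
d = pow(e, -1, phi)                  # private exponent (Python 3.8+ modular inverse)
encrypt = lambda m: pow(m, e, n)     # c = m^e mod n
decrypt = lambda c: pow(c, d, n)     # m = c^d mod n
```

That such a complete, working cipher could be printed on a T-shirt or in an email signature was exactly the demonstration of the export rules' futility that the cypherpunks intended.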
https://en.wikipedia.org/wiki/Cypherpunk
Digital credentials are the digital equivalent of paper-based credentials. Just as a paper-based credential could be a passport, a driver's license, a membership certificate or some kind of ticket to obtain some service, such as a cinema ticket or a public transport ticket, a digital credential is a proof of qualification, competence, or clearance that is attached to a person. Digital credentials prove something about their owner. Both types of credentials may contain personal information such as the person's name, birthplace and birthdate, and/or biometric information such as a picture or a fingerprint. Because of the still evolving, and sometimes conflicting, terminologies used in the fields of computer science, computer security, and cryptography, the term "digital credential" is used quite confusingly in these fields. Sometimes passwords or other means of authentication are referred to as credentials. In operating system design, credentials are the properties of a process (such as its effective UID) that are used for determining its access rights. On other occasions, certificates and associated key material such as those stored in PKCS #12 and PKCS #15 are referred to as credentials. Digital badges are a form of digital credential that indicates an accomplishment, skill, quality or interest. Digital badges can be earned in a variety of learning environments.[1] Money, in general, is not regarded as a form of qualification that is inherently linked to a specific individual, as the value of token money is perceived to reside independently of its holder. However, the emergence of digital assets such as digital cash has introduced a new set of challenges due to their susceptibility to replication. Consequently, digital cash protocols have been developed with additional measures to mitigate the issue of double spending, wherein a coin is used for multiple transactions.
Credentials, on the other hand, serve as tangible evidence of an individual's qualifications or attributes, acting as a validation of their capabilities. One notable example is the concept of e-coins, which are exclusively assigned to individuals and are not transferable to others. These e-coins can only be utilised in transactions with authorised merchants. Anonymity is maintained for individuals as long as they ensure that a coin is spent only once. However, if an individual attempts to spend the same coin multiple times, their identity can be established, enabling the bank or relevant authority to take appropriate action.[2] The shared characteristic of being tied to an individual forms the basis for the numerous similarities between digital cash and digital credentials. This commonality explains why these two concepts often exhibit overlapping features. In fact, a significant majority of implementations of anonymous digital credentials also incorporate elements of digital cash systems.[2] The concept of anonymous digital credentials centres on the provision of cryptographic tokens to users, enabling them to demonstrate specific statements about themselves and their associations with public and private organizations while maintaining anonymity. This approach is viewed as a privacy-conscious alternative to the storage and utilization of extensive centralized user records, which can be linked together. Anonymous digital credentials are thus related to privacy and anonymity.[3] Analogous to the physical world, personalised or non-anonymous credentials include documents like passports, driving licenses, credit cards, health insurance cards, and club membership cards. These credentials bear the owner's name and possess certain validating features, such as signatures, PINs, or photographs, to prevent unauthorised usage.
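The mechanism by which double-spending reveals a spender's identity can be sketched in the style of Chaum's e-cash: the coin embeds the identity split into XOR shares, one spend reveals only one share per slot, but two spends answered against different random challenges expose both shares of some slot. All names and parameters below are illustrative assumptions, not a faithful rendering of any deployed protocol.

```python
# Toy sketch of Chaum-style double-spend detection: one spend reveals
# nothing about the identity; two spends of the same coin reconstruct it.
import secrets

K = 8              # number of challenge slots in the coin (hypothetical)
identity = 0xCAFE  # spender's identity encoded as an integer (hypothetical)

# Coin issuance: each slot holds XOR shares (r_i, identity ^ r_i).
shares = []
for _ in range(K):
    r = secrets.randbits(16)
    shares.append((r, identity ^ r))

def spend(challenge):
    """Answer a merchant's challenge bits: reveal one share per slot."""
    return [shares[i][challenge[i]] for i in range(K)]

# The same coin spent twice against different random challenges:
c1 = [secrets.randbits(1) for _ in range(K)]
c2 = list(c1)
c2[0] ^= 1         # random challenges differ in some slot; force one here
a1, a2 = spend(c1), spend(c2)

# Bank-side detection: any slot where the challenges differ yields both
# shares, and XOR-ing them recovers the cheater's identity.
recovered = next(a1[i] ^ a2[i] for i in range(K) if c1[i] != c2[i])
```

A single transcript contains one random-looking share per slot and so leaks nothing, which is exactly the anonymity-unless-you-cheat property described above.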
In contrast, anonymous credentials in the physical realm can be exemplified by forms of currency, bus and train tickets, and game-arcade tokens. These items lack personally identifiable information, allowing for their transfer between users without the issuers or relying parties being aware of such transactions. Organizations responsible for issuing credentials verify the authenticity of the information contained within them, which can be provided to verifying entities upon request.[4] To explore the specific privacy-related characteristics of credentials, it is instructive to examine two types of credentials: physical money and credit cards. Both facilitate payment transactions effectively, although the extent and quality of information disclosed differ significantly. Money is safeguarded against counterfeiting through its physical properties. Furthermore, it reveals minimal information, with coins featuring an inherent value and year of minting, while banknotes incorporate a unique serial number to comply with traceability requirements for law enforcement purposes.[5] In contrast, the usage of credit cards, despite sharing a fundamental purpose with money, allows for the generation of detailed records pertaining to the cardholder. Consequently, credit cards are not considered protective of privacy. The primary advantage of money, in terms of privacy, is that its users can preserve their anonymity. However, real-world cash also possesses additional security and usability features that contribute to its widespread acceptance.[6] Credentials utilised within a national identification system are particularly relevant to privacy considerations. Such identification documents, including passports, driver's licenses, or other types of cards, typically contain essential personal information. In certain scenarios, it may be advantageous to selectively disclose only specific portions of the information contained within the identification document. 
For example, it might be desirable to reveal only the minimum age of an individual or the fact that they are qualified to drive a car.[7] The original system of anonymous credentials, initially proposed by David Chaum,[8] is sometimes referred to as a pseudonym system.[9] This nomenclature arises from the nature of the credentials within this system, which are acquired and presented to organizations under distinct pseudonyms that cannot be linked together. Pseudonyms[8] represent a valuable extension of anonymity. They afford users the ability to adopt different names when interacting with each organization. While pseudonyms enable organizations to establish associations with user accounts, they are unable to ascertain the true identities of their customers. Nonetheless, through the utilisation of an anonymous credential, specific assertions concerning a user's relationship with one organization, under a pseudonym, can be verified by another organization that only recognizes the user under a different pseudonym. Anonymous credential systems have a close connection to the concept of untraceable or anonymous payments.[10] David Chaum made significant contributions to this field by introducing blind signature protocols as a novel cryptographic primitive. In such protocols, the signer remains oblivious to the message being signed, while the recipient obtains a valid signature on it that the signer cannot later link to the signing session. Blind signatures serve as a crucial building block for various privacy-sensitive applications, including anonymous payments, voting systems, and credentials. The original notion of an anonymous credential system[8] was derived from the concept of blind signatures but relied on a trusted party for the transfer of credentials, involving the translation from one pseudonym to another.
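The blind/sign/unblind/verify flow of a Chaum-style RSA blind signature can be sketched in a few lines. The following toy Python example uses textbook RSA with deliberately small, illustrative primes; it is a sketch of the protocol's algebra, not a secure implementation (real systems use moduli of at least 2048 bits plus proper hashing and padding):

```python
import secrets

# Toy Chaum-style RSA blind signature (illustrative parameters only).
p, q = 10007, 10009                  # small primes for demonstration
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))    # signer's private exponent

m = 42424242                         # message (in practice, a hash of the message)

# 1. The user blinds m with a random factor r invertible mod n.
while True:
    r = secrets.randbelow(n - 2) + 2
    try:
        r_inv = pow(r, -1, n)        # raises ValueError if gcd(r, n) != 1
        break
    except ValueError:
        continue
blinded = (m * pow(r, e, n)) % n

# 2. The signer signs the blinded value without ever learning m.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds: (m * r^e)^d = m^d * r (mod n), so dividing by r yields m^d.
sig = (blind_sig * r_inv) % n

# 4. Anyone can verify the unblinded signature against the public key (n, e).
assert pow(sig, e, n) == m % n
```

The key point is step 3: because blinding multiplies the message by r^e, signing and then dividing by r leaves exactly the ordinary RSA signature m^d, yet the signer only ever saw the blinded value.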
Chaum's blind signature scheme, based on RSA signatures and the discrete logarithm problem, enabled the construction of anonymous credential systems. Stefan Brands further advanced digital credentials by introducing secret-key certificate-based credentials, enhancing Chaum's basic blind-signature system in both the discrete logarithm and strong RSA assumption settings. Brands credentials offer efficient algorithms and unconditional commercial security in terms of privacy,[11] along with additional features like a proof of non-membership blacklist.[12] Another form of credentials that adds a new feature to anonymous credentials is multi-show unlinkability, which is realized through the group-signature-related credentials of Camenisch et al. The introduction of group signatures opened up possibilities for multi-show unlinkable showing protocols. While blind signatures are highly relevant for electronic cash and single-show credentials, the cryptographic primitive known as the group signature introduced new avenues for constructing privacy-enhancing protocols.[13] Group signatures share similarities with Chaum's concept of credential systems.[8] In a group signature scheme, members of a group can sign a message using their respective secret keys. The resulting signature can be verified by anyone possessing the common public key, without revealing any information about the signer other than their group membership. Typically, a group manager entity exists, capable of disclosing the actual identity of the signer and managing the addition or removal of users from the group, often through the issuance or revocation of group membership certificates. The anonymity, unlinkability, and anonymity revocation features provided by group signatures make them suitable for various privacy-sensitive applications, such as voting, bidding, anonymous payments, and anonymous credentials.
Efficient constructions for group signatures were presented by Ateniese, Camenisch, Joye, and Tsudik,[14] while the most efficient multi-show unlinkable anonymous credential systems[15] (the latter being a streamlined version of idemix[16]) are based on similar principles.[17] This is particularly true for credential systems that provide efficient means for implementing anonymous multi-show credentials with credential revocation.[18] Both schemes are based on techniques for doing proofs of knowledge.[19][20] Proofs of knowledge based on the discrete logarithm problem for groups of known order and the special RSA problem for groups of hidden order form the foundation for most modern group signature and anonymous credential systems.[12][14][15][21] Moreover, direct anonymous attestation, a protocol for authenticating trusted platform modules, is also based on the same techniques. Direct anonymous attestation can be considered the first commercial application of multi-show anonymous digital credentials, although in this case, the credentials are associated with chips and computer platforms rather than individuals. From an application perspective, the main advantage of Camenisch et al.'s multi-show unlinkable credentials over the more efficient Brands credentials is the property of multi-show unlinkability. However, this property is primarily relevant in offline settings. Brands credentials offer a mechanism that provides analogous functionality without sacrificing performance: an efficient batch issuing protocol capable of simultaneously issuing multiple unlinkable credentials. This mechanism can be combined with a privacy-preserving certificate refresh process, which generates a fresh unlinkable credential with the same attributes as a previously spent credential. Online credentials for learning are digital credentials that are offered in place of traditional paper credentials for a skill or educational achievement.
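The discrete-logarithm proofs of knowledge mentioned above are typified by the Schnorr identification protocol, in which a prover convinces a verifier that it knows x with y = g^x without revealing x. The following is a minimal Python sketch in its interactive, honest-verifier form, with toy group parameters (real systems use groups of at least 256-bit order, and usually the non-interactive Fiat-Shamir variant):

```python
import secrets

# Toy Schnorr proof of knowledge of a discrete logarithm.
q = 1019             # prime order of the subgroup
p = 2 * q + 1        # safe prime, 2039
g = 4                # generator of the order-q subgroup of Z_p*

x = secrets.randbelow(q)     # prover's secret
y = pow(g, x, p)             # public value; prover shows knowledge of log_g(y)

# Commitment: prover picks a fresh random nonce k and sends t = g^k.
k = secrets.randbelow(q)
t = pow(g, k, p)

# Challenge: verifier replies with a random c.
c = secrets.randbelow(q)

# Response: prover answers s = k + c*x (mod q); k masks x, so s leaks nothing alone.
s = (k + c * x) % q

# Verification: g^s = g^k * (g^x)^c = t * y^c (mod p).
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

Credential and group-signature schemes compose such proofs over several secret values (attributes, membership certificates) instead of a single exponent, but the commit/challenge/response pattern is the same.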
Directly linked to the accelerated development of internet communication technologies, the development of digital badges, electronic passports and massive open online courses[22] (MOOCs) has a very direct bearing on our understanding of learning, recognition and levels, as they pose a direct challenge to the status quo. It is useful to distinguish between three forms of online credentials: test-based credentials, online badges, and online certificates.[23] This article incorporates text from a free content work, licensed under CC BY-SA IGO 3.0. Text taken from Level-setting and recognition of learning outcomes: The use of level descriptors in the twenty-first century, 129-131, Keevey, James; Chakroun, Borhene, UNESCO.
https://en.wikipedia.org/wiki/Digital_credential
Digital self-determination is a multidisciplinary concept derived from the legal concept of self-determination and applied to the digital sphere, to address the unique challenges to individual and collective agency and autonomy arising with the increasing digitalization of many aspects of society and daily life. There is no philosophically or legally agreed-upon concept of digital self-determination yet. Broadly speaking, the term describes the attempt to comprehensively project the pattern of human self-determination (as first explored in disciplines like philosophy and psychology, and in the law) into the digital age. The concept was included in an official document for the first time by ARCEP, the French telecoms regulator, in a section of its 2021 Report on the State of the Internet,[1] exploring the work on "network self-determination"[2] conducted by Professor Luca Belli. The concept of self-determination relates to concepts of subjectivity, dignity, and autonomy in classic central-European philosophy and derives from Immanuel Kant's conception of freedom. Self-determination presupposes that human beings are entities capable of reason and responsibility for their own rationally chosen and justified actions (autonomy), and ought to be treated accordingly. In formulating his categorical imperative (kategorischer Imperativ), Kant suggested that humans, as a condition of their autonomy, must never be treated merely as a means to an end but always as an end in themselves. The pattern of self-determination similarly aims at enabling autonomous human beings to create, choose and pursue their own identity, action, and life choices without undue interference. In psychology, the concept of self-determination is closely related to self-regulation and intrinsic motivation, i.e., engaging in a behavior or activity because it is inherently rewarding to do so, as opposed to being driven by external motivations or pressures, like monetary incentives, status, or fear.
In this context, self-determination and intrinsic motivation are linked to feeling in control of one's choices and behavior and are considered necessary for psychological well-being. Self-determination theory (SDT), first introduced by psychologists Richard Ryan and Edward Deci in the 1980s,[3][4] and further developed through the 1990s and 2000s, has been largely influential in shaping the concept of self-determination in the field of psychology. Ryan and Deci's SDT proposes that individuals' motivated behavior is characterized by three basic and universal needs: autonomy, competence, and relatedness.[5] Autonomy refers here to the need to feel free to decide one's course of action. Competence refers to the need to have the capacity and skills to undertake and complete motivated behavior in an effective manner. Finally, relatedness refers to the need to experience warm and caring social relationships and feel connected to others. According to SDT, all three needs must be fulfilled for optimal functioning and psychological well-being. However, other psychologists, like Barry Schwartz, have argued that if self-determination is taken to extremes, freedom of choice can turn into the "tyranny of choice".[6] In this view, having too much autonomy and too many choices over our course of action can be perceived as overwhelming, make our decisions more difficult, and ultimately lead to psychological distress rather than well-being. In international law, the right of a people to self-determination is commonly recognized as a ius cogens rule. Here, self-determination denotes that a people, based on respect for the principle of equal rights and fair equality of opportunity, have the right to freely choose their sovereignty, international political status, and economic, social, and cultural development with no interference.
In the framework of the United Nations, fundamental rights like self-determination are mainly defined in the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Covenant on Economic, Social and Cultural Rights. The concept of informational self-determination (informationelle Selbstbestimmung), considered a modern fundamental right which protects against unjustified data processing, has featured prominently in the German Federal Constitutional Court's (Bundesverfassungsgericht) jurisprudence and might be the most direct precursor and inspiration for the concept of digital self-determination. In 1983, the Bundesverfassungsgericht ruled that "in the context of modern data processing, the general right of personality under Article 2.1 in conjunction with Article 1.1 of the Basic Law encompasses the protection of the individual against unlimited collection, storage, use and sharing of personal data. The fundamental right guarantees the authority conferred on the individual to, in principle, decide themselves on the disclosure and use of their personal data." (Volkszählungsurteil, headnote 1). Philosophically, the right to informational self-determination is deeply rooted in the Bundesverfassungsgericht's understanding of inviolable human dignity (Article 1 of the Grundgesetz) as a prohibition of human objectification (in German: Objektformel; see for example n°33 of BVerfGE 27, 1 - Mikrozensus). This understanding refers back to the late 18th-century German philosophy of the Enlightenment. The Volkszählungsurteil was inspired by the concern that modern data processing technology could lead to a "registration and cataloging of one's personality in a manner that is incompatible with human dignity" (Volkszählungsurteil, headnote 4). In this view, human beings, due to their inviolable dignity, may never be treated like depersonalized and objectified resources that can be harvested for data.
Instead, humans, due to their capacity for autonomy, are self-determined agents possessing a significant degree of control over their informational images. The increasing digitization of most aspects of society poses new challenges for the concept and realization of self-determination.[7] While the digital sphere has ushered in innovation and opened up new opportunities for self-expression and communication for individuals across the globe, its reach and benefits have not been evenly distributed, oftentimes deepening existing inequalities and power structures, a phenomenon commonly referred to as a digital divide. Moreover, the digital transformation has enabled, oftentimes unbeknownst to individuals, the mass collection, analysis, and harvesting of personal data by private companies and governments to infer individuals' information and preferences (e.g., by tracking browsing and shopping history), influence opinions and behavior (e.g., through filter bubbles and targeted advertisements), and/or to make decisions about them (e.g., approving or not a loan or employment application), thus posing new threats to individuals' privacy and autonomy.[8] Although the definition of digital self-determination is still evolving, the term has been used to address humans' capacity (or lack thereof) to exercise self-determination in their existence in and usage of digital media, spaces, networks, and technologies, with the protection of the potential for human flourishing in the digital world as one of the chief concerns.[9] Starting in the 2010s, a few multidisciplinary and cross-sectoral initiatives around the world have been working on developing a theoretical framework for the concept of digital self-determination.
In 2015, the Cologne Center for Ethics, Rights, Economics, and Social Sciences of Health (CERES), housed at the University of Cologne, conducted a study to help define digital self-determination and develop metrics to measure its fulfillment.[10] Their study report defines digital self-determination as "the concrete development of a human personality or the possibility of realizing one's own plans of action and decisions to act, insofar as this relates to the conscious use of digital media or is (co-)dependent on the existence or functioning of digital media". In 2017, Professor Luca Belli presented at the United Nations Internet Governance Forum the concept of network self-determination as the "right to freely associate in order to define, in a democratic fashion, the design, development and management of network infrastructure as a common good, so that all individuals can freely seek, impart and receive information and innovation."[11] Arguing that the right to network self-determination finds its basis in the fundamental right to self-determination of peoples as well as in the right to informational self-determination, Belli posits that network self-determination plays a pivotal role in allowing individuals to associate and join efforts to bridge digital divides in a bottom-up fashion, freely developing common infrastructure. The concept gained traction at the Latin American level, becoming a core element of research and policy proposals dedicated to community networks.[12] In 2018, the Swiss government launched a Digital Self-Determination network in response to the action plan for the Federal Council's 'Digital Switzerland' strategy, including representatives from the Swiss Federal Administration, academia, civil society, and the private sector.[13] The work of this network conceptualizes digital self-determination as "a way of enhancing trust into digital transformation while allowing all actors of society to benefit from the potential of the data economy".
This work proposes that the core principles of digital self-determination are transparency and trust, control and self-determined data sharing, user-oriented data spaces, and decentralized data spaces that operate in proximity to citizens' needs. The work of the network aims "to create an international network that represents the basic principles of digital self-determination and on this basis will elaborate best practices, standards, and agreements to develop international data spaces". In 2021, the French telecoms regulator (ARCEP) referred to the concept of digital self-determination in its official annual report dedicated to "The State of the Internet",[1] drawing on the IGF output report on "The Value of Internet Openness in Times of Crisis". In 2021, the Centre of AI & Data Governance at Singapore Management University launched a major research project focusing on the concept of digital self-determination, in collaboration with the Swiss government and other research partners.[14] Their theoretical framework[7] focuses on data governance and privacy, and proposes that the core components of digital self-determination are the empowerment of data subjects to oversee their sense of self in the digital sphere, their ability to govern their data, consent as a cornerstone of privacy and data protection, protection against data malfeasance, and accuracy and authenticity of the data collected. This proposed framework also emphasizes that digital self-determination refers to both individuals and collectives and that the concept should be understood in the context of "rights dependent on duties" and in parallel to concepts of a social or relational self, social responsibility, and digital solidarity (see below: Addressing the multilevel 'self' in digital self-determination). In 2021, the Digital Asia Hub, in collaboration with the Berkman Klein Center at Harvard University and the Global Network of Internet & Society Centers, conducted a research sprint to explore the concept of digital self-determination from different perspectives and across cultural contexts. This initiative approached digital self-determination "as an enabler of - or at least contributor to - the exercise of autonomy and agency in the face of shrinking choices", to address questions of control, power, and equity "in a world that is increasingly constructed, mediated, and at times even dominated by digital technologies and digital media, including the underlying infrastructures."[15] In addition to the work of governments and research centers, civil society members have also advocated for digital self-determination. For example, Ferdinand von Schirach, a legal attorney and widely read German writer of fictional legal short stories and novels, has launched an initiative entitled "JEDER MENSCH", which translates to "every human". In "JEDER MENSCH", von Schirach calls for the addition of six new fundamental rights to the Charter of Fundamental Rights of the European Union. Article 2 of this proposal is entitled "right to digital self-determination", and proposes that "Everyone has the right to digital self-determination.
Excessive profiling or the manipulation of people is forbidden."[16] In October 2021, an International Network on Digital Self-Determination[17] was created with the intention of "bringing together diverse perspectives from different fields around the world to study and design ways to engage in trustworthy data spaces and ensure human centric approaches".[18] The network is composed of experts from the Directorate of International Law of the Swiss Federal Department of Foreign Affairs;[19] the Centre for Artificial Intelligence and Data Governance at Singapore Management University;[20] the Berkman Klein Center at Harvard University;[21] the Global Tech Policy Practice at the TUM School of Social Sciences and Technology;[22] and The GovLab at New York University.[23] Different sectors of society, ranging from legislators and policy-makers, to public organizations and scholars, to activists and members of civil society, have called for digital infrastructure, tools and systems that protect and promote individuals' self-determination, including equal and free access, human-centered design, better privacy protections and control over data. These elements are closely connected and complement one another. For example, equal access to digital infrastructure can enable the representation of diverse viewpoints and participatory governance in the digital sphere, and decentralized systems might be necessary to ensure individuals' control over their data.
Bridging the various forms of existing digital divides and providing equitable and fair access to digital technologies and the internet has been proposed as crucial to ensure that all individuals are able to benefit from the digital age, including access to information, services, and advancement opportunities.[24][25] In this sense, the concept of digital self-determination overlaps with the concept of "network self-determination",[26] as it emphasises that groups of unconnected and scarcely connected individuals can regain control over digital infrastructures by building them and shaping the governance framework that will organise them as a common good.[27] As such, Belli stresses that network self-determination leads to several positive externalities for the affected communities, preserving the Internet as an open, distributed, interoperable and generative network of networks.[2] Digital literacy and media literacy have been proposed as necessary for individuals to acquire the knowledge and skills to use digital tools, as well as to critically assess the content they encounter online, create their own content, and understand the features and implications of the digital technology used on them as well as the technology they consciously and willingly engage with.[28] In addition to basic digital navigation skills and critical consumption of information, definitions of digital literacy have been extended to include an awareness of existing alternatives to the digital platforms and services used, understanding of how personal data is handled, awareness of rights and existing legal protections, and of measures to independently protect one's security and privacy online (e.g., the adoption of obfuscation techniques as a way of evading and protesting digital surveillance[29]).
Internet activist Eli Pariser coined the term filter bubble to refer to the reduced availability of divergent opinions and realities that we encounter online as a consequence of personalization algorithms like personalized search and recommendation systems.[30] Filter bubbles have been suggested to facilitate a warped understanding of others' points of view and the world. Ensuring a wide representation of diverse realities on digital platforms could be a way of increasing exposure to conflicting viewpoints and avoiding intellectual isolation into informational bubbles. Scholars have coined the term attention economy to refer to the treatment of human attention as a scarce commodity in the context of ever-increasing amounts of information and products. In this view, the increasing competition for users' limited attention, especially when relying on advertising revenue models, creates a pressing goal for digital platforms to get as many people as possible to spend as much time and attention as possible using their product or service. In their quest for users' scarce attention, these platforms would be incentivized to exploit users' cognitive and emotional weaknesses, for example via constant notifications, dark patterns, forced multitasking, social comparison, and incendiary content.[8] Advocates of human-centered design in technology (or humane technology) propose that technology should refrain from such 'brain-hacking' practices, and instead should support users' agency over their time and attention as well as their overall wellbeing.[31] Scholar Shoshana Zuboff popularized the term surveillance capitalism to refer to the private sector's commodification of users' personal data for profit (e.g. via targeted advertising), leading to increased vulnerability to surveillance and exploitation. Surveillance capitalism relies on centralized data management models wherein private companies retain ownership and control over users' data.
To guard against the challenges to individuals' privacy and self-determination, various alternative data governance models have recently been proposed around the world, including trusts,[32] commons,[33] cooperatives,[34] collaboratives,[35] fiduciaries,[36] and "pods".[37] These models have some overlap and share a common mission to give more control to individuals over their data and thus address the current power imbalances between data holders and data subjects.[38] An individual's exercise of self-agency can be intimately connected to the digital environments one is embedded in, which can shape one's choice architecture, access to information and opportunities, as well as exposure to harm and exploitation, thereby affecting the person's capacity to freely and autonomously conduct his or her life. A variety of digital technologies and their underlying infrastructure, regardless of their relatively visible or indirect human interfaces, could contribute to conditions that empower or disempower an individual's self-determination in the spheres of socio-economic participation, representation of cultural identity, and political expression. The extent of technologically mediated spheres where such influence could take place over an individual's self-determined choices has been the focus of growing contemporary debates across diverse geographies. One of the debates concerns whether an individual's privacy, as a form of control over one's information,[39] may or may not be sufficiently protected from exploitative data harvesting and micro-targeting that can exert undue behavioural influence over the individual as part of a targeted group.
Developments in this area vary greatly across countries and regions with different privacy frameworks and big data policies, such as the European Union's General Data Protection Regulation and China's Social Credit System,[40] which approach personal data distinctly.[41] Other debates range from whether individual agency in decision-making can be undermined by predictive algorithms;[42] whether individual labor, particularly in the Global South,[43] may encounter new employment opportunities as well as unique vulnerabilities in the digital economy; whether an individual's self-expression may be unduly and discriminately policed by surveillance technologies deployed in smart cities, particularly those integrating facial recognition and emotion recognition capabilities running on biometric data, as a form of digital panopticon;[44] to whether an individual's access to diverse information may be affected by the digital divide and the dominance of centralized online platforms, potentially limiting one's capacity to imagine his or her identity[45] and make informed decisions. Digital media and technology afford children the opportunity to engage in various activities that support their development, learning, and leisure time.[46][47] Such activities include play, interactions with others, sharing and creating content, and experimenting with the varied forms of identity afforded by the mediums they engage with.[48] At the same time, despite digital media's affordances, children are users under 18 years of age, which can have unintended consequences for how children consume content, how vulnerable they are, and how interactions with technology impact the child's emotional, behavioral, and cognitive development.
Therefore, calls within digital literacy and child-technology interaction research assert that the ethical design of technology is essential for designing equitable environments for children.[49] Work in digital media and learning acknowledges the affordances of technology for creating expansive ways of learning and development for children, while at the same time stressing that children should learn critical digital literacies that enable them to communicate, evaluate, and construct knowledge within digital media.[50] Additionally, ethical considerations should be taken into account to support children's self-determination.[51] For instance, within this body of work, there is attention to involving children in the decision-making process of technology design as an ethical methodological approach to designing technology for children. In other words, involving children in the design process of technologies and thinking about the ethical dimensions of children's interactions shifts the notion of vulnerability towards supporting children to enact their self-determination and positions them as active creators of their own digital futures. Beyond ethical considerations, children's involvement in digital technologies and digital market practices also has important relevance to their privacy and data protection rights. The use of predictive analytics and tracking software systems can impact children's digital and real-life choices by exploiting massive profiling practices. In fact, due to the ubiquitous use of these algorithmic systems at both the state and private-sector level, children's privacy can easily be violated and they can be personally identifiable in the digital sphere.[52] Article 12 of the UN CRC implies a responsibility for states that children should have the right to form and express their own views "freely, without any pressure".[53] In the literal analysis, pressure refers to any kind of manipulation, oppression or exploitation.
States parties should recognize that all children, regardless of age, are capable of forming and expressing their own autonomous opinions.[53] Also, it is stated by the Committee that children should have the right to be heard even if they do not have a comprehensive understanding of the subject matter affecting them.[53] Moreover, Article 3 of the UNCRC states that the best interest of the child shall be embedded in private and governmental decision-making processes and shall be a primary consideration in relation to the services and procedures which involve children.[54] Anchoring these responsibilities to private and public digital practices, and as is highlighted in General Comment No. 25 of the Committee on the Rights of the Child, children are at great risk in the digital domain regarding their vulnerable and evolving identity.[55] It turns out that with the proliferation of mass surveillance and predictive analytics, new disputes are on the way for states to protect children's very innate rights. To this end, recent class actions and regulatory efforts against tech firms can be promising examples in the context of pushing the private sector to adopt more privacy-preserving practices towards children, which can provide a golden shield for their autonomy.[56][57] In this incautiously regulated atmosphere, it has become easier to make a profit with the help of behavioral advertising aimed at children.[58] Without appropriate informed-consent and parental-consent practices, it is easy to manipulate and exploit the intrinsic vulnerabilities of children and nudge them to choose specific products and applications. In this regard, Article 8 of the GDPR provides a set of age limits on the processing of the personal data of children in relation to information society services (ISS). Pursuant to Article 8, where a child is at least 16 years old, the child can give consent to the lawful processing of personal data, restricted to the purpose of processing (Art. 6(1)(a)).
For children under 16 years old, processing is lawful only if, and to the extent that, consent is given by the holder of parental responsibility over the child. Member States may lower this age limit from 16 to as low as 13. In addition, data controllers are required to take reasonable measures to protect children's data. Supporting this, Recital 38 states that children merit specific protection regarding the use, collection and processing of their data, since children are less aware of the impacts, consequences and safeguards of the processing of their personal data. The GDPR also refers to children in Articles 40 and 57 and Recitals 58 and 75. Beyond the GDPR, one of the more structured regulations is the UK Information Commissioner's Office (ICO) Children's Code (formally the Age Appropriate Design Code), which came into force in September 2020.[59] The Children's Code sets the age limit at 18 with regard to the ability to give free consent, while placing responsibility on providers of online services such as apps, games, connected devices and toys, and news services. What distinguishes the Children's Code from the EU regulations is that it applies to all information society services that are likely to be accessed by children. This means that even if a service is not directly aimed at children, the parties offering it must comply with the Children's Code.
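The age thresholds of GDPR Article 8 described above can be sketched as a simple check. This is an illustrative sketch only; the function name and parameters are hypothetical and not drawn from any legal text or library, and real lawfulness assessments involve far more than an age comparison.

```python
# Illustrative sketch of the GDPR Article 8 age-of-consent rule.
# All names here are hypothetical; this is not legal advice or a real API.

GDPR_DEFAULT_AGE = 16   # Art. 8(1): default age of digital consent
GDPR_MINIMUM_AGE = 13   # Art. 8(1): lowest threshold a Member State may set

def consent_is_valid(child_age: int,
                     child_consented: bool,
                     parent_consented: bool,
                     member_state_age: int = GDPR_DEFAULT_AGE) -> bool:
    """Return True if consent-based processing (Art. 6(1)(a)) could be lawful."""
    if not (GDPR_MINIMUM_AGE <= member_state_age <= GDPR_DEFAULT_AGE):
        raise ValueError("Member States may only set the threshold between 13 and 16")
    if child_age >= member_state_age:
        return child_consented      # the child may consent on their own behalf
    return parent_consented         # otherwise the holder of parental
                                    # responsibility must consent
```

For example, a 15-year-old's own consent suffices only in a Member State that has lowered the threshold to 15 or below; otherwise parental consent is required.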
The ICO's Children's Code is also infused with the notion of the best interests of the child laid out in the UNCRC.[60] Taking a broad scope, the ICO lists a set of guiding points for organizations to support the best interests of the child, such as recognizing that children have an evolving capacity to form their own views and giving due weight to those views, protecting their need to develop their own ideas and identity, and their rights to assembly and play.[60] The Code also extends the protection of children's personal data with a set of key standards such as data minimisation, data protection impact assessments, age-appropriate design, privacy by default, and transparency. The politics of empire still permeate shared histories. Unequal social relations between colonizing and colonized peoples materialized through exploitation, segregation, epistemic violence, and so on. Throughout the world, these discourses of colonialism dominated peoples' perceptions and cultures. Postcolonial critics have examined how colonized peoples could attain cultural, economic, and social agency against the oppressive structures and representations imposed on their lives and societies.[61] Through its temporality, however, the prefix "post" implies that the historical period of colonization has ended and that colonized subjects are now free of its discourses. Scholars have instead focused on the continuity of colonialism even after its historical end. Neo-colonial structures and discourses remain part of different "postcolonial" cultures.[62] The postcolonial era, in which colonized countries gained independence and autonomy, has been a means for peoples to regain their own self-determination and freedom. Yet neo-colonial structures are still rampant in postcolonial societies. Although the nation-state might forward the idea of autonomy and self-determination, new forms of colonialism are always emerging.
This dialectic between colonialism and self-determination encompasses a range of fields, changing in form, focus, and scope over time. It is reflected in the complex political and policy relationships between "postcolonial" peoples and the state, especially since most states replicate the legal and political systems of their former colonizer. History shows that state policy in fields as diverse as health, education, housing, public works, employment, and justice has had, and continues to have, negative effects on indigenous peoples after independence.[63] This negative effect is shared across formerly colonized peoples. Alongside these political tensions, economic interests have manipulated legal and governance frameworks to extract value and resources from formerly colonized territories, often without adequate compensation for, or consultation with, the affected individuals and communities.[64] Against this background, digital colonialism has emerged as a dominant discourse in the digital sphere. Digital colonialism is a structural form of domination exercised through the centralized ownership and control of the three core pillars of the digital ecosystem: software, hardware, and network connectivity. Control over these three pillars gives giant corporations immense political, economic, and social power over not only individuals but even nation-states.[65] Assimilation into the tech products, models, and ideologies of foreign powers constitutes a colonization of the internet age.[66] Today, a new form of corporate colonization is taking place. Instead of conquering land, Big Tech corporations are colonizing digital technology.
The following functions are dominated by a handful of multinational companies: search engines (Google); web browsers (Google Chrome); smartphone and tablet operating systems (Google Android, Apple iOS); desktop and laptop operating systems (Microsoft Windows); office software (Microsoft Office, Google Docs); cloud infrastructure and services (Amazon, Microsoft, Google, IBM); social networking platforms (Facebook, Twitter); transportation (Uber, Lyft); business networking (Microsoft LinkedIn); streaming video (Google YouTube, Netflix, Hulu); and online advertising (Google, Facebook) – among others. These include the five wealthiest corporations in the world, with a combined market cap exceeding $3 trillion.[67] If a nation-state integrates these Big Tech products into its society, these multinational corporations obtain enormous power over its economy and create technological dependencies that lead to perpetual resource extraction.[citation needed] This resembles the colonial period, in which colonies were made dependent on the colonizer's economy for further exploitation. Under digital colonialism, digital infrastructure in the Global South is engineered for Big Tech companies' needs, enabling economic and cultural domination while imposing privatized forms of governance.[68] To accomplish this, major corporations design digital technology to ensure their own dominance over critical functions in the tech ecosystem. This allows them to accumulate profits from rent-derived revenues and to exercise control over the flow of information, social activities, and a plethora of other political, social, economic, and military functions that use their technologies. Digital colonialism depends on code. In Code: And Other Laws of Cyberspace, Lawrence Lessig famously argued that computer code shapes the rules, norms, and behaviors of computer-mediated experiences.
As a result, "code is law" in the sense that code has the power to usurp the legal, institutional, and social norms shaping the political, economic, and cultural domains of society. This critical insight has been applied in fields like copyright, free-speech regulation, Internet governance, blockchain, privacy, and even torts.[69] This is similar to architecture in physical space during colonialism: buildings and infrastructure were built to reinforce the dominance and reach of the colonial project.[70] "Postcolonial" peoples thus face multiple digital limitations in their access to and use of networked digital infrastructures. These limitations threaten to reflect and restructure existing relations of social inequality grounded in colonialism and the continuing processes of neo-colonialism. Indigenous peoples are acutely aware of this potential, and are working with various partners to decolonize the digital sphere. They are undertaking a variety of projects that represent their diverse and localized experiences, alongside a common desire for self-determination.[citation needed] Rural and remote indigenous communities face persistent problems of access to the digital sphere, associated with the historic and ongoing effects of colonialism. Remote indigenous communities become 'offline by design' because their efforts to go online have been obstructed.[71] Indigenous peoples are asserting their digital self-determination by using digital platforms to build online communities, express virtual identities, and represent their culture online. They are thus no longer static and offline, but are moving towards 'networked individualism'.[72] Their engagement with the digital sphere resists imposed representations of their identities and deterritorializes conceptions of virtual communities. Accordingly, formerly colonized peoples are continually engaged in decolonizing the latent neo-colonial and colonial discourses that dominate the internet.
Digital apartheid has also been a key concept in debates around digital self-determination. For authors such as Christian Fuchs, digital apartheid means that "certain groups and regions of the world are systematically excluded from cyberspace and the benefits that it can create."[73] Brown and Czerniewicz (2010), drawing on a research project interrogating the access of higher education students in South Africa to Information and Communications Technology (ICT), highlight that while age or generational aspects have been a characteristic of digital divides, these divides are now rather a question of access and opportunity, claiming that in the present day "digital apartheid is alive and well."[74] Borrowing from Graham (2011),[75] and extending to the representation of conditions surrounding higher education in post-apartheid South Africa, Ashton et al. (2018)[76] describe digital apartheid as a multidimensional process with three dimensions: a material dimension (including access to infrastructure, devices, cellular coverage, and electricity), a skills dimension (including the education legacy regarding computer training, and social capital with regard to family or community computer skills), and a virtual dimension (including language, culture and contextual relevance). The authors argue that "The virtual dimension emerges from the intentional act of 'digital redlining' which takes on a number of forms. It may be under the guise of protecting an organisation from spam and illicit, harmful cyber-attacks, but has the secondary outcome of blocking or filtering out communities who only have access through cheaper portals."[76] It also includes the influence of the Westernised, English-language internet, which further shapes content visibility. The skills dimension reflects the fact that ICT lessons were not part of the curriculum until recently, so skill development remained underexposed and restricted.
The authors refer to the material dimension as the most cited concern regarding introducing technology as part of the curriculum, arguing that "the lack of power infrastructure in lower socio-economic areas and exorbitant data costs, impact some students' ability to access their learning resources."[76] Since 2019, this concept, signifying advantage for some and dispossession for others, has also been used to characterize internet shutdowns and communications blockades in Jammu and Kashmir. The region, contested and claimed in its entirety by both India and Pakistan and the site of an active armed conflict, witnessed the Indian state impose a total communication blackout and internet shutdown in Jammu and Kashmir on the intervening night of 4 and 5 August 2019, as part of its unilateral measures to remove the semi-autonomous status of the disputed territory.[77] Low-speed 2G internet was restored in January 2020,[78] while high-speed 4G internet was restored in February 2021.[79] A 2019 report notes that between 2012 and 2019 there were 180 internet shutdowns in the region.[80] India also topped the list of 29 countries that disrupted internet access for their people in 2020.[81] The report by Access Now highlighted, "India had instituted what had become a perpetual, punitive shutdown in Jammu and Kashmir beginning in August 2019.
Residents in these states had previously experienced frequent periodic shutdowns, and in 2020 they were deprived of reliable, secure, open, and accessible internet on an ongoing basis."[81] Placing these frequent shutdowns in the context of the ongoing conflict in Kashmir, the report Kashmir's Internet Siege (2020) by the Jammu Kashmir Coalition of Civil Society argues that through frequent internet shutdowns the Indian government has been enacting a "digital apartheid" in these regions, "a form of systemic and pervasive discriminatory treatment and collective punishment."[82] According to the report, "frequent and prolonged internet shutdowns enact a profound digital apartheid by systematically and structurally depriving the people of Kashmir of the means to participate in a highly networked and digitised world."[82] This systematic censorship and deprivation not only excluded the people, collectively, from participating in cyberspace, but also crippled IT companies and startups in Kashmir. It was noted to have affected at least a thousand employees working in this sector[83] just in the third month of the world's longest internet shutdown, which began on the intervening night of 4 and 5 August 2019 across Jammu and Kashmir.[82] In a statement, UN Special Rapporteurs referred to the communication blackout as a collective punishment imposed without any precipitating offence. "The shutdown of the internet and telecommunication networks, without justification from the Government, are inconsistent with the fundamental norms of necessity and proportionality," the experts said.[84] A news report quoting an entrepreneur who had been doing well with a startup noted that the "Internet is the oxygen for start-ups. The Centre pulled that plug on August 5. The virtual world was our space for growth. Now that's gone. All employees and producers have been rendered jobless [..]
I have to work by hook or by crook to meet the damage inflicted by loss of customers, undelivered orders and accumulated goods after the non-availability of Internet."[85] In June 2020, it was reported for the first time how non-local companies were able to win a majority of online contracts for the mining of mineral blocks, as locals were left at a disadvantage by the ban on high-speed internet.[86] The effect of this digital apartheid was also witnessed during the lockdown induced by the Covid-19 pandemic, which left the healthcare infrastructure crippled as doctors complained about not being able to access information or attend training on the coronavirus owing to the restricted internet. The president of the Doctors Association noted that the awareness drives carried out elsewhere about the virus were impossible to run in Kashmir: "We want to educate people through videos, which is not possible at 2G speed. We are handicapped in the absence of high speed internet."[87] Health experts and locals warned that the internet blackout was hampering the fight against the coronavirus in the region.[88] The internet shutdown also affected education at all levels in the region. News reports noted how Kashmiri education was left behind even as life elsewhere moved online under the stay-at-home guidelines of the pandemic.[89] A news report a year after the communication blackout and the subsequent restriction on high-speed internet highlighted that it had "ravaged health, education, entrepreneurship" in the region.[90] Promoting concepts and rights closely related to digital self-determination is a common goal behind regulatory initiatives in various legal systems.
Stemming from the conceptual framework of human rights and the well-established notion of informational self-determination, digital self-determination has gradually come to play an increasingly important role as a concept encompassing values and virtues that remain highly relevant in the context of the global network society, such as autonomy, dignity, and freedom. The importance of embedding fundamental values into the legislative frameworks regulating the digital sphere has been stressed numerous times by scholars,[91] public authorities, and representatives of various organizations. The EU's legal policy, while not explicitly referencing a right to digital self-determination, pursues closely related objectives. One of the overarching premises of the European Digital Strategy is to encourage the development of trustworthy technology that "works for the people".[92] It aims at advancing, among other things, "human-centered digital public services and administration" as well as "ethical principles for human-centered algorithms". The EU has outlined these policy goals in several regulatory agendas, including, inter alia, the EU Commission Digital Strategy, the European Data Strategy, and the EU's White Paper on Artificial Intelligence. Subsequently, the EU has pursued these objectives through the adoption or proposal of several legal instruments. The U.S. has yet to introduce a comprehensive information privacy law; legislation pertaining to data and digital rights currently exists at both the state and federal level and is often sector-specific.
In the United States, the Federal Trade Commission (FTC) is tasked with overseeing the protection of consumers' digital privacy and security, outlining fair information practice principles for the governance of online spaces.[96] Federal legislation includes the Children's Online Privacy Protection Act (COPPA), which regulates the online collection of personally identifiable information from children under the age of thirteen. The Health Insurance Portability and Accountability Act (HIPAA) includes federal standards for protecting the privacy and security of personal health data stored electronically. The Family Educational Rights and Privacy Act (FERPA) governs access to and the disclosure of student educational records. While state legislation varies in the strength of its protections, the California Consumer Privacy Act (CCPA) of 2018 provides California consumers with the right to access data, to know and delete personal information collected by businesses, to opt out of the sale of this information, and the right to non-discrimination for exercising these rights.
Artificial intelligence and digital self-determination
The proliferation of artificial intelligence (AI), as not a single technology but rather a set of technologies,[97] is increasingly shaping the technologically mediated spaces in which individuals and communities conduct their lives. From algorithmic recommendation in e-commerce[98] and social media platforms,[99] and smart surveillance in policing,[100] to automated resource allocation in public services,[101] the extent of possible AI applications that can influence an individual's autonomy is continuously contested, considering the widespread datafication of people's lives across the socio-economic and political spheres today.
For example, machine learning, a subfield of artificial intelligence, "allows us to extract information from data and discover new patterns, and is able to turn seemingly innocuous data into sensitive, personal data",[102] meaning an individual's privacy and anonymity may be vulnerable outside the original data domain, such as having their social media data harvested for computational propaganda in elections based on micro-targeting.[103] Another sphere where AI systems can affect the exercise of self-determination is when the datasets on which algorithms are trained mirror existing structures of inequality, thereby reinforcing structural discrimination that limits certain groups' access to fair treatment and opportunities. In the United States, an AI recruiting tool used by Amazon was shown to discriminate against female job applicants,[104] while an AI-based modelling tool used by the Department of Human Services in Allegheny County, Pennsylvania, to flag potential child abuse was shown to disproportionately profile poor and racial-minority families, raising questions about how predictive variables in algorithms can often be "abstractions" that "reflect priorities and preoccupations".[105]
Current landscape of AI principles relevant to digital self-determination
How states attempt to govern the AI industry shapes how AI applications are developed, tested and operated, and within what ethical frameworks relevant to many forms of human interests, thereby affecting the degree of digital self-determination exercised by individuals and communities. In recent years, there has been a proliferation of high-level principles and guidelines documents,[106] providing non-binding suggestions for public-sector policies and private-sector codes of conduct.
Compared to the binding laws enacted by states, the landscape of AI ethics principles paints a more diverse picture, with governmental and non-governmental organisations, including private companies, academic institutions and civil society, actively developing the ecosystem. A 2020 report by the United Nations identified "over 160 organizational, national and international sets of AI ethics and governance principles worldwide, although there is no common platform to bring these separate initiatives together".[107] Common themes of AI principles have been emerging as research efforts develop, with many closely linked to the various conditions of digital self-determination, such as control over one's data, protection from biased treatment, and equal access to the benefits offered by AI. A 2020 publication by the Berkman Klein Center for Internet and Society at Harvard University studied thirty-six "especially visible or influential" AI principles documents authored by governmental and non-governmental actors from multiple geographical regions and identified eight key themes. However, the report also notes "a wide and thorny gap between the articulation of these high-level concepts and their actual achievement in the real world".[108]
Examples of intergovernmental and governmental AI principles
Currently, few AI governance principles are internationally recognised. The "OECD Principles on AI", adopted by OECD member states and nine other non-OECD countries in May 2019, integrate elements relevant to digital self-determination such as "inclusive growth", "well-being", and "human-centered values and fairness", while emphasizing an individual's ability to appeal and "challenge the outcome of the AI system" and the adherence of AI development to "internationally recognized labour rights".[109] At the national level, numerous state AI policies make reference to AI ethics principles, though in an irregular fashion. Such references can be standalone documents.
For example, Japan's government established its "Social Principles of Human-Centric AI",[110] which is closely linked to its "AI Strategy 2019: AI for Everyone: People, Industries, Regions and Governments",[111] and a separate set of AI Utilization Guidelines that encourage voluntary adherence and emphasize that AI shall be used to "expand human abilities and creativity", shall not "infringe on a person's individual freedom, dignity or equality", and adheres to the "principle of human dignity and individual autonomy".[112] AI principles can also be incorporated into a national AI strategy, which primarily focuses on policy instruments for advancing AI, such as investment in STEM education and public-private partnerships. For example, India's AI strategy, the "National Strategy for Artificial Intelligence" published in June 2018, identifies key areas of high national priority for AI development (healthcare, agriculture, education, urban/smart-city infrastructure, and transportation and mobility), with ethical topics such as privacy and fairness integrated as a forward-looking section.[113]
Opportunities and challenges for AI principles to address self-determination
Non-binding AI principles suggested by actors inside or outside government may sometimes be further concretized into specific policy or regulation. In 2020, the United Kingdom government's advisory body on the responsible use of AI, the Centre for Data Ethics and Innovation, proposed specific measures for government, regulators and industry to tackle algorithmic bias in the sectors of financial services, local government, policing and recruitment,[114] each area being relevant to how individuals conduct their ways of life and access socio-economic opportunities without being subjected to unfair treatment.
Cultural and geographical representation has been highlighted as a challenge in ensuring that the burgeoning AI norms sufficiently consider the unique opportunities and risks faced by the global population, who exercise their autonomy and freedom under vastly different political regimes with varying degrees of rule of law. In 2020, a report published by the Council of Europe reviewed 116 AI principles documents and found that "these soft law documents are being primarily developed in Europe, North America and Asia", while "the global south is currently underrepresented in the landscape of organisations proposing AI ethics guidelines".[115]
https://en.wikipedia.org/wiki/Digital_self-determination
Enhanced Privacy ID (EPID) is Intel Corporation's recommended algorithm for attestation of a trusted system while preserving privacy. It has been incorporated in several Intel chipsets since 2008 and Intel processors since 2011. At RSAC 2016, Intel disclosed that it had shipped over 2.4 billion EPID keys since 2008.[1] EPID complies with the international standards ISO/IEC 20008[2] / 20009,[3] and with the Trusted Computing Group (TCG) TPM 2.0 standard for authentication.[4] Intel contributed EPID intellectual property to ISO/IEC under RAND-Z terms. Intel recommends that EPID become the industry standard for authenticating devices in the Internet of Things (IoT), and in December 2014 announced that it was licensing the technology to third-party chip makers to broadly enable its use.[5] EPID is an enhancement of the Direct Anonymous Attestation (DAA) algorithm.[6] DAA is a digital signature algorithm supporting anonymity. Unlike traditional digital signature algorithms, in which each entity has a unique public verification key and a unique private signature key, DAA provides a common group public verification key associated with many (typically millions of) unique private signature keys. DAA was created so that a device could prove to an external party what kind of device it is (and optionally what software is running on the device) without needing to reveal the device's identity, i.e., to prove that it is an authentic member of a group without revealing which member. EPID enhances DAA by adding the ability to revoke a private key given a signature created by that key, even if the key itself is still unknown. In 1999 the Pentium III added a Processor Serial Number (PSN) as a way to create identity for the security of endpoints on the internet.
However, privacy advocates were especially concerned, and Intel chose to remove the feature in later versions.[7] Building on advances in asymmetric cryptography and group keys of the time, Intel Labs researched and then standardized a way to obtain the benefits of the PSN while preserving privacy. There are three roles when using EPID: Issuer, Member and Verifier. The issuer is the entity that issues unique EPID private keys to each member of a group. The member is the entity that is trying to prove its membership in the group. The verifier is the entity checking an EPID signature to establish whether it was signed by an entity or device that is an authentic member of the group. Current usage by Intel has the Intel Key Generation Facility as the issuer, an Intel-based PC with an embedded EPID key as a member, and a server (possibly running in the cloud) as the verifier (on behalf of some party that wishes to know that it is communicating with a trusted component in a device). An EPID key can be issued directly, with the issuer creating the key and delivering it securely to the member, or blindly, so that the issuer never learns the EPID private key. Having EPID keys embedded in devices before they ship is an advantage for some usages, so that EPID is inherently available in devices as they arrive in the field. Having the EPID key issued using the blinded protocol is an advantage for other usages, since there is never a question of whether the issuer knew the EPID key in the device. It is also an option to have one EPID key in the device at the time of shipment, use that key to prove to another issuer that it is a valid device, and then be issued a different EPID key using the blinded issuing protocol. In recent years EPID has been used for attestation of applications in platforms used for protected content streaming and financial transactions. It is also used for attestation in Software Guard Extensions (SGX), released by Intel in 2015.
It is anticipated that EPID will become prevalent in IoT, where inherent key distribution with the processor chip and optional privacy benefits will be especially valued. An example usage of EPID is to prove that a device is genuine. A verifier wishing to know that a part is genuine asks the part to sign a cryptographic nonce with its EPID key. The part signs the nonce and also provides a proof that its EPID key has not been revoked. The verifier, after checking the validity of the signature and the proof, knows that the part is genuine. With EPID, this proof is anonymous and unlinkable.[8] EPID can be used to attest that a platform can securely stream digital rights management (DRM)-protected content because it has a minimum level of hardware security. The Intel Insider program uses EPID for platform attestation to the rights-holder. Data Protection Technology (DPT) for Transactions is a product for performing two-way authentication between a point of sale (POS) terminal and a backend server based on EPID keys. Using hardware roots of trust based on EPID authentication, the initial activation and provisioning of a POS terminal can be performed securely with a remote server. In general, EPID can be used in this way as the basis for securely provisioning any cryptographic key material over the air or down the wire. For securing the IoT, EPID can be used to provide authentication while also preserving privacy. EPID keys placed in devices during manufacturing are ideal for provisioning other keys for other services in a device. EPID keys can be used in devices for services while not allowing users to be tracked through their IoT devices' use of these services. Yet, if required, a known transaction can be used when an application and user choose (or require) that the transaction be unambiguously known (e.g., a financial transaction). EPID can be used for both persistent identity and anonymity.
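The genuine-device check described above, with its three roles and revocation list, can be sketched as a toy model. All names here are hypothetical, and the construction is deliberately simplified: it reveals the member key to the verifier, so it provides none of EPID's anonymity or unlinkability. Real EPID is a group signature scheme in which one compact group public key verifies signatures from millions of distinct private keys; this sketch only illustrates the message flow between issuer, member, and verifier.

```python
# Toy sketch of the EPID roles (Issuer, Member, Verifier) and key revocation.
# NOT real cryptography: the member key is disclosed, so there is no anonymity.
import hashlib
import hmac
import secrets

class Issuer:
    """Issues per-member keys derived from a master secret (toy construction)."""
    def __init__(self) -> None:
        self._master = secrets.token_bytes(32)
        self.revoked: set[bytes] = set()   # revocation list shared with verifiers

    def issue_member_key(self) -> bytes:
        # each member key = fresh seed || MAC(master, seed), unique per member
        seed = secrets.token_bytes(16)
        tag = hmac.new(self._master, seed, hashlib.sha256).digest()
        return seed + tag

    def is_group_key(self, key: bytes) -> bool:
        # in real EPID the verifier checks this with the group PUBLIC key alone
        seed, tag = key[:16], key[16:]
        expected = hmac.new(self._master, seed, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)

def member_sign(member_key: bytes, nonce: bytes) -> bytes:
    """Member signs the verifier's challenge nonce."""
    return hmac.new(member_key, nonce, hashlib.sha256).digest()

def verifier_check(issuer: Issuer, member_key: bytes,
                   nonce: bytes, signature: bytes) -> bool:
    """Accept only an unrevoked group member's valid signature over the nonce."""
    if member_key in issuer.revoked:
        return False
    if not issuer.is_group_key(member_key):
        return False
    expected = hmac.new(member_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

In this sketch, revoking a key simply means adding it to the issuer's list, after which the verifier rejects its signatures; EPID's distinctive feature is that revocation can be done from a signature alone, without ever learning the private key.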
Whereas alternative approaches exist for persistent identity, it is difficult to convert a persistent identity into an anonymous one. EPID can serve both requirements and can enable anonymous identity in a mode of operation that also enables persistence. Thus, EPID is well suited to the broad range of anticipated IoT uses. Security and privacy are foundational to the IoT. Since IoT security and privacy extend beyond Intel processors to other chipmakers' processors in sensors, Intel announced on December 9, 2014 its intent to license EPID broadly to other chip manufacturers for Internet of Things applications. On August 18, 2015, Intel jointly announced the licensing of EPID to Microchip and Atmel, and showed it running on a Microchip microcontroller at the Intel Developer Forum.[9] The Internet of Things has been described as a "network of networks"[10] in which the internal workings of one network may not be appropriate to disclose to a peer or foreign network. For example, a use case involving redundant or spare IoT devices facilitates availability and serviceability objectives, but network operations that load-balance or replace devices need not be reflected to peer or foreign networks that "share" a device across network contexts. The peer expects a particular type of service or data structure but likely does not need to know about device failover, replacement or repair. EPID can be used to share a common public key or certificate that describes and attests the group of similar devices used for redundancy and availability, but does not allow tracking of specific device movements. In many cases, peer networks do not want to track such movements, as that could require maintaining context involving multiple certificates and device lifecycles. Where privacy is also a consideration, the details of device maintenance, failover, load balancing and replacement cannot be inferred by tracking authentication events.
Because of EPID's privacy-preserving properties, it is well suited as an IoT device identity, allowing a device to securely and automatically onboard itself into an IoT service immediately at the device's first power-on. Essentially the device performs a secure boot and then, before anything else, reaches out across the internet to find the IoT service that the new owner has chosen for managing the device. An EPID attestation is integral to this initial communication. As a consequence of the EPID attestation, a secure channel is created between the device and the IoT service. Because of the EPID attestation, the IoT service knows it is talking to the real IoT device. (Using the secure channel created, there is reciprocal attestation, so the IoT device knows it is talking to the IoT service the new owner selected to manage it.)

Unlike PKI, where the key is unchanging from transaction to transaction, an adversary lurking on the network cannot see and correlate traffic by the key used when EPID is employed. Thus the privacy of onboarding is preserved, and adversaries can no longer collect data to create attack maps for later use when future IoT device vulnerabilities are discovered. Moreover, additional keys can be securely provisioned over the air or down the wire, the latest version of software (perhaps specific to the IoT service) can be downloaded, and default logins can be disabled to secure the IoT device without operator intervention.

On October 3, 2017, Intel announced Intel Secure Device Onboard,[11] a software solution to help IoT device manufacturers and IoT cloud services privately, securely, and quickly onboard IoT devices into IoT services. The objective is to onboard "Any Device to Any IoT Platform"[12] for a "superior Onboarding experience and ecosystem enablement ROI". The use cases and protocols from SDO have been submitted to the FIDO Alliance IoT working group.
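The onboarding sequence above (attest, establish a channel, provision keys, disable default logins) can be sketched as a simple state walk. Everything here is illustrative: none of these names or structures come from Intel Secure Device Onboard's actual API, and the attestation check is reduced to a group-membership lookup.

```python
import secrets

def onboard(device: dict, service: dict) -> bool:
    """Hypothetical walk through the steps: attest -> provision -> harden."""
    challenge = service["nonce"] = secrets.token_hex(8)
    # EPID attestation (mocked): the device proves group membership, not identity
    attestation = {"group": device["epid_group"], "nonce": challenge}
    if attestation["group"] not in service["trusted_groups"]:
        return False                                 # attestation rejected
    device["service_key"] = secrets.token_hex(16)    # key provisioned over the channel
    device["default_login_enabled"] = False          # default logins disabled
    return True

device = {"epid_group": "groupA", "default_login_enabled": True}
service = {"trusted_groups": {"groupA"}}
assert onboard(device, service) and not device["default_login_enabled"]
```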
https://en.wikipedia.org/wiki/Enhanced_privacy_ID
Data processing is the collection and manipulation of digital data to produce meaningful information.[1] Data processing is a form of information processing, which is the modification (processing) of information in any manner detectable by an observer.[note 1] Data processing may involve various processes. The United States Census Bureau's history illustrates the evolution of data processing from manual through electronic procedures.

Although widespread use of the term data processing dates only from the 1950s,[2] data processing functions have been performed manually for millennia. For example, bookkeeping involves functions such as posting transactions and producing reports like the balance sheet and the cash flow statement. Completely manual methods were augmented by the application of mechanical or electronic calculators. A person whose job was to perform calculations manually or using a calculator was called a "computer".

The 1890 United States census schedule was the first to gather data by individual rather than household. A number of questions could be answered by making a check in the appropriate box on the form. From 1850 to 1880 the Census Bureau employed "a system of tallying, which, by reason of the increasing number of combinations of classifications required, became increasingly complex. Only a limited number of combinations could be recorded in one tally, so it was necessary to handle the schedules 5 or 6 times, for as many independent tallies."[3] "It took over 7 years to publish the results of the 1880 census"[4] using manual processing methods.

The term automatic data processing was applied to operations performed by means of unit record equipment, such as Herman Hollerith's application of punched card equipment for the 1890 United States census. "Using Hollerith's punchcard equipment, the Census Office was able to complete tabulating most of the 1890 census data in 2 to 3 years, compared with 7 to 8 years for the 1880 census.
It is estimated that using Hollerith's system saved some $5 million in processing costs"[4] in 1890 dollars, even though there were twice as many questions as in 1880.

Computerized data processing, or electronic data processing, represents a later development, with a computer used instead of several independent pieces of equipment. The Census Bureau first made limited use of electronic computers for the 1950 United States census, using a UNIVAC I system,[3] delivered in 1952.

The term data processing has mostly been subsumed by the more general term information technology (IT).[5] The older term "data processing" is suggestive of older technologies. For example, in 1996 the Data Processing Management Association (DPMA) changed its name to the Association of Information Technology Professionals. Nevertheless, the terms are approximately synonymous.

Commercial data processing involves a large volume of input data, relatively few computational operations, and a large volume of output. For example, an insurance company needs to keep records on tens or hundreds of thousands of policies, print and mail bills, and receive and post payments.

In science and engineering, the terms data processing and information systems are considered too broad, and the term data processing is typically used for the initial stage, followed by data analysis in the second stage of the overall data handling. Data analysis uses specialized algorithms and statistical calculations that are less often observed in a typical general business environment. For data analysis, software suites like SPSS or SAS, or their free counterparts such as DAP, gretl, or PSPP, are often used, as they can handle enormous amounts of statistical analysis on very large data sets.[6]

A data processing system is a combination of machines, people, and processes that for a set of inputs produces a defined set of outputs. The inputs and outputs are interpreted as data, facts, information, etc.,
depending on the interpreter's relation to the system. A term commonly used synonymously with data processing system is information system.[7] With regard particularly to electronic data processing, the corresponding concept is referred to as an electronic data processing system.

A very simple example of a data processing system is the process of maintaining a check register. Transactions (checks and deposits) are recorded as they occur, and the transactions are summarized to determine a current balance. Monthly, the data recorded in the register is reconciled with a (hopefully identical) list of transactions processed by the bank.

A more sophisticated record-keeping system might further classify the transactions, for example deposits by source or checks by type, such as charitable contributions. This information might be used to obtain information like the total of all contributions for the year. The important thing about this example is that it is a system, in which all transactions are recorded consistently and the same method of bank reconciliation is used each time.

This is a flowchart of a data processing system combining manual and computerized processing to handle accounts receivable, billing, and general ledger
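The check-register example above can be made concrete in a few lines: record transactions as they occur, summarize them into a balance, and reconcile against the bank's list. This is a minimal sketch; the function names are illustrative, not part of any referenced system.

```python
def record(register: list, kind: str, amount: int) -> None:
    """Record a transaction as it occurs; checks are stored as negative amounts."""
    register.append((kind, amount if kind == "deposit" else -amount))

def balance(register: list) -> int:
    """Summarize all recorded transactions into the current balance."""
    return sum(amt for _, amt in register)

def reconcile(register: list, bank_transactions: list) -> bool:
    """Monthly step: True when the bank's (hopefully identical) list matches."""
    return sorted(register) == sorted(bank_transactions)

register = []
record(register, "deposit", 500)
record(register, "check", 120)
assert balance(register) == 380
assert reconcile(register, [("deposit", 500), ("check", -120)])
```

What makes this a system in the article's sense is the consistency: every transaction goes through the same `record` step, and reconciliation always uses the same comparison.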
https://en.wikipedia.org/wiki/Data_processing
Privacy engineering is an emerging field of engineering which aims to provide methodologies, tools, and techniques to ensure systems provide acceptable levels of privacy. Its focus lies in organizing and assessing methods to identify and tackle privacy concerns within the engineering of information systems.[1]

In the US, an acceptable level of privacy is defined in terms of compliance with the functional and non-functional requirements set out through a privacy policy, which is a contractual artifact displaying the data-controlling entity's compliance with legislation such as Fair Information Practices, health record security regulation, and other privacy laws. In the EU, however, the General Data Protection Regulation (GDPR) sets the requirements that need to be fulfilled. In the rest of the world, the requirements change depending on local implementations of privacy and data protection laws.

The definition of privacy engineering given by the National Institute of Standards and Technology (NIST) is:[2]

Focuses on providing guidance that can be used to decrease privacy risks, and enable organizations to make purposeful decisions about resource allocation and effective implementation of controls in information systems.

While privacy has been developing as a legal domain, privacy engineering has only really come to the fore in recent years as the necessity of implementing said privacy laws in information systems has become a definite requirement for the deployment of such information systems. For example, IPEN outlines its position in this respect as:[3]

One reason for the lack of attention to privacy issues in development is the lack of appropriate tools and best practices. Developers have to deliver quickly in order to minimize time to market and effort, and often will re-use existing components, despite their privacy flaws. There are, unfortunately, few building blocks for privacy-friendly applications and services, and security can often be weak as well.
Privacy engineering involves aspects such as process management, security, ontology, and software engineering.[4] The actual application of these derives from necessary legal compliance, privacy policies, and 'manifestos' such as Privacy-by-Design.[5]

Towards the more implementation-oriented levels, privacy engineering employs privacy-enhancing technologies to enable anonymisation and de-identification of data. Privacy engineering requires suitable security engineering practices to be deployed, and some privacy aspects can be implemented using security techniques. A privacy impact assessment is another tool within this context, and its use does not by itself imply that privacy engineering is being practiced.

One area of concern is the proper definition and application of terms such as personal data, personally identifiable information, anonymisation, and pseudo-anonymisation, which lack sufficiently detailed meanings when applied to software, information systems, and data sets.

Another facet of information system privacy has been the ethical use of such systems, with particular concern for surveillance, big data collection, artificial intelligence, etc. Some members of the privacy and privacy engineering community advocate for the idea of ethics engineering, or reject the possibility of engineering privacy into systems intended for surveillance.

Software engineers often encounter problems when interpreting legal norms into current technology. Legal requirements are by nature neutral to technology and, in case of legal conflict, will be interpreted by a court in the context of the current status of both technology and privacy practice.
As this particular field is still in its infancy and somewhat dominated by the legal aspects, only the primary areas on which privacy engineering is based can be outlined. Despite the lack of a cohesive development of these areas, courses already exist for training in privacy engineering.[8][9][10] The International Workshop on Privacy Engineering, co-located with the IEEE Symposium on Security and Privacy, provides a venue to address "the gap between research and practice in systematizing and evaluating approaches to capture and address privacy issues while engineering information systems".[11][12][13]

A number of approaches to privacy engineering exist. The LINDDUN[14] methodology takes a risk-centric approach, in which personal data flows at risk are identified and then secured with privacy controls.[15][16] Guidance for interpretation of the GDPR has been provided in the GDPR recitals,[17] which have been coded into a decision tool[18] that maps GDPR into software engineering forces[18] with the goal of identifying suitable privacy design patterns.[19][20] A further approach uses eight privacy design strategies - four technical and four administrative - to protect data and to implement data subject rights.[21]

Privacy engineering is particularly concerned with the processing of information over certain aspects or ontologies and their relations[22] to their implementation in software. Further, how these affect the security classification and risk classification, and thus the levels of protection and flow within a system, can then be metricised or calculated.

Privacy is an area dominated by legal aspects but requires implementation using, ostensibly, engineering techniques, disciplines, and skills.
Privacy engineering as an overall discipline takes its basis from considering privacy not just as a legal or engineering aspect, or their unification, but also by utilizing the following areas:[25]

The impetus for technological progress in privacy engineering stems from general privacy laws and various particular legal acts:
https://en.wikipedia.org/wiki/Privacy_Engineering
Privacy-Enhanced Mail (PEM) is a de facto file format for storing and sending cryptographic keys, certificates, and other data, based on a set of 1993 IETF standards defining "privacy-enhanced mail". While the original standards were never broadly adopted and were supplanted by PGP and S/MIME, the textual encoding they defined became very popular. The PEM format was eventually formalized by the IETF in RFC 7468.[1]

Many cryptography standards use ASN.1 to define their data structures, and Distinguished Encoding Rules (DER) to serialize those structures.[2] Because DER produces binary output, it can be challenging to transmit the resulting files through systems, like electronic mail, that only support ASCII. The PEM format solves this problem by encoding the binary data using base64. PEM also defines a one-line header, consisting of -----BEGIN, a label, and -----, and a one-line footer, consisting of -----END, a label, and -----. The label determines the type of message encoded. Common labels include CERTIFICATE, CERTIFICATE REQUEST, PRIVATE KEY, and X509 CRL.

PEM data is commonly stored in files with a ".pem" suffix, a ".cer" or ".crt" suffix (for certificates), or a ".key" suffix (for public or private keys).[3] The label inside a PEM file represents the type of the data more accurately than the file suffix, since many different types of data can be saved in a ".pem" file. In particular, PEM refers to the header and base64 wrapper for a binary format contained within, but does not specify any type or format for the binary data, so a PEM file may contain "almost anything base64 encoded and wrapped with BEGIN and END lines".[4]

The PEM format was first developed in the privacy-enhanced mail series of RFCs: RFC 1421, RFC 1422, RFC 1423, and RFC 1424. These standards assumed prior deployment of a hierarchical public key infrastructure (PKI) with a single root.
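The header/footer and base64 wrapping described above are simple enough to sketch directly. This is a minimal round-trip using only the standard library; the 64-character line wrapping follows the conventional PEM layout (as in RFC 7468), and the three-byte sample blob merely stands in for real DER data.

```python
import base64, textwrap

def pem_encode(der: bytes, label: str) -> str:
    """Wrap binary (e.g. DER) data in PEM -----BEGIN/END----- lines."""
    b64 = base64.b64encode(der).decode("ascii")
    body = "\n".join(textwrap.wrap(b64, 64))  # base64 conventionally wrapped at 64 chars
    return f"-----BEGIN {label}-----\n{body}\n-----END {label}-----\n"

def pem_decode(pem: str) -> bytes:
    """Strip the header/footer lines and decode the base64 body."""
    lines = [l for l in pem.splitlines() if l and not l.startswith("-----")]
    return base64.b64decode("".join(lines))

der = bytes(range(48))                 # any binary blob stands in for DER here
pem = pem_encode(der, "CERTIFICATE")
assert pem.startswith("-----BEGIN CERTIFICATE-----")
assert pem_decode(pem) == der          # round-trips losslessly
```

Note how little PEM itself specifies: the decoder above never inspects the label or the bytes, matching the observation that a PEM file may contain "almost anything base64 encoded and wrapped with BEGIN and END lines".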
Such a PKI was never deployed, due to operational cost and legal liability concerns.[citation needed] These standards were eventually obsoleted by PGP and S/MIME, competing e-mail encryption standards.[citation needed]

The initiative to develop Privacy Enhanced Mail began in 1985 on behalf of the PSRG (Privacy and Security Research Group),[5] a group within the Internet Research Task Force. The IRTF is a subsidiary of the Internet Architecture Board (IAB), and its efforts have resulted in the Requests for Comments (RFCs), which are suggested Internet guidelines.[6]
https://en.wikipedia.org/wiki/Privacy-enhanced_Electronic_Mail