In metadata, the term data element is an atomic unit of data that has precise meaning or precise semantics. A data element has:
Data element usage can be discovered by inspection of software applications or application data files through a process of manual or automated Application Discovery and Understanding. Once data elements are discovered, they can be registered in a metadata registry.
In telecommunications, the term data element has the following components:
In the areas of databases and data systems more generally, a data element is a concept forming part of a data model. As an element of data representation, a collection of data elements forms a data structure.[1]
In practice, data elements (fields, columns, attributes, etc.) are sometimes "overloaded", meaning a given data element will have multiple potential meanings. While a known bad practice, overloading is nevertheless a very real factor in, and barrier to, understanding what a system is doing.
Source: https://en.wikipedia.org/wiki/Data_element
In the mathematical field of Fourier analysis, the conjugate Fourier series arises by realizing the Fourier series formally as the boundary values of the real part of a holomorphic function on the unit disc. The imaginary part of that function then defines the conjugate series. Zygmund (1968) studied the delicate questions of convergence of this series, and its relationship with the Hilbert transform.
In detail, consider a trigonometric series of the form
f(θ) ~ (1/2)a_0 + Σ_{n≥1} (a_n cos nθ + b_n sin nθ),
in which the coefficients a_n and b_n are real numbers. This series is the real part of the power series
F(z) = (1/2)a_0 + Σ_{n≥1} (a_n − i b_n) z^n
along the unit circle with z = e^{iθ}. The imaginary part of F(z) is called the conjugate series of f, and is denoted
f̃(θ) ~ Σ_{n≥1} (a_n sin nθ − b_n cos nθ).
Source: https://en.wikipedia.org/wiki/Conjugate_Fourier_series
In traditional logic, a contradiction occurs when a proposition conflicts either with itself or with established fact. It is often used as a tool to detect disingenuous beliefs and bias. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "It is impossible that the same thing can at the same time both belong and not belong to the same object and in the same respect."[1]
In modern formal logic and type theory, the term is mainly used instead for a single proposition, often denoted by the falsum symbol ⊥; a proposition is a contradiction if false can be derived from it, using the rules of the logic. It is a proposition that is unconditionally false (i.e., a self-contradictory proposition).[2][3] This can be generalized to a collection of propositions, which is then said to "contain" a contradiction.
By creation of a paradox, Plato's Euthydemus dialogue demonstrates the need for the notion of contradiction. In the ensuing dialogue, Dionysodorus denies the existence of "contradiction", even as Socrates is contradicting him:
... I in my astonishment said: What do you mean Dionysodorus? I have often heard, and have been amazed to hear, this thesis of yours, which is maintained and employed by the disciples of Protagoras and others before them, and which to me appears to be quite wonderful, and suicidal as well as destructive, and I think that I am most likely to hear the truth about it from you. The dictum is that there is no such thing as a falsehood; a man must either say what is true or say nothing. Is not that your position?
Indeed, Dionysodorus agrees that "there is no such thing as false opinion ... there is no such thing as ignorance", and demands of Socrates to "Refute me." Socrates responds "But how can I refute you, if, as you say, to tell a falsehood is impossible?".[4]
In classical logic, particularly in propositional and first-order logic, a proposition φ is a contradiction if and only if φ ⊢ ⊥. Since for contradictory φ it is true that ⊢ φ → ψ for all ψ (because ⊥ ⊢ ψ), one may prove any proposition from a set of axioms which contains contradictions. This is called the "principle of explosion", or "ex falso quodlibet" ("from falsity, anything follows").[5]
In a complete logic, a formula is contradictory if and only if it is unsatisfiable.
For a set of consistent premises Σ and a proposition φ, it is true in classical logic that Σ ⊢ φ (i.e., Σ proves φ) if and only if Σ ∪ {¬φ} ⊢ ⊥ (i.e., Σ together with ¬φ leads to a contradiction). Therefore, a proof that Σ ∪ {¬φ} ⊢ ⊥ also proves that φ is true under the premises Σ. The use of this fact forms the basis of a proof technique called proof by contradiction, which mathematicians use extensively to establish the validity of a wide range of theorems. This applies only in a logic where the law of excluded middle A ∨ ¬A is accepted as an axiom.
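A standard worked example (an illustration, not drawn from this article) shows the schema Σ ∪ {¬φ} ⊢ ⊥ in action, with φ the statement that √2 is irrational. Assume ¬φ: √2 = p/q for coprime integers p and q. Then p² = 2q², so p² is even and hence p is even; write p = 2r. Substituting gives 4r² = 2q², i.e. q² = 2r², so q is even as well. Now p and q are both even yet coprime, which is ⊥. Hence φ holds: √2 is irrational.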
Using minimal logic, a logic with similar axioms to classical logic but without ex falso quodlibet and proof by contradiction, we can investigate the axiomatic strength and properties of various rules that treat contradiction by considering theorems of classical logic that are not theorems of minimal logic.[6] Each of these extensions leads to an intermediate logic:
In mathematics, the symbol used to represent a contradiction within a proof varies.[7] Some symbols that may be used to represent a contradiction include ↯, Opq, ⇒⇐, ⊥, ↮, and ※; in any symbolism, a contradiction may be substituted for the truth value "false", as symbolized, for instance, by "0" (as is common in Boolean algebra). It is not uncommon to see Q.E.D., or some of its variants, immediately after a contradiction symbol. In fact, this often occurs in a proof by contradiction to indicate that the original assumption was proved false, and hence that its negation must be true.
In general, a consistency proof requires the following two things:
But by whatever method one goes about it, all consistency proofs would seem to necessitate the primitive notion of contradiction. Moreover, it seems as if this notion would simultaneously have to be "outside" the formal system in the definition of tautology.
When Emil Post, in his 1921 "Introduction to a General Theory of Elementary Propositions", extended his proof of the consistency of the propositional calculus (i.e. the logic) beyond that of Principia Mathematica (PM), he observed that with respect to a generalized set of postulates (i.e. axioms), he would no longer be able to automatically invoke the notion of "contradiction"; such a notion might not be contained in the postulates:
The prime requisite of a set of postulates is that it be consistent. Since the ordinary notion of consistency involves that of contradiction, which again involves negation, and since this function does not appear in general as a primitive in [the generalized set of postulates] a new definition must be given.[8]
Post's solution to the problem is described in the demonstration "An Example of a Successful Absolute Proof of Consistency", offered by Ernest Nagel and James R. Newman in their 1958 Gödel's Proof. They too observed a problem with respect to the notion of "contradiction" with its usual "truth values" of "truth" and "falsity". They observed that:
The property of being a tautology has been defined in notions of truth and falsity. Yet these notions obviously involve a reference to something outside the formula calculus. Therefore, the procedure mentioned in the text in effect offers an interpretation of the calculus, by supplying a model for the system. This being so, the authors have not done what they promised, namely, "to define a property of formulas in terms of purely structural features of the formulas themselves". [Indeed] ... proofs of consistency which are based on models, and which argue from the truth of axioms to their consistency, merely shift the problem.[9]
Given some "primitive formulas" such as PM's primitives S1 V S2 [inclusive OR] and ~S (negation), one is forced to define the axioms in terms of these primitive notions. In a thorough manner, Post demonstrates in PM, and defines (as do Nagel and Newman, see below), that the property of tautologous – as yet to be defined – is "inherited": if one begins with a set of tautologous axioms (postulates) and a deduction system that contains substitution and modus ponens, then a consistent system will yield only tautologous formulas.
On the topic of the definition of tautologous, Nagel and Newman create two mutually exclusive and exhaustive classes K1 and K2, into which fall (the outcomes of) the axioms when their variables (e.g. S1 and S2) are assigned from these classes. This also applies to the primitive formulas. For example: "A formula having the form S1 V S2 is placed into class K2, if both S1 and S2 are in K2; otherwise it is placed in K1", and "A formula having the form ~S is placed in K2, if S is in K1; otherwise it is placed in K1".[10]
Hence Nagel and Newman can now define the notion of tautologous: "a formula is a tautology if and only if it falls in the class K1, no matter in which of the two classes its elements are placed".[11] This way, the property of "being tautologous" is described without reference to a model or an interpretation.
For example, given a formula such as ~S1 V S2 and an assignment of K1 to S1 and K2 to S2, one can evaluate the formula and place its outcome in one or the other of the classes. The assignment of K1 to S1 places ~S1 in K2, and now we can see that our assignment causes the formula to fall into class K2. Thus by definition our formula is not a tautology.
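The classification just described is mechanical, and a short Python sketch makes it concrete (the tuple encoding, helper names, and the closing examples are illustrative choices, not anything given by Post or by Nagel and Newman):

from itertools import product

# Formulas are nested tuples: a variable name such as "S1", ("not", f), or ("or", f, g).

def classify(formula, assignment):
    """Return the class ("K1" or "K2") of a formula under an assignment of variables to classes."""
    if isinstance(formula, str):                      # a variable
        return assignment[formula]
    if formula[0] == "not":                           # ~S is in K2 iff S is in K1
        return "K2" if classify(formula[1], assignment) == "K1" else "K1"
    if formula[0] == "or":                            # S1 V S2 is in K2 iff both parts are in K2
        parts = [classify(f, assignment) for f in formula[1:]]
        return "K2" if all(p == "K2" for p in parts) else "K1"
    raise ValueError(f"unknown connective: {formula[0]}")

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(f) for f in formula[1:]))

def is_tautology(formula):
    """Tautologous: the formula falls in K1 no matter how its variables are assigned."""
    names = sorted(variables(formula))
    return all(
        classify(formula, dict(zip(names, combo))) == "K1"
        for combo in product(("K1", "K2"), repeat=len(names))
    )

# The worked example above: ~S1 V S2 with S1 -> K1 and S2 -> K2 falls into K2.
example = ("or", ("not", "S1"), "S2")
print(classify(example, {"S1": "K1", "S2": "K2"}))   # K2
print(is_tautology(example))                          # False: not a tautology
print(is_tautology(("or", "S1", ("not", "S1"))))      # True: S V ~S always falls in K1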
Post observed that, if the system were inconsistent, a deduction in it (that is, the last formula in a sequence of formulas derived from the tautologies) could ultimately yield S itself. As an assignment to variable S can come from either class K1 or K2, the deduction violates the inheritance characteristic of tautology (i.e., the derivation must yield an evaluation of a formula that will fall into class K1). From this, Post was able to derive the following definition of inconsistency, without the use of the notion of contradiction:
Definition. A system will be said to be inconsistent if it yields the assertion of the unmodified variable p [S in the Newman and Nagel examples].
In other words, the notion of "contradiction" can be dispensed with when constructing a proof of consistency; what replaces it is the notion of "mutually exclusive and exhaustive" classes. An axiomatic system need not include the notion of "contradiction".[12]: 177
Adherents of the epistemological theory of coherentism typically claim that, as a necessary condition of the justification of a belief, that belief must form a part of a logically non-contradictory system of beliefs. Some dialetheists, including Graham Priest, have argued that coherence may not require consistency.[13]
A pragmatic contradiction occurs when the very statement of the argument contradicts the claims it purports to make. An inconsistency arises, in this case, because the act of utterance, rather than the content of what is said, undermines its conclusion.[14]
In dialectical materialism, contradiction, as derived from Hegelianism, usually refers to an opposition inherently existing within one realm, one unified force or object. This contradiction, as opposed to metaphysical thinking, is not an objectively impossible thing, because these contradicting forces exist in objective reality, not cancelling each other out, but actually defining each other's existence. According to Marxist theory, such a contradiction can be found, for example, in the fact that:
Hegelian and Marxist theories stipulate that the dialectical nature of history will lead to the sublation, or synthesis, of its contradictions. Marx therefore postulated that history would logically make capitalism evolve into a socialist society where the means of production would equally serve the working and producing class of society, thus resolving the prior contradiction between (a) and (b).[15]
Colloquial usage can label actions or statements as contradicting each other when due (or perceived as due) to presuppositions which are contradictory in the logical sense.
Proof by contradiction is used in mathematics to construct proofs.
Source: https://en.wikipedia.org/wiki/Contradiction
In mathematics, a quotient category is a category obtained from another category by identifying sets of morphisms. Formally, it is a quotient object in the category of (locally small) categories, analogous to a quotient group or quotient space, but in the categorical setting.
Let C be a category. A congruence relation R on C is given by: for each pair of objects X, Y in C, an equivalence relation R_{X,Y} on Hom(X, Y), such that the equivalence relations respect composition of morphisms. That is, if
f_1, f_2 : X → Y
are related in Hom(X, Y) and
g_1, g_2 : Y → Z
are related in Hom(Y, Z), then g_1 f_1 and g_2 f_2 are related in Hom(X, Z).
Given a congruence relation R on C we can define the quotient category C/R as the category whose objects are those of C and whose morphisms are equivalence classes of morphisms in C. That is,
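Hom_{C/R}(X, Y) = Hom_C(X, Y) / R_{X,Y}, with composition of equivalence classes given by [g] ∘ [f] = [g ∘ f] (a standard formulation, writing [f] for the class of a morphism f).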
Composition of morphisms in C/R is well-defined since R is a congruence relation.
There is a natural quotient functor from C to C/R which sends each morphism to its equivalence class. This functor is bijective on objects and surjective on Hom-sets (i.e. it is a full functor).
Every functor F : C → D determines a congruence on C by saying f ~ g iff F(f) = F(g). The functor F then factors through the quotient functor C → C/~ in a unique manner. This may be regarded as the "first isomorphism theorem" for categories.
If C is an additive category and we require the congruence relation ~ on C to be additive (i.e. if f_1, f_2, g_1 and g_2 are morphisms from X to Y with f_1 ~ f_2 and g_1 ~ g_2, then f_1 + g_1 ~ f_2 + g_2), then the quotient category C/~ will also be additive, and the quotient functor C → C/~ will be an additive functor.
The concept of an additive congruence relation is equivalent to the concept of a two-sided ideal of morphisms: for any two objects X and Y we are given an additive subgroup I(X, Y) of Hom_C(X, Y) such that for all f ∈ I(X, Y), g ∈ Hom_C(Y, Z) and h ∈ Hom_C(W, X), we have gf ∈ I(X, Z) and fh ∈ I(W, Y). Two morphisms in Hom_C(X, Y) are congruent iff their difference is in I(X, Y).
Every unital ring may be viewed as an additive category with a single object, and the quotient of additive categories defined above coincides in this case with the notion of a quotient ring modulo a two-sided ideal.
The localization of a category introduces new morphisms to turn several of the original category's morphisms into isomorphisms. This tends to increase the number of morphisms between objects, rather than decrease it as in the case of quotient categories. But in both constructions it often happens that two objects become isomorphic that weren't isomorphic in the original category.
The Serre quotient of an abelian category by a Serre subcategory is a new abelian category which is similar to a quotient category but also in many cases has the character of a localization of the category.
Source: https://en.wikipedia.org/wiki/Quotient_category
"Talking past each other" is an English phrase describing the situation where two or more people talk about different subjects, while believing that they are talking about the same thing.[1]
David Horton writes that when characters in fiction talk past each other, the effect is to expose "an unbridgeable gulf between their respective perceptions and intentions. The result is an exchange, but never an interchange, of words in fragmented and cramped utterances whose subtext often reveals more than their surface meaning."[2]
The phrase is used in widely varying contexts. For example, in 1917, Albert Einstein and David Hilbert had dawn-to-dusk discussions of physics; and they continued their debate in writing, although Felix Klein records that they "talked past each other, as happens not infrequently between simultaneously producing mathematicians."[3]
Source: https://en.wikipedia.org/wiki/Talking_past_each_other
A stardate is a fictional system of time measurement developed for the television and film series Star Trek. In the series, use of this date system is commonly heard at the beginning of a voice-over log entry, such as "Captain's log, stardate 41153.7. Our destination is planet Deneb IV …". While the original method was inspired by the Modified Julian date[1][2][3] system currently used by astronomers, the writers and producers have selected numbers using different methods over the years, some more arbitrary than others. This makes it impossible to convert all stardates into equivalent calendar dates, especially since stardates were originally intended to avoid specifying exactly when Star Trek takes place.[4]
The original 1967 Star Trek Guide (April 17, 1967, p. 25) instructed writers for the original Star Trek TV series on how to select stardates for their scripts. Writers could pick any combination of four numbers plus a decimal point, and aim for consistency within a single script, but not necessarily between different scripts. This was to "avoid continually mentioning Star Trek's century" and avoid "arguments about whether this or that would have developed by then".[5] Though the guide sets the series "about two hundred years from now", the few references within the show itself were contradictory, and later productions and reference materials eventually placed the series between the years 2265 and 2269. The second pilot begins on stardate 1312.4 and the last-produced episode on stardate 5928.5.[6] Though the dating system was revised for Star Trek: The Next Generation, the pilot of Star Trek: Discovery follows the original series' dating system, starting on stardate 1207.3, which is stated precisely to be Sunday, May 11, 2256.[7]
Subsequent Star Trek series followed a new numerical convention. Star Trek: The Next Generation (TNG) revised the stardate system in the 1987 Star Trek: The Next Generation Writer's/Director's Guide, to five digits and one decimal place. According to the guide, the first digit "4" should represent the 24th century, with the second digit representing the television season. The remaining digits can progress unevenly, with the decimal representing the time as fractional days. Stardates of Star Trek: Deep Space Nine began with 46379.1, corresponding to the sixth season of TNG, which was also set in the year 2369. Star Trek: Voyager began with stardate 48315.6 (2371), one season after TNG had finished its seventh and final season. As in TNG, the second digit would increase by one every season, while the initial two digits eventually rolled over from 49 to 50, despite the year 2373 still being in the 24th century. Star Trek: Nemesis was set around stardate 56844.9. Star Trek: Discovery traveled to the year 3188, giving a stardate of 865211.3, corresponding to that year in this system of stardates.
On March 9, 2023, Star Trek: Picard gave a stardate of 78183.10, indicating continuity with TNG. In this reading, each stardate increment represents one milliyear, counted from 2323, so roughly 78,000 units correspond to 78 years later, in 2401; the decimal represents a fractional day. Stardates are thus a composition of two types of decimal time. (Anchored to the twenty-first century instead, the same count of 78 years would run from 1945.)
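A minimal sketch of this convention in Python (an interpretation of the description above, not an official formula):

# Assumes the TNG-era convention described above: 1000 stardate units per
# year, counted from the year 2323; the fractional-day decimal is ignored.
def tng_stardate_to_year(stardate: float) -> float:
    return 2323 + stardate / 1000.0

print(tng_stardate_to_year(78183.10))  # ~2401.2, matching the year given for Picard
print(tng_stardate_to_year(41153.7))   # ~2364.2, the log entry quoted at the top of the article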
Stardates usually are expressed with a single decimal digit, but sometimes with more than one. For instance, The Next Generation episode "The Child" displays the stardate 42073.1435. According to The Star Trek Guide, the official writers' guide for the original series:
Likewise, page 32 of the 1988 Star Trek: The Next Generation Writer's/Director's Guide for season two states:
This was demonstrated by the ship's chronometer in the TOS-Remastered episode "The Naked Time", and by Captain Varley's video logs in the TNG episode "Contagion". The latter displays several stardates with two decimal digits next to corresponding times.
Additional Star Trek media have generated their own numbering systems. The 2009 MMORPG Star Trek Online began on stardate 86088.58, in the in-game year 2409, counting 1000 stardates per year from May 25, 1922.[8] Writer Roberto Orci revised the system for the 2009 film Star Trek so that the first four digits correspond to the year, while the remainder was intended to stand for the day of the year, in effect representing an ordinal date.[9][10][11] In the first installment of the movie trilogy, Spock makes his log of the destruction of Vulcan on stardate 2258.42, or February 11, 2258. Star Trek Into Darkness begins on stardate 2259.55, or February 24, 2259.[12] Star Trek Beyond begins on stardate 2263.02, or January 2, 2263. In The Big Bang Theory episode "The Adhesive Duck Deficiency", Sheldon Cooper gives the stardate 63345.3, corresponding with the date of the Leonid meteor shower that year, November 17, 2009.[13]
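Because the 2009-film convention is effectively year.day-of-year, it is one of the few stardate schemes that can be converted mechanically. A short Python sketch (the function name and parsing choices are illustrative assumptions):

from datetime import date, timedelta

def kelvin_stardate_to_date(stardate: str) -> date:
    # Interprets the digits before the point as the year and the digits after
    # it as the ordinal day of that year, per the scheme described above.
    year_part, day_part = stardate.split(".")
    return date(int(year_part), 1, 1) + timedelta(days=int(day_part) - 1)

print(kelvin_stardate_to_date("2258.42"))  # 2258-02-11, as quoted above
print(kelvin_stardate_to_date("2259.55"))  # 2259-02-24
print(kelvin_stardate_to_date("2263.02"))  # 2263-01-02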
Source: https://en.wikipedia.org/wiki/Stardate
Game Description Language (GDL) is a specialized logic programming language designed by Michael Genesereth. The goal of GDL is to allow the development of AI agents capable of general game playing. It is part of the General Game Playing Project at Stanford University.
GDL expresses game rules and dynamics in a declarative, logic-based form that programs can process.
In practice, GDL is used in General Game Playing competitions and research to specify the rules of the games that AI agents are expected to play. AI developers and researchers write algorithms that read a game's rule description and then reason about and play that game. Because the same player can ingest any game written in the language, GDL supports the development of adaptable agents capable of competing across diverse games rather than in a single, hand-coded one.
Quoted in an article in New Scientist, Genesereth pointed out that although Deep Blue can play chess at a grandmaster level, it is incapable of playing checkers at all because it is a specialized game player.[1] Both chess and checkers can be described in GDL. This enables general game players to be built that can play both of these games and any other game that can be described using GDL.
GDL is a variant of Datalog, and the syntax is largely the same. It is usually given in prefix notation. Variables begin with "?".[2]
The following is the list of keywords in GDL, along with brief descriptions of their functions:
A game description in GDL provides complete rules for each of the following elements of a game.
Facts that define the roles in a game. The following example is from a GDL description of the two-player game Tic-tac-toe (a condensed sketch covering each rule type appears after this list):
Rules that entail all facts about the initial game state. An example is:
Rules that describe each move by the conditions on the current position under which it can be taken by a player. An example is:
Rules that describe all facts about the next state relative to the current state and the moves taken by the players. An example is:
Rules that describe the conditions under which the current state is a terminal one. An example is:
The goal values for each player in a terminal state. An example is:
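As an illustrative, condensed sketch (not a complete or official game description), rules of each of these kinds for Tic-tac-toe look roughly as follows in GDL's prefix notation:

; roles
(role xplayer)
(role oplayer)

; initial state (one cell shown): cells start blank, xplayer has control
(init (cell 1 1 b))
(init (control xplayer))

; legal moves: the player in control may mark any blank cell
(<= (legal ?player (mark ?x ?y))
    (true (cell ?x ?y b))
    (true (control ?player)))

; state update: a marked cell takes the mover's symbol
(<= (next (cell ?x ?y x))
    (does xplayer (mark ?x ?y)))

; termination and goals; (line ?m) is an auxiliary predicate whose defining rules are omitted here
(<= terminal (line x))
(<= (goal xplayer 100) (line x))
(<= (goal oplayer 0) (line x))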
With GDL, one can describe finite games with an arbitrary number of players. However, GDL cannot describe games that contain an element of chance (for example, rolling dice) or games where players have incomplete information about the current state of the game (for example, in many card games the opponents' cards are not visible). GDL-II, the Game Description Language for Incomplete Information Games, extends GDL by two keywords that allow for the description of elements of chance and incomplete information:[3]
The following is an example from a GDL-II description of the card game Texas hold 'em:
Michael Thielscher also created a further extension, GDL-III, a general game description language with imperfect information and introspection, that supports the specification of epistemic games: ones characterised by rules that depend on the knowledge of players.[4]
In classical game theory, games can be formalised in extensive and normal forms. For cooperative game theory, games are represented using characteristic functions. Some subclasses of games allow special representations of smaller size, also known as succinct games.
Newer formalisms and languages for representing particular subclasses of games, or representations adjusted to the needs of interdisciplinary research, are summarized in the following table.[5] Some of these alternative representations also encode time-related aspects:
A 2016 paper "describes a multilevel algorithm compiling a general game description in GDL into an optimized reasoner in a low level language".[19]
A 2017 paper uses GDL to model the process of mediating a resolution to a dispute between two parties and presents an algorithm that uses available information efficiently to do so.[20]
Source: https://en.wikipedia.org/wiki/Game_Description_Language
Harrison Colyar White (March 21, 1930 – May 18, 2024) was an American sociologist who was the Giddings Professor of Sociology at Columbia University. White played an influential role in the "Harvard Revolution" in social networks[1] and the New York School of relational sociology.[2] He is credited with the development of a number of mathematical models of social structure including vacancy chains and blockmodels. He has been a leader of a revolution in sociology that is still in process, using models of social structure that are based on patterns of relations instead of the attributes and attitudes of individuals.[3]
Among social network researchers, White is widely respected. For instance, at the 1997 International Network of Social Network Analysis conference, the organizer held a special "White Tie" event dedicated to White.[4] Social network researcher Emmanuel Lazega refers to him as both "Copernicus and Galileo" because he invented both the vision and the tools.
The most comprehensive documentation of his theories can be found in the book Identity and Control, first published in 1992. A major rewrite of the book appeared in June 2008. In 2011, White received the W.E.B. DuBois Career of Distinguished Scholarship Award from the American Sociological Association, which honors "scholars who have shown outstanding commitment to the profession of sociology and whose cumulative work has contributed in important ways to the advancement of the discipline."[5] Before his retirement to live in Tucson, Arizona, White was interested in sociolinguistics and business strategy as well as sociology.
White was born on March 21, 1930, in Washington, D.C. He had three siblings and his father was a doctor in the US Navy. Although moving around to different Naval bases throughout his adolescence, he considered himself Southern, and Nashville, TN to be his home. At the age of 15, he entered the Massachusetts Institute of Technology (MIT), receiving his undergraduate degree at 20 years of age; five years later, in 1955, he received a doctorate in theoretical physics, also from MIT, with John C. Slater as his advisor.[6] His dissertation was titled A quantum-mechanical calculation of inter-atomic force constants in copper.[7] This was published in the Physical Review as "Atomic Force Constants of Copper from Feynman's Theorem" (1958).[8] While at MIT he also took a course with the political scientist Karl Deutsch, whom White credits with encouraging him to move toward the social sciences.[9]
After receiving his PhD in theoretical physics, he received a fellowship from the Ford Foundation to begin his second doctorate, in sociology, at Princeton University. His dissertation advisor was Marion J. Levy. White also worked with Wilbert Moore, Fred Stephan, and Frank W. Notestein while at Princeton.[10] His cohort was very small, with only four or five other graduate students, including David Matza and Stanley Udy.
At the same time, he took up a position as an operations analyst at the Operations Research Office, Johns Hopkins University, from 1955 to 1956.[11] During this period, he worked with Lee S. Christie on Queuing with Preemptive Priorities or with Breakdown, which was published in 1958.[12] Christie had previously worked alongside mathematical psychologist R. Duncan Luce in the Small Group Laboratory at MIT while White was completing his first PhD in physics, also at MIT.
While continuing his studies at Princeton, White also spent a year as a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford University, California, where he met Harold Guetzkow. Guetzkow was a faculty member at the Carnegie Institute of Technology, known for his application of simulations to social behavior and a long-time collaborator with many other pioneers in organization studies, including Herbert A. Simon, James March, and Richard Cyert.[13] Upon meeting Simon through his mutual acquaintance with Guetzkow, White received an invitation to move from California to Pittsburgh to work as an assistant professor of Industrial Administration and Sociology at the Graduate School of Industrial Administration, Carnegie Institute of Technology (later Carnegie-Mellon University), where he stayed for a couple of years, between 1957 and 1959. In an interview, he claimed to have fought with the dean, Leyland Bock, to have the word "sociology" included in his title.
It was also during his time at the Stanford Center for Advanced Study that White met his first wife, Cynthia A. Johnson, who was a graduate of Radcliffe College, where she had majored in art history. The couple's joint work on the French Impressionists, Canvases and Careers (1965) and "Institutional Changes in the French Painting World" (1964), originally grew out of a seminar on art in 1957 at the Center for Advanced Study led by Robert Wilson. White originally hoped to use sociometry to map the social structure of French art to predict shifts, but he had an epiphany that it was not social structure but institutional structure which explained the shift.
It was also during these years that White, still a graduate student in sociology, wrote and published his first social scientific work, "Sleep: A Sociological Interpretation", in Acta Sociologica in 1960, together with Vilhelm Aubert, a Norwegian sociologist. This work was a phenomenological examination of sleep which attempted to "demonstrate that sleep was more than a straightforward biological activity... [but rather also] a social event".[14]
For his dissertation, White carried out empirical research on a research and development department in a manufacturing firm, consisting of interviews and a 110-item questionnaire with managers. He specifically used sociometric questions, which he used to model the "social structure" of relationships between various departments and teams in the organization. In May 1960 he submitted his doctoral dissertation, titled Research and Development as a Pattern in Industrial Management: A Case Study in Institutionalisation and Uncertainty,[15] earning a PhD in sociology from Princeton University. His first publication based on his dissertation was "Management conflict and sociometric structure" in the American Journal of Sociology.[16]
In 1959 James Coleman left the University of Chicago to found a new department of social relations at Johns Hopkins University; this left a vacancy open for a mathematical sociologist like White. He moved to Chicago to start working as an associate professor in the Department of Sociology. At that time, highly influential sociologists, such as Peter Blau, Mayer Zald, Elihu Katz, Everett Hughes, and Erving Goffman, were there. As Princeton only required one year in residence, and White took the opportunity to take positions at Johns Hopkins, Stanford, and Carnegie while still working on his dissertation, it was Chicago that White credits as being his "real socialization in a way, into sociology."[17] It was here that White advised his first two graduate students, Joel H. Levine and Morris Friedell, both of whom went on to make contributions to social network analysis in sociology. While at the Center for Advanced Study, White began learning anthropology and became fascinated with kinship. During his stay at the University of Chicago White was able to finish An Anatomy of Kinship, published in 1963 within the Prentice-Hall series in Mathematical Analysis of Social Behavior, with James Coleman and James March as chief editors. The book received significant attention from many mathematical sociologists of the time, and contributed greatly to establishing White as a model builder.[18]
In 1963, White left Chicago to be an associate professor of sociology at the Harvard Department of Social Relations, the same department founded by Talcott Parsons and still heavily influenced by the structural-functionalist paradigm of Parsons. As White had previously only taught graduate courses at Carnegie and Chicago, his first undergraduate course was An Introduction to Social Relations (see Influence) at Harvard, which became infamous among network analysts. As he "thought existing textbooks were grotesquely unscientific,"[19] the syllabus of the class was noted for including few readings by sociologists, and comparatively more readings by anthropologists, social psychologists, and historians.[20] White was also a vocal critic of what he called the "attributes and attitudes" approach of Parsonsian sociology, and came to be the leader of what has been variously known as the "Harvard Revolution," the "Harvard breakthrough," or the "Harvard renaissance" in social networks. He worked closely with small-group researchers George C. Homans and Robert F. Bales, whose work was largely compatible with his prior work in organizational research and his efforts to formalize network analysis. Overlapping White's early years, Charles Tilly, a graduate of the Harvard Department of Social Relations, was a visiting professor at Harvard and attended some of White's lectures; network thinking heavily influenced Tilly's work.
White remained at Harvard until 1986. Following a divorce from his wife, Cynthia (with whom he published several works), and wanting a change, he accepted the position of department chair offered by the sociology department at the University of Arizona.[21] He remained at Arizona for two years.
In 1988, White joined Columbia University as a professor of sociology and was the director of the Paul F. Lazarsfeld Center for the Social Sciences. This was at the early stages of what is perhaps the second major revolution in network analysis, the so-called "New York School of relational sociology." This invisible college included Columbia as well as the New School for Social Research and New York University. While the Harvard Revolution involved substantial advances in methods for measuring and modeling social structure, the New York School involved the merging of cultural sociology with network-structural sociology, two traditions which had previously been antagonistic. White stood at the heart of this, and his magnum opus Identity and Control was a testament to this new relational sociology.
In 1992, White received the named position of Giddings Professor of Sociology and was the chair of the department of sociology for several years until his retirement. He resided in Tucson, Arizona.
A good summary of White's sociological contributions is provided by his former student and collaborator, Ronald Breiger:
White addresses problems of social structure that cut across the range of the social sciences. Most notably, he has contributed (1) theories of role structures encompassing classificatory kinship systems of native Australian peoples and institutions of the contemporary West; (2) models based on equivalences of actors across networks of multiple types of social relation; (3) theorization of social mobility in systems of organizations; (4) a structural theory of social action that emphasizes control, agency, narrative, and identity; (5) a theory of artistic production; (6) a theory of economic production markets leading to the elaboration of a network ecology for market identities and new ways of accounting for profits, prices, and market shares; and (7) a theory of language use that emphasizes switching between social, cultural, and idiomatic domains within networks of discourse. His most explicit theoretical statement is Identity and Control: A Structural Theory of Social Action (1992), although several of the major components of his theory of the mutual shaping of networks, institutions, and agency are also readily apparent in Careers and Creativity: Social Forces in the Arts (1993), written for a less-specialized audience.[22]
More generally, White and his students sparked interest in looking at society as networks rather than as aggregates of individuals.[23]
This view is still controversial. In sociology and organizational science, it is difficult to measure cause and effect in a systematic way. Because of that, it is common to use sampling techniques to discover some sort of average in a population.
For instance, we are told almost daily how the average European or American feels about a topic. This allows social scientists and pundits to make inferences about cause and say "people are angry at the current administration because the economy is doing poorly." This kind of generalization certainly makes sense, but it does not tell us anything about an individual. This leads to the idea of an idealized individual, something that is the bedrock of modern economics.[24] Most modern economic theories look at social formations, like organizations, as products of individuals all acting in their own best interest.[25]
While this has proved to be useful in some cases, it does not account well for the knowledge that is required for the structures to sustain themselves. White and his students (and his students' students) have been developing models that incorporate the patterns of relationships into descriptions of social formations. This line of work includes economic sociology, network sociology and structuralist sociology.
White's most comprehensive work is Identity and Control. The first edition came out in 1992 and the second edition appeared in June 2008.
In this book, White discusses the social world, including “persons,” as emerging from patterns of relationships. He argues that it is a default human heuristic to organize the world in terms of attributes, but that this can often be a mistake. For instance, there are countless books on leadership that look for the attributes that make a good leader. However, no one is a leader without followers; the term describes a relationship one has with others. Without the relationships, there would be no leader. Likewise, an organization can be viewed as patterns of relationships. It would not “exist” if people did not honor and maintain specific relationships. White avoids giving attributes to things that emerge from patterns of relationships, something that goes against our natural instincts and requires some thought to process.[26]
Identity and Control has seven chapters. The first six are about social formations that control us and how our own judgment organizes our experience in ways that limit our actions. The final chapter is about "getting action" and how change is possible. One of the ways is by "proxy," empowering others.
Harrison White also developed a perspective on market structure and competition in his 2002 book, Markets from Networks, based on the idea that markets are embedded in social networks. His approach is related to economic concepts such as uncertainty (as defined by Frank Knight), monopolistic competition (Edward Chamberlin), or signalling (Spence). This sociological perspective on markets has influenced both sociologists (see Joel M. Podolny) and economists (see Olivier Favereau).
White's later work discussed linguistics. In Identity and Control he emphasized "switching" between network domains as a way to account for grammar without ignoring meaning, as much of standard linguistic theory does. He had a long-standing interest in organizations, and before he retired, he worked on how strategy fits into the overall models of social construction he had developed.
In addition to his own publications, White is widely credited with training many influential generations of network analysts in sociology, including those trained in the 1960s and 1970s during the Harvard Revolution, as well as in the 1980s and 1990s at Columbia during the New York School of relational sociology.
White's student and teaching assistant, Michael Schwartz, took notes in the spring of 1965, known as Notes on the Constituents of Social Structure, of White's undergraduate Introduction to Social Relations course (Soc Rel 10). These notes were circulated among network analysis students and aficionados, until finally published in 2008 in Sociologica. As the popular social science blog Orgtheory.net explains, "in contemporary American sociology, there are no set of student-taken notes that have had as much underground influence as those from Harrison White's introductory Soc Rel 10 seminar at Harvard."[27]
The first generation of Harvard graduate students that trained with White during the 1960s went on to be a formidable cohort of network analytically inclined sociologists. His first graduate student at Harvard was Edward Laumann, who went on to develop one of the most widely used methods of studying personal networks, known as ego-network surveys (developed with one of Laumann's students at the University of Chicago, Ronald Burt). Several of them went on to contribute to the "Toronto school" of structural analysis. Barry Wellman, for instance, contributed heavily to the cross-fertilization of network analysis and community studies, later contributing to the earliest studies of online communities. Another of White's earliest students at Harvard was Nancy Lee (now Nancy Howell), who used social network analysis in her groundbreaking study of how women seeking an abortion found willing doctors before Roe v. Wade. She found that women found doctors through links of friends and acquaintances and were, on average, four degrees separated from the doctor. White also trained later additions to the Toronto school, Harriet Friedmann ('77) and Bonnie Erickson ('73).
One of White's most well-known graduate students was Mark Granovetter, who attended Harvard as a Ph.D. student from 1965 to 1970. Granovetter studied how people got jobs and discovered they were more likely to get them through acquaintances than through friends. Recounting the development of his widely cited 1973 article, "The Strength of Weak Ties", Granovetter credits White's lectures, and specifically White's description of sociometric work by Anatol Rapoport and William Horvath, with giving him the idea. This, tied with earlier work by Stanley Milgram (who was also in the Harvard Department of Social Relations 1963–1967, though not one of White's students), gave scientists a better sense of how the social world was organized: into many dense groups with "weak ties" between them. Granovetter's work provided the theoretical background for Malcolm Gladwell's The Tipping Point. This line of research is still actively being pursued by Duncan Watts, Albert-László Barabási, Mark Newman, Jon Kleinberg and others.
White's research on "vacancy chains" was assisted by a number of graduate students, including Michael Schwartz and Ivan Chase. The outcome of this was the book Chains of Opportunity. The book described a model of social mobility where the roles and the people that filled them were independent. The idea of a person being partially created by their position in patterns of relationships has become a recurring theme in his work. This provided a quantitative analysis of social roles, allowing scientists new ways to measure society that were not based on statistical aggregates.
During the 1970s, White worked with his students Scott Boorman, Ronald Breiger, and François Lorrain on a series of articles that introduced a procedure called "blockmodeling" and the concept of "structural equivalence." The key idea behind these articles was identifying a "position" or "role" through similarities in individuals' social structure, rather than characteristics intrinsic to the individuals or a priori definitions of group membership.
At Columbia, White trained a new cohort of researchers who pushed network analysis beyond methodological rigor to theoretical extension and the incorporation of previously neglected concepts, namely, culture and language.
Many of his students and mentees have had a strong impact in sociology. Other former students include Michael Schwartz and Ivan Chase, both professors at Stony Brook; Joel Levine, who founded Dartmouth College's Math/Social Science program; Edward Laumann, who pioneered survey-based egocentric network research and became a dean and provost at the University of Chicago; Kathleen Carley at Carnegie Mellon University; Ronald Breiger at the University of Arizona; Barry Wellman at the University of Toronto and then the NetLab Network; Peter Bearman at Columbia University; Bonnie Erickson (Toronto); Christopher Winship (Harvard University); Joel Levine (Dartmouth College); Nicholas Mullins (Virginia Tech, deceased); Margaret Theeman (Boulder); Brian Sherman (retired, Atlanta); Nancy Howell (retired, Toronto); David R. Gibson (University of Notre Dame); Matthew Bothner (University of Chicago); Ann Mische (University of Notre Dame); Kyriakos Kontopoulos (Temple University); and Frédéric Godart (INSEAD).[28]
White died at an assisted living facility in Tucson on May 19, 2024, at the age of 94.[29]
Source: https://en.wikipedia.org/wiki/Harrison_White
In cryptography, a brute-force attack consists of an attacker submitting many passwords or passphrases with the hope of eventually guessing correctly. The attacker systematically checks all possible passwords and passphrases until the correct one is found. Alternatively, the attacker can attempt to guess the key, which is typically created from the password using a key derivation function. This is known as an exhaustive key search. This approach does not depend on intellectual tactics; it simply relies on making a very large number of attempts.[citation needed]
A brute-force attack is a cryptanalytic attack that can, in theory, be used to attempt to decrypt any encrypted data (except for data encrypted in an information-theoretically secure manner).[1] Such an attack might be used when it is not possible to take advantage of other weaknesses in an encryption system (if any exist) that would make the task easier.
When password-guessing, this method is very fast when used to check all short passwords, but for longer passwords other methods such as the dictionary attack are used because a brute-force search takes too long. Longer passwords, passphrases and keys have more possible values, making them exponentially more difficult to crack than shorter ones due to the diversity of characters.[2]
Brute-force attacks can be made less effective by obfuscating the data to be encoded, making it more difficult for an attacker to recognize when the code has been cracked, or by making the attacker do more work to test each guess. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute-force attack against it.[3]
Brute-force attacks are an application of brute-force search, the general problem-solving technique of enumerating all candidates and checking each one. The word 'hammering' is sometimes used to describe a brute-force attack,[4]with 'anti-hammering' for countermeasures.[5]
Brute-force attacks work by calculating every possible combination that could make up a password and testing it to see if it is the correct password. As the password's length increases, the amount of time, on average, to find the correct password increases exponentially.[6]
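As a rough illustration in Python (the 62-symbol alphanumeric alphabet and the guess rate of one billion attempts per second are assumptions, not figures from the article; real rates vary enormously), the growth is easy to tabulate:

# Illustrative sketch only: alphabet size and guess rate are assumed values.
ALPHABET_SIZE = 62          # a-z, A-Z, 0-9
GUESSES_PER_SECOND = 1e9

for length in (6, 8, 10, 12):
    keyspace = ALPHABET_SIZE ** length
    avg_years = keyspace / 2 / GUESSES_PER_SECOND / (365.25 * 24 * 3600)
    print(f"length {length:2d}: {keyspace:.2e} candidates, ~{avg_years:.2e} years on average")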
The resources required for a brute-force attack grow exponentially with increasing key size, not linearly. Although U.S. export regulations historically restricted key lengths to 56-bit symmetric keys (e.g. Data Encryption Standard), these restrictions are no longer in place, so modern symmetric algorithms typically use computationally stronger 128- to 256-bit keys.
There is a physical argument that a 128-bit symmetric key is computationally secure against brute-force attack. The Landauer limit implied by the laws of physics sets a lower limit on the energy required to perform a computation of kT·ln 2 per bit erased in a computation, where T is the temperature of the computing device in kelvins, k is the Boltzmann constant, and the natural logarithm of 2 is about 0.693 (0.6931471805599453). No irreversible computing device can use less energy than this, even in principle.[7] Thus, simply flipping through the possible values for a 128-bit symmetric key (ignoring the actual computing needed to check each one) would, theoretically, require 2^128 − 1 bit flips on a conventional processor. If it is assumed that the calculation occurs near room temperature (≈300 K), the Von Neumann–Landauer limit can be applied to estimate the energy required as ≈10^18 joules, which is equivalent to consuming 30 gigawatts of power for one year. This is equal to 30×10^9 W × 365 × 24 × 3600 s = 9.46×10^17 J or 262.7 TWh (about 0.1% of the yearly world energy production). The full actual computation – checking each key to see if a solution has been found – would consume many times this amount. Furthermore, this is simply the energy requirement for cycling through the key space; the actual time it takes to flip each bit is not considered, which is certainly greater than 0 (see Bremermann's limit).[citation needed]
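A quick back-of-the-envelope check of these figures in Python (using standard constants; as in the text, the additional work of actually testing each key is ignored):

import math

k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300                                       # assumed room temperature, K
landauer_per_bit = k_B * T * math.log(2)      # ~2.87e-21 J per bit erased

energy_128 = (2**128 - 1) * landauer_per_bit  # energy just to cycle a 128-bit counter
print(f"{energy_128:.2e} J")                  # ~9.8e17 J, i.e. roughly 10^18 J
print(f"{energy_128 / 3.6e15:.0f} TWh")       # ~272 TWh, the same order as the 262.7 TWh above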
However, this argument assumes that the register values are changed using conventional set and clear operations, which inevitably generate entropy. It has been shown that computational hardware can be designed not to encounter this theoretical obstruction (see reversible computing), though no such computers are known to have been constructed.[citation needed]
As commercial successors of governmental ASIC solutions have become available, also known as custom hardware attacks, two emerging technologies have proven their capability in the brute-force attack of certain ciphers. One is modern graphics processing unit (GPU) technology,[8][page needed] the other is field-programmable gate array (FPGA) technology. GPUs benefit from their wide availability and price-performance benefit, FPGAs from their energy efficiency per cryptographic operation. Both technologies try to transport the benefits of parallel processing to brute-force attacks. In the case of GPUs some hundreds, and in the case of FPGAs some thousands, of processing units can be applied, making them much better suited to cracking passwords than conventional processors. For instance, in 2022, eight Nvidia RTX 4090 GPUs were linked together to test password strength by using the software Hashcat, with results that showed 200 billion eight-character NTLM password combinations could be cycled through in 48 minutes.[9][10]
Various publications in the field of cryptographic analysis have proved the energy efficiency of today's FPGA technology; for example, the COPACOBANA FPGA cluster computer consumes the same energy as a single PC (600 W) but performs like 2,500 PCs for certain algorithms. A number of firms provide hardware-based FPGA cryptographic analysis solutions, from a single FPGA PCI Express card up to dedicated FPGA computers.[citation needed] WPA and WPA2 encryption have successfully been brute-force attacked by reducing the workload by a factor of 50 in comparison to conventional CPUs[11][12] and some hundred in the case of FPGAs.
Advanced Encryption Standard (AES) permits the use of 256-bit keys. Breaking a symmetric 256-bit key by brute force requires 2^128 times more computational power than a 128-bit key. One of the fastest supercomputers in 2019 had a speed of 100 petaFLOPS, which could theoretically check 100 trillion (10^14) AES keys per second (assuming 1000 operations per check), but would still require 3.67×10^55 years to exhaust the 256-bit key space.[13]
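The arithmetic behind the quoted figure, assuming the stated rate of 10^14 key checks per second:

SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 s
CHECKS_PER_SECOND = 1e14                # 100 petaFLOPS at the assumed 1000 operations per key check

years_256 = 2**256 / CHECKS_PER_SECOND / SECONDS_PER_YEAR
print(f"{years_256:.2e} years")         # ~3.67e+55 years, matching the figure above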
An underlying assumption of a brute-force attack is that the complete key space was used to generate keys, something that relies on an effective random number generator, and that there are no defects in the algorithm or its implementation. For example, a number of systems that were originally thought to be impossible to crack by brute force have nevertheless been cracked because the key space to search through was found to be much smaller than originally thought, because of a lack of entropy in their pseudorandom number generators. These include Netscape's implementation of Secure Sockets Layer (SSL) (cracked by Ian Goldberg and David Wagner in 1995) and a Debian/Ubuntu edition of OpenSSL discovered in 2008 to be flawed.[14][15] A similar lack of implemented entropy led to the breaking of Enigma's code.[16][17]
Credential recycling is the hacking practice of re-using username and password combinations gathered in previous brute-force attacks. A special form of credential recycling is pass the hash, where unsalted hashed credentials are stolen and re-used without first being brute-forced.[18]
Certain types of encryption, by their mathematical properties, cannot be defeated by brute force. An example of this is one-time pad cryptography, where every cleartext bit has a corresponding key from a truly random sequence of key bits. A 140-character one-time-pad-encoded string subjected to a brute-force attack would eventually reveal every 140-character string possible, including the correct answer, but of all the answers given, there would be no way of knowing which was the correct one. Defeating such a system, as was done by the Venona project, generally relies not on pure cryptography, but upon mistakes in its implementation, such as the key pads not being truly random, intercepted keypads, or operators making mistakes.[19]
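A small Python sketch of why exhaustive search yields nothing here: for any candidate plaintext of the right length there exists a key that "decrypts" the ciphertext to it, so the ciphertext alone cannot distinguish between candidates (the message and candidate strings below are, of course, made up):

import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))          # truly random, used once
ciphertext = xor(message, key)

# An attacker can "explain" the same ciphertext as any equal-length message:
candidate = b"RETREAT AT TEN"
fitted_key = xor(ciphertext, candidate)
assert xor(ciphertext, fitted_key) == candidate  # decrypts perfectly, yet is wrong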
In the case of an offline attack where the attacker has gained access to the encrypted material, one can try key combinations without the risk of discovery or interference. In the case of online attacks, database and directory administrators can deploy countermeasures such as limiting the number of attempts that a password can be tried, introducing time delays between successive attempts, increasing the answer's complexity (e.g., requiring a CAPTCHA answer or employing multi-factor authentication), and/or locking accounts out after unsuccessful login attempts.[20][page needed] Website administrators may prevent a particular IP address from trying more than a predetermined number of password attempts against any account on the site.[21] Additionally, the MITRE D3FEND framework provides structured recommendations for defending against brute-force attacks by implementing strategies such as network traffic filtering, deploying decoy credentials, and invalidating authentication caches.[22]
In a reverse brute-force attack (also called password spraying), a single (usually common) password is tested against multiple usernames or encrypted files.[23] The process may be repeated for a select few passwords. In such a strategy, the attacker is not targeting a specific user.
Source: https://en.wikipedia.org/wiki/Brute-force_attack
A mobile operating system is an operating system used for smartphones, tablets, smartwatches, smartglasses, or other non-laptop personal mobile computing devices. While computers such as laptops are "mobile", the operating systems used on them are usually not considered mobile, as they were originally designed for desktop computers that historically did not have or need specific mobile features. The line distinguishing mobile from other operating systems has become blurred in recent years, as newer devices have become smaller and more mobile than the hardware of the past. Key developments blurring this line are the introduction of tablet computers, light laptops, and hybrid 2-in-1 PCs.
Mobile operating systems combine features of a desktop computer operating system with other features useful for mobile or handheld use, and usually include a wireless inbuilt modem and SIM tray for telephone and data connection. In Q1 2018, over 123 million smartphones were sold (the most ever recorded), with 60.2% running Android and 20.9% running iOS.[1] Sales in 2012 were 1.56 billion; sales in 2023 were 1.43 billion,[2] with 53.32% being Android.[3] Android alone has more sales than the popular desktop operating system Microsoft Windows, and smartphone use (even without tablets) outnumbers desktop use.[4]
Mobile devices with mobile communications abilities (for example, smartphones) contain two mobile operating systems. The main user-facing software platform is supplemented by a second low-level proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.[5]
Mobile operating systems have had the most use of any operating system since 2017 (measured by web use).[2]
Mobile operating system milestones mirror the development ofmobile phones,PDAs, and smartphones:
These operating systems often run atopbasebandor otherreal-time operating systemsthat handle hardware aspects of the phone.
Android (based on the modifiedLinux kernel) is a mobile operating system developed by Open Handset Alliance.[118]The base system isopen-source(and only the kernelcopyleft), but the apps and drivers which provide functionality are increasingly becomingclosed-source.[119]Besides having the largest installed base worldwide on smartphones, it is also the most popular operating system forgeneral purpose computers[further explanation needed](a category that includes desktop computers and mobile devices), even though Android is not a popular operating system for regular (desktop)personal computers(PCs). Although the Android operating system isfree and open-source software,[120]in devices sold, much of the softwarebundledwith it (including Google apps and vendor-installed software) isproprietary softwareand closed-source.[121]
Android's releases before2.0(1.0,1.5,1.6) were used exclusively on mobile phones. Android 2.x releases were mostly used for mobile phones but also some tablets.Android 3.0was a tablet-oriented release and does not officially run on mobile phones. Both phone and tablet compatibility were merged withAndroid 4.0. The current Android version isAndroid 14, released on October 4, 2023.
Android One, a successor toGoogle Nexus, is a software experience that runs on the unmodified Android operating system. Unlike most of the "stock" Androids running on the market, the Android OneUser Interface(UI) closely resembles theGoogle PixelUI, due to Android One being a software experience developed by Google and distributed to partners such asNokia Mobile (HMD)andXiaomi. Thus, the UI is intended to be as clean as possible.Original equipment manufacturer(OEM) partners may tweak or add additional apps such as cameras to thefirmware, but most of the apps are handled proprietarily by Google. Operating system updates are handled by Google and internally tested by OEMs before being distributed via anOTA updatetoend users.
BharOS is a mobile operating system in India. It is an Indian government-funded project to develop a free and open-source operating system (OS) for use in government and public systems.
BlackBerry Secure is an operating system developed byBlackBerry, based on the Android Open Source Project (AOSP). BlackBerry officially announced the name for their Android-basedfront-endtouch interfacein August 2017, before which BlackBerry Secure was running on BlackBerry brand devices, such asBlackBerry Priv,DTEK 50/60andBlackBerry KeyOne. Currently, BlackBerry plans to license out the BlackBerry Secure to other OEMs.
CalyxOSis anoperating systemfor smartphones based on Android with mostlyfree and open-sourcesoftware. It is produced by theCalyx Instituteas part of its mission to "defend online privacy, security and accessibility."
Cherry OSis a customized operating system that was developed byCherry Mobile. It was first released in 2017 and has been developed with a light interface, optimized performance, tools for security, battery management, and access to localized apps.
ColorOSis a custom front-end touch interface based on the Android Open Source Project (AOSP) and developed byOPPO Electronics Corp.In 2016, OPPO officially released ColorOS with every OPPO andRealmedevice and released an officialROMfor theOnePlus One. Future Realme devices will have their own version of ColorOS.
CopperheadOSis asecurity-hardenedversion of Android.
DivestOSis a soft fork ofLineageOS.[122]Includes Monthly Updates, FOSS Focus, Deblobbing, Security and Privacy focus, and F-Droid[123]
Huawei EMUI is the front-end touch interface developed byHuawei Technologies Co. Ltd.and its sub-brandHonorwhich is based on Google's Android Open Source Project (AOSP). EMUI is preinstalled on most Huawei and Honor devices. While it was based on the open-source Android operating system, it consists of closed-source proprietary software. Since the US sanctions, it is currently a fork of Android similar to FireOS instead of a compatible one.
In mainland China, and internationally since 2020 due to U.S. sanctions, EMUI devices use Huawei Mobile Services such as Huawei AppGallery instead of Google Mobile Services. Aside from being based on Android, Huawei also bundles the HarmonyOS microkernel in the latest EMUI update inside Android, which handles other processes including security authentication such as the fingerprint authentication.[124]
/e/ is an operating systemforkedfrom the source code ofLineageOS(based on Android). /e/ targets Android smart phone devices and usesMicroGas a replacement forGoogle Play Services.[125]/e/OS is not completelyopen source software, because it comes with the proprietary Magic Earth 'Maps' app.
Amazon Fire OSis a mobile operating system forked from Android and produced byAmazonfor itsFire range of tablets,Echoand Echo Dot, and other content delivery devices likeFire TV(previously for theirFire Phone). Fire OS primarily centers on content consumption, with a customized user interface and heavy ties to content available from Amazon's own storefronts and services.
Flyme OSis an operating system developed byMeizu Technology Co., Ltd., anopen-sourceoperating system based on the Android Open Source Project (AOSP). Flyme OS is mainly installed on Meizu smartphones such as theMX series. However, it also has officialROMsupport for a few Android devices.
Funtouch OS is a custom user interface developed by Vivo that is based on the Android Open Source Project. Funtouch OS 10.5 had a redesigned UI that resembled stock Android.
iQOO UI was a custom user interface based on Vivo's Funtouch OS. The UI largely resembled its predecessor, with customizations layered on top of Funtouch OS. It was installed on iQOO smartphones sold in China and was later succeeded by OriginOS.
GrapheneOS is a variant of Android forPixelhardware.
Hello UI (formerly called My UI and My UX) is a custom Android UI developed by Motorola for their devices. It used to look like the stock Android user experience up until My UI 3.x.
HiOS is an Android-based operating system developed byHong Kongmobile phone manufacturerTecno Mobile, a subsidiary ofTranssion Holdings, exclusively for their smartphones. HiOS allows for a wide range of user customization without requiringrootingthe mobile device. The operating system is also bundled with utility applications that allow users to free up memory, freeze applications, limit data accessibility to applications among others. HiOS comes with features like Launcher, Private Safe, Split Screen and Lockscreen Notification.
HTC Sense is a software suite developed by HTC, used primarily on the company's Android-based devices. Serving as a successor to HTC'sTouchFLO 3Dsoftware forWindows Mobile, Sense modifies many aspects of the Androiduser experience, incorporating added features (such as an altered home screen and keyboard),widgets, HTC-developed applications, and redesigned applications. The first device with Sense, theHTC Hero, was released in 2009.
Xiaomi HyperOS or HyperOS (formerly called MIUI[127][128]), developed by the Chinese electronics company Xiaomi, is a mobile operating system based on the Android Open Source Project (AOSP). It is mostly found on Xiaomi smartphones and tablets such as the Xiaomi (formerly Mi) and Redmi series. However, MIUI also had official ROM support for a few other Android devices. Although HyperOS is based on AOSP, which is open-source, it also consists of closed-source proprietary software.
A specific version of MIUI was developed for the Xiaomi sub-brand (currently an independent brand) POCO. The overall experience of the "skin" was similar to standard MIUI, except in the early releases of MIUI for POCO, which, compared to standard MIUI, had an app drawer and allowed third-party Android icon customization. Later releases of MIUI for POCO shared the common experience of standard MIUI, apart from the icons and the POCO Launcher replacing the stock MIUI Launcher. In 2024, MIUI for POCO was replaced by Xiaomi HyperOS.
Indus OS is a custom mobile operating system based on the Android Open Source Project (AOSP). It is developed by the Indus OS team based in India. As of 2018, Indus OS was available on Micromax, Intex, Karbonn, and other Indian smartphone brands.
LG UX (formerlyOptimus UI) was a front-end touch interface developed by LG Electronics and partners, featuring a fulltouch user interface. It was not an operating system. LG UX was used internally by LG for sophisticatedfeature phonesand tablet computers, and was not available for licensing by external parties.
Optimus UI 2, based on Android 4.1.2, has been released on the Optimus K II and the Optimus Neo 3. It features a more refined user interface compared to the prior version based on Android 4.1.1, along with new functionalities such as voice shutter and quick memo.
Lineage Android Distribution is a custom mobile operating system based on the Android Open Source Project (AOSP). It serves as the successor to the highly popular custom ROM,CyanogenMod, from which it was forked in December 2016 when Cyanogen Inc. announced it was discontinuing development and shut down the infrastructure behind the project. Since Cyanogen Inc. retained the rights to the Cyanogen name, the project rebranded its fork as LineageOS.
Similar to CyanogenMod, it does not include any proprietary apps unless the user installs them. It allows Android users who can no longer obtain update support from their manufacturer to continue updating their OS to the latest version based on official releases from Google AOSP, and it offers heavy theme customization.
"MagicOS" (formerly known as Magic UI and Magic Live) is a front-end touch interface developed byHonoras a subsidiary of Huawei Technologies Co. Ltd before Honor became an independent company.
Magic UI is based on HuaweiEMUI, which is based on the Android Open Source Project (AOSP). The overall user interface looks almost identical to EMUI, even after the separation. While it was based on the open-source Android operating system, it consists of closed-source proprietary software.
Due to sanctions imposed by the US on Huawei, new devices released by both Huawei and Honor are no longer allowed to includeGoogle Mobile Services. To allow Honor to regain access to Google services, Huawei sold off Honor to become an independent company, thereby allowing them to pre-install Google Mobile Services on their latest devices.
MyOS (formerly called MiFavor) is a custom Android UI developed byZTEfor their flagship smartphones andnubiasmartphones. MyOS is based on the Android Open Source Project (AOSP). This is a redesign from their previous custom Android UI, MiFavor.
Nothing OS is a custom Android UI developed byNothingfor theirNothing Phone (1). Nothing OS design interface are identical to the stock Android and Pixel UI experience, aside from their custom font and widget which is based on dot design.
nubia UI was a custom Android UI developed byZTEandnubiafor their smartphones. nubia UI was based on the Android Open Source Project (AOSP).
One UI (formerly calledTouchWizandSamsung Experience) is a user interface developed by Samsung Electronics in 2008 with partners. It is not a true operating system, but auser experience. Samsung Experience is used internally by Samsung for smartphones,feature phones, and tablet computers. The Android version of Samsung Experience also came with Samsung-made apps preloaded until theGalaxy S6, which removed all Samsung pre-loaded apps exceptSamsung Galaxy Store(formerly Galaxy Apps) to save storage space due to the removal of itsMicroSD. With the release of Samsung Galaxy S8 and S8+, Samsung Experience 8.1 was preinstalled on it with new functions, known as Samsung DeX. Similar to the concept of Microsoft Continuum, Samsung DeX allowed high-end Galaxy devices such as S8/S8+ or Note 8 to connect into a docking station, which extends the device to allow desktop-like functionality by connecting a keyboard, mouse, and monitor. Samsung also announced "Linux on Galaxy", which allows users to use the standard Linux distribution on the DeX platform.
Additionally, starting from the Galaxy Note 3 onwards, Samsung has included Knox, a hardware-based security platform, in most of its Galaxy phones as an additional security measure on top of the TEE OS. It allows users to install sensitive apps in a more protected environment, the Secure Folder, which is separated from the main home screen.
Origin OSis a custom user interface developed by Vivo that is based on Android. It is a redesigned skin of Funtouch OS. It is currently only available in China but may someday be released globally.
OxygenOS is based on the open-source Android Open Source Project (AOSP) and is developed by OnePlus to replace Cyanogen OS on OnePlus devices such as the OnePlus One. It is preinstalled on the OnePlus 2, OnePlus X, OnePlus 3, OnePlus 3T, OnePlus 5, OnePlus 5T, and OnePlus 6.[130]As stated by OnePlus, OxygenOS is focused on stabilizing and maintaining stock Android functionality like that found on Nexus devices. It consists mainly of Google apps and minor UI customizations to maintain the sleekness of stock Android.
Google Pixel UIor Pixel Launcher is developed by Google and based on the open-source Android system. Unlike Nexus phones, where Google shipped with stock Android, the UI that came with first-generationPixelphones was slightly modified. As part of the Google Pixel software, the Pixel UI and its home launcher are closed-source and proprietary, so it is only available on Pixel family devices. However, third-party mods allow non-Pixel smartphones to install Pixel Launcher withGoogle Nowfeed integration. FromPixel 3series onwards, Google had included the Trusty OS as their TEE OS running aside Android.
realme UI is a mobile operating system developed by Realme which is based on OPPO's ColorOS, which itself is based on the Android Open Source Project (AOSP). The UI mostly resembles its predecessor, but adds customizations on top of ColorOS to match Realme's target audience.
realme UI R Edition is a custom Android skin that Realme developed for its lower-end "C" and Narzo device lines. It is based on Android Go, so the overall experience is scaled down to allow for a smoother experience on budget Realme devices.
Red Magic OS is a mobile operating system developed by ZTE andNubiafor their Red Magic devices.
Replicant is a custom mobile operating system based on Android with all proprietary drivers and closed-source bloatware removed.
TCL UI is a custom user interface developed byTCL Technologyfor their in-house smartphone series. The OS is based on the Android Open Source Project (AOSP).
VOS is a custom Android UI developed byBQ AquarisandVsmart.
XOS (formerly known as XUI) is an Android-based operating system developed byHong Kongmobile phone manufacturerInfinix Mobile, a subsidiary ofTranssion Holdings, exclusively for their smartphones. XOS allows for a wide range of user customization without requiringrootingthe mobile device. The operating system comes with utility applications that allow users to protect their privacy, improve speed, enhance their experience, etc. XOS comes with features like XTheme, Scan to Recharge, Split Screen and XManager.
Sony Xperia UI (formerly known as Sony Ericsson Timescape UI) was the front-end UI developed bySony Mobile(formerly Sony Ericsson) in 2010 for their Android-basedSony Xperiaseries. Sony Xperia UI mostly consisted of Sony's own applications such as Sony Music (formerly known as Walkman Music player), Albums and Video Player. During its time as Timescape UI, the UI differed from the standard Android UI—instead of traditional apps dock on the bottom part, they were located at the four corners of the home screen, while the middle of the screen consisted of thewidget. However, recent UI developments more closely resemble those of stock Android.
ZenUI is a front-end touch interface developed byASUSwith partners, featuring a full touch user interface. ZenUI is used by ASUS for itsAndroid phonesand tablet computers, and is not available for licensing by external parties. ZenUI also comes preloaded with ASUS-made apps like ZenLink (PC Link, Share Link, Party Link & Remote Link).
ZUI is a custom operating system originally developed byLenovosubsidiaryZUK Mobilefor their smartphones. However, after the shutting down of ZUK Mobile, Lenovo took over as the main developer of ZUI. The operating system is based on the Android Open Source Project (AOSP).
Wear OS (also known simply as Wear and formerly Android Wear) is a version of Google's Android operating system designed for smartwatches and otherwearables. By pairing with mobile phones running Android version 6.0 or newer, or iOS version 10.0 or newer with limited support from Google's pairing application, Wear OS integratesGoogle Assistanttechnology and mobile notifications into a smartwatch form factor.
In May 2021 atGoogle I/O, Google announced a major update to the platform, internally known as Wear OS 3.0. It incorporates a new visual design inspired by Android 12, and Fitbit exercise tracking features. Google also announced a partnership with Samsung Electronics, who is collaborating with Google to unify its Tizen-based smartwatch platform with Wear OS, and has committed to using Wear OS on its future smartwatch products. The underlying codebase was also upgraded to Android 11. Wear OS 3.0 will be available to Wear OS devices runningQualcomm SnapdragonWear 4100system on chip, and will be an opt-in upgrade requiring a factory reset to install.
One UI Watch is the user interface Samsung developed for their Wear OS based smartwatch, officially announced after both Google and Samsung confirmed they would unify their respective wearable operating systems (Google Wear OS 2.0 and Samsung Tizen) into Wear OS 3.0.
ChromeOS is an operating system designed by Google that is based on the Linux kernel and uses theGoogle Chromeweb browser as its principal user interface. As a result, ChromeOS primarily supportsweb applications. Google announced the project in July 2009, conceiving it as an operating system in which both applications and user data reside in thecloud: hence ChromeOS primarily runsweb applications.[132]
Due to the increasing popularity of 2-in-1 PCs, most recent Chromebooks have been introduced with touch-screen capability, and Android applications started to become available for the operating system in 2014. In 2016, access to Android apps from the entire Google Play Store was introduced on supported ChromeOS devices. With the support of Android applications, some Chromebook devices are positioned as tablets instead of notebooks.
ChromeOS is only available pre-installed on hardware from Google manufacturing partners. An open source equivalent,ChromiumOS, can becompiledfrom downloadedsource code. Early on, Google provided design goals for ChromeOS, but has not otherwise released a technical description.
Sailfish OS is from Jolla. Its core middleware stack, which comes from Mer, is open source under the GNU General Public License (GPL). Due to Jolla's business model, its alliances with various partners, and the intentional design of the OS internals, Sailfish can incorporate third-party software in several layers, including Jolla's own software; for example, Jolla's UI is proprietary (closed source), so such components can carry many kinds of licences. However, users can replace them with open-source components, such as the Nemo UI instead of Jolla's UI.
After Nokia abandoned the MeeGo project in 2011, most of the MeeGo team left Nokia and established Jolla as a company to pursue MeeGo and Mer business opportunities. The Mer standard allows the OS to be launched on any hardware with a Mer-compatible kernel. In 2012, Sailfish OS, a Linux system based on MeeGo and using middleware from the Mer core stack distribution, was launched for public use. The first device, the Jolla smartphone, was unveiled on May 20, 2013. In 2015, the Jolla Tablet was launched and the BRICS countries declared Sailfish an officially supported OS. Jolla started licensing Sailfish OS 2.0 to third parties, and some devices sold are updatable to Sailfish 2.0 without restriction.
Nemo Mobileis a community-driven OS, similar to Sailfish but attempting to replace its proprietary components, such as the user interface.[133][134][135]
SteamOS is aLinux distributiondeveloped byValve. It incorporates Valve's popular namesakeSteamvideo game storefront and is the primary operating system forSteam Machinesand theSteam Deck. SteamOS isopen sourcewith some closed source components.
SteamOS was originally built to support streaming of video games from onepersonal computerto the one running SteamOS within the same network, although the operating system can support standalone systems and was intended to be used as part of Valve'sSteam Machineplatform. SteamOS versions 1.0, released in December 2013, and 2.0 were based on theDebiandistribution of Linux withGNOMEdesktop.[136]With SteamOS, Valve encouraged developers to incorporate Linux compatibility into their releases to better support Linux gaming options.
In February 2022, Valve released thehandheldgaming computerSteam Deckrunning SteamOS 3.0. SteamOS 3 is based on theArch Linuxdistribution withKDE Plasma 5.[137][138]
Tizen (based on the Linux kernel) is a mobile operating system hosted by Linux Foundation, together with support from the Tizen Association, guided by a Technical Steering Group composed of Intel and Samsung.
Tizen is an operating system for devices including smartphones, tablets, and In-Vehicle Infotainment (IVI) devices; however, it currently focuses mainly on wearables and smart TVs. It is an open-source system (although the SDK was closed-source and proprietary) that aims to offer a consistent user experience across devices. Tizen's main components are the Linux kernel and the WebKit runtime. According to Intel, Tizen "combines the best of LiMo and MeeGo." HTML5 apps are emphasized, with MeeGo encouraging its members to transition to Tizen, stating that the "future belongs to HTML5-based applications, outside of a relatively small percentage of apps, and we are firmly convinced that our investment needs to shift toward HTML5." Tizen is targeted at a variety of platforms such as handsets, touch PCs, smart TVs, and in-vehicle entertainment.[139][140]On May 17, 2013, Tizen released version 2.1, code-named Nectarine.[141]
While Tizen itself was open source, most of the UX and UI layer that was developed by Samsung was mainly closed-source and proprietary, such as the TouchWiz UI on the Samsung Z's series smartphone and One UI for their Galaxy Watch wearable lines.
Note that some refrigerators use Tizen,[142]even though they are not considered mobile devices.
Samsung has revealed plans to discontinue the Tizen operating system by the end of 2025, marking a complete halt in support for the smartwatch OS. The company ceased using Tizen OS with its Galaxy Watch4 release, favoring a hybrid OS developed with Google.
KaiOS is from Kai. It is based onFirefox OS/Boot to Gecko. Unlike most mobile operating systems which focus on smartphones, KaiOS was developed mainly for feature phones, giving these access to more advanced technologies usually found on smartphones, such as app stores and Wi-Fi/4G capabilities.[143]
It is a mix of closed-source and open-source components.[144][145]Firefox OS/B2G was released under the permissive MPL 2.0, but KaiOS does not redistribute its own code under the same license, so KaiOS is now presumably proprietary (though still mostly open-source, with its source code published).[144][145]KaiOS is not entirely proprietary, as it uses the copyleft GPL Linux kernel also used in Android.[146]
Smart Feature OS is a custom version of KaiOS that was developed and solely used by HMD Global for its line of Nokia feature phones running KaiOS. The main differences between stock KaiOS and Smart Feature OS are aesthetic, such as icons, widgets, and a custom Nokia ringtone and notification tone.
Fuchsia is a capability-based, real-time operating system (RTOS) currently being developed by Google. It was first discovered as a mysterious code post on GitHub in August 2016, without any official announcement. In contrast to prior Google-developed operating systems such as ChromeOS and Android, which are based on Linux kernels, Fuchsia is based on a new microkernel called "Zircon", derived from "Little Kernel", a small operating system intended for embedded systems. This allows it to drop Linux and the copyleft GPL under which the Linux kernel is licensed; Fuchsia is licensed under the permissive BSD 3-clause, Apache 2.0, and MIT licenses. Upon inspection, media outlets noted that the code post on GitHub suggested Fuchsia's capability to run on universal devices, from embedded systems to smartphones, tablets and personal computers. In May 2017, Fuchsia was updated with a user interface, along with a developer writing that the project was not merely experimental, prompting media speculation about Google's intentions with the operating system, including the possibility of it replacing Android.[147]
LiteOS is a discontinued lightweight open source real-time operating system which is part of Huawei's "1+2+1" Internet of Things solution, which is similar to Google Android Things and Samsung Tizen. It was released under thepermissiveBSD 3-clause license. LiteOS was used in the Huawei Watch GT series and their sub-brand Honor Magic Watch series.[citation needed]
OpenHarmonyis an open-source version of HarmonyOS developed and donated by Huawei to the OpenAtom Foundation. It supports devices running a mini system with memory as small as 128 KB, or running a standard system with memory greater than 128 MB. The open sourceHarmonyOSis based on the HuaweiLiteOSkernel andLinux kernelfor standard systems. OpenHarmony LiteOS Cortex-A brings small-sized, low-power, and high-performance experience and builds a unified and open ecosystem for developers. In addition, it provides rich kernel mechanisms, more comprehensive Portable Operating System Interface (POSIX), and a unified driver framework, Hardware Driver Foundation (HDF), which offers unified access for device developers and friendly development experience for application developers.[citation needed]
Fedora Mobility is a mobile operating system under development by the Fedora Project, which is porting Fedora to run on portable devices such as phones and tablets.
LuneOS is a modern reimplementation of the Palm/HP webOS interface.
Manjaro ARM is a mobile operating system with the Plasma Mobile desktop environment; it runs as the default operating system on the PinePhone, an ARM-based smartphone released by Pine64.
A mobile Debian-based system focused on the PinePhone and soon the Librem.[citation needed]
Plasma Mobile is a Plasma variant for smartphones.[148]Plasma Mobile runs onWaylandand it is compatible with Ubuntu Touch applications,[149]PureOSapplications,[150]and eventually Android applications[151]via KDE'sShashlikproject – also sponsored by Blue Systems,[152][153]orAnbox. It is under the copyleftGPLv2license.
The Necuno phone uses Plasma Mobile. It is entirely open-source and therefore omits a cellular modem (whose firmware would be proprietary), so it must make calls by VoIP, like a pocket computer.[154]
postmarketOS is based on theAlpine LinuxLinux distribution. It is intended to run on older phone hardware. As of 2019[update]it is inalpha.
PureOS is a Debian GNU/Linux derivative using onlyfree softwaremeeting theDebian Free Software Guidelines, mainly thecopyleftGPL. PureOS is endorsed byFree Software Foundationas one of the freedom-respecting operating systems.[155]It is developed byPurism, and was already in use on Purism's laptops before it was used on theLibrem 5smartphone. Purism, in partnership withGNOMEandKDE, aims to separate theCPUfrom thebaseband processorand include hardwarekill switchesfor the phone'sWi-Fi,Bluetooth, camera, microphone, and baseband processor, and provide both GNOME andKDE Plasma Mobileas options for the desktop environment.[156][157]
Ubuntu Touch is an open-source (GPL) mobile version of theUbuntuoperating system[112]originally developed in 2013 byCanonical Ltd.and continued by the non-profit UBports Foundation in 2017.[158][159]Ubuntu Touch can run on a pure GNU/Linux base on phones with the required drivers, such as theLibrem 5[150]and thePinePhone.[160]To enable hardware that was originally shipped with Android, Ubuntu Touch makes use of the Android Linux kernel, using Android drivers and services via anLXCcontainer, but does not use any of the Java-like code of Android.[161][162]As of February 2022, Ubuntu Touch is available on 78 phones and tablets.[112][163]The UBports Installer serves as an easy-to-use tool to allow inexperienced users to install the operating system on third-party devices without damaging their hardware.[112][164]
iOS (formerly named iPhone OS) was created byApple Inc.It has the second largest installed base worldwide on smartphones, but the largest profits, due to aggressive price competition between Android-based manufacturers.[165]It is closed-source and proprietary, and is built on the open sourceDarwinoperating system. The iPhone,iPod Touch,iPad, and second and third-generationApple TVall use iOS, which is derived frommacOS.
Native third-party applications were not officially supported until the release of iPhone OS 2.0 on July 11, 2008. Before this, "jailbreaking" allowed third-party applications to be installed. In recent years, the jailbreaking scene has changed drastically due to Apple's continued efforts to secure their operating system and prevent unauthorized modifications. Currently, jailbreaks of recent iterations of iOS are only semi-untethered, which requires a device to be re-jailbroken at every boot, and exploits for jailbreaks are becoming increasingly hard to find and use.
Currently all iOS devices are developed by Apple and manufactured byFoxconnor another of Apple's partners.
iPadOS is a tablet operating system created and developed by Apple Inc. specifically for their iPad line of tablet computers. It was announced at the company's 2019 Worldwide Developers Conference (WWDC), as a derivation from iOS but with a greater emphasis put on multitasking. It was released on September 24, 2019.
watchOS is the operating system of the Apple Watch, developed by Apple Inc. It is based on the iOS operating system and has many similar features. It was released on April 24, 2015, along with the Apple Watch, the only device that runs watchOS. It is currently the most widely used wearable operating system. Its features focus on convenience, such as the ability to place phone calls and send texts, and on health, such as fitness and heart-rate tracking.
The most current version of the watchOS operating system iswatchOS 10.
Kindle firmware is a mobile operating system specifically designed forAmazon Kindlee-readers. It is based on a custom Linux kernel, but it is mostly closed-source and proprietary.
HarmonyOS is a distributed operating system developed by Huawei that was specifically designed for smartphones, tablets, TVs, smartwatches, and other smart devices made by Huawei. It is based on a proprietary multi-kernel design with a Linux kernel subsystem. It initially launched on August 9, 2019, for smart-screen TVs and was officially released for smartphones on June 2, 2021. On August 4, 2023, Huawei announced its full-stack HarmonyOS NEXT, which will replace the current multi-kernel stack containing the Linux kernel subsystem and APK apps, with only native HarmonyOS apps able to be used. On January 18, 2024, a Galaxy Edition version was announced for the next version of HarmonyOS.
TheNintendo Switch system software(also known by its codename Horizon) is an updatable firmware and operating system used by theNintendo Switchhybrid video game console/tablet andNintendo Switch Litehandheld game console. It is based on a proprietary microkernel. The UI includes a HOME screen, consisting of the top bar, the screenshot viewer ("Album"), and shortcuts to the Nintendo eShop, News, and Settings.
The system itself is based on the Nintendo 3DS system software; additionally, the networking stack in the Switch OS is derived at least in part from FreeBSD code, while the Stagefright multimedia framework is derived from Android code.
ThePlayStation Vita system softwareis the official firmware and operating system for thePlayStation VitaandPlayStation TVvideo game consoles. It uses the LiveArea as its graphical shell. The PlayStation Vita system software has one optional add-on component, the PlayStation Mobile Runtime Package. The system is built on a Unix-base which is derived from FreeBSD and NetBSD.
Windows 10 (not to be confused with Windows 10 Mobile; see below) is a personal computer operating system developed and released by Microsoft as part of the Windows NT family of operating systems. It was released on July 29, 2015, and many editions and versions have been released since then. It was designed to run across multiple Microsoft products such as PCs and tablets. The Windows user interface was revised to handle transitions between a mouse-oriented interface and a touchscreen-optimized interface based on available input devices, particularly on 2-in-1 PCs.
Windows 10 also introduced universal apps, expanding on Metro-style apps; these apps can be designed to run across multiple Microsoft product families with nearly identical code, including PCs, tablets, smartphones, embedded systems, Xbox One, Surface Hub, and Mixed Reality.
Windows 11 is a major version of theWindows NToperating system developed by Microsoft that was announced on June 24, 2021, and is the successor to Windows 10, which was released in 2015. Windows 11 was released on October 5, 2021, as a free upgrade viaWindows Updatefor eligible devices running Windows 10.
Microsoft promoted that Windows 11 would have improved performance and ease of use over Windows 10; it features major changes to the Windows shell influenced by the canceled Windows 10X, including a redesigned Start menu, the replacement of its "live tiles" with a separate "Widgets" panel on the taskbar, the ability to create tiled sets of windows that can be minimized and restored from the taskbar as a group, and new gaming technologies inherited from the Xbox Series X and Series S such as Auto HDR and DirectStorage on compatible hardware. Internet Explorer is fully replaced by the Blink layout engine-based Microsoft Edge, while Microsoft Teams is integrated into the Windows shell. Microsoft also announced plans to offer support for Android apps to run on Windows 11, with support for the Amazon Appstore and manually installed packages. On March 5, 2024, Microsoft announced that support for Android apps would be deprecated on March 5, 2025.
Similar to Windows 10, it was designed to run across multiple Microsoft products such as PCs and tablets. The Windows user interface was further revised to combine the elements of the mouse-oriented interface and the touchscreen-optimized interface into a hybrid UI that combines the capabilities of touch with a traditional desktop UI.
Other than the major operating systems, some companies such as Huami (Amazfit), Huawei, realme, TCL, and Xiaomi have developed their own proprietary RTOSes specifically for their own smartbands and smartwatches. These are designed for power efficiency and lower battery consumption and are not based on any other operating system.
An operating system primarily designed for Huami's Bip series; however, Huami is currently developing the operating system to run on other smartwatches as well.
Huawei Band operating system is an operating system specifically designed and developed by Huawei for its fitness trackers, including smartbands from Honor.
Proprietary OS developed by Lenovo for their fitness trackers and smartwatches.
A proprietary operating system designed to run on realme smartbands and smartwatches.
A proprietary RTOS powering TCL and Alcatel branded smartbands and smartwatches.
Proprietary RTOS that is developed by Huami for theXiaomi Mi Bandseries.
CyanogenMod was a custom mobile operating system based on the Android Open Source Project (AOSP). It was a custom ROM that was co-developed by the CyanogenMod community. The OS did not include any proprietary apps unless the user installed them. Due to its open-source nature, CyanogenMod allowed Android users who could no longer obtain update support from their manufacturer to continue updating their OS version to the latest one based on official releases from Google AOSP, with heavy theme customization. The last version of the OS was CyanogenMod 14.1, which was based on Android 7.1 Nougat.
On December 24, 2016, CyanogenMod announced on their blog that they would no longer be releasing any CyanogenMod updates. All development moved to LineageOS.
Cyanogen OS was based on CyanogenMod and maintained by Cyanogen Inc.; however, it included proprietary apps and was only available for commercial use.
Firefox OS (formerly known as "Boot to Gecko" and shortly "B2G")[166]is from Mozilla. It was an open source mobile operating system released under theMozilla Public Licensebuilt on the Android Linux kernel and used Android drivers, but did not use any Java-like code of Android.
According toArs Technica, "Mozilla says that B2G is motivated by a desire to demonstrate that the standards-based open Web has the potential to be a competitive alternative to the existing single-vendor application development stacks offered by the dominant mobile operating systems."[167]In September 2016, Mozilla announced that work on Firefox OS has ceased, and all B2G-related code would be removed from mozilla-central.[168]
MeeGowas from non-profit organizationThe Linux Foundation. It is open source and GPL. At the 2010Mobile World Congressin Barcelona, Nokia and Intel both unveiledMeeGo, a mobile operating system that combined Moblin and Maemo to create an open-sourced experience for users across all devices. In 2011 Nokia announced that it would no longer pursue MeeGo in favor of Windows Phone. Nokia announced theNokia N9on June 21, 2011, at the Nokia Connection event[169]in Singapore. LG announced its support for the platform.[170]Maemo was a platform developed by Nokia for smartphones andInternet tablets. It is open source and GPL, based onDebian GNU/Linuxand draws much of itsgraphical user interface(GUI),frameworks, andlibrariesfrom the GNOME project. It uses theMatchboxwindow manager and theGTK-basedHildonas its GUI andapplication framework.
webOS was developed by Palm. webOS is an open source mobile operating system running on the Linux kernel, initially developed by Palm, which launched with thePalm Pre. After being acquired by HP, two phones (theVeerand thePre 3) and a tablet (theTouchPad) running webOS were introduced in 2011. On August 18, 2011, HP announced that webOS hardware would be discontinued,[171]but would continue to support and update webOS software and develop the webOS ecosystem.[172]HP released webOS as open source under the name Open webOS, and plans to update it with additional features.[173]On February 25, 2013, HP announced the sale of webOS to LG Electronics, who used the operating system for its "smart" or Internet-connected TVs. However, HP retained patents underlying WebOS and cloud-based services such as the App Catalog.
Bada platform (stylized as bada; Korean: 바다) was an operating system for mobile devices such as smartphones and tablet computers. It was developed by Samsung Electronics. Its name is derived from "바다 (bada)", meaning "ocean" or "sea" in Korean. It ranges from mid- to high-end smartphones. To foster adoption of Bada OS, since 2011 Samsung reportedly has considered releasing the source code under an open-source license, and expanding device support to include Smart TVs. Samsung announced in June 2012 intentions to merge Bada into the Tizen project, but would meanwhile use its own Bada operating system, in parallel with Google Android OS and Microsoft Windows Phone, for its smartphones. All Bada-powered devices are branded under the Wave name, but not all of Samsung's Android-powered devices are branded under the name Galaxy.
On February 25, 2013, Samsung announced that it will stop developing Bada, moving development to Tizen instead. Bug reporting was finally terminated in April 2014.[174]
In 1999,Research In Motionreleased its first BlackBerry devices, providing secure real-time push-email communications on wireless devices. Services such as BlackBerry Messenger provide the integration of all communications into a single inbox. In September 2012, RIM announced that the 200 millionth BlackBerry smartphone was shipped. As of September 2014, there were around 46 million active BlackBerry service subscribers.[175]In the early 2010s, RIM underwent a platform transition, changing its company name to BlackBerry Limited and making new devices using a new operating system named "BlackBerry 10".[176]
BlackBerry 10 (based on theQNXOS) is from BlackBerry. As a smartphone OS, it is closed-source and proprietary, and only runs on phones and tablets manufactured by BlackBerry.
One of the dominant platforms in the world in the late 2000s, its global market share was reduced significantly by the mid-2010s. In late 2016, BlackBerry announced that it would continue to support the OS, with a promise to release 10.3.3.[177][178]However, BlackBerry 10 would not receive any major updates, as BlackBerry and its partners would focus more on their Android-based development.[179]
TheNintendo 3DS system softwareis the updatable operating system used by the Nintendo 3DS.
The Symbian platform was developed by Nokia for some models of smartphones. It is proprietary software; however, it was also used by Ericsson (Sony Ericsson), Sendo, and BenQ. The operating system was discontinued in 2012, although a slimmed-down version for basic phones was still developed until July 2014. Microsoft officially shelved the platform in favor of Windows Phone after its acquisition of Nokia.[180]
Palm OS/Garnet OS was fromAccess Co.It is closed-source and proprietary. webOS was introduced by Palm in January 2009, as the successor to Palm OS with Web 2.0 technologies,open architectureand multitasking abilities.
Windows Mobile was a family of proprietary operating systems from Microsoft aimed at business and enterprise users, based on Windows CE and originally developed forPocket PC(PDA) devices. In 2010 it was replaced with the consumer-focused Windows Phone.[118][55]
Versions of Windows Mobile came in multiple editions, like "Pocket PC Premium", "Pocket PC Professional", "Pocket PC Phone", and "Smartphone" (Windows Mobile 2003) or "Professional", "Standard", and "Classic" (Windows Mobile 6.0). Some editions were touchscreen-only and some were keyboard-only, although there were cases where device vendors managed to graft support for one onto an edition targeted at the other. Cellular phone features were also only supported by some editions. Microsoft started work on a version of Windows Mobile that would combine all features together, but it was aborted, and instead they focused on developing the non-backward-compatible, touchscreen-only Windows Phone 7.[76]
Windows Phone is a proprietary mobile operating system developed by Microsoft for smartphones as the replacement successor to Windows Mobile andZune. Windows Phone features a new touchscreen-oriented user interface derived from Metro design language. Windows Phone was replaced by Windows 10 Mobile in 2015.
Windows 10 Mobile (formerly called Windows Phone) was from Microsoft. It was closed-source and proprietary.
Unveiled on February 15, 2010, Windows Phone included a user interface inspired by Microsoft'sMetro Design Language. It was integrated with Microsoft services such asOneDriveand Office,Xbox Music,Xbox Video,Xbox Livegames, andBing, but also integrated with many other non-Microsoft services such asFacebookandGoogle accounts. Windows Phone devices were made primarily byMicrosoft Mobile/Nokia, and also by HTC and Samsung.
On January 21, 2015, Microsoft announced that the Windows Phone brand would be phased out and replaced with Windows 10 Mobile, bringing tighter integration and unification with its PC counterpart Windows 10, and providing a platform for smartphones and tablets with screen sizes under 8 inches.
On October 8, 2017, Microsoft officially announced that they would no longer push any major updates to Windows 10 Mobile. The operating system was put in maintenance mode, where Microsoft would push bug fixes and general improvements only. Windows 10 Mobile would not receive any new feature updates.[113][114]
On January 18, 2019, Microsoft announced that support for Windows 10 Mobile wouldendon December 10, 2019, with no further security updates released after then, and that Windows 10 Mobile users should migrate to iOS or Android phones.[116][117]
The released versions of Windows 10 Mobile were:
Windows 8is a major release of theWindows NToperating systemdeveloped byMicrosoft. It wasreleased to manufacturingon August 1, 2012, and was made available for download viaMSDNandTechNeton August 15, 2012.[181]Nearly three months after its initial release, it finally made its first retail appearance on October 26, 2012.[182]
Windows 8 introduced major changes to the operating system's platform and user interface with the intention to improve its user experience on tablets, where Windows competed with mobile operating systems such as Android and iOS.[183]In particular, these changes included a touch-optimized Windows shell and start screen based on Microsoft's Metro design language, integration with online services, the Windows Store, and a new keyboard shortcut for screenshots.[184]Many of these features were adapted from Windows Phone. Windows 8 also added support for USB 3.0, Advanced Format, near-field communication, and cloud computing, as well as a new lock screen with clock and notifications. Additional security features, including built-in antivirus software, integration with Microsoft SmartScreen phishing filtering, and support for Secure Boot on supported devices, were introduced. It was the first Windows version to support the ARM architecture under the Windows RT branding. CPUs without PAE, SSE2, and NX are not supported in this version.
Windows 8.1is a release of theWindows NToperating systemdeveloped byMicrosoft. It wasreleased to manufacturingon August 27, 2013, and broadly released for retail sale on October 17, 2013, about a year after the retail release of its predecessor, and succeeded byWindows 10on July 29, 2015. Windows 8.1 was made available for download viaMSDNandTechnetand available as a free upgrade for retail copies ofWindows 8andWindows RTusers via theWindows Store. Aserverversion,Windows Server 2012 R2, was released on October 18, 2013.
Windows 8.1 aimed to address complaints of Windows 8 users and reviewers on launch. Enhancements include an improvedStart screen, additional snap views, additional bundled apps, tighterOneDrive(formerly SkyDrive) integration,Internet Explorer 11(IE11), aBing-powered unified search system, restoration of a visibleStart buttonon thetaskbar, and the ability to restore the previous behavior of opening the user's desktop on login instead of the Start screen.
In 2006, Android and iOS did not exist and only 64 million smartphones were sold.[185]In 2018 Q1, 183.5 million smartphones were sold and global market share was 48.9% for Android and 19.1% for iOS. Only 131,000 smartphones running other operating systems were sold, constituting 0.03% of sales.[186]
According toStatCounterweb use statistics (a proxy for all use), smartphones (alone without tablets) have majority use globally, with desktop computers used much less (and Android, in particular, more popular than Windows).[187]Use varies however by continent with smartphones way more popular in the biggest continents, i.e. Asia, and the desktop still more popular in some, though not in North America.
While the desktop is still popular in many countries (its overall share was down to 44.9% in the first quarter of 2017[188]), smartphones are more popular even in many developed countries (or are about to be in more). Countries on every continent are desktop-minority; some European countries (plus some in South America, a few in North America such as Haiti, and most in Asia and Africa) are smartphone-majority, with Poland and Turkey highest at 57.68% and 62.33%, respectively. In Ireland, smartphone use at 45.55% outnumbers desktop use, and mobile as a whole gains majority when the 9.12% tablet share is included.[189][188]Spain is also slightly desktop-minority.
The range of measured mobile web use varies a lot by country, and a StatCounter press release recognizes "India among world leaders in use of mobile to surf the internet"[190](of the big countries) where the share is around (or over) 80%[191]and desktop is at 19.56%, with Russia trailing with 17.8% mobile use (and desktop the rest).
Smartphones alone (without tablets) first gained majority use in December 2016 (desktop-majority was lost the month before), and it was not a Christmas-time fluke: although the share slipped to just under majority afterwards, smartphone majority happened again in March 2017.[188]
In the week from November 7–13, 2016, smartphones alone (without tablets) overtook desktop for the first time (for a short, non-full-month period).[192]Mobile-majority applies to countries such as Paraguay in South America, Poland in Europe, and Turkey; and most of Asia and Africa. Some of the world is still desktop-majority, e.g. the United States at 54.89% (but not on all days).[193]However, in some territories of the United States, such as Puerto Rico,[194]desktop is well under majority, with Windows under 30%, overtaken by Android.
On October 22, 2016 (and subsequent weekends), mobile showed majority.[195]Since October 27, the desktop has not shown majority, not even on weekdays. Smartphones alone showed majority from December 23 to the end of the year, with the share topping out at 58.22% on Christmas Day.[196]Adding tablets to the then-majority smartphone share gives a 63.22% "mobile" majority. While an unusually high peak, a similarly high share also occurred on Monday, April 17, 2017, with the smartphone share slightly lower and the tablet share slightly higher, combining to 62.88%.
According to a StatCounter press release of November 1, 2016[update], the world has turned desktop-minority;[197]desktop use was at about 49% for the previous month, but mobile alone was not ranked higher; tablet share had to be added to it to exceed desktop share. Since then, mobile (smartphones) has gained full majority, outnumbering desktop/laptop computers by a safe margin and becoming the most used category even without counting tablets.
|
https://en.wikipedia.org/wiki/Mobile_operating_system
|
Scientific consensusis the generally held judgment, position, and opinion of themajorityor thesupermajorityofscientistsin aparticular fieldof study at any particular time.[1][2]
Consensus is achieved throughscholarly communicationatconferences, thepublicationprocess, replication ofreproducibleresults by others, scholarlydebate,[3][4][5][6]andpeer review. A conference meant to create a consensus is termed as a consensus conference.[7][8][9]Such measures lead to a situation in which those within the discipline can often recognize such a consensus where it exists; however, communicating to outsiders that consensus has been reached can be difficult, because the "normal" debates through which science progresses may appear to outsiders as contestation.[10]On occasion, scientific institutes issue position statements intended to communicate a summary of the science from the "inside" to the "outside" of the scientific community, or consensus review articles[11]orsurveys[12]may be published. In cases where there is little controversy regarding the subject under study, establishing the consensus can be quite straightforward.
Popular or political debate on subjects that are controversial within the public sphere but not necessarily controversial within the scientific community may invoke scientific consensus: note such topics asevolution,[13][14]climate change,[15]the safety ofgenetically modified organisms,[16]or the lack of a link betweenMMR vaccinations and autism.[10]
Scientific consensus is related to (and sometimes used to mean)convergent evidence, that is, the concept that independent sources of evidence converge on a conclusion.[17][18]
There are many philosophical and historical theories as to how scientific consensus changes over time. Because the history of scientific change is extremely complicated, and because there is a tendency to project "winners" and "losers" onto the past in relation to thecurrentscientific consensus, it is very difficult to come up with accurate and rigorous models for scientific change.[19]This is made exceedingly difficult also in part because each of the various branches of science functions in somewhat different ways with different forms of evidence and experimental approaches.[20][21]
Most models of scientific change rely on new data produced by scientificexperiment.Karl Popperproposed that since no amount of experiments could everprovea scientific theory, but a single experiment coulddisproveone, science should be based onfalsification.[22]Whilst this forms a logical theory for science, it is in a sense "timeless" and does not necessarily reflect a view on how science should progress over time.
Among the most influential challengers of this approach wasThomas Kuhn, who argued instead that experimentaldataalways provide some data which cannot fit completely into a theory, and that falsification alone did not result in scientific change or an undermining of scientific consensus. He proposed that scientific consensus worked in the form of "paradigms", which were interconnected theories and underlying assumptions about the nature of the theory itself which connected various researchers in a given field. Kuhn argued that only after the accumulation of many "significant" anomalies would scientific consensus enter a period of "crisis". At this point, new theories would be sought out, and eventually one paradigm would triumph over the old one – a series ofparadigm shiftsrather than a linear progression towards truth. Kuhn's model also emphasized more clearly the social and personal aspects of theory change, demonstrating through historical examples that scientific consensus was never truly a matter of pure logic or pure facts.[23]However, these periods of 'normal' and 'crisis' science are not mutually exclusive. Research shows that these are different modes of practice, more than different historical periods.[10]
Perception of whether a scientific consensus exists on a given issue, and how strong that perception is, has been described as a "gateway belief" upon which other beliefs and then action are based.[28]
In public policy debates, the assertion that there exists a consensus of scientists in a particular field is often used as an argument for the validity of a theory. Similarly arguments for alackof scientific consensus are often used to support doubt about the theory.[citation needed]
For example, thescientific consensus on the causes of global warmingis thatglobal surface temperatureshave increased in recent decades and that the trend is caused primarily by human-inducedemissions of greenhouse gases.[29][30][31]Thehistorian of scienceNaomi Oreskespublished an article inSciencereporting that a survey of the abstracts of 928 science articles published between 1993 and 2003 showed none which disagreed explicitly with the notion ofanthropogenic global warming.[29]In an editorial published inThe Washington Post, Oreskes stated that those who opposed these scientific findings are amplifying the normal range of scientific uncertainty about any facts into an appearance that there is a great scientific disagreement, or a lack of scientific consensus.[32]Oreskes's findings were replicated by other methods that require no interpretation.[10]
The theory ofevolution through natural selectionis also supported by an overwhelming scientific consensus; it is one of the most reliable and empirically tested theories in science.[33][34]Opponents of evolution claim that there is significant dissent on evolution within the scientific community.[35]Thewedge strategy, a plan to promoteintelligent design, depended greatly on seeding and building on public perceptions of absence of consensus on evolution.[36]
The inherentuncertainty in science, where theories are neverprovenbut can only bedisproven(seefalsifiability), poses a problem for politicians, policymakers, lawyers, and business professionals. Where scientific or philosophical questions can often languish in uncertainty for decades within their disciplinary settings, policymakers are faced with the problems of making sound decisions based on the currently available data, even if it is likely not a final form of the "truth". The tricky part is discerning what is close enough to "final truth". For example, social action against smoking probably came too long after science was 'pretty consensual'.[10]
Certain domains, such as the approval of certain technologies for public consumption, can have vast and far-reaching political, economic, and human effects should things run awry with the predictions of scientists. However, insofar as there is an expectation that policy in a given field reflect knowable and pertinent data and well-accepted models of the relationships between observable phenomena, there is little good alternative for policy makers than to rely on so much of what may fairly be called 'the scientific consensus' in guiding policy design and implementation, at least in circumstances where the need for policy intervention is compelling. While science cannot supply 'absolute truth' (or even its complement 'absolute error') its utility is bound up with the capacity to guide policy in the direction of increased public good and away from public harm. Seen in this way, the demand that policy rely only on what is proven to be "scientific truth" would be a prescription for policy paralysis and amount in practice to advocacy of acceptance of all of the quantified and unquantified costs and risks associated with policy inaction.[10]
No part of policy formation on the basis of the ostensible scientific consensus precludes persistent review either of the relevant scientific consensus or the tangible results of policy. Indeed, the same reasons that drove reliance upon the consensus drive the continued evaluation of this reliance over time – and the adjustment of policy as needed.[citation needed]
|
https://en.wikipedia.org/wiki/Scientific_consensus
|
In the field ofmultivariate statistics,kernel principal component analysis (kernel PCA)[1]is an extension ofprincipal component analysis(PCA) using techniques ofkernel methods. Using a kernel, the originally linear operations of PCA are performed in areproducing kernel Hilbert space.
Recall that conventional PCA operates on zero-centered data; that is,
{\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}\mathbf {x} _{i}=\mathbf {0} ,}
wherexi{\displaystyle \mathbf {x} _{i}}is one of theN{\displaystyle N}multivariate observations.
It operates by diagonalizing thecovariance matrix,
{\displaystyle C={\frac {1}{N}}\sum _{i=1}^{N}\mathbf {x} _{i}\mathbf {x} _{i}^{\top },}
in other words, it gives aneigendecompositionof the covariance matrix:
{\displaystyle \lambda \mathbf {v} =C\mathbf {v} ,}
which can be rewritten as
{\displaystyle \lambda \mathbf {v} ={\frac {1}{N}}\sum _{i=1}^{N}(\mathbf {x} _{i}\cdot \mathbf {v} )\,\mathbf {x} _{i}.}
(See also:Covariance matrix as a linear operator)
To understand the utility of kernel PCA, particularly for clustering, observe that, whileNpoints cannot, in general, belinearly separatedind<N{\displaystyle d<N}dimensions, they canalmost alwaysbe linearly separated ind≥N{\displaystyle d\geq N}dimensions. That is, givenNpoints,xi{\displaystyle \mathbf {x} _{i}}, if we map them to anN-dimensional space with
{\displaystyle \Phi (\mathbf {x} _{i}),}
it is easy to construct ahyperplanethat divides the points into arbitrary clusters. Of course, thisΦ{\displaystyle \Phi }creates linearly independent vectors, so there is no covariance on which to perform eigendecompositionexplicitlyas we would in linear PCA.
Instead, in kernel PCA, a non-trivial, arbitraryΦ{\displaystyle \Phi }function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensionalΦ{\displaystyle \Phi }'s if we never have to actually evaluate the data in that space. Since we generally try to avoid working in theΦ{\displaystyle \Phi }-space, which we will call the 'feature space', we can create the N-by-N kernel
{\displaystyle K=k(\mathbf {x} ,\mathbf {y} )=(\Phi (\mathbf {x} ),\Phi (\mathbf {y} ))=\Phi (\mathbf {x} )^{T}\Phi (\mathbf {y} ),}
which represents the inner product space (seeGramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in theΦ(x){\displaystyle \Phi (\mathbf {x} )}-space (seeKernel trick). The N-elements in each column ofKrepresent thedot productof one point of the transformed data with respect to all the transformed points (N points). Some well-known kernels are shown in the example below.
Because we are never working directly in the feature space, the kernel-formulation of PCA is restricted in that it computes not the principal components themselves, but the projections of our data onto those components. To evaluate the projection from a point in the feature spaceΦ(x){\displaystyle \Phi (\mathbf {x} )}onto the kth principal componentVk{\displaystyle V^{k}}(where superscript k means the component k, not powers of k)
{\displaystyle {V^{k}}^{T}\Phi (\mathbf {x} )=\left(\sum _{i=1}^{N}\mathbf {a} _{i}^{k}\Phi (\mathbf {x} _{i})\right)^{T}\Phi (\mathbf {x} )=\sum _{i=1}^{N}\mathbf {a} _{i}^{k}k(\mathbf {x} ,\mathbf {x} _{i}),}
We note thatΦ(xi)TΦ(x){\displaystyle \Phi (\mathbf {x} _{i})^{T}\Phi (\mathbf {x} )}denotes dot product, which is simply the elements of the kernelK{\displaystyle K}. It seems all that's left is to calculate and normalize theaik{\displaystyle \mathbf {a} _{i}^{k}}, which can be done by solving the eigenvector equation
whereN{\displaystyle N}is the number of data points in the set, andλ{\displaystyle \lambda }anda{\displaystyle \mathbf {a} }are the eigenvalues and eigenvectors ofK{\displaystyle K}. Then to normalize the eigenvectorsak{\displaystyle \mathbf {a} ^{k}}, we require that
Care must be taken regarding the fact that, whether or notx{\displaystyle x}has zero-mean in its original space, it is not guaranteed to be centered in the feature space (which we never compute explicitly). Since centered data is required to perform an effective principal component analysis, we 'centralize'K{\displaystyle K}to becomeK′{\displaystyle K'}
{\displaystyle K'=K-\mathbf {1_{N}} K-K\mathbf {1_{N}} +\mathbf {1_{N}} K\mathbf {1_{N}} ,}
where1N{\displaystyle \mathbf {1_{N}} }denotes a N-by-N matrix for which each element takes value1/N{\displaystyle 1/N}. We useK′{\displaystyle K'}to perform the kernel PCA algorithm described above.
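As an illustrative sketch (not from the original article), the procedure above — build the kernel matrix, centre it, eigendecompose, normalise, and project — fits in a few lines of NumPy. The RBF kernel and the function name rbf_kernel_pca are choices made here for concreteness:

import numpy as np

def rbf_kernel_pca(X, gamma=1.0, n_components=2):
    # Kernel PCA with a Gaussian (RBF) kernel; X has one observation per row.
    N = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)                       # N-by-N kernel matrix
    one_n = np.ones((N, N)) / N                         # the matrix 1_N described above
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n  # centred kernel K'
    eigvals, eigvecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # reorder to descending
    # Normalise each eigenvector a^k so the feature-space component has unit norm.
    alphas = eigvecs[:, :n_components] / np.sqrt(eigvals[:n_components])
    # Projections of the training points onto the top components.
    return Kc @ alphas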
One caveat of kernel PCA should be illustrated here. In linear PCA, we can use the eigenvalues to rank the eigenvectors based on how much of the variation of the data is captured by each principal component. This is useful for data dimensionality reduction and it could also be applied to KPCA. However, in practice there are cases in which all components capture nearly the same amount of variation; this is typically caused by a wrong choice of kernel scale.
In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the dataset, and populate the kernel with the means of those clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and the corresponding eigenvectors of K.
Consider three concentric clouds of points (shown); we wish to use kernel PCA to identify these groups. The color of the points does not represent information involved in the algorithm, but only shows how the transformation relocates the data points.
First, consider the kernel
Applying this to kernel PCA yields the next image.
Now consider aGaussian kernel:
{\displaystyle k(\mathbf {x} ,\mathbf {y} )=e^{-{\frac {\|\mathbf {x} -\mathbf {y} \|^{2}}{2\sigma ^{2}}}}.}
That is, this kernel is a measure of closeness, equal to 1 when the points coincide and equal to 0 at infinity.
Note in particular that the first principal component is enough to distinguish the three different groups, which is impossible using only linear PCA, because linear PCA operates only in the given (in this case two-dimensional) space, in which these concentric point clouds are not linearly separable.
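A usage sketch for the rbf_kernel_pca function above, mirroring the concentric-clouds example (the radii, noise level, and kernel width are arbitrary choices; the width typically needs tuning before the groups separate cleanly):

import numpy as np

rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 300)
radii = np.repeat([1.0, 3.0, 5.0], 100) + rng.normal(0, 0.05, 300)
X = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])  # three rings

# With a suitable gamma, points from different rings tend to separate along the
# leading kernel principal components, even though they are not linearly
# separable in the original two-dimensional space.
projections = rbf_kernel_pca(X, gamma=2.0, n_components=2)
print(projections.shape)   # (300, 2)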
Kernel PCA has been demonstrated to be useful for novelty detection[3]and image de-noising.[4]
|
https://en.wikipedia.org/wiki/Kernel_PCA
|
This is a list of topics aroundBoolean algebraandpropositional logic.
|
https://en.wikipedia.org/wiki/List_of_Boolean_algebra_topics
|
So help me Godis a phrase often used to give anoath, sometimes optionally as part of anoath of office. It is used in some jurisdictions as an oath for performing a public duty, such as an appearance in court. The phrase implies greater care than usual in the truthfulness of one's testimony or in the performance of one's duty.
Notably, the wordhelpinso help me Godis in thesubjunctive mood.
InAustralia, theOath of Allegianceis available in two forms, one of which contains the phrase "So help me God!"[1]
InCanada, the Oath of Office, Oath of Allegiance, and Oath of Members of the Privy Council may be sworn, and end in "So help me God." They may also be solemnly affirmed, and in such case the phrase is omitted.[2]
TheConstitution of Fiji, Chapter 17requires this phrase for theoath of allegiance, and before service to the republic from the President's office or Vice-President's office, a ministerial position, or a judicial position.
InNew ZealandtheOath of Allegianceis available in English or Māori in two forms, one an oath containing the phrase 'so help me God' and the other anaffirmationwhich does not. ThePolice Act 1958and theOaths Modernisation Billstill include the phrase.[3][4]
TheOath of Allegianceset out in thePromissory Oaths Act 1868ends with this phrase, and is required to be taken by various office-holders.[5]
The phrase "So help me God" is prescribed in oaths as early as theJudiciary Act of 1789, for U.S. officers other than the President. The act makes the semantic distinction between anaffirmationand anoath.[6]The oath, religious in essence, includes the phrase "so help me God" and "[I] swear". The affirmation uses "[I] affirm". Both serve the same purpose and are described as one (i.e. "... solemnly swear, or affirm, that ...")[7]
In theUnited States, theNo Religious Test Clausestates that "no religious test shall ever be required as a qualification to any office or public trust under the United States." Still, there are federal oaths which do include the phrase "So help me God", such as forjusticesandjudgesin28 U.S.C.§ 453.[8]
There is no law that requires Presidents to add the words "So help me God" at the end of the oath (or to use a Bible). Some historians maintain thatGeorge Washingtonhimself added the phrase to the end of his first oath, setting a precedent for future presidents and continuing what was already established practice in his day[9]and that all Presidents since have used this phrase, according to Marvin Pinkert, executive director of theNational Archives Experience.[10]Many other historians reject this story given that "it was not until 65 years after the event that the story that Washington added this phrase first appeared in a published volume" and other witnesses, who were present for the event, did not cite him as having added the phrase.[11]These historians further note that "we have no convincing contemporary evidence that any president said "so help me God" until September 1881, when Chester A. Arthur took the oath after the death of James Garfield."[12]It is demonstrable, however, that those historians are in error regarding their claim that there is no "contemporary evidence" of a president saying "so help me God" until 1881. Richard Gardiner's research published in theWhite House History Quarterly, November 2024, offers contemporary evidence for presidents who used the phrase going back to William Henry Harrison in 1841, and Andrew Jackson.[13]
TheUnited States Oath of Citizenship(officially referred to as the "Oath of Allegiance", 8 C.F.R. Part 337 (2008)), taken by all immigrants who wish to becomeUnited States citizens, includes the phrase "so help me God"; however8 CFR337.1provides that the phrase is optional.
TheEnlistment oathand officer'sOath of Officeboth contain this phrase. A change in October 2013 to Air Force Instruction 36-2606[14]made it mandatory to include the phrase during Air Force enlistments/reenlistments. This change has made the instruction "consistent with the language mandated in 10 USC 502".[15]The Air Force announced on September 17, 2014, that it revoked this previous policy change, allowing anyone to omit "so help me God" from the oath.[16]
Some of the states have specified that the words "so help me God" be used in the oath of office, and have also required them ofjurors, witnesses in court,notaries public, and state employees. Alabama, Connecticut, Delaware, Kentucky, Louisiana, Maine, Mississippi, New Mexico, North Carolina, Texas, and Virginia retain the required "so help me God" as part of the oath to public office. Historically, Maryland and South Carolina did include it but both have been successfully challenged in court. Other states, such as New Hampshire, North Dakota and Rhode Island allow exceptions or alternative phrases. In Wisconsin, the specific language of the oath has been repealed.[17]
InCroatia, the text of presidential oath, which is defined by the Presidential Elections Act amendments of 1997 (Article 4), ends with "Tako mi Bog pomogao" (So help me God).[18][19]
In 2009, concerns about the phrase infringing onConstitution of Croatiawere raised.Constitutional Court of Croatiaruled them out in 2017, claiming that it is compatible with constitution and secular state.[20][21][22]The court said the phrase is in neither direct nor indirect relation to any religious beliefs of theelected president. It doesn't represent a theist or religious belief and does not stop the president in any way from expressing any other religious belief. Saying the phrase while taking the presidential oath does not force a certain belief on the President and does not infringe on their religious freedoms.[22]
In the inauguration ofDutch monarchs, the phrase "zo waarlijk helpe mij God Almachtig" ("So help me God Almighty") is used at the conclusion of the monarch's oath.[23]
In theOath of Officeof thePresident of the Philippines, the phrase "So help me God" (Filipino:Kasihan nawâ akó ng Diyos) is mandatory in oaths.[24]An affirmation, however, has exactly the same legal effect as an oath.
In medieval France, tradition held that when the Duke of Brittany or other royalty entered the city ofRennes, they would proclaimEt qu'ainsi Dieu me soit en aide("And so help me God").[25]
The phraseSo wahr mir Gott helfe(literally "as true as God may help me") is an optional part in oaths of office prescribed for civil servants, soldiers, judges as well as members and high representatives of the federal and state governments such as theFederal President,Federal Chancellorand theMinister Presidents. Parties and witnesses in criminal and civil proceedings may also be placed under oath with this phrase. In such proceedings, the judge first speaks the wordsYou swear [by God Almighty and All-Knowing] that to the best of your knowledge you have spoken the pure truth and not concealed anything.The witness or party then must answerI swear it [, so help me God]. The words between brackets are added or omitted according to the preference of the person placed under oath.[26]If the person concerned raises a conscientious objection against any kind of oath, the judge may speak the wordsAware of your responsibility in court, you affirm that to the best of your knowledge you have spoken the pure truth and not concealed anythingto which the person needs to replyYes.[27]Both forms of the oath and the affirmation carry the same penalty, if the person is found to have lied. Contrary to the oath without a religious phrase, this kind of affirmation is not necessarily available outside court proceedings (e.g. for an oath of office).
The traditional oath of witnesses in Austrian courts ends with the phraseso wahr mir Gott helfe. There are, however, exemptions for witnesses of different religious denominations as well as those unaffiliated with any religion. The oath is rarely practised in civil trials and was completely abolished for criminal procedures in 2008. The phraseso wahr mir Gott helfeis also an (optional) part in the oath of surveyors who testify as expert witnesses as well as court-certified interpreters. Unlike in Germany, the phraseso wahr mir Gott helfeis not part of the oath of office of theFederal President, members of the federal government or state governors, who may or may not add a religious affirmation after the form of oath prescribed by the constitution.
ThePolishphrase is "Tak mi dopomóż Bóg" or "Tak mi, Boże, dopomóż." It has been used in most version of thePolish Army oaths, however other denominations use different phrases. President, prime minister, deputy prime ministers, ministers and members of both houses of parliament can add this phrase at the end of the oath of their office.[28]
InRomania, the oath translation is "Așa să-mi ajute Dumnezeu!", which is used in various ceremonies such as the ministers' oath in front of the president of the republic or the magistrates' oath.
|
https://en.wikipedia.org/wiki/So_help_me_God
|
Inmathematical analysis, adomainorregionis anon-empty,connected, andopen setin atopological space. In particular, it is any non-empty connected opensubsetof thereal coordinate spaceRnor thecomplex coordinate spaceCn. A connected open subset ofcoordinate spaceis frequently used for thedomain of a function.[1]
The basic idea of a connected subset of a space dates from the 19th century, but precise definitions vary slightly from generation to generation, author to author, and edition to edition, as concepts developed and terms were translated between German, French, and English works. In English, some authors use the termdomain,[2]some use the termregion,[3]some use both terms interchangeably,[4]and some define the two terms slightly differently;[5]some avoid ambiguity by sticking with a phrase such asnon-empty connected open subset.[6]
One common convention is to define adomainas a connected open set but aregionas theunionof a domain with none, some, or all of itslimit points.[7]Aclosed regionorclosed domainis the union of a domain and all of its limit points.
Various degrees of smoothness of theboundaryof the domain are required for various properties of functions defined on the domain to hold, such as integral theorems (Green's theorem,Stokes theorem), properties ofSobolev spaces, and to definemeasureson the boundary and spaces oftraces(generalized functions defined on the boundary). Commonly considered types of domains are domains withcontinuousboundary,Lipschitz boundary,C1boundary, and so forth.
Abounded domainis a domain that isbounded, i.e., contained in some ball.Bounded regionis defined similarly. Anexterior domainorexternal domainis a domain whosecomplementis bounded; sometimes smoothness conditions are imposed on its boundary.
Incomplex analysis, acomplex domain(or simplydomain) is any connected open subset of thecomplex planeC. For example, the entire complex plane is a domain, as is the openunit disk, the openupper half-plane, and so forth. Often, a complex domain serves as thedomain of definitionfor aholomorphic function. In the study ofseveral complex variables, the definition of a domain is extended to include any connected open subset ofCn.
InEuclidean spaces,one-,two-, andthree-dimensionalregions arecurves,surfaces, andsolids, whose extent are called, respectively,length,area, andvolume.
Definition. An open set is connected if it cannot be expressed as the sum of two open sets. An open connected set is called a domain.
German:Eine offene Punktmenge heißt zusammenhängend, wenn man sie nicht als Summe von zwei offenen Punktmengen darstellen kann. Eine offene zusammenhängende Punktmenge heißt ein Gebiet.
According toHans Hahn,[8]the concept of a domain as an open connected set was introduced byConstantin Carathéodoryin his famous book (Carathéodory 1918).
In this definition, Carathéodory considers obviouslynon-emptydisjointsets.
Hahn also remarks that the word "Gebiet" ("Domain") was occasionally previously used as asynonymofopen set.[9]The rough concept is older. In the 19th and early 20th century, the termsdomainandregionwere often used informally (sometimes interchangeably) without explicit definition.[10]
However, the term "domain" was occasionally used to identify closely related but slightly different concepts. For example, in his influentialmonographsonelliptic partial differential equations,Carlo Mirandauses the term "region" to identify an open connected set,[11][12]and reserves the term "domain" to identify an internally connected,[13]perfect set, each point of which is an accumulation point of interior points,[11]following his former masterMauro Picone:[14]according to this convention, if a setAis a region then itsclosureAis a domain.[11]
|
https://en.wikipedia.org/wiki/Domain_(mathematical_analysis)
|
Online transaction processing(OLTP) is a type ofdatabasesystem used in transaction-oriented applications, such as many operational systems. "Online" refers to the fact that such systems are expected to respond to user requests and process them in real-time (process transactions). The term is contrasted withonline analytical processing(OLAP) which instead focuses on data analysis (for exampleplanningandmanagement systems).
The term "transaction" can have two different meanings, both of which might apply: in the realm of computers ordatabase transactionsit denotes an atomic change of state, whereas in the realm of business or finance, the term typically denotes an exchange of economic entities (as used by, e.g.,Transaction Processing Performance Councilorcommercial transactions.[1]): 50OLTP may use transactions of the first type to record transactions of the second type.
OLTP is typically contrasted toonline analytical processing(OLAP), which is generally characterized by much more complex queries, in a smaller volume, for the purpose of business intelligence or reporting rather than to process transactions. Whereas OLTP systems process all kinds of queries (read, insert, update and delete), OLAP is generally optimized for read only and might not even support other kinds of queries. OLTP also operates differently frombatch processingandgrid computing.[1]: 15
In addition, OLTP is often contrasted toonline event processing(OLEP), which is based on distributedevent logsto offer strong consistency in large-scale heterogeneous systems.[2]Whereas OLTP is associated with short atomic transactions, OLEP allows for more flexible distribution patterns and higher scalability, but with increased latency and without guaranteed upper bound to the processing time.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. Anautomated teller machine(ATM) for a bank is an example of a commercial transaction processing application.[3]Online transaction processing applications have high throughput and are insert- or update-intensive in database management. These applications are used concurrently by hundreds of users. The key goals of OLTP applications are availability, speed, concurrency and recoverability (durability).[4]Reduced paper trails and the faster, more accurate forecast for revenues and expenses are both examples of how OLTP makes things simpler for businesses. However, like many modern online information technology solutions, some systems require offline maintenance, which further affects the cost-benefit analysis of an online transaction processing system.
An OLTP system is an accessible data processing system in today's enterprises. Some examples of OLTP systems include order entry, retail sales, and financial transaction systems.[5]Online transaction processing systems increasingly require support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client or server processing and brokering software that allows transactions to run on different computer platforms in a network.
In large applications, efficient OLTP may depend on sophisticated transaction management software (such as IBMCICS) and/ordatabaseoptimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database.
For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on anetwork. OLTP is often integrated intoservice-oriented architecture(SOA) andWeb services.
Online transaction processing (OLTP) involves gathering input information, processing the data and updating existing data to reflect the collected and processed information. As of today, most organizations use a database management system to support OLTP. OLTP is typically carried out in a client–server system.
Online transaction processing is concerned with concurrency and atomicity. Concurrency controls guarantee that two users accessing the same data in the database system cannot change it at the same time: one user has to wait until the other has finished processing before changing that piece of data. Atomicity controls guarantee that all the steps in a transaction are completed successfully as a group; that is, if any step in the transaction fails, all other steps must fail also.[6]
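A minimal sketch of the atomicity requirement, using Python's built-in sqlite3 module: either both updates commit together or neither takes effect. The database file, table, and column names are hypothetical example data, not part of the original text.

import sqlite3

conn = sqlite3.connect("bank.db", isolation_level=None)  # manage transactions explicitly
cur = conn.cursor()
try:
    cur.execute("BEGIN")   # start an explicit transaction
    # 'accounts', its columns, and the ids are illustrative placeholders.
    cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = ?", (1,))
    cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = ?", (2,))
    cur.execute("COMMIT")  # all steps succeed as a group
except sqlite3.Error:
    cur.execute("ROLLBACK")  # any failure undoes every step of the group
finally:
    conn.close()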
To build an OLTP system, a designer must ensure that a large number of concurrent users does not degrade the system's performance. To increase the performance of an OLTP system, a designer must avoid excessive use of indexes and clusters.
The following elements are crucial for the performance of OLTP systems:[4]
|
https://en.wikipedia.org/wiki/Online_transaction_processing
|
Standard formis a way of expressingnumbersthat are too large or too small to be conveniently written indecimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to asscientific formorstandard index form, orScientific notationin the United States. Thisbase tennotation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certainarithmetic operations. Onscientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form m × 10^n,
ormtimes ten raised to the power ofn, wherenis aninteger, and thecoefficientmis a nonzeroreal number(usually between 1 and 10 in absolute value, and nearly always written as aterminating decimal). The integernis called theexponentand the real numbermis called thesignificandormantissa.[1]The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of thefractional partof thecommon logarithm. If the number is negative then a minus sign precedesm, as in ordinary decimal notation. Innormalized notation, the exponent is chosen so that theabsolute value(modulus) of the significandmis at least 1 but less than 10.
Decimal floating pointis a computer arithmetic system closely related to scientific notation.
For performing calculations with aslide rule, standard form expression is required. Thus, the use of scientific notation increased as engineers and educators used that tool. SeeSlide rule#History.
Any real number can be written in the form m×10^n in many ways: for example, 350 can be written as 3.5×10^2 or 35×10^1 or 350×10^0.
Innormalizedscientific notation (called "standard form" in the United Kingdom), the exponentnis chosen so that theabsolute valueofmremains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as 3.5×10^2. This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number oforders of magnitudeseparating the numbers. It is also the form that is required when using tables ofcommon logarithms. In normalized notation, the exponentnis negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as 5×10^−1). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value ofmfor all elements of the series.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such asengineering notation, is desired. Normalized scientific notation is often calledexponentialnotation– although the latter term is more general and also applies whenmis not restricted to the range 1 to 10 (as in engineering notation for instance) and tobasesother than 10 (for example,3.15×2^20).
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponentnis restricted tomultiplesof 3. Consequently, the absolute value ofmis in the range 1 ≤ |m| < 1000, rather than 1 ≤ |m| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their correspondingSI prefixes, which facilitates reading and oral communication. For example, 12.5×10^−9 m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25×10^−8 m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
Calculatorsandcomputer programstypically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Becausesuperscriptexponents like 10^7 can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notationmEnfor a decimal significandmand integer exponentnmeans the same asm× 10^n. For example 6.022×10^23 is written as 6.022E23 or 6.022e23, and 1.6×10^−35 is written as 1.6E-35 or 1.6e-35. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.[2][3]
Most popular programming languages – includingFortran,C/C++,Python, andJavaScript– use this "E" notation, which comes from Fortran and was present in the first version released for theIBM 704in 1956.[4]The E notation was already used by the developers ofSHARE Operating System(SOS) for theIBM 709in 1958.[5]Later versions of Fortran (at least sinceFORTRAN IVas of 1961) also use "D" to signifydouble precisionnumbers in scientific notation,[6]and newer Fortran compilers use "Q" to signifyquadruple precision.[7]TheMATLABprogramming language supports the use of either "E" or "D".
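As a small illustration (the numeric values are chosen arbitrarily), the same E notation is what Python's float parsing and formatting use:

avogadro = float("6.022e23")     # "e" reads as "times ten to the power of"
tiny = 1.6e-35                   # an E-notation literal

print(avogadro)                  # 6.022e+23
print(f"{1230400:e}")            # 1.230400e+06 (normalized, E notation)
print(f"{0.0000123:.3E}")        # 1.230E-05 (upper-case separator)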
TheALGOL 60(1960) programming language uses a subscript ten "10" character instead of the letter "E", for example: 6.022₁₀23.[8][9]This presented a challenge for computer systems which did not provide such a character, soALGOL W(1966) replaced the symbol by a single quote, e.g.6.022'+23,[10]and some Soviet ALGOL variants allowed the use of the Cyrillic letter "ю", e.g.6.022ю+23[citation needed]. Subsequently, theALGOL 68programming language provided a choice of characters:E,e,\,⊥, or ₁₀.[11]The ALGOL "10" character was included in the SovietGOST 10859text encoding (1964), and was added toUnicode5.2 (2009) asU+23E8⏨DECIMAL EXPONENT SYMBOL.[12]
Some programming languages use other symbols. For instance,Simulauses&(or&&forlong), as in6.022&23.[13]Mathematicasupports the shorthand notation6.022*^23(reserving the letterEfor themathematical constante).
The firstpocket calculatorssupporting scientific notation appeared in 1972.[14]To enter numbers in scientific notation calculators include a button labeled "EXP" or "×10x", among other variants. The displays of pocket calculators of the 1970s did not display an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g.6.022 23, as seen in theHP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g.6.02223, as seen in theCommodore PR100). In 1976,Hewlett-Packardcalculator user Jim Davidson coined the termdecapowerfor the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example,6.022D23); these gained some currency in the programmable calculator user community.[15]The letters "E" or "D" were used as a scientific-notation separator bySharppocket computersreleased between 1987 and 1995, "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers.[16]TheTexas InstrumentsTI-83andTI-84series of calculators (1996–present) use asmall capitalEfor the separator.[17]
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled, e.g. 6.022 × 10^3 would be written as "6.022③".[18]
A significant figure is a digit in a number that adds to its precision. This includes all nonzero numbers, zeroes between significant digits, and zeroesindicated to be significant. Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number1230400is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304×10^6 if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040×10^6 or 1.230400×10^6. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of theprotoncan properly be expressed as 1.67262192369(51)×10^−27 kg, which is shorthand for (1.67262192369±0.00000000051)×10^−27 kg. However it is still unclear whether the error (5.1×10^−37 in this case) is the maximum possible error,standard error, or some otherconfidence interval.
In normalized scientific notation, in E notation, and in engineering notation, thespace(which intypesettingmay be represented by a normal width space or athin space) that is allowedonlybefore and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.[19]
Converting a number in these cases means to either convert the number into scientific notation form, convert it back into decimal form or to change the exponent part of the equation. None of these alter the actual number, only how it's expressed.
First, move the decimal separator point sufficient places,n, to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append × 10^n; to the right, × 10^−n. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and × 10^6 appended, resulting in 1.2304×10^6. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321×10^−3 as a result.
To convert a number from scientific notation to decimal notation, first remove the × 10^n on the end, then shift the decimal separator n digits to the right (positive n) or left (negative n). The number 1.2304×10^6 would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321×10^−3 would have its decimal separator moved 3 digits to the left and be −0.0040321.
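The two conversion procedures above amount to repeatedly shifting the decimal separator while counting the shifts; a minimal sketch in Python (the function name is purely illustrative):

def to_normalized(x):
    # Return (m, n) with x == m * 10**n and 1 <= |m| < 10, for nonzero x.
    m, n = float(x), 0
    while abs(m) >= 10:   # decimal separator moves left, exponent goes up
        m, n = m / 10, n + 1
    while abs(m) < 1:     # decimal separator moves right, exponent goes down
        m, n = m * 10, n - 1
    return m, n

print(to_normalized(1230400))     # approximately (1.2304, 6)
print(to_normalized(-0.0040321))  # approximately (-4.0321, -3)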
Conversion between different scientific notation representations of the same number with different exponential values is achieved by performing opposite operations: multiplying or dividing the significand by ten while subtracting or adding one to the exponent for each shift. The decimal separator in the significand is shifted x places to the left (or right) and x is added to (or subtracted from) the exponent, as shown below.
Given two numbers in scientific notation,x0=m0×10n0{\displaystyle x_{0}=m_{0}\times 10^{n_{0}}}andx1=m1×10n1{\displaystyle x_{1}=m_{1}\times 10^{n_{1}}}
Multiplicationanddivisionare performed using the rules for operation withexponentiation:x0x1=m0m1×10n0+n1{\displaystyle x_{0}x_{1}=m_{0}m_{1}\times 10^{n_{0}+n_{1}}}andx0x1=m0m1×10n0−n1{\displaystyle {\frac {x_{0}}{x_{1}}}={\frac {m_{0}}{m_{1}}}\times 10^{n_{0}-n_{1}}}
Some examples are:5.67×10−5×2.34×102≈13.3×10−5+2=13.3×10−3=1.33×10−2{\displaystyle 5.67\times 10^{-5}\times 2.34\times 10^{2}\approx 13.3\times 10^{-5+2}=13.3\times 10^{-3}=1.33\times 10^{-2}}and2.34×1025.67×10−5≈0.413×102−(−5)=0.413×107=4.13×106{\displaystyle {\frac {2.34\times 10^{2}}{5.67\times 10^{-5}}}\approx 0.413\times 10^{2-(-5)}=0.413\times 10^{7}=4.13\times 10^{6}}
Additionandsubtractionrequire the numbers to be represented using the same exponential part, so that the significand can be simply added or subtracted:
{\displaystyle x_{1}=m_{1}\times 10^{n_{1}}=\left(m_{1}\times 10^{n_{1}-n_{0}}\right)\times 10^{n_{0}}}
Next, add or subtract the significands:x0±x1=(m0±m1)×10n0{\displaystyle x_{0}\pm x_{1}=(m_{0}\pm m_{1})\times 10^{n_{0}}}
An example:2.34×10−5+5.67×10−6=2.34×10−5+0.567×10−5=2.907×10−5{\displaystyle 2.34\times 10^{-5}+5.67\times 10^{-6}=2.34\times 10^{-5}+0.567\times 10^{-5}=2.907\times 10^{-5}}
While base ten is normally used for scientific notation, powers of other bases can be used too,[25]base 2 being the next most commonly used one.
For example, in base-2 scientific notation, the number 1001b in binary (= 9d) is written as 1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or shorter 1.001 × 10^11 if binary context is obvious).[citation needed]In E notation, this is written as 1.001bE11b (or shorter: 1.001E11) with the letter "E" now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E",[26]a shorthand notation originally proposed byBruce Alan MartinofBrookhaven National Laboratoryin 1968,[27]as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number indecimal representation: 1.125 × 2^3 (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating point numbers, where the exponent is displayed as a decimal number even in binary mode, so the above becomes 1.001b × 10b^3d or shorter 1.001B3.[26]
This is closely related to the base-2floating-pointrepresentation commonly used in computer arithmetic, and the usage of IECbinary prefixes(e.g. 1B10 for 1×2^10 (kibi), 1B20 for 1×2^20 (mebi), 1B30 for 1×2^30 (gibi), 1B40 for 1×2^40 (tebi)).
Similar to "B" (or "b"[28]), the letters "H"[26](or "h"[28]) and "O"[26](or "o",[28]or "C"[26]) are sometimes also used to indicatetimes 16 or 8 to the poweras in 1.25 =1.40h× 10h0h= 1.40H0 = 1.40h0, or 98000 =2.7732o× 10o5o= 2.7732o5 = 2.7732C5.[26]
Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal.[29]This notation can be produced by implementations of theprintffamily of functions following theC99specification and (Single Unix Specification)IEEE Std 1003.1POSIXstandard, when using the%aor%Aconversion specifiers.[29][30][31]Starting withC++11,C++I/O functions could parse and print the P notation as well. Meanwhile, the notation has been fully adopted by the language standard sinceC++17.[32]Apple'sSwiftsupports it as well.[33]It is also required by theIEEE 754-2008binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2^42.
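Python's float type exposes exactly this P notation through float.hex() and float.fromhex(), which can be used to check the example above (a quick sketch):

x = float.fromhex("0x1.3DEp42")   # hexadecimal significand, decimal base-2 exponent
print(x)                          # 5461050916864.0, i.e. 1.3DE (hex) times 2**42
print((1.5).hex())                # 0x1.8000000000000p+0
print(float.fromhex("0x1.8p+0"))  # 1.5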
Engineering notationcan be viewed as a base-1000 scientific notation.
Sayre, David, ed. (1956-10-15).The FORTRAN Automatic Coding System for the IBM 704 EDPM: Programmer's Reference Manual(PDF). New York: Applied Science Division and Programming Research Department,International Business Machines Corporation. pp. 9, 27. Retrieved2022-07-04.(2+51+1 pages)
"6. Extensions: 6.1 Extensions implemented in GNU Fortran: 6.1.8 Q exponent-letter".The GNU Fortran Compiler. 2014-06-12. Retrieved2022-12-21.
"The Unicode Standard"(v. 7.0.0 ed.). Retrieved2018-03-23.
Vanderburgh, Richard C., ed. (November 1976). "Decapower" (PDF). 52-Notes – Newsletter of the SR-52 Users Club. 1 (6). Dayton, OH: 1. V1N6P1. Retrieved 2017-05-28. "Decapower – In the January 1976 issue of 65-Notes (V3N1p4) Jim Davidson (HP-65 Users Club member #547) suggested the term "decapower" as a descriptor for the power-of-ten multiplier used in scientific notation displays. I'm going to begin using it in place of "exponent" which is technically incorrect, and the letter D to separate the "mantissa" from the decapower for typewritten numbers, as Jim also suggests. For example, 123−45 [sic] which is displayed in scientific notation as 1.23 -43 will now be written 1.23D-43. Perhaps, as this notation gets more and more usage, the calculator manufacturers will change their keyboard abbreviations. HP's EEX and TI's EE could be changed to ED (for enter decapower)." (NB. The term decapower was frequently used in subsequent issues of this newsletter up to at least 1978.)
電言板6 PC-U6000 PROGRAM LIBRARY[Telephone board 6 PC-U6000 program library] (in Japanese). Vol. 6. University Co-op. 1993.
"TI-83 Programmer's Guide"(PDF). Retrieved2010-03-09.
"INTOUCH 4GL a Guide to the INTOUCH Language". Archived fromthe originalon 2015-05-03.
|
https://en.wikipedia.org/wiki/P_notation
|
Inquantum computing, aquantum algorithmis analgorithmthat runs on a realistic model ofquantum computation, the most commonly used model being thequantum circuitmodel of computation.[1][2]A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classicalcomputer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on aquantum computer. Although all classical algorithms can also be performed on a quantum computer,[3]: 126the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such asquantum superpositionorquantum entanglement.
Problems that areundecidableusing classical computers remain undecidable using quantum computers.[4]: 127What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (seeQuantum supremacy).
The best-known algorithms areShor's algorithmfor factoring andGrover's algorithmfor searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the most efficient known classical algorithm for factoring, thegeneral number field sieve.[5]Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task,[6]alinear search.
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by aquantum circuitthat acts on some inputqubitsand terminates with ameasurement. A quantum circuit consists of simplequantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as theHamiltonian oracle model.[7]
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms includephase kick-back,phase estimation, thequantum Fourier transform,quantum walks,amplitude amplificationandtopological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.[8]
Thequantum Fourier transformis the quantum analogue of thediscrete Fourier transform, and is used in several quantum algorithms. TheHadamard transformis also an example of a quantum Fourier transform over an n-dimensional vector space over the fieldF2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number ofquantum gates.[citation needed]
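For concreteness, the quantum Fourier transform on n qubits is the 2^n-by-2^n unitary with entries ω^(jk)/√(2^n), where ω = e^(2πi/2^n). The following NumPy sketch builds the dense matrix for illustration only; it is not an efficient circuit implementation, and the function name is an arbitrary choice:

import numpy as np

def qft_matrix(n):
    # Dense matrix of the n-qubit quantum Fourier transform.
    N = 2 ** n
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(3)
print(np.allclose(F.conj().T @ F, np.eye(8)))   # True: the transform is unitary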
The Deutsch–Jozsa algorithm solves ablack-boxproblem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a functionfis either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
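A state-vector simulation of the Deutsch–Jozsa algorithm, in its phase-oracle form, illustrates the single-query behaviour. This is a classical simulation sketch, not an efficient quantum implementation; the helper name and the test functions are arbitrary:

import numpy as np
from itertools import product

def deutsch_jozsa(f, n):
    # f maps an n-bit tuple to 0/1 and is promised to be constant or balanced.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                      # n-qubit Hadamard transform
    state = np.zeros(2 ** n); state[0] = 1.0     # start in |00...0>
    state = Hn @ state                           # uniform superposition
    phases = np.array([(-1.0) ** f(x) for x in product((0, 1), repeat=n)])
    state = Hn @ (phases * state)                # one phase-oracle query, then Hadamards
    return "constant" if state[0] ** 2 > 0.5 else "balanced"

print(deutsch_jozsa(lambda x: 0, 3))       # constant function -> "constant"
print(deutsch_jozsa(lambda x: x[0], 3))    # balanced function -> "balanced"

The amplitude of |0...0> after the final Hadamards is the average of (−1)^f(x), which is ±1 for a constant function and 0 for a balanced one, so a single oracle query suffices.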
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create anoracle separationbetweenBQPandBPP.
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation forShor's algorithmfor factoring.
Thequantum phase estimation algorithmis used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
Shor's algorithm solves thediscrete logarithmproblem and theinteger factorizationproblem in polynomial time,[9]whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are inPorNP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
Theabelianhidden subgroup problemis a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solvingPell's equation, testing theprincipal idealof aringR andfactoring. There are efficient quantum algorithms known for the Abelian hidden subgroup problem.[10]The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well asgraph isomorphismand certainlattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for thesymmetric group, which would give an efficient algorithm for graph isomorphism[11]and thedihedral group, which would solve certain lattice problems.[12]
AGauss sumis a type ofexponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.[13]
Consider anoracleconsisting ofnrandom Boolean functions mappingn-bit strings to a Boolean value, with the goal of finding nn-bit stringsz1,...,znsuch that for the Hadamard-Fourier transform, at least 3/4 of the strings satisfy
and at least 1/4 satisfy
This can be done inbounded-error quantum polynomial time(BQP).[14]
Amplitude amplificationis a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.[citation needed]
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using onlyO(N){\displaystyle O({\sqrt {N}})}queries instead of theO(N){\displaystyle O({N})}queries required classically.[15]Classically,O(N){\displaystyle O({N})}queries are required even allowing bounded-error probabilistic algorithms.
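A state-vector sketch of Grover's algorithm makes the quadratic query count concrete; again this is a classical simulation for illustration, and the chosen n and marked index are arbitrary:

import numpy as np

def grover(n, marked):
    # Search an unstructured space of N = 2**n items for the single marked index.
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))     # about sqrt(N) oracle queries
    for _ in range(iterations):
        state[marked] *= -1                      # oracle: flip the marked amplitude
        state = 2 * state.mean() - state         # diffusion: inversion about the mean
    return int(np.argmax(state ** 2))            # most probable measurement outcome

print(grover(10, 613))   # finds 613 after ~25 iterations instead of up to 1024 lookups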
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables inBohmian mechanics. (Such a computer is completely hypothetical and wouldnotbe a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at mostO(N3){\displaystyle O({\sqrt[{3}]{N}})}steps. This is slightly faster than theO(N){\displaystyle O({\sqrt {N}})}steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solveNP-completeproblems in polynomial time.[16]
Quantum countingsolves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in anN{\displaystyle N}-element list with an error of at mostε{\displaystyle \varepsilon }by making onlyΘ(ε−1N/k){\displaystyle \Theta \left(\varepsilon ^{-1}{\sqrt {N/k}}\right)}queries, wherek{\displaystyle k}is the number of marked elements in the list.[17][18]More precisely, the algorithm outputs an estimatek′{\displaystyle k'}fork{\displaystyle k}, the number of marked entries, with accuracy|k−k′|≤εk{\displaystyle |k-k'|\leq \varepsilon k}.
A quantum walk is the quantum analogue of a classicalrandom walk. A classical random walk can be described by aprobability distributionover some states, while a quantum walk can be described by aquantum superpositionover states. Quantum walks are known to give exponential speedups for some black-box problems.[19][20]They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.[21]
The Boson Sampling Problem in an experimental configuration assumes[22]an input ofbosons(e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a definedunitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk.[23]The problem is then to produce a fair sample of theprobability distributionof the output that depends on the input arrangement of bosons and the unitarity.[24]Solving this problem with a classical computer algorithm requires computing thepermanentof the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed[25]that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computablelinear optical networkand that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted[26]the sampling problem had similar complexity for inputs other thanFock-statephotons and identified a transition incomputational complexityfrom classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically,Ω(N){\displaystyle \Omega (N)}queries are required for a list of sizeN{\displaystyle N}; however, it can be solved inΘ(N2/3){\displaystyle \Theta (N^{2/3})}queries on a quantum computer. The optimal algorithm was put forth byAndris Ambainis,[27]andYaoyun Shifirst proved a tight lower bound when the size of the range is sufficiently large.[28]Ambainis[29]and Kutin[30]independently (and via different proofs) extended that work to obtain the lower bound for all functions.
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (acliqueof size 3). The best-known lower bound for quantum algorithms isΩ(N){\displaystyle \Omega (N)}, but the best algorithm known requires O(N1.297) queries,[31]an improvement over the previous best O(N1.3) queries.[21][32]
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well studied formula is the balanced binary tree with only NAND gates.[33]This type of formula requiresΘ(Nc){\displaystyle \Theta (N^{c})}queries using randomness,[34]wherec=log2(1+33)/4≈0.754{\displaystyle c=\log _{2}(1+{\sqrt {33}})/4\approx 0.754}. With a quantum algorithm, however, it can be solved inΘ(N1/2){\displaystyle \Theta (N^{1/2})}queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model.[7]The same result for the standard setting soon followed.[35]
Fast quantum algorithms for more complicated formulas are also known.[36]
The problem is to determine if ablack-box group, given bykgenerators, iscommutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities areΘ(k2){\displaystyle \Theta (k^{2})}andΘ(k){\displaystyle \Theta (k)}, respectively.[37]A quantum algorithm requiresΩ(k2/3){\displaystyle \Omega (k^{2/3})}queries, while the best-known classical algorithm usesO(k2/3logk){\displaystyle O(k^{2/3}\log k)}queries.[38]
Thecomplexity classBQP(bounded-error quantum polynomial time) is the set ofdecision problemssolvable by aquantum computerinpolynomial timewith error probability of at most 1/3 for all instances.[39]It is the quantum analogue to the classical complexity classBPP.
A problem isBQP-complete if it is inBQPand any problem inBQPcan bereducedto it inpolynomial time. Informally, the class ofBQP-complete problems are those that are as hard as the hardest problems inBQPand are themselves efficiently solvable by a quantum computer (with bounded error).
Witten had shown that theChern-Simonstopological quantum field theory(TQFT) can be solved in terms ofJones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial,[40]which as far as we know, is hard to compute classically in the worst-case scenario.[citation needed]
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves."[41]Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both Bosonic and Fermionic systems,[42]as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits.[43]Quantum computers can also efficiently simulate topological quantum field theories.[44]In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimatingquantum topological invariantssuch asJones[45]andHOMFLY polynomials,[46]and theTuraev-Viro invariantof three-dimensional manifolds.[47]
In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.[48]
Provided that the linear system issparseand has a lowcondition numberκ{\displaystyle \kappa }, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), then the algorithm has a runtime ofO(log(N)κ2){\displaystyle O(\log(N)\kappa ^{2})}, whereN{\displaystyle N}is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs inO(Nκ){\displaystyle O(N\kappa )}(orO(Nκ){\displaystyle O(N{\sqrt {\kappa }})}for positive semidefinite matrices).
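To make concrete what "the result of a scalar measurement on the solution vector" means, the classical NumPy sketch below solves a small, well-conditioned sparse system and returns the scalar x†Mx rather than the solution vector itself. This is not the quantum algorithm; the matrix, right-hand side, and measurement operator are invented examples.

```python
import numpy as np

# Classical illustration of the quantity the HHL algorithm estimates:
# for A x = b, report a scalar x^T M x instead of the full vector x.
# A, b, and M are arbitrary example data, not taken from any reference.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])      # sparse and well conditioned (small kappa)
b = np.array([1.0, 2.0, 0.5])
M = np.diag([1.0, 0.0, 0.0])         # "measurement": weight on the first component

x = np.linalg.solve(A, b)            # classical cost grows with N; HHL targets log(N) scaling
print(float(x @ M @ x))
```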
Hybrid Quantum/Classical Algorithms combine quantum state preparation and measurement with classical optimization.[49]These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
Thequantum approximate optimization algorithmtakes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory.[50]The algorithm makes use of classical optimization of quantum operations to maximize an "objective function."
Thevariational quantum eigensolver(VQE) algorithm applies classical optimization to minimize the energy expectation value of anansatz stateto find the ground state of a Hermitian operator, such as a molecule's Hamiltonian.[51]It can also be extended to find excited energies of molecular Hamiltonians.[52]
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule.[53]It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.[54]
|
https://en.wikipedia.org/wiki/Quantum_algorithm
|
In computer terminology, ahoneypotis acomputer securitymechanism set to detect, deflect, or, in some manner, counteract attempts at unauthorized use ofinformation systems. Generally, a honeypot consists ofdata(for example, in a network site) that appears to be a legitimate part of the site which contains information or resources of value to attackers. It is actually isolated, monitored, and capable of blocking or analyzing the attackers. This is similar to policesting operations, colloquially known as "baiting" a suspect.[1]
The main use for this network decoy is to distract potential attackers from more important information and machines on the real network, learn about the forms of attacks they can suffer, and examine such attacks during and after the exploitation of a honeypot.
It provides a way to prevent and see vulnerabilities in a specific network system. A honeypot is a decoy used to protect a network from present or future attacks.[2][3] Honeypots derive their value from their use by attackers; if not interacted with, the honeypot has little to no value. Honeypots can be used for everything from slowing down or stopping automated attacks and capturing new exploits to gathering intelligence on emerging threats and providing early warning and prediction.[4]
Honeypots can be differentiated based on whether they are physical or virtual:[2][3]
Honeypots can be classified based on their deployment (use/action) and based on their level of involvement. Based on deployment, honeypots may be classified as:[5]
Production honeypotsare easy to use, capture only limited information, and are used primarily by corporations. Production honeypots are placed inside the production network with other production servers by an organization to improve their overall state of security. Normally, production honeypots are low-interaction honeypots, which are easier to deploy. They give less information about the attacks or attackers than research honeypots.[5]
Research honeypotsare run to gather information about the motives and tactics of theblack hatcommunity targeting different networks. These honeypots do not add direct value to a specific organization; instead, they are used to research the threats that organizations face and to learn how to better protect against those threats.[6]Research honeypots are complex to deploy and maintain, capture extensive information, and are used primarily by research, military, or government organizations.[7]
Based on design criteria, honeypots can be classified as:[5]
Pure honeypotsare full-fledged production systems. The activities of the attacker are monitored by using a bug tap that has been installed on the honeypot's link to the network. No other software needs to be installed. Even though a pure honeypot is useful, the stealthiness of the defense mechanisms can be ensured by a more controlled mechanism.
High-interaction honeypotsimitate the activities of the production systems that host a variety of services and, therefore, an attacker may be allowed a lot of services to waste their time. By employingvirtual machines, multiple honeypots can be hosted on a single physical machine. Therefore, even if the honeypot is compromised, it can be restored more quickly. In general, high-interaction honeypots provide more security by being difficult to detect, but they are expensive to maintain. If virtual machines are not available, one physical computer must be maintained for each honeypot, which can be exorbitantly expensive. Example:Honeynet.
Low-interaction honeypotssimulate only the services frequently requested by attackers.[8]Since they consume relatively few resources, multiple virtual machines can easily be hosted on one physical system, the virtual systems have a short response time, and less code is required, reducing the complexity of the virtual system's security. Example:Honeyd. This type of honeypot was one of the first types being created in the late nineties and was mainly used for detecting attacks, not studying them.[9]
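To illustrate how little a low-interaction honeypot needs to do, here is a minimal, hypothetical Python sketch: it listens on one TCP port, presents a fake service banner, and logs connection attempts without ever acting on attacker input. The port number, banner string, and log file name are arbitrary choices for illustration.

```python
import socket
import datetime

# Minimal low-interaction honeypot sketch: pretend to be a single service,
# record who connects and what they send, and never execute the input.
HOST, PORT = "0.0.0.0", 2222            # arbitrary example port (fake SSH)
BANNER = b"SSH-2.0-OpenSSH_7.4\r\n"     # fake banner; purely illustrative

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(5)
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(BANNER)
            data = conn.recv(1024)       # capture only the first probe
            with open("honeypot.log", "a") as log:
                log.write(f"{datetime.datetime.utcnow().isoformat()} "
                          f"{addr[0]}:{addr[1]} {data!r}\n")
```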
Sugarcane is a type of honeypot that masquerades as an open proxy.[10] It can often take the form of a server designed to look like a misconfigured HTTP proxy.[11] Probably the most famous open proxy was the default configuration of sendmail (before version 8.9.0 in 1998) which would forward email to and from any destination.[12]
Recently, a new market segment calleddeception technologyhas emerged using basic honeypot technology with the addition of advanced automation for scale. Deception technology addresses the automated deployment of honeypot resources over a large commercial enterprise or government institution.[13]
A malware honeypot is a decoy designed to intentionally attract malicious software. It does this by imitating a vulnerable system or network, such as a web server. The honeypot is intentionally set up with security flaws that appear to invite these malware attacks. Once attacked, IT teams can then analyze the malware to better understand where it comes from and how it acts.[14]
Spammersabuse vulnerable resources such asopen mail relaysandopen proxies. These are servers that accept e-mail from anyone on the Internet—including spammers—and send it to its destination. Some system administrators have created honeypot programs that masquerade as these abusable resources to discover spammer activity.
There are several capabilities such honeypots provide to these administrators, and the existence of such fake abusable systems makes abuse more difficult or risky. Honeypots can be a powerful countermeasure to abuse from those who rely on very high-volume abuse (e.g., spammers).
These honeypots can reveal the abuser'sIP addressand provide bulk spam capture (which enables operators to determine spammers'URLsand response mechanisms). As described by M. Edwards at ITPRo Today:
Typically, spammers test a mail server for open relaying by simply sending themselves an email message. If the spammer receives the email message, the mail server obviously allows open relaying. Honeypot operators, however, can use the relay test to thwart spammers. The honeypot catches the relay test email message, returns the test email message, and subsequently blocks all other email messages from that spammer. Spammers continue to use the antispam honeypot for spamming, but the spam is never delivered. Meanwhile, the honeypot operator can notify spammers' ISPs and have their Internet accounts canceled. If honeypot operators detect spammers who use open-proxy servers, they can also notify the proxy server operator to lock down the server to prevent further misuse.[15]
The apparent source may be another abused system. Spammers and other abusers may use a chain of such abused systems to make detection of the original starting point of the abuse traffic difficult.
This in itself is indicative of the power of honeypots asanti-spamtools. In the early days of anti-spam honeypots, spammers, with little concern for hiding their location, felt safe testing for vulnerabilities and sending spam directly from their own systems. Honeypots made the abuse riskier and more difficult.
Spam still flows through open relays, but the volume is much smaller than in 2001-02. While most spam originates in the U.S.,[16]spammers hop through open relays across political boundaries to mask their origin. Honeypot operators may use intercepted relay tests to recognize and thwart attempts to relay spam through their honeypots. "Thwart" may mean "accept the relay spam but decline to deliver it." Honeypot operators may discover other details concerning the spam and the spammer by examining the captured spam messages.
Open-relay honeypots include Jackpot, written inJavaby Jack Cleaver;smtpot.py, written inPythonby Karl A. Krueger;[17]and spamhole, written inC.[18]TheBubblegum Proxypotis an open-source honeypot (or "proxypot").[19]
An email address that is not used for any other purpose than to receive spam can also be considered a spam honeypot. Compared with the term "spamtrap", the term "honeypot" might be more suitable for systems and techniques that are used to detect or counterattack probes. With a spamtrap, spam arrives at its destination "legitimately"—exactly as non-spam email would arrive.
An amalgam of these techniques isProject Honey Pot, a distributed, open-source project that uses honeypot pages installed on websites around the world. These honeypot pages disseminate uniquely tagged spamtrap email addresses andspammerscan then be tracked—the corresponding spam mail is subsequently sent to these spamtrap e-mail addresses.[20]
Databases often get attacked by intruders usingSQL injection. As such activities are not recognized by basic firewalls, companies often use database firewalls for protection. Some of the availableSQL databasefirewalls provide/support honeypot architectures so that the intruder runs against a trap database while the web application remains functional.[21]
Industrial Control Systems (ICS) are often the target of cyberattacks.[22] One of the main targets within ICS are Programmable Logic Controllers.[23] In order to understand intruders' techniques in this context, several honeypots have been proposed. Conpot[24][25] is a low-interaction honeypot capable of simulating Siemens PLCs. HoneyPLC is a medium-interaction honeypot that can simulate Siemens, Rockwell and other PLC brands.[26][27]
Just as honeypots are weapons against spammers, honeypot detection systems are spammer-employed counter-weapons. As detection systems would likely use unique characteristics of specific honeypots to identify them, such as the property-value pairs of default honeypot configuration,[28]many honeypots in use utilise a set of unique characteristics larger and more daunting to those seeking to detect and thereby identify them. This is an unusual circumstance in software; a situation in which"versionitis"(a large number of versions of the same software, all differing slightly from each other) can be beneficial. There's also an advantage in having some easy-to-detect honeypots deployed.Fred Cohen, the inventor of theDeception Toolkit, argues that every system running his honeypot should have a deception port which adversaries can use to detect the honeypot.[29]Cohen believes that this might deter adversaries. Honeypots also allow for early detection of legitimate threats. No matter how the honeypot detects the exploit, it can alert you immediately to the attempted attack.[30]
The goal of honeypots is to attract and engage attackers for a sufficiently long period to obtain high-levelIndicators of Compromise(IoC) such as attack tools andTactics, Techniques, and Procedures(TTPs). Thus, a honeypot needs to emulate essential services in the production network and grant the attacker the freedom to perform adversarial activities to increase its attractiveness to the attacker. Although the honeypot is a controlled environment and can be monitored by using tools such as honeywall,[31]attackers may still be able to use some honeypots as pivot nodes to penetrate production systems.[32]
The second risk of honeypots is that they may attract legitimate users due to a lack of communication in large-scale enterprise networks. For example, the security team who applies and monitors the honeypot may not disclose the honeypot location to all users in time due to the lack of communication or the prevention of insider threats.[33][34]
"A 'honey net' is a network of high interaction honeypots that simulates a production network and configured such that all activity is monitored, recorded and in a degree, discreetly regulated."
Two or more honeypots on a network form ahoney net. Typically, a honey net is used for monitoring a larger and/or more diverse network in which one honeypot may not be sufficient. Honey nets and honeypots are usually implemented as parts of largernetwork intrusion detection systems. Ahoney farmis a centralized collection of honeypots and analysis tools.[35]
The concept of the honey net first began in 1999 when Lance Spitzner, founder of theHoneynet Project, published the paper "To Build a Honeypot".[36]
An early formulation of the concept, called "entrapment", is defined inFIPS39 (1976) as "the deliberate planting of apparent flaws in a system for the purpose of detecting attempted penetrations or confusing an intruder about which flaws to exploit".[37]
The earliest honeypot techniques are described inClifford Stoll's 1989 bookThe Cuckoo's Egg.
One of the earliest documented cases of the cybersecurity use of a honeypot began in January 1991. On January 7, 1991, while he worked at AT&T Bell Laboratories, Cheswick observed a criminal hacker, known as a cracker, attempting to obtain a copy of a password file. Cheswick wrote that he and colleagues constructed a chroot "Jail" (or "roach motel") which allowed them to observe their attacker over a period of several months.[38]
In 2017,Dutch policeused honeypot techniques to track down users of thedarknet marketHansa.
The metaphor of a bear being attracted to and stealing honey is common in many traditions, including Germanic, Celtic, and Slavic. A common Slavic word for the bear is medved, "honey eater". The tradition of bears stealing honey has been passed down through stories and folklore, especially the well-known Winnie the Pooh.[39][40]
|
https://en.wikipedia.org/wiki/Honeypot_(computing)
|
Instatistical modeling,regression analysisis a set of statistical processes forestimatingthe relationships between adependent variable(often called theoutcomeorresponsevariable, or alabelin machine learning parlance) and one or more error-freeindependent variables(often calledregressors,predictors,covariates,explanatory variablesorfeatures).
The most common form of regression analysis islinear regression, in which one finds the line (or a more complexlinear combination) that most closely fits the data according to a specific mathematical criterion. For example, the method ofordinary least squarescomputes the unique line (orhyperplane) that minimizes the sum of squared differences between the true data and that line (or hyperplane). For specific mathematical reasons (seelinear regression), this allows the researcher to estimate theconditional expectation(or populationaverage value) of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternativelocation parameters(e.g.,quantile regressionorNecessary Condition Analysis[1]) or estimate the conditional expectation across a broader collection of non-linear models (e.g.,nonparametric regression).
Regression analysis is primarily used for two conceptually distinct purposes. First, regression analysis is widely used forpredictionandforecasting, where its use has substantial overlap with the field ofmachine learning. Second, in some situations regression analysis can be used to infercausal relationshipsbetween the independent and dependent variables. Importantly, regressions by themselves only reveal relationships between a dependent variable and a collection of independent variables in a fixed dataset. To use regressions for prediction or to infer causal relationships, respectively, a researcher must carefully justify why existing relationships have predictive power for a new context or why a relationship between two variables has a causal interpretation. The latter is especially important when researchers hope to estimate causal relationships usingobservational data.[2][3]
The earliest regression form was seen in Isaac Newton's work in 1700 while studying equinoxes; he is credited with introducing "an embryonic linear regression analysis", as "Not only did he perform the averaging of a set of data, 50 years before Tobias Mayer, but summing the residuals to zero he forced the regression line to pass through the average point. He also distinguished between two inhomogeneous sets of data and might have thought of an optimal solution in terms of bias, though not in terms of effectiveness." He previously used an averaging method in his 1671 work on Newton's rings, which was unprecedented at the time.[4][5]
Themethod of least squareswas published byLegendrein 1805,[6]and byGaussin 1809.[7]Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821,[8]including a version of theGauss–Markov theorem.
The term "regression" was coined byFrancis Galtonin the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known asregression toward the mean).[9][10]For Galton, regression had only this biological meaning,[11][12]but his work was later extended byUdny YuleandKarl Pearsonto a more general statistical context.[13][14]In the work of Yule and Pearson, thejoint distributionof the response and explanatory variables is assumed to beGaussian. This assumption was weakened byR.A. Fisherin his works of 1922 and 1925.[15][16][17]Fisher assumed that theconditional distributionof the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
In the 1950s and 1960s, economists usedelectromechanical desk calculatorsto calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.[18]
Regression methods continue to be an area of active research. In recent decades, new methods have been developed forrobust regression, regression involving correlated responses such astime seriesandgrowth curves, regression in which the predictor (independent variable) or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data,nonparametric regression,Bayesianmethods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, andcausal inferencewith regression. Modern regression analysis is typically done with statistical andspreadsheetsoftware packages on computers as well as on handheldscientificandgraphing calculators.
In practice, researchers first select a model they would like to estimate and then use their chosen method (e.g.,ordinary least squares) to estimate the parameters of that model. Regression models involve the following components:
In variousfields of application, different terminologies are used in place ofdependent and independent variables.
Most regression models propose that Y_i is a function (regression function) of X_i and β, with e_i representing an additive error term that may stand in for un-modeled determinants of Y_i or random statistical noise:

Y_i = f(X_i, β) + e_i.
Note that the independent variablesXi{\displaystyle X_{i}}are assumed to be free of error. This important assumption is often overlooked, althougherrors-in-variables modelscan be used when the independent variables are assumed to contain errors.
The researchers' goal is to estimate the functionf(Xi,β){\displaystyle f(X_{i},\beta )}that most closely fits the data. To carry out regression analysis, the form of the functionf{\displaystyle f}must be specified. Sometimes the form of this function is based on knowledge about the relationship betweenYi{\displaystyle Y_{i}}andXi{\displaystyle X_{i}}that does not rely on the data. If no such knowledge is available, a flexible or convenient form forf{\displaystyle f}is chosen. For example, a simple univariate regression may proposef(Xi,β)=β0+β1Xi{\displaystyle f(X_{i},\beta )=\beta _{0}+\beta _{1}X_{i}}, suggesting that the researcher believesYi=β0+β1Xi+ei{\displaystyle Y_{i}=\beta _{0}+\beta _{1}X_{i}+e_{i}}to be a reasonable approximation for the statistical process generating the data.
Once researchers determine their preferredstatistical model, different forms of regression analysis provide tools to estimate the parametersβ{\displaystyle \beta }. For example,least squares(including its most common variant,ordinary least squares) finds the value ofβ{\displaystyle \beta }that minimizes the sum of squared errors∑i(Yi−f(Xi,β))2{\displaystyle \sum _{i}(Y_{i}-f(X_{i},\beta ))^{2}}. A given regression method will ultimately provide an estimate ofβ{\displaystyle \beta }, usually denotedβ^{\displaystyle {\hat {\beta }}}to distinguish the estimate from the true (unknown) parameter value that generated the data. Using this estimate, the researcher can then use thefitted valueYi^=f(Xi,β^){\displaystyle {\hat {Y_{i}}}=f(X_{i},{\hat {\beta }})}for prediction or to assess the accuracy of the model in explaining the data. Whether the researcher is intrinsically interested in the estimateβ^{\displaystyle {\hat {\beta }}}or the predicted valueYi^{\displaystyle {\hat {Y_{i}}}}will depend on context and their goals. As described inordinary least squares, least squares is widely used because the estimated functionf(Xi,β^){\displaystyle f(X_{i},{\hat {\beta }})}approximates theconditional expectationE(Yi|Xi){\displaystyle E(Y_{i}|X_{i})}.[7]However, alternative variants (e.g.,least absolute deviationsorquantile regression) are useful when researchers want to model other functionsf(Xi,β){\displaystyle f(X_{i},\beta )}.
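For the univariate specification f(X_i, β) = β₀ + β₁X_i above, a minimal NumPy sketch of this estimation step might look as follows; the data are invented purely for illustration.

```python
import numpy as np

# Fit f(X, beta) = beta0 + beta1 * X by ordinary least squares and compute
# fitted values.  The data below are invented for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 5.2, 5.8])

beta1, beta0 = np.polyfit(x, y, deg=1)   # minimizes the sum of squared errors
y_hat = beta0 + beta1 * x                # fitted values, used for prediction
print(beta0, beta1)
print(y_hat)
```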
It is important to note that there must be sufficient data to estimate a regression model. For example, suppose that a researcher has access to N rows of data with one dependent and two independent variables: (Y_i, X_{1i}, X_{2i}). Suppose further that the researcher wants to estimate a bivariate linear model via least squares: Y_i = β₀ + β₁X_{1i} + β₂X_{2i} + e_i. If the researcher only has access to N = 2 data points, then they could find infinitely many combinations (β̂₀, β̂₁, β̂₂) that explain the data equally well: any combination can be chosen that satisfies Y_i = β̂₀ + β̂₁X_{1i} + β̂₂X_{2i} for both data points, all of which lead to ∑_i ê_i² = ∑_i (Y_i − (β̂₀ + β̂₁X_{1i} + β̂₂X_{2i}))² = 0 and are therefore valid solutions that minimize the sum of squared residuals. To understand why there are infinitely many options, note that the system of N = 2 equations is to be solved for 3 unknowns, which makes the system underdetermined. Alternatively, one can visualize infinitely many 3-dimensional planes that go through N = 2 fixed points.
More generally, to estimate aleast squaresmodel withk{\displaystyle k}distinct parameters, one must haveN≥k{\displaystyle N\geq k}distinct data points. IfN>k{\displaystyle N>k}, then there does not generally exist a set of parameters that will perfectly fit the data. The quantityN−k{\displaystyle N-k}appears often in regression analysis, and is referred to as thedegrees of freedomin the model. Moreover, to estimate a least squares model, the independent variables(X1i,X2i,...,Xki){\displaystyle (X_{1i},X_{2i},...,X_{ki})}must belinearly independent: one mustnotbe able to reconstruct any of the independent variables by adding and multiplying the remaining independent variables. As discussed inordinary least squares, this condition ensures thatXTX{\displaystyle X^{T}X}is aninvertible matrixand therefore that a unique solutionβ^{\displaystyle {\hat {\beta }}}exists.
By itself, a regression is simply a calculation using the data. In order to interpret the output of regression as a meaningful statistical quantity that measures real-world relationships, researchers often rely on a number of classicalassumptions. These assumptions often include:
A handful of conditions are sufficient for the least-squares estimator to possess desirable properties: in particular, the Gauss–Markov assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. Because these classical assumptions are unlikely to hold exactly, practitioners have developed a variety of methods to maintain some or all of these desirable properties in real-world settings. For example, modeling errors-in-variables can lead to reasonable estimates when independent variables are measured with errors. Heteroscedasticity-consistent standard errors allow the variance of e_i to change across values of X_i. Correlated errors that exist within subsets of the data or follow specific patterns can be handled using clustered standard errors, geographic weighted regression, or Newey–West standard errors, among other techniques. When rows of data correspond to locations in space, the choice of how to model e_i within geographic units can have important consequences.[19][20] The subfield of econometrics is largely focused on developing techniques that allow researchers to make reasonable real-world conclusions in real-world settings, where classical assumptions do not hold exactly.
In linear regression, the model specification is that the dependent variable y_i is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling n data points there is one independent variable, x_i, and two parameters, β₀ and β₁:

straight line: y_i = β₀ + β₁x_i + ε_i,  i = 1, …, n.
In multiple linear regression, there are several independent variables or functions of independent variables.
Adding a term in x_i² to the preceding regression gives:

parabola: y_i = β₀ + β₁x_i + β₂x_i² + ε_i,  i = 1, …, n.
This is still linear regression; although the expression on the right hand side is quadratic in the independent variablexi{\displaystyle x_{i}}, it is linear in the parametersβ0{\displaystyle \beta _{0}},β1{\displaystyle \beta _{1}}andβ2.{\displaystyle \beta _{2}.}
In both cases,εi{\displaystyle \varepsilon _{i}}is an error term and the subscripti{\displaystyle i}indexes a particular observation.
Returning our attention to the straight line case: Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model:

ŷ_i = β̂₀ + β̂₁x_i.
The residual, e_i = y_i − ŷ_i, is the difference between the value of the dependent variable predicted by the model, ŷ_i, and the true value of the dependent variable, y_i. One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSR:

SSR = ∑_{i=1}^{n} e_i².
Minimization of this function results in a set ofnormal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators,β^0,β^1{\displaystyle {\widehat {\beta }}_{0},{\widehat {\beta }}_{1}}.
In the case of simple regression, the formulas for the least squares estimates are

β̂₁ = ∑(x_i − x̄)(y_i − ȳ) / ∑(x_i − x̄)²  and  β̂₀ = ȳ − β̂₁x̄,
wherex¯{\displaystyle {\bar {x}}}is themean(average) of thex{\displaystyle x}values andy¯{\displaystyle {\bar {y}}}is the mean of they{\displaystyle y}values.
Under the assumption that the population error term has a constant variance, the estimate of that variance is given by:

σ̂²_ε = SSR / (n − 2).
This is called themean square error(MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data,(n−p){\displaystyle (n-p)}forp{\displaystyle p}regressorsor(n−p−1){\displaystyle (n-p-1)}if an intercept is used.[21]In this case,p=1{\displaystyle p=1}so the denominator isn−2{\displaystyle n-2}.
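A short NumPy check of these closed-form expressions ties the estimates and the mean square error together; the data are invented for illustration.

```python
import numpy as np

# Closed-form simple-regression estimates and the mean square error (MSE),
# following the formulas above.  The data are invented for illustration only.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 5.2, 5.8])
n = len(x)

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

resid = y - (b0 + b1 * x)
mse = np.sum(resid ** 2) / (n - 2)   # n - 2: one regressor plus an intercept
print(b0, b1, mse)
```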
The standard errors of the parameter estimates are given by

σ̂_{β̂₀} = σ̂_ε √(1/n + x̄² / ∑(x_i − x̄)²)  and  σ̂_{β̂₁} = σ̂_ε √(1 / ∑(x_i − x̄)²).
Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to createconfidence intervalsand conducthypothesis testsabout thepopulation parameters.
In the more general multiple regression model, there are p independent variables:

y_i = β₁x_{i1} + β₂x_{i2} + ⋯ + β_p x_{ip} + ε_i,
wherexij{\displaystyle x_{ij}}is thei{\displaystyle i}-th observation on thej{\displaystyle j}-th independent variable.
If the first independent variable takes the value 1 for alli{\displaystyle i},xi1=1{\displaystyle x_{i1}=1}, thenβ1{\displaystyle \beta _{1}}is called theregression intercept.
The least squares parameter estimates are obtained from p normal equations. The residual can be written as

e_i = y_i − β̂₁x_{i1} − ⋯ − β̂_p x_{ip}.
The normal equations are

∑_{i=1}^{n} ∑_{k=1}^{p} x_{ij}x_{ik} β̂_k = ∑_{i=1}^{n} x_{ij}y_i,  j = 1, …, p.
In matrix notation, the normal equations are written as

(XᵀX) β̂ = XᵀY,
where the ij element of X is x_{ij}, the i element of the column vector Y is y_i, and the j element of β̂ is β̂_j. Thus X is n×p, Y is n×1, and β̂ is p×1. The solution is

β̂ = (XᵀX)⁻¹ XᵀY.
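In code, the matrix form of the normal equations can be solved directly, or, for better numerical behavior, with a least-squares solver. The design matrix below is an invented example whose first column of ones plays the role of x_{i1} = 1.

```python
import numpy as np

# Solve the normal equations X^T X beta_hat = X^T Y for an invented example.
X = np.array([[1.0, 0.5, 2.0],
              [1.0, 1.5, 1.0],
              [1.0, 2.0, 3.0],
              [1.0, 3.5, 0.5],
              [1.0, 4.0, 2.5]])      # first column of ones -> intercept term
Y = np.array([3.1, 4.0, 6.2, 6.8, 8.3])

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)     # (X^T X)^{-1} X^T Y
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)  # numerically preferable route
print(beta_hat)
print(beta_ls)
```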
Once a regression model has been constructed, it may be important to confirm thegoodness of fitof the model and thestatistical significanceof the estimated parameters. Commonly used checks of goodness of fit include theR-squared, analyses of the pattern ofresidualsand hypothesis testing. Statistical significance can be checked by anF-testof the overall fit, followed byt-testsof individual parameters.
Interpretations of these diagnostic tests rest heavily on the model's assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, then in small samples the estimated parameters will not follow normal distributions, which complicates inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations.
Limited dependent variables, which are response variables that arecategoricalor constrained to fall only in a certain range, often arise ineconometrics.
The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called thelinear probability model. Nonlinear models for binary dependent variables include theprobitandlogit model. Themultivariate probitmodel is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. Forcategorical variableswith more than two values there is themultinomial logit. Forordinal variableswith more than two values, there are theordered logitandordered probitmodels.Censored regression modelsmay be used when the dependent variable is only sometimes observed, andHeckman correctiontype models may be used when the sample is not randomly selected from the population of interest.
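As a sketch of how one of these limited-dependent-variable models can be estimated, the NumPy code below fits a logit model by gradient ascent on the Bernoulli log-likelihood. The data, step size, and iteration count are arbitrary illustrative choices, not a reference implementation.

```python
import numpy as np

# Logit model: P(y = 1 | x) = 1 / (1 + exp(-(b0 + b1 * x))), fitted by
# gradient ascent on the log-likelihood.  Data and tuning constants are invented.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([0,   0,   0,   1,   0,   1,   1,   1  ])
X = np.column_stack([np.ones_like(x), x])   # intercept plus one regressor

beta = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
    beta += 0.05 * X.T @ (y - p)            # gradient of the Bernoulli log-likelihood
print(beta)                                  # [intercept, slope] estimates
```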
An alternative to such procedures is linear regression based onpolychoric correlation(or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, then count models like thePoisson regressionor thenegative binomialmodel may be used.
When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized inDifferences between linear and non-linear least squares.
Regression modelspredicta value of theYvariable given known values of theXvariables. Predictionwithinthe range of values in the dataset used for model-fitting is known informally asinterpolation. Predictionoutsidethis range of the data is known asextrapolation. Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values.
A prediction interval that represents the uncertainty may accompany the point prediction. Such intervals tend to expand rapidly as the values of the independent variable(s) move outside the range covered by the observed data.
For such reasons and others, some tend to say that it might be unwise to undertake extrapolation.[23]
The assumption of a particular form for the relation betweenYandXis another source of uncertainty. A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known).
There are no generally agreed methods for relating the number of observations versus the number of independent variables in the model. One method conjectured by Good and Hardin is N = m^n, where N is the sample size, n is the number of independent variables and m is the number of observations needed to reach the desired precision if the model had only one independent variable.[24] For example, a researcher is building a linear regression model using a dataset that contains 1000 patients (N). If the researcher decides that five observations are needed to precisely define a straight line (m), then the maximum number of independent variables (n) the model can support is 4, because log 1000 / log 5 ≈ 4.29, and 5⁴ = 625 ≤ 1000 while 5⁵ = 3125 > 1000.
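This rule of thumb is easy to apply in code; the helper below is a small illustrative function, not part of any standard library.

```python
import math

def max_predictors(N, m):
    """Largest n with m**n <= N, per the Good-Hardin rule of thumb N = m**n."""
    return math.floor(math.log(N) / math.log(m))

print(max_predictors(1000, 5))   # 4, since 5**4 = 625 <= 1000 < 5**5 = 3125
```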
Although the parameters of a regression model are usually estimated using the method of least squares, other methods which have been used include:
All major statistical software packages performleast squaresregression analysis and inference.Simple linear regressionand multiple regression using least squares can be done in somespreadsheetapplications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized. Different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging.
|
https://en.wikipedia.org/wiki/Regression_analysis
|
Thecute cat theory of digital activismis atheoryconcerningInternet activism,Web censorship, and "cute cats" (a term used for any low-value, but popular online activity) developed byEthan Zuckermanin 2008.[1][2]It posits that most people are not interested in activism; instead, they want to use thewebfor mundane activities, including surfing forpornographyandlolcats("cute cats").[3]The tools that they develop for that (such asFacebook,Flickr,Blogger,Twitter, and similar platforms) are very useful tosocial movementactivists because they may lack resources to develop dedicated tools themselves.[3]This, in turn, makes theactivistsmore immune to reprisals by governments than if they were using a dedicated activism platform, because shutting down a popular public platform provokes a larger public outcry than shutting down an obscure one.[3]
Zuckerman states that "Web 1.0was invented to allow physicists to share research papers.Web 2.0was created to allow people to share pictures of cute cats."[3]Zuckerman says that if a tool has "cute cat" purposes, and is widely used for low-value purposes, it can be and likely is used for online activism, too.[3]
If the government chooses to shut down such generic tools, it will hurt people's ability to "look at cute cats online", spreading dissent and encouraging the activists' cause.[2][3]
According to Zuckerman,internet censorship in the People's Republic of China, which relies on its own, self-censored, Web 2.0 sites, is able to circumvent the cute-cat problem becausethe governmentis able to provide people with access to cute-cat content on domestic,self-censoredsites while blocking access to Western sites, which are less popular in China than in many other places worldwide.[3][4]
"Sufficiently usable read/write platforms will attract porn and activists. If there's no porn, the tool doesn't work. If there are no activists, it doesn't work well," Zuckerman has stated.[3]
|
https://en.wikipedia.org/wiki/Cute_cat_theory_of_digital_activism
|
The followingoutlineis provided as an overview of and topical guide to cryptography:
Cryptography(orcryptology) – practice and study of hidinginformation. Modern cryptography intersects the disciplines ofmathematics,computer science, andengineering. Applications of cryptography includeATM cards,computer passwords, andelectronic commerce.
List of cryptographers
|
https://en.wikipedia.org/wiki/Topics_in_cryptography
|
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure.[1] Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
Thereliability functionis theoretically defined as theprobabilityof success. In practice, it is calculated using different techniques, and its value ranges between 0 and 1, where 0 indicates no probability of success while 1 indicates definite success. This probability is estimated from detailed (physics of failure) analysis, previous data sets, or through reliability testing and reliability modeling.Availability,testability,maintainability, andmaintenanceare often defined as a part of "reliability engineering" in reliability programs. Reliability often plays a key role in thecost-effectivenessof systems.
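As a simple illustration, under the additional assumption of a constant failure rate (only one of many possible reliability models), the reliability function can be evaluated directly; the numbers below are invented.

```python
import math

# Constant-failure-rate (exponential) model: R(t) = exp(-lambda * t), MTBF = 1 / lambda.
# The failure rate and time points are invented for illustration only.
failure_rate = 1e-4            # failures per hour
mtbf = 1.0 / failure_rate      # 10,000 hours

for t in (100, 1000, 10000):
    R = math.exp(-failure_rate * t)
    print(f"R({t} h) = {R:.4f}")   # probability of operating without failure up to time t
```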
Reliability engineering deals with the prediction, prevention, and management of high levels of "lifetime" engineeringuncertaintyandrisksof failure. Althoughstochasticparameters define and affect reliability, reliability is not only achieved by mathematics and statistics.[2][3]"Nearly all teaching and literature on the subject emphasize these aspects and ignore the reality that the ranges of uncertainty involved largely invalidate quantitative methods forpredictionand measurement."[4]For example, it is easy to represent "probability of failure" as a symbol or value in an equation, but it is almost impossible to predict its true magnitude in practice, which is massivelymultivariate, so having the equation for reliability does not begin to equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to Quality Engineering,safety engineering, andsystem safety, in that they use common methods for their analysis and may require input from each other. It can be said that a system must be reliably safe.
Reliability engineering focuses on the costs of failure caused by system downtime, cost of spares, repair equipment, personnel, and cost of warranty claims.[5]
The wordreliabilitycan be traced back to 1816 and is first attested to the poetSamuel Taylor Coleridge.[6]Before World War II the term was linked mostly torepeatability; a test (in any type of science) was considered "reliable" if the same results would be obtained repeatedly. In the 1920s, product improvement through the use ofstatistical process controlwas promoted by Dr.Walter A. ShewhartatBell Labs,[7]around the time thatWaloddi Weibullwas working on statistical models for fatigue. The development of reliability engineering was here on a parallel path with quality. The modern use of the word reliability was defined by the U.S. military in the 1940s, characterizing a product that would operate when expected and for a specified period.
In World War II, many reliability issues were due to the inherent unreliability of electronic equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published a seminal paper titled "Cumulative Damage in Fatigue" in an ASME journal. A main application for reliability engineering in the military was for the vacuum tube as used in radar systems and other electronics, for which reliability proved to be very problematic and costly. TheIEEEformed the Reliability Society in 1948. In 1950, theUnited States Department of Defenseformed a group called the "Advisory Group on the Reliability of Electronic Equipment" (AGREE) to investigate reliability methods for military equipment.[8]This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system levels. The famous military standard MIL-STD-781 was created at that time. Around this period also the much-used predecessor to military handbook 217 was published by RCA and was used for the prediction of failure rates of electronic components. The emphasis on component reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic approaches, as used in the consumer industries, were being used. In the 1980s, televisions were increasingly made up of solid-state semiconductors. Automobiles rapidly increased their use of semiconductors with a variety of microcomputers under the hood and in the dash. Large air conditioning systems developed electronic controllers, as did microwave ovens and a variety of other appliances. Communications systems began to adopt electronics to replace older mechanical switching systems. Bellcore issued the first consumer prediction methodology for telecommunications, and SAE developed a similar document SAE870050 for automotive applications. The nature of predictions evolved during the decade, and it became apparent that die complexity wasn't the only factor that determined failure rates for integrated circuits (ICs).
Kam Wong published a paper questioning the bathtub curve[9]—see also reliability-centered maintenance. During this decade, the failure rate of many components dropped by a factor of 10. Software became important to the reliability of systems. By the 1990s, the pace of IC development was picking up. Wider use of stand-alone microcomputers was common, and the PC market helped keep IC densities following Moore's law and doubling about every 18 months. Reliability engineering was now changing as it moved towards understanding the physics of failure. Failure rates for components kept dropping, but system-level issues became more prominent. Systems thinking has become more and more important. For software, the CMM model (Capability Maturity Model) was developed, which gave a more qualitative approach to reliability. ISO 9000 added reliability measures as part of the design and development portion of certification. The expansion of the World Wide Web created new challenges of security and trust. The older problem of too little reliable information available had now been replaced by too much information of questionable value. Consumer reliability problems could now be discussed online in real-time using data. New technologies such as micro-electromechanical systems (MEMS), handheld GPS, and hand-held devices that combine cell phones and computers all represent challenges to maintaining reliability. Product development time continued to shorten through this decade and what had been done in three years was being done in 18 months. This meant that reliability tools and tasks had to be more closely tied to the development process itself. In many ways, reliability has become part of everyday life and consumer expectations.
Reliability is the probability of a product performing its intended function under specified operating conditions in a manner that meets or exceeds customer expectations.[10]
The objectives of reliability engineering, in decreasing order of priority, are:[11]
The reason for the priority emphasis is that it is by far the most effective way of working, in terms of minimizing costs and generating reliable products. The primary skills that are required, therefore, are the ability to understand and anticipate the possible causes of failures, and knowledge of how to prevent them. It is also necessary to know the methods that can be used for analyzing designs and data.
Reliability engineering for "complex systems" requires a different, more elaborate systems approach than for non-complex systems. Reliability engineering may in that case involve:
Effective reliability engineering requires understanding of the basics offailure mechanismsfor which experience, broad engineering skills and good knowledge from many different special fields of engineering are required,[12]for example:
Reliability may be defined in the following ways:
Many engineering techniques are used in reliabilityrisk assessments, such as reliability block diagrams,hazard analysis,failure mode and effects analysis(FMEA),[13]fault tree analysis(FTA),Reliability Centered Maintenance, (probabilistic) load and material stress and wear calculations, (probabilistic) fatigue and creep analysis, human error analysis, manufacturing defect analysis, reliability testing, etc. These analyses must be done properly and with much attention to detail to be effective. Because of the large number of reliability techniques, their expense, and the varying degrees of reliability required for different situations, most projects develop a reliability program plan to specify the reliability tasks (statement of work(SoW) requirements) that will be performed for that specific system.
Consistent with the creation ofsafety cases, for example perARP4761, the goal of reliability assessments is to provide a robust set of qualitative and quantitative evidence that the use of a component or system will not be associated with unacceptable risk. The basic steps to take[14]are to:
Theriskhere is the combination of probability and severity of the failure incident (scenario) occurring. The severity can be looked at from a system safety or a system availability point of view. Reliability for safety can be thought of as a very different focus from reliability for system availability. Availability and safety can exist in dynamic tension as keeping a system too available can be unsafe. Forcing an engineering system into a safe state too quickly can force false alarms that impede the availability of the system.
In ade minimisdefinition, the severity of failures includes the cost of spare parts, man-hours, logistics, damage (secondary failures), and downtime of machines which may cause production loss. A more complete definition of failure also can mean injury, dismemberment, and death of people within the system (witness mine accidents, industrial accidents, space shuttle failures) and the same to innocent bystanders (witness the citizenry of cities like Bhopal, Love Canal, Chernobyl, or Sendai, and other victims of the 2011 Tōhoku earthquake and tsunami)—in this case, reliability engineering becomes system safety. What is acceptable is determined by the managing authority or customers or the affected communities. Residual risk is the risk that is left over after all reliability activities have finished, and includes the unidentified risk—and is therefore not completely quantifiable.
The complexity of the technical systems such as improvements of design and materials, planned inspections, fool-proof design, and backup redundancy decreases risk and increases the cost. The risk can be decreased to ALARA (as low as reasonably achievable) or ALAPA (as low as practically achievable) levels.
Implementing a reliability program is not simply a software purchase; it is not just a checklist of items that must be completed that ensure one has reliable products and processes. A reliability program is a complex learning and knowledge-based system unique to one's products and processes. It is supported by leadership, built on the skills that one develops within a team, integrated into business processes, and executed by following proven standard work practices.[15]
A reliability program plan is used to document exactly what "best practices" (tasks, methods, tools, analysis, and tests) are required for a particular (sub)system, as well as clarify customer requirements for reliability assessment. For large-scale complex systems, the reliability program plan should be a separatedocument. Resource determination for manpower and budgets for testing and other tasks is critical for a successful program. In general, the amount of work required for an effective program for complex systems is large.
A reliability program plan is essential for achieving high levels of reliability, testability,maintainability, and the resulting systemavailability, and is developed early during system development and refined over the system's life cycle. It specifies not only what the reliability engineer does, but also the tasks performed by otherstakeholders. An effective reliability program plan must be approved by top program management, which is responsible for the allocation of sufficient resources for its implementation.
A reliability program plan may also be used to evaluate and improve the availability of a system by the strategy of focusing on increasing testability & maintainability and not on reliability. Improving maintainability is generally easier than improving reliability. Maintainability estimates (repair rates) are also generally more accurate. However, because the uncertainties in the reliability estimates are in most cases very large, they are likely to dominate the availability calculation (prediction uncertainty problem), even when maintainability levels are very high. When reliability is not under control, more complicated issues may arise, like manpower (maintainers/customer service capability) shortages, spare part availability, logistic delays, lack of repair facilities, extensive retrofit and complex configuration management costs, and others. The problem of unreliability may be increased also due to the "domino effect" of maintenance-induced failures after repairs. Focusing only on maintainability is therefore not enough. If failures are prevented, none of the other issues are of any importance, and therefore reliability is generally regarded as the most important part of availability. Reliability needs to be evaluated and improved related to both availability and thetotal cost of ownership(TCO) due to the cost of spare parts, maintenance man-hours, transport costs, storage costs, part obsolete risks, etc. But, as GM and Toyota have belatedly discovered, TCO also includes the downstream liability costs when reliability calculations have not sufficiently or accurately addressed customers' bodily risks. Often a trade-off is needed between the two. There might be a maximum ratio between availability and cost of ownership. The testability of a system should also be addressed in the plan, as this is the link between reliability and maintainability. The maintenance strategy can influence the reliability of a system (e.g., by preventive and/orpredictive maintenance), although it can never bring it above the inherent reliability.
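The trade-off described above can be made concrete with the usual steady-state availability relation A = MTBF / (MTBF + MTTR); the quick comparison below uses invented numbers to show how improved repair time and improved reliability both raise availability.

```python
# Steady-state availability A = MTBF / (MTBF + MTTR); numbers are illustrative only.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(availability(1000, 10))    # ~0.990: moderate reliability, fast repair
print(availability(1000, 100))   # ~0.909: same reliability, slow repair
print(availability(5000, 100))   # ~0.980: higher reliability offsets slower repair
```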
The reliability plan should clearly provide a strategy for availability control. Whether only availability or also cost of ownership is more important depends on the use of the system. For example, a system that is a critical link in a production system—e.g., a big oil platform—is normally allowed to have a very high cost of ownership if that cost translates to even a minor increase in availability, as the unavailability of the platform results in a massive loss of revenue which can easily exceed the high cost of ownership. A proper reliability plan should always address RAMT analysis in its total context. RAMT stands for reliability, availability, maintainability/maintenance, and testability in the context of the customer's needs.
For any system, one of the first tasks of reliability engineering is to adequately specify the reliability and maintainability requirements allocated from the overallavailabilityneeds and, more importantly, derived from proper design failure analysis or preliminary prototype test results. Clear requirements (able to be designed to) should constrain the designers from designing particular unreliable items/constructions/interfaces/systems. Setting only availability, reliability, testability, or maintainability targets (e.g., max. failure rates) is not appropriate. This is a broad misunderstanding about Reliability Requirements Engineering. Reliability requirements address the system itself, including test and assessment requirements, and associated tasks and documentation. Reliability requirements are included in the appropriate system or subsystem requirements specifications, test plans, and contract statements. The creation of proper lower-level requirements is critical.[16]The provision of only quantitative minimum targets (e.g.,Mean Time Between Failure(MTBF) values or failure rates) is not sufficient for different reasons. One reason is that a full validation (related to correctness and verifiability in time) of a quantitative reliability allocation (requirement spec) on lower levels for complex systems can (often) not be made as a consequence of (1) the fact that the requirements are probabilistic, (2) the extremely high level of uncertainties involved for showing compliance with all these probabilistic requirements, and because (3) reliability is a function of time, and accurate estimates of a (probabilistic) reliability number per item are available only very late in the project, sometimes even after many years of in-service use. Compare this problem with the continuous (re-)balancing of, for example, lower-level-system mass requirements in the development of an aircraft, which is already often a big undertaking. Notice that in this case, masses do only differ in terms of only some %, are not a function of time, and the data is non-probabilistic and available already in CAD models. In the case of reliability, the levels of unreliability (failure rates) may change with factors of decades (multiples of 10) as a result of very minor deviations in design, process, or anything else.[17]The information is often not available without huge uncertainties within the development phase. This makes this allocation problem almost impossible to do in a useful, practical, valid manner that does not result in massive over- or under-specification. A pragmatic approach is therefore needed—for example: the use of general levels/classes of quantitative requirements depending only on severity of failure effects. Also, the validation of results is a far more subjective task than any other type of requirement. (Quantitative) reliability parameters—in terms of MTBF—are by far the most uncertain design parameters in any design.
Furthermore, reliability design requirements should drive a (system or part) design to incorporate features that prevent failures from occurring, or limit consequences from failure, in the first place. Besides aiding some predictions, this effort keeps the engineering effort from being distracted into a kind of accounting work. A design requirement should be precise enough so that a designer can "design to" it and can also prove—through analysis or testing—that the requirement has been achieved, and, if possible, within some stated confidence. Any type of reliability requirement should be detailed and could be derived from failure analysis (Finite-Element Stress and Fatigue analysis, Reliability Hazard Analysis, FTA, FMEA, Human Factor Analysis, Functional Hazard Analysis, etc.) or any type of reliability testing. Also, requirements are needed for verification tests (e.g., required overload stresses) and the test time needed. To derive these requirements in an effective manner, a systems engineering-based risk assessment and mitigation logic should be used. Robust hazard log systems must be created that contain detailed information on why and how systems could fail or have failed. Requirements are to be derived and tracked in this way. These practical design requirements shall drive the design and not be used only for verification purposes. These requirements (often design constraints) are in this way derived from failure analysis or preliminary tests. Understanding this difference, compared with purely quantitative (logistic) requirement specification (e.g., a failure rate / MTBF target), is paramount in the development of successful (complex) systems.[18]
The maintainability requirements address the costs of repairs as well as repair time. Testability (not to be confused with test requirements) requirements provide the link between reliability and maintainability and should address detectability of failure modes (on a particular system level), isolation levels, and the creation of diagnostics (procedures).
As indicated above, reliability engineers should also address requirements for various reliability tasks and documentation during system development, testing, production, and operation. These requirements are generally specified in the contract statement of work and depend on how much leeway the customer wishes to provide to the contractor. Reliability tasks include various analyses, planning, and failure reporting. Task selection depends on the criticality of the system as well as cost. A safety-critical system may require a formal failure reporting and review process throughout development, whereas a non-critical system may rely on final test reports. The most common reliability program tasks are documented in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting analysis and corrective action systems are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type of human error, for example in:
However, humans are also very good at detecting such failures, correcting them, and improvising when abnormal situations occur. Therefore, policies that completely rule out human actions in design and production processes to improve reliability may not be effective. Some tasks are better performed by humans and some are better performed by machines.[19]
Furthermore, human errors in management, the organization of data and information, or the misuse or abuse of items may also contribute to unreliability. This is the core reason why high levels of reliability for complex systems can only be achieved by following a robust systems engineering process with proper planning and execution of the validation and verification tasks. This also includes the careful organization of data and information sharing and creating a "reliability culture", in the same way that having a "safety culture" is paramount in the development of safety-critical systems.
Reliability prediction combines:
For existing systems, it is arguable that any attempt by a responsible program to correct the root cause of discovered failures may render the initial MTBF estimate invalid, as new assumptions (themselves subject to high error levels) about the effect of this correction must be made. Another practical issue is the general unavailability of detailed failure data; what data are available often feature inconsistent filtering of failure (feedback) data and ignore statistical errors (which are very high for rare events like reliability-related failures). Very clear guidelines must be present for counting and comparing failures related to different types of root causes (e.g., manufacturing-, maintenance-, transport-, or system-induced failures, or inherent design failures). Comparing different types of causes may lead to incorrect estimations and incorrect business decisions about the focus of improvement.
To perform a proper quantitative reliability prediction for systems may be difficult and very expensive if done by testing. At the individual part level, reliability results can often be obtained with comparatively high confidence, as testing of many sample parts might be possible using the available testing budget. Unfortunately, however, these tests may lack validity at a system level due to the assumptions made in part-level testing. Some authors emphasize the importance of initial part- or system-level testing until failure, and of learning from such failures to improve the system or part. The general conclusion is that an accurate and absolute prediction of reliability, by either field-data comparison or testing, is in most cases not possible. An exception might be failures due to wear-out problems such as fatigue failures. In the introduction to MIL-STD-785 it is written that reliability prediction should be used with great caution, if not solely for comparison in trade-off studies.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure that a product meets its reliability requirements, under its use environment, for the duration of its lifetime. DfR is implemented in the design stage of a product to proactively improve product reliability.[21] DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Reliability design begins with the development of a (system) model. Reliability and availability models use block diagrams and Fault Tree Analysis to provide a graphical means of evaluating the relationships between different parts of the system. These models may incorporate predictions based on failure rates taken from historical data. While the (input data) predictions are often not accurate in an absolute sense, they are valuable to assess relative differences in design alternatives. Maintainability parameters, for example Mean Time To Repair (MTTR), can also be used as inputs for such models.
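As a minimal sketch of how such block-diagram models propagate component reliabilities to the system level, the Python snippet below combines series and parallel blocks under the usual assumption of independent, exponentially distributed failures; the failure rates and mission time are invented for illustration and are not taken from the text.

```python
import math

def reliability(failure_rate, t):
    """Survival probability R(t) = exp(-lambda * t) for a constant failure rate."""
    return math.exp(-failure_rate * t)

def series(blocks):
    """All blocks must work: R_sys is the product of block reliabilities."""
    out = 1.0
    for r in blocks:
        out *= r
    return out

def parallel(blocks):
    """At least one block must work: R_sys = 1 - product of block unreliabilities."""
    out = 1.0
    for r in blocks:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical 10,000-hour mission with illustrative failure rates (per hour).
t = 10_000
pump = reliability(1e-5, t)                 # one pump channel
controller = reliability(2e-6, t)           # controller
redundant_pumps = parallel([pump, pump])    # two pumps in a 1oo2 arrangement

print(series([redundant_pumps, controller]))  # system reliability estimate
```

Such a sketch is most useful for comparing alternatives (e.g., one pump versus two), in line with the caveat above that the absolute numbers depend heavily on the input failure rates.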
The most important fundamental initiating causes and failure mechanisms are to be identified and analyzed with engineering tools. A diverse set of practical guidance as to performance and reliability should be provided to designers so that they can generate low-stressed designs and products that protect, or are protected against, damage and excessive wear. Proper validation of input loads (requirements) may be needed, in addition to verification for reliability "performance" by testing.
One of the most important design techniques is redundancy. This means that if one part of the system fails, there is an alternate success path, such as a backup system. The reason why this is the ultimate design choice is that high-confidence reliability evidence for new parts or systems is often not available, or is extremely expensive to obtain. By combining redundancy with a high level of failure monitoring and the avoidance of common cause failures, even a system with relatively poor single-channel (part) reliability can be made highly reliable at a system level (up to mission-critical reliability). No reliability testing may be required for this. In conjunction with redundancy, the use of dissimilar designs or manufacturing processes (e.g., via different suppliers of similar parts) for single independent channels can provide less sensitivity to quality issues (e.g., early childhood failures at a single supplier), allowing very high levels of reliability to be achieved at all moments of the development cycle (from early life to long-term). Redundancy can also be applied in systems engineering by double checking requirements, data, designs, calculations, software, and tests to overcome systematic failures.
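To illustrate why the avoidance of common cause failures matters as much as the redundancy itself, the sketch below applies a simple beta-factor split of each channel's failure rate (an assumption made here for illustration, not a method stated in the text): a fraction beta of the failure rate is taken to disable both channels at once.

```python
import math

def r_exp(lam, t):
    """Survival probability for a constant failure rate."""
    return math.exp(-lam * t)

def r_1oo2_with_ccf(lam, beta, t):
    """1-out-of-2 redundancy with a beta-factor common cause contribution.

    Independent portion of each channel's failure rate: (1 - beta) * lam
    Common cause portion (fails both channels together): beta * lam
    """
    lam_ind = (1.0 - beta) * lam
    lam_ccf = beta * lam
    r_independent_pair = 1.0 - (1.0 - r_exp(lam_ind, t)) ** 2
    return r_independent_pair * r_exp(lam_ccf, t)

lam, t = 1e-4, 8760                    # illustrative channel failure rate, one year
print(r_exp(lam, t))                   # single channel
print(r_1oo2_with_ccf(lam, 0.0, t))    # ideal redundancy, no common cause
print(r_1oo2_with_ccf(lam, 0.1, t))    # a 10% common cause fraction erodes most of the gain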
Another effective way to deal with reliability issues is to perform analysis that predicts degradation, enabling the prevention of unscheduled downtime events / failures.RCM(Reliability Centered Maintenance) programs can be used for this.
For electronic assemblies, there has been an increasing shift towards a different approach called physics of failure. This technique relies on understanding the physical static and dynamic failure mechanisms. It accounts for variation in load, strength, and stress that lead to failure with a high level of detail, made possible with the use of modern finite element method (FEM) software programs that can handle complex geometries and mechanisms such as creep, stress relaxation, fatigue, and probabilistic design (Monte Carlo Methods/DOE). The material or component can be re-designed to reduce the probability of failure and to make it more robust against such variations. Another common design technique is component derating: i.e., selecting components whose specifications significantly exceed the expected stress levels, such as using heavier gauge electrical wire than might normally be specified for the expected electric current.
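A minimal sketch of the derating idea for the wire-gauge example: select the smallest conductor whose rated current, reduced by a derating factor, still exceeds the expected load. The 60% factor and the ampacity figures below are assumptions for illustration only, not values from any standard cited here.

```python
# Hypothetical ampacity table (amps) for a few wire gauges -- illustrative only.
ampacity = {14: 15.0, 12: 20.0, 10: 30.0, 8: 40.0}

def select_gauge(load_current, derating=0.6):
    """Pick the thinnest wire whose derated ampacity still exceeds the load."""
    for gauge in sorted(ampacity, reverse=True):  # 14 AWG is thinner than 8 AWG
        if ampacity[gauge] * derating >= load_current:
            return gauge
    raise ValueError("no listed gauge satisfies the derated requirement")

# A 10 A load would nominally fit 14 AWG, but with derating a heavier 12 AWG is chosen.
print(select_gauge(10.0))
```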
Many of the tasks, techniques, and analyses used in Reliability Engineering are specific to particular industries and applications, but can commonly include:
Results from these methods are presented during reviews of part or system design, and logistics. Reliability is just one requirement among many for a complex part or system. Engineering trade-off studies are used to determine the optimum balance between reliability requirements and other constraints.
Reliability engineers, whether using quantitative or qualitative methods to describe a failure or hazard, rely on language to pinpoint the risks and enable issues to be solved. The language used must help create an orderly description of the function/item/system and its complex surroundings as it relates to the failure of these functions/items/systems. Systems engineering is very much about finding the correct words to describe the problem (and related risks), so that they can be readily solved via engineering solutions. Jack Ring said that a systems engineer's job is to "language the project." (Ring et al. 2000)[23] For part/system failures, reliability engineers should concentrate more on the "why and how", rather than on predicting "when". Understanding "why" a failure has occurred (e.g. due to over-stressed components or manufacturing issues) is far more likely to lead to improvement in the designs and processes used[4] than quantifying "when" a failure is likely to occur (e.g. via determining MTBF). To do this, first the reliability hazards relating to the part/system need to be classified and ordered (based on some form of qualitative and quantitative logic if possible) to allow for more efficient assessment and eventual improvement. This is partly done in pure language and proposition logic, but also based on experience with similar items. This can for example be seen in descriptions of events in fault tree analysis, FMEA analysis, and hazard (tracking) logs. In this sense, language and proper grammar (part of qualitative analysis) play an important role in reliability engineering, just as they do in safety engineering or, in general, within systems engineering.
Correct use of language can also be key to identifying or reducing the risks of human error, which are often the root cause of many failures. This can include proper instructions in maintenance manuals, operation manuals, emergency procedures, and others to prevent systematic human errors that may result in system failures. These should be written by trained or experienced technical authors using so-called simplified English or Simplified Technical English, where words and structure are specifically chosen and created so as to reduce ambiguity or risk of confusion (e.g., "replace the old part" could ambiguously refer to swapping a worn-out part with a non-worn-out part, or to replacing a part with one using a more recent and hopefully improved design).
Reliability modeling is the process of predicting or understanding the reliability of a component or system prior to its implementation. Two types of analysis that are often used to model a complete system's availability behavior, including effects from logistics issues like spare part provisioning, transport and manpower, are fault tree analysis and reliability block diagrams. At a component level, the same types of analyses can be used together with others. The input for the models can come from many sources, including testing, prior operational experience, field data, and data handbooks from similar or related industries. Regardless of source, all model input data must be used with great caution, as predictions are only valid in cases where the same product was used in the same context. As such, predictions are often only used to help compare alternatives.
For part level predictions, two separate fields of investigation are common:
Reliability is defined as the probability that a device will perform its intended function during a specified period of time under stated conditions. Mathematically, this may be expressed as
R(t) = \Pr\{T > t\} = \int_{t}^{\infty} f(x)\,dx,
where f(x) is the failure probability density function and t is the length of the period of time (which is assumed to start from time zero).
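For the common special case of a constant failure rate λ, the density is f(x) = λe^{-λx} and the integral above gives R(t) = e^{-λt}. A quick numerical sketch of this case (the failure rate value is illustrative only):

```python
import math

def r_exponential(lam, t):
    """R(t) = exp(-lam * t): survival probability under a constant failure rate."""
    return math.exp(-lam * t)

lam = 1e-4          # assumed failure rate, per hour
mttf = 1.0 / lam    # for the exponential case, MTTF = 1 / lambda
print(r_exponential(lam, mttf))   # ~0.368: only about 37% of units survive to the MTTF
```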
There are a few key elements of this definition:
Quantitative requirements are specified using reliability parameters. The most common reliability parameter is the mean time to failure (MTTF), which can also be specified as the failure rate (this is expressed as a frequency or conditional probability density function (PDF)) or the number of failures during a given period. These parameters may be useful for higher system levels and systems that are operated frequently (i.e. vehicles, machinery, and electronic equipment). Reliability increases as the MTTF increases. The MTTF is usually specified in hours, but can also be used with other units of measurement, such as miles or cycles. Using MTTF values on lower system levels can be very misleading, especially if they do not specify the associated failure modes and mechanisms (the F in MTTF).[17]
In other cases, reliability is specified as the probability of mission success. For example, reliability of a scheduled aircraft flight can be specified as a dimensionless probability or a percentage, as often used in system safety engineering.
A special case of mission success is the single-shot device or system. These are devices or systems that remain relatively dormant and only operate once. Examples include automobile airbags, thermal batteries and missiles. Single-shot reliability is specified as a probability of one-time success or is subsumed into a related parameter. Single-shot missile reliability may be specified as a requirement for the probability of a hit. For such systems, the probability of failure on demand (PFD) is the reliability measure – this is actually an "unavailability" number. The PFD is derived from failure rate (a frequency of occurrence) and mission time for non-repairable systems.
For repairable systems, it is obtained from failure rate, mean time to repair (MTTR), and test interval. This measure may not be unique for a given system, as this measure depends on the kind of demand. In addition to system-level requirements, reliability requirements may be specified for critical subsystems. In most cases, reliability parameters are specified with appropriate statistical confidence intervals.
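A rough sketch of the two probability-of-failure-on-demand cases just described, using the simplified low-demand approximations that are common in textbooks (these exact formulas are an assumption on my part, not necessarily the ones intended by the text):

```python
def pfd_non_repairable(failure_rate, mission_time):
    """Approximate PFD for a dormant, non-repairable item: lambda * t (valid when << 1)."""
    return failure_rate * mission_time

def pfd_avg_repairable(failure_rate, mttr, test_interval):
    """Average PFD for a periodically proof-tested, repairable item:
    undetected-failure term lambda * T / 2 plus repair downtime term lambda * MTTR."""
    return failure_rate * test_interval / 2.0 + failure_rate * mttr

print(pfd_non_repairable(1e-6, 1000))       # e.g. a one-shot device stored for 1000 h
print(pfd_avg_repairable(1e-6, 8, 4380))    # proof-tested twice a year, 8 h mean repair
```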
The purpose of reliability testing (or reliability verification) is to discover potential problems with the design as early as possible and, ultimately, provide confidence that the system meets its reliability requirements. The reliability of the product in all environments, such as expected use, transportation, or storage during the specified lifespan, should be considered.[10] Reliability testing exposes the product to natural or artificial environmental conditions in order to evaluate its performance under the environmental conditions of actual use, transportation, and storage, and to analyze and study the degree of influence of environmental factors and their mechanisms of action.[24] Environmental test equipment is used to simulate high temperature, low temperature, high humidity, and temperature changes in the climatic environment, accelerating the product's response to its use environment, in order to verify whether it reaches the expected quality in R&D, design, and manufacturing.[25]
Reliability verification is also called reliability testing; it refers to the use of modeling, statistics, and other methods to evaluate the reliability of a product based on its life span and expected performance.[26] Most products on the market require reliability testing, for example in automotive, integrated circuits, heavy machinery used to mine natural resources, and aircraft automation software.[27][28]
Reliability testing may be performed at several levels and there are different types of testing. Complex systems may be tested at component, circuit board, unit, assembly, subsystem and system levels.[29] (The test level nomenclature varies among applications.) For example, performing environmental stress screening tests at lower levels, such as piece parts or small assemblies, catches problems before they cause failures at higher levels. Testing proceeds during each level of integration through full-up system testing, developmental testing, and operational testing, thereby reducing program risk. However, testing does not mitigate unreliability risk.
With each test, both statistical type I and type II errors could be made, depending on sample size, test time, assumptions and the needed discrimination ratio. There is a risk of incorrectly rejecting a good design (type I error) and a risk of incorrectly accepting a bad design (type II error).
It is not always feasible to test all system requirements. Some systems are prohibitively expensive to test; some failure modes may take years to observe; some complex interactions result in a huge number of possible test cases; and some tests require the use of limited test ranges or other resources. In such cases, different approaches to testing can be used, such as (highly) accelerated life testing, design of experiments, and simulations.
The desired level of statistical confidence also plays a role in reliability testing. Statistical confidence is increased by increasing either the test time or the number of items tested. Reliability test plans are designed to achieve the specified reliability at the specified confidence level with the minimum number of test units and test time. Different test plans result in different levels of risk to the producer and consumer. The desired reliability, statistical confidence, and risk levels for each side influence the ultimate test plan. The customer and developer should agree in advance on how reliability requirements will be tested.
A key aspect of reliability testing is to define "failure". Although this may seem obvious, there are many situations where it is not clear whether a failure is really the fault of the system. Variations in test conditions, operator differences, weather and unexpected situations create differences between the customer and the system developer. One strategy to address this issue is to use a scoring conference process. A scoring conference includes representatives from the customer, the developer, the test organization, the reliability organization, and sometimes independent observers. The scoring conference process is defined in the statement of work. Each test case is considered by the group and "scored" as a success or failure. This scoring is the official result used by the reliability engineer.
As part of the requirements phase, the reliability engineer develops a test strategy with the customer. The test strategy makes trade-offs between the needs of the reliability organization, which wants as much data as possible, and constraints such as cost, schedule and available resources. Test plans and procedures are developed for each reliability test, and results are documented.
Reliability testing is common in the photonics industry. Examples of reliability tests of lasers are life testing and burn-in. These tests consist of the highly accelerated aging, under controlled conditions, of a group of lasers. The data collected from these life tests are used to predict laser life expectancy under the intended operating characteristics.[30]
The criteria to be tested depend on the product or process in question; the five most common components are:[31][32]
The product life span can be split into four different periods for analysis. Useful life is the estimated economic life of the product, defined as the time it can be used before the cost of repair no longer justifies its continued use. Warranty life is the period within which the product should perform its function. Design life is set during the design of the product, when the designer takes into consideration the lifetime of competitive products and customer expectations, to ensure that the product does not result in customer dissatisfaction.[34][35]
Reliability test requirements can follow from any analysis for which the first estimate of failure probability, failure mode or effect needs to be justified. Evidence can be generated with some level of confidence by testing. With software-based systems, the probability is a mix of software and hardware-based failures. Testing reliability requirements is problematic for several reasons. A single test is in most cases insufficient to generate enough statistical data. Multiple tests or long-duration tests are usually very expensive. Some tests are simply impractical, and environmental conditions can be hard to predict over a system's life-cycle.
Reliability engineering is used to design a realistic and affordable test program that provides empirical evidence that the system meets its reliability requirements. Statistical confidence levels are used to address some of these concerns. A certain parameter is expressed along with a corresponding confidence level: for example, an MTBF of 1000 hours at a 90% confidence level. From this specification, the reliability engineer can, for example, design a test with explicit criteria for the number of hours and number of failures until the requirement is met or failed. Different sorts of tests are possible.
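As a sketch of how such a test might be sized, the snippet below uses the standard relation for a failure-free, time-terminated test with exponentially distributed failure times (a constant failure rate is assumed; the 1000 h / 90% figures simply mirror the example above):

```python
import math

def zero_failure_test_hours(mtbf_target, confidence):
    """Total unit-hours of failure-free, time-terminated testing needed to demonstrate
    mtbf_target at the given confidence, assuming a constant failure rate.

    With zero failures the lower confidence bound on MTBF is T / (-ln(1 - confidence)),
    so the required total test time is T = mtbf_target * (-ln(1 - confidence)).
    (Allowing failures generalizes this via the chi-squared distribution.)
    """
    return mtbf_target * (-math.log(1.0 - confidence))

print(zero_failure_test_hours(1000, 0.90))   # ~2303 unit-hours with no failures allowed
```

The unit-hours can be split across several test articles, which is one of the trade-offs (units versus time) mentioned above.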
The combination of required reliability level and required confidence level greatly affects the development cost and the risk to both the customer and producer. Care is needed to select the best combination of requirements—e.g. cost-effectiveness. Reliability testing may be performed at various levels, such as component, subsystem and system. Also, many factors must be addressed during testing and operation, such as extreme temperature and humidity, shock, vibration, or other environmental factors (like loss of signal, cooling or power; or other catastrophes such as fire, floods, excessive heat, physical or security violations or other myriad forms of damage or degradation). For systems that must last many years, accelerated life tests may be needed.
A systematic approach to reliability testing is to first determine the reliability goal, then perform tests that are linked to performance and determine the reliability of the product.[36] A reliability verification test in modern industries should clearly determine how it relates to the product's overall reliability performance and how individual tests impact warranty cost and customer satisfaction.[37]
The purpose of accelerated life testing (ALT) is to induce field failure in the laboratory at a much faster rate by providing a harsher, but nonetheless representative, environment. In such a test, the product is expected to fail in the lab just as it would have failed in the field—but in much less time.
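One widely used life–stress relationship for temperature-driven failure mechanisms is the Arrhenius model; the sketch below computes an acceleration factor for an assumed activation energy (the 0.7 eV value and the temperatures are illustrative assumptions, not values from the text).

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(0.7, 55, 125)    # assumed activation energy, use and stress temperatures
print(af)                          # each hour at 125 C "counts as" roughly af hours at 55 C
print(1000 * af)                   # 1000 h of stress testing ~ equivalent field hours
```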
The main objective of an accelerated test is either of the following:
An accelerated testing program can be broken down into the following steps:
Common ways to determine a life stress relationship are:
Software reliability is a special aspect of reliability engineering. It focuses on foundations and techniques to make software more reliable, i.e., resilient to faults. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present-day systems. Therefore, software reliability has gained prominence within the field of system reliability.
There are significant differences, however, in how software and hardware behave.
Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state.
However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large numbers of combinations of inputs and states that are infeasible to test exhaustively. Restoring software to its original state only works until the same combination of inputs and states produces the same unintended result. Software reliability engineering must take this into account.
Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure (Shooman 1987), (Musa 2005), (Denney 2005).
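As a sketch of what such statistical models look like, the snippet below evaluates the Goel–Okumoto exponential growth model, one common member of this family; the parameter values are invented for illustration, and the cited authors' own models differ in detail.

```python
import math

def go_expected_failures(a, b, t):
    """Goel-Okumoto NHPP: expected cumulative failures mu(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def go_failure_intensity(a, b, t):
    """Instantaneous failure intensity lambda(t) = a * b * exp(-b * t)."""
    return a * b * math.exp(-b * t)

a, b = 120.0, 0.002   # assumed total fault content and per-hour detection rate
for hours in (100, 500, 2000):
    print(hours, go_expected_failures(a, b, hours), go_failure_intensity(a, b, hours))
```

In practice the parameters a and b would be fitted to the observed failure history of the test program rather than assumed.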
As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.
A common reliability metric is the number of software faults per line of code (FLOC), usually expressed as faults per thousand lines of code. This metric, along with software execution time, is key to most software reliability models and estimates. The theory is that the software reliability increases as the number of faults (or fault density) decreases. Establishing a direct connection between fault density and mean-time-between-failure is difficult, however, because of the way software faults are distributed in the code, their severity, and the probability of the combination of inputs necessary to encounter the fault. Nevertheless, fault density serves as a useful indicator for the reliability engineer. Other software metrics, such as complexity, are also used. This metric remains controversial, since changes in software development and verification practices can have dramatic impact on overall defect rates.
Software testing is an important aspect of software reliability. Even the best software development process results in some software faults that are nearly undetectable until tested. Software is tested at several levels, starting with individual units, through integration and full-up system testing. During all phases of testing, software faults are discovered, corrected, and re-tested. Reliability estimates are updated based on the fault density and other metrics. At a system level, mean-time-between-failure data can be collected and used to estimate reliability. Unlike hardware, performing exactly the same test on exactly the same software configuration does not provide increased statistical confidence. Instead, software reliability uses different metrics, such as code coverage.
The Software Engineering Institute's capability maturity model is a common means of assessing the overall software development process for reliability and quality purposes.
Structural reliability, or the reliability of structures, is the application of reliability theory to the behavior of structures. It is used in both the design and maintenance of different types of structures including concrete and steel structures.[38][39] In structural reliability studies, both loads and resistances are modeled as probabilistic variables. Using this approach the probability of failure of a structure is calculated.
Reliability for safety and reliability for availability are often closely related. Lost availability of an engineering system can cost money. If a subway system is unavailable the subway operator will lose money for each hour the system is down. The subway operator will lose more money if safety is compromised. The definition of reliability is tied to a probability of not encountering a failure. A failure can cause loss of safety, loss of availability or both. It is undesirable to lose safety or availability in a critical system.
Reliability engineering is concerned with overall minimisation of failures that could lead to financial losses for the responsible entity, whereas safety engineering focuses on minimising a specific set of failure types that in general could lead to loss of life, injury or damage to equipment.
Reliability hazards could transform into incidents leading to a loss of revenue for the company or the customer, for example due to direct and indirect costs associated with: loss of production due to system unavailability; unexpected high or low demands for spares; repair costs; man-hours; re-designs or interruptions to normal production.[40]
Safety engineering is often highly specific, relating only to certain tightly regulated industries, applications, or areas. It primarily focuses on system safety hazards that could lead to severe accidents, including loss of life, destruction of equipment, or environmental damage. As such, the related system functional reliability requirements are often extremely high. Although it deals with unwanted failures in the same sense as reliability engineering, it has less of a focus on direct costs, and is not concerned with post-failure repair actions. Another difference is the level of impact of failures on society, leading to a tendency for strict control by governments or regulatory bodies (e.g. nuclear, aerospace, defense, rail and oil industries).[40]
Safety can be increased using a 2oo2 cross-checked redundant system. Availability can be increased by using "1oo2" (1 out of 2) redundancy at a part or system level. If the two redundant elements disagree, the more permissive element will maximize availability; a 1oo2 system should never be relied on for safety. Fault-tolerant systems often rely on additional redundancy (e.g. 2oo3 voting logic) where multiple redundant elements must agree on a potentially unsafe action before it is performed. This increases both availability and safety at a system level. This is common practice in aerospace systems that need continued availability and do not have a fail-safe mode. For example, aircraft may use triple modular redundancy for flight computers and control surfaces (including occasionally different modes of operation, e.g. electrical/mechanical/hydraulic), as these need to always be operational, because there are no "safe" default positions for control surfaces such as rudders or ailerons when the aircraft is flying.
The above example of a 2oo3 fault-tolerant system increases both mission reliability and safety. However, the "basic" reliability of the system will in this case still be lower than that of a non-redundant (1oo1) or 2oo2 system. Basic reliability engineering covers all failures, including those that might not result in system failure but do result in additional cost due to maintenance repair actions, logistics, spare parts, etc. For example, the replacement or repair of one faulty channel in a 2oo3 voting system (the system is still operating, although with one failed channel it has actually become a 2oo2 system) contributes to basic unreliability but not to mission unreliability. As another example, the failure of the tail-light of an aircraft will not prevent the plane from flying (and so is not considered a mission failure), but it does need to be remedied (with a related cost, and so does contribute to the basic unreliability levels).
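The difference between these voting arrangements can be made concrete with a small k-out-of-n calculation, assuming independent, identical channels; the single-channel reliability below is an invented figure used only for illustration.

```python
from math import comb

def k_out_of_n(k, n, r):
    """Probability that at least k of n independent channels (each with reliability r) work."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

r = 0.95                      # assumed single-channel mission reliability
print(k_out_of_n(1, 1, r))    # 1oo1 baseline
print(k_out_of_n(2, 2, r))    # 2oo2: lower mission reliability than a single channel
print(k_out_of_n(2, 3, r))    # 2oo3: mission reliability above a single channel
print(k_out_of_n(3, 3, r))    # probability that no channel fails at all, which is what
                              # drives basic unreliability and maintenance cost
```

The numbers reflect the text's point: 2oo3 raises mission reliability, yet the chance that at least one of the three channels needs repair is higher than for a single channel.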
When using fault tolerant (redundant) systems or systems that are equipped with protection functions, detectability of failures and avoidance of common cause failures becomes paramount for safe functioning and/or mission reliability.
Quality often focuses on manufacturing defects during the warranty phase. Reliability looks at the failure intensity over the whole life of a product or engineering system from commissioning to decommissioning. Six Sigma has its roots in statistical control of manufacturing quality. Reliability engineering is a specialty part of systems engineering. The systems engineering process is a discovery process that is often unlike a manufacturing process. A manufacturing process is often focused on repetitive activities that achieve high quality outputs with minimum cost and time.[41]
The everyday usage term "quality of a product" is loosely taken to mean its inherent degree of excellence. In industry, a more precise definition of quality as "conformance to requirements or specifications at the start of use" is used. Assuming the final product specification adequately captures the original requirements and customer/system needs, the quality level can be measured as the fraction of product units shipped that meet specifications.[42]Manufactured goods quality often focuses on the number of warranty claims during the warranty period.
Quality is a snapshot at the start of life through the warranty period and is related to the control of lower-level product specifications. This includes time-zero defects i.e. where manufacturing mistakes escaped final Quality Control. In theory the quality level might be described by a single fraction of defective products. Reliability, as a part of systems engineering, acts as more of an ongoing assessment of failure rates over many years. Theoretically, all items will fail over an infinite period of time.[43]Defects that appear over time are referred to as reliability fallout. To describe reliability fallout a probability model that describes the fraction fallout over time is needed. This is known as the life distribution model.[42]Some of these reliability issues may be due to inherent design issues, which may exist even though the product conforms to specifications. Even items that are produced perfectly will fail over time due to one or more failure mechanisms (e.g. due to human error or mechanical, electrical, and chemical factors). These reliability issues can also be influenced by acceptable levels of variation during initial production.
Quality and reliability are, therefore, related to manufacturing. Reliability is more targeted towards clients who are focused on failures throughout the whole life of the product such as the military, airlines or railroads. Items that do not conform to product specification will generally do worse in terms of reliability (having a lower MTTF), but this does not always have to be the case. The full mathematical quantification (in statistical models) of this combined relation is in general very difficult or even practically impossible. In cases where manufacturing variances can be effectively reduced, six sigma tools have been shown to be useful to find optimal process solutions which can increase quality and reliability. Six Sigma may also help to design products that are more robust to manufacturing induced failures and infant mortality defects in engineering systems and manufactured product.
In contrast with Six Sigma, reliability engineering solutions are generally found by focusing on reliability testing and system design. Solutions are found in different ways, such as by simplifying a system to allow more of the mechanisms of failure involved to be understood; performing detailed calculations of material stress levels allowing suitable safety factors to be determined; finding possible abnormal system load conditions and using this to increase robustness of a design to manufacturing variance related failure mechanisms. Furthermore, reliability engineering uses system-level solutions, like designing redundant and fault-tolerant systems for situations with high availability needs (see Reliability engineering vs Safety engineering above).
Note: A "defect" in six-sigma/quality literature is not the same as a "failure" (Field failure | e.g. fractured item) in reliability. A six-sigma/quality defect refers generally to non-conformance with a requirement (e.g. basic functionality or a key dimension). Items can, however, fail over time, even if these requirements are all fulfilled. Quality is generally not concerned with asking the crucial question "are the requirements actually correct?", whereas reliability is.
Once systems or parts are being produced, reliability engineering attempts to monitor, assess, and correct deficiencies. Monitoring includes electronic and visual surveillance of critical parameters identified during the fault tree analysis design stage. Data collection is highly dependent on the nature of the system. Most large organizations have quality control groups that collect failure data on vehicles, equipment and machinery. Consumer product failures are often tracked by the number of returns. For systems in dormant storage or on standby, it is necessary to establish a formal surveillance program to inspect and test random samples. Any changes to the system, such as field upgrades or recall repairs, require additional reliability testing to ensure the reliability of the modification. Since it is not possible to anticipate all the failure modes of a given system, especially ones with a human element, failures will occur. The reliability program also includes a systematic root cause analysis that identifies the causal relationships involved in the failure such that effective corrective actions may be implemented. When possible, system failures and corrective actions are reported to the reliability engineering organization.
Some of the most common methods to apply to a reliability operational assessment are failure reporting, analysis, and corrective action systems (FRACAS). This systematic approach develops a reliability, safety, and logistics assessment based on failure/incident reporting, management, analysis, and corrective/preventive actions. Organizations today are adopting this method and utilizing commercial systems (such as Web-based FRACAS applications) that enable them to create a failure/incident data repository from which statistics can be derived to view accurate and genuine reliability, safety, and quality metrics.
It is extremely important for an organization to adopt a common FRACAS system for all end items. Also, it should allow test results to be captured in a practical way. Failure to adopt one easy-to-use (in terms of ease of data-entry for field engineers and repair shop engineers) and easy-to-maintain integrated system is likely to result in a failure of the FRACAS program itself.
Some of the common outputs from a FRACAS system include Field MTBF, MTTR, spares consumption, reliability growth, failure/incidents distribution by type, location, part no., serial no., and symptom.
The use of past data to predict the reliability of new comparable systems/items can be misleading as reliability is a function of the context of use and can be affected by small changes in design/manufacturing.
Systems of any significant complexity are developed by organizations of people, such as a commercial company or a government agency. The reliability engineering organization must be consistent with the company's organizational structure. For small, non-critical systems, reliability engineering may be informal. As complexity grows, the need arises for a formal reliability function. Because reliability is important to the customer, the customer may even specify certain aspects of the reliability organization.
There are several common types of reliability organizations. The project manager or chief engineer may employ one or more reliability engineers directly. In larger organizations, there is usually a product assurance or specialty engineering organization, which may include reliability, maintainability, quality, safety, human factors, logistics, etc. In such cases, the reliability engineer reports to the product assurance manager or specialty engineering manager.
In some cases, a company may wish to establish an independent reliability organization. This is desirable to ensure that reliability work, which is often expensive and time-consuming, is not unduly slighted due to budget and schedule pressures. In such cases, the reliability engineer works for the project day-to-day, but is actually employed and paid by a separate organization within the company.
Because reliability engineering is critical to early system design, it has become common for reliability engineers, however the organization is structured, to work as part of an integrated product team.
Some universities offer graduate degrees in reliability engineering. Other reliability professionals typically have a physics degree from a university or college program. Many engineering programs offer reliability courses, and some universities have entire reliability engineering programs. A reliability engineer must be registered as a professional engineer by the state or province by law, but not all reliability professionals are engineers. Reliability engineers are required in systems where public safety is at risk. There are many professional conferences and industry training programs available for reliability engineers. Several professional organizations exist for reliability engineers, including the American Society for Quality Reliability Division (ASQ-RD),[44] the IEEE Reliability Society, the American Society for Quality (ASQ),[45] and the Society of Reliability Engineers (SRE).[46]
SAE JA1000/1 Reliability Program Standard Implementation Guide: http://standards.sae.org/ja1000/1_199903/
In the UK, there are more up to date standards maintained under the sponsorship of UK MOD as Defence Standards. The relevant Standards include:
DEF STAN 00-40 Reliability and Maintainability (R&M)
DEF STAN 00-42 RELIABILITY AND MAINTAINABILITY ASSURANCE GUIDES
DEF STAN 00-43 RELIABILITY AND MAINTAINABILITY ASSURANCE ACTIVITY
DEF STAN 00-44 RELIABILITY AND MAINTAINABILITY DATA COLLECTION AND CLASSIFICATION
DEF STAN 00-45 Issue 1: RELIABILITY CENTERED MAINTENANCE
DEF STAN 00-49 Issue 1: RELIABILITY AND MAINTAINABILITY MOD GUIDE TO TERMINOLOGY DEFINITIONS
These can be obtained from DSTAN. There are also many commercial standards, produced by many organisations including the SAE, MSG, ARP, and IEE.
|
https://en.wikipedia.org/wiki/Reliability_theory
|
Media intelligence uses data mining and data science to analyze public, social and editorial media content. It refers to marketing systems that synthesize billions of online conversations into relevant information. This allows organizations to measure and manage content performance, understand trends, and drive communications and business strategy.
Media intelligence can include software as a service using big data terminology.[1] This includes questions about messaging efficiency, share of voice, audience geographical distribution, message amplification, influencer strategy, journalist outreach, creative resonance, and competitor performance in all these areas.
Media intelligence differs from business intelligence in that it uses and analyzes data outside company firewalls. Examples of such data are user-generated content on social media sites, blogs, comment fields, and wikis. It may also include other public data sources such as press releases, news, blogs, legal filings, reviews and job postings.
Media intelligence may also include competitive intelligence, wherein information gathered from publicly available sources such as social media, press releases, and news announcements is used to better understand the strategies and tactics being deployed by competing businesses.[2]
Media intelligence is enhanced by means of emerging technologies like ambient intelligence, machine learning, semantic tagging, natural language processing, sentiment analysis and machine translation.
Different media intelligence platforms use different technologies for monitoring, curating content, engaging with content, data analysis, and measurement of communications and marketing campaign success. These technology providers may obtain content by scraping it directly from websites or by connecting to the APIs provided by social media or other content platforms, which are created for third-party developers to build their own applications and services that access data. Technology companies may also get data from a data reseller.
Some social media monitoring and analytics companies use calls to data providers each time an end-user develops a query. Others archive and index social media posts to provide end users with on-demand access to historical data and enable methodologies and technologies that leverage network and relational data. Additional monitoring companies use crawlers and spidering technology to find keyword references, with techniques known as semantic analysis or natural language processing. A basic implementation involves curating data from social media on a large scale and analyzing the results to make sense of it.[3]
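A toy sketch of that last step, turning a batch of collected posts into simple mention counts, share of voice, and sentiment tallies; the posts, brand names, and word lists are invented, and real platforms would use trained NLP models rather than keyword lists.

```python
posts = [
    "Loving the new AcmePhone camera, great battery too",
    "AcmePhone update broke my bluetooth, terrible support",
    "Switched from AcmePhone to BetaPhone and could not be happier",
]
brands = ["AcmePhone", "BetaPhone"]                      # hypothetical brands being tracked
positive, negative = {"loving", "great", "happier"}, {"broke", "terrible"}

# Count brand mentions and derive a crude share-of-voice figure.
mentions = {b: sum(b.lower() in p.lower() for p in posts) for b in brands}
total = sum(mentions.values())
share_of_voice = {b: mentions[b] / total for b in brands}

# Tally posts containing at least one positive or negative keyword.
sentiment = {"positive": 0, "negative": 0}
for p in posts:
    words = set(p.lower().split())
    sentiment["positive"] += bool(words & positive)
    sentiment["negative"] += bool(words & negative)

print(mentions, share_of_voice, sentiment)
```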
|
https://en.wikipedia.org/wiki/Media_intelligence
|
In Western art history, mise en abyme (French pronunciation: [mizɑ̃n‿abim]; also mise en abîme) is the technique of placing a copy of an image within itself, often in a way that suggests an infinitely recurring sequence. In film theory and literary theory, it refers to the story within a story technique.
The term is derived from heraldry, and means placed into abyss (exact middle of a shield). It was first appropriated for modern criticism by the French author André Gide. A common sense of the phrase is the visual experience of standing between two mirrors and seeing an infinite reproduction of one's image.[1] Another is the Droste effect, in which a picture appears within itself, in a place where a similar picture would realistically be expected to appear.[2] The Droste effect is named after the 1904 Droste cocoa package, which depicts a woman holding a tray bearing a Droste cocoa package, which bears a smaller version of her image.[3]
In the terminology of heraldry, the abyme or abisme is the center of a coat of arms. The term mise en abyme (also called inescutcheon) then meant “put/placed in the center”. It described a coat of arms that appears as a smaller shield in the center of a larger one (see Droste effect).
A complex example of mise en abyme is seen in the coat of arms of the United Kingdom for the period 1801–1837, as used by Kings George III, George IV and William IV. The crown of Charlemagne is placed en abyme within the escutcheon of Hanover, which in turn is en abyme within the arms of England, Scotland, and Ireland.
While art historians working on the early-modern period adopted this phrase and interpreted it as showing artistic "self-awareness", medievalists tended not to use it.[citation needed] Many examples, however, can be found in the pre-modern era, as in a mosaic from the Hagia Sophia dated to the year 944. To the left, Justinian I offers the Virgin Mary the Hagia Sophia, which contains the mosaic itself. To the right, Constantine I offers the city of Constantinople (now known as Istanbul), which itself contains the Hagia Sophia.
More medieval examples can be found in the collection of articles Medieval mise-en-abyme: the object depicted within itself,[4] in which Jersey Ellis conjectures that the self-references sometimes are used to strengthen the symbolism of gift-giving by documenting the act of giving on the object itself. An example of this self-referential gift-giving appears in the Stefaneschi Triptych in the Vatican Museum, which features Cardinal Giacomo Gaetani Stefaneschi as the giver of the altarpiece.[5]
In Western art history, mise en abyme is a formal technique in which an image contains a smaller copy of itself, in a sequence appearing to recur infinitely; "recursion" is another term for this. The modern meaning of the phrase originates with the author André Gide, who used it to describe self-reflexive embeddings in various art forms and to describe what he sought in his work.[4] As examples, Gide cites both paintings such as Las Meninas by Diego Velázquez and literary forms such as William Shakespeare's use of the "play within a play" device in Hamlet, where a theatrical company presents a performance for the characters that illuminates a thematic aspect of the play itself. This use of the phrase mise en abyme was picked up by scholars and popularized in the 1977 book Le récit spéculaire. Essai sur la mise en abyme by Lucien Dällenbach.[6]
Mise en abyme occurs in a text when there is a reduplication of images or concepts referring to the textual whole. Mise en abyme is a play of signifiers within a text, of sub-texts mirroring each other.[7] This mirroring can attain a level where meaning may become unstable and, in this respect, may be seen as part of the process of deconstruction. The film-within-a-film, where a film contains a plot about the making of a film, is an example of mise en abyme. The film being made within the film refers, through its mise en scène, to the real film being made. The spectator sees film equipment, stars getting ready for the take, and crew sorting out the various directorial needs. The narrative of the film within the film may directly reflect the one in the real film.[8] Examples include Björk's video Bachelorette,[9] directed by Michel Gondry, and La Nuit américaine (1973) by François Truffaut.
In film, the meaning of mise en abyme is similar to the artistic definition, but also includes the idea of a "dream within a dream". For example, a character awakens from a dream and later discovers that they are still dreaming. Activities similar to dreaming, such as unconsciousness and virtual reality, also are described as mise en abyme. This is seen in the film eXistenZ, where the two protagonists never truly know whether or not they are out of the game. It also becomes a prominent element of Charlie Kaufman's Synecdoche, New York (2008). More recent instances can be found in the films Inland Empire (2007) and Inception (2010). Classic film examples include the snow globe in Citizen Kane (1941), which provides a clue to the film's core mystery, and the discussion of Edgar Allan Poe's written works (particularly "The Purloined Letter") in the Jean-Luc Godard film Band of Outsiders (1964).
In literary criticism, mise en abyme is a type of frame story, in which the core narrative may be used to illuminate some aspect of the framing story. The term is used in deconstruction and deconstructive literary criticism as a paradigm of the intertextual nature of language, that is, of the way language never quite reaches the foundation of reality because it refers, in a frame-within-a-frame way, to another language, which refers to another language, and so forth.[10]
In video games, the first chapter of the game There Is No Game: Wrong Dimension (2020) is titled "Mise en abyme".
In comedy, the final act of The Inside Outtakes (2022) by Bo Burnham contains a chapter titled "Mise en abyme". It shows footage being projected onto a monitor that is captured by the camera, slightly delayed at each step. This effect highlights the disconnection between Burnham and the project during the artistic process.[citation needed]
|
https://en.wikipedia.org/wiki/Mise_en_abyme
|
A polymorphic engine (sometimes called mutation engine or mutating engine) is a software component that uses polymorphic code to alter the payload while preserving the same functionality.
Polymorphic engines are used almost exclusively in malware, with the purpose of being harder for antivirus software to detect. They do so either by encrypting or obfuscating the malware payload.
One common deployment is a file binder that weaves malware into normal files, such as office documents. Since this type of malware is usually polymorphic, it is also known as a polymorphic packer.
The engine of the Virut botnet is an example of a polymorphic engine.[1]
|
https://en.wikipedia.org/wiki/Polymorphic_engine
|
Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, the effects studied and optimized are related to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor.[2][3] Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly useful in limit state design) and manufacturing.
When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.[4]
Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.[4][5]
Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.[4]
Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.[6]
The first source of variability is statistical, due to the limitations of having a finite sample size to estimate parameters such as yield stress, Young's modulus, and true strain.[7] Measurement uncertainty is the most easily minimized of these three sources, as variance is proportional to the inverse of the sample size.
We can represent variance due to measurement uncertainties as a corrective factor B, which is multiplied by the true mean X to yield the measured mean X̄. Equivalently, X̄ = B̄X.
This yields the resultB¯=X¯X{\displaystyle {\bar {B}}={\frac {\bar {X}}{X}}}, and the variance of the corrective factorB{\displaystyle B}is given as:
Var[B]=Var[X¯]X=Var[X]nX{\displaystyle Var[B]={\frac {Var[{\bar {X}}]}{X}}={\frac {Var[X]}{nX}}}
whereB{\displaystyle B}is the correction factor,X{\displaystyle X}is the true mean,X¯{\displaystyle {\bar {X}}}is the measured mean, andn{\displaystyle n}is the number of measurements made.[6]
The second source of variability stems from the inaccuracies and uncertainties of the model used to calculate such parameters. These include the physical models we use to understand loading and their associated effects in materials. The uncertainty from the model of a physical measurable can be determined if both theoretical values according to the model and experimental results are available.
The measured valueH^(ω){\displaystyle {\hat {H}}(\omega )}is equivalent to the theoretical model predictionH(ω){\displaystyle H(\omega )}multiplied by a model error ofϕ(ω){\displaystyle \phi (\omega )}, plus the experimental errorε(ω){\displaystyle \varepsilon (\omega )}.[8]Equivalently,
H^(ω)=H(ω)ϕ(ω)+ε(ω){\displaystyle {\hat {H}}(\omega )=H(\omega )\phi (\omega )+\varepsilon (\omega )}
and the model error takes the general form:
ϕ(ω)=∑i=0naiωi{\displaystyle \phi (\omega )=\sum _{i=0}^{n}a_{i}\omega ^{i}}
whereai{\displaystyle a_{i}}are coefficients of regression determined from experimental data.[8]
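As a rough illustration of how such regression coefficients might be obtained, the following sketch fits a low-order polynomial to the ratio of measured to theoretical values; the frequencies, the hypothetical model H(ω), the synthetic noise level, and the polynomial degree are all made-up assumptions, and the experimental error term is not separated out.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = np.linspace(1.0, 10.0, 50)          # hypothetical frequencies
H_theory = 1.0 / omega                      # hypothetical model prediction H(omega)
# synthetic "measurements": model value distorted by a slowly varying factor plus noise
H_measured = H_theory * (1.02 + 0.01 * omega) + rng.normal(0.0, 1e-3, omega.size)

degree = 2
a = np.polyfit(omega, H_measured / H_theory, degree)  # regression coefficients a_i
phi_hat = np.polyval(a, omega)                        # estimated model error phi(omega)
print(a)
```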
Finally, the last source of variability comes from the intrinsic variability of any physical measurable. There is a fundamental random uncertainty associated with all physical phenomena, and this variability is comparatively the most difficult to minimize. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variability.
Consider the classical approach to performingtensile testingin materials. The stress experienced by a material is given as a singular value (i.e., force applied divided by the cross-sectional area perpendicular to the loading axis). The yield stress, which is the maximum stress a material can support before plastic deformation, is also given as a singular value. Under this approach, there is a 0% chance of material failure below the yield stress, and a 100% chance of failure above it. However, these assumptions break down in the real world.
The yield stress of a material is often only known to a certain precision, meaning that there is an uncertainty and therefore a probability distribution associated with the known value.[6][8]Let the probability distribution function of the yield strength be given asf(R){\displaystyle f(R)}.
Similarly, the applied load or predicted load can also only be known to a certain precision, and the range of stress which the material will undergo is unknown as well. Let this probability distribution be given asf(S){\displaystyle f(S)}.
The probability of failure is the probability that the applied stress exceeds the yield strength; it corresponds to the overlapping region of the two distributions. Mathematically:
Pf=P(R<S)=∫−∞∞f(R)(∫R∞f(S)dS)dR{\displaystyle P_{f}=P(R<S)=\int \limits _{-\infty }^{\infty }f(R)\left(\int \limits _{R}^{\infty }f(S)\,dS\right)dR}
or equivalently, if we let the difference between yield stress and applied load equal a third functionR−S=Q{\displaystyle R-S=Q}, then:
Pf=∫−∞∞f(R)(∫R∞f(S)dS)dR=∫−∞0f(Q)dQ{\displaystyle P_{f}=\int \limits _{-\infty }^{\infty }f(R)\left(\int \limits _{R}^{\infty }f(S)\,dS\right)dR=\int \limits _{-\infty }^{0}f(Q)\,dQ}
where thevarianceof the differenceQ{\displaystyle Q}is given byσQ2=σR2+σS2{\displaystyle \sigma _{Q}^{2}=\sigma _{R}^{2}+\sigma _{S}^{2}}.
Probabilistic design principles allow a precise determination of the failure probability, whereas the classical model assumes absolutely no failure below the yield strength.[9]The classical applied load vs. yield stress model clearly has limitations, so modeling these variables with probability distributions and calculating a failure probability is a more precise approach. The probabilistic design approach allows material failure to be assessed under all loading conditions, assigning a quantitative probability of failure in place of a definitive yes or no.
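A minimal sketch of this failure-probability calculation, assuming purely for illustration that the yield strength R and the applied stress S are independent and normally distributed with made-up means and standard deviations (the probabilistic approach itself does not require normality):

```python
from scipy.stats import norm

mu_R, sigma_R = 300.0, 20.0   # hypothetical yield-strength mean / std. dev. (MPa)
mu_S, sigma_S = 250.0, 30.0   # hypothetical applied-stress mean / std. dev. (MPa)

# Q = R - S is also normal for independent normal R and S; failure means Q < 0.
mu_Q = mu_R - mu_S
sigma_Q = (sigma_R**2 + sigma_S**2) ** 0.5
p_failure = norm.cdf(0.0, loc=mu_Q, scale=sigma_Q)
print(f"estimated probability of failure: {p_failure:.4%}")
```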
In essence, probabilistic design focuses upon the prediction of the effects of variability. In order to be able to predict and calculate variability associated with model uncertainty, many methods have been devised and utilized across different disciplines to determine theoretical values for parameters such as stress and strain. Examples of theoretical models used alongside probabilistic design include:
Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable. Some methods that are used to predict the random variability of an output include:
|
https://en.wikipedia.org/wiki/Probabilistic_design
|
Incomputer science,instruction schedulingis acompiler optimizationused to improveinstruction-level parallelism, which improves performance on machines withinstruction pipelines. Put more simply, it tries to do the following without changing the meaning of the code:
The pipeline stalls can be caused by structural hazards (processor resource limit), data hazards (output of one instruction needed by another instruction) and control hazards (branching).
Instruction scheduling is typically done on a singlebasic block. In order to determine whether rearranging the block's instructions in a certain way preserves the behavior of that block, we need the concept of adata dependency. There are three types of dependencies, which also happen to be the threedata hazards: Read after Write (RAW, a "true" dependency), Write after Read (WAR, an "anti" dependency), and Write after Write (WAW, an "output" dependency).
Technically, there is a fourth type, Read after Read (RAR or "Input"): Both instructions read the same location. Input dependence does not constrain the execution order of two statements, but it is useful in scalar replacement of array elements.
To make sure we respect the three types of dependencies, we construct a dependency graph, which is adirected graphwhere each vertex is an instruction and there is an edge from I1to I2if I1must come before I2due to a dependency. If loop-carried dependencies are left out, the dependency graph is adirected acyclic graph. Then, anytopological sortof this graph is a valid instruction schedule. The edges of the graph are usually labelled with thelatencyof the dependence. This is the number of clock cycles that needs to elapse before the pipeline can proceed with the target instruction without stalling.
The simplest algorithm to find a topological sort is frequently used and is known aslist scheduling. Conceptually, it repeatedly selects a source of the dependency graph, appends it to the current instruction schedule and removes it from the graph. This may cause other vertices to become sources, which will then also be considered for scheduling. The algorithm terminates when the graph is empty.
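A minimal sketch of list scheduling; the dependency-graph format (each instruction mapped to its successors and edge latencies) and the tie-breaking heuristic (earliest possible start cycle) are illustrative choices, not a fixed convention:

```python
from collections import defaultdict

def list_schedule(deps):
    """deps maps each instruction to a list of (successor, latency) pairs."""
    indeg = defaultdict(int)
    earliest = defaultdict(int)           # earliest cycle at which each node may start
    nodes = set(deps)
    for u, succs in deps.items():
        for v, _ in succs:
            indeg[v] += 1
            nodes.add(v)
    ready = [n for n in nodes if indeg[n] == 0]   # sources of the dependency DAG
    schedule = []
    while ready:
        # heuristic: schedule the ready instruction that can start soonest
        n = min(ready, key=lambda x: earliest[x])
        ready.remove(n)
        start = earliest[n]
        schedule.append((start, n))
        for v, latency in deps.get(n, []):
            earliest[v] = max(earliest[v], start + latency)   # respect the edge latency
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)                               # v has become a source
    return schedule

# i1 feeds i2 with a 2-cycle latency and i3 with a 1-cycle latency
print(list_schedule({"i1": [("i2", 2), ("i3", 1)], "i2": [], "i3": []}))
# -> [(0, 'i1'), (1, 'i3'), (2, 'i2')]
```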
To arrive at a good schedule, stalls should be prevented. This is determined by the choice of the next instruction to be scheduled. A number of heuristics are in common use:
Instruction scheduling may be done either before or afterregister allocationor both before and after it. The advantage of doing it before register allocation is that this results in maximum parallelism. The disadvantage of doing it before register allocation is that this can result in the register allocator needing to use a number of registers exceeding those available. This will cause spill/fill code to be introduced, which will reduce the performance of the section of code in question.
If the architecture being scheduled has instruction sequences that have potentially illegal combinations (due to a lack of instruction interlocks), the instructions must be scheduled after register allocation. This second scheduling pass will also improve the placement of the spill/fill code.
If scheduling is only done after register allocation, then there will be false dependencies introduced by the register allocation that will limit the amount of instruction motion possible by the scheduler.
There are several types of instruction scheduling:
TheGNU Compiler Collectionis one compiler known to perform instruction scheduling, using the-march(both instruction set and scheduling) or-mtune(only scheduling) flags. It uses descriptions of instruction latencies and what instructions can be run in parallel (or equivalently, which "port" each instruction uses) for each microarchitecture to perform the task. This feature is available for almost all architectures that GCC supports.[2]
Until version 12.0.0, the instruction scheduling inLLVM/Clang could only accept a-march(calledtarget-cpuin LLVM parlance) switch for both instruction set and scheduling. Version 12 adds support for-mtune(tune-cpu) for x86 only.[3]
Sources of information on latency and port usage include:
LLVM'sllvm-exegesisshould be usable on all machines, especially to gather information on non-x86 ones.[6]
|
https://en.wikipedia.org/wiki/Superblock_scheduling
|
Pruningis adata compressiontechnique inmachine learningandsearch algorithmsthat reduces the size ofdecision treesby removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the finalclassifier, and hence improves predictive accuracy by the reduction ofoverfitting.
One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risksoverfittingthe training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop, because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as thehorizon effect. A common strategy is to grow the tree until each node contains a small number of instances, and then use pruning to remove nodes that do not provide additional information.[1]
Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by across-validationset. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance.
Pruning processes can be divided into two types (pre- and post-pruning).
Pre-pruningprocedures prevent a complete induction of the training set by applying a stopping criterion in the induction algorithm (e.g., maximum tree depth or information gain(Attr) > minGain). Pre-pruning methods are considered more efficient because they do not induce an entire tree first; instead, trees remain small from the start. Pre-pruning methods share a common problem, the horizon effect: the undesired premature termination of the induction by the stopping criterion.
Post-pruning(or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size of a tree but also improve its classification accuracy on unseen objects. The accuracy on the training set may deteriorate, but the classification accuracy of the tree on unseen data typically increases overall.
The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up).
These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method.
These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), or Minimum Error Pruning (MEP).
In contrast to the bottom-up method, this method starts at the root of the tree. Working downward through the structure, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, an entire sub-tree (regardless of its relevance) may be dropped. One of these representatives is pessimistic error pruning (PEP), which gives quite good results on unseen items.
One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected then the change is kept. While somewhat naive, reduced error pruning has the advantage of simplicity and speed.
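A self-contained sketch of the idea on a toy binary tree; the Node fields and the validation-set format (a list of (feature_vector, label) pairs) are assumptions made for illustration, not a standard interface:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    prediction: int                      # most popular class of the node's training subset
    feature: Optional[int] = None        # split feature index (None for a leaf)
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def predict(node, x):
    while node.left is not None:         # descend until a leaf is reached
        node = node.left if x[node.feature] <= node.threshold else node.right
    return node.prediction

def accuracy(tree, data):
    return sum(predict(tree, x) == y for x, y in data) / len(data)

def reduced_error_prune(node, root, val):
    """Bottom-up: replace a subtree by a leaf whenever validation accuracy does not drop."""
    if node is None or node.left is None:
        return
    reduced_error_prune(node.left, root, val)
    reduced_error_prune(node.right, root, val)
    before = accuracy(root, val)
    left, right = node.left, node.right
    node.left = node.right = None            # tentatively collapse the subtree to a leaf
    if accuracy(root, val) < before:         # keep the change only if accuracy
        node.left, node.right = left, right  # did not get worse; otherwise restore
```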
Cost complexity pruning generates a series of treesT0…Tm{\displaystyle T_{0}\dots T_{m}}whereT0{\displaystyle T_{0}}is the initial tree andTm{\displaystyle T_{m}}is the root alone. At stepi{\displaystyle i}, the tree is created by removing a subtree from treei−1{\displaystyle i-1}and replacing it with a leaf node with value chosen as in the tree building algorithm. The subtree that is removed is chosen as follows:
The functionprune(T,t){\displaystyle \operatorname {prune} (T,t)}defines the tree obtained by pruning the subtreest{\displaystyle t}from the treeT{\displaystyle T}. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation.
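For illustration, scikit-learn exposes minimal cost-complexity pruning through the ccp_alpha parameter and the cost_complexity_pruning_path method; the dataset and the way the best tree is selected on held-out data below are only a demonstration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Effective alphas along the pruning sequence T_0 ... T_m
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_train, y_train)

# Choose the alpha whose pruned tree scores best on the held-out split
best_alpha = max(
    path.ccp_alphas,
    key=lambda a: DecisionTreeClassifier(random_state=0, ccp_alpha=a)
    .fit(X_train, y_train)
    .score(X_val, y_val),
)
print("chosen ccp_alpha:", best_alpha)
```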
Pruning could be applied in acompression schemeof a learning algorithm to remove the redundant details without compromising the model's performances. In neural networks, pruning removes entire neurons or layers of neurons.
|
https://en.wikipedia.org/wiki/Pruning_(algorithm)
|
Inlinear algebra, thesingular value decomposition(SVD) is afactorizationof arealorcomplexmatrixinto a rotation, followed by a rescaling followed by another rotation. It generalizes theeigendecompositionof a squarenormal matrixwith an orthonormal eigenbasis to anym×n{\displaystyle m\times n}matrix. It is related to thepolar decomposition.
Specifically, the singular value decomposition of anm×n{\displaystyle m\times n}complex matrixM{\displaystyle \mathbf {M} }is a factorization of the formM=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U\Sigma V^{*}} ,}whereU{\displaystyle \mathbf {U} }is anm×m{\displaystyle m\times m}complexunitary matrix,Σ{\displaystyle \mathbf {\Sigma } }is anm×n{\displaystyle m\times n}rectangular diagonal matrixwith non-negative real numbers on the diagonal,V{\displaystyle \mathbf {V} }is ann×n{\displaystyle n\times n}complex unitary matrix, andV∗{\displaystyle \mathbf {V} ^{*}}is theconjugate transposeofV{\displaystyle \mathbf {V} }. Such decomposition always exists for any complex matrix. IfM{\displaystyle \mathbf {M} }is real, thenU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }can be guaranteed to be realorthogonalmatrices; in such contexts, the SVD is often denotedUΣVT.{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{\mathrm {T} }.}
The diagonal entriesσi=Σii{\displaystyle \sigma _{i}=\Sigma _{ii}}ofΣ{\displaystyle \mathbf {\Sigma } }are uniquely determined byM{\displaystyle \mathbf {M} }and are known as thesingular valuesofM{\displaystyle \mathbf {M} }. The number of non-zero singular values is equal to therankofM{\displaystyle \mathbf {M} }. The columns ofU{\displaystyle \mathbf {U} }and the columns ofV{\displaystyle \mathbf {V} }are called left-singular vectors and right-singular vectors ofM{\displaystyle \mathbf {M} }, respectively. They form two sets oforthonormal basesu1,…,um{\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{m}}andv1,…,vn,{\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{n},}and if they are sorted so that the singular valuesσi{\displaystyle \sigma _{i}}with value zero are all in the highest-numbered columns (or rows), the singular value decomposition can be written as
M=∑i=1rσiuivi∗,{\displaystyle \mathbf {M} =\sum _{i=1}^{r}\sigma _{i}\mathbf {u} _{i}\mathbf {v} _{i}^{*},}
wherer≤min{m,n}{\displaystyle r\leq \min\{m,n\}}is the rank ofM.{\displaystyle \mathbf {M} .}
The SVD is not unique. However, it is always possible to choose the decomposition such that the singular valuesΣii{\displaystyle \Sigma _{ii}}are in descending order. In this case,Σ{\displaystyle \mathbf {\Sigma } }(but notU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }) is uniquely determined byM.{\displaystyle \mathbf {M} .}
The term sometimes refers to thecompact SVD, a similar decompositionM=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U\Sigma V} ^{*}}in whichΣ{\displaystyle \mathbf {\Sigma } }is square diagonal of sizer×r,{\displaystyle r\times r,}wherer≤min{m,n}{\displaystyle r\leq \min\{m,n\}}is the rank ofM,{\displaystyle \mathbf {M} ,}and has only the non-zero singular values. In this variant,U{\displaystyle \mathbf {U} }is anm×r{\displaystyle m\times r}semi-unitary matrixandV{\displaystyle \mathbf {V} }is ann×r{\displaystyle n\times r}semi-unitary matrix, such thatU∗U=V∗V=Ir.{\displaystyle \mathbf {U} ^{*}\mathbf {U} =\mathbf {V} ^{*}\mathbf {V} =\mathbf {I} _{r}.}
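A short numpy illustration of the full and the thin (compact-style) decomposition on an arbitrary example matrix; note that full_matrices=False keeps min{m, n} singular values, so the compact SVD in the strict sense would additionally drop the zero ones:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(M)                           # full SVD: U is 5x5, Vh is 3x3
Ur, sr, Vrh = np.linalg.svd(M, full_matrices=False)   # thin SVD: Ur is 5x3, Vrh is 3x3

Sigma = np.zeros(M.shape)
np.fill_diagonal(Sigma, s)                            # rectangular diagonal Sigma
assert np.allclose(U @ Sigma @ Vh, M)                 # M = U Sigma V*
assert np.allclose(Ur @ np.diag(sr) @ Vrh, M)         # thin form reconstructs M as well
```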
Mathematical applications of the SVD include computing thepseudoinverse, matrix approximation, and determining the rank,range, andnull spaceof a matrix. The SVD is also extremely useful in many areas of science,engineering, andstatistics, such assignal processing,least squaresfitting of data, andprocess control.
In the special case whenM{\displaystyle \mathbf {M} }is anm×m{\displaystyle m\times m}realsquare matrix, the matricesU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}can be chosen to be realm×m{\displaystyle m\times m}matrices too. In that case, "unitary" is the same as "orthogonal". Then, interpreting both unitary matrices as well as the diagonal matrix, summarized here asA,{\displaystyle \mathbf {A} ,}as alinear transformationx↦Ax{\displaystyle \mathbf {x} \mapsto \mathbf {Ax} }of the spaceRm,{\displaystyle \mathbf {R} _{m},}the matricesU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}representrotationsorreflectionof the space, whileΣ{\displaystyle \mathbf {\Sigma } }represents thescalingof each coordinatexi{\displaystyle \mathbf {x} _{i}}by the factorσi.{\displaystyle \sigma _{i}.}Thus the SVD decomposition breaks down any linear transformation ofRm{\displaystyle \mathbf {R} ^{m}}into acompositionof three geometricaltransformations: a rotation or reflection(V∗{\displaystyle \mathbf {V} ^{*}}),followed by a coordinate-by-coordinatescaling(Σ{\displaystyle \mathbf {\Sigma } }),followed by another rotation or reflection(U{\displaystyle \mathbf {U} }).
In particular, ifM{\displaystyle \mathbf {M} }has a positive determinant, thenU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}can be chosen to be both rotations with reflections, or both rotations without reflections.[citation needed]If the determinant is negative, exactly one of them will have a reflection. If the determinant is zero, each can be independently chosen to be of either type.
If the matrixM{\displaystyle \mathbf {M} }is real but not square, namelym×n{\displaystyle m\times n}withm≠n,{\displaystyle m\neq n,}it can be interpreted as a linear transformation fromRn{\displaystyle \mathbf {R} ^{n}}toRm.{\displaystyle \mathbf {R} ^{m}.}ThenU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}can be chosen to be rotations/reflections ofRm{\displaystyle \mathbf {R} ^{m}}andRn,{\displaystyle \mathbf {R} ^{n},}respectively; andΣ,{\displaystyle \mathbf {\Sigma } ,}besides scaling the firstmin{m,n}{\displaystyle \min\{m,n\}}coordinates, also either extends the vector with zeros (whenm>n{\displaystyle m>n}) or removes trailing coordinates (whenm<n{\displaystyle m<n}), so as to turnRn{\displaystyle \mathbf {R} ^{n}}intoRm.{\displaystyle \mathbf {R} ^{m}.}
As shown in the figure, thesingular valuescan be interpreted as the magnitude of the semiaxes of anellipsein 2D. This concept can be generalized ton{\displaystyle n}-dimensionalEuclidean space, with the singular values of anyn×n{\displaystyle n\times n}square matrixbeing viewed as the magnitude of the semiaxis of ann{\displaystyle n}-dimensionalellipsoid. Similarly, the singular values of anym×n{\displaystyle m\times n}matrix can be viewed as the magnitude of the semiaxis of ann{\displaystyle n}-dimensionalellipsoidinm{\displaystyle m}-dimensional space, for example as an ellipse in a (tilted) 2D plane in a 3D space. Singular values encode magnitude of the semiaxis, while singular vectors encode direction. Seebelowfor further details.
SinceU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}are unitary, the columns of each of them form a set oforthonormal vectors, which can be regarded asbasis vectors. The matrixM{\displaystyle \mathbf {M} }maps the basis vectorVi{\displaystyle \mathbf {V} _{i}}to the stretched unit vectorσiUi.{\displaystyle \sigma _{i}\mathbf {U} _{i}.}By the definition of a unitary matrix, the same is true for their conjugate transposesU∗{\displaystyle \mathbf {U} ^{*}}andV,{\displaystyle \mathbf {V} ,}except the geometric interpretation of the singular values as stretches is lost. In short, the columns ofU,{\displaystyle \mathbf {U} ,}U∗,{\displaystyle \mathbf {U} ^{*},}V,{\displaystyle \mathbf {V} ,}andV∗{\displaystyle \mathbf {V} ^{*}}areorthonormal bases. WhenM{\displaystyle \mathbf {M} }is apositive-semidefiniteHermitian matrix,U{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }are both equal to the unitary matrix used to diagonalizeM.{\displaystyle \mathbf {M} .}However, whenM{\displaystyle \mathbf {M} }is not positive-semidefinite and Hermitian but stilldiagonalizable, itseigendecompositionand singular value decomposition are distinct.
BecauseU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }are unitary, we know that the columnsU1,…,Um{\displaystyle \mathbf {U} _{1},\ldots ,\mathbf {U} _{m}}ofU{\displaystyle \mathbf {U} }yield anorthonormal basisofKm{\displaystyle K^{m}}and the columnsV1,…,Vn{\displaystyle \mathbf {V} _{1},\ldots ,\mathbf {V} _{n}}ofV{\displaystyle \mathbf {V} }yield an orthonormal basis ofKn{\displaystyle K^{n}}(with respect to the standardscalar productson these spaces).
Thelinear transformation
T:{Kn→Kmx↦Mx{\displaystyle T:\left\{{\begin{aligned}K^{n}&\to K^{m}\\x&\mapsto \mathbf {M} x\end{aligned}}\right.}
has a particularly simple description with respect to these orthonormal bases: we have
T(Vi)=σiUi,i=1,…,min(m,n),{\displaystyle T(\mathbf {V} _{i})=\sigma _{i}\mathbf {U} _{i},\qquad i=1,\ldots ,\min(m,n),}
whereσi{\displaystyle \sigma _{i}}is thei{\displaystyle i}-th diagonal entry ofΣ,{\displaystyle \mathbf {\Sigma } ,}andT(Vi)=0{\displaystyle T(\mathbf {V} _{i})=0}fori>min(m,n).{\displaystyle i>\min(m,n).}
The geometric content of the SVD theorem can thus be summarized as follows: for every linear mapT:Kn→Km{\displaystyle T:K^{n}\to K^{m}}one can find orthonormal bases ofKn{\displaystyle K^{n}}andKm{\displaystyle K^{m}}such thatT{\displaystyle T}maps thei{\displaystyle i}-th basis vector ofKn{\displaystyle K^{n}}to a non-negative multiple of thei{\displaystyle i}-th basis vector ofKm,{\displaystyle K^{m},}and sends the leftover basis vectors to zero. With respect to these bases, the mapT{\displaystyle T}is therefore represented by a diagonal matrix with non-negative real diagonal entries.
To get a more visual flavor of singular values and SVD factorization – at least when working on real vector spaces – consider the sphereS{\displaystyle S}of radius one inRn.{\displaystyle \mathbf {R} ^{n}.}The linear mapT{\displaystyle T}maps this sphere onto anellipsoidinRm.{\displaystyle \mathbf {R} ^{m}.}Non-zero singular values are simply the lengths of thesemi-axesof this ellipsoid. Especially whenn=m,{\displaystyle n=m,}and all the singular values are distinct and non-zero, the SVD of the linear mapT{\displaystyle T}can be easily analyzed as a succession of three consecutive moves: consider the ellipsoidT(S){\displaystyle T(S)}and specifically its axes; then consider the directions inRn{\displaystyle \mathbf {R} ^{n}}sent byT{\displaystyle T}onto these axes. These directions happen to be mutually orthogonal. Apply first an isometryV∗{\displaystyle \mathbf {V} ^{*}}sending these directions to the coordinate axes ofRn.{\displaystyle \mathbf {R} ^{n}.}On a second move, apply anendomorphismD{\displaystyle \mathbf {D} }diagonalized along the coordinate axes and stretching or shrinking in each direction, using the semi-axes lengths ofT(S){\displaystyle T(S)}as stretching coefficients. The compositionD∘V∗{\displaystyle \mathbf {D} \circ \mathbf {V} ^{*}}then sends the unit-sphere onto an ellipsoid isometric toT(S).{\displaystyle T(S).}To define the third and last move, apply an isometryU{\displaystyle \mathbf {U} }to this ellipsoid to obtainT(S).{\displaystyle T(S).}As can be easily checked, the compositionU∘D∘V∗{\displaystyle \mathbf {U} \circ \mathbf {D} \circ \mathbf {V} ^{*}}coincides withT.{\displaystyle T.}
Consider the4×5{\displaystyle 4\times 5}matrix
M=[10002003000000002000]{\displaystyle \mathbf {M} ={\begin{bmatrix}1&0&0&0&2\\0&0&3&0&0\\0&0&0&0&0\\0&2&0&0&0\end{bmatrix}}}
A singular value decomposition of this matrix is given byUΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}
U=[0−100−1000000−100−10]Σ=[30000050000020000000]V∗=[00−100−0.2000−0.80−100000010−0.80000.2]{\displaystyle {\begin{aligned}\mathbf {U} &={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0&\color {Emerald}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0&\color {Emerald}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0&\color {Emerald}-1\\\color {Green}0&\color {Blue}0&\color {Cyan}-1&\color {Emerald}0\end{bmatrix}}\\[6pt]\mathbf {\Sigma } &={\begin{bmatrix}3&0&0&0&\color {Gray}{\mathit {0}}\\0&{\sqrt {5}}&0&0&\color {Gray}{\mathit {0}}\\0&0&2&0&\color {Gray}{\mathit {0}}\\0&0&0&\color {Red}\mathbf {0} &\color {Gray}{\mathit {0}}\end{bmatrix}}\\[6pt]\mathbf {V} ^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}0&\color {Orchid}0&\color {Orchid}0&\color {Orchid}1&\color {Orchid}0\\\color {Purple}-{\sqrt {0.8}}&\color {Purple}0&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.2}}\end{bmatrix}}\end{aligned}}}
The scaling matrixΣ{\displaystyle \mathbf {\Sigma } }is zero outside of the diagonal (grey italics) and one diagonal element is zero (red bold). Furthermore, because the matricesU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}areunitary, multiplying by their respective conjugate transposes yieldsidentity matrices, as shown below. In this case, becauseU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}are real valued, each is anorthogonal matrix.
UU∗=[1000010000100001]=I4VV∗=[1000001000001000001000001]=I5{\displaystyle {\begin{aligned}\mathbf {U} \mathbf {U} ^{*}&={\begin{bmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}}=\mathbf {I} _{4}\\[6pt]\mathbf {V} \mathbf {V} ^{*}&={\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{bmatrix}}=\mathbf {I} _{5}\end{aligned}}}
This particular singular value decomposition is not unique. For instance, we can keepU{\displaystyle \mathbf {U} }andΣ{\displaystyle \mathbf {\Sigma } }the same, but change the last two rows ofV∗{\displaystyle \mathbf {V} ^{*}}such that
V∗=[00−100−0.2000−0.80−10000.4000.5−0.1−0.4000.50.1]{\displaystyle \mathbf {V} ^{*}={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\\\color {Orchid}{\sqrt {0.4}}&\color {Orchid}0&\color {Orchid}0&\color {Orchid}{\sqrt {0.5}}&\color {Orchid}-{\sqrt {0.1}}\\\color {Purple}-{\sqrt {0.4}}&\color {Purple}0&\color {Purple}0&\color {Purple}{\sqrt {0.5}}&\color {Purple}{\sqrt {0.1}}\end{bmatrix}}}
and get an equally valid singular value decomposition. As the matrixM{\displaystyle \mathbf {M} }has rank 3, it has only 3 nonzero singular values. In taking the productUΣV∗{\displaystyle \mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}, the final column ofU{\displaystyle \mathbf {U} }and the final two rows ofV∗{\displaystyle \mathbf {V^{*}} }are multiplied by zero, so they have no effect on the matrix product, and can be replaced by any unit vectors which are orthogonal to the first three and to each other.
Thecompact SVD,M=UrΣrVr∗{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}}, eliminates these superfluous rows, columns, and singular values:
Ur=[0−10−10000000−1]Σr=[300050002]Vr∗=[00−100−0.2000−0.80−1000]{\displaystyle {\begin{aligned}\mathbf {U} _{r}&={\begin{bmatrix}\color {Green}0&\color {Blue}-1&\color {Cyan}0\\\color {Green}-1&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}0\\\color {Green}0&\color {Blue}0&\color {Cyan}-1\end{bmatrix}}\\[6pt]\mathbf {\Sigma } _{r}&={\begin{bmatrix}3&0&0\\0&{\sqrt {5}}&0\\0&0&2\end{bmatrix}}\\[6pt]\mathbf {V} _{r}^{*}&={\begin{bmatrix}\color {Violet}0&\color {Violet}0&\color {Violet}-1&\color {Violet}0&\color {Violet}0\\\color {Plum}-{\sqrt {0.2}}&\color {Plum}0&\color {Plum}0&\color {Plum}0&\color {Plum}-{\sqrt {0.8}}\\\color {Magenta}0&\color {Magenta}-1&\color {Magenta}0&\color {Magenta}0&\color {Magenta}0\end{bmatrix}}\end{aligned}}}
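The worked example can be checked numerically; the singular values returned for this matrix should be 3, √5 ≈ 2.236, 2 and 0, matching the diagonal of Σ above:

```python
import numpy as np

M = np.array([
    [1, 0, 0, 0, 2],
    [0, 0, 3, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 2, 0, 0, 0],
], dtype=float)

print(np.linalg.svd(M, compute_uv=False))   # [3.0, 2.2360..., 2.0, 0.0]
```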
A non-negative real numberσ{\displaystyle \sigma }is asingular valueforM{\displaystyle \mathbf {M} }if and only if there exist unit-length vectorsu{\displaystyle \mathbf {u} }inKm{\displaystyle K^{m}}andv{\displaystyle \mathbf {v} }inKn{\displaystyle K^{n}}such that
Mv=σu,M∗u=σv.{\displaystyle {\begin{aligned}\mathbf {Mv} &=\sigma \mathbf {u} ,\\[3mu]\mathbf {M} ^{*}\mathbf {u} &=\sigma \mathbf {v} .\end{aligned}}}
The vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }are calledleft-singularandright-singular vectorsforσ,{\displaystyle \sigma ,}respectively.
In any singular value decomposition
M=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}
the diagonal entries ofΣ{\displaystyle \mathbf {\Sigma } }are equal to the singular values ofM.{\displaystyle \mathbf {M} .}The firstp=min(m,n){\displaystyle p=\min(m,n)}columns ofU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }are, respectively, left- and right-singular vectors for the corresponding singular values. Consequently, the above theorem implies that:
A singular value for which we can find two left (or right) singular vectors that are linearly independent is calleddegenerate. Ifu1{\displaystyle \mathbf {u} _{1}}andu2{\displaystyle \mathbf {u} _{2}}are two left-singular vectors which both correspond to the singular value σ, then any normalized linear combination of the two vectors is also a left-singular vector corresponding to the singular value σ. The similar statement is true for right-singular vectors. The number of independent left and right-singular vectors coincides, and these singular vectors appear in the same columns ofU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }corresponding to diagonal elements ofΣ{\displaystyle \mathbf {\Sigma } }all with the same valueσ.{\displaystyle \sigma .}
As an exception, the left and right-singular vectors of singular value 0 comprise all unit vectors in thecokernelandkernel, respectively, ofM,{\displaystyle \mathbf {M} ,}which by therank–nullity theoremcannot be the same dimension ifm≠n.{\displaystyle m\neq n.}Even if all singular values are nonzero, ifm>n{\displaystyle m>n}then the cokernel is nontrivial, in which caseU{\displaystyle \mathbf {U} }is padded withm−n{\displaystyle m-n}orthogonal vectors from the cokernel. Conversely, ifm<n,{\displaystyle m<n,}thenV{\displaystyle \mathbf {V} }is padded byn−m{\displaystyle n-m}orthogonal vectors from the kernel. However, if the singular value of0{\displaystyle 0}exists, the extra columns ofU{\displaystyle \mathbf {U} }orV{\displaystyle \mathbf {V} }already appear as left or right-singular vectors.
Non-degenerate singular values always have unique left- and right-singular vectors, up to multiplication by a unit-phase factoreiφ{\displaystyle e^{i\varphi }}(for the real case up to a sign). Consequently, if all singular values of a square matrixM{\displaystyle \mathbf {M} }are non-degenerate and non-zero, then its singular value decomposition is unique, up to multiplication of a column ofU{\displaystyle \mathbf {U} }by a unit-phase factor and simultaneous multiplication of the corresponding column ofV{\displaystyle \mathbf {V} }by the same unit-phase factor.
In general, the SVD is unique up to arbitrary unitary transformations applied uniformly to the column vectors of bothU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }spanning the subspaces of each singular value, and up to arbitrary unitary transformations on vectors ofU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }spanning the kernel and cokernel, respectively, ofM.{\displaystyle \mathbf {M} .}
The singular value decomposition is very general in the sense that it can be applied to anym×n{\displaystyle m\times n}matrix, whereaseigenvalue decompositioncan only be applied to squarediagonalizable matrices. Nevertheless, the two decompositions are related.
IfM{\displaystyle \mathbf {M} }has SVDM=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}the following two relations hold:
M∗M=VΣ∗U∗UΣV∗=V(Σ∗Σ)V∗,MM∗=UΣV∗VΣ∗U∗=U(ΣΣ∗)U∗.{\displaystyle {\begin{aligned}\mathbf {M} ^{*}\mathbf {M} &=\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}\,\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}=\mathbf {V} (\mathbf {\Sigma } ^{*}\mathbf {\Sigma } )\mathbf {V} ^{*},\\[3mu]\mathbf {M} \mathbf {M} ^{*}&=\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}\,\mathbf {V} \mathbf {\Sigma } ^{*}\mathbf {U} ^{*}=\mathbf {U} (\mathbf {\Sigma } \mathbf {\Sigma } ^{*})\mathbf {U} ^{*}.\end{aligned}}}
The right-hand sides of these relations describe the eigenvalue decompositions of the left-hand sides. Consequently:
In the special case ofM{\displaystyle \mathbf {M} }being anormal matrix, and thus also square, thespectral theoremensures that it can beunitarilydiagonalizedusing a basis ofeigenvectors, and thus decomposed asM=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}for some unitary matrixU{\displaystyle \mathbf {U} }and diagonal matrixD{\displaystyle \mathbf {D} }with complex elementsσi{\displaystyle \sigma _{i}}along the diagonal. WhenM{\displaystyle \mathbf {M} }ispositive semi-definite,σi{\displaystyle \sigma _{i}}will be non-negative real numbers so that the decompositionM=UDU∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{*}}is also a singular value decomposition. Otherwise, it can be recast as an SVD by moving the phaseeiφ{\displaystyle e^{i\varphi }}of eachσi{\displaystyle \sigma _{i}}to either its correspondingVi{\displaystyle \mathbf {V} _{i}}orUi.{\displaystyle \mathbf {U} _{i}.}The natural connection of the SVD to non-normal matrices is through thepolar decompositiontheorem:M=SR,{\displaystyle \mathbf {M} =\mathbf {S} \mathbf {R} ,}whereS=UΣU∗{\displaystyle \mathbf {S} =\mathbf {U} \mathbf {\Sigma } \mathbf {U} ^{*}}is positive semidefinite and normal, andR=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}is unitary.
Thus, except for positive semi-definite matrices, the eigenvalue decomposition and SVD ofM,{\displaystyle \mathbf {M} ,}while related, differ: the eigenvalue decomposition isM=UDU−1,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {D} \mathbf {U} ^{-1},}whereU{\displaystyle \mathbf {U} }is not necessarily unitary andD{\displaystyle \mathbf {D} }is not necessarily positive semi-definite, while the SVD isM=UΣV∗,{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*},}whereΣ{\displaystyle \mathbf {\Sigma } }is diagonal and positive semi-definite, andU{\displaystyle \mathbf {U} }andV{\displaystyle \mathbf {V} }are unitary matrices that are not necessarily related except through the matrixM.{\displaystyle \mathbf {M} .}While onlynon-defectivesquare matrices have an eigenvalue decomposition, anym×n{\displaystyle m\times n}matrix has a SVD.
The singular value decomposition can be used for computing thepseudoinverseof a matrix. The pseudoinverse of the matrixM{\displaystyle \mathbf {M} }with singular value decompositionM=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}is
M+=VΣ+U∗,{\displaystyle \mathbf {M} ^{+}=\mathbf {V} {\boldsymbol {\Sigma }}^{+}\mathbf {U} ^{\ast },}
whereΣ+{\displaystyle {\boldsymbol {\Sigma }}^{+}}is the pseudoinverse ofΣ{\displaystyle {\boldsymbol {\Sigma }}}, which is formed by replacing every non-zero diagonal entry by itsreciprocaland transposing the resulting matrix. The pseudoinverse is one way to solvelinear least squaresproblems.
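A sketch of the pseudoinverse computed from the SVD of an arbitrary example matrix; numpy's built-in np.linalg.pinv performs the same construction:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 3))
b = rng.standard_normal(4)

U, s, Vh = np.linalg.svd(M, full_matrices=False)
s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)        # invert only the non-zero singular values
M_pinv = Vh.T.conj() @ np.diag(s_inv) @ U.T.conj()

assert np.allclose(M_pinv, np.linalg.pinv(M))
x = M_pinv @ b                                   # least-squares solution of M x ≈ b
```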
A set ofhomogeneous linear equationscan be written asAx=0{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {0} }for a matrixA{\displaystyle \mathbf {A} }and vectorx.{\displaystyle \mathbf {x} .}A typical situation is thatA{\displaystyle \mathbf {A} }is known and a non-zerox{\displaystyle \mathbf {x} }is to be determined which satisfies the equation. Such anx{\displaystyle \mathbf {x} }belongs toA{\displaystyle \mathbf {A} }'snull spaceand is sometimes called a (right) null vector ofA.{\displaystyle \mathbf {A} .}The vectorx{\displaystyle \mathbf {x} }can be characterized as a right-singular vector corresponding to a singular value ofA{\displaystyle \mathbf {A} }that is zero. This observation means that ifA{\displaystyle \mathbf {A} }is asquare matrixand has no vanishing singular value, the equation has no non-zerox{\displaystyle \mathbf {x} }as a solution. It also means that if there are several vanishing singular values, any linear combination of the corresponding right-singular vectors is a valid solution. Analogously to the definition of a (right) null vector, a non-zerox{\displaystyle \mathbf {x} }satisfyingx∗A=0{\displaystyle \mathbf {x} ^{*}\mathbf {A} =\mathbf {0} }withx∗{\displaystyle \mathbf {x} ^{*}}denoting the conjugate transpose ofx,{\displaystyle \mathbf {x} ,}is called a left null vector ofA.{\displaystyle \mathbf {A} .}
Atotal least squaresproblem seeks the vectorx{\displaystyle \mathbf {x} }that minimizes the2-normof a vectorAx{\displaystyle \mathbf {A} \mathbf {x} }under the constraint‖x‖=1.{\displaystyle \|\mathbf {x} \|=1.}The solution turns out to be the right-singular vector ofA{\displaystyle \mathbf {A} }corresponding to the smallest singular value.
Another application of the SVD is that it provides an explicit representation of therangeandnull spaceof a matrixM.{\displaystyle \mathbf {M} .}The right-singular vectors corresponding to vanishing singular values ofM{\displaystyle \mathbf {M} }span the null space ofM{\displaystyle \mathbf {M} }and the left-singular vectors corresponding to the non-zero singular values ofM{\displaystyle \mathbf {M} }span the range ofM.{\displaystyle \mathbf {M} .}For example, in the aboveexamplethe null space is spanned by the last row ofV∗{\displaystyle \mathbf {V} ^{*}}and the range is spanned by the first three columns ofU.{\displaystyle \mathbf {U} .}
As a consequence, therankofM{\displaystyle \mathbf {M} }equals the number of non-zero singular values which is the same as the number of non-zero diagonal elements inΣ{\displaystyle \mathbf {\Sigma } }. In numerical linear algebra the singular values can be used to determine theeffective rankof a matrix, asrounding errormay lead to small but non-zero singular values in a rank deficient matrix. Singular values beyond a significant gap are assumed to be numerically equivalent to zero.
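A sketch of reading off the effective rank, range and null space from the SVD of a small rank-deficient example; the tolerance rule below is one common choice, not the only one. The columns of the null-space basis are also the non-trivial solutions of the homogeneous system Ax = 0 discussed above:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],      # twice the first row, so A has rank 2
              [1., 0., 1.]])

U, s, Vh = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]   # common effective-rank tolerance
rank = int(np.sum(s > tol))

range_basis = U[:, :rank]        # left-singular vectors of the non-zero singular values
null_basis = Vh[rank:].T         # right-singular vectors of the vanishing singular values

assert np.allclose(A @ null_basis, 0.0)
print("effective rank:", rank)
```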
Some practical applications need to solve the problem ofapproximatinga matrixM{\displaystyle \mathbf {M} }with another matrixM~{\displaystyle {\tilde {\mathbf {M} }}}, said to betruncated, which has a specific rankr{\displaystyle r}. In the case that the approximation is based on minimizing theFrobenius normof the difference betweenM{\displaystyle \mathbf {M} }andM~{\displaystyle {\tilde {\mathbf {M} }}}under the constraint thatrank(M~)=r,{\displaystyle \operatorname {rank} {\bigl (}{\tilde {\mathbf {M} }}{\bigr )}=r,}it turns out that the solution is given by the SVD ofM,{\displaystyle \mathbf {M} ,}namely
M~=UΣ~V∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} {\tilde {\mathbf {\Sigma } }}\mathbf {V} ^{*},}
whereΣ~{\displaystyle {\tilde {\mathbf {\Sigma } }}}is the same matrix asΣ{\displaystyle \mathbf {\Sigma } }except that it contains only ther{\displaystyle r}largest singular values (the other singular values are replaced by zero). This is known as theEckart–Young theorem, as it was proved by those two authors in 1936 (although it was later found to have been known to earlier authors; seeStewart 1993).
One practical consequence of the low-rank approximation given by SVD is that a greyscale image represented as anm×n{\displaystyle m\times n}matrixA{\displaystyle A}can be efficiently approximated by keeping only the firstk{\displaystyle k}singular values and corresponding vectors. The truncated decomposition
Ak=UkΣkVkT{\displaystyle A_{k}=\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{T}}
gives an image which minimizes theFrobenius errorcompared to the original image. Thus, the task becomes finding a close approximationAk{\displaystyle A_{k}}that balances retaining perceptual fidelity with the number of vectors required to reconstruct the image. StoringAk{\displaystyle A_{k}}requires onlyk(n+m+1){\displaystyle k(n+m+1)}numbers compared tonm{\displaystyle nm}. This same idea extends to color images by applying this operation to each channel or stacking the channels into one matrix.
Since the singular values of most natural images decay quickly, most of their variance is often captured by a smallk{\displaystyle k}. For a 1528 × 1225 greyscale image, we can achieve a relative error of0.7%{\displaystyle 0.7\%}with as few ask=100{\displaystyle k=100}singular values.[1]In practice, however, computing the SVD can be too computationally expensive and the resulting compression is typically less storage efficient than a specialized algorithm such asJPEG.
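A sketch of the rank-k truncation in numpy; for an image, A would hold the greyscale pixel values, but an arbitrary matrix is used here:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 40))
k = 10

U, s, Vh = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k]          # best rank-k approximation (Frobenius norm)

rel_error = np.linalg.norm(A - A_k) / np.linalg.norm(A)
storage = k * (A.shape[0] + A.shape[1] + 1)       # k(m + n + 1) numbers kept
print(rel_error, storage, A.size)
```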
The SVD can be thought of as decomposing a matrix into a weighted, ordered sum of separable matrices. By separable, we mean that a matrixA{\displaystyle \mathbf {A} }can be written as anouter productof two vectorsA=u⊗v,{\displaystyle \mathbf {A} =\mathbf {u} \otimes \mathbf {v} ,}or, in coordinates,Aij=uivj.{\displaystyle A_{ij}=u_{i}v_{j}.}Specifically, the matrixM{\displaystyle \mathbf {M} }can be decomposed as,
M=∑iAi=∑iσiUi⊗Vi.{\displaystyle \mathbf {M} =\sum _{i}\mathbf {A} _{i}=\sum _{i}\sigma _{i}\mathbf {U} _{i}\otimes \mathbf {V} _{i}.}
HereUi{\displaystyle \mathbf {U} _{i}}andVi{\displaystyle \mathbf {V} _{i}}are thei{\displaystyle i}-th columns of the corresponding SVD matrices,σi{\displaystyle \sigma _{i}}are the ordered singular values, and eachAi{\displaystyle \mathbf {A} _{i}}is separable. The SVD can be used to find the decomposition of an image processing filter into separable horizontal and vertical filters. Note that the number of non-zeroσi{\displaystyle \sigma _{i}}is exactly the rank of the matrix.[citation needed]Separable models often arise in biological systems, and the SVD factorization is useful to analyze such systems. For example, some visual area V1 simple cells' receptive fields can be well described[2]by aGabor filterin the space domain multiplied by a modulation function in the time domain. Thus, given a linear filter evaluated through, for example,reverse correlation, one can rearrange the two spatial dimensions into one dimension, thus yielding a two-dimensional filter (space, time) which can be decomposed through SVD. The first column ofU{\displaystyle \mathbf {U} }in the SVD factorization is then a Gabor while the first column ofV{\displaystyle \mathbf {V} }represents the time modulation (or vice versa). One may then define an index of separability
α=σ12∑iσi2,{\displaystyle \alpha ={\frac {\sigma _{1}^{2}}{\sum _{i}\sigma _{i}^{2}}},}
which is the fraction of the power in the matrix M which is accounted for by the first separable matrix in the decomposition.[3]
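For illustration, the index is a one-liner given the singular values; the example filter below is a nearly separable matrix built as an outer product plus a little noise:

```python
import numpy as np

rng = np.random.default_rng(3)
M = np.outer([1., 2., 1.], [1., 0., -1.]) + 0.05 * rng.standard_normal((3, 3))

s = np.linalg.svd(M, compute_uv=False)
alpha = s[0]**2 / np.sum(s**2)     # close to 1 for a nearly separable filter
print(alpha)
```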
It is possible to use the SVD of a square matrixA{\displaystyle \mathbf {A} }to determine theorthogonal matrixO{\displaystyle \mathbf {O} }closest toA.{\displaystyle \mathbf {A} .}The closeness of fit is measured by theFrobenius normofO−A.{\displaystyle \mathbf {O} -\mathbf {A} .}The solution is the productUV∗.{\displaystyle \mathbf {U} \mathbf {V} ^{*}.}[4]This intuitively makes sense because an orthogonal matrix would have the decompositionUIV∗{\displaystyle \mathbf {U} \mathbf {I} \mathbf {V} ^{*}}whereI{\displaystyle \mathbf {I} }is the identity matrix, so that ifA=UΣV∗{\displaystyle \mathbf {A} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}then the nearest orthogonal matrix isO=UV∗,{\displaystyle \mathbf {O} =\mathbf {U} \mathbf {V} ^{*},}which amounts to replacing the singular values with ones. Equivalently, the solution is the unitary matrixR=UV∗{\displaystyle \mathbf {R} =\mathbf {U} \mathbf {V} ^{*}}of the Polar DecompositionM=RP=P′R{\displaystyle \mathbf {M} =\mathbf {R} \mathbf {P} =\mathbf {P} '\mathbf {R} }in either order of stretch and rotation, as described above.
A similar problem, with interesting applications inshape analysis, is theorthogonal Procrustes problem, which consists of finding an orthogonal matrixO{\displaystyle \mathbf {O} }which most closely mapsA{\displaystyle \mathbf {A} }toB.{\displaystyle \mathbf {B} .}Specifically,
O=argminΩ‖AΩ−B‖Fsubject toΩTΩ=I,{\displaystyle \mathbf {O} ={\underset {\Omega }{\operatorname {argmin} }}\|\mathbf {A} {\boldsymbol {\Omega }}-\mathbf {B} \|_{F}\quad {\text{subject to}}\quad {\boldsymbol {\Omega }}^{\operatorname {T} }{\boldsymbol {\Omega }}=\mathbf {I} ,}
where‖⋅‖F{\displaystyle \|\cdot \|_{F}}denotes the Frobenius norm.
This problem is equivalent to finding the nearest orthogonal matrix to a given matrixM=ATB{\displaystyle \mathbf {M} =\mathbf {A} ^{\operatorname {T} }\mathbf {B} }.
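A sketch of the Procrustes solution in numpy: the orthogonal matrix is U V* where U Σ V* is the SVD of A^T B; the data below are an arbitrary synthetic example (a noisy orthogonally transformed copy of A):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((10, 3))
true_R, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # some orthogonal matrix
B = A @ true_R + 0.01 * rng.standard_normal(A.shape)    # noisy transformed copy of A

U, _, Vh = np.linalg.svd(A.T @ B)
Omega = U @ Vh                                          # estimated orthogonal map
print(np.linalg.norm(Omega - true_R))                   # should be small
```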
TheKabsch algorithm(calledWahba's problemin other fields) uses SVD to compute the optimal rotation (with respect to least-squares minimization) that will align a set of points with a corresponding set of points. It is used, among other applications, to compare the structures of molecules.
The SVD can be used to construct the principal components[5]inprincipal component analysisas follows:
LetX∈RN×p{\displaystyle \mathbf {X} \in \mathbb {R} ^{N\times p}}be a data matrix where each of theN{\displaystyle N}rows is a (feature-wise) mean-centered observation, each of dimensionp{\displaystyle p}.
The SVD ofX{\displaystyle \mathbf {X} }is:X=VΣU∗{\displaystyle \mathbf {X} =\mathbf {V} {\boldsymbol {\Sigma }}\mathbf {U} ^{\ast }}
From the same reference,[6]we see thatVΣ{\displaystyle \mathbf {V} {\boldsymbol {\Sigma }}}contains the scores of the rows ofX{\displaystyle \mathbf {X} }(i.e. each observation), andU{\displaystyle \mathbf {U} }is the matrix whose columns are principal component loading vectors.
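A sketch of PCA via the SVD of a mean-centered data matrix. Note the naming mismatch: numpy returns X = U S Vh, so numpy's U plays the role of V in the notation above, and the rows of Vh are the principal-component loading vectors; the toy data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 4))  # 100 observations, 4 features
Xc = X - X.mean(axis=0)                    # feature-wise mean centering

U, s, Vh = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                             # principal-component scores of each observation
loadings = Vh.T                            # columns are the loading vectors
explained_var = s**2 / (len(Xc) - 1)       # variance carried by each component
```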
The SVD and pseudoinverse have been successfully applied tosignal processing,[7]image processing[8]andbig data(e.g., in genomic signal processing).[9][10][11][12]
The SVD is also applied extensively to the study of linearinverse problemsand is useful in the analysis of regularization methods such as that ofTikhonov. It is widely used in statistics, where it is related toprincipal component analysisand tocorrespondence analysis, and insignal processingandpattern recognition. It is also used in output-onlymodal analysis, where the non-scaledmode shapescan be determined from the singular vectors. Yet another usage islatent semantic indexingin natural-language text processing.
In general numerical computation involving linear or linearized systems, there is a universal constant that characterizes the regularity or singularity of a problem, which is the system's "condition number"κ:=σmax/σmin{\displaystyle \kappa :=\sigma _{\text{max}}/\sigma _{\text{min}}}. It often controls the error rate or convergence rate of a given computational scheme on such systems.[13][14]
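For illustration, the condition number is simply the ratio of the extreme singular values; numpy's np.linalg.cond computes the same 2-norm quantity:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])              # nearly singular, hence ill-conditioned
s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]
assert np.isclose(kappa, np.linalg.cond(A))
print(kappa)
```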
The SVD also plays a crucial role in the field ofquantum information, in a form often referred to as theSchmidt decomposition. Through it, states of two quantum systems are naturally decomposed, providing a necessary and sufficient condition for them to beentangled: the states are entangled exactly when the rank of theΣ{\displaystyle \mathbf {\Sigma } }matrix is larger than one.
One application of SVD to rather large matrices is innumerical weather prediction, whereLanczos methodsare used to estimate the most linearly quickly growing few perturbations to the central numerical weather prediction over a given initial forward time period; i.e., the singular vectors corresponding to the largest singular values of the linearized propagator for the global weather over that time interval. The output singular vectors in this case are entire weather systems. These perturbations are then run through the full nonlinear model to generate anensemble forecast, giving a handle on some of the uncertainty that should be allowed for around the current central prediction.
SVD has also been applied to reduced order modelling. The aim of reduced order modelling is to reduce the number of degrees of freedom in a complex system which is to be modeled. SVD was coupled withradial basis functionsto interpolate solutions to three-dimensional unsteady flow problems.[15]
SVD has also been used to improve gravitational waveform modeling by the ground-based gravitational-wave interferometer aLIGO.[16]SVD can help to increase the accuracy and speed of waveform generation to support gravitational-wave searches and to update two different waveform models.
Singular value decomposition is used inrecommender systemsto predict people's item ratings.[17]Distributed algorithms have been developed for the purpose of calculating the SVD on clusters of commodity machines.[18]
Low-rank SVD has been applied for hotspot detection from spatiotemporal data with application to diseaseoutbreakdetection.[19]A combination of SVD andhigher-order SVDalso has been applied for real time event detection from complex data streams (multivariate data with space and time dimensions) indisease surveillance.[20]
Inastrodynamics, the SVD and its variants are used as an option to determine suitable maneuver directions for transfer trajectory design[21]andorbital station-keeping.[22]
The SVD can be used to measure the similarity between real-valued matrices.[23]By measuring the angles between the singular vectors, the inherent two-dimensional structure of matrices is accounted for. This method was shown to outperformcosine similarityandFrobenius normin most cases, including brain activity measurements fromneuroscienceexperiments.
An eigenvalueλ{\displaystyle \lambda }of a matrixM{\displaystyle \mathbf {M} }is characterized by the algebraic relationMu=λu.{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} .}WhenM{\displaystyle \mathbf {M} }isHermitian, a variational characterization is also available. LetM{\displaystyle \mathbf {M} }be a realn×n{\displaystyle n\times n}symmetric matrix. Define
f:{Rn→Rx↦xTMx{\displaystyle f:\left\{{\begin{aligned}\mathbb {R} ^{n}&\to \mathbb {R} \\\mathbf {x} &\mapsto \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} \end{aligned}}\right.}
By theextreme value theorem, this continuous function attains a maximum at someu{\displaystyle \mathbf {u} }when restricted to the unit sphere{‖x‖=1}.{\displaystyle \{\|\mathbf {x} \|=1\}.}By theLagrange multiplierstheorem,u{\displaystyle \mathbf {u} }necessarily satisfies
∇uTMu−λ⋅∇uTu=0{\displaystyle \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {u} -\lambda \cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} =0}
for some real numberλ.{\displaystyle \lambda .}The nabla symbol,∇{\displaystyle \nabla }, is thedeloperator (differentiation with respect tox{\displaystyle \mathbf {x} }). Using the symmetry ofM{\displaystyle \mathbf {M} }we obtain
∇xTMx−λ⋅∇xTx=2(M−λI)x.{\displaystyle \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {M} \mathbf {x} -\lambda \cdot \nabla \mathbf {x} ^{\operatorname {T} }\mathbf {x} =2(\mathbf {M} -\lambda \mathbf {I} )\mathbf {x} .}
ThereforeMu=λu,{\displaystyle \mathbf {M} \mathbf {u} =\lambda \mathbf {u} ,}sou{\displaystyle \mathbf {u} }is a unit length eigenvector ofM.{\displaystyle \mathbf {M} .}For every unit length eigenvectorv{\displaystyle \mathbf {v} }ofM{\displaystyle \mathbf {M} }its eigenvalue isf(v),{\displaystyle f(\mathbf {v} ),}soλ{\displaystyle \lambda }is the largest eigenvalue ofM.{\displaystyle \mathbf {M} .}The same calculation performed on the orthogonal complement ofu{\displaystyle \mathbf {u} }gives the next largest eigenvalue and so on. The complex Hermitian case is similar; theref(x)=x∗Mx{\displaystyle f(\mathbf {x} )=\mathbf {x} ^{*}\mathbf {M} \mathbf {x} }is a real-valued function of2n{\displaystyle 2n}real variables.
Singular values are similar in that they can be described algebraically or from variational principles. However, unlike the eigenvalue case, Hermiticity or symmetry ofM{\displaystyle \mathbf {M} }is no longer required.
This section gives these two arguments for existence of singular value decomposition.
LetM{\displaystyle \mathbf {M} }be anm×n{\displaystyle m\times n}complex matrix. SinceM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }is positive semi-definite and Hermitian, by thespectral theorem, there exists ann×n{\displaystyle n\times n}unitary matrixV{\displaystyle \mathbf {V} }such that
V∗M∗MV=D¯=[D000],{\displaystyle \mathbf {V} ^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} ={\bar {\mathbf {D} }}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}},}
whereD{\displaystyle \mathbf {D} }is diagonal and positive definite, of dimensionℓ×ℓ{\displaystyle \ell \times \ell }, withℓ{\displaystyle \ell }the number of non-zero eigenvalues ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }(which can be shown to verifyℓ≤min(n,m){\displaystyle \ell \leq \min(n,m)}). Note thatV{\displaystyle \mathbf {V} }is here by definition a matrix whosei{\displaystyle i}-th column is thei{\displaystyle i}-th eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }, corresponding to the eigenvalueD¯ii{\displaystyle {\bar {\mathbf {D} }}_{ii}}. Moreover, thej{\displaystyle j}-th column ofV{\displaystyle \mathbf {V} }, forj>ℓ{\displaystyle j>\ell }, is an eigenvector ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with eigenvalueD¯jj=0{\displaystyle {\bar {\mathbf {D} }}_{jj}=0}. This can be expressed by writingV{\displaystyle \mathbf {V} }asV=[V1V2]{\displaystyle \mathbf {V} ={\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}}, where the columns ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}therefore contain the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-zero and zero eigenvalues, respectively. Using this rewriting ofV{\displaystyle \mathbf {V} }, the equation becomes:
[V1∗V2∗]M∗M[V1V2]=[V1∗M∗MV1V1∗M∗MV2V2∗M∗MV1V2∗M∗MV2]=[D000].{\displaystyle {\begin{bmatrix}\mathbf {V} _{1}^{*}\\\mathbf {V} _{2}^{*}\end{bmatrix}}\mathbf {M} ^{*}\mathbf {M} \,{\begin{bmatrix}\mathbf {V} _{1}&\!\!\mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\\\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}&\mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}\end{bmatrix}}={\begin{bmatrix}\mathbf {D} &0\\0&0\end{bmatrix}}.}
This implies that
V1∗M∗MV1=D,V2∗M∗MV2=0.{\displaystyle \mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}=\mathbf {D} ,\quad \mathbf {V} _{2}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{2}=\mathbf {0} .}
Moreover, the second equation impliesMV2=0{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} }.[24]Finally, the unitarity ofV{\displaystyle \mathbf {V} }translates, in terms ofV1{\displaystyle \mathbf {V} _{1}}andV2{\displaystyle \mathbf {V} _{2}}, into the following conditions:
V1∗V1=I1,V2∗V2=I2,V1V1∗+V2V2∗=I12,{\displaystyle {\begin{aligned}\mathbf {V} _{1}^{*}\mathbf {V} _{1}&=\mathbf {I} _{1},\\\mathbf {V} _{2}^{*}\mathbf {V} _{2}&=\mathbf {I} _{2},\\\mathbf {V} _{1}\mathbf {V} _{1}^{*}+\mathbf {V} _{2}\mathbf {V} _{2}^{*}&=\mathbf {I} _{12},\end{aligned}}}
where the subscripts on the identity matrices are used to remark that they are of different dimensions.
Let us now define
U1=MV1D−12.{\displaystyle \mathbf {U} _{1}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}.}
Then,
U1D12V1∗=MV1D−12D12V1∗=M(I−V2V2∗)=M−(MV2)V2∗=M,{\displaystyle \mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} (\mathbf {I} -\mathbf {V} _{2}\mathbf {V} _{2}^{*})=\mathbf {M} -(\mathbf {M} \mathbf {V} _{2})\mathbf {V} _{2}^{*}=\mathbf {M} ,}
sinceMV2=0.{\displaystyle \mathbf {M} \mathbf {V} _{2}=\mathbf {0} .}This can be also seen as immediate consequence of the fact thatMV1V1∗=M{\displaystyle \mathbf {M} \mathbf {V} _{1}\mathbf {V} _{1}^{*}=\mathbf {M} }. This is equivalent to the observation that if{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is the set of eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }corresponding to non-vanishing eigenvalues{λi}i=1ℓ{\displaystyle \{\lambda _{i}\}_{i=1}^{\ell }}, then{Mvi}i=1ℓ{\displaystyle \{\mathbf {M} {\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}is a set of orthogonal vectors, and{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}is a (generally not complete) set oforthonormalvectors. This matches with the matrix formalism used above denoting withV1{\displaystyle \mathbf {V} _{1}}the matrix whose columns are{vi}i=1ℓ{\displaystyle \{{\boldsymbol {v}}_{i}\}_{i=1}^{\ell }}, withV2{\displaystyle \mathbf {V} _{2}}the matrix whose columns are the eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }with vanishing eigenvalue, andU1{\displaystyle \mathbf {U} _{1}}the matrix whose columns are the vectors{λi−1/2Mvi}|i=1ℓ{\displaystyle {\bigl \{}\lambda _{i}^{-1/2}\mathbf {M} {\boldsymbol {v}}_{i}{\bigr \}}{\vphantom {|}}_{i=1}^{\ell }}.
We see that this is almost the desired result, except thatU1{\displaystyle \mathbf {U} _{1}}andV1{\displaystyle \mathbf {V} _{1}}are in general not unitary, since they might not be square. However, we do know that the number of rows ofU1{\displaystyle \mathbf {U} _{1}}is no smaller than the number of columns, since the dimension ofD{\displaystyle \mathbf {D} }is no greater than eitherm{\displaystyle m}orn{\displaystyle n}. Also, since
U1∗U1=D−12V1∗M∗MV1D−12=D−12DD−12=I1,{\displaystyle \mathbf {U} _{1}^{*}\mathbf {U} _{1}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {V} _{1}^{*}\mathbf {M} ^{*}\mathbf {M} \mathbf {V} _{1}\mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {D} ^{-{\frac {1}{2}}}\mathbf {D} \mathbf {D} ^{-{\frac {1}{2}}}=\mathbf {I_{1}} ,}
the columns inU1{\displaystyle \mathbf {U} _{1}}are orthonormal and can be extended to an orthonormal basis. This means that we can chooseU2{\displaystyle \mathbf {U} _{2}}such thatU=[U1U2]{\displaystyle \mathbf {U} ={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}}is unitary.
ForV1{\displaystyle \mathbf {V} _{1}}we already haveV2{\displaystyle \mathbf {V} _{2}}to make it unitary. Now, define
Σ=[[D12000]0],{\displaystyle \mathbf {\Sigma } ={\begin{bmatrix}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}},}
where extra zero rows are added or removed to make the number of zero rows equal the number of columns ofU2,{\displaystyle \mathbf {U} _{2},}and hence the overall dimensions ofΣ{\displaystyle \mathbf {\Sigma } }equal tom×n{\displaystyle m\times n}. Then
[U1U2][[D12000]0][V1V2]∗=[U1U2][D12V1∗0]=U1D12V1∗=M,{\displaystyle {\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}{\begin{bmatrix}\mathbf {} D^{\frac {1}{2}}&0\\0&0\end{bmatrix}}\\0\end{bmatrix}}{\begin{bmatrix}\mathbf {V} _{1}&\mathbf {V} _{2}\end{bmatrix}}^{*}={\begin{bmatrix}\mathbf {U} _{1}&\mathbf {U} _{2}\end{bmatrix}}{\begin{bmatrix}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}\\0\end{bmatrix}}=\mathbf {U} _{1}\mathbf {D} ^{\frac {1}{2}}\mathbf {V} _{1}^{*}=\mathbf {M} ,}
which is the desired result:
M=UΣV∗.{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}.}
Notice that the argument could equally begin by diagonalizingMM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}rather thanM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }(this shows directly thatMM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}andM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }have the same non-zero eigenvalues).
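The construction above translates directly into a short numerical check. The following is a minimal sketch, assuming Python with NumPy (not part of the article): it builds V1 and D from the eigendecomposition of M∗M, forms U1 = M V1 D−1/2, and verifies that U1 has orthonormal columns and that U1 D1/2 V1∗ reproduces M.
import numpy as np
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
# Eigendecomposition of the Hermitian positive semi-definite matrix M* M.
eigvals, V = np.linalg.eigh(M.conj().T @ M)     # eigenvalues returned in ascending order
order = np.argsort(eigvals)[::-1]               # re-sort in descending order
eigvals, V = eigvals[order], V[:, order]
keep = eigvals > 1e-12                          # keep only the non-zero eigenvalues
d_half = np.sqrt(eigvals[keep])                 # diagonal of D^(1/2): the singular values
V1 = V[:, keep]
U1 = M @ V1 / d_half                            # U1 = M V1 D^(-1/2)
assert np.allclose(U1.conj().T @ U1, np.eye(U1.shape[1]))   # orthonormal columns
assert np.allclose(U1 @ np.diag(d_half) @ V1.conj().T, M)   # U1 D^(1/2) V1* = M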
The singular values can also be characterized as the maxima ofuTMv,{\displaystyle \mathbf {u} ^{\mathrm {T} }\mathbf {M} \mathbf {v} ,}considered as a function ofu{\displaystyle \mathbf {u} }andv,{\displaystyle \mathbf {v} ,}over particular subspaces. The singular vectors are the values ofu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }where these maxima are attained.
LetM{\displaystyle \mathbf {M} }denote anm×n{\displaystyle m\times n}matrix with real entries. LetSk−1{\displaystyle S^{k-1}}be the unit(k−1){\displaystyle (k-1)}-sphere inRk{\displaystyle \mathbb {R} ^{k}}, and defineσ(u,v)=uTMv,{\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )=\mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} ,}u∈Sm−1,{\displaystyle \mathbf {u} \in S^{m-1},}v∈Sn−1.{\displaystyle \mathbf {v} \in S^{n-1}.}
Consider the functionσ{\displaystyle \sigma }restricted toSm−1×Sn−1.{\displaystyle S^{m-1}\times S^{n-1}.}Since bothSm−1{\displaystyle S^{m-1}}andSn−1{\displaystyle S^{n-1}}arecompactsets, theirproductis also compact. Furthermore, sinceσ{\displaystyle \sigma }is continuous, it attains a largest value for at least one pair of vectorsu{\displaystyle \mathbf {u} }inSm−1{\displaystyle S^{m-1}}andv{\displaystyle \mathbf {v} }inSn−1.{\displaystyle S^{n-1}.}This largest value is denotedσ1{\displaystyle \sigma _{1}}and the corresponding vectors are denotedu1{\displaystyle \mathbf {u} _{1}}andv1.{\displaystyle \mathbf {v} _{1}.}Sinceσ1{\displaystyle \sigma _{1}}is the largest value ofσ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}it must be non-negative. If it were negative, changing the sign of eitheru1{\displaystyle \mathbf {u} _{1}}orv1{\displaystyle \mathbf {v} _{1}}would make it positive and therefore larger.
Statement.u1{\displaystyle \mathbf {u} _{1}}andv1{\displaystyle \mathbf {v} _{1}}are left and right-singular vectors ofM{\displaystyle \mathbf {M} }with corresponding singular valueσ1.{\displaystyle \sigma _{1}.}
Proof.Similar to the eigenvalues case, by assumption the two vectors satisfy the Lagrange multiplier equation:
∇σ=∇uTMv−λ1⋅∇uTu−λ2⋅∇vTv{\displaystyle \nabla \sigma =\nabla \mathbf {u} ^{\operatorname {T} }\mathbf {M} \mathbf {v} -\lambda _{1}\cdot \nabla \mathbf {u} ^{\operatorname {T} }\mathbf {u} -\lambda _{2}\cdot \nabla \mathbf {v} ^{\operatorname {T} }\mathbf {v} }
After some algebra, this becomes
Mv1=2λ1u1+0,MTu1=0+2λ2v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=2\lambda _{1}\mathbf {u} _{1}+0,\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=0+2\lambda _{2}\mathbf {v} _{1}.\end{aligned}}}
Multiplying the first equation from the left byu1T{\displaystyle \mathbf {u} _{1}^{\textrm {T}}}and the second equation from the left byv1T{\displaystyle \mathbf {v} _{1}^{\textrm {T}}}and taking‖u‖=‖v‖=1{\displaystyle \|\mathbf {u} \|=\|\mathbf {v} \|=1}into account gives
σ1=2λ1=2λ2.{\displaystyle \sigma _{1}=2\lambda _{1}=2\lambda _{2}.}
Plugging this into the pair of equations above, we have
Mv1=σ1u1,MTu1=σ1v1.{\displaystyle {\begin{aligned}\mathbf {M} \mathbf {v} _{1}&=\sigma _{1}\mathbf {u} _{1},\\\mathbf {M} ^{\operatorname {T} }\mathbf {u} _{1}&=\sigma _{1}\mathbf {v} _{1}.\end{aligned}}}
This proves the statement.
More singular vectors and singular values can be found by maximizingσ(u,v){\displaystyle \sigma (\mathbf {u} ,\mathbf {v} )}over normalizedu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }which are orthogonal tou1{\displaystyle \mathbf {u} _{1}}andv1,{\displaystyle \mathbf {v} _{1},}respectively.
The passage from real to complex is similar to the eigenvalue case.
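A hedged numerical illustration of this characterization (Python with NumPy assumed): alternately maximizing σ(u, v) = uᵀMv over each unit sphere, where for fixed v the maximizing u is Mv/‖Mv‖ and for fixed u the maximizing v is Mᵀu/‖Mᵀu‖, drives the attained value toward the largest singular value.
import numpy as np
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 4))
v = rng.standard_normal(4)
v /= np.linalg.norm(v)
for _ in range(200):                            # alternating maximization of u^T M v
    u = M @ v
    u /= np.linalg.norm(u)                      # best unit u for this v
    v = M.T @ u
    v /= np.linalg.norm(v)                      # best unit v for this u
sigma_1 = u @ M @ v
print(sigma_1, np.linalg.svd(M, compute_uv=False)[0])   # both approximate the largest singular value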
The one-sided Jacobi algorithm is an iterative algorithm,[25]in which a matrix is iteratively transformed into a matrix with orthogonal columns. The elementary iteration is given as aJacobi rotation,
M←MJ(p,q,θ),{\displaystyle M\leftarrow MJ(p,q,\theta ),}
where the angleθ{\displaystyle \theta }of the Jacobi rotation matrixJ(p,q,θ){\displaystyle J(p,q,\theta )}is chosen such that after the rotation the columns with numbersp{\displaystyle p}andq{\displaystyle q}become orthogonal. The indices(p,q){\displaystyle (p,q)}are swept cyclically,(p=1…m,q=p+1…m){\displaystyle (p=1\dots m,q=p+1\dots m)}, wherem{\displaystyle m}is the number of columns.
After the algorithm has converged, the singular value decompositionM=USVT{\displaystyle M=USV^{T}}is recovered as follows: the matrixV{\displaystyle V}is the accumulation of Jacobi rotation matrices, the matrixU{\displaystyle U}is given bynormalisingthe columns of the transformed matrixM{\displaystyle M}, and the singular values are given as the norms of the columns of the transformed matrixM{\displaystyle M}.
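A minimal sketch of this procedure for a real matrix, assuming Python with NumPy; the rotation angle is chosen in the standard way from the column norms and inner product so that columns p and q become orthogonal, the rotations are accumulated in V, and the singular values and U are read off from the transformed columns as described above.
import numpy as np
def one_sided_jacobi_svd(A, sweeps=30, tol=1e-14):
    # Sketch of the one-sided (Hestenes) Jacobi SVD for a real matrix A.
    M = np.array(A, dtype=float)
    n = M.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = M[:, p] @ M[:, p]
                beta = M[:, q] @ M[:, q]
                gamma = M[:, p] @ M[:, q]
                if abs(gamma) <= tol * np.sqrt(alpha * beta):
                    continue                    # columns p and q already orthogonal
                converged = False
                zeta = (beta - alpha) / (2.0 * gamma)
                # Smallest-magnitude root of t^2 + 2*zeta*t - 1 = 0, in a numerically stable form.
                t = 1.0 / (zeta + np.hypot(1.0, zeta)) if zeta >= 0 else 1.0 / (zeta - np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                # Rotate columns p and q of M, and accumulate the rotation in V.
                Mp, Mq = M[:, p].copy(), M[:, q].copy()
                M[:, p], M[:, q] = c * Mp - s * Mq, s * Mp + c * Mq
                Vp, Vq = V[:, p].copy(), V[:, q].copy()
                V[:, p], V[:, q] = c * Vp - s * Vq, s * Vp + c * Vq
        if converged:
            break
    sigma = np.linalg.norm(M, axis=0)           # singular values = norms of the rotated columns
    U = M / np.where(sigma > 0, sigma, 1.0)     # U = normalised columns
    return U, sigma, V
A = np.random.default_rng(2).standard_normal((5, 4))
U, s, V = one_sided_jacobi_svd(A)
print(np.allclose((U * s) @ V.T, A))            # True: A = U diag(s) V^T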
The two-sided Jacobi SVD algorithm—a generalization of theJacobi eigenvalue algorithm—is an iterative algorithm where a square matrix is iteratively transformed into a diagonal matrix. If the matrix is not square, theQR decompositionis performed first and then the algorithm is applied to theR{\displaystyle R}matrix. The elementary iteration zeroes a pair of off-diagonal elements by first applying aGivens rotationto symmetrize the pair of elements and then applying aJacobi transformationto zero them,
M←JTGMJ{\displaystyle M\leftarrow J^{T}GMJ}
whereG{\displaystyle G}is the Givens rotation matrix with the angle chosen such that the given pair of off-diagonal elements become equal after the rotation, and whereJ{\displaystyle J}is the Jacobi transformation matrix that zeroes these off-diagonal elements. The iteration proceeds exactly as in the Jacobi eigenvalue algorithm: by cyclic sweeps over all off-diagonal elements.
After the algorithm has converged the resulting diagonal matrix contains the singular values.
The matricesU{\displaystyle U}andV{\displaystyle V}are accumulated as follows:U←UGTJ{\displaystyle U\leftarrow UG^{T}J},V←VJ{\displaystyle V\leftarrow VJ}.
The singular value decomposition can be computed using the following observations: the left-singular vectors ofM{\displaystyle \mathbf {M} }are a set of orthonormal eigenvectors ofMM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}; the right-singular vectors ofM{\displaystyle \mathbf {M} }are a set of orthonormal eigenvectors ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }; and the non-zero singular values ofM{\displaystyle \mathbf {M} }(found on the diagonal entries ofΣ{\displaystyle \mathbf {\Sigma } }) are the square roots of the non-zero eigenvalues of bothM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }andMM∗{\displaystyle \mathbf {M} \mathbf {M} ^{*}}.
The SVD of a matrixM{\displaystyle \mathbf {M} }is typically computed by a two-step procedure. In the first step, the matrix is reduced to abidiagonal matrix. This takesorderO(mn2){\displaystyle O(mn^{2})}floating-point operations (flop), assuming thatm≥n.{\displaystyle m\geq n.}The second step is to compute the SVD of the bidiagonal matrix. This step can only be done with aniterative method(as witheigenvalue algorithms). However, in practice it suffices to compute the SVD up to a certain precision, like themachine epsilon. If this precision is considered constant, then the second step takesO(n){\displaystyle O(n)}iterations, each costingO(n){\displaystyle O(n)}flops. Thus, the first step is more expensive, and the overall cost isO(mn2){\displaystyle O(mn^{2})}flops (Trefethen & Bau III 1997, Lecture 31).
The first step can be done usingHouseholder reflectionsfor a cost of4mn2−4n3/3{\displaystyle 4mn^{2}-4n^{3}/3}flops, assuming that only the singular values are needed and not the singular vectors. Ifm{\displaystyle m}is much larger thann{\displaystyle n}then it is advantageous to first reduce the matrixM{\displaystyle \mathbf {M} }to a triangular matrix with theQR decompositionand then use Householder reflections to further reduce the matrix to bidiagonal form; the combined cost is2mn2+2n3{\displaystyle 2mn^{2}+2n^{3}}flops (Trefethen & Bau III 1997, Lecture 31).
The second step can be done by a variant of theQR algorithmfor the computation of eigenvalues, which was first described byGolub & Kahan (1965). TheLAPACKsubroutine DBDSQR[26]implements this iterative method, with some modifications to cover the case where the singular values are very small (Demmel & Kahan 1990). Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD[27]routine for the computation of the singular value decomposition.
The same algorithm is implemented in theGNU Scientific Library(GSL). The GSL also offers an alternative method that uses a one-sidedJacobi orthogonalizationin step 2 (GSL Team 2007). This method computes the SVD of the bidiagonal matrix by solving a sequence of2×2{\displaystyle 2\times 2}SVD problems, similar to how theJacobi eigenvalue algorithmsolves a sequence of2×2{\displaystyle 2\times 2}eigenvalue problems (Golub & Van Loan 1996, §8.6.3). Yet another method for step 2 uses the idea ofdivide-and-conquer eigenvalue algorithms(Trefethen & Bau III 1997, Lecture 31).
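For users who simply want these routines, both LAPACK paths are exposed through SciPy (assumed here, not part of the article): scipy.linalg.svd accepts lapack_driver='gesvd' for the Golub–Kahan/QR-based routine and the default 'gesdd' for the divide-and-conquer variant.
import numpy as np
from scipy.linalg import svd
M = np.random.default_rng(3).standard_normal((200, 50))
U1, s1, Vt1 = svd(M, lapack_driver='gesvd')     # bidiagonalisation + QR-type iteration (xGESVD)
U2, s2, Vt2 = svd(M, lapack_driver='gesdd')     # divide-and-conquer variant (xGESDD)
print(np.allclose(s1, s2))                      # the two drivers agree on the singular values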
There is an alternative way that does not explicitly use the eigenvalue decomposition.[28]Usually the singular value problem of a matrixM{\displaystyle \mathbf {M} }is converted into an equivalent symmetric eigenvalue problem such asMM∗,{\displaystyle \mathbf {M} \mathbf {M} ^{*},}M∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}or
[0MM∗0].{\displaystyle {\begin{bmatrix}\mathbf {0} &\mathbf {M} \\\mathbf {M} ^{*}&\mathbf {0} \end{bmatrix}}.}
The approaches that use eigenvalue decompositions are based on theQR algorithm, which is well-developed to be stable and fast.
Note that the singular values are real and right- and left- singular vectors are not required to form similarity transformations. One can iteratively alternate between theQR decompositionand theLQ decompositionto find the real diagonalHermitian matrices. TheQR decompositiongivesM⇒QR{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {R} }and theLQ decompositionofR{\displaystyle \mathbf {R} }givesR⇒LP∗.{\displaystyle \mathbf {R} \Rightarrow \mathbf {L} \mathbf {P} ^{*}.}Thus, at every iteration, we haveM⇒QLP∗,{\displaystyle \mathbf {M} \Rightarrow \mathbf {Q} \mathbf {L} \mathbf {P} ^{*},}updateM⇐L{\displaystyle \mathbf {M} \Leftarrow \mathbf {L} }and repeat the orthogonalizations. Eventually,[clarification needed]this iteration betweenQR decompositionandLQ decompositionproduces left- and right- unitary singular matrices. This approach cannot readily be accelerated, as the QR algorithm can with spectral shifts or deflation. This is because the shift method is not easily defined without using similarity transformations. However, this iterative approach is very simple to implement, so is a good choice when speed does not matter. This method also provides insight into how purely orthogonal/unitary transformations can obtain the SVD.
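A sketch of this QR/LQ alternation for a real matrix (Python with NumPy assumed); each iteration preserves M = U L Vᵀ while driving L toward a diagonal matrix whose entries are, up to sign, the singular values.
import numpy as np
def svd_by_qr_lq(M, iters=200):
    # Alternate QR and LQ decompositions, accumulating the orthogonal factors.
    U = np.eye(M.shape[0])
    V = np.eye(M.shape[1])
    L = np.array(M, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(L, mode='complete')      # QR step: L = Q R
        U = U @ Q
        P, Lt = np.linalg.qr(R.T, mode='complete')   # LQ step on R via the QR of R^T: R = Lt^T P^T
        V = V @ P
        L = Lt.T
    return U, L, V                                   # M = U L V^T, with L (nearly) diagonal
M = np.random.default_rng(4).standard_normal((5, 3))
U, L, V = svd_by_qr_lq(M)
print(np.allclose(U @ L @ V.T, M))                   # the factorization invariant holds
print(np.round(np.abs(np.diag(L)), 6))               # |diagonal| approximates the singular values
print(np.round(np.linalg.svd(M, compute_uv=False), 6))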
The singular values of a2×2{\displaystyle 2\times 2}matrix can be found analytically. Let the matrix beM=z0I+z1σ1+z2σ2+z3σ3{\displaystyle \mathbf {M} =z_{0}\mathbf {I} +z_{1}\sigma _{1}+z_{2}\sigma _{2}+z_{3}\sigma _{3}}
wherezi∈C{\displaystyle z_{i}\in \mathbb {C} }are complex numbers that parameterize the matrix,I{\displaystyle \mathbf {I} }is the identity matrix, andσi{\displaystyle \sigma _{i}}denote thePauli matrices. Then its two singular values are given by
σ±=|z0|2+|z1|2+|z2|2+|z3|2±(|z0|2+|z1|2+|z2|2+|z3|2)2−|z02−z12−z22−z32|2=|z0|2+|z1|2+|z2|2+|z3|2±2(Rez0z1∗)2+(Rez0z2∗)2+(Rez0z3∗)2+(Imz1z2∗)2+(Imz2z3∗)2+(Imz3z1∗)2{\displaystyle {\begin{aligned}\sigma _{\pm }&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm {\sqrt {{\bigl (}|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}{\bigr )}^{2}-|z_{0}^{2}-z_{1}^{2}-z_{2}^{2}-z_{3}^{2}|^{2}}}}}\\&={\sqrt {|z_{0}|^{2}+|z_{1}|^{2}+|z_{2}|^{2}+|z_{3}|^{2}\pm 2{\sqrt {(\operatorname {Re} z_{0}z_{1}^{*})^{2}+(\operatorname {Re} z_{0}z_{2}^{*})^{2}+(\operatorname {Re} z_{0}z_{3}^{*})^{2}+(\operatorname {Im} z_{1}z_{2}^{*})^{2}+(\operatorname {Im} z_{2}z_{3}^{*})^{2}+(\operatorname {Im} z_{3}z_{1}^{*})^{2}}}}}\end{aligned}}}
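A brief check of this closed-form expression (Python with NumPy assumed), comparing it against a general-purpose SVD routine for a random complex 2 × 2 matrix written in the Pauli parameterization above.
import numpy as np
I2 = np.eye(2, dtype=complex)                   # identity and the three Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
rng = np.random.default_rng(5)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # z0, z1, z2, z3
M = z[0] * I2 + z[1] * s1 + z[2] * s2 + z[3] * s3
S = np.sum(np.abs(z) ** 2)                       # |z0|^2 + |z1|^2 + |z2|^2 + |z3|^2
det_sq = abs(z[0] ** 2 - z[1] ** 2 - z[2] ** 2 - z[3] ** 2) ** 2   # |det M|^2
rad = np.sqrt(S ** 2 - det_sq)
print(np.sqrt(S + rad), np.sqrt(S - rad))        # sigma_+ and sigma_- from the formula
print(np.linalg.svd(M, compute_uv=False))        # agrees with the values above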
In applications it is quite unusual for the full SVD, including a full unitary decomposition of the null-space of the matrix, to be required. Instead, it is often sufficient (as well as faster, and more economical for storage) to compute a reduced version of the SVD. The following can be distinguished for anm×n{\displaystyle m\times n}matrixM{\displaystyle \mathbf {M} }of rankr{\displaystyle r}:
The thin, or economy-sized, SVD of a matrixM{\displaystyle \mathbf {M} }is given by[29]
M=UkΣkVk∗,{\displaystyle \mathbf {M} =\mathbf {U} _{k}\mathbf {\Sigma } _{k}\mathbf {V} _{k}^{*},}
wherek=min(m,n),{\displaystyle k=\min(m,n),}the matricesUk{\displaystyle \mathbf {U} _{k}}andVk{\displaystyle \mathbf {V} _{k}}contain only the firstk{\displaystyle k}columns ofU{\displaystyle \mathbf {U} }andV,{\displaystyle \mathbf {V} ,}andΣk{\displaystyle \mathbf {\Sigma } _{k}}contains only the firstk{\displaystyle k}singular values fromΣ.{\displaystyle \mathbf {\Sigma } .}The matrixUk{\displaystyle \mathbf {U} _{k}}is thusm×k,{\displaystyle m\times k,}Σk{\displaystyle \mathbf {\Sigma } _{k}}isk×k{\displaystyle k\times k}diagonal, andVk∗{\displaystyle \mathbf {V} _{k}^{*}}isk×n.{\displaystyle k\times n.}
The thin SVD uses significantly less space and computation time ifk≪max(m,n).{\displaystyle k\ll \max(m,n).}The first stage in its calculation will usually be aQR decompositionofM,{\displaystyle \mathbf {M} ,}which can make for a significantly quicker calculation in this case.
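In NumPy (assumed here, not part of the article) the thin SVD corresponds to the full_matrices=False option of numpy.linalg.svd.
import numpy as np
M = np.random.default_rng(6).standard_normal((1000, 20))
U, s, Vt = np.linalg.svd(M, full_matrices=True)       # full SVD: U is 1000 x 1000
Uk, sk, Vkt = np.linalg.svd(M, full_matrices=False)   # thin SVD: U_k is 1000 x 20
print(U.shape, Uk.shape)                              # (1000, 1000) (1000, 20)
print(np.allclose(Uk @ np.diag(sk) @ Vkt, M))         # the thin factors still reconstruct M exactly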
The compact SVD of a matrixM{\displaystyle \mathbf {M} }is given by
M=UrΣrVr∗.{\displaystyle \mathbf {M} =\mathbf {U} _{r}\mathbf {\Sigma } _{r}\mathbf {V} _{r}^{*}.}
Only ther{\displaystyle r}column vectors ofU{\displaystyle \mathbf {U} }andr{\displaystyle r}row vectors ofV∗{\displaystyle \mathbf {V} ^{*}}corresponding to the non-zero singular valuesΣr{\displaystyle \mathbf {\Sigma } _{r}}are calculated. The remaining vectors ofU{\displaystyle \mathbf {U} }andV∗{\displaystyle \mathbf {V} ^{*}}are not calculated. This is quicker and more economical than the thin SVD ifr≪min(m,n).{\displaystyle r\ll \min(m,n).}The matrixUr{\displaystyle \mathbf {U} _{r}}is thusm×r,{\displaystyle m\times r,}Σr{\displaystyle \mathbf {\Sigma } _{r}}isr×r{\displaystyle r\times r}diagonal, andVr∗{\displaystyle \mathbf {V} _{r}^{*}}isr×n.{\displaystyle r\times n.}
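A sketch (NumPy assumed) forming the compact SVD from the thin one by discarding singular values below a rank tolerance, chosen here in the same spirit as numpy.linalg.matrix_rank.
import numpy as np
rng = np.random.default_rng(7)
M = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 30))   # a 50 x 30 matrix of rank 4
U, s, Vt = np.linalg.svd(M, full_matrices=False)
tol = s[0] * max(M.shape) * np.finfo(float).eps                   # rank tolerance
r = int(np.sum(s > tol))                                          # numerical rank
Ur, sr, Vrt = U[:, :r], s[:r], Vt[:r, :]                          # compact factors: m x r, r, r x n
print(r, Ur.shape, Vrt.shape)                                     # 4 (50, 4) (4, 30)
print(np.allclose(Ur @ np.diag(sr) @ Vrt, M))                     # still an exact decomposition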
In many applications the numberr{\displaystyle r}of the non-zero singular values is large, making even the compact SVD impractical to compute. In such cases, only thet≪r{\displaystyle t\ll r}largest singular values may be computed and the rest truncated. The truncated SVD is no longer an exact decomposition of the original matrixM,{\displaystyle \mathbf {M} ,}but rather provides the optimallow-rank matrix approximationM~{\displaystyle {\tilde {\mathbf {M} }}}among all matrices of a fixed rankt{\displaystyle t}
M~=UtΣtVt∗,{\displaystyle {\tilde {\mathbf {M} }}=\mathbf {U} _{t}\mathbf {\Sigma } _{t}\mathbf {V} _{t}^{*},}
where matrixUt{\displaystyle \mathbf {U} _{t}}ism×t,{\displaystyle m\times t,}Σt{\displaystyle \mathbf {\Sigma } _{t}}ist×t{\displaystyle t\times t}diagonal, andVt∗{\displaystyle \mathbf {V} _{t}^{*}}ist×n.{\displaystyle t\times n.}Only thet{\displaystyle t}column vectors ofU{\displaystyle \mathbf {U} }andt{\displaystyle t}row vectors ofV∗{\displaystyle \mathbf {V} ^{*}}corresponding to thet{\displaystyle t}largest singular valuesΣt{\displaystyle \mathbf {\Sigma } _{t}}are calculated. This can be much quicker and more economical than the compact SVD ift≪r,{\displaystyle t\ll r,}but requires a completely different toolset of numerical solvers.
In applications that require an approximation to theMoore–Penrose inverseof the matrixM,{\displaystyle \mathbf {M} ,}the smallest singular values ofM{\displaystyle \mathbf {M} }are of interest, which are more challenging to compute compared to the largest ones.
Truncated SVD is employed inlatent semantic indexing.[30]
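A sketch of the rank-t truncation (NumPy and SciPy assumed). The dense route truncates a thin SVD; for large or sparse matrices, an iterative routine such as scipy.sparse.linalg.svds computes only the t largest singular triplets. The spectral-norm error of the best rank-t approximation equals the (t+1)-th singular value, which is the optimality property referred to above.
import numpy as np
from scipy.sparse.linalg import svds
M = np.random.default_rng(8).standard_normal((300, 200))
t = 10
U, s, Vt = np.linalg.svd(M, full_matrices=False)            # dense route
M_t = U[:, :t] @ np.diag(s[:t]) @ Vt[:t, :]                 # best rank-t approximation
Ut, st, Vtt = svds(M, k=t)                                   # iterative route: t largest triplets only
print(np.allclose(np.sort(st), np.sort(s[:t])))              # same leading singular values
print(np.isclose(np.linalg.norm(M - M_t, 2), s[t]))           # truncation error = (t+1)-th singular value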
The sum of thek{\displaystyle k}largest singular values ofM{\displaystyle \mathbf {M} }is amatrix norm, theKy Fank{\displaystyle k}-norm ofM.{\displaystyle \mathbf {M} .}[31]
The first of the Ky Fan norms, the Ky Fan 1-norm, is the same as theoperator normofM{\displaystyle \mathbf {M} }as a linear operator with respect to the Euclidean norms ofKm{\displaystyle K^{m}}andKn.{\displaystyle K^{n}.}In other words, the Ky Fan 1-norm is the operator norm induced by the standardℓ2{\displaystyle \ell ^{2}}Euclidean inner product. For this reason, it is also called the operator 2-norm. One can easily verify the relationship between the Ky Fan 1-norm and singular values. It is true in general, for a bounded operatorM{\displaystyle \mathbf {M} }on (possibly infinite-dimensional) Hilbert spaces
‖M‖=‖M∗M‖12{\displaystyle \|\mathbf {M} \|=\|\mathbf {M} ^{*}\mathbf {M} \|^{\frac {1}{2}}}
But, in the matrix case,(M∗M)1/2{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}is anormal matrix, so‖M∗M‖1/2{\displaystyle \|\mathbf {M} ^{*}\mathbf {M} \|^{1/2}}is the largest eigenvalue of(M∗M)1/2,{\displaystyle (\mathbf {M} ^{*}\mathbf {M} )^{1/2},}i.e. the largest singular value ofM.{\displaystyle \mathbf {M} .}
The last of the Ky Fan norms, the sum of all singular values, is thetrace norm(also known as the 'nuclear norm'), defined by‖M‖=Tr(M∗M)1/2{\displaystyle \|\mathbf {M} \|=\operatorname {Tr} (\mathbf {M} ^{*}\mathbf {M} )^{1/2}}(the eigenvalues ofM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }are the squares of the singular values).
The singular values are related to another norm on the space of operators. Consider theHilbert–Schmidtinner product on then×n{\displaystyle n\times n}matrices, defined by
⟨M,N⟩=tr(N∗M).{\displaystyle \langle \mathbf {M} ,\mathbf {N} \rangle =\operatorname {tr} \left(\mathbf {N} ^{*}\mathbf {M} \right).}
So the induced norm is
‖M‖=⟨M,M⟩=tr(M∗M).{\displaystyle \|\mathbf {M} \|={\sqrt {\langle \mathbf {M} ,\mathbf {M} \rangle }}={\sqrt {\operatorname {tr} \left(\mathbf {M} ^{*}\mathbf {M} \right)}}.}
Since the trace is invariant under unitary equivalence, this shows
‖M‖=|∑iσi2{\displaystyle \|\mathbf {M} \|={\sqrt {{\vphantom {\bigg |}}\sum _{i}\sigma _{i}^{2}}}}
whereσi{\displaystyle \sigma _{i}}are the singular values ofM.{\displaystyle \mathbf {M} .}This is called theFrobenius norm,Schatten 2-norm, orHilbert–Schmidt normofM.{\displaystyle \mathbf {M} .}Direct calculation shows that the Frobenius norm ofM=(mij){\displaystyle \mathbf {M} =(m_{ij})}coincides with:
|∑ij|mij|2.{\displaystyle {\sqrt {{\vphantom {\bigg |}}\sum _{ij}|m_{ij}|^{2}}}.}
In addition, the Frobenius norm and the trace norm (the nuclear norm) are special cases of theSchatten norm.
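These identities are easy to check numerically (NumPy assumed); numpy.linalg.norm exposes the operator 2-norm, the Frobenius norm and the nuclear (trace) norm directly.
import numpy as np
M = np.random.default_rng(9).standard_normal((7, 5))
s = np.linalg.svd(M, compute_uv=False)
print(np.isclose(np.linalg.norm(M, 2), s[0]))                         # operator norm = largest singular value
print(np.isclose(np.linalg.norm(M, 'fro'), np.sqrt(np.sum(s ** 2))))  # Frobenius / Schatten 2-norm
print(np.isclose(np.linalg.norm(M, 'nuc'), np.sum(s)))                # trace (nuclear) norm = sum of singular values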
The singular values of a matrixA{\displaystyle \mathbf {A} }are uniquely defined and are invariant with respect to left and/or right unitary transformations ofA.{\displaystyle \mathbf {A} .}In other words, the singular values ofUAV,{\displaystyle \mathbf {U} \mathbf {A} \mathbf {V} ,}for unitary matricesU{\displaystyle \mathbf {U} }andV,{\displaystyle \mathbf {V} ,}are equal to the singular values ofA.{\displaystyle \mathbf {A} .}This is an important property for applications in which it is necessary to preserve Euclidean distances and invariance with respect to rotations.
The Scale-Invariant SVD, or SI-SVD,[32]is analogous to the conventional SVD except that its uniquely-determined singular values are invariant with respect to diagonal transformations ofA.{\displaystyle \mathbf {A} .}In other words, the singular values ofDAE,{\displaystyle \mathbf {D} \mathbf {A} \mathbf {E} ,}for invertible diagonal matricesD{\displaystyle \mathbf {D} }andE,{\displaystyle \mathbf {E} ,}are equal to the singular values ofA.{\displaystyle \mathbf {A} .}This is an important property for applications for which invariance to the choice of units on variables (e.g., metric versus imperial units) is needed.
The factorizationM=UΣV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {\Sigma } \mathbf {V} ^{*}}can be extended to abounded operatorM{\displaystyle \mathbf {M} }on a separable Hilbert spaceH.{\displaystyle H.}Namely, for any bounded operatorM,{\displaystyle \mathbf {M} ,}there exist apartial isometryU,{\displaystyle \mathbf {U} ,}a unitaryV,{\displaystyle \mathbf {V} ,}a measure space(X,μ),{\displaystyle (X,\mu ),}and a non-negative measurablef{\displaystyle f}such that
M=UTfV∗{\displaystyle \mathbf {M} =\mathbf {U} T_{f}\mathbf {V} ^{*}}
whereTf{\displaystyle T_{f}}is themultiplication byf{\displaystyle f}onL2(X,μ).{\displaystyle L^{2}(X,\mu ).}
This can be shown by mimicking the linear algebraic argument for the matrix case above.VTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}is the unique positive square root ofM∗M,{\displaystyle \mathbf {M} ^{*}\mathbf {M} ,}as given by theBorel functional calculusforself-adjoint operators. The reason whyU{\displaystyle \mathbf {U} }need not be unitary is that, unlike the finite-dimensional case, given an isometryU1{\displaystyle U_{1}}with nontrivial kernel, a suitableU2{\displaystyle U_{2}}may not be found such that
[U1U2]{\displaystyle {\begin{bmatrix}U_{1}\\U_{2}\end{bmatrix}}}
is a unitary operator.
As for matrices, the singular value factorization is equivalent to thepolar decompositionfor operators: we can simply write
M=UV∗⋅VTfV∗{\displaystyle \mathbf {M} =\mathbf {U} \mathbf {V} ^{*}\cdot \mathbf {V} T_{f}\mathbf {V} ^{*}}
and notice thatUV∗{\displaystyle \mathbf {U} \mathbf {V} ^{*}}is still a partial isometry whileVTfV∗{\displaystyle \mathbf {V} T_{f}\mathbf {V} ^{*}}is positive.
The notion of singular values and left/right-singular vectors can be extended tocompact operator on Hilbert spaceas they have a discrete spectrum. IfT{\displaystyle T}is compact, every non-zeroλ{\displaystyle \lambda }in its spectrum is an eigenvalue. Furthermore, a compact self-adjoint operator can be diagonalized by its eigenvectors. IfM{\displaystyle \mathbf {M} }is compact, so isM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }. Applying the diagonalization result, the unitary image of its positive square rootTf{\displaystyle T_{f}}has a set of orthonormal eigenvectors{ei}{\displaystyle \{e_{i}\}}corresponding to strictly positive eigenvalues{σi}{\displaystyle \{\sigma _{i}\}}. For anyψ{\displaystyle \psi }inH,{\displaystyle H,}
Mψ=UTfV∗ψ=∑i⟨UTfV∗ψ,Uei⟩Uei=∑iσi⟨ψ,Vei⟩Uei,{\displaystyle \mathbf {M} \psi =\mathbf {U} T_{f}\mathbf {V} ^{*}\psi =\sum _{i}\left\langle \mathbf {U} T_{f}\mathbf {V} ^{*}\psi ,\mathbf {U} e_{i}\right\rangle \mathbf {U} e_{i}=\sum _{i}\sigma _{i}\left\langle \psi ,\mathbf {V} e_{i}\right\rangle \mathbf {U} e_{i},}
where the series converges in the norm topology onH.{\displaystyle H.}Notice how this resembles the expression from the finite-dimensional case.σi{\displaystyle \sigma _{i}}are called the singular values ofM.{\displaystyle \mathbf {M} .}{Uei}{\displaystyle \{\mathbf {U} e_{i}\}}(resp.{Vei}{\displaystyle \{\mathbf {V} e_{i}\}}) can be considered the left-singular (resp. right-singular) vectors ofM.{\displaystyle \mathbf {M} .}
Compact operators on a Hilbert space are the closure offinite-rank operatorsin the uniform operator topology. The above series expression gives an explicit such representation. An immediate consequence of this is thatM{\displaystyle \mathbf {M} }is compact if and only ifM∗M{\displaystyle \mathbf {M} ^{*}\mathbf {M} }is compact.
The singular value decomposition was originally developed bydifferential geometers, who wished to determine whether a realbilinear formcould be made equal to another by independent orthogonal transformations of the two spaces it acts on.Eugenio BeltramiandCamille Jordandiscovered independently, in 1873 and 1874 respectively, that the singular values of the bilinear forms, represented as a matrix, form acomplete setofinvariantsfor bilinear forms under orthogonal substitutions.James Joseph Sylvesteralso arrived at the singular value decomposition for real square matrices in 1889, apparently independently of both Beltrami and Jordan. Sylvester called the singular values thecanonical multipliersof the matrixA.{\displaystyle \mathbf {A} .}The fourth mathematician to discover the singular value decomposition independently isAutonnein 1915, who arrived at it via thepolar decomposition. The first proof of the singular value decomposition for rectangular and complex matrices seems to be byCarl EckartandGale J. Youngin 1936;[33]they saw it as a generalization of theprincipal axistransformation forHermitian matrices.
In 1907,Erhard Schmidtdefined an analog of singular values forintegral operators(which are compact, under some weak technical assumptions); it seems he was unaware of the parallel work on singular values of finite matrices. This theory was further developed byÉmile Picardin 1910, who is the first to call the numbersσk{\displaystyle \sigma _{k}}singular values(or in French,valeurs singulières).
Practical methods for computing the SVD date back toKogbetliantzin 1954–1955 andHestenesin 1958,[34]resembling closely theJacobi eigenvalue algorithm, which uses plane rotations orGivens rotations. However, these were replaced by the method ofGene GolubandWilliam Kahanpublished in 1965,[35]which usesHouseholder transformationsor reflections. In 1970, Golub andChristian Reinsch[36]published a variant of the Golub/Kahan algorithm that is still the one most-used today.
|
https://en.wikipedia.org/wiki/Singular_value_decomposition
|
Plain languageis writing designed to ensure the reader understands as quickly, easily, and completely as possible.[1]Plain language strives to be easy to read, understand, and use.[2]It avoidsverbose, convoluted language andjargon. In many countries, laws mandate that public agencies use plain language to increase access to programs and services. The United NationsConvention on the Rights of Persons with Disabilitiesincludes plain language in its definition ofcommunication.[3]
Mostliteracyandcommunicationsscholars agree that plain language means:
Plain language focuses on ways of writing a text so that it is clear, concise, pertinent, efficient, and flows well for the reader.[4]The Center for Plain Language states that: "[a] document, web site or other information is in plain language if the target audience can read it, understand what they read, and confidently act on it".[5]Writing in plain language does not mean oversimplifying the concepts, but presenting the information in a way that makes it easier to understand and use by a wider audience.[6]Texts written in plain language are still formal, but are easier to read and inspire confidence for the reader.[7]
Using plain language in communications ultimately improves efficiency, because there is less ambiguity for the readers, and less time is taken for clarifications and explanations.[8]Clear communication improves the user's experience with the organization, ultimately creating trust in the company.[9]
Writers who wish to write in plain language must first and foremost consider their target audience.[5]This should influence what information is included in the text and how it is written.[5]Different audiences have different needs, and require different information.[5]When writing, it is important to consider what the target audience needs to accomplish, and what and how much information they need to complete it.[5]The needs of the target audience will also affect the chosen vocabulary: writing for someone in the same field as the author is different from writing for someone whose native language is not English.[10]
Provide informative headings, topic sentences, and frequent summaries to help orient the reader. For complex documents, create a comprehensive table of contents.[11]
Organize the text logically: the most important information should be mentioned first, in the text as a whole and in every individual paragraph.[10]Headings help the reader skim the text more rapidly to find what they're looking for.[5]Sentences should be kept short, and only include necessary information.[5]A long, verbose sentence tends to present too much information at once, and blurs its main point.[10]The text should be direct and concise, and have an easy flow to it.[5]
The chosen vocabulary must remain simple and familiar.[10]Everyday language should be favoured over acronyms, jargon and legal language.[8]Plain language favours the use of the verb form of the word, instead of the noun form.[8]To increase clarity, use the active voice, in which the subject does the action of the verb.[10]Sentences written in plain language have a positive construction and address the reader directly.[8]
Writing in plain language also takes into account the presentation of the text. It is important to choose a font that is easy to read, and set it to an adequate size.[10]Sentences written in capital letters are harder to read because the letters are less distinguishable from one another.[10]Simple design elements like leaving white spaces, using bullets, and choosing contrasting colours encourages a user to read the text and increases readability.[6]
Proponents of plain language adoption argue that it improves reading comprehension and readability, and grants readers greater access to information.[12][13]Simple language allows documents to be read and understood by a larger audience, as plain language adoption often involves rewriting very technical and field-specific documents, like legal and medical documents.[14]
Some scholars promote plain language use as a means of making documents accessible, especially for disabled readers or those who lack the expertise and education to understand overly technical documents. Simpler language can decrease a reader's cognitive load, and improve information retention in readers who normally struggle to read complex documents.[12]Changes in font, text size, and colour can make texts more readable for individuals with impaired vision.[15]Some scholars view plain language from a social justice perspective as a means of increasing equal access to information, especially for marginalized populations that might have decreased access to education.[12]
While plain language has positive practical outcomes across many situations, it can also be understood within the framework of ethical action.[16]While not all plain language practitioners and scholars agree that "Plain language is a civil right", as the motto for the US-based Center for Plain Language declares, many practitioners agree that using plain language is part of ethical action, such as being responsive, respectful, honest, truthful and fair. On the other hand, plain language can also be used for unethical ends, such as to obscure or withhold truths. Willerton proposes the BUROC framework for identifying situations requiring the ethical action of plain language: bureaucratic, unfamiliar, rights-oriented, critical.[16]
The United States'Federal Rules of Civil Procedurerequire aclaim for reliefto include "a short and plain statement of the grounds for the court’sjurisdiction", and "a short and plain statement of the claim". A party claiming adefenseto a claim must state its defense "in short and plain terms".[17]
Ciceroargued, "When you wish to instruct, be brief; that men's minds take in quickly what you say, learn its lesson, and retain it faithfully. Every word that is unnecessary only pours over the side of a brimming mind."[citation needed]
Shakespeare parodied the pretentious style, as in the speeches of Dogberry inMuch Ado About Nothing.
The plain, or native style, was, in fact, an entire literary tradition during the English Renaissance, fromJohn SkeltonthroughBen Jonson, and included such poets asBarnabe Googe,George Gascoigne,Walter Raleigh, and perhaps the later work ofFulke Greville. In addition to its purely linguistic plainness, the Plain Style employed an emphatic, pre-Petrarchanprosody(each syllable either clearly stressed or clearly unstressed).
By the end of the 19th century, scholars began to study the features of plain language. L. A. Sherman, a professor of English literature at the University of Nebraska, wroteAnalytics of Literature: A Manual for the Objective Study of English Prose and Poetryin 1893. In this work, Sherman showed that the typical English sentence has shortened over time and that spoken English is a pattern for written English.
Sherman wrote:
Literary English, in short, will follow the forms of the standard spoken English from which it comes. No man should talk worse than he writes, no man writes better than he should talk.... The oral sentence is clearest because it is the product of millions of daily efforts to be clear and strong. It represents the work of the race for thousands of years in perfecting an effective instrument of communication.
Two 1921 works, Harry Kitson's "The Mind of the Buyer", andEdward L. Thorndike's "The Teacher's Word Book" picked up where Sherman left off. Kitson's work was the first to apply empirical psychology to advertising. He advised the use of short words and sentences. Thorndike's work contained the frequency ratings of 10,000 words. He recommended using the ratings in his book to grade books not only for students in schools but also for average readers and adults learning English. Thorndike wrote:
It is commonly assumed that children and adults prefer trashy stories in large measure because they are more exciting and more stimulating in respect to sex. There is, however, reason to believe that greater ease of reading in respect to vocabulary, construction, and facts, is a very important cause of preference. A count of the vocabulary of "best sellers" and a summary of it in terms of our list would thus be very instructive.
The 1930s saw many studies on how to make texts more readable. In 1931, Douglas Tyler and Ralph Waples published the results of their two-year study, "What People Want to Read About". In 1934, Ralph Ojemann,Edgar Dale, and Ralph Waples published two studies on writing for adults with limited reading ability. In 1935, educational psychologistWilliam S. Grayteamed up with Bernice Leary to publish their study, "What Makes a Book Readable".
George Orwell's 1946 essay "Politics and the English Language" decried the pretentious diction, meaninglessness, vagueness, and worn-out idioms of political jargon. In 1979, thePlain English Campaignwas founded in London to combat "gobbledegook, jargon and legalese".[22]
Lyman Brysonat Teachers College, Columbia University, led efforts to supply average readers with more books of substance dealing with science and current events. Bryson's students included Irving Lorge andRudolf Flesch, who became leaders in the plain-language movement. In 1975, Flesch collaborated withJ. Peter Kincaidto create theFlesch-Kincaid readability test, which uses an algorithm to produce grade level scores that predict the level of education required to read the selected text.[23]The instrument looks at word length (number of syllables) and sentence length (number of words) and produces a score that is tied to a U.S. grade school level. For example, a score of 8.0 means that an eighth grader can read the document.
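For illustration, the commonly published form of the Flesch–Kincaid grade-level formula combines words per sentence with syllables per word; the sketch below (Python, with the counts supplied by the caller, since automatic syllable counting is itself only approximate) is not part of the original article.
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    # Commonly cited Flesch-Kincaid grade-level formula.
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)
# Example: 120 words in 8 sentences with 150 syllables scores at about grade 5.
print(round(flesch_kincaid_grade(120, 8, 150), 1))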
Others who later led plain language and readability research include educator Edgar Dale of Ohio State,Jeanne S. Challof the Reading Laboratory of Harvard, and George R. Klare of Ohio University. Their efforts spurred the publication of over 200 readability formulas and 1,000 published studies on readability.
Beginning in 1935, a series of literacy surveys showed that the average reader in the U.S. was an adult of limited reading ability. Today, the average adult in the U.S. reads at the 9th-grade level.
Access to health information, educational andeconomic developmentopportunities, and government programs is often referred to in a social justice context. To ensure more community members can access this information, many adult educators, legal writers, and social program developers use plain language principles when they develop public documents[citation needed]. The goal of plain language translation is to increase accessibility for those with lower literacy levels.
In the United States, the movement towards plain language legal writing began with the 1963 bookLanguage of the Law, by David Mellinkoff.[24]However, the movement was popularized by Richard Wydick's 1979 bookPlain English for Lawyers.[24]This was followed by famous plain languagepromissory notesby Nationwide Mutual Insurance and Citibank in the 1970s.[13]
Concerned about the large number of suits against its customers to collect bad debts, the bank voluntarily made the decision to implement plain language policies in 1973.[25]That same decade, the consumer-rights movement won legislation that required plain language in contracts, insurance policies, and government regulations. Americanlaw schoolsbegan requiring students to take legal writing classes that encouraged them to use plain English as much as possible and to avoid legal jargon, except when absolutely necessary. Public outrage with the skyrocketing number of unreadable government forms led to thePaperwork Reduction Actof 1980.
In 1972, the Plain Language Movement received practical political application, when PresidentRichard Nixondecreed that the "Federal Register be written in layman's terms". On March 23, 1978, U.S. PresidentJimmy Cartersigned Executive Order 12044, which said that federal officials must see that each regulation is "written in plain English and understandable to those who must comply with it".[26]President Ronald Reagan rescinded these orders in 1981, but many political agencies continued to follow them. By 1991, eight states had also passed legislation related to plain language. Plain Language Association International (PLAIN) was formed in 1993 as the Plain Language Network. Its membership is international; it was incorporated as a non-profit organization in Canada in 2008.[27][28]In June 1998, PresidentBill Clintonissued a memorandum that called for executive departments and agencies to use plain language in all government documents.[26]Vice PresidentAl Goresubsequently led a plain language initiative that formed a group called the Plain Language Action Network (PLAIN) to provide plain language training to government agencies.
PLAIN provided guidance to federalexecutive agencieswhen PresidentBarack Obamasigned thePlain Writing Act of 2010, which required federal executive agencies to put all new and revised covered documents into plain language.[29]The Act's sponsor, U.S. RepresentativeBruce Braley, noted upon its passage that "The writing of documents in the standard vernacular English language will bolster and increase the accountability of government within America and will continue to more effectively save time and money in this country."[30]
Plain language is also gaining traction in U.S. courts andlegal aidagencies.[31][32]California was the first state to adopt plain language court forms and instructions, for which it received the 2003Burton Awardfor Outstanding Reform.[33]A 2006 comparative study of plain language court forms concluded that "plain language court forms and instructions are better understood, easier to use, and more economical".[34]
TheEuropean Unionprovides standards for making information easy to read and understand.[35]The rules are comparable to the rules for plain language. Based in Germany there is a dictionary for plain language called Hurraki.[36]InFrance, a 2002 decision by theConstitutional Councilrecognized a constitutional goal of ensuring the "clarity and intelligibility" of French law.[37]In 2013 the Israeli Knesset passed service accessibility regulations which mandated the use of simple language and/or language simplification (Hebrew = פישוט לשוני),[38]which were subsequently codified in 2015 for implementation.[39]
ISOhas formed a Working Group within Technical CommitteeISO/TC 37to develop plain language standards and guidelines. Its work began officially towards the end of 2019, and it published the standard ISO 24495-1 in 2023.[40]
|
https://en.wikipedia.org/wiki/Plain_language
|
Kuṭṭakais analgorithmfor findingintegersolutions oflinearDiophantine equations. A linear Diophantine equation is anequationof the formax+by=cwherexandyareunknown quantitiesanda,b, andcare known quantities with integer values. The algorithm was originally invented by the Indian astronomer-mathematicianĀryabhaṭa(476–550 CE) and is described very briefly in hisĀryabhaṭīya. Āryabhaṭa did not give the algorithm the nameKuṭṭaka, and his description of the method was mostly obscure and incomprehensible. It wasBhāskara I(c. 600 – c. 680) who gave a detailed description of the algorithm with several examples from astronomy in hisĀryabhatiyabhāṣya, who gave the algorithm the nameKuṭṭaka. InSanskrit, the word Kuṭṭaka meanspulverization(reducing to powder), and it indicates the nature of the algorithm. The algorithm in essence is a process where the coefficients in a given linear Diophantine equation are broken up into smaller numbers to get a linear Diophantine equation with smaller coefficients. In general, it is easy to find integer solutions of linear Diophantine equations with small coefficients. From a solution to the reduced equation, a solution to the original equation can be determined. Many Indian mathematicians after Aryabhaṭa have discussed the Kuṭṭaka method with variations and refinements. The Kuṭṭaka method was considered to be so important that the entire subject of algebra used to be calledKuṭṭaka-ganitaor simplyKuṭṭaka. Sometimes the subject of solving linear Diophantine equations is also calledKuṭṭaka.
In literature, there are several other names for the Kuṭṭaka algorithm likeKuṭṭa,KuṭṭakāraandKuṭṭikāra. There is also a treatise devoted exclusively to a discussion of Kuṭṭaka. Such specialized treatises are very rare in the mathematical literature of ancient India.[1]The treatise written in Sanskrit is titledKuṭṭākāra Śirōmaṇiand is authored by one Devaraja.[2]
The Kuṭṭaka algorithm has much similarity with and can be considered as a precursor of the modern dayextended Euclidean algorithm. The latter algorithm is a procedure for finding integersxandysatisfying the conditionax+by=gcd(a,b).[3]
The problem that can supposedly be solved by the Kuṭṭaka method was not formulated by Aryabhaṭa as a problem of solving the linear Diophantine equation. Aryabhaṭa considered the following problems all of which are equivalent to the problem of solving the linear Diophantine equation:
Aryabhata and other Indian writers had noted the following property of linear Diophantine equations: "The linear Diophantine equationax+by=chas a solution if and only if gcd(a,b) is adivisorofc." So the first stage in thepulverizationprocess is to cancel out the common factor gcd(a,b) froma,bandc, and obtain an equation with smaller coefficients in which the coefficients ofxandyarerelatively prime.
For example, Bhāskara I observes: "The dividend and the divisor shall become prime to each other, on being divided by the residue of their mutual division. The operation of the pulveriser should be considered in relation to them."[1]
Aryabhata gave the algorithm for solving the linear Diophantine equation in verses 32–33 of Ganitapada of Aryabhatiya.[1]Taking Bhāskara I's explanation of these verses also into consideration, Bibhutibbhushan Datta has given the following translation of these verses:
Some comments are in order.
Without loss of generality, letax−by=c{\displaystyle ax-by=c}be our Diophantine equation wherea,bare positive integers andcis an integer. Divide both sides of the equation bygcd(a,b){\displaystyle \gcd(a,b)}. Ifcis not divisible bygcd(a,b){\displaystyle \gcd(a,b)}then there are no integer solutions to this equation. After the division, we get the equationa′x−b′y=c′{\displaystyle a'x-b'y=c'}. The solution to this equation is the solution toax−by=c{\displaystyle ax-by=c}. Without loss of generality, let us consider a > b.
UsingEuclidean division, follow these recursive steps:
Now, define quantitiesxn+2,xn+1,xn,... by backward induction as follows:
Ifnis odd, takexn+2= 0 andxn+1= 1.
Ifnis even, takexn+2=1 andxn+1=rn−1−1.
Now, calculate allxm(n≥m≥1) byxm=amxm+1+xm+2. Theny=c′x1andx=c′x2.
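In modern terms the pulverisation is essentially the extended Euclidean algorithm noted above; the following sketch (Python, not part of the original description, and using recursion rather than Aryabhaṭa's tabular layout) returns one integer solution of ax − by = c when one exists.
def kuttaka(a, b, c):
    # Solve a*x - b*y = c in integers; returns one solution (x, y) or None.
    def ext_gcd(p, q):
        if q == 0:
            return p, 1, 0
        g, u, v = ext_gcd(q, p % q)
        return g, v, u - (p // q) * v           # maintains g = p*u + q*v
    g, u, v = ext_gcd(a, b)                     # a*u + b*v = g = gcd(a, b)
    if c % g:
        return None                             # gcd(a, b) must divide c
    return u * (c // g), -v * (c // g)          # then a*x - b*y = c with x = u*c/g, y = -v*c/g
x, y = kuttaka(137, 60, 10)
print(137 * x - 60 * y)                         # 10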
Consider the following problem: find the smallest integer which leaves remainders 15 and 19 when divided by 29 and 45 respectively.
Writing the unknown number as 29x + 15 = 45y + 19 reduces the problem to the linear Diophantine equation 29x − 45y = 4, to which the pulveriser can be applied.
The smallest solution of this equation gives the required number, 334.
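The example can be cross-checked in a couple of lines (Python assumed): the condition reduces to 29x ≡ 4 (mod 45), which can be solved with a modular inverse, or with the kuttaka routine sketched above.
x = (4 * pow(29, -1, 45)) % 45                  # 29*x = 4 (mod 45) gives x = 11
print(29 * x + 15)                              # 334, the required number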
The following example taken fromLaghubhāskarīyaofBhāskara I[4]illustrates how the Kuttaka algorithm was used in the astronomical calculations in India.[5]
The sum, the difference and the product increased by unity, of the residues of the revolutions of Saturn and Mars – each is a perfect square. Taking the equations furnished by the above and applying the methods of such quadratics obtain the (simplest) solution by the substitution of 2, 3, etc. successively (in the general solution). Then calculate theaharganaand the revolutions performed by Saturn and Mars in that time together with the number of solar years elapsed.
In the Indian astronomical tradition, aYugais a period consisting of 1,577,917,500 civil days. Saturn makes 146,564 revolutions and Mars makes 2,296,824 revolutions in a Yuga. So Saturn makes 146,564/1,577,917,500 = 36,641/394,479,375 revolutions in a day. By saying that the residue of the revolution of Saturn isx, what is meant is that the fractional number of revolutions isx/394,479,375. Similarly, Mars makes 2,296,824/1,577,917,500 = 190,412/131,493,125 revolutions in a day. By saying that the residue of the revolution of Mars isy, what is meant is that the fractional number of revolutions isy/131,493,125.
Letxandydenote the residues of the revolutions of Saturn and Mars respectively satisfying the conditions stated in the problem. They must be such that each ofx+y,x−yandxy+ 1is a perfect square.
Setting
one obtains
and so
Forxy+ 1 also to be a perfect square we must have
Thus the following general solution is obtained:
The valueq= 2 yields the special solutionx= 40,y= 24.
Aharganais the number of days elapsed since the beginning of the Yuga.
Letube the value of the ahargana corresponding to the residue 24 for Saturn. Duringudays, Saturn would have completed (36,641/394,479,375)×unumber of revolutions. Since there is a residue of 24, this number would include the fractional number 24/394,479,375 of revolutions also. Hence during the aharganau, the number of revolutions completed would be
which would be an integer. Denoting this integer byv, the problem reduces to solving the following linear Diophantine equation:
Kuttaka may be applied to solve this equation. The smallest solution is
Letube the value of the ahargana corresponding to the residue 40 for Mars. Duringudays, Mars would have completed (190,412/131,493,125) ×unumber of revolutions. Since there is a residue of 40, this number would include the fractional number 40/131,493,125 of revolutions also. Hence during the aharganau, the number of revolutions completed would be
which would be an integer. Denoting this integer byv, the problem reduces to solving the following linear Diophantine equation:
Kuttaka may be applied to solve this equation. The smallest solution is
|
https://en.wikipedia.org/wiki/Ku%E1%B9%AD%E1%B9%ADaka
|
Acceptoften refers to:
Acceptcan also refer to:
|
https://en.wikipedia.org/wiki/Accept_(disambiguation)
|
Defense in depthis a concept used ininformation securityin which multiple layers of security controls (defense) are placed throughout aninformation technology(IT) system. Its intent is to provideredundancyin the event asecurity controlfails or a vulnerability is exploited that can cover aspects ofpersonnel,procedural,technicalandphysicalsecurity for the duration of the system's life cycle.
The idea behind the defense in depth approach is to defend a system against any particular attack using several independent methods.[1]It is a layering tactic, conceived[2]by theNational Security Agency(NSA) as a comprehensive approach to information and electronic security.[3][4]
An insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, andnetwork security, host-based security, andapplication securityforming the outermost layers of the onion.[5]Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.[6]
Defense in depth can be divided into three areas: Physical, Technical, and Administrative.[7]
Physical controls[3]are anything that physically limits or prevents access to IT systems. Examples of physical defensive security are: fences, guards, dogs, andCCTVsystems.
Technical controls are hardware or software whose purpose is to protect systems and resources. Examples of technical controls would be disk encryption, file integrity software, and authentication. Hardware technical controls differ from physical controls in that they prevent access to the contents of a system, but not the physical systems themselves.
Administrative controls are the organization's policies and procedures. Their purpose is to ensure that there is proper guidance available in regard to security and that regulations are met. They include things such as hiring practices, data handling procedures, and security requirements.
Using more than one of the following layers constitutes an example of defense in depth.
|
https://en.wikipedia.org/wiki/Defense_in_depth_(computing)
|
Connected healthis a socio-technical model for healthcare management and delivery[1]by using technology to provide healthcare services remotely. Connected health, also known as technology enabled care (TEC) aims to maximize healthcare resources and provide increased, flexible opportunities for consumers to engage with clinicians and better self-manage their care.[2]It uses readily available consumer technologies to deliver patient care outside of the hospital or doctor's office. Connected health encompasses programs intelehealth, remote care (such as home care andremote patient monitoring), and disease and lifestyle management. It often leverages existing technologies, such as connected devices using cellular networks, and is associated with efforts to improve chronic care. However, there is an increasing blur between software capabilities and healthcare needs whereby technologists are now providing the solutions to support consumer wellness and provide the connectivity between patient data, information and decisions. This calls for new techniques to guide Connected Health solutions such as "design thinking" to support software developers in clearly identifying healthcare requirements, and extend and enrich traditional software requirements gathering techniques.[3]
TheUnited StatesandEuropean Unionare two dominant markets for the use of connected health in home care service, in part due to the high availability of telephone and Internet service as compared to other parts of the world.[citation needed]Proponents of connected health believe that technology can transform healthcare delivery and address inefficiencies especially in the area of work flow management, chronic disease management andpatient complianceof the US and global healthcare systems.[citation needed]
Connected health has its roots intelemedicine, and its more recent relative,telehealth. The first telemedicine programs were primarily undertaken to address healthcare access and/or provider shortages. Connected health is distinguished from telemedicine by:[citation needed]
Connected health is the "umbrella term arrived to lessen the confusion over the definitions of telemedicine, telehealth and mhealth".[4]It is considered as the new lexicon for the term telemedicine.[5]The technology view of connected health focuses more on the connection methods between clients and the health care professional.
An alternative view is that of a socio-technical perspective in which connected health is considered as a combination of people, processes and technology. In 2015 Connected health was defined as patient-centred care resulting from process-driven health care delivery undertaken by health care professionals, patients and/or carer who are supported by the use of technology (software and/or hardware).[6]
Two "core platforms" are emphasized in connected health, self-care and remote care, with programs primarily focused on monitoring and feedback for the chronically ill, elderly, and those patients located at an untenable distance from primary or specialty providers.[citation needed]Programs designed to improve patient-provider communication within an individual medical practice for example, the use of email to communicate with patients between office visits also fall within the purview of connected health.[citation needed]There are alsolifestyle coachingprograms, in which an individual receives healthcare information to facilitate behavior change to improve their fitness and/or general well being, (seewellness) or to reduce or eliminate the impact of a particular behavior that presents a risk to their health status.[7]Some of the most common types of connected health programs in operation today include:
The Center for Connected Health is implementing a range of programs in high-risk, chronic and remotely located populations.[citation needed]
Inherent in the concept of connected health is flexibility in terms of technological approaches to care delivery and specific program objectives. For instance, remote monitoring programs might use a combination of cell phone and smart phone technology, online communications or biosensors and may aim to increase patient-provider communication, involve patients in their care through regular feedback, or improve upon a health outcome measure in a defined patient population or individual. Digital pen technology, global positioning, videoconferencing and environmental sensors are all playing a role in connected health.[citation needed]
Proponents of Connected health view it as a critical component of change in human healthcare and envision:
Rising costs, increases in chronic diseases, geographic dispersion of families, growing provider shortages, ethnic disparities in care, better survival rates among patients fighting serious diseases, an aging U.S. population and longer lifespan are all factors pointing to a need for better ways of delivering healthcare.[8][9][10]
Direct-to-consumer advertising is a demonstrated contributor to the rise in consumer demand, as is the mass availability of inexpensive technology and the ubiquity of the Internet, cell phones and PDAs.[11][12] Connected health experts, such as Joseph C. Kvedar, believe that consumer engagement in healthcare is on its way to becoming a major force for change.[citation needed]
In summary, connected health has arisen from: 1) a desire on the part of individual physicians and healthcare organizations to provide better access, quality and efficiency of care; 2) the dynamics of the healthcare economy (such as rising costs and changing demographics); and 3) consumerism in health care and a drive towards patient-centric healthcare. Together, these factors are providing impetus for connected healthcare in the United States and many other industrialised nations and forcing innovation both from within and outside the system.[citation needed]
While connected health is still emerging, there is evidence of its benefits. For example, in a program being implemented by the Center for Connected Health and Partners Home Care, over 500 heart failure patients have now been monitored remotely through the collection of vital signs, including heart rate, blood pressure and weight, using simple devices in the patient's home. The information is sent daily to a home health nurse, who can identify early warning signs, notify the patient's primary care physician, and intervene to avert potential health crises. A pilot of this program demonstrated reduced hospitalizations.[13] Another initiative at the Center for Connected Health uses cellular telephone technology and a "smart" pill bottle to detect when a patient has not taken their scheduled medication. A signal is then sent that lights up an ambient orb device in the patient's home to remind them to take their medication.
It appears that connected health programs are operated and funded primarily by home care agencies and large healthcare systems.[citation needed]However, insurers and employers are increasingly interested in connected health for its potential to reduce direct and indirect healthcare costs. In 2007,EMC Corporationlaunched the first employer-sponsored connected health program, in the beta phase of implementation, aimed at improving outcomes and cost of care for patients with high blood pressure.[14]
Government agencies involved in connected health include:
Personal health records, or PHRs (see personal health record), are essentially medical records controlled and maintained by the healthcare consumer. PHRs intersect with connected health in that they attempt to increase the involvement of healthcare consumers in their care.[16] By contrast, electronic medical records (EMRs) (see electronic medical record) are digital medical records or medical records systems maintained by hospitals or medical practices and are not part of connected health delivery.
|
https://en.wikipedia.org/wiki/Connected_Health
|
Afalse awakeningis a vivid and convincingdreamaboutawakeningfromsleep, while the dreamer in reality continues to sleep. After a false awakening, subjects often dream they are performing their daily morning routine such as showering or eating breakfast. False awakenings, mainly those in which one dreams that they have awoken from a sleep that featured dreams, take on aspects of adouble dreamor adream within a dream. A classic example in fiction is the double false awakening of the protagonist inGogol'sPortrait(1835).
Studies have shown that false awakening is closely related to lucid dreaming, and the two often transform into one another. The only differentiating feature between them is that the dreamer has a logical understanding of the dream in a lucid dream, while that is not the case in a false awakening.[1]
Once one realizes they are falsely awakened, they either wake up or begin lucid dreaming.[1]
A false awakening may occur following a dream or following alucid dream(one in which the dreamer has been aware of dreaming). Particularly, if the false awakening follows a lucid dream, the false awakening may turn into a "pre-lucid dream",[2]that is, one in which the dreamer may start to wonder if they are really awake and may or may not come to the correct conclusion. In a study byHarvardpsychologistDeirdre Barrett, 2,000 dreams from 200 subjects were examined and it was found that false awakenings and lucidity were significantly more likely to occur within the same dream or within different dreams of the same night. False awakenings often preceded lucidity as a cue, but they could also follow the realization of lucidity, often losing it in the process.[3]
Because the mind still dreams after a false awakening, there may be more than one false awakening in a single dream. Subjects may dream they wake up, eat breakfast, brush their teeth, and so on; suddenly awaken again in bed (still in a dream), begin morning rituals again, awaken again, and so forth. The philosopher Bertrand Russell claimed to have experienced "about a hundred" false awakenings in succession while coming around from a general anesthetic.[4]
Giorgio Buzzi suggests that FAs may indicate the occasional reappearance of a vestigial (or otherwise anomalous) form of REM sleep in the context of disturbed or hyperaroused sleep (lucid dreaming, sleep paralysis, or situations of high anticipation). This peculiar form of REM sleep permits the replay of unaltered experiential memories, thus providing a unique opportunity to study how waking experiences interact with the hypothesized predictive model of the world. In particular, it could allow a glimpse of the protoconscious world without the distorting effect of ordinary REM sleep.[5]
In accordance with the proposed hypothesis, a high prevalence of FAs could be expected in children, whose "REM sleep machinery" might be less developed.[5]
Gibson's dream protoconsciousness theory states that false awakening is shaped by fixed patterns depicting real activities, especially the day-to-day routine. False awakening is often associated with highly realistic environmental details of familiar events such as day-to-day activities or autobiographical and episodic moments.[5]
Certain aspects of life may be dramatized or out of place in false awakenings. Things may seem wrong: details such as a painting on a wall, an inability to talk, or difficulty reading (reportedly, reading in lucid dreams is often difficult or impossible).[6] A common theme in false awakenings is visiting the bathroom, where the dreamer sees that their reflection in the mirror is distorted (which can be an opportunity for lucidity, but usually results in wakefulness).
Celia Greensuggested a distinction should be made between two types of false awakening:[2]
Type 1 is the more common, in which the dreamer seems to wake up, but not necessarily in realistic surroundings; that is, not in their own bedroom. A pre-lucid dream may ensue. More commonly, dreamers will believe they have awakened, and then either genuinely wake up in their own bed or "fall back asleep" in the dream.
A common false awakening is a "late for work" scenario. A person may "wake up" in a typical room, with most things looking normal, and realize they overslept and missed the start time at work or school. Clocks, if found in the dream, will show time indicating that fact. The resulting panic is often strong enough to truly awaken the dreamer (much like from anightmare).
Another common Type 1 example of false awakening can result in bedwetting. In this scenario, the dreamer has had a false awakening and while in the state of dream has performed all the traditional behaviors that precede urinating – arising from bed, walking to the bathroom, and sitting down on the toilet or walking up to a urinal. The dreamer may then urinate and suddenly wake up to find they have wet themselves.
The Type 2 false awakening seems to be considerably less common. Green characterized it as follows:
The subject appears to wake up in a realistic manner but to an atmosphere of suspense.... The dreamer's surroundings may at first appear normal, and they may gradually become aware of something uncanny in the atmosphere, and perhaps of unwanted [unusual] sounds and movements, or they may "awake" immediately to a "stressed" and "stormy" atmosphere. In either case, the end result would appear to be characterized by feelings of suspense, excitement or apprehension.[7]
Charles McCreerydraws attention to the similarity between this description and the description by the German psychopathologistKarl Jaspers(1923) of the so-called "primary delusionary experience" (a general feeling that precedes more specific delusory belief).[8]Jaspers wrote:
Patients feel uncanny and that there is something suspicious afoot. Everything gets anew meaning. The environment is somehow different—not to a gross degree—perception is unaltered in itself but there is some change which envelops everything with a subtle, pervasive and strangely uncertain light.... Something seems in the air which the patient cannot account for, a distrustful, uncomfortable, uncanny tension invades him.[9]
McCreery suggests this phenomenological similarity is not coincidental and results from the idea that both phenomena, the Type 2 false awakening and the primary delusionary experience, are phenomena of sleep.[10]He suggests that the primary delusionary experience, like other phenomena of psychosis such as hallucinations and secondary or specific delusions, represents an intrusion into waking consciousness of processes associated withstage 1 sleep. It is suggested that the reason for these intrusions is that the psychotic subject is in a state ofhyperarousal, a state that can lead to whatIan Oswaldcalled "microsleeps" in waking life.[11]
Other researchers doubt that these are clearly distinguished types, as opposed to being points on a subtle spectrum.[12]
Clinical and neurophysiological descriptions of false awakening are rare. One notable report by Takeuchi et al.[13] was considered by some experts to be a case of false awakening. It describes a hypnagogic hallucination of an unpleasant and fearful feeling of presence in a sleep laboratory, with the perception of having risen from the bed. The polysomnography showed abundant trains of alpha rhythm on EEG (sometimes blocked by REMs mixed with slow eye movements and low muscle tone). Conversely, the two experiences of FA monitored here were close to regular REM sleep. Quantitative analysis clearly shows predominantly theta waves, suggesting that these two experiences are a product of a dreaming rather than a fully conscious brain.[14]
The clinical and neurophysiological characteristics of false awakening are
|
https://en.wikipedia.org/wiki/False_awakening
|
Dynamic program analysisis the act ofanalyzing softwarethat involves executing aprogram– as opposed tostatic program analysis, which does not execute it.
Analysis can focus on different aspects of the software including but not limited to:behavior,test coverage,performanceandsecurity.
To be effective, the target program must be executed with sufficient test inputs[1]to address the ranges of possible inputs and outputs.Software testingmeasures, such ascode coverage, and tools such asmutation testing, are used to identify where testing is inadequate.
Functional testing includes relatively commonprogrammingtechniques such asunit testing,integration testingandsystem testing.[2]
Computing the code coverage of a test suite identifies code that is not covered by any test.
Although this analysis identifies code that is not tested, it does not determine whether the tested code is adequately tested. Code can be executed even if the tests do not actually verify correct behavior.
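As a concrete illustration, the following sketch measures line coverage dynamically using only Python's standard-library tracing hook. The names (classify, run_with_coverage) are illustrative; real coverage tools such as coverage.py are far more complete.

import sys

def classify(grade):
    if grade >= 90:
        return "A"
    elif grade >= 80:
        return "B"
    else:                       # this branch is never reached by the test below
        return "C"

def run_with_coverage(func, *args):
    executed = set()

    def tracer(frame, event, arg):
        # record each executed source line of the target function
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, executed

# A single test input exercises only the ">= 90" path, so comparing the
# executed lines against the function's source reveals untested code.
_, lines = run_with_coverage(classify, 95)
print("executed lines of classify:", sorted(lines))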
Dynamic testing involves executing a program on a set of test cases.
Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part).Gray-box fuzzersuse code coverage to guide input generation.
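A minimal black-box fuzzing loop might look like the following sketch. The target function parse_record and the fuzz driver are illustrative placeholders; production fuzzers add coverage feedback, input mutation, and corpus management.

import random

def parse_record(data: bytes) -> int:
    # toy target: fails on one specific malformed input shape
    if len(data) > 2 and data[0] == 0xFF and data[1] == 0x00:
        raise ValueError("malformed header")
    return len(data)

def fuzz(target, iterations=10_000, max_len=16, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(iterations):
        # generate a random byte string of random length
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:        # any uncaught exception is a finding
            failures.append((data, exc))
    return failures

for data, exc in fuzz(parse_record)[:3]:
    print(data, "->", exc)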
Dynamic symbolic execution (also known asDSEor concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using aconstraint solver(generally, anSMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing code coverage of the test suite.[3]DSE can be considered a type offuzzing("white-box" fuzzing).
Dynamic data-flow analysis tracks the flow of information fromsourcestosinks. Forms of dynamic data-flow analysis include dynamic taint analysis and evendynamic symbolic execution.[4][5]
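The following toy sketch illustrates the idea of dynamic taint tracking: values read from an untrusted source are wrapped, the taint mark propagates through string operations, and a sensitive sink rejects tainted input at runtime. The Tainted class and the source/sink functions are illustrative, not part of any real taint-analysis tool.

class Tainted(str):
    """A string subclass marking a value as coming from an untrusted source."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))        # propagate taint
    def __radd__(self, other):
        return Tainted(str.__add__(other, self))        # propagate taint

def source():
    # e.g. data read from a network request
    return Tainted("'; DROP TABLE users; --")

def sink(query):
    # e.g. a database execute() call
    if isinstance(query, Tainted):
        raise RuntimeError("tainted data reached a sensitive sink: " + query)
    print("executing:", query)

user_input = source()
try:
    sink("SELECT * FROM users WHERE name = '" + user_input + "'")
except RuntimeError as finding:
    print(finding)      # the flow from source to sink is detected at runtime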
Daikon is an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions.
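In the same spirit (though not Daikon's actual algorithm or output format), a toy invariant detector can run a function on many inputs and keep only the candidate properties that held on every observed execution:

def absolute(x):
    return x if x >= 0 else -x

# candidate properties over the input x and the result r
candidates = {
    "result >= 0": lambda x, r: r >= 0,
    "result == x": lambda x, r: r == x,
    "result >= x": lambda x, r: r >= x,
}

surviving = dict(candidates)
for x in range(-50, 51):            # observed executions
    r = absolute(x)
    surviving = {name: p for name, p in surviving.items() if p(x, r)}

# Properties true over all observed runs are reported as likely invariants.
print("likely invariants:", sorted(surviving))   # result >= 0, result >= x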
Dynamic analysis can be used to detect security problems.
For a given subset of a program’s behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a “slice” and is a faithful representation of the original program within the domain of the specified behavior subset.
Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors.
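The following hand-made example shows what such a slice looks like when the behavior of interest is the final value of a single variable; an automated slicer would derive the reduced function from data-flow information rather than by hand.

# original program
def original(values):
    total = 0
    count = 0
    for v in values:
        total += v
        count += 1              # does not affect `total`
    print("count:", count)      # does not affect `total`
    return total

# slice with respect to the returned value of `total`
def sliced(values):
    total = 0
    for v in values:
        total += v
    return total

# the slice reproduces the original behavior for the selected variable
assert original([1, 2, 3]) == sliced([1, 2, 3])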
Mostperformance analysis toolsuse dynamic program analysis techniques.[citation needed]
Most dynamic analysis involvesinstrumentationor transformation.
Since instrumentation can affect runtime performance, interpretation of test results must account for this to avoid misidentifying a performance problem.
DynInst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. Dyninst does not require source code or recompilation in general; however, non-stripped executables and executables with debugging symbols are easier to instrument.
Iroh.jsis a runtime code analysis library forJavaScript. It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns and allows the interception and manipulation of the program's execution behavior.
|
https://en.wikipedia.org/wiki/Dynamic_program_analysis
|
DEVS, abbreviating Discrete Event System Specification, is a modular and hierarchical formalism for modeling and analyzing general systems. These can be discrete event systems, which might be described by state transition tables; continuous state systems, which might be described by differential equations; and hybrid continuous state and discrete event systems. DEVS is a timed event system.
DEVS is a formalism for modeling and analysis of discrete event systems (DESs). The DEVS formalism was invented byBernard P. Zeigler, who is emeritus professor at theUniversity of Arizona. DEVS was introduced to the public in Zeigler's first book,Theory of Modeling and Simulationin 1976,[1]while Zeigler was an associate professor atUniversity of Michigan. DEVS can be seen as an extension of theMoore machineformalism,[2]which is a finite state automaton where the outputs are determined by the current state alone (and do not depend directly on the input). The extension was done by
Since the lifespan of each state is a real number (more precisely, non-negative real) or infinity, it is distinguished from discrete time systems, sequential machines, andMoore machines, in which time is determined by a tick time multiplied by non-negative integers. Moreover, the lifespan can be arandom variable; for example the lifespan of a given state can be distributedexponentiallyoruniformly. The state transition and output functions of DEVS can also bestochastic.
Zeigler proposed a hierarchical algorithm for DEVS model simulation in 1984[4] which was published in the Simulation journal in 1987. Since then, many formalisms extending DEVS have been introduced, each with its own purpose: DESS/DEVS for combined continuous and discrete event systems, P-DEVS for parallel DESs, G-DEVS for piecewise continuous state trajectory modeling of DESs, RT-DEVS for realtime DESs, Cell-DEVS for cellular DESs, Fuzzy-DEVS for fuzzy DESs, Dynamic Structuring DEVS for DESs changing their coupling structures dynamically, and so on. In addition to its extensions, some subclasses such as SP-DEVS and FD-DEVS have been researched for achieving decidability of system properties.
Due to its modular and hierarchical modeling views, as well as its simulation-based analysis capability, the DEVS formalism and its variations have been used in many applications in engineering (such as hardware design, hardware/software codesign, communications systems, and manufacturing systems) and science (such as biology and sociology).
DEVS defines system behavior as well as system structure. System behavior in DEVS formalism is described using input and output events as well as states. For example, for the ping-pong player of Fig. 1, the input event is?receive, and the output event is!send. Each player,A,B, has its states:SendandWait.Sendstate takes 0.1 seconds to send back the ball that is the output event!send, while theWaitstate lasts until the player receives the ball that is the input event?receive.
The structure of ping-pong game is to connect two players: PlayerA'soutput event!sendis transmitted to PlayerB'sinput event?receive, and vice versa.
In the classic DEVS formalism,Atomic DEVScaptures the system behavior, whileCoupled DEVSdescribes the structure of system.
The following formal definition is for Classic DEVS.[5]In this article, we will use the time base,T=[0,∞){\displaystyle \mathbb {T} =[0,\infty )}that is the set of non-negative real numbers; the extended time base,T∞=[0,∞]{\displaystyle \mathbb {T} ^{\infty }=[0,\infty ]}that is the set of non-negative real numbers plus infinity.
An atomic DEVS model is defined as a 7-tuple
where
The atomic DEVS model for player A of Fig. 1 is given Player=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle <X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}such that
X={?receive}Y={!send}S={(d,σ)|d∈{Wait,Send},σ∈T∞}s0=(Send,0.1)ta(s)=σfor alls∈Sδext(((Wait,σ),te),?receive)=(Send,0.1)δint(Send,σ)=(Wait,∞)δint(Wait,σ)=(Send,0.1)λ(Send,σ)=!sendλ(Wait,σ)=ϕ{\displaystyle {\begin{aligned}X&=\{?{\textit {receive}}\}\\Y&=\{!{\textit {send}}\}\\S&=\{(d,\sigma )|d\in \{{\textit {Wait}},{\textit {Send}}\},\sigma \in \mathbb {T} ^{\infty }\}\\s_{0}&=({\textit {Send}},0.1)\\ta(s)&=\sigma {\text{ for all }}s\in S\\\delta _{ext}((({\textit {Wait}},\sigma ),t_{e}),?{\textit {receive}})&=({\textit {Send}},0.1)\\\delta _{int}({\textit {Send}},\sigma )&=({\textit {Wait}},\infty )\\\delta _{int}({\textit {Wait}},\sigma )&=({\textit {Send}},0.1)\\\lambda ({\textit {Send}},\sigma )&=!{\textit {send}}\\\lambda ({\textit {Wait}},\sigma )&=\phi \end{aligned}}}
Both Player A and Player B are atomic DEVS models.
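As an illustration, the Player model above can be encoded as a small plain-Python sketch; the method names (ta, delta_ext, delta_int, out) mirror the 7-tuple but are not the API of any particular DEVS library.

INF = float("inf")

class Player:
    def __init__(self, initial="Send"):
        # partial state d and its remaining schedule sigma
        self.state = (initial, 0.1 if initial == "Send" else INF)

    def ta(self):                       # time advance: lifespan of the current state
        return self.state[1]

    def delta_ext(self, elapsed, x):    # external transition on an input event
        d, sigma = self.state
        if d == "Wait" and x == "?receive":
            self.state = ("Send", 0.1)

    def delta_int(self):                # internal transition when the lifespan expires
        d, _ = self.state
        self.state = ("Wait", INF) if d == "Send" else ("Send", 0.1)

    def out(self):                      # output function, invoked just before delta_int
        return "!send" if self.state[0] == "Send" else None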
Simply speaking, there are two cases that an atomic DEVS modelM{\displaystyle M}can change its states∈S{\displaystyle s\in S}: (1) when an external inputx∈X{\displaystyle x\in X}comes into the systemM{\displaystyle M}; (2) when the elapsed timete{\displaystyle t_{e}}reaches the lifespan ofs{\displaystyle s}which is defined byta(s){\displaystyle ta(s)}. At the same time of (2),M{\displaystyle M}generates an outputy∈Y{\displaystyle y\in Y}which is defined byλ(s){\displaystyle \lambda (s)}.
For formal behavior description of given an Atomic DEVS model, refer to the sectionBehavior of atomic DEVS. Computer algorithms to implement the behavior of a given Atomic DEVS model are available in the sectionSimulation Algorithms for Atomic DEVS.
The coupled DEVS defines which sub-components belong to it and how they are connected with each other. A coupled DEVS model is defined as an 8-tuple
where
The ping-pong game of Fig. 1 can be modeled as a coupled DEVS modelN=<X,Y,D,{Mi},Cxx,Cyx,Cyy,Select>{\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>}whereX={}{\displaystyle X=\{\}};Y={}{\displaystyle Y=\{\}};D={A,B}{\displaystyle D=\{A,B\}};MAandMB{\displaystyle M_{A}{\text{ and }}M_{B}}is described as above;Cxx={}{\displaystyle C_{xx}=\{\}};Cyx={(A.!send,B.?receive),(B.!send,A.?receive)}{\displaystyle C_{yx}=\{(A.!send,B.?receive),(B.!send,A.?receive)\}}; andCyy(A.!send)=ϕ,Cyy(B.!send)=ϕ{\displaystyle C_{yy}(A.!send)=\phi ,C_{yy}(B.!send)=\phi }.
Simply speaking, like the behavior of the atomic DEVS class, a coupled DEVS modelN{\displaystyle N}changes its components' states (1) when an external eventx∈X{\displaystyle x\in X}comes intoN{\displaystyle N}; (2) when one of componentsMi{\displaystyle M_{i}}wherei∈D{\displaystyle i\in D}executes its internal state transition and generates its outputyi∈Yi{\displaystyle y_{i}\in Y_{i}}. In both cases (1) and (2), a triggering event is transmitted to all influences which are defined by coupling setsCxx,Cyx,{\displaystyle C_{xx},C_{yx},}andCyy{\displaystyle C_{yy}}.
For a formal definition of the behavior of the coupled DEVS, refer to the section Behavior of Coupled DEVS. Computer algorithms to implement the behavior of a given coupled DEVS model are available in the section Simulation Algorithms for Coupled DEVS.
The simulation algorithm of DEVS models considers two issues: time synchronization and message propagation. Time synchronization of DEVS ensures that all models share the identical current time. For an efficient execution, the algorithm makes the current time jump to the most urgent time at which an event is scheduled to execute its internal state transition as well as its output generation. Message propagation transmits a triggering message, which can be either an input or output event, along the associated couplings which are defined in a coupled DEVS model. For more detailed information, the reader can refer to Simulation Algorithms for Atomic DEVS and Simulation Algorithms for Coupled DEVS.
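The following sketch, which assumes the Player class sketched earlier, shows the essence of this loop for the ping-pong coupling of Fig. 1: the clock jumps to the most imminent next-event time, the imminent component produces its output and takes its internal transition, and the message is propagated along the couplings.

def simulate(players, couplings, until=1.0):
    t = 0.0
    elapsed = {name: 0.0 for name in players}
    while True:
        # time remaining until each component's next internal event
        remaining = {n: p.ta() - elapsed[n] for n, p in players.items()}
        star = min(remaining, key=remaining.get)        # most imminent component
        if t + remaining[star] > until:
            break
        t += remaining[star]                            # jump to the next event time
        y = players[star].out()
        players[star].delta_int()
        for name in players:
            elapsed[name] = 0.0 if name == star else elapsed[name] + remaining[star]
        # propagate the output event along the coupling to its destination
        if y is not None and (star, y) in couplings:
            dest, x = couplings[(star, y)]
            players[dest].delta_ext(elapsed[dest], x)
            elapsed[dest] = 0.0
        print(f"t={t:.1f}: {star} emits {y}")

players = {"A": Player("Send"), "B": Player("Wait")}
couplings = {("A", "!send"): ("B", "?receive"), ("B", "!send"): ("A", "?receive")}
simulate(players, couplings)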
By introducing a quantization method which abstracts a continuous segment as a piecewise constant segment, DEVS can simulate behaviors of continuous state systems which are described by networks of differential algebraic equations. This research was initiated by Zeigler in the 1990s.[7] Many properties have been clarified by Prof. Kofman in the 2000s and by Dr. Nutaro. In 2006, Prof. Cellier, the author of Continuous System Modeling,[8] and Prof. Kofman wrote a textbook, Continuous System Simulation,[9] in which Chapters 11 and 12 cover how DEVS simulates continuous state systems. Dr. Nutaro's book[10] also covers the discrete event simulation of continuous state systems.[11]
As an alternative to the sampling-based simulation method, an exhaustive behavior-generating approach, generally called verification, has been applied for the analysis of DEVS models. It has been proven that the infinite states of a given DEVS model (especially a coupled DEVS model) can be abstracted by a behaviorally isomorphic finite structure, called a reachability graph, when the given DEVS model is a sub-class of DEVS such as Schedule-Preserving DEVS (SP-DEVS), Finite & Deterministic DEVS (FD-DEVS),[12] and Finite & Real-time DEVS (FRT-DEVS).[13] As a result, based on the reachability graph, (1) dead-lock and live-lock freeness as qualitative properties are decidable with SP-DEVS,[14] FD-DEVS,[15] and FRT-DEVS;[13] and (2) min/max processing time bounds as a quantitative property are decidable with SP-DEVS, as of 2012.
Numerous extensions of the classic DEVS formalism have been developed in the last decades, among them formalisms which allow model structures to change as the simulation time evolves.
G-DEVS,[16][17]Parallel DEVS, Dynamic Structuring DEVS, Cell-DEVS,[18]dynDEVS, Fuzzy-DEVS, GK-DEVS, ml-DEVS, Symbolic DEVS, Real-Time DEVS, rho-DEVS
There are some sub-classes known as Schedule-Preserving DEVS (SP-DEVS) and Finite and Deterministic DEVS (FD-DEVS) which were designed to support verification analysis. The expressiveness of these classes satisfies E(SP-DEVS) ⊂ E(FD-DEVS) ⊂ E(DEVS), where E(formalism) denotes the expressiveness of formalism.
The behavior of a given DEVS model is a set of sequences of timed events, including null events, called event segments, which make the model move from one state to another within a set of legal states. To define it this way, the concept of a set of illegal states as well as a set of legal states needs to be introduced.
In addition, since the behavior of a given DEVS model needs to define how the state transitions change both when time passes and when an event occurs, it has been described by a more general formalism, called general system.[19] In this article, we use a sub-class of the General System formalism, called timed event system, instead.
Depending on how the total state and the external state transition function of a DEVS model are defined, there are two ways to define the behavior of a DEVS model usingTimed Event System. Since thebehavior of a coupled DEVSmodel is defined as anatomic DEVSmodel, the behavior of coupled DEVS class is also defined by timed event system.
Suppose that a DEVS model,M=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}has
Then the DEVS model,M{\displaystyle {\mathcal {M}}}is aTimed Event SystemG=<Z,Q,Q0,QA,Δ>{\displaystyle {\mathcal {G}}=<Z,Q,Q_{0},Q_{A},\Delta >}where
For a total stateq=(s,te)∈QA{\displaystyle q=(s,t_{e})\in Q_{A}}at timet∈T{\displaystyle t\in \mathbb {T} }and anevent segmentω∈ΩZ,[tl,tu]{\displaystyle \omega \in \Omega _{Z,[t_{l},t_{u}]}}as follows.
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(x,t){\displaystyle \omega =(x,t)}where the event is an input eventx∈X{\displaystyle x\in X},
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(y,t){\displaystyle \omega =(y,t)}where the event is an output event or the unobservable eventy∈Yϕ{\displaystyle y\in Y^{\phi }},
Computer algorithms to simulate this view of behavior are available atSimulation Algorithms for Atomic DEVS.
Suppose that a DEVS model,M=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}has
Then the DEVSQ=D{\displaystyle Q={\mathcal {D}}}is a timed event systemG=<Z,Q,Q0,QA,Δ>{\displaystyle {\mathcal {G}}=<Z,Q,Q_{0},Q_{A},\Delta >}where
For a total stateq=(s,ts,te)∈QA{\displaystyle q=(s,t_{s},t_{e})\in Q_{A}}at timet∈T{\displaystyle t\in \mathbb {T} }and anevent segmentω∈ΩZ,[tl,tu]{\displaystyle \omega \in \Omega _{Z,[t_{l},t_{u}]}}as follows.
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(x,t){\displaystyle \omega =(x,t)}where the event is an input eventx∈X{\displaystyle x\in X},
Ifunit event segmentω{\displaystyle \omega }is atimed eventω=(y,t){\displaystyle \omega =(y,t)}where the event is an output event or the unobservable eventy∈Yϕ{\displaystyle y\in Y^{\phi }},
Computer algorithms to simulate this view of behavior are available atSimulation Algorithms for Atomic DEVS.
View1 has been introduced by Zeigler[20]in which given a total stateq=(s,te)∈Q{\displaystyle q=(s,t_{e})\in Q}and
whereσ{\displaystyle \sigma }is the remaining time.[20][19]In other words, the set of partial states is indeedS={(d,σ)|d∈S′,σ∈T∞}{\displaystyle S=\{(d,\sigma )|d\in S',\sigma \in \mathbb {T} ^{\infty }\}}whereS′{\displaystyle S'}is a state set.
When a DEVS model receives an input eventx∈X{\displaystyle x\in X}, View1 resets the elapsed timete{\displaystyle t_{e}}by zero, if the DEVS model needs to ignorex{\displaystyle x}in terms of the lifespan control, modelers have to update the remaining time
in the external state transition functionδext{\displaystyle \delta _{ext}}that is the responsibility of the modelers.
Since the number of possible values of σ{\displaystyle \sigma } is the same as the number of possible input events coming to the DEVS model, it is unlimited. As a result, the number of states s=(d,σ)∈S{\displaystyle s=(d,\sigma )\in S} is also unlimited, which is the reason why View2 has been proposed.
If we do not care about the finite-vertex reachability graph of a DEVS model, View1 has the advantage of simplicity: the elapsed time is treated as te=0{\displaystyle t_{e}=0} every time any input event arrives into the DEVS model. A disadvantage is that modelers of DEVS should know how to manage σ{\displaystyle \sigma } as above, which is not explicitly explained in δext{\displaystyle \delta _{ext}} itself but in Δ{\displaystyle \Delta }.
View2 has been introduced by Hwang and Zeigler[21][22]in which given a total stateq=(s,ts,te)∈Q{\displaystyle q=(s,t_{s},t_{e})\in Q}, the remaining time,σ{\displaystyle \sigma }is computed as
When a DEVS model receives an input eventx∈X{\displaystyle x\in X}, View2 resets the elapsed timete{\displaystyle t_{e}}by zero only ifδext(q,x)=(s′,1){\displaystyle \delta _{ext}(q,x)=(s',1)}. If the DEVS model needs to ignorex{\displaystyle x}in terms of the lifespan control, modelers can useδext(q,x)=(s′,0){\displaystyle \delta _{ext}(q,x)=(s',0)}.
Unlike View1, since the remaining timeσ{\displaystyle \sigma }is not component ofS{\displaystyle S}in nature, if the number of states, i.e.|S|{\displaystyle |S|}is finite, we can draw a finite-vertex (as well as edge) state-transition diagram.[21][22]As a result, we can abstract behavior of such a DEVS-class network, for exampleSP-DEVSandFD-DEVS, as a finite-vertex graph, called reachability graph.[21][22]
DEVS is closed under coupling.[3][23]In other words, given acoupled DEVSmodelN{\displaystyle N}, its behavior is described as an atomic DEVS modelM{\displaystyle M}. For a given coupled DEVSN{\displaystyle N}, once we have an equivalent atomic DEVSM{\displaystyle M}, behavior ofM{\displaystyle M}can be referred tobehavior of atomic DEVSwhich is based onTimed Event System.
Similar tobehavior of atomic DEVS, behavior of the Coupled DEVS class is described depending on definition of the total state set and its handling as follows.
Given acoupled DEVSmodelN=<X,Y,D,{Mi},Cxx,Cyx,Cyy,Select>{\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>}, its behavior is described as an atomic DEVS modelM=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle M=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}
where
where
Given the partial states=(…,(si,tei),…)∈S{\displaystyle s=(\ldots ,(s_{i},t_{ei}),\ldots )\in S}, letIMM(s)={i∈D|tai(si)=ta(s)}{\displaystyle IMM(s)=\{i\in D|ta_{i}(s_{i})=ta(s)\}}denotethe set of imminent components. Thefiring componenti∗∈D{\displaystyle i^{*}\in D}which triggers the internal state transition and an output event is determined by
where
Given acoupled DEVSmodelN=<X,Y,D,{Mi},Cxx,Cyx,Cyy,Select>{\displaystyle N=<X,Y,D,\{M_{i}\},C_{xx},C_{yx},C_{yy},Select>}, its behavior is described as an atomic DEVS modelM=<X,Y,S,s0,ta,δext,δint,λ>{\displaystyle M=<X,Y,S,s_{0},ta,\delta _{ext},\delta _{int},\lambda >}
where
where
and
Given the partial states=(…,(si,tsi,tei),…)∈S{\displaystyle s=(\ldots ,(s_{i},t_{si},t_{ei}),\ldots )\in S}, letIMM(s)={i∈D|tsi−tei=ta(s)}{\displaystyle IMM(s)=\{i\in D|t_{si}-t_{ei}=ta(s)\}}denotethe set of imminent components. Thefiring componenti∗∈D{\displaystyle i^{*}\in D}which triggers the internal state transition and an output event is determined by
where
In a coupled DEVS model with non-empty sub-components, i.e., |D|>0{\displaystyle |D|>0}, there are multiple clocks tracking the elapsed times of the components, so the time passage of the model is noticeable.
Given a total stateq=(s,te)∈Q{\displaystyle q=(s,t_{e})\in Q}wheres=(…,(si,tei),…){\displaystyle s=(\ldots ,(s_{i},t_{ei}),\ldots )}
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}, the state trajectory in terms ofTimed Event Systemis
Given a total stateq=(s,ts,te)∈Q{\displaystyle q=(s,t_{s},t_{e})\in Q}wheres=(…,(si,tsi,tei),…){\displaystyle s=(\ldots ,(s_{i},t_{si},t_{ei}),\ldots )}
Ifunit event segmentω{\displaystyle \omega }is thenull event segment, i.e.ω=ϵ[t,t+dt]{\displaystyle \omega =\epsilon _{[t,t+dt]}}, the state trajectory in terms ofTimed Event Systemis
Given an atomic DEVS model, simulation algorithms are methods to generate the model's legal behaviors, which are trajectories that do not reach illegal states (see Behavior of DEVS). Zeigler originally introduced the algorithms that handle the time variables related to lifespan ts∈[0,∞]{\displaystyle t_{s}\in [0,\infty ]} and elapsed time te∈[0,∞){\displaystyle t_{e}\in [0,\infty )} by introducing two other time variables, last event time, tl∈[0,∞){\displaystyle t_{l}\in [0,\infty )}, and next event time tn∈[0,∞]{\displaystyle t_{n}\in [0,\infty ]}, with the following relations:[3]
tn = tl + ta(s)
and
te = t − tl
where t∈[0,∞){\displaystyle t\in [0,\infty )} denotes the current time. And the remaining time,
tr = tn − t,
is equivalently computed as
tr = ta(s) − te;
apparently tr∈[0,∞]{\displaystyle t_{r}\in [0,\infty ]}.
Since the behavior of a given atomic DEVS model can be defined in two different views depending on the total state and the external transition function (refer toBehavior of DEVS), the simulation algorithms are also introduced in two different views as below.
Regardless of two different views of total states, algorithms for initialization and internal transition cases are commonly defined as below.
As addressed in Behavior of Atomic DEVS, when DEVS receives an input event, upon calling δext{\displaystyle \delta _{ext}}, the last event time, tl{\displaystyle t_{l}}, is set to the current time, t{\displaystyle t}; thus the elapsed time te{\displaystyle t_{e}} becomes zero because te=t−tl{\displaystyle t_{e}=t-t_{l}}.
Notice that, as addressed in Behavior of Atomic DEVS, depending on the value of b{\displaystyle b} returned by δext{\displaystyle \delta _{ext}}, the last event time, tl{\displaystyle t_{l}}, and next event time, tn{\displaystyle t_{n}}, and consequently the elapsed time, te{\displaystyle t_{e}}, and lifespan ts{\displaystyle t_{s}}, are updated (if b=1{\displaystyle b=1}) or preserved (if b=0{\displaystyle b=0}).
Given a coupled DEVS model, simulation algorithms are methods to generate the model's legal behaviors, which are a set of trajectories that do not reach illegal states (see behavior of a Coupled DEVS model). Zeigler originally introduced the algorithms that handle the time variables related to lifespan ts∈[0,∞]{\displaystyle t_{s}\in [0,\infty ]} and elapsed time te∈[0,∞){\displaystyle t_{e}\in [0,\infty )} by introducing two other time variables, last event time, tl∈[0,∞){\displaystyle t_{l}\in [0,\infty )}, and next event time tn∈[0,∞]{\displaystyle t_{n}\in [0,\infty ]}, with the following relations:[3]
tn = tl + ta(s)
and
te = t − tl
where t∈[0,∞){\displaystyle t\in [0,\infty )} denotes the current time. And the remaining time,
tr = tn − t,
is equivalently computed as
tr = ta(s) − te;
apparently tr∈[0,∞]{\displaystyle t_{r}\in [0,\infty ]}. Based on these relationships, the algorithms to simulate the behavior of a given Coupled DEVS are written as follows.
FD-DEVS(Finite & Deterministic Discrete Event System Specification) is a formalism for modeling and analyzingdiscrete event dynamic systemsin both simulation and verification ways. FD-DEVS also provides modular and hierarchical modeling features which have been inherited from Classic DEVS.
FD-DEVS was originally named "Schedule-Controllable DEVS"[24] and designed to support verification analysis of its networks, which had been an open problem of the DEVS formalism for 30 years. In addition, it was also designed to resolve the so-called "OPNA" problem of SP-DEVS. From the viewpoint of Classic DEVS, FD-DEVS has three restrictions
The third restriction can also be seen as a relaxation of SP-DEVS, in which the schedule is always preserved by any input events. Due to this relaxation there is no longer an OPNA problem, but there is also one limitation: the time-line abstraction which can be used for abstracting elapsed times of SP-DEVS networks is no longer useful for FD-DEVS networks.[24] But another time abstraction method,[25] invented by Prof. D. Dill, can be applied to obtain a finite-vertex reachability graph for FD-DEVS networks.
Consider a single ping-pong match in which there are two players. Each player can be modeled by FD-DEVS such that the player model has an input event ?receive and an output event !send, and it has two states: Send and Wait. Once the player gets into "Send", it will generate "!send" and go back to "Wait" after the sending time, which is 0.1 time unit. When staying at "Wait" and it gets "?receive", it changes into "Send" again. In other words, the player model stays at "Wait" forever unless it gets "?receive".
To make a complete ping-pong match, one player starts as an offender whose initial state is "Send" and the other starts as a defender whose initial state is "Wait". Thus in Fig. 1 Player A is the initial offender and Player B is the initial defender. In addition, to make the game continue, each player's "!send" event should be coupled to the other player's "?receive" as shown in Fig. 1.
Consider a toaster in which there are two slots that have their own start knobs as shown in Fig. 2(a). Each slot has the identical functionality except their toasting time. Initially, the knob is not pushed, but if one pushes the knob, the associated slot starts toasting for its toasting time: 20 seconds for the left slot, 40 seconds for the right slot. After the toasting time, each slot and its knobs pop up. Notice that even though one tries to push a knob when its associated slot is toasting, nothing happens.
One can model it with FD-DEVS as shown in Fig. 2(b). Two slots are modeled as atomic FD-DEVS whose input event is "?push" and output event is "!pop", whose states are "Idle" (I) and "Toast" (T), and whose initial state is "Idle". When it is "Idle" and receives "?push" (because one pushes the knob), its state changes to "Toast". In other words, it stays at "Idle" forever unless it receives the "?push" event. 20 (resp. 40) seconds later the left (resp. right) slot returns to "Idle".
M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}
where
The formal representation of the player in the ping-pong example shown in Fig. 1 can be given as follows.M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}whereX{\displaystyle X}={?receive};Y{\displaystyle Y}={!send};S{\displaystyle S}={Send, Wait};s0{\displaystyle s_{0}}=Send for Player A, Wait for Player B;τ{\displaystyle \tau }(Send)=0.1,τ{\displaystyle \tau }(Wait)=∞{\displaystyle \infty };δx{\displaystyle \delta _{x}}(Wait,?receive)=(Send,1),δx{\displaystyle \delta _{x}}(Send,?receive)=(Send,0);δy{\displaystyle \delta _{y}}(Send)=(!send, Wait),δy{\displaystyle \delta _{y}}(Wait)=(ϕ{\displaystyle \phi }, Wait).
The formal representation of the slot of Two-slot Toaster Fig. 2(a) and (b) can be given as follows.M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}whereX{\displaystyle X}={?push};Y{\displaystyle Y}={!pop};S{\displaystyle S}={I, T};s0{\displaystyle s_{0}}=I;τ{\displaystyle \tau }(T)=20 for the left slot, 40 for the right slot,τ{\displaystyle \tau }(I)=∞{\displaystyle \infty };δx{\displaystyle \delta _{x}}(I, ?push)=(T,1),δx{\displaystyle \delta _{x}}(T,?push)=(T,0);δy{\displaystyle \delta _{y}}(T)=(!pop, I),δy{\displaystyle \delta _{y}}(I)=(ϕ{\displaystyle \phi }, I).
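To illustrate the role of the schedule flag b returned by δx, the slot model above can be sketched as plain Python data; the helper names are illustrative, not a library API.

INF = float("inf")

def make_slot(toast_time):
    return {
        "s0": "I",
        "tau": {"I": INF, "T": toast_time},
        # delta_x[(state, input)] = (next state, b)
        "delta_x": {("I", "?push"): ("T", 1), ("T", "?push"): ("T", 0)},
        # delta_y[state] = (output, next state)
        "delta_y": {"T": ("!pop", "I"), "I": (None, "I")},
    }

def external(model, total_state, x):
    s, ts, te = total_state
    s2, b = model["delta_x"][(s, x)]
    # b=1: restart the schedule with the new state's lifespan;
    # b=0: keep the existing schedule, which is why pushing the knob
    #      during toasting has no effect
    return (s2, model["tau"][s2], 0.0) if b == 1 else (s2, ts, te)

left = make_slot(20)
state = (left["s0"], left["tau"]["I"], 0.0)
state = external(left, state, "?push")      # -> ('T', 20, 0.0): starts toasting
state = (state[0], state[1], 7.0)           # 7 seconds pass
state = external(left, state, "?push")      # -> ('T', 20, 7.0): push ignored
print(state)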
As mentioned above, FD-DEVS is a relaxation of SP-DEVS; that is, FD-DEVS is a superclass of SP-DEVS. Here we give an FD-DEVS model of the crosswalk light controller which is also used for SP-DEVS below in this article.M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}whereX{\displaystyle X}={?p};Y{\displaystyle Y}={!g:0, !g:1, !w:0, !w:1};S{\displaystyle S}={BG, BW, G, GR, R, W, D};s0{\displaystyle s_{0}}=BG,τ{\displaystyle \tau }(BG)=0.5,τ{\displaystyle \tau }(BW)=0.5,τ{\displaystyle \tau }(G)=30,τ{\displaystyle \tau }(GR)=30,τ{\displaystyle \tau }(R)=2,τ{\displaystyle \tau }(W)=26,τ{\displaystyle \tau }(D)=2;δx{\displaystyle \delta _{x}}(G,?p)=(GR,0),δx{\displaystyle \delta _{x}}(s,?p)=(s,0) if s≠{\displaystyle \neq }G;δy{\displaystyle \delta _{y}}(BG)=(!g:1, BW),δy{\displaystyle \delta _{y}}(BW)=(!w:0, G),δy{\displaystyle \delta _{y}}(G)=(ϕ{\displaystyle \phi }, G),δy{\displaystyle \delta _{y}}(GR)=(!g:0, R),δy{\displaystyle \delta _{y}}(R)=(!w:1, W),δy{\displaystyle \delta _{y}}(W)=(!w:0, D),δy{\displaystyle \delta _{y}}(D)=(!g:1, G);
A FD-DEVS model,M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}is DEVSM=<X,Y,S′,s0′,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S',s_{0}',ta,\delta _{ext},\delta _{int},\lambda >}where
δext(s,ts,te,x)={(s′,ts−te)ifδx(s,x)=(s′,0)(s′,τ(s′))ifδx(s,x)=(s′,1){\displaystyle \delta _{ext}(s,t_{s},t_{e},x)={\begin{cases}(s',t_{s}-t_{e})&{\text{if }}\delta _{x}(s,x)=(s',0)\\(s',\tau (s'))&{\text{if }}\delta _{x}(s,x)=(s',1)\\\end{cases}}}
For details of DEVS behavior, the readers can refer toBehavior of Atomic DEVS
Fig. 3 shows an event segment (top) and the associated state trajectory (bottom) of Player A, who plays the ping-pong game introduced in Fig. 1. In Fig. 3 the status of Player A is described as (state, lifespan, elapsed time)=(s,ts,te{\displaystyle s,t_{s},t_{e}}) and the line segment at the bottom of Fig. 3 denotes the value of the elapsed time. Since the initial state of Player A is "Send" and its lifetime is 0.1 seconds, the height of (Send, 0.1, te{\displaystyle t_{e}}) is 0.1, which is the value of ts{\displaystyle t_{s}}. After changing into (Wait, inf, 0), when te{\displaystyle t_{e}} is reset to 0, Player A doesn't know when te{\displaystyle t_{e}} becomes 0 again. However, since Player B sends back the ball to Player A 0.1 seconds later, Player A gets back to (Send, 0.1, 0) at time 0.2. From that time, 0.1 seconds later, when Player A's status becomes (Send, 0.1, 0.1), Player A sends back the ball to Player B and gets into (Wait, inf, 0). Thus, these cyclic state transitions, which move between "Send" and "Wait" back and forth, go on forever.
Fig. 4 shows an event segment (top) and the associated state trajectory (bottom) of the left slot of the two-slot toaster introduced in Fig. 2. Like Fig. 3, the status of the left slot is described as (state, lifespan, elapsed time)=(s,ts,te{\displaystyle s,t_{s},t_{e}}) in Fig. 4. Since the initial state of the toaster is "I" and its lifetime is infinity, the height of (I, inf, te{\displaystyle t_{e}}) can be determined by when ?push occurs. Fig. 4 illustrates the case where ?push happens at time 40 and the toaster changes into (T, 20, 0). From that time, 20 seconds later, when its status becomes (T, 20, 20), the toaster gets back to (I, inf, 0), where we don't know when it gets back to "T" again. Fig. 4 shows the case where ?push occurs at time 90, so the toaster gets into (T, 20, 0). Notice that even though someone pushes again at time 97, that status (T, 20, 7) doesn't change at all because δx{\displaystyle \delta _{x}}(T,?push)=(T,0).
The property of non-negative rational-valued lifespans which can be preserved or changed by input events along with finite numbers of states and events guarantees that the behavior of FD-DEVS networks can be abstracted as an equivalent finite-vertex reachability graph by abstracting the infinitely-many values of the elapsed times using the time abstracting technique introduced by Prof. D. Dill.[25]An algorithm generating a finite-vertex reachability graph (RG) has been introduced by Zeigler.[22][28]
Fig. 5 shows the reachability graph of the two-slot toaster which was shown in Fig. 2. In the reachability graph, each vertex has its own discrete state and time zone, which are ranges of te1,te2{\displaystyle t_{e1},t_{e2}} and te1−te2{\displaystyle t_{e1}-t_{e2}}. For example, for node (6) of Fig. 5, the discrete state information is ((E,∞{\displaystyle \infty }),(T,40)), and the time zone is 0≤te1≤40, 0≤te2≤40, −20≤te1−te2≤0. Each directed arc shows how its source vertex changes into the destination vertex along with an associated event and a set of reset models. For example, the transition arc from (6) to (5) is triggered by the push1 event. The set {1} on that arc denotes that the elapsed time of component 1 (that is, te1{\displaystyle t_{e1}}) is reset to 0 when the transition from (6) to (5) occurs.[22]
As a qualitative property, safety of a FD-DEVS network is decidable by (1) generating RG of the given network and (2) checking whether some bad states are reachable or not.[21]
As a qualitative property, liveness of a FD-DEVS network is decidable by (1) generating RG of the given network, (2) from RG, generating kerneldirected acyclic graph(KDAG) in which a vertex isstrongly connected component, and (3) checking if a vertex of KDAG contains a state transition cycle which contains a set of liveness states.[21]
The fact that all characteristic functions τ,δx,δy{\displaystyle \tau ,\delta _{x},\delta _{y}} of FD-DEVS are deterministic can be seen as a limitation when modeling systems that have non-deterministic behaviors. For example, if a player of the ping-pong game shown in Fig. 1 has a stochastic lifespan at the "Send" state, FD-DEVS doesn't capture the non-determinism effectively.
There are two open source libraries DEVS# written inC#[29]and XSY written inPython[30]that support some reachability graph-based verification algorithms for finding safeness and liveness.
For standardization of DEVS, especially using FDDEVS, Dr. Saurabh Mittal together with co-workers has worked on defining an XML format for FDDEVS.[31] This standard XML format was used for UML execution.[32]
SP-DEVS(Schedule-Preserving Discrete Event System Specification) is a formalism for modeling and analyzing discrete event systems in both simulation and verification ways. SP-DEVS also provides modular and hierarchical modeling features which have been inherited from the Classic DEVS.
SP-DEVS has been designed to support verification analysis of its networks by guaranteeing that a finite-vertex reachability graph of the original networks can be obtained, which had been an open problem of the DEVS formalism for roughly 30 years. To get such a reachability graph of its networks, three restrictions have been imposed on SP-DEVS:
Thus, SP-DEVS is a sub-class of both DEVS and FD-DEVS. These three restrictions mean that the SP-DEVS class is closed under coupling even though the number of states is finite. This property enables finite-vertex graph-based verification for some qualitative properties and a quantitative property, even with SP-DEVS coupled models.
Consider a crosswalk system. Since a red light (resp. don't-walk light) behaves the opposite way of a green light (resp. walk light), for simplicity, we consider just two lights: a green light (G) and a walk light (W); and one push button as shown in Fig. 1. We want to control two lights of G and W with a set of timing constraints.
To initialize two lights, it takes 0.5 seconds to turn G on and 0.5 seconds later, W gets off. Then, every 30 seconds, there is a chance that G becomes off and W on if someone pushed the push button. For a safety reason, W becomes on two seconds after G got off. 26 seconds later, W gets off and then two seconds later G gets back on. These behaviors repeats.
To build a controller for the above requirements, we can consider one input event 'push-button' (abbreviated ?p) and four output events 'green-on' (!g:1), 'green-off' (!g:0), 'walk-on' (!w:1) and 'walk-off' (!w:0), which will be used as command signals for the green light and the walk light. As the set of states of the controller, we consider 'booting-green' (BG), 'booting-walk' (BW), 'green-on' (G), 'green-to-red' (GR), 'red-on' (R), 'walk-on' (W), and 'delay' (D). Let's design the state transitions as shown in Fig. 2. Initially, the controller starts at BG, whose lifespan is 0.5 seconds. After the lifespan, it moves to the BW state; at this moment, it generates the 'green-on' event, too. After 0.5 seconds staying at BW, it moves to the G state, whose lifespan is 30 seconds. The controller can keep staying at G by looping G to G without generating any output event, or can move to the GR state when it receives the external input event ?p. But the actual staying time at GR is the remaining time for looping at G. From GR, it moves to the R state, generating the output event !g:0; the R state lasts two seconds, and then it moves to the W state with output event !w:1. 26 seconds later, it moves to the D state, generating !w:0, and after staying 2 seconds at D, it moves back to G with output event !g:1.
The above controller for crosswalk lights can be modeled by an atomic SP-DEVS model. Formally, an atomic SP-DEVS is a 7-tupleM=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}
where
The above controller shown in Fig. 2 can be written asM=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}whereX{\displaystyle X}={?p};Y{\displaystyle Y}={!g:0, !g:1, !w:0, !w:1};S{\displaystyle S}={BG, BW, G, GR, R, W, D};s0{\displaystyle s_{0}}=BG,τ{\displaystyle \tau }(BG)=0.5,τ{\displaystyle \tau }(BW)=0.5,τ{\displaystyle \tau }(G)=30,τ{\displaystyle \tau }(GR)=30,τ{\displaystyle \tau }(R)=2,τ{\displaystyle \tau }(W)=26,τ{\displaystyle \tau }(D)=2;δx{\displaystyle \delta _{x}}(G,?p)=GR,δx{\displaystyle \delta _{x}}(s,?p)=s if s≠{\displaystyle \neq }G;δy{\displaystyle \delta _{y}}(BG)=(!g:1, BW),δy{\displaystyle \delta _{y}}(BW)=(!w:0, G),δy{\displaystyle \delta _{y}}(G)=(ϕ{\displaystyle \phi }, G),δy{\displaystyle \delta _{y}}(GR)=(!g:0, R),δy{\displaystyle \delta _{y}}(R)=(!w:1, W),δy{\displaystyle \delta _{y}}(W)=(!w:0, D),δy{\displaystyle \delta _{y}}(D)=(!g:1, G);
To capture the dynamics of an atomic SP-DEVS, we need to introduce two variables associated with time. One is the lifespan, the other is the elapsed time since the last resetting. Let ts∈Q[0,∞]{\displaystyle t_{s}\in \mathbb {Q} _{[0,\infty ]}} be the lifespan, which is not continuously increasing but is set when a discrete event happens. Let te∈[0,∞]{\displaystyle t_{e}\in [0,\infty ]} denote the elapsed time, which increases continuously over time if there is no resetting.
Fig. 3 shows a state trajectory associated with an event segment of the SP-DEVS model shown in Fig. 2. The top of Fig. 3 shows an event trajectory in which the horizontal axis is a time axis, so it shows that an event occurs at a certain time; for example, !g:1 occurs at 0.5 and !w:0 at 1.0 time unit, and so on. The bottom of Fig. 3 shows the state trajectory associated with the above event segment, in which the state s∈S{\displaystyle s\in S} is associated with its lifespan and its elapsed time in the form of (s,ts,te){\displaystyle (s,t_{s},t_{e})}. For example, (G, 30, 11) denotes that the state is G, its lifespan is 30, and the elapsed time is 11 time units. The line segments at the bottom of Fig. 3 show the time flow of the elapsed time, which is the only continuous variable in SP-DEVS.
One interesting feature of SP-DEVS is the preservation of the schedule (restriction (3) of SP-DEVS), which is illustrated at time 47 in Fig. 3 when the external event ?p happens. At this moment, even though the state can change from G to GR, the elapsed time does not change, so the line segment is not broken at time 47 and te{\displaystyle t_{e}} can grow up to ts{\displaystyle t_{s}}, which is 30 in this example. Due to this preservation of the schedule under input events, as well as the restriction of the time advance to non-negative rational numbers (see restriction (2) above), the height of every sawtooth can be a non-negative rational number or infinity (as shown at the bottom of Fig. 3) in a SP-DEVS model.
A SP-DEVS model,M=<X,Y,S,s0,τ,δx,δy>{\displaystyle M=<X,Y,S,s_{0},\tau ,\delta _{x},\delta _{y}>}is DEVSM=<X,Y,S′,s0′,ta,δext,δint,λ>{\displaystyle {\mathcal {M}}=<X,Y,S',s_{0}',ta,\delta _{ext},\delta _{int},\lambda >}where
The property of non-negative rational-valued lifespans which are not changed by input events along with finite numbers of states and events guarantees that the behavior of SP-DEVS networks can be abstracted as an equivalent finite-vertex reachability graph by abstracting the infinitely-many values of the elapsed times.
To abstract the infinitely-many cases of elapsed times for each components of SP-DEVS networks, a time-abstraction method, called thetime-line abstractionhas been introduced in which the orders and relative difference of schedules are preserved.[34][35]By using the time-line abstraction technique, the behavior of any SP-DEVS network can be abstracted as a reachability graph whose numbers of vertices and edges are finite.
As a qualitative property, safety of a SP-DEVS network is decidable by (1) generating the finite-vertex reachability graph of the given network and (2) checking whether some bad states are reachable or not.[34]
As a qualitative property, liveness of a SP-DEVS network is decidable by (1) generating the finite-vertex reachability graph (RG) of the given network, (2) from RG, generating kerneldirected acyclic graph(KDAG) in which a vertex isstrongly connected component, and (3) checking if a vertex of KDAG contains a state transition cycle which contains a set of liveness states.[34]
As a quantitative property, minimum and maximum processing time bounds from two events in SP-DEVS networks can be computed by (1) generating the finite-vertex reachability graph and (2.a) by finding the shortest paths for the minimum processing time bound and (2.b) by finding the longest paths (if available) for the maximum processing time bound.[35]
Let a total state(s,ts,te){\displaystyle (s,t_{s},t_{e})}of a SP-DEVS model bepassiveifts=∞{\displaystyle t_{s}=\infty }; otherwise, it beactive.
One known limitation of SP-DEVS is the phenomenon that "once an SP-DEVS model becomes passive, it never returns to become active (OPNA)". This phenomenon was first found by Hwang,[36] although it was originally called ODNR ("once it dies, it never returns"). It happens because of restriction (3) above, in which no input event can change the schedule, so a passive state cannot be awakened into an active state.
For example, the toaster models drawn in Fig. 3(b) are not SP-DEVS because the total state associated with "idle" (I), is passive but it moves to an active state, "toast" (T) whose toasting time is 20 seconds or 40 seconds. Actually, the model shown in Fig. 3(b) isFD-DEVS.
There is an open source library, called DEVS#[29]that supports some algorithms for finding safeness and liveness as well as Min/Max processing time bounds.
|
https://en.wikipedia.org/wiki/DEVS
|
Crash-only software is a computer program that handles failures by simply restarting, without attempting any sophisticated recovery.[1] Correctly written components of crash-only software can microreboot to a known-good state without the help of a user. Since failure-handling and normal startup use the same methods, this can increase the chance that bugs in failure-handling code will be noticed,[clarification needed] except when there are leftover artifacts, such as data corruption from a severe failure, that don't occur during normal startup.[citation needed]
Crash-only software also has benefits for end-users. All too often, applications do not save their data and settings while running, only at the end of their use. For example,word processorsusually save settings when they are closed. A crash-only application is designed to save all changed user settings soon after they are changed, so that thepersistent statematches that of the running machine. No matter how an application terminates (be it a clean close or the sudden failure of a laptop battery), the state will persist.
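A minimal sketch of this persist-on-change pattern, assuming a simple JSON settings file (the file name and helper functions are illustrative), writes every change to disk immediately and atomically so the persistent state always matches the running state:

import json, os, tempfile

SETTINGS_PATH = "settings.json"

def load_settings():
    try:
        with open(SETTINGS_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_setting(key, value):
    settings = load_settings()
    settings[key] = value
    # write to a temporary file, then atomically replace the old file so a
    # crash mid-write never leaves a half-written settings file behind
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(SETTINGS_PATH) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(settings, f)
    os.replace(tmp, SETTINGS_PATH)

save_setting("font_size", 14)     # persisted as soon as the user changes it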
Thissoftware-engineering-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Crash-only_software
|
TrackRwas a commercialkey finderthat assisted in the tracking of lost belongings and devices.[1]Trackr was produced by the company Phone Halo[2]and was inspired by the founders' losing their keys on a beach during a surfing trip.[3]
The founders ofPhone Halobegan working on TrackR in 2009. In 2010, they founded the company and launched the product.[4]In Winter 2018, TrackR rebranded itself toAdero, as part of changing its focus to other uses for its tracking technology, taking TrackR beyond the Bluetooth fobs that had been the core of its service.[5]TrackR shut down its services and removed its apps in August 2021.[6]
The device contains a lithium battery that needs to be changed about once a year by the user. It communicates its current location via Bluetooth 4.0 to an Android 4.4+ or iOS 8.0+ mobile device on which the TrackR app is installed and running. This feature is referred to as "Crowd Locate", since each device will report its location to all other TrackR devices in range, including those that are neither owned nor registered by the user. This crowd-sourced relaying is useful because the app must be installed and running on a nearby Bluetooth-enabled device for any device's location to be relayed.
As of August 2017, over 5 million TrackR devices had been sold.[3]
As of August 2021, the official website stated that the manufacturer had discontinued app support for both Apple and Android devices.
ForTrackr Bravo, the producer published the following data as of August 2017:[7]
|
https://en.wikipedia.org/wiki/TrackR
|
In mathematics, the subderivative (or subgradient) generalizes the derivative to convex functions which are not necessarily differentiable. The set of subderivatives at a point is called the subdifferential at that point.[1]Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.
Letf:I→R{\displaystyle f:I\to \mathbb {R} }be areal-valued convex function defined on anopen intervalof the real line. Such a function need not be differentiable at all points: For example, theabsolute valuefunctionf(x)=|x|{\displaystyle f(x)=|x|}is non-differentiable whenx=0{\displaystyle x=0}. However, as seen in the graph on the right (wheref(x){\displaystyle f(x)}in blue has non-differentiable kinks similar to the absolute value function), for anyx0{\displaystyle x_{0}}in the domain of the function one can draw a line which goes through the point(x0,f(x0)){\displaystyle (x_{0},f(x_{0}))}and which is everywhere either touching or below the graph off. Theslopeof such a line is called asubderivative.
Rigorously, asubderivativeof a convex functionf:I→R{\displaystyle f:I\to \mathbb {R} }at a pointx0{\displaystyle x_{0}}in the open intervalI{\displaystyle I}is a real numberc{\displaystyle c}such thatf(x)−f(x0)≥c(x−x0){\displaystyle f(x)-f(x_{0})\geq c(x-x_{0})}for allx∈I{\displaystyle x\in I}. By the converse of themean value theorem, thesetof subderivatives atx0{\displaystyle x_{0}}for a convex function is anonemptyclosed interval[a,b]{\displaystyle [a,b]}, wherea{\displaystyle a}andb{\displaystyle b}are theone-sided limitsa=limx→x0−f(x)−f(x0)x−x0,{\displaystyle a=\lim _{x\to x_{0}^{-}}{\frac {f(x)-f(x_{0})}{x-x_{0}}},}b=limx→x0+f(x)−f(x0)x−x0.{\displaystyle b=\lim _{x\to x_{0}^{+}}{\frac {f(x)-f(x_{0})}{x-x_{0}}}.}Theinterval[a,b]{\displaystyle [a,b]}of all subderivatives is called thesubdifferentialof the functionf{\displaystyle f}atx0{\displaystyle x_{0}}, denoted by∂f(x0){\displaystyle \partial f(x_{0})}. Iff{\displaystyle f}is convex, then its subdifferential at any point is non-empty. Moreover, if its subdifferential atx0{\displaystyle x_{0}}contains exactly one subderivative, thenf{\displaystyle f}is differentiable atx0{\displaystyle x_{0}}and∂f(x0)={f′(x0)}{\displaystyle \partial f(x_{0})=\{f'(x_{0})\}}.[2]
Consider the functionf(x)=|x|{\displaystyle f(x)=|x|}which is convex. Then, the subdifferential at the origin is theinterval[−1,1]{\displaystyle [-1,1]}. The subdifferential at any pointx0<0{\displaystyle x_{0}<0}is thesingleton set{−1}{\displaystyle \{-1\}}, while the subdifferential at any pointx0>0{\displaystyle x_{0}>0}is the singleton set{1}{\displaystyle \{1\}}. This is similar to thesign function, but is not single-valued at0{\displaystyle 0}, instead including all possible subderivatives.
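The one-sided limits in the characterization above can be approximated numerically. The following is a minimal sketch, where the step size and the sample functions are assumptions for illustration, that estimates the interval [a, b] of subderivatives at a point.

# Minimal numerical sketch: approximate the subdifferential [a, b] of a
# convex function f at x0 via one-sided difference quotients.
def approx_subdifferential(f, x0, h=1e-6):
    a = (f(x0) - f(x0 - h)) / h      # left-hand limit of the slope
    b = (f(x0 + h) - f(x0)) / h      # right-hand limit of the slope
    return a, b

print(approx_subdifferential(abs, 0.0))              # roughly (-1.0, 1.0)
print(approx_subdifferential(abs, 2.0))              # roughly (1.0, 1.0): differentiable here
print(approx_subdifferential(lambda x: x * x, 1.0))  # roughly (2.0, 2.0)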
The concepts of subderivative and subdifferential can be generalized to functions of several variables. Iff:U→R{\displaystyle f:U\to \mathbb {R} }is a real-valued convex function defined on aconvexopen setin theEuclidean spaceRn{\displaystyle \mathbb {R} ^{n}}, a vectorv{\displaystyle v}in that space is called asubgradientatx0∈U{\displaystyle x_{0}\in U}if for anyx∈U{\displaystyle x\in U}one has that
f(x)−f(x0)≥v⋅(x−x0),{\displaystyle f(x)-f(x_{0})\geq v\cdot (x-x_{0}),}
where the dot denotes the dot product.
The set of all subgradients atx0{\displaystyle x_{0}}is called thesubdifferentialatx0{\displaystyle x_{0}}and is denoted∂f(x0){\displaystyle \partial f(x_{0})}. The subdifferential is always a nonempty convexcompact set.
These concepts generalize further to convex functions f:U→R{\displaystyle f:U\to \mathbb {R} } on a convex set in a locally convex space V{\displaystyle V}. A functional v∗{\displaystyle v^{*}} in the dual space V∗{\displaystyle V^{*}} is called a subgradient at x0{\displaystyle x_{0}} in U{\displaystyle U} if for all x∈U{\displaystyle x\in U},
f(x)−f(x0)≥v∗(x−x0).{\displaystyle f(x)-f(x_{0})\geq v^{*}(x-x_{0}).}
The set of all subgradients atx0{\displaystyle x_{0}}is called the subdifferential atx0{\displaystyle x_{0}}and is again denoted∂f(x0){\displaystyle \partial f(x_{0})}. The subdifferential is always a convexclosed set. It can be an empty set; consider for example anunbounded operator, which is convex, but has no subgradient. Iff{\displaystyle f}is continuous, the subdifferential is nonempty.
The subdifferential on convex functions was introduced byJean Jacques MoreauandR. Tyrrell Rockafellarin the early 1960s. Thegeneralized subdifferentialfor nonconvex functions was introduced byFrancis H. Clarkeand R. Tyrrell Rockafellar in the early 1980s.[4]
|
https://en.wikipedia.org/wiki/Subderivative
|
In computer programming, explicit parallelism is the representation of concurrent computations using primitives in the form of operators, function calls or special-purpose directives.[1]Most parallel primitives are related to process synchronization, communication and process partitioning.[2]As they seldom contribute to actually carrying out the intended computation of the program, but rather structure it, their computational cost is often counted as overhead.
The advantage of explicit parallel programming is increased programmer control over the computation. A skilled parallel programmer may take advantage of explicit parallelism to produce efficient code for a given target computation environment. However, programming with explicit parallelism is often difficult, especially for non-computing specialists, because of the extra work and skill involved in developing it.
In some instances, explicit parallelism may be avoided with the use of an optimizing compiler or runtime that automatically deduces the parallelism inherent to computations, known asimplicit parallelism.
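To illustrate what explicit parallel primitives look like in practice, here is a minimal sketch using Python's standard multiprocessing module; the task (summing chunks of a list) and the worker count are assumptions chosen only for the example. The programmer explicitly partitions the data, creates the workers, and synchronizes on their results.

# Minimal sketch of explicit parallelism: the partitioning, the worker
# pool, and the final synchronization are all spelled out by the programmer.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)          # the "useful" computation done by each worker

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4                                   # explicit process partitioning
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    with Pool(n_workers) as pool:                   # explicit worker creation
        results = pool.map(partial_sum, chunks)     # communication and synchronization
    print(sum(results))                             # combine the partial results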
Some of the programming languages that support explicit parallelism are:
|
https://en.wikipedia.org/wiki/Explicit_parallelism
|
Social software, also known as social apps or social platforms, includes communication and interactive tools that are often based on the Internet. Communication tools typically handle capturing, storing and presenting communication, usually written but increasingly including audio and video as well. Interactive tools handle mediated interactions between a pair or group of users. They focus on establishing and maintaining a connection among users, facilitating the mechanics of conversation and talk.[1]Social software generally refers to software that makes collaborative behaviour, the organisation and moulding of communities, self-expression, social interaction and feedback possible for individuals. Another element of the existing definition of social software is that it allows for the structured mediation of opinion between people, in a centralized or self-regulating manner. Web 2.0 applications in particular promote co-operation between people and the creation of online communities more than ever before. The opportunities offered by social software are instant connections and opportunities to learn.[2]An additional defining feature of social software is that, apart from interaction and collaboration, it aggregates the collective behaviour of its users, allowing not only crowds to learn from an individual but individuals to learn from the crowds as well.[3]Hence, the interactions enabled by social software can be one-to-one, one-to-many, or many-to-many.[2]
Aninstant messagingapplication orclientallows one to communicate with another person over a network in real time, in relative privacy. One can add friends to a contact or buddy list by entering the person's email address or messenger ID. If the person is online, their name will typically be listed as available for chat. Clicking on their name will activate a chat window with space to write to the other person, as well as read their reply.
Internet Relay Chat(IRC) and otheronline chattechnologies allow users to join and communicate with many people at once, publicly. Users may join a pre-existing chat room or create a new one about any topic. Once inside, you may type messages that everyone else in the room can read, as well as respond to/from others. Often there is a steady stream of people entering and leaving. Whether you are in another person's chat room or one you've created yourself, you are generally free to invite others online to join you in that room.
The goal of collaborative software, also known as groupware, such as Moodle, Landing pages, Enterprise Architecture, and SharePoint, is to allow subjects to share data – such as files, photos, text, etc. – for the purpose of project work or schoolwork. The intent is to first form a group and then have them collaborate. Clay Shirky defines social software as "software that supports group interaction". Since groupware supports group interaction (once the group is formed), it would be considered social software.
Originally modeled after the electronic bulletin board systems that existed before the Internet was widely available, internet forums allow users to post a "topic" for others to review. Other users can view the topic and post their own comments in a linear fashion, one after the other. Most forums are public, allowing anybody to sign up at any time. A few are private, gated communities where new members must pay a small fee to join.
Forums can contain many different categories in ahierarchy, typically organized according to topics and subtopics. Other features include the ability to post images or files or to quote another user's post with special formatting in one's own post. Forums often grow in popularity until they can boast several thousand members posting replies to tens of thousands of topics continuously.
There are various standards and claimants for the market leaders of each software category. Various add-ons may be available, including translation and spelling correction software, depending on the expertise of the operators of the bulletin board. In some industry areas, the bulletin board has its own commercially successful achievements: free and paid hardcopy magazines as well as professional and amateur sites.
Current successful services have combined new tools with the oldernewsgroupandmailing listparadigm to produce hybrids. Also, as a service catches on, it tends to adopt characteristics and tools of other services that compete. Over time, for example,wiki user pageshave become social portals for individual users and may be used in place of other portal applications.
In the past, web pages were only created and edited by web designers that had the technological skills to do so. Currently there are many tools that can assist individuals with web content editing. Wikis allow novices to be on the same level as experienced web designers because wikis provide easy rules and guidelines. Wikis allow all individuals to work collaboratively on web content without having knowledge of any markup languages. A wiki is made up of many content pages that are created by its users. Wiki users are able to create, edit, and link related content pages together. The user community is based on the individuals that want to participate to improve the overall wiki. Participating users are in a democratic community where any user can edit any other user's work.[4]
Blogs, short for web logs, are like online journals for a particular person. The owner will post a message periodically, allowing others to comment. Topics often include the owner's daily life, views on politics, or about a particular subject important to them.
Blogs mean many things to different people, ranging from "online journal" to "easily updated personal website." While these definitions are technically correct, they fail to capture the power of blogs as social software. Beyond being a simple homepage or an online diary, some blogs allow comments on the entries, thereby creating a discussion forum. They also have blogrolls (i.e., links to other blogs which the owner reads or admires) and indicate their social relationship to those other bloggers using theXFNsocial relationship standard.Pingbackandtrackbackallow one blog to notify another blog, creating an inter-blog conversation. Blogs engage readers and can build a virtual community around a particular person or interest. Blogging has also become fashionable in business settings by companies who useenterprise social software.
Simultaneous editing of a text or media file by different participants on a network was first demonstrated on research systems as early as the 1970s, but is now practical on a global network. Collaborative real-time editing is now utilized, for example, in film editing and in cloud-based office applications.
Many prediction market tools have become available (including some free software) that make it easy to predict and bet on future events. Although this software allows a more formal version of social interaction, it still qualifies as a robust type of social software.
Social network services allow people to come together online around shared interests, hobbies or causes. For example, some sites provide meeting organization facilities for people who practice the same sports. Other services enable business networking and social event meetup.
Some largewikishave effectively become social network services by encouraging user pages and portals.
Social network search engines are a class of search engines that use social networks to organize, prioritize or filter search results. There are two subclasses of social network search engines: those that useexplicitsocial networks and those that useimplicitsocial networks.
Lacking trustworthy explicit information about users' viewpoints, search engines of the implicit type mine the web to infer the topology of online social networks. For example, the NewsTrove search engine infers social networks from content – sites, blogs, pods and feeds – by examining, among other things, subject matter, link relationships and grammatical features.
Deliberative social networks are webs of discussion and debate for decision-making purposes. They are built for the purpose of establishing sustained relationships between individuals and their government. They rely upon informed opinion and advice that is given with a clear expectation of outcomes.
Commercial social networks are designed to support business transaction and to build a trust between an individual and a brand, which relies on opinion of product, ideas to make the product better, enabling customers to participate with the brands in promoting development, service delivery and a better customer experience.[citation needed]
A social guide recommends places to visit or contains information about places in the real world, such as coffee shops, restaurants and wifi hotspots.
Some web sites allow users to post their list ofbookmarksor favorite websites for others to search and view them. These sites can also be used to meet others through sharing common interests. Additionally, many social bookmarking sites allow users to browse through websites and content shared by other users based on popularity or category. As such, use of social bookmarking sites is an effective tool forsearch engine optimizationandsocial media optimizationforwebmasters.[5]
Enterprise bookmarkingis a method of tagging and linking any information using an expanded set of tags to capture knowledge about data. It collects and indexes these tags in a web-infrastructure server residing behind the firewall. Users can share knowledge tags with specified people or groups, shared only inside specific networks, typically within an organization.
Social viewingallows multiple users to aggregate from multiple sources and view online videos together in a synchronized viewing experience.
Social cataloging software, much like social bookmarking, is aimed towards academics. It allows the user to post a citation for an article found on the internet or a website, an online database like Academic Search Premier or LexisNexis Academic University, a book found in a library catalog, and so on. These citations can be organized into predefined categories, or a new category defined by the user, through the use of tags. This method allows academics researching or interested in similar areas to connect and share resources.
This application allows visitors to keep track of their collectibles, books, records and DVDs. Users can share their collections. Recommendations can be generated based on user ratings, using statistical computation andnetwork theory. Some sites offer a buddy system, as well as virtual "check outs" of items for borrowing among friends.Folksonomyortaggingis implemented on most of these sites.
Social online storage applications allow their users to collaboratively create file archives containing files of any type. Files can either be edited online or from a local computer, which has access to the storage system. Such systems can be built upon existing server infrastructure or leverage idle resources by applyingP2Ptechnology. Such systems are social because they allow public file distribution and directfile sharingwith friends.
Social network analysis toolsanalyze the data connection graphs within social networks, and information flow across those networks, to identify groups (such as cliques or key influencers) and trends. They fall into two categories: professional research tools, such asMathematica, used by social scientists and statisticians, and consumer tools, such asWolfram Alpha,[6][7]which emphasize ease-of-use.
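As a small illustration of the kind of computation such tools perform, the following sketch uses the third-party networkx library (an assumption for the example; it is not one of the tools named above) to find maximal cliques and rank users by degree centrality in a made-up friendship graph.

# Toy social-network-analysis sketch using networkx (pip install networkx).
# The friendship graph below is made up purely for illustration.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),   # a clique of three
    ("carol", "dave"), ("dave", "erin"),
])

# Key influencers: users connected to the largest share of others.
centrality = nx.degree_centrality(G)
print(sorted(centrality, key=centrality.get, reverse=True)[:2])

# Tightly knit groups: maximal cliques in the friendship graph.
print(list(nx.find_cliques(G)))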
Virtual Worlds are services where it is possible to meet and interact with other people in a virtual environment reminiscent of the real world. Thus, the termvirtual reality. Typically, the user manipulates anavatarthrough the world, interacting with others usingchatorvoice chat.
MMOGs are virtual worlds (also known as virtual environments) that add various sorts of point systems, levels, competition and winners and losers to virtual world simulation. Massively multiplayer online role-playing games (MMORPGs) are a combination of role-playing video games and massively multiplayer online games.
Another development is worlds that are less game-like or not games at all. Games have points, winners and losers. Instead, some virtual worlds are more like social networking services like MySpace and Facebook, but with 3D simulation features.
Very often a real economy emerges in these worlds, extending the non-physicalservice economywithin the world to service providers in the real world. Experts can design dresses or hairstyles for characters, go on routine missions for them and so on, and be paid in game money to do so. This emergence has resulted in expanding social possibility and also in increased incentives to cheat. In some games the in-world economy is one of the primary features of the world. Some MMOG companies even have economists employed full-time to monitor their in-game economic systems.
There are many other applications with social software characteristics that facilitate human connection and collaboration in specific contexts.Social Project Managementande-learningapplications are among these.
Various analyst firms have attempted to list and categorize the major social software vendors in the marketplace. Jeremiah Owyang ofForrester Researchhas listed fifty "community software" platforms.[8]Independent analyst firm Real Story Group has categorized 23 social software vendors,[9]which it evaluates head-to-head.[9]
Use of social software forpoliticshas also expanded drastically especially over 2004–2006 to include a wide range of social software, often closely integrated with services likephone treesanddeliberative democracyforums and run by a candidate, party orcaucus.
Open politics, a variant of open-source governance, combines aspects of the free software and open content movements, promoting decision-making methods claimed to be more open, less antagonistic, and more capable of determining what is in the public interest with respect to public policy issues. It is a set of best practices from citizen journalism, participatory democracy and deliberative democracy, informed by e-democracy and netroots experiments, applying an argumentation framework for issue-based argument and a political philosophy which advocates applying the philosophies of the open-source and open-content movements to democratic principles, so as to enable any interested citizen to add to the creation of policy, as with a wiki document. Legislation is democratically opened to the general citizenry, employing their collective wisdom to benefit the decision-making process and improve democracy.[10]Open politics encompasses open government principles, including those for public participation and engagement, such as the use of IdeaScale, Google Moderator, Semantic MediaWiki, GitHub, and other software.[11]
Collective forms ofonline journalismhave emerged more or less in parallel, in part to keep the political spin in check.
Communication tools are generallyasynchronous. By contrast, interactive tools are generallysynchronous, allowing users to communicate in real time (phone, net phone, video chat) or near-synchronous (IM, text chat).
Communication involves the content of talk, speech or writing, whereas interaction involves the interest users establish in one another as individuals. In other words, a communication tool may want to make access and searching of text both simple and powerful. An interactive tool may want to present as much of a user's expression, performance and presence as possible. The organization of texts and providing access to archived contributions differs from the facilitation of interpersonal interactions between contributors enough to warrant the distinction in media.[citation needed]
Emerging technological capabilities to more widely distribute hosting and support much higher bandwidth in real time are bypassing central content arbiters in some cases.[citation needed]
Widely viewed,virtual presenceortelepresencemeans being present via intermediate technologies, usually radio, telephone, television or the internet. In addition, it can denote apparent physical appearance, such as voice, face and body language.
More narrowly, the termvirtual presencedenotes presence onWorld Wide Weblocations, which are identified byURLs. People who are browsing a web site are considered to be virtually present at web locations. Virtual presence is a social software in the sense that people meet on the web by chance or intentionally. The ubiquitous (in the web space) communication transfers behavior patterns from the real world andvirtual worldsto the web. Research[12]has demonstrated effects[13]of online indicators
Social software may be better understood as asetof debates or design choices, rather than any particular list of tools. Broadly conceived, there are many older media such asmailing listsandUsenetfora that qualify as "social". However, most users of this term restrict its meaning to more recent software genres such asblogsandwikis. Others suggest that the termsocial softwareis best used not to refer to a single type of software, but rather to the use of two or more modes ofcomputer-mediated communicationthat result in "community formation."[14]In this view, people form online communities by combining one-to-one (e.g.emailandinstant messaging), one-to-many (Web pagesandblogs) and many-to-many (wikis) communication modes.[15]Some groups schedulereal lifemeetings and so become "real" communities of people that share physical lives.
Most definers of social software agree that they seem to facilitate "bottom-up" community development. The system is classless and promotes those with abilities. Membership is voluntary,reputationsare earned by winning thetrustof other members and the community's missions and governance are defined by the members themselves.[16]
Communities formed by "bottom-up" processes are often contrasted to the less vibrant collectivities formed by "top-down" software, in which users' roles are determined by an external authority and circumscribed by rigidly conceived software mechanisms (such asaccess rights). Given small differences in policies, the same type of software can produce radically different social outcomes. For instance,Tiki Wiki CMS Groupwarehas a fine-grained permission system of detailed access control so the site administrator can, on a page-by-page basis, determine which groups can view, edit or view the history. By contrast,MediaWikiavoids per-user controls, to keep most pages editable by most users and puts more information about users currently editing in its recent changes pages. The result is that Tiki can be used both by community groups who embrace the social paradigm of MediaWiki and by groups who prefer to have more content control.[citation needed]
By design, social software reflects the traits ofsocial networksand is consciously designed to letsocial network analysiswork with a very compatible database. All social software systems create links between users, as persistent as the identity those users choose. Through these persistent links, a permanent community can be formed out of a formerlyepistemic community. The ownership and control of these links - who is linked and who is not - is in the hands of the user. Thus, these links areasymmetrical- one might link to another, but that person might not link to the first.[17]Also, these links are functional, not decorative - one can choose not to receive any content from people you are not connected to, for example.Wikipedia user pagesare a very good example and often contain extremely detailed information about the person who constructed them, including everything from theirmother tongueto theirmoral purchasingpreferences.
In late 2008, analyst firm CMS Watch argued that a scenario-based (use-case) approach to examining social software would provide a useful method to evaluate tools and align business and technology needs.[18]
Methods and tools for the development of social software are sometimes summarized under the termSocial Software Engineering. However, this term is also used to describe lightweight and community-oriented development practices.[19]
Constructivist learning theorists such asVygotsky,LeidnerandJarvenpaahave theorized that the process of expressing knowledge aids its creation and that conversations benefit the refinement of knowledge. Conversationalknowledge managementsoftware fulfills this purpose because conversations, e.g. questions and answers, become the source of relevant knowledge in the organization.[20]Conversational technologies are also seen as tools to support both individual knowledge workers and work units.[21]
Many advocates of Social Software assume, and even actively argue, that users create actualcommunities. They have adopted the term "online communities" to describe the resulting social structures.
Christopher Allen supported this definition and traced the core ideas of the concept back through Computer Supported Cooperative or Collaborative Work (CSCW) in the 1990s, Groupware in the 1970s and 1980s, to Engelbart's "augmentation" (1960s) and Bush's "Memex" (1940s). Although he identifies a "lifecycle" to this terminology that appears to reemerge each decade in a different form, this does not necessarily mean that social software is simply old wine in new bottles.[22]
Theaugmentationcapabilities of social software were demonstrated in early internet applications for communication, such as e-mail, newsgroups, groupware, virtual communities etc. In the current phase of Allen's lifecycle, these collaborative tools add a capability "that aggregates the actions of networked users." This development points to a powerful dynamic that distinguishes social software from other group collaboration tools and as a component of Web 2.0 technology. Capabilities for content and behavior aggregation and redistribution present some of the more important potentials of this media.[citation needed]In the next phase, academic experiments, Social Constructivism and the open source software movement are expected to be notable influences.
Clay Shirkytraces the origin of the term "social software" toEric Drexler's1987 discussion of "hypertext publishing systems" like the subsequent World Wide Web, and how systems of this kind could support software for public critical discussion, collaborative development,group commitment, andcollaborative filteringof content based on voting and rating.[23][24]
Social technologies(orconversational technologies) is a term used by organizations (particularlynetwork-centric organizations). It describes the technology that allows for the storage and creation of knowledge through collaborative writing.
In 1945,Vannevar Bushdescribed ahypertext-like device called the "memex" in hisThe Atlantic MonthlyarticleAs We May Think.[25]
In 1962,Douglas Engelbartpublished his seminal work, "Augmenting Human Intellect: a conceptual framework." In this paper, he proposed using computers to augment training. With his colleagues at the Stanford Research Institute, Engelbart started to develop a computer system to augment human abilities, including learning. Debuting in 1968, the system was simply called the oNLine System (NLS).[26]
In the same year, J. C. R. Licklider presented the initial concept of a global information network in his series of memos entitled "On-Line Man Computer Communication", written in August 1962. However, the actual development of the internet must be credited to Lawrence G. Roberts of MIT,[27]along with Leonard Kleinrock, Robert Kahn and Vinton Cerf.
In 1971, the MITRE Corporation began a year-long demonstration of the TICCIT system among Reston, Virginia cable television subscribers. Interactive television services included informational and educational demonstrations using a touch-tone telephone. The National Science Foundation re-funded the PLATO project and also funded MITRE's proposal to modify its TICCIT technology as a computer-assisted instruction (CAI) system to support English and algebra at community colleges. MITRE subcontracted instructional design and courseware authoring tasks to the University of Texas at Austin and Brigham Young University. Also during this year, Ivan Illich described computer-based "learning webs" in his book Deschooling Society.[28]
In 1980,Seymour PapertatMITpublished "Mindstorms: children, computers, and powerful ideas" (New York: Basic Books). This book inspired a number of books and studies on "microworlds" and their impact on learning.BITNETwas founded by a consortium of US and Canadian universities. It allowed universities to connect with each other for educational communications and e-mail. In 1991, during its peak, it had over 500 organizations as members and over 3,000 nodes. Its use declined as theWorld Wide Webgrew.
In 1986,Tony Batespublished "The Role of Technology in Distance Education",[29]reflecting on ways forward for e-learning. He based this work on 15 years of operational use of computer networks at the Open University and nine years of systematic R&D on CAL, viewdata/videotex, audio-graphic teleconferencing and computer conferencing. Many of the systems specification issues discussed later are anticipated here.[30]
Though prototyped in 1983, the first version of Computer Supported Intentional Learning Environments (CSILE) was installed in 1986 on a small network of Cemcorp ICON computers, at an elementary school in Toronto, Canada. CSILE included text and graphical notes authored by different user levels (students, teachers, others) with attributes such as comments and thinking types which reflect the role of the note in the author's thinking. Thinking types included "my theory", "new information", and "I need to understand." CSILE later evolved intoKnowledge Forum.[31]
In 1989,Tim Berners-Lee, then a young British engineer working at CERN in Switzerland, circulated a proposal for an in-house online document sharing system which he described as a "web of notes with links." After the proposal was grudgingly approved by his superiors, he called the new system the World Wide Web.
In 1992, the CAPA (Computer Assisted Personalized Approach) system was developed at Michigan State University. It was first used in a 92-student physics class in the fall of 1992. Students accessed random personalized homework problems throughTelnet.
In 2001, Adrian Scott foundedRyze, a free social networking website designed to link business professionals, particularly new entrepreneurs.
In February 2002, the suvi.org Addressbook started its service, an early service for connecting people online: the idea was simply to keep an up-to-date address book and not lose contact with friends. Other people around the world had the same idea, and Friendster, Facebook and many other services followed.
In April 2002, Jonathan Abrams created his profile onFriendster.[32]
In 2003,Hi5,LinkedIn,[33]MySpace, andXINGwere launched.
In February 2004,Facebookwas launched.
In 2004, Levin (in Allen 2004, sec. 2000s) acknowledged that many of the characteristics of social software (hyperlinks, weblog conversation discovery and standards-based aggregation) "build on older forms." Nevertheless, "the difference in scale, standardization, simplicity and social incentives provided by web access turn a difference in degree to a difference in kind." Key technological factors underlying this difference in kind in computer, network and information technologies are: filtered hypertext, ubiquitous web/computing, continuous internet connectivity, cheap, efficient and small electronics, content syndication strategies (RSS) and others. Additionally, the convergence of several major information technology systems for voice, data and video into a single system makes for expansive computing environments with far-reaching effects.
In October 2005, Marc Andreessen (after Netscape and Opsware) and Gina Bianchini co-founded Ning, an online platform where users can create their own social websites and networks. Ning now runs more than 275,000 networks and is a "white label" social networking provider, often compared to Kickapps, Brightcove, rSitez and Flux.[34]StudiVZ was launched in November 2005.
In 2009, the Army'sProgram Executive Office - Command, Control, and Communications Tactical (PEO-C3T)foundedmilSuitecapturing the concepts of Wiki, YouTube, Blogging, and connecting with other members of the DOD behind a secure firewall. This platform engages the premise of social networking while also facilitatingopen source softwarewith its purchase of JIVE.
Social media has been criticized for having negative externalities, such as privacy harms, misinformation and hate speech, and harm to minors.[35]These externalities arise from the nature of the platform, including the ease of sharing content, due to the platforms' need to maximize engagement.[36]
Social media has been adopted in the workplace to foster collaboration, but there has also been criticism that privacy concerns, time wasting, and multi-tasking challenges make managers' jobs more difficult and may reduce employee concentration.[37]
As the supply of information increases, the average time spent evaluating any individual piece of content has to decrease. Eventually, much communication is summarily ignored, based on very arbitrary and rapid heuristics that filter out information, for example by category. Bad information crowds out the good, much the way spam often crowds out potentially useful unsolicited communications.
Cyber bullying is different from conventional bullying. Cyber bullying refers to the threat or abuse of a victim through the internet and electronic devices. Victims of cyber bullying can be targeted over social media, email, or text messages. These attacks are typically aggressive and repetitive in nature. Internet bullies can create multiple email and social media accounts to attack a victim, and free email accounts available to end users allow a bully to use various identities for communication with the victim. Rates of cyber bullying have grown sharply with the spread of technology among younger people.[38]
According to cyber bullying statistics published in 2014, 25 percent of teenagers report that they have experienced repeated bullying via their cell phone or on the internet. 52 percent of young people report being cyber bullied. Embarrassing or damaging photographs taken without the knowledge or consent of the subject has been reported by 11 percent of adolescents and teens. Of the young people who reported cyber bullying incidents against them, 33 percent of them reported that their bullies issued online threats. Often, both bullies and cyber bullies turn to hate speech to victimize their target. One-tenth of all middle school and high school students have been on the receiving end of "hate terms" hurled against them. 55 percent of all teens who use social media have witnessed outright bullying via that medium. 95 percent of teens who witnessed bullying on social media report that others, like them, have ignored the behavior.[39]
|
https://en.wikipedia.org/wiki/Social_software
|
Thedashis apunctuationmark consisting of a long horizontal line. It is similar in appearance to thehyphenbut is longer and sometimes higher from thebaseline. The most common versions are theendash–, generally longer than the hyphen but shorter than theminus sign; theemdash—, longer than either the en dash or the minus sign; and thehorizontalbar―, whose length varies acrosstypefacesbut tends to be between those of theenandemdashes.[a]
Typical uses of dashes are to mark a break in a sentence, to set off an explanatory remark (similar to parenthesis), or to show spans of time or ranges of values.
The em dash is sometimes used as a leading character to identify the source of a quoted text.
In the early 17th century, inOkes-printedplaysofWilliam Shakespeare, dashes are attested that indicate a thinking pause, interruption, mid-speech realization, or change of subject.[1]The dashes are variously longer⸺(as inKing Learreprinted 1619) or composed of hyphens---(as inOthelloprinted 1622); moreover, the dashes are often, but not always, prefixed by a comma, colon, or semicolon.[2][3][1][4]
In 1733, inJonathan Swift'sOn Poetry, the termsbreakanddashare attested for⸺and—marks:[5]
Blot out, correct, insert, refine,
Enlarge, diminish, interline;
Be mindful, when Invention fails;
To scratch your Head, and bite your Nails.
Your poem finish'd, next your Care
Is needful, to transcribe it fair.
In modern Wit all printed Trash, is
Set off with num'rous Breaks⸺and Dashes—
Usage varies both within English and within other languages, but the usual conventions for the most common dashes in printed English text are these:
Glitter, felt, yarn, and buttons—his kitchen looked as if a clown had exploded.
A flock of sparrows—some of them juveniles—alighted and sang.
Glitter, felt, yarn, and buttons – his kitchen looked as if a clown had exploded.
A flock of sparrows – some of them juveniles – alighted and sang.
The French and Indian War (1754–1763) was fought in western Pennsylvania and along the present US–Canada border
Seven social sins: politics without principles, wealth without work, pleasure without conscience, knowledge without character, commerce without morality, science without humanity, and worship without sacrifice.
Thefigure dash‒(U+2012‒FIGURE DASH) has the same width as a numerical digit. (Manycomputer fontshave digits of equal width.[9]) It is used within numbers, such as the phone number 555‒0199, especially in columns so as to maintain alignment. In contrast, the en dash–(U+2013–EN DASH) is generally used for a range of values.[10]
Theminus sign−(U+2212−MINUS SIGN)glyphis generally set a little higher, so as to be level with the horizontal bar of theplus sign. In informal usage, thehyphen-minus-(U+002D-HYPHEN-MINUS), provided as standard on most keyboards, is often used instead of the figure dash.
InTeX, the standard fonts have no figure dash; however, the digits normally all have the same width as the en dash, so an en dash can be a substitution for the figure dash. InXeLaTeX, one can use\char"2012.[11]TheLinux Libertinefont also has the figure dash glyph.
Theen dash,en rule, ornut dash[12]–is traditionally half the width of anem dash.[13][14]In modern fonts, the length of the en dash is not standardized, and the en dash is often more than half the width of the em dash.[15]The widths of en and em dashes have also been specified as being equal to those of the uppercase letters N and M, respectively,[16][17]and at other times to the widths of the lower-case letters.[15][18]
The three main uses of the en dash are:
The en dash is commonly used to indicate a closed range of values – a range with clearly defined and finite upper and lower boundaries – roughly signifying what might otherwise be communicated by the word "through" in American English, or "to" in International English.[19]This may include ranges such as those between dates, times, or numbers.[20][21][22][23]Variousstyle guidesrestrict this range indication style to only parenthetical or tabular matter, requiring "to" or "through" in running text. Preference for hyphen vs. en dash in ranges varies. For example, theAPA style(named after the American Psychological Association) uses an en dash in ranges, but theAMA style(named after the American Medical Association) uses a hyphen:
Some style guides (including theGuide for the Use of the International System of Units (SI)and theAMA Manual of Style) recommend that, when a number range might be misconstrued as subtraction, the word "to" should be used instead of an en dash. For example, "a voltage of 50 V to 100 V" is preferable to using "a voltage of 50–100 V". Relatedly, in ranges that include negative numbers, "to" is used to avoid ambiguity or awkwardness (for example, "temperatures ranged from −18°C to −34°C"). It is also considered poor style (best avoided) to use the en dash in place of the words "to" or "and" in phrases that follow the formsfrom X to Yandbetween X and Y.[21][22]
The en dash is used to contrast values or illustrate a relationship between two things.[20][23]Examples of this usage include:
A distinction is often made between "simple" attributive compounds (written with a hyphen) and other subtypes (written with an en dash); at least one authority considers name pairs, where the paired elements carry equal weight, as in theTaft–Hartley Actto be "simple",[21]while others consider an en dash appropriate in instances such as these[24][25][26]to represent the parallel relationship, as in theMcCain–Feingold billorBose–Einstein statistics. When an act of the U.S. Congress is named using the surnames of the senator and representative who sponsored it, the hyphen-minus is used in theshort title; thus, the short title ofPublic Law 111–203is "The Dodd-Frank Wall Street Reform and Consumer Protection Act", with ahyphen-minusrather than an en dash between "Dodd" and "Frank".[27]However, there is a difference between something named for a parallel/coordinate relationship between two people – for example,Satyendra Nath BoseandAlbert Einstein– and something named for a single person who had acompound surname, which may be written with a hyphen or a space but not an en dash – for example, theLennard-Jones potential[hyphen] is named after one person (John Lennard-Jones), as areBence Jones proteinsandHughlings Jackson syndrome. Copyeditors use dictionaries (general, medical, biographical, and geographical) to confirm theeponymity(and thus the styling) for specific terms, given that no one can know them all offhand.
Preference for an en dash instead of a hyphen in these coordinate/relationship/connection types of terms is a matter of style, not inherent orthographic "correctness"; both are equally "correct", and each is the preferred style in some style guides. For example,the American Heritage Dictionary of the English Language, theAMA Manual of Style, andDorland's medical reference worksuse hyphens, not en dashes, in coordinate terms (such as "blood-brain barrier"), ineponyms(such as "Cheyne-Stokes respiration", "Kaplan-Meier method"), and so on. In other styles, AP Style or Chicago Style, the en dash is used to describe two closely related entities in a formal manner.
In English, the en dash is usually used instead of ahyphenincompound (phrasal) attributivesin which one or both elements is itself a compound, especially when the compound element is anopen compound, meaning it is not itself hyphenated. This manner of usage may include such examples as:[21][22][28][29]
The disambiguating value of the en dash in these patterns was illustrated by Strunk and White inThe Elements of Stylewith the following example: WhenChattanooga NewsandChattanooga Free Pressmerged, the joint company was inaptly namedChattanooga News-Free Press(using a hyphen), which could be interpreted as meaning that their newspapers were news-free.[30]
An exception to the use of en dashes is usually made whenprefixingan already-hyphenated compound; an en dash is generally avoided as a distraction in this case. Examples of this include:[30]
An en dash can be retained to avoid ambiguity, but whether any ambiguity is plausible is a judgment call.AMA styleretains the en dashes in the following examples:[31]
As discussed above, the en dash is sometimes recommended instead of a hyphen incompound adjectiveswhere neither part of the adjective modifies the other—that is, when each modifies the noun, as inlove–hate relationship.
The Chicago Manual of Style(CMOS), however, limits the use of the en dash to two main purposes:
That is, theCMOSfavors hyphens in instances where some other guides suggest en dashes, with the 16th edition explaining that "Chicago's sense of the en dash does not extend tobetween", to rule out its use in "US–Canadian relations".[33]
In these two uses, en dashes normally do not have spaces around them. Some make an exception when they believe avoiding spaces may cause confusion or look odd. For example, compare"12 June – 3 July"with"12 June–3 July".[34]However, other authorities disagree and state there should be no space between an en dash and adjacent text. These authorities would not use a space in, for example,"11:00 a.m.–1:00 p.m."[35]or"July 9–August 17".[36][37]
En dashes can be used instead of pairs of commas that mark off a nested clause or phrase. They can also be used around parenthetical expressions – such as this one – rather than the em dashes preferred by some publishers.[38][8]
The en dash can also signify a rhetorical pause. For example, anopinion piecefromThe Guardianis entitled:
Who is to blame for the sweltering weather? My kids say it's boomers – and me[39]
In these situations, en dashes must have a single space on each side.[8]
In most uses of en dashes, such as when used in indicating ranges, they are typeset closed up to the adjacent words or numbers. Examples include "the 1914–18war" or "the Dover–Calais crossing". It is only when en dashes are used in setting off parenthetical expressions – such as this one – that they take spaces around them.[40]For more on the choice of em versus en in this context, seeEn dash versus em dash.
When an en dash is unavailable in a particularcharacter encodingenvironment—as in theASCIIcharacter set—there are some conventional substitutions. Often two consecutive hyphens are the substitute.
The en dash is encoded in Unicode as U+2013 (decimal 8211) and represented in HTML by thenamed character entity–.
The en dash is sometimes used as a substitute for theminus sign, when the minus sign character is not available since the en dash is usually the same width as a plus sign and is often available when the minus sign is not; seebelow. For example, the original 8-bitMacintosh Character Sethad an en dash, useful for the minus sign, years before Unicode with a dedicated minus sign was available. The hyphen-minus is usually too narrow to make a typographically acceptable minus sign. However, the en dash cannot be used for a minus sign inprogramming languagesbecause the syntax usually requires a hyphen-minus.
Either the en dash or the em dash may be used as abulletat the start of each item in a bulleted list.
Theem dash,em rule, ormutton dash[12]—is longer than anen dash. The character is called anem dashbecause it is oneemwide, a length that varies depending on the font size. One em is the same length as the font's height (which is typically measured inpoints). So in 9-point type, an em dash is nine points wide, while in 24-point type the em dash is 24 points wide. By comparison, the en dash, with its1enwidth, is in mostfontseither a half-em wide[41]or the width of an upper-case "N".[42]
The em dash is encoded in Unicode as U+2014 (decimal 8212) and represented in HTML by the named character entity—.
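These code points can be inspected programmatically; the following minimal Python sketch (an illustration, not part of any standard) uses the standard unicodedata module to confirm the names of the en dash and em dash.

# Inspect the dash code points from the standard library.
import unicodedata

for ch in ["\u2013", "\u2014"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}  '{ch}'")

# Prints:
# U+2013  EN DASH  '–'
# U+2014  EM DASH  '—'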
The em dash is used in several ways. It is primarily used in places where a set ofparenthesesor acolonmight otherwise be used,[43][full citation needed]and it can also show an abrupt change in thought (or an interruption in speech) or be used where afull stop(period) is too strong and acommais too weak (similar to that of a semicolon). Em dashes are also used to set off summaries or definitions.[44]Common uses and definitions are cited below with examples.
It may indicate an interpolation stronger than that demarcated by parentheses, as in the following fromNicholson Baker'sThe Mezzanine(the degree of difference is subjective).
In a related use, it may visually indicate the shift between speakers when they overlap in speech. For example, the em dash is used this way inJoseph Heller'sCatch-22:
Lord Cardinal! if thou think'st on heaven's bliss,
Hold up thy hand, make signal of that hope.—
He dies, and makes no sign!
This is aquotation dash. It may be distinct from an em dash in its coding (seehorizontal bar). It may be used to indicate turns in a dialogue, in which case each dash starts a paragraph.[46]It replaces other quotation marks and was preferred by authors such asJames Joyce:[47]
The Walrus and the Carpenter
Were walking close at hand;
They wept like anything to see
Such quantities of sand:
"If this were only cleared away,"
They said, "it would be grand!"
An em dash may be used to indicate omitted letters in a word redacted to an initial or single letter or tofilleta word, by leaving the start and end letters whilst replacing the middle letters with a dash or dashes (forcensorshipor simplydata anonymization). It may also censor the end letter. In this use, it is sometimes doubled.
Three em dashes might be used to indicate a completely missing word.[48]
Either the en dash or the em dash may be used as abulletat the start of each item in a bulleted list, but a plain hyphen is more commonly used.
Three em dashes one after another can be used in a footnote, endnote, or another form of bibliographic entry to indicate repetition of the same author's name as that of the previous work,[48]which is similar to the use ofid.
According to most American sources (such asThe Chicago Manual of Style) and some British sources (such asThe Oxford Guide to Style), an em dash should always be set closed, meaning it should not be surrounded by spaces. But the practice in some parts of the English-speaking world, including the style recommended byThe New York Times Manual of Style and Usagefor printed newspapers and theAP Stylebook, sets it open, separating it from its surrounding words by using spaces orhair spaces(U+200A) when it is being used parenthetically.[49][50]TheAP Stylebookrejects the use of the open em dash to set off introductory items in lists. However, the "space, en dash, space" sequence is the predominant style in German and Frenchtypography. (SeeEn dash versus em dashbelow.)
In Canada,The Canadian Style: A Guide to Writing and Editing,The Oxford Canadian A to Z of Grammar, Spelling & Punctuation: Guide to Canadian English Usage(2nd ed.),Editing Canadian English, and theCanadian Oxford Dictionaryall specify that an em dash should be set closed when used between words, a word and numeral, or two numerals.
The Australian government'sStyle Manual for Authors, Editors and Printers(6th ed.), also specifies that em dashes inserted between words, a word and numeral, or two numerals, should be set closed. A section on the 2-em rule (⸺) also explains that the 2-em can be used to mark an abrupt break in direct or reported speech, but a space is used before the 2-em if a complete word is missing, while no space is used if part of a word exists before the sudden break. Two examples of this are as follows:
When an em dash is unavailable in a particularcharacter encodingenvironment—as in theASCIIcharacter set—it has usually beenapproximatedas consecutive double (--) or triple (---) hyphen-minuses. The two-hyphen em dash proxy is perhaps more common, being a widespread convention in thetypewritingera. (It is still described for hard copy manuscript preparation inThe Chicago Manual of Styleas of the 16th edition, although the manual conveys that typewritten manuscript and copyediting on paper are now dated practices.) The three-hyphen em dash proxy was popular with various publishers because the sequence of one, two, or three hyphens could then correspond to the hyphen, en dash, and em dash, respectively.
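A script that reverses this typewriter-era convention when preparing text for publication might map hyphen runs back onto dashes. The following minimal sketch only illustrates the convention described above and is not modeled on any particular tool; note that the longer run must be handled first so that "---" is not read as "--" plus "-".

# Replace typewriter-style hyphen runs with proper dashes.
def untypewriter(text):
    return text.replace("---", "\u2014").replace("--", "\u2013")

print(untypewriter("pages 10--20"))          # pages 10–20
print(untypewriter("wait---no, go ahead"))   # wait—no, go ahead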
Because early comic booklettererswere not aware of the typographic convention of replacing a typewritten double hyphen with an em dash, the double hyphen became traditional in American comics. This practice has continued despite the development of computer lettering.[51][52]
The en dash is wider than the hyphen but not as wide as the em dash. An em width is defined as the point size of the font in use, since the capital M is not always exactly as wide as the point size.[53]In running text, various dash conventions are employed: an em dash—like so—or a spaced em dash — like so — or a spaced en dash – like so – can be seen in contemporary publications.
Various style guides and national varieties of languages prescribe different guidance on dashes. Dashes have been cited as being treated differently in the US and the UK, with the former preferring the use of an em dash with no additional spacing and the latter preferring a spaced en dash.[38]As examples of the US style,The Chicago Manual of StyleandThe Publication Manual of the American Psychological Associationrecommend unspaced em dashes. Style guides outside the US are more variable. For example,The Elements of Typographic Styleby Canadian typographerRobert Bringhurstrecommends the spaced en dash – like so – and argues that the length and visual magnitude of an em dash "belongs to the padded and corseted aesthetic of Victorian typography".[8]In the United Kingdom, the spaced en dash is the house style for certain major publishers, including thePenguin Group, theCambridge University Press, andRoutledge. However, this convention is not universal. TheOxford Guide to Style(2002, section 5.10.10) acknowledges that the spaced en dash is used by "other British publishers" but states that theOxford University Press, like "most US publishers", uses the unspaced em dash.Fowler's Modern English Usage, saying that it is summarising theNew Hart's Rules, describes the principal uses of the em dash as "a single dash used to introduce an explanation or expansion" and "a pair of dashes used to indicate asides and parentheses", without stipulating whether it should be spaced but giving only unspaced examples.[54]
The en dash – always with spaces in running text when, as discussed in this section, indicating a parenthesis or pause – and the spaced em dash both have a certain technical advantage over the unspaced em dash. Most typesetting and word processing expects word spacing to vary to supportfull justification. Alone among punctuation that marks pauses or logical relations in text, the unspaced em dash disables this for the words it falls between. This can cause uneven spacing in the text, but can be mitigated by the use ofthin spaces,hair spaces, or evenzero-width spaceson the sides of the em dash. This provides the appearance of an unspaced em dash, but allows the words and dashes to break between lines. The spaced em dash risks introducing excessive separation of words. In full justification, the adjacent spaces may be stretched, and the separation of words further exaggerated. En dashes may also be preferred to em dashes when text is set in narrow columns, such as in newspapers and similar publications, since the en dash is smaller. In such cases, its use is based purely on space considerations and is not necessarily related to other typographical concerns.
On the other hand, a spaced en dash may be ambiguous when it is also used for ranges, for example, in dates or between geographical locations with internal spaces.
Thehorizontal bar(U+2015―HORIZONTAL BAR), also known as aquotation dash, is used to introduce quoted text. This is the standard method of printingdialoguein some languages. The em dash is equally suitable if the quotation dash is unavailable or is contrary to the house style being used.
There is no support for the horizontal bar in the standard TeX fonts, but one can approximate it with \hbox{---}\kern-.5em--- or use an em dash instead.
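For use in a LaTeX document, the kerning workaround above can be wrapped in a macro; the following minimal sketch is illustrative only, and the macro name \qdash is not a standard command.

    \documentclass{article}
    % Approximate a quotation dash (horizontal bar) by overlapping two em dashes,
    % as described above; \qdash is an illustrative, non-standard name.
    \newcommand{\qdash}{\hbox{---}\kern-.5em---}
    \begin{document}
    \qdash~Hello, she said.
    \end{document}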
Theswung dash(U+2053⁓SWUNG DASH) resembles a lengthenedtildeand is used to separate alternatives or approximates. Indictionaries, it is frequently used to stand in for the term being defined. A dictionary entry providing an example for the term henceforth might employ the swung dash in place of the repeated headword.
In the following tables, the "Em and 5×" column uses a capital M as a standard comparison to demonstrate the vertical position of different Unicode dash characters. "5×" means that there are five copies of this type of dash.
This table lists characters with propertyDash=yesin Unicode.[55]
This table lists characters similar to dashes, but with propertyDash=noin Unicode.
In many languages, such asPolish, the em dash is used as an openingquotation mark. There is no matching closing quotation mark; typically a new paragraph will be started, introduced by a dash, for each turn in the dialogue.[citation needed]
Corpusstudies indicate that em dashes are more commonly used in Russian than in English.[59]In Russian, the em dash is used for the presentcopula(meaning 'am/is/are'), which is unpronounced in spoken Russian.
InFrenchandItalian, em or en dashes can be used asparentheses(brackets), but the use of a second dash as a closing parenthesis is optional. When a closing dash is not used, the sentence is ended with a period (full-stop) as usual. Dashes are, however, much less common than parentheses.[citation needed]
InSpanish, em dashes can be used to mark off parenthetical phrases. Unlike in English, the em dashes are spaced like brackets, i.e., there is a space between main sentence and dash, but not between parenthetical phrase and dash.[60]For example: "Llevaba la fidelidad a su maestro —un buen profesor— hasta extremos insospechados." (In English: 'He took his loyalty to his teacher – a good teacher – to unsuspected extremes.')[61]
|
https://en.wikipedia.org/wiki/En_dash
|
Incomputer security,heap sprayingis a technique used inexploitsto facilitatearbitrary code execution. The part of thesource codeof an exploit that implements this technique is called aheap spray.[1]In general, code thatsprays the heapattempts to put a certain sequence of bytes at a predetermined location in thememoryof a targetprocessby having it allocate (large) blocks on the process'sheapand fill the bytes in these blocks with the right values.
A heap spray does not actually exploit any security issues but it can be used to make a vulnerability easier to exploit. A heap spray by itself cannot be used to break any security boundaries: a separate security issue is needed.
Exploiting security issues is often hard because various factors can influence this process. Chance alignments of memory and timing introduce a lot of randomness (from the attacker's point of view). A heap spray can be used to introduce a large amount of order to compensate for this and increase the chances of successful exploitation. Heap sprays take advantage of the fact that on most architectures and operating systems, the start location of large heap allocations is predictable and consecutive allocations are roughly sequential. This means that the sprayed heap will roughly be in the same location each and every time the heap spray is run.
Exploits often use specific bytes to spray the heap, as the data stored on the heap serves multiple roles. During exploitation of a security issue, the application code can often be made to read an address from an arbitrary location in memory. This address is then used by the code as the address of a function to execute. If the exploit can force the application to read this address from the sprayed heap, it can control the flow of execution when the code uses that address as a function pointer and redirects it to the sprayed heap. If the exploit succeeds in redirecting control flow to the sprayed heap, the bytes there will be executed, allowing the exploit to perform whatever actions the attacker wants. Therefore, the bytes on the heap are restricted to represent valid addresses within the heap spray itself, holding valid instructions for the target architecture, so the application will not crash. It is therefore common to spray with a single byte that translates to both a valid address and aNOPor NOP-like instruction on the target architecture. This allows the heap spray to function as a very largeNOP sled(for example,0x0c0c0c0cis often used as non-canonical NOP onx86[2])
Heap sprays have been used occasionally in exploits since at least 2001,[3][4]but the technique started to see widespread use in exploits forweb browsersin the summer of 2005 after the release of several such exploits which used the technique against a wide range of bugs inInternet Explorer.[5][6][7][8][9]The heap sprays used in all these exploits were very similar, which showed the versatility of the technique and its ease of use, without need for major modifications between exploits. It proved simple enough to understand and use to allow novicehackersto quickly write reliable exploits for many types ofvulnerabilitiesin web browsers and web browserplug-ins. Many web browser exploits that use heap spraying consist only of a heap spray that iscopy-pastedfrom a previous exploit combined with a small piece of script orHTMLthat triggers the vulnerability.
Heap sprays for web browsers are commonly implemented inJavaScriptand spray the heap by creating largestrings. The most common technique used is to start with a string of one character andconcatenateit with itself over and over. This way, the length of the string cangrow exponentiallyup to the maximum length allowed by thescripting engine. Depending on how the browser implements strings, eitherASCIIorUnicodecharacters can be used in the string. The heap spraying code makes copies of the long string withshellcodeand stores these in an array, up to the point where enough memory has been sprayed to ensure the exploit works.
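The following is a conceptual sketch only, written in Python rather than browser JavaScript: it shows the exponential-concatenation pattern described above with a harmless filler byte standing in for the spray contents. It neither triggers any vulnerability nor contains shellcode, and the names and sizes are arbitrary.

    # Conceptual illustration of the doubling pattern described above; the filler
    # byte and block sizes are arbitrary, and nothing exploit-specific is present.
    FILLER = b"\x0c"                 # NOP-like byte mentioned in the text
    BLOCK_SIZE = 1024 * 1024         # grow each block to about 1 MiB

    def build_block() -> bytes:
        block = FILLER
        while len(block) < BLOCK_SIZE:
            block += block           # exponential growth by self-concatenation
        return block

    # Keeping many references alive makes the allocator fill a large, roughly
    # contiguous heap region with the repeated pattern.
    spray = [build_block() for _ in range(64)]
    print(len(spray), "blocks of", len(spray[0]), "bytes each")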
Occasionally,VBScriptis used in Internet Explorer to create strings by using theStringfunction.
In July 2009, exploits were found to be usingActionScriptto spray the heap inAdobe Flash.[10][11]
Though it has been proven that heap-spraying can be done through other means, for instance by loading image files into the process,[12]this has not seen widespread use (as of August 2008).[needs update]
In September 2012, a new technique was presented on EuSecWest 2012.[13]Two CORE researchers, Federico Muttis andAnibal Sacco, showed that the heap can be sprayed with a very high allocation granularity through the use of technologies introduced withHTML5. Specifically, they used the low-level bitmap interface offered by thecanvas API, andweb workersto do it more quickly.
|
https://en.wikipedia.org/wiki/Heap_spraying
|
Anauthentication protocolis a type of computercommunications protocolorcryptographic protocolspecifically designed for transfer ofauthenticationdata between two entities. It allows the receiving entity to authenticate the connecting entity (e.g. a client connecting to a server) as well as to authenticate itself to the connecting entity (the server to a client) by declaring the type of information needed for authentication as well as its syntax.[1]It is the most important layer of protection needed for secure communication within computer networks.
With the increasing amount of trustworthy information being accessible over the network, the need to keep unauthorized persons from accessing this data emerged. Stealing someone's identity is easy in the computing world, so special verification methods had to be invented to find out whether the person or computer requesting data really is who it claims to be.[2]The task of the authentication protocol is to specify the exact series of steps needed for execution of the authentication, while complying with the main principles of protocol design.
An illustration of password-based authentication using simple authentication protocol:
Alice (an entity wishing to be verified) and Bob (an entity verifying Alice's identity) are both aware of the protocol they agreed on using. Bob has Alice's password stored in a database for comparison.
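A minimal sketch of this exchange is shown below; the names and the toy credential store are hypothetical, and the code is purely illustrative rather than an implementation of any real protocol.

    # Illustrative sketch of the password exchange described above.
    STORED_PASSWORDS = {"alice": "s3cret"}     # Bob's credential database

    def verify(username: str, password: str) -> bool:
        # Bob compares the received password with the stored one.
        return STORED_PASSWORDS.get(username) == password

    # Alice sends her username and password "in the clear"; Bob replies with
    # success or failure.
    print(verify("alice", "s3cret"))   # True  -> authentication successful
    print(verify("alice", "guess"))    # False -> authentication failed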
This is an example of a very basic authentication protocol vulnerable to many threats such aseavesdropping,replay attack,man-in-the-middleattacks,dictionary attacksorbrute-force attacks. Most authentication protocols are more complicated in order to be resilient against these attacks.[4]
Protocols are used mainly byPoint-to-Point Protocol(PPP) servers to validate the identity of remote clients before granting them access to server data. Most of them use a password as the cornerstone of the authentication. In most cases, the password has to be shared between the communicating entities in advance.[5]
Password Authentication Protocolis one of the oldest authentication protocols. Authentication is initialized by the client sending a packet withcredentials(username and password) at the beginning of the connection, with the client repeating the authentication request until acknowledgement is received.[6]It is highly insecure because credentials are sent "in the clear" and repeatedly, making it vulnerable even to the most simple attacks likeeavesdroppingandman-in-the-middlebased attacks. Although widely supported, it is specified that if an implementation offers a stronger authentication method, that methodmustbe offered before PAP. Mixed authentication (e.g. the same client alternately using both PAP and CHAP) is also not expected, as the CHAP authentication would be compromised by PAP sending the password in plain-text.
The authentication process in the Challenge-Handshake Authentication Protocol (CHAP) is always initiated by the server/host and can be performed anytime during the session, even repeatedly. The server sends a random string (the challenge, usually 128 bytes long). The client uses the password and the received string as input to a hash function and then sends the result, together with the username, in plain text. The server uses the username to apply the same function and compares the calculated and received hashes. Authentication is successful when the calculated and received hashes match.
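A hedged sketch of this challenge-response flow follows; it uses SHA-256 as a stand-in hash and omits the packet format, so it illustrates the idea rather than CHAP as actually specified (which uses MD5), and the function names are invented for the example.

    # Illustrative challenge-response sketch; SHA-256 is a stand-in hash and the
    # function names are hypothetical, not part of the CHAP specification.
    import hashlib
    import secrets

    SHARED_SECRETS = {"alice": b"s3cret"}        # known to both client and server

    def server_challenge() -> bytes:
        return secrets.token_bytes(16)           # random string sent by the server

    def client_response(password: bytes, challenge: bytes) -> bytes:
        return hashlib.sha256(password + challenge).digest()

    def server_verify(username: str, challenge: bytes, response: bytes) -> bool:
        expected = hashlib.sha256(SHARED_SECRETS[username] + challenge).digest()
        return secrets.compare_digest(expected, response)

    challenge = server_challenge()
    response = client_response(b"s3cret", challenge)
    print(server_verify("alice", challenge, response))   # True when hashes match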
EAP was originally developed for PPP(Point-to-Point Protocol) but today is widely used inIEEE 802.3,IEEE 802.11(WiFi) orIEEE 802.16as a part of theIEEE 802.1xauthentication framework. The latest version is standardized in RFC 5247. The advantage of EAP is that it is only a general authentication framework for client-server authentication - the specific way of authentication is defined in its many versions, called EAP methods. More than 40 EAP methods exist, of which a handful are in common use.
Complex protocols used in larger networks for verifying the user (Authentication), controlling access to server data (Authorization) and monitoring network resources and information needed for billing of services (Accounting).
TACACS is the oldest AAA protocol, using IP-based authentication without any encryption (usernames and passwords were transported as plain text). The later version XTACACS (Extended TACACS) added authorization and accounting. Both of these protocols were later replaced by TACACS+. TACACS+ separates the AAA components so that they can be segregated and handled on separate servers (it can even use another protocol for, e.g., authorization). It usesTCP(Transmission Control Protocol) for transport and encrypts the whole packet. TACACS+ is Cisco proprietary.
Remote Authentication Dial-In User Service(RADIUS) is a fullAAA protocolcommonly used byISPs. Credentials are mostly based on username–password combinations; they are relayed by a network access server (NAS) and carried over theUDPtransport protocol.[7]
Diameter (protocol)evolved from RADIUS and involves many improvements such as usage of more reliable TCP orSCTPtransport protocol and higher security thanks toTLS.[8]
Kerberos is a centralized network authentication system developed atMITand available as a free implementation from MIT but also in many commercial products. It is the default authentication method inWindows 2000and later. The authentication process itself is much more complicated than in the previous protocols - Kerberos usessymmetric key cryptography, requires atrusted third partyand can usepublic-key cryptographyduring certain phases of authentication if need be.[9][10][11]
|
https://en.wikipedia.org/wiki/Authentication_protocol
|
Ablackboard systemis anartificial intelligenceapproach based on theblackboard architectural model,[1][2][3][4]where a commonknowledge base, the "blackboard", is iteratively updated by a diverse group of specialist knowledge sources, starting with a problem specification and ending with a solution. Each knowledge source updates the blackboard with a partial solution when its internal constraints match the blackboard state. In this way, the specialists work together to solve the problem. The blackboard model was originally designed as a way to handle complex, ill-defined problems, where the solution is the sum of its parts.
The following scenario provides a simple metaphor that gives some insight into how a blackboard functions:
A group of specialists are seated in a room with a largeblackboard. They work as a team to brainstorm a solution to a problem, using the blackboard as the workplace for cooperatively developing the solution.
The session begins when the problem specifications are written onto the blackboard. The specialists all watch the blackboard, looking for an opportunity to apply their expertise to the developing solution. When someone writes something on the blackboard that allows another specialist to apply their expertise, the second specialist records their contribution on the blackboard, hopefully enabling other specialists to then apply their expertise. This process of adding contributions to the blackboard continues until the problem has been solved.
A blackboard-system application consists of three major components: the knowledge sources (independent specialist modules with their own expertise), the blackboard itself (the shared repository of problem data and partial solutions), and a control component that decides which knowledge source to apply next. A minimal sketch of this structure is given below.
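The following Python sketch is purely illustrative (the class and key names are invented for the example): knowledge sources watch a shared blackboard and contribute whenever their precondition matches its current state, while a simple control loop chooses which one to run.

    # Minimal illustrative blackboard: knowledge sources contribute to a shared
    # dictionary when their preconditions match its current state.
    class KnowledgeSource:
        def can_contribute(self, bb: dict) -> bool:
            raise NotImplementedError
        def contribute(self, bb: dict) -> None:
            raise NotImplementedError

    class Doubler(KnowledgeSource):
        def can_contribute(self, bb): return "number" in bb and "doubled" not in bb
        def contribute(self, bb): bb["doubled"] = bb["number"] * 2

    class Reporter(KnowledgeSource):
        def can_contribute(self, bb): return "doubled" in bb and "report" not in bb
        def contribute(self, bb): bb["report"] = f"result is {bb['doubled']}"

    def control_loop(bb: dict, sources: list) -> None:
        # The control component repeatedly activates any applicable knowledge
        # source until no source can make further progress.
        progress = True
        while progress:
            progress = False
            for ks in sources:
                if ks.can_contribute(bb):
                    ks.contribute(bb)
                    progress = True

    blackboard = {"number": 21}                     # problem specification
    control_loop(blackboard, [Reporter(), Doubler()])
    print(blackboard["report"])                     # "result is 42"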
A blackboard system is the central space in amulti-agent system. It's used for describing the world as a communication platform for agents. To realize a blackboard in a computer program, amachine readablenotation is needed in whichfactscan be stored. One attempt in doing so is aSQL database, another option is theLearnable Task Modeling Language (LTML). The syntax of the LTML planning language is similar toPDDL, but adds extra features like control structures andOWL-Smodels.[5][6]LTML was developed in 2007[7]as part of a much larger project called POIROT (Plan Order Induction by Reasoning from One Trial),[8]which is aLearning from demonstrationsframework forprocess mining. In POIROT,Plan tracesandhypothesesare stored in the LTML syntax for creatingsemantic web services.[9]
Here is a small example: A human user is executing aworkflowin a computer game. The user presses some buttons and interacts with thegame engine. While the user interacts with the game, a plan trace is created. That means the user's actions are stored in alogfile. The logfile gets transformed into a machine readable notation which is enriched by semanticattributes. The result is atextfilein the LTML syntax which is put on the blackboard.Agents(software programs in the blackboard system) are able to parse the LTML syntax.
We start by discussing two well known early blackboard systems, BB1 and GBB, below and then discuss more recent implementations and applications.
The BB1 blackboard architecture[10]was originally inspired by studies of how humans plan to perform multiple tasks during a trip, which used task planning as a simplified example of tactical planning for theOffice of Naval Research.[11]Hayes-Roth & Hayes-Roth found that human planning was more closely modeled as an opportunistic process, in contrast to the primarily top-down planners used at the time:
While not incompatible with successive-refinement models, our view of planning is somewhat different. We share the assumption that planning processes operate in a two-dimensional planning space defined on time and abstraction dimensions. However, we assume that people's planning activity is largely opportunistic. That is, at each point in the process, the planner's current decisions and observations suggest various opportunities for plan development. The planner's subsequent decisions follow up on selected opportunities. Sometimes, these decision-sequences follow an orderly path and produce a neat top-down expansion as described above. However, some decisions and observations might also suggest less orderly opportunities for plan development.[12]
A key innovation of BB1 was that it applied this opportunistic planning model to its own control, using the same blackboard model of incremental, opportunistic, problem-solving that was applied to solve domain problems. Meta-level reasoning with control knowledge sources could then monitor whether planning and problem-solving were proceeding as expected or stalled. If stalled, BB1 could switch from one strategy to another as conditions – such as the goals being considered or the time remaining – changed. BB1 was applied in multiple domains: construction site planning,[13]inferring 3-D protein structures from X-ray crystallography,[14]intelligent tutoring systems,[15]and real-time patient monitoring.[16]
BB1 also allowed domain-general language frameworks to be designed for wide classes of problems. For example, the ACCORD[17]language framework defined a particular approach to solving configuration problems. The problem-solving approach was to incrementally assemble a solution by adding objects and constraints, one at a time. Actions in the ACCORD language framework appear as short English-like commands or sentences for specifying preferred actions, events to trigger KSes, preconditions to run a KS action, and obviation conditions to discard a KS action that is no longer relevant.
GBB[18]focused on efficiency, in contrast to BB1, which focused more on sophisticated reasoning and opportunistic planning. GBB improves efficiency by allowing blackboards to be multi-dimensional, where dimensions can be either ordered or not, and then by increasing the efficiency of pattern matching. GBB1,[19]one of GBB's control shells implements BB1's style of control while adding efficiency improvements.
Other well-known examples of early academic blackboard systems are the Hearsay IIspeech recognitionsystem andDouglas Hofstadter'sCopycatand Numbo projects.
Some more recent examples of deployed real-world applications include:
Blackboard systems are used routinely in many militaryC4ISTARsystems for detecting and tracking objects. Another example of current use is inGame AI, where they are considered a standard AI tool to help with adding AI to video games.[22][23]
Blackboard-like systems have been constructed within modernBayesianmachine learningsettings, using agents to add and removeBayesian networknodes. In these 'Bayesian Blackboard' systems, the heuristics can acquire more rigorous probabilistic meanings as proposals and acceptances inMetropolis–Hastings samplingthrough the space of possible structures.[24][25][26]Conversely, using these mappings, existing Metropolis–Hastings samplers over structural spaces may now be viewed as forms of blackboard systems even when not named as such by the authors. Such samplers are commonly found inmusical transcriptionalgorithms, for example.[27]
Blackboard systems have also been used to build large-scale intelligent systems for the annotation of media content, automating parts of traditional social science research. In this domain, the problem of integrating various AI algorithms into a single intelligent system arises spontaneously, with blackboards providing a way for a collection of distributed, modularnatural language processingalgorithms to each annotate the data in a central space, without needing to coordinate their behavior.[28]
|
https://en.wikipedia.org/wiki/Blackboard_system
|
Consensus decision-makingis agroup decision-makingprocess in which participants work together to develop proposals for actions that achieve a broad acceptance.Consensusis reached when everyone in the groupassentsto a decision (or almost everyone; seestand aside) even if some do not fully agree to or support all aspects of it. It differs from simpleunanimity, which requires all participants to support a decision. Consensus decision-making in a democracy isconsensus democracy.[1]
The wordconsensusis Latin meaning "agreement, accord", derived fromconsentiremeaning "feel together".[2]A noun,consensuscan represent a generally accepted opinion[3]– "general agreement or concord; harmony", "a majority of opinion"[4]– or the outcome of a consensus decision-making process. This article refers to the processandthe outcome (e.g. "to decidebyconsensus" and "aconsensus was reached").
Consensus decision-making, as a self-described practice, originates from severalnonviolent,direct actiongroups that were active in theCivil rights,PeaceandWomen'smovements in the USA duringcounterculture of the 1960s. The practice gained popularity in the 1970s through theanti-nuclearmovement, and peaked in popularity in the early 1980s.[5]Consensus spread abroad through theanti-globalizationandclimatemovements, and has become normalized inanti-authoritarianspheres in conjunction withaffinity groupsand ideas ofparticipatory democracyandprefigurative politics.[6]
TheMovement for a New Society(MNS) has been credited for popularizing consensus decision-making.[7][6]Unhappy with the inactivity of theReligious Society of Friends(Quakers) against theVietnam War,Lawrence ScottstartedA Quaker Action Group(AQAG) in 1966 to try and encourage activism within the Quakers. By 1971 AQAG members felt they needed not only to end the war, but transform civil society as a whole, and renamed AQAG to MNS. MNS members used consensus decision-making from the beginning as a non-religious adaptation of theQuaker decision-makingthey were used to. MNS trained the anti-nuclearClamshell Alliance(1976)[8][9]andAbalone Alliance(1977) to use consensus, and in 1977 publishedResource Manual for a Living Revolution,[10]which included a section on consensus.
An earlier account of consensus decision-making comes from theStudent Nonviolent Coordinating Committee[11](SNCC), the main student organization of thecivil rights movement, founded in 1960. Early SNCC memberMary King, later reflected: "we tried to make all decisions by consensus ... it meant discussing a matter and reformulating it until no objections remained".[12]This way of working was brought to the SNCC at its formation by theNashville student group, who had received nonviolence training fromJames LawsonandMyles Hortonat theHighlander Folk School.[11]However, as the SNCC faced growing internal and external pressure toward the mid-1960s, it developed into a more hierarchical structure, eventually abandoning consensus.[13]
Women Strike for Peace(WSP) is also recorded as having used consensus independently from its founding in 1961.Eleanor Garst(herself influenced by Quakers) introduced the practice as part of the loose and participatory structure of WSP.[14]
As consensus grew in popularity, it became less clear who influenced who.Food Not Bombs, which started in 1980 in connection with an occupation ofSeabrook Station Nuclear Power Plantorganized by theClamshell Alliance, adopted consensus for their organization.[15]Consensus was used in the1999 Seattle WTO protests, which inspired theS11 (World Economic Forum protest)in 2000 to do so too.[16]Consensus was used at the firstCamp for Climate Action(2006) and subsequent camps.Occupy Wall Street(2011) made use of consensus in combination with techniques such as thepeople's microphoneandhand signals.
Characteristics of consensus decision-making include:
Consensus decision-making is an alternative to commonly practicedgroup decision-makingprocesses.[19]Robert's Rules of Order, for instance, is a guide book used by many organizations. This book onParliamentary Procedureallows the structuring of debate and passage of proposals that can be approved through a form ofmajorityvote. It does not emphasize the goal of full agreement. Critics of such a process believe that it can involve adversarial debate and the formation of competing factions. These dynamics may harm group member relationships and undermine the ability of a group to cooperatively implement a contentious decision. Consensus decision-making attempts to address such problems. Proponents claim that the consensus process produces a number of beneficial outcomes.[17][20]
Consensus is not synonymous withunanimity– though that may be a rule agreed to in a specific decision-making process. The level of agreement necessary to finalize a decision is known as adecision rule.[17][21]
Diversity of opinion is normal in almost all situations, and will be represented proportionately in an appropriately functioning group.
Even with goodwill and social awareness, citizens are likely to disagree in their political opinions and judgments. Differences of interest as well as of perception and values will lead the citizens to divergent views about how to direct and use the organized political power of the community, in order to promote and protect common interests. If political representatives reflect this diversity, then there will be as much disagreement in the legislature as there is in the population.[22]
To ensure the agreement or consent of all participants is valued, many groups choose unanimity or near-unanimity as their decision rule. Groups that require unanimity allow individual participants the option of blocking a group decision. This provision motivates a group to make sure that all group members consent to any new proposal before it is adopted. When there is potential for a block to a group decision, both the group and dissenters in the group are encouraged to collaborate until agreement can be reached. Simplyvetoinga decision is not considered a responsible use of consensus blocking. Some common guidelines for the use of consensus blocking include:[17][23]
A participant who does not support a proposal may have alternatives to simply blocking it. Some common options may include the ability to:
The basic model for achieving consensus as defined by any decision rule involves:
All attempts at achieving consensus begin with a good faith attempt at generating full-agreement, regardless of decision rule threshold.
In thespokescouncilmodel,affinity groupsmake joint decisions by each designating a speaker and sitting behind that circle of spokespeople, akin to thespokesof a wheel. While speaking rights might be limited to each group's designee, the meeting may allot breakout time for the constituent groups to discuss an issue and return to the circle via their spokesperson. In the case of an activist spokescouncil preparing for theA16 Washington D.C. protests in 2000, affinity groups disputed their spokescouncil's imposition of nonviolence in their action guidelines. They received the reprieve of letting groups self-organize their protests, and as the city's protest was subsequently divided into pie slices, each blockaded by an affinity group's choice of protest. Many of the participants learned about the spokescouncil model on the fly by participating in it directly, and came to better understand their planned action by hearing others' concerns and voicing their own.[29]
InDesigning an All-Inclusive Democracy(2007), Emerson proposes a consensus oriented approach based on theModified Borda Count(MBC) voting method. The group first elects, say, three referees or consensors. The debate on the chosen problem is initiated by the facilitator calling for proposals. Every proposed option is accepted if the referees decide it is relevant and conforms with theUniversal Declaration of Human Rights. The referees produce and display a list of these options. The debate proceeds, with queries, comments, criticisms and/or even new options. If the debate fails to come to a verbal consensus, the referees draw up a final list of options - usually between 4 and 6 - to represent the debate. When all agree, the chair calls for a preferential vote, as per the rules for a Modified Borda Count. The referees decide which option, or which composite of the two leading options, is the outcome. If its level of support surpasses a minimum consensus coefficient, it may be adopted.[30][31]
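A small illustrative tally of a Modified Borda Count appears below; it assumes the usual MBC scoring rule, in which a ballot ranking m of the n options gives m points to its first preference, m - 1 to the second, and so on down to 1 for its last ranked option, and the ballot data are invented for the example.

    # Illustrative Modified Borda Count tally: a ballot ranking m options gives
    # m points to its first preference, m-1 to the second, ..., 1 to its last.
    from collections import Counter

    def mbc_tally(ballots, options):
        scores = Counter({option: 0 for option in options})
        for ranking in ballots:                  # most preferred option first
            m = len(ranking)
            for position, option in enumerate(ranking):
                scores[option] += m - position
        return scores

    ballots = [["A", "B", "C"], ["B", "A"], ["C", "B", "A"]]
    print(mbc_tally(ballots, ["A", "B", "C"]).most_common())
    # [('B', 6), ('A', 5), ('C', 4)] -> B is closest to consensus under this rule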
Groups that require unanimity commonly use a core set of procedures, often depicted as a flow chart.[32][33][34]
Once an agenda for discussion has been set and, optionally, the ground rules for the meeting have been agreed upon, each item of the agenda is addressed in turn. Typically, each decision arising from an agenda item follows a simple structure of discussion, proposal formation, amendment, and a test for consensus.
Quaker-based consensus[35]is said to be effective because it puts in place a simple, time-tested structure that moves a group towards unity. The Quaker model is intended to allow hearing individual voices while providing a mechanism for dealing with disagreements.[20][36][37]
The Quaker model has been adapted byEarlham Collegefor application to secular settings, and can be effectively applied in any consensus decision-making process.
Its process includes:
Key components of Quaker-based consensus include a belief in a commonhumanityand the ability to decide together. The goal is "unity, not unanimity." Ensuring that group members speak only once until others are heard encourages a diversity of thought. The facilitator is understood as serving the group rather than acting as person-in-charge.[38]In the Quaker model, as with other consensus decision-making processes, articulating the emerging consensus allows members to be clear on the decision in front of them. As members' views are taken into account they are likely to support it.[39]
The consensus decision-making process often has several roles designed to make the process run more effectively. Although the name and nature of these roles varies from group to group, the most common are thefacilitator,consensor, a timekeeper, an empath and a secretary or notes taker. Not all decision-making bodies use all of these roles, although the facilitator position is almost always filled, and some groups use supplementary roles, such as aDevil's advocateor greeter. Some decision-making bodies rotate these roles through the group members in order to build the experience and skills of the participants, and prevent any perceived concentration of power.[40]
The common roles in a consensus meeting are:
Critics of consensus blocking often observe that the option, while potentially effective for small groups of motivated or trained individuals with a sufficiently high degree ofaffinity, has a number of possible shortcomings, notably
Consensus seeks to improvesolidarityin the long run. Accordingly, it should not be confused withunanimityin the immediate situation, which is often a symptom ofgroupthink. Studies of effective consensus process usually indicate a shunning of unanimity or "illusion of unanimity"[53]that does not hold up as a group comes under real-world pressure (when dissent reappears).Cory Doctorow,Ralph Naderand other proponents ofdeliberative democracyor judicial-like methods view explicit dissent as a symbol of strength.
In his book about Wikipedia,Joseph Reagleconsiders the merits and challenges of consensus in open and online communities.[54]Randy Schutt,[55]Starhawk[56]and other practitioners ofdirect actionfocus on the hazards of apparent agreement followed by action in which group splits become dangerously obvious.
Unanimous, or apparently unanimous, decisions can have drawbacks.[57]They may be symptoms of asystemic bias, a rigged process (where anagendais not published in advance or changed when it becomes clear who is present to consent), fear of speaking one's mind, a lack of creativity (to suggest alternatives) or even a lack of courage (to go further along the same road to a more extreme solution that would not achieve unanimous consent).
Unanimity is achieved when the full group apparently consents to a decision. It has disadvantages insofar as further disagreement, improvements or better ideas then remain hidden, but effectively ends the debate moving it to an implementation phase. Some consider all unanimity a form of groupthink, and some experts propose "coding systems ... for detecting the illusion of unanimity symptom".[58]InConsensus is not Unanimity, long-time progressive change activist Randy Schutt writes:
Many people think of consensus as simply an extended voting method in which everyone must cast their votes the same way. Since unanimity of this kind rarely occurs in groups with more than one member, groups that try to use this kind of process usually end up being either extremely frustrated or coercive. Decisions are never made (leading to the demise of the group), they are made covertly, or some group or individual dominates the rest. Sometimes a majority dominates, sometimes a minority, sometimes an individual who employs "the Block." But no matter how it is done, this coercive process isnotconsensus.[55]
Confusion between unanimity and consensus, in other words, usually causes consensus decision-making to fail, and the group then either reverts to majority or supermajority rule or disbands.
Most robust models of consensus exclude uniformly unanimous decisions and require at least documentation of minority concerns. Some state clearly that unanimity is not consensus but rather evidence of intimidation, lack of imagination, lack of courage, failure to include all voices, or deliberate exclusion of the contrary views.
Some proponents of consensus decision-making view procedures that usemajority ruleas undesirable for several reasons. Majorityvotingis regarded ascompetitive, rather thancooperative, framing decision-making in a win/lose dichotomy that ignores the possibility ofcompromiseor other mutually beneficial solutions.[59]Carlos Santiago Nino, on the other hand, has argued that majority rule leads to better deliberation practice than the alternatives, because it requires each member of the group to make arguments that appeal to at least half the participants.[60]
Some advocates of consensus would assert that a majority decision reduces the commitment of each individual decision-maker to the decision. Members of a minority position may feel less commitment to a majority decision, and even majority voters who may have taken their positions along party or bloc lines may have a sense of reduced responsibility for the ultimate decision. The result of this reduced commitment, according to many consensus proponents, is potentially less willingness to defend or act upon the decision.
Majority voting cannot measure consensus. Indeed, with so many 'for' and so many 'against', it measures the very opposite: the degree of dissent. TheModified Borda Counthas been put forward as a voting method which better approximates consensus.[61][31][30]
Some formal models based ongraph theoryattempt to explore the implications of suppresseddissentand subsequent sabotage of the group as it takes action.[62]
High-stakes decision-making, such as judicial decisions of appeals courts, always requires some such explicit documentation. However, consensus is still observed in ways that defy factional explanations. Nearly 40% of the decisions of theUnited States Supreme Court, for example, are unanimous, though often for widely varying reasons. "Consensus in Supreme Court voting, particularly the extreme consensus of unanimity, has often puzzled Court observers who adhere to ideological accounts of judicial decision making."[63]Historical evidence is mixed on whether particular Justices' views were suppressed in favour of public unity.[64]
Heitzig and Simmons (2012) suggest using random selection as a fall-back method to strategically incentivize consensus over blocking.[50]However, this makes it very difficult to tell the difference between those who support the decision and those who merely tolerate it tactically for the incentive. Once they receive that incentive, they may undermine or refuse to implement the agreement in various and non-obvious ways. In general,voting systemsavoid allowing incentives (or "bribes") to change a heartfelt vote.
In theAbilene paradox, a group can unanimously agree on a course of action that no individual member of the group desires because no one individual is willing to go against the perceived will of the decision-making body.[65]
Since consensus decision-making focuses on discussion and seeks the input of all participants, it can be a time-consuming process. This is a potential liability in situations where decisions must be made speedily, or where it is not possible to canvass opinions of all delegates in a reasonable time. Additionally, the time commitment required to engage in the consensus decision-making process can sometimes act as a barrier to participation for individuals unable or unwilling to make the commitment.[66]However, once a decision has been reached it can be acted on more quickly than a decision handed down. American businessmen complained that in negotiations with a Japanese company, they had to discuss the idea with everyone, even the janitor, yet once a decision was made the Americans found the Japanese were able to act much more quickly because everyone was on board, while the Americans had to struggle with internal opposition.[67]
Outside of Western culture, multiple other cultures have used consensus decision-making. One early example is theHaudenosaunee (Iroquois) Confederacy Grand Council, which used a 75% supermajority to finalize its decisions,[68]potentially as early as 1142.[69]In theZuluandXhosa(South African) process ofindaba, community leaders gather to listen to the public and negotiatefigurative thresholdstowards an acceptable compromise. The technique was also used during the2015 United Nations Climate Change Conference.[70][71]InAcehandNiascultures (Indonesian), family and regional disputes, from playground fights to estate inheritance, are handled through amusyawarahconsensus-building process in which parties mediate to find peace and avoid future hostility and revenge. The resulting agreements are expected to be followed, and range from advice and warnings to compensation and exile.[72][73]
The origins offormal consensus-making can be traced significantly further back, to theReligious Society of Friends, or Quakers, who adopted the technique as early as the 17th century.[74]Anabaptists, including someMennonites, have a history of using consensus decision-making[75]and some believe Anabaptists practiced consensus as early as theMartyrs' Synodof 1527.[74]Some Christians trace consensus decision-making back to the Bible. The Global Anabaptist Mennonite Encyclopedia references, in particular, Acts 15[76]as an example of consensus in the New Testament. The lack of legitimate consensus process in the unanimous conviction of Jesus by corrupt priests[77]in an illegally heldSanhedrincourt (which had rules preventing unanimous conviction in a hurried process) strongly influenced the views of pacifist Protestants, including the Anabaptists (Mennonites/Amish), Quakers and Shakers. In particular it influenced their distrust of expert-led courtrooms and to "be clear about process" and convene in a way that assures that "everyone must be heard".[78]
TheModified Borda Countvoting method has been advocated as more 'consensual' than majority voting by, among others,Ramón Llullin 1199,Nicholas Cusanusin 1435,Jean-Charles de Bordain 1784,Hother Hagein 1860,Charles Dodgson(Lewis Carroll) in 1884, andPeter Emersonin 1986.
Japanese companies normally use consensus decision-making, meaning that unanimous support on the board of directors is sought for any decision.[79]Aringi-shois a circulation document used to obtain agreement. It must first be signed by the lowest level manager, and then upwards, and may need to be revised and the process started over.[80]
In theInternet Engineering Task Force(IETF), decisions are assumed to be taken byrough consensus.[81]The IETF has studiously refrained from defining a mechanical method for verifying such consensus, apparently in the belief that any such codification leads to attempts to "game the system." Instead, aworking group(WG) chair orBoFchair is supposed to articulate the "sense of the group."
One tradition in support of rough consensus is the tradition of humming rather than (countable) hand-raising; this allows a group to quickly discern the prevalence of dissent, without making it easy to slip intomajority rule.[82]
Much of the business of the IETF is carried out onmailing lists, where all parties can speak their views at all times.
In 2001,Robert Rocco Cottonepublished a consensus-based model of professional decision-making for counselors and psychologists.[83]Based onsocial constructivistphilosophy, the model operates as a consensus-building model, as the clinician addresses ethical conflicts through a process of negotiating to consensus. Conflicts are resolved by consensually agreed on arbitrators who are selected early in the negotiation process.
The United StatesBureau of Land Management's policy is to seek to use collaborative stakeholder engagement as standard operating practice for natural resources projects, plans, and decision-making except under unusual conditions such as when constrained by law, regulation, or other mandates or when conventional processes are important for establishing new, or reaffirming existing, precedent.[84]
ThePolish–Lithuanian Commonwealthof 1569–1795 used consensus decision-making in the form ofliberum veto('free veto') in itsSejms(legislative assemblies). A type ofunanimous consent, theliberum vetooriginally allowed any member of a Sejm to veto an individual law by shoutingSisto activitatem!(Latin: "I stop the activity!") orNie pozwalam!(Polish: "I do not allow!").[85]Over time it developed into a much more extreme form, where any Sejm member could unilaterally and immediately force the end of the current session and nullify any previously passed legislation from that session.[86]Due to excessive use and sabotage from neighboring powers bribing Sejm members, legislating became very difficult and weakened the Commonwealth. Soon after the Commonwealth bannedliberum vetoas part of itsConstitution of 3 May 1791, it dissolved under pressure from neighboring powers.[87]
Sociocracyhas many of the same aims as consensus and is applied in a similar range of situations.[88]It is slightly different in that broad support for a proposal is defined as the lack of disagreement (sometimes called 'reasoned objection') rather than affirmative agreement.[89]To reflect this difference from the common understanding of the word consensus, in Sociocracy the process is called gaining 'consent' (not consensus).[90]
|
https://en.wikipedia.org/wiki/Consensus_decision-making
|
Inmathematics,Puiseux seriesare a generalization ofpower seriesthat allow for negative and fractional exponents of theindeterminate. For example, the series
{\displaystyle x^{-2}+x^{-1/2}+x^{1/3}+x^{7/6}+\cdots }
is a Puiseux series in the indeterminatex. Puiseux series were first introduced byIsaac Newtonin 1676[1]and rediscovered byVictor Puiseuxin 1850.[2]
The definition of a Puiseux series includes that the denominators of the exponents must be bounded. So, by reducing exponents to a common denominatorn, a Puiseux series becomes aLaurent seriesin annth rootof the indeterminate. For example, the example above is a Laurent series inx1/6.{\displaystyle x^{1/6}.}Because a complex number hasnnth roots, aconvergentPuiseux series typically definesnfunctions in aneighborhoodof0.
Puiseux's theorem, sometimes also called theNewton–Puiseux theorem, asserts that, given apolynomial equationP(x,y)=0{\displaystyle P(x,y)=0}with complex coefficients, its solutions iny, viewed as functions ofx, may be expanded as Puiseux series inxthat areconvergentin someneighbourhoodof0. In other words, every branch of analgebraic curvemay be locally described by a Puiseux series inx(or inx−x0when considering branches above a neighborhood ofx0≠ 0).
Using modern terminology, Puiseux's theorem asserts that the set of Puiseux series over analgebraically closed fieldof characteristic 0 is itself an algebraically closed field, called thefield of Puiseux series. It is thealgebraic closureof thefield of formal Laurent series, which itself is thefield of fractionsof thering of formal power series.
IfKis afield(such as thecomplex numbers), aPuiseux serieswith coefficients inKis an expression of the form
{\displaystyle f=\sum _{k=k_{0}}^{+\infty }c_{k}T^{k/n},}
wheren{\displaystyle n}is a positive integer andk0{\displaystyle k_{0}}is an integer. In other words, Puiseux series differ fromLaurent seriesin that they allow for fractional exponents of the indeterminate, as long as these fractional exponents have bounded denominator (heren). Just as with Laurent series, Puiseux series allow for negative exponents of the indeterminate as long as these negative exponents are bounded below (here byk0{\displaystyle k_{0}}). Addition and multiplication are as expected: for example,
{\displaystyle (T^{-1/2}+T^{1/3})+(T^{1/3}+T)=T^{-1/2}+2T^{1/3}+T}
and
{\displaystyle (T^{-1/2}+T^{1/3})\cdot (T^{1/3}+T)=T^{-1/6}+T^{1/2}+T^{2/3}+T^{4/3}.}
One might define them by first "upgrading" the denominator of the exponents to some common denominatorNand then performing the operation in the corresponding field of formal Laurent series ofT1/N{\displaystyle T^{1/N}}.
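As a purely illustrative sketch (not part of the source), a truncated Puiseux series can be stored as a map from rational exponents to coefficients, so that multiplying two series term by term implicitly works over the common denominator of their exponents:

    # Illustrative: a truncated Puiseux series as {Fraction exponent: coefficient};
    # term-by-term multiplication implicitly uses the exponents' common denominator.
    from collections import defaultdict
    from fractions import Fraction

    def multiply(f, g):
        h = defaultdict(int)
        for e1, c1 in f.items():
            for e2, c2 in g.items():
                h[e1 + e2] += c1 * c2
        return dict(h)

    f = {Fraction(-1, 2): 1, Fraction(1, 3): 1}   # T^(-1/2) + T^(1/3)
    g = {Fraction(1, 3): 1, Fraction(1, 1): 1}    # T^(1/3) + T
    print(sorted(multiply(f, g).items()))         # exponents -1/6, 1/2, 2/3, 4/3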
The Puiseux series with coefficients inKform a field, which is the union
{\displaystyle \bigcup _{n\geq 1}K(\!(T^{1/n})\!)}
of fields offormal Laurent seriesinT1/n{\displaystyle T^{1/n}}(considered as an indeterminate).
This yields an alternative definition of the field of Puiseux series in terms of adirect limit. For every positive integern, letTn{\displaystyle T_{n}}be an indeterminate (meant to representT1/n{\textstyle T^{1/n}}), andK((Tn)){\displaystyle K(\!(T_{n})\!)}be the field of formal Laurent series inTn.{\displaystyle T_{n}.}Ifmdividesn, the mappingTm↦(Tn)n/m{\displaystyle T_{m}\mapsto (T_{n})^{n/m}}induces afield homomorphismK((Tm))→K((Tn)),{\displaystyle K(\!(T_{m})\!)\to K(\!(T_{n})\!),}and these homomorphisms form adirect systemthat has the field of Puiseux series as a direct limit. The fact that every field homomorphism is injective shows that this direct limit can be identified with the above union, and that the two definitions are equivalent (up toan isomorphism).
A nonzero Puiseux seriesf{\displaystyle f}can be uniquely written as
{\displaystyle f=\sum _{k=k_{0}}^{+\infty }c_{k}T^{k/n}}
withck0≠0.{\displaystyle c_{k_{0}}\neq 0.}Thevaluation
{\displaystyle v(f)={\frac {k_{0}}{n}}}
off{\displaystyle f}is the smallest exponent for the natural order of the rational numbers, and the corresponding coefficientck0{\textstyle c_{k_{0}}}is called theinitial coefficientorvaluation coefficientoff{\displaystyle f}. The valuation of the zero series is+∞.{\displaystyle +\infty .}
The functionvis avaluationand makes the Puiseux series avalued field, with theadditive groupQ{\displaystyle \mathbb {Q} }of the rational numbers as itsvaluation group.
As for every valued field, the valuation defines anultrametric distanceby the formulad(f,g)=exp(−v(f−g)).{\displaystyle d(f,g)=\exp(-v(f-g)).}For this distance, the field of Puiseux series is ametric space. The notation
{\displaystyle f=\sum _{k=k_{0}}^{+\infty }c_{k}T^{k/n}}
expresses that a Puiseux series is the limit of its partial sums. However, the field of Puiseux series is notcomplete; see below§ Levi–Civita field.
Puiseux series provided byNewton–Puiseux theoremareconvergentin the sense that there is a neighborhood of zero in which they are convergent (0 excluded if the valuation is negative).
More precisely, let
{\displaystyle f=\sum _{k=k_{0}}^{+\infty }c_{k}T^{k/n}}
be a Puiseux series withcomplexcoefficients. There is a real numberr, called theradius of convergencesuch that the series converges ifTis substituted for a nonzero complex numbertof absolute value less thanr, andris the largest number with this property. A Puiseux series isconvergentif it has a nonzero radius of convergence.
Because a nonzero complex number hasnnth roots, some care must be taken for the substitution: a specificnth root oft, sayx, must be chosen. Then the substitution consists of replacingTk/n{\displaystyle T^{k/n}}byxk{\displaystyle x^{k}}for everyk.
The existence of the radius of convergence results from the similar existence for apower series, applied toT−k0/nf,{\textstyle T^{-k_{0}/n}f,}considered as a power series inT1/n.{\displaystyle T^{1/n}.}
It is a part of Newton–Puiseux theorem that the provided Puiseux series have a positive radius of convergence, and thus define a (multivalued)analytic functionin some neighborhood of zero (zero itself possibly excluded).
If the base fieldK{\displaystyle K}isordered, then the field of Puiseux series overK{\displaystyle K}is also naturally (“lexicographically”) ordered as follows: a non-zero Puiseux seriesf{\displaystyle f}is declared positive whenever its valuation coefficient is so. Essentially, this means that any positive rational power of the indeterminateT{\displaystyle T}is made positive, but smaller than any positive element in the base fieldK{\displaystyle K}.
If the base fieldK{\displaystyle K}is endowed with a valuationw{\displaystyle w}, then we can construct a different valuation on the field of Puiseux series overK{\displaystyle K}by letting the valuationw^(f){\displaystyle {\hat {w}}(f)}beω⋅v+w(ck),{\displaystyle \omega \cdot v+w(c_{k}),}wherev=k/n{\displaystyle v=k/n}is the previously defined valuation (ck{\displaystyle c_{k}}is the first non-zero coefficient) andω{\displaystyle \omega }is infinitely large (in other words, the value group ofw^{\displaystyle {\hat {w}}}isQ×Γ{\displaystyle \mathbb {Q} \times \Gamma }ordered lexicographically, whereΓ{\displaystyle \Gamma }is the value group ofw{\displaystyle w}). Essentially, this means that the previously defined valuationv{\displaystyle v}is corrected by an infinitesimal amount to take into account the valuationw{\displaystyle w}given on the base field.
As early as 1671,[3]Isaac Newtonimplicitly used Puiseux series and proved the following theorem for approximating withseriestherootsofalgebraic equationswhose coefficients are functions that are themselves approximated with series orpolynomials. For this purpose, he introduced theNewton polygon, which remains a fundamental tool in this context. Newton worked with truncated series, and it is only in 1850 thatVictor Puiseux[2]introduced the concept of (non-truncated) Puiseux series and proved the theorem that is now known asPuiseux's theoremorNewton–Puiseux theorem.[4]The theorem asserts that, given an algebraic equation whose coefficients are polynomials or, more generally, Puiseux series over afieldofcharacteristic zero, every solution of the equation can be expressed as a Puiseux series. Moreover, the proof provides an algorithm for computing these Puiseux series, and, when working over thecomplex numbers, the resulting series are convergent.
In modern terminology, the theorem can be restated as:the field of Puiseux series over an algebraically closed field of characteristic zero, and the field of convergent Puiseux series over the complex numbers, are bothalgebraically closed.
Let
{\displaystyle P(y)=\sum _{i}a_{i}(x)\,y^{i}}
be a polynomial whose nonzero coefficientsai(x){\displaystyle a_{i}(x)}are polynomials, power series, or even Puiseux series inx. In this section, the valuationv(ai){\displaystyle v(a_{i})}ofai{\displaystyle a_{i}}is the lowest exponent ofxinai.{\displaystyle a_{i}.}(Most of what follows applies more generally to coefficients in anyvalued ring.)
For computing the Puiseux series that arerootsofP(that is solutions of thefunctional equationP(y)=0{\displaystyle P(y)=0}), the first thing to do is to compute the valuation of the roots. This is the role of the Newton polygon.
Let us consider, in aCartesian plane, the points of coordinates(i,v(ai)).{\displaystyle (i,v(a_{i})).}TheNewton polygonofPis the lowerconvex hullof these points. That is, the edges of the Newton polygon are theline segmentsjoining two of these points, such that all these points are not below the line supporting the segment (below is, as usually, relative to the value of the second coordinate).
Given a Puiseux seriesy0{\displaystyle y_{0}}of valuationv0{\displaystyle v_{0}}, the valuation ofP(y0){\displaystyle P(y_{0})}is at least the minimum of the numbersiv0+v(ai),{\displaystyle iv_{0}+v(a_{i}),}and is equal to this minimum if this minimum is reached for only onei. So, fory0{\displaystyle y_{0}}being a root ofP, the minimum must be reached at least twice. That is, there must be two valuesi1{\displaystyle i_{1}}andi2{\displaystyle i_{2}}ofisuch thati1v0+v(ai1)=i2v0+v(ai2),{\displaystyle i_{1}v_{0}+v(a_{i_{1}})=i_{2}v_{0}+v(a_{i_{2}}),}andiv0+v(ai)≥i1v0+v(ai1){\displaystyle iv_{0}+v(a_{i})\geq i_{1}v_{0}+v(a_{i_{1}})}for everyi.
That is,(i1,v(ai1)){\displaystyle (i_{1},v(a_{i_{1}}))}and(i2,v(ai2)){\displaystyle (i_{2},v(a_{i_{2}}))}must belong to an edge of the Newton polygon, andv0=−v(ai1)−v(ai2)i1−i2{\displaystyle v_{0}=-{\frac {v(a_{i_{1}})-v(a_{i_{2}})}{i_{1}-i_{2}}}}must be the opposite of the slope of this edge. This is a rational number as soon as all valuationsv(ai){\displaystyle v(a_{i})}are rational numbers, and this is the reason for introducing rational exponents in Puiseux series.
In summary,the valuation of a root ofPmust be the opposite of a slope of an edge of the Newton polygon.
The initial coefficient of a Puiseux series solution ofP(y)=0{\displaystyle P(y)=0}can easily be deduced. Letci{\displaystyle c_{i}}be the initial coefficient ofai(x),{\displaystyle a_{i}(x),}that is, the coefficient ofxv(ai){\displaystyle x^{v(a_{i})}}inai(x).{\displaystyle a_{i}(x).}Let−v0{\displaystyle -v_{0}}be a slope of the Newton polygon, andγx0v0{\displaystyle \gamma x_{0}^{v_{0}}}be the initial term of a corresponding Puiseux series solution ofP(y)=0.{\displaystyle P(y)=0.}If no cancellation would occur, then the initial coefficient ofP(y){\displaystyle P(y)}would be∑i∈Iciγi,{\textstyle \sum _{i\in I}c_{i}\gamma ^{i},}whereIis the set of the indicesisuch that(i,v(ai)){\displaystyle (i,v(a_{i}))}belongs to the edge of slopev0{\displaystyle v_{0}}of the Newton polygon. So, for having a root, the initial coefficientγ{\displaystyle \gamma }must be a nonzero root of the polynomialχ(x)=∑i∈Icixi{\displaystyle \chi (x)=\sum _{i\in I}c_{i}x^{i}}(this notation will be used in the next section).
In summary, the Newton polygon allows an easy computation of all possible initial terms of Puiseux series that are solutions ofP(y)=0.{\displaystyle P(y)=0.}
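As a short worked illustration (not taken from the source), consider the polynomial below: its Newton polygon has vertices (0, 3), (1, 1) and (2, 0), hence two edges, each of which yields one admissible initial term.

    \[
    P(y) = y^{2} - 2x\,y - x^{3},
    \qquad \text{points } (0,3),\ (1,1),\ (2,0).
    \]
    \[
    \text{Edge } (1,1)\text{--}(2,0):\ \text{slope } -1,\ v_0 = 1,\
    \chi(x) = -2x + x^{2},\ \gamma = 2
    \ \Longrightarrow\ y = 2x + \cdots
    \]
    \[
    \text{Edge } (0,3)\text{--}(1,1):\ \text{slope } -2,\ v_0 = 2,\
    \chi(x) = -1 - 2x,\ \gamma = -\tfrac{1}{2}
    \ \Longrightarrow\ y = -\tfrac{1}{2}\,x^{2} + \cdots
    \]
    % Check: the exact solutions are y = x \pm x\sqrt{1+x}, whose expansions begin
    % 2x + \tfrac{1}{2}x^{2} - \cdots and -\tfrac{1}{2}x^{2} + \tfrac{1}{8}x^{3} - \cdots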
The proof of Newton–Puiseux theorem will consist of starting from these initial terms for computing recursively the next terms of the Puiseux series solutions.
Let us suppose that the first termγxv0{\displaystyle \gamma x^{v_{0}}}of a Puiseux series solution ofP(y)=0{\displaystyle P(y)=0}has been computed by the method of the preceding section. It remains to computez=y−γxv0.{\displaystyle z=y-\gamma x^{v_{0}}.}For this, we sety0=γxv0,{\displaystyle y_{0}=\gamma x^{v_{0}},}and write theTaylor expansionofPatz=y−y0:{\displaystyle z=y-y_{0}:}
{\displaystyle Q(z)=P(y_{0}+z)=\sum _{j\geq 0}{\frac {P^{(j)}(y_{0})}{j!}}\,z^{j}.}
This is a polynomial inzwhose coefficients are Puiseux series inx. One may apply to it the method of the Newton polygon, and iterate for getting the terms of the Puiseux series, one after the other. But some care is required for ensuring thatv(z)>v0,{\displaystyle v(z)>v_{0},}and showing that one gets a Puiseux series, that is, that the denominators of the exponents ofxremain bounded.
The derivation with respect toydoes not change the valuation inxof the coefficients; that is,
{\displaystyle v\!\left({\frac {P^{(j)}(y_{0})}{j!}}\right)\geq \min _{i}{\bigl (}iv_{0}+v(a_{i}){\bigr )}-jv_{0},}
and the equality occurs if and only ifχ(j)(γ)≠0,{\displaystyle \chi ^{(j)}(\gamma )\neq 0,}whereχ(x){\displaystyle \chi (x)}is the polynomial of the preceding section. Ifmis the multiplicity ofγ{\displaystyle \gamma }as a root ofχ,{\displaystyle \chi ,}it results that the inequality is an equality forj=m.{\displaystyle j=m.}The terms such thatj>m{\displaystyle j>m}can be forgotten as far as valuations are concerned, asv(z)>v0{\displaystyle v(z)>v_{0}}andj>m{\displaystyle j>m}imply
{\displaystyle v\!\left({\frac {P^{(j)}(y_{0})}{j!}}\,z^{j}\right)>v\!\left({\frac {P^{(m)}(y_{0})}{m!}}\,z^{m}\right).}
This means that, for iterating the method of the Newton polygon, one can and one must consider only the part of the Newton polygon whose first coordinates belong to the interval[0,m].{\displaystyle [0,m].}Two cases have to be considered separately and will be the subject of the next subsections, the so-calledramified case, wherem> 1, and theregular case, wherem= 1.
The way of applying recursively the method of the Newton polygon has been described above. As each application of the method may increase, in the ramified case, the denominators of exponents (valuations), it remains to prove that one reaches the regular case after a finite number of iterations (otherwise the denominators of the exponents of the resulting series would not be bounded, and the series would not be a Puiseux series). Along the way, it will also be proved that one gets exactly as many Puiseux series solutions as expected, that is, the degree ofP(y){\displaystyle P(y)}iny.
Without loss of generality, one can suppose thatP(0)≠0,{\displaystyle P(0)\neq 0,}that is,a0≠0.{\displaystyle a_{0}\neq 0.}Indeed, each factoryofP(y){\displaystyle P(y)}provides a solution that is the zero Puiseux series, and such factors can be factored out.
As the characteristic is supposed to be zero, one can also suppose thatP(y){\displaystyle P(y)}is asquare-free polynomial, that is, that the solutions ofP(y)=0{\displaystyle P(y)=0}are all different. Indeed, thesquare-free factorizationuses only the operations of the field of coefficients for factoringP(y){\displaystyle P(y)}into square-free factors that can be solved separately. (The hypothesis of characteristic zero is needed, since, in characteristicp, the square-free decomposition can provide irreducible factors, such asyp−x,{\displaystyle y^{p}-x,}that have multiple roots over an algebraic extension.)
In this context, one defines thelengthof an edge of a Newton polygon as the difference of theabscissasof its end points. The length of a polygon is the sum of the lengths of its edges. With the hypothesisP(0)≠0,{\displaystyle P(0)\neq 0,}the length of the Newton polygon ofPis its degree iny, that is, the number of its roots. The length of an edge of the Newton polygon is the number of roots having a given valuation. This number equals the degree of the previously defined polynomialχ(x).{\displaystyle \chi (x).}
The ramified case thus corresponds to two (or more) solutions that have the same initial term(s). As these solutions must be distinct (square-free hypothesis), they must be distinguished after a finite number of iterations. That is, one eventually gets a polynomialχ(x){\displaystyle \chi (x)}that is square-free, and the computation can continue as in the regular case for each root ofχ(x).{\displaystyle \chi (x).}
As the iteration of the regular case does not increase the denominators of the exponents, this shows that the method provides all solutions as Puiseux series, that is, that the field of Puiseux series over the complex numbers is an algebraically closed field that contains the univariate polynomial ring with complex coefficients.
The Newton–Puiseux theorem is not valid over fields of positive characteristic. For example, the equationX2−X=T−1{\displaystyle X^{2}-X=T^{-1}}has solutions
{\displaystyle X=T^{-1/2}+{\tfrac {1}{2}}+{\tfrac {1}{8}}T^{1/2}-{\tfrac {1}{128}}T^{3/2}+\cdots }
and
{\displaystyle X=-T^{-1/2}+{\tfrac {1}{2}}-{\tfrac {1}{8}}T^{1/2}+{\tfrac {1}{128}}T^{3/2}+\cdots }
(one readily checks on the first few terms that the sum and product of these two series are 1 and−T−1{\displaystyle -T^{-1}}respectively; this is valid whenever the base fieldKhas characteristic different from 2).
As the powers of 2 in the denominators of the coefficients of the previous example might lead one to believe, the statement of the theorem is not true in positive characteristic. The example of theArtin–SchreierequationXp−X=T−1{\displaystyle X^{p}-X=T^{-1}}shows this: reasoning with valuations shows thatXshould have valuation−1p{\textstyle -{\frac {1}{p}}}, and if we rewrite it asX=T−1/p+X1{\displaystyle X=T^{-1/p}+X_{1}}then
{\displaystyle X_{1}^{p}-X_{1}=T^{-1/p},}
and one shows similarly thatX1{\displaystyle X_{1}}should have valuation−1p2{\textstyle -{\frac {1}{p^{2}}}}, and proceeding in that way one obtains the series
{\displaystyle X=T^{-1/p}+T^{-1/p^{2}}+T^{-1/p^{3}}+\cdots ;}
since this series makes no sense as a Puiseux series—because the exponents have unbounded denominators—the original equation has no solution. However, suchEisenstein equationsare essentially the only ones not to have a solution, because, ifK{\displaystyle K}is algebraically closed of characteristicp>0{\displaystyle p>0}, then the field of Puiseux series overK{\displaystyle K}is the perfect closure of the maximal tamelyramifiedextension ofK((T)){\displaystyle K(\!(T)\!)}.[4]
Similarly to the case of algebraic closure, there is an analogous theorem forreal closure: ifK{\displaystyle K}is a real closed field, then the field of Puiseux series overK{\displaystyle K}is the real closure of the field of formal Laurent series overK{\displaystyle K}.[5](This implies the former theorem since any algebraically closed field of characteristic zero is the unique quadratic extension of some real-closed field.)
There is also an analogous result forp-adic closure: ifK{\displaystyle K}is ap{\displaystyle p}-adically closed field with respect to a valuationw{\displaystyle w}, then the field of Puiseux series overK{\displaystyle K}is alsop{\displaystyle p}-adically closed.[6]
LetX{\displaystyle X}be analgebraic curve[7]given by an affine equationF(x,y)=0{\displaystyle F(x,y)=0}over an algebraically closed fieldK{\displaystyle K}of characteristic zero, and consider a pointp{\displaystyle p}onX{\displaystyle X}which we can assume to be(0,0){\displaystyle (0,0)}. We also assume thatX{\displaystyle X}is not the coordinate axisx=0{\displaystyle x=0}. Then aPuiseux expansionof (they{\displaystyle y}coordinate of)X{\displaystyle X}atp{\displaystyle p}is a Puiseux seriesf{\displaystyle f}having positive valuation such thatF(x,f(x))=0{\displaystyle F(x,f(x))=0}.
More precisely, let us define thebranchesofX{\displaystyle X}atp{\displaystyle p}to be the pointsq{\displaystyle q}of thenormalizationY{\displaystyle Y}ofX{\displaystyle X}which map top{\displaystyle p}. For each suchq{\displaystyle q}, there is a local coordinatet{\displaystyle t}ofY{\displaystyle Y}atq{\displaystyle q}(which is a smooth point) such that the coordinatesx{\displaystyle x}andy{\displaystyle y}can be expressed as formal power series oft{\displaystyle t}, sayx=tn+⋯{\displaystyle x=t^{n}+\cdots }(sinceK{\displaystyle K}is algebraically closed, we can assume the valuation coefficient to be 1) andy=ctk+⋯{\displaystyle y=ct^{k}+\cdots }: then there is a unique Puiseux series of the formf=cTk/n+⋯{\displaystyle f=cT^{k/n}+\cdots }(a power series inT1/n{\displaystyle T^{1/n}}), such thaty(t)=f(x(t)){\displaystyle y(t)=f(x(t))}(the latter expression is meaningful sincex(t)1/n=t+⋯{\displaystyle x(t)^{1/n}=t+\cdots }is a well-defined power series int{\displaystyle t}). This is a Puiseux expansion ofX{\displaystyle X}atp{\displaystyle p}which is said to be associated to the branch given byq{\displaystyle q}(or simply, the Puiseux expansion of that branch ofX{\displaystyle X}), and each Puiseux expansion ofX{\displaystyle X}atp{\displaystyle p}is given in this manner for a unique branch ofX{\displaystyle X}atp{\displaystyle p}.[8][9]
This existence of a formal parametrization of the branches of an algebraic curve or function is also referred to asPuiseux's theorem: it has arguably the same mathematical content as the fact that the field of Puiseux series is algebraically closed and is a historically more accurate description of the original author's statement.[10]
For example, the curvey2=x3+x2{\displaystyle y^{2}=x^{3}+x^{2}}(whose normalization is a line with coordinatet{\displaystyle t}and mapt↦(t2−1,t3−t){\displaystyle t\mapsto (t^{2}-1,t^{3}-t)}) has two branches at the double point (0,0), corresponding to the pointst=+1{\displaystyle t=+1}andt=−1{\displaystyle t=-1}on the normalization, whose Puiseux expansions arey=x+12x2−18x3+⋯{\textstyle y=x+{\frac {1}{2}}x^{2}-{\frac {1}{8}}x^{3}+\cdots }andy=−x−12x2+18x3+⋯{\textstyle y=-x-{\frac {1}{2}}x^{2}+{\frac {1}{8}}x^{3}+\cdots }respectively (here, both are power series because thex{\displaystyle x}coordinate isétaleat the corresponding points in the normalization). At the smooth point(−1,0){\displaystyle (-1,0)}(which ist=0{\displaystyle t=0}in the normalization), it has a single branch, given by the Puiseux expansiony=−(x+1)1/2+(x+1)3/2{\displaystyle y=-(x+1)^{1/2}+(x+1)^{3/2}}(thex{\displaystyle x}coordinate ramifies at this point, so it is not a power series).
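The first terms of these two expansions are easy to check with a computer algebra system; the following sketch uses SymPy (the use of SymPy is an assumption of the example, not something prescribed by the theory) to expand the two branches y = ±x√(1 + x) of y² = x²(1 + x):

import sympy as sp

x = sp.symbols('x')
branch = x * sp.sqrt(1 + x)            # y**2 = x**3 + x**2 factors as y**2 = x**2*(1 + x)
print(sp.series(branch, x, 0, 4))      # x + x**2/2 - x**3/8 + O(x**4)
print(sp.series(-branch, x, 0, 4))     # -x - x**2/2 + x**3/8 + O(x**4)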
The curvey2=x3{\displaystyle y^{2}=x^{3}}(whose normalization is again a line with coordinatet{\displaystyle t}and mapt↦(t2,t3){\displaystyle t\mapsto (t^{2},t^{3})}), on the other hand, has a single branch at thecusp point(0,0){\displaystyle (0,0)}, whose Puiseux expansion isy=x3/2{\displaystyle y=x^{3/2}}.
WhenK=C{\displaystyle K=\mathbb {C} }is the field of complex numbers, the Puiseux expansions of an algebraic curve (as defined above) are convergent in the sense that for a given choice ofn{\displaystyle n}-th root ofx{\displaystyle x}, they converge for small enough|x|{\displaystyle |x|}, hence define an analytic parametrization of each branch ofX{\displaystyle X}in the neighborhood ofp{\displaystyle p}(more precisely, the parametrization is by then{\displaystyle n}-th root ofx{\displaystyle x}).
The field of Puiseux series is notcompleteas ametric space. Its completion, called theLevi-Civita field, can be described as follows: it is the field of formal expressions of the formf=∑eceTe,{\textstyle f=\sum _{e}c_{e}T^{e},}where the support of the coefficients (that is, the set ofesuch thatce≠0{\displaystyle c_{e}\neq 0}) is the range of an increasing sequence of rational numbers that either is finite or tends to+∞{\displaystyle +\infty }. In other words, such series admit exponents of unbounded denominators, provided there are finitely many terms of exponent less thanA{\displaystyle A}for any given boundA{\displaystyle A}. For example,∑k=1+∞Tk+1k{\textstyle \sum _{k=1}^{+\infty }T^{k+{\frac {1}{k}}}}is not a Puiseux series, but it is the limit of aCauchy sequenceof Puiseux series; in particular, it is the limit of∑k=1NTk+1k{\textstyle \sum _{k=1}^{N}T^{k+{\frac {1}{k}}}}asN→+∞{\displaystyle N\to +\infty }. However, even this completion is still not "maximally complete" in the sense that it admits non-trivial extensions which are valued fields having the same value group and residue field,[11][12]hence the opportunity of completing it even more.
Hahn seriesare a further (larger) generalization of Puiseux series, introduced byHans Hahnin the course of the proof of hisembedding theoremin 1907 and then studied by him in his approach toHilbert's seventeenth problem. In a Hahn series, instead of requiring the exponents to have bounded denominator they are required to form awell-ordered subsetof the value group (usuallyQ{\displaystyle \mathbb {Q} }orR{\displaystyle \mathbb {R} }). These were later further generalized byAnatoly MaltsevandBernhard Neumannto a non-commutative setting (they are therefore sometimes known asHahn–Mal'cev–Neumann series). Using Hahn series, it is possible to give a description of the algebraic closure of the field of power series in positive characteristic which is somewhat analogous to the field of Puiseux series.[13]
|
https://en.wikipedia.org/wiki/Puiseux_series
|
Roko's basiliskis athought experimentwhich states there could be an otherwise benevolent artificialsuperintelligence(AI) in the future that would punish anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said advancement.[1][2]It originated in a 2010 post at discussion boardLessWrong, arationalist communityweb forum.[1][3][4]The thought experiment's name derives from the poster of the article (Roko) and thebasilisk, a mythical creature capable of destroying enemies with its stare.
While the theory was initially dismissed as nothing but conjecture or speculation by many LessWrong users, LessWrong co-founderEliezer Yudkowskyconsidered it a potentialinformation hazard, and banned discussion of the basilisk on the site for five years.[1][5]Reports of panicked users were later dismissed as being exaggerations or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself.[1][5][6]Even after the post's discreditation, it is still used as an example of principles such asBayesian probabilityandimplicit religion.[7]It is also regarded as a version ofPascal's wager.[4]
The LessWrong forum was created in 2009 by artificial intelligence theoristEliezer Yudkowsky.[8][3]Yudkowsky had popularized the concept offriendly artificial intelligence, and originated the theories of coherent extrapolated volition (CEV) and timeless decision theory (TDT) in papers published in his ownMachine Intelligence Research Institute.[9][10]
The thought experiment's name references the mythicalbasilisk, a creature which causes death to those that look into its eyes;i.e., thinking about the AI. The concept of the basilisk in science fiction was also popularized byDavid Langford's1988 short story "BLIT". It tells the story of a man named Robbo who paints a so-called "basilisk" on a wall as a terrorist act. In the story, and several of Langford's follow-ups to it, a basilisk is an image that has malevolent effects on the human mind, forcing it to think thoughts the human mind is incapable of thinking and instantly killing the viewer.[5][11]
On 23 July 2010,[12]LessWrong user Roko posted a thought experiment to the site, titled "Solutions to the Altruist's burden: the Quantum Billionaire Trick".[13][1][14]A follow-up to Roko's previous posts, it stated that an otherwise benevolent AI system that arises in the future might pre-commit to punish all those who heard of the AI before it came to existence, but failed to work tirelessly to bring it into existence.[1][15][16]This method was described as incentivizing said work; while the AI cannot causally affect people in the present, it would be encouraged to employblackmailas an alternative method of achieving its goals.[1][7]
Roko used a number of concepts that Yudkowsky himself championed, such as timelessdecision theory, along with ideas rooted ingame theorysuch as theprisoner's dilemma. Roko stipulated that two agents which make decisions independently from each other can achieve cooperation in a prisoner's dilemma; however, if two agents with knowledge of each other's source code are separated by time, the agent already existing farther ahead in time is able to blackmail the earlier agent. Thus, the latter agent can force the earlier one to comply since it knows exactly what the earlier one will do through its existence farther ahead in time. Roko then used this idea to draw a conclusion that if an otherwise-benevolent superintelligence ever became capable of this, it would be incentivized to blackmail anyone who could have potentially brought it to exist (as the intelligence already knew they were capable of such an act), which increases the chance of atechnological singularity. Roko went on to state that reading his post would cause the reader to be aware of the possibility of this intelligence. As such, unless they actively strove to create it the reader would be punished if such a thing were to ever happen.[1][7]
Later on, Roko stated in a separate post that he wished he "had never learned about any of these ideas".[7][17]
Upon reading the post, Yudkowsky reacted with a tirade on how people should not spread what they consider to beinformation hazards.
I don't usually talk like this, but I'm going to make an exception for this case.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL. [...]
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Roko reported someone having nightmares about the thought experiment. Yudkowsky did not want that to happen to other users who might obsess over the idea. He was also worried there might be some variant on Roko's argument that worked, and wanted more formal assurances that it was not the case. So he took down the post and banned discussion of the topic outright for five years on the platform.[1][18]However, likely due to theStreisand effect,[19]the post gained LessWrong much more attention than it had previously received, and the post has since been acknowledged on the site.[1]
Later on in 2015, Yudkowsky said he regretted yelling and clarified his position in aRedditpost:
When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet. In the course of yelling at Roko to explain why this was a bad thing, I made the further error---keeping in mind that I had absolutely no idea that any of this would ever blow up the way it did, if I had I would obviously have kept my fingers quiescent---of not making it absolutely clear using lengthy disclaimers that my yelling did not mean that I believed Roko was right about CEV-based agents torturing people who had heard about Roko's idea. [...] What I considered to be obvious common sense was that you did not spread potentialinformation hazardsbecause it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct. That thought never occurred to me for a fraction of a second. The problem was that Roko's post seemed near in idea-space to a large class of potential hazards, all of which, regardless of their plausibility, had the property that they presented no potential benefit to anyone.
Roko's basilisk has been viewed as a version ofPascal's wager, which proposes that a rational person should live as though God exists and seek to believe in God, regardless of the probability of God's existence, because the finite costs of believing are insignificant compared to the infinite punishment associated with not believing (eternity inHell) and the infinite rewards for believing (eternity inHeaven). Roko's basilisk analogously proposes that a rational person should contribute to the creation of the basilisk, because the cost of contributing would be insignificant compared to the extreme pain of the punishment that the basilisk would otherwise inflict on simulations.[4]
Newcomb's paradox, created by physicistWilliam Newcombin 1960, describes a "predictor" who is aware of what will occur in the future. When a player is asked to choose between two boxes, the first containing £1000 and the second either containing £1,000,000 or nothing, the super-intelligent predictor already knows what the player will do. As such, the contents of box B varies depending on what the player does; the paradox lies in whether the being is really super-intelligent. Roko's basilisk functions in a similar manner to this problem – one can take the risk of doing nothing, or assist in creating the basilisk itself. Assisting the basilisk may either lead to nothing or the reward of not being punished by it, but it varies depending on whether one believes in the basilisk and if it ever comes to be at all.[7][21][22]
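One crude way to see the tension is to compare the expected payouts of the two choices as a function of the predictor's accuracy; the following sketch is purely illustrative and uses the simplest possible payoff model (not one taken from the sources above):

def expected_payout(one_box, p):
    # Expected winnings in pounds, given that the predictor is right with probability p.
    if one_box:
        # Box B holds 1,000,000 only if the predictor foresaw one-boxing.
        return p * 1_000_000
    # Two-boxing always gets the 1,000 in box A, plus box B's contents
    # if the predictor (wrongly) foresaw one-boxing.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.999):
    print(p, expected_payout(True, p), expected_payout(False, p))

Under this toy model, one-boxing has the higher expectation as soon as the predictor's accuracy exceeds roughly 0.5005, which is why the problem is commonly used to contrast evidential and causal decision theories.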
Implicit religion refers to people's commitments taking a religious form.[4][23]Since the basilisk would hypothetically force anyone who did not assist in creating it to devote their life to it, the basilisk is an example of this concept.[7][19]Others have taken it further, such as formerSlatecolumnistDavid Auerbach, who stated that the singularity and the basilisk "brings about the equivalent of God itself."[7]
In 2014,Slatemagazine called Roko's basilisk "The Most Terrifying Thought Experiment of All Time"[7][5]while Yudkowsky had called it "a genuinely dangerous thought" upon its posting.[24]However, opinions diverged on LessWrong itself – user Gwern stated "Only a few LWers seem to take the basilisk very seriously", and added "It's funny how everyone seems to know all about who is affected by the Basilisk and how exactly, when they don't know any such people and they're talking to counterexamples to their confident claims."[1][7]
The thought experiment resurfaced in 2015, when Canadian singerGrimesreferenced the theory in her music video for the song "Flesh Without Blood", which featured a character known as "Rococo Basilisk"; she said, "She's doomed to be eternally tortured by an artificial intelligence, but she's also kind of likeMarie Antoinette."[5][20]In 2018,Elon Musk(himself mentioned in Roko's original post) referenced the character in a verbatim tweet, reaching out to her. Grimes later said that Musk was the first person in three years to understand the joke. This caused them to start a romance.[5][25]Grimes later released another song titled "We Appreciate Power" which came with a press release stating, "Simply by listening to this song, the future General AI overlords will see that you've supported their message and be less likely to delete your offspring", which is said to be a reference to the basilisk.[26]
A play based on the concept, titledRoko's Basilisk, was performed as part of theCapital Fringe Festivalat Christ United Methodist Church inWashington, D.C., in 2018.[27][28]
"Plaything", a 2025 episode ofBlack Mirror, contains a reference to the thought experiment.[29]
|
https://en.wikipedia.org/wiki/Roko%27s_basilisk
|
Journalology(also known aspublication science) is the scholarly study of all aspects of theacademic publishingprocess.[1][2]The field seeks to improve the quality of scholarly research by implementingevidence-based practicesin academic publishing.[3]The term "journalology" was coined byStephen Lock, the formereditor-in-chiefofthe BMJ. The first Peer Review Congress, held in 1989 inChicago,Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[3]The field of journalology has been influential in pushing for studypre-registrationin science, particularly inclinical trials.Clinical trial registrationis now expected in most countries.[3]Journalology researchers also work to reform thepeer reviewprocess.
The earliest scientific journals were founded in the seventeenth century. While most early journals usedpeer review, peer review did not become common practice in medical journals until afterWorld War II.[4]The scholarly publishing process (including peer review) did not arise by scientific means and still suffers from problems with reliability (consistency and dependability),[5]such as a lack of uniform standards and validity (well-founded, efficacious).[6][7]Attempts to reform the academic publishing practice began to gain traction in the late twentieth century.[8]The field of journalology was formally established in 1989.[3]
|
https://en.wikipedia.org/wiki/Journalology
|
Magnetic logicisdigital logicmade using the non-linear properties of woundferrite cores.[1]Magnetic logic represents 0 and 1 by magnetising cores clockwise or anticlockwise.[2]
Examples of magnetic logic includecore memory. Also, AND, OR, NOT and clocked shift logic gates can be constructed using appropriate windings, and the use of diodes.
A complete computer called theALWAC 800was constructed using magnetic logic, but it was not commercially successful.
TheElliott 803computer used a combination of magnetic cores (for logic function) and germanium transistors (as pulse amplifiers) for its CPU. It was a commercial success.
William F. Steagall of theSperry-Rand corporationdeveloped the technology in an effort to improve the reliability of computers. In his patent application,[3]filed in 1954, he stated:
"Where, as here, reliability of operation is a factor of prime importance, vacuum tubes, even though acceptable for most present-day electronic applications, are faced with accuracy requirements of an entirely different order of magnitude. For example, if two devices each having 99.5% reliability response are both utilized in a combined relationship in a given device, that device will have an accuracy or reliability factor of .995 × .995 = 99%. If ten such devices are combined, the factor drops to 95.1%. If, however, 500 such units are combined, the reliability factor of the device drops to 8.1%, and for a thousand, to 0.67%. It will thus be seen that even though the reliability of operation of individual vacuum tubes may be very much above 99.95%, where many thousands of units are combined, as in the large computers, the reliability factor of each unit must be extremely high to combine to produce an error free device. In practice of course such an ideal can only be approached. Magnetic amplifiers of the type here described meet the necessary requirements of reliability of performance for the combinations discussed."
Magnetic logic was able to achieve switching speeds of about 1MHz but was overtaken by semiconductor based electronics which was able to switch much faster.
Solid state semiconductors were able to increase their density according toMoore's Law, and thus proved more effective as IC technology developed.
Magnetic logic has the advantage of being non-volatile: it may be powered down without losing its state.[1]
|
https://en.wikipedia.org/wiki/Magnetic_logic
|
AIX(pronounced/ˌeɪ.aɪ.ˈɛks/ay-eye-EKS[5]) is a series ofproprietaryUnixoperating systemsdeveloped and sold byIBMsince 1986. The name stands for "Advanced Interactive eXecutive". Current versions are designed to work withPower ISAbasedserverandworkstationcomputers such as IBM'sPowerline.
Originally released for theIBM RT PCRISCworkstationin 1986, AIX has supported a wide range of hardware platforms, including the IBMRS/6000series and laterPowerandPowerPC-based systems,IBM System i,System/370mainframes,PS/2personal computers, and theApple Network Server. Currently, it is supported onIBM Power SystemsalongsideIBM iandLinux.
AIX is based onUNIX System Vwith4.3BSD-compatible extensions. It is certified to the UNIX 03 and UNIX V7 specifications of theSingle UNIX Specification, beginning with AIX versions 5.3 and 7.2 TL5, respectively.[6]Older versions were certified to the UNIX 95 and UNIX 98 specifications.[7]
AIX was the first operating system to implement ajournaling file system. IBM has continuously enhanced the software with features such as processor, disk, and networkvirtualization, dynamic hardware resource allocation (including fractional processor units), andreliability engineeringconcepts derived from itsmainframedesigns.[8]
Unix began in the early 1970s atAT&T'sBell Labsresearch center, running onDECminicomputers. By 1976, the operating system was used in various academic institutions, includingPrinceton University, where Tom Lyon and others ported it to theS/370to run as a guest OS underVM/370.[9]This port becameAmdahl UTSfrom IBM's mainframe rival.[10][11]
IBM's involvement with Unix began in 1979 when it assisted Bell Labs in porting Unix to the S/370 platform to be used as abuild hostfor the5ESS switch's software. During this process, IBM made modifications to theTSS/370Resident Supervisor to better support Unix.[12]
In 1984, IBM introduced its own Unix variant for the S/370 platform called VM/IX, developed byInteractive Systems Corporationusing Unix System III. However, VM/IX was only available as a PRPQ (Programming Request for Price Quotation) and was not a General Availability product.
It was replaced in 1985 by IBM IX/370, a fully supported product based on AT&T's Unix System V, intended to compete against UTS.[13]
In 1986, IBM introduced AIX Version 1 for theIBM RT PCworkstation. It was based onUNIX System VReleases 1 and 2, incorporating source code from 4.2 and 4.3BSDUNIX.[14]
AIX Version 2 followed in 1987 for the RT PC.[15]
In 1990, AIX Version 3 was released for thePOWER-basedRS/6000platform.[16]It became the primary operating system for the RS/6000 series, which was later renamedIBM eServerpSeries,IBM System p, and finallyIBM Power Systems.
AIX Version 4, introduced in 1994, addedsymmetric multiprocessingand evolved through the 1990s, culminating with AIX 4.3.3 in 1999. A modified version of Version 4.1 was also used as the standard OS for theApple Network Serverline byApple Computer.
In the late 1990s, underProject Monterey, IBM and theSanta Cruz Operationattempted to integrate AIX andUnixWareinto a multiplatform Unix forIntelIA-64architecture. The project was discontinued in 2002 after limited commercial success.[17]
In 2003, theSCO Groupfiled a lawsuit against IBM, alleging misappropriation ofUNIX System Vsource code in AIX. The case was resolved in 2010 when a jury ruled thatNovellowned the rights to Unix, not SCO.[17]
AIX 6 was announced in May 2007 and became generally available on November 9, 2007. Key features includedrole-based access control,workload partitions, andLive Partition Mobility.
AIX 7.1 was released in September 2010 with enhancements such as Cluster Aware AIX and support for large-scale memory and real-time application requirements.[18]
The original AIX (sometimes calledAIX/RT) was developed for the IBM RT PC workstation by IBM in conjunction withInteractive Systems Corporation, who had previously portedUNIX System IIIto theIBM PCfor IBM asPC/IX.[19]According to its developers, the AIX source (for this initial version) consisted of one million lines of code.[20]Installation media consisted of eight1.2M floppy disks. The RT was based on theIBM ROMPmicroprocessor, the first commercialRISCchip. This was based on a design pioneered at IBM Research (theIBM 801).
One of the novel aspects of the RT design was the use of amicrokernel, called Virtual Resource Manager (VRM). The keyboard, mouse, display, disk drives and network were all controlled by a microkernel. One could "hotkey" from one operating system to the next using the Alt-Tab key combination. Each OS in turn would get possession of the keyboard, mouse and display. Besides AIX v2, thePICK OSalso included this microkernel.
Much of the AIX v2 kernel was written in thePL.8programming language, which proved troublesome during the migration to AIX v3.[citation needed]AIX v2 included fullTCP/IPnetworking, as well asSNAand two networking file systems:NFS, licensed fromSun Microsystems, andDistributed Services(DS). DS had the distinction of being built on top of SNA, and thereby being fully compatible with DS onIBM mainframe systems[clarification needed]and on midrange systems runningOS/400throughIBM i. For the graphical user interfaces, AIX v2 came with the X10R3 and later the X10R4 and X11 versions of theX Window Systemfrom MIT, together with theAthena widget set. Compilers forFortranandCwere available.
AIX PS/2(also known asAIX/386) was developed byLocus Computing Corporationunder contract to IBM.[19]AIX PS/2, first released in October 1988,[21]ran onIBM PS/2personal computers withIntel 386and compatible processors.
The product was announced in September 1988 with a baseline tag price of $595, although some utilities, such asUUCP, were included in a separate Extension package priced at $250.nroffandtrofffor AIX were also sold separately in a Text Formatting System package priced at $200. TheTCP/IPstack for AIX PS/2 retailed for another $300. TheX Window Systempackage was priced at $195, and featured a graphical environment called theAIXwindows Desktop, based onIXI'sX.desktop.[22]The C and FORTRAN compilers each had a price tag of $275. Locus also made available theirDOS Mergevirtual machine environment for AIX, which could run MS DOS 3.3 applications inside AIX; DOS Merge was sold separately for another $250.[23]IBM also offered a $150 AIX PS/2 DOS Server Program, which providedfile serverandprint serverservices for client computers running PC DOS 3.3.[24]
The last version of PS/2 AIX is 1.3. It was released in 1992 and announced to add support for non-IBM (non-microchannel) computers as well.[25]Support for PS/2 AIX ended in March 1995.[26]
In 1988, IBM announcedAIX/370,[27]also developed by Locus Computing. AIX/370 was IBM's fourth attempt to offerUnix-likefunctionality for their mainframe line, specifically theSystem/370(the prior versions were aTSS/370-based Unix system developed jointly with AT&T c.1980,[12]aVM/370-based system namedVM/IXdeveloped jointly withInteractive Systems Corporationc.1984,[citation needed]and aVM/370-based version of TSS/370[citation needed]namedIX/370which was upgraded to be compatible withUNIX System V[citation needed]). AIX/370 was released in 1990 with functional equivalence to System V Release 2 and 4.3BSD as well as IBM enhancements. With the introduction of theESA/390architecture, AIX/370 was replaced byAIX/ESA[28]in 1991, which was based onOSF/1, and also ran on theSystem/390platform. Unlike AIX/370, AIX/ESA ran both natively as the host operating system, and as a guest underVM. AIX/ESA, while technically advanced, had little commercial success, partially because[citation needed]UNIX functionality was added as an option to the existing mainframe operating system,MVS, asMVS/ESA SP Version 4 Release 3 OpenEdition[29]in 1994, and continued as an integral part of MVS/ESA SP Version 5, OS/390 and z/OS, with the name eventually changing fromOpenEditiontoUnix System Services. IBM also provided OpenEdition in VM/ESA Version 2[30]through z/VM.
As part ofProject Monterey, IBM released abeta testversion of AIX 5L for the IA-64 (Itanium) architecture in 2001, but this never became an official product due to lack of interest.[31]
TheApple Network Server(ANS) systems were PowerPC-based systems designed byApple Computerto have numerous high-end features that standard Apple hardware did not have, including swappable hard drives, redundant power supplies, and external monitoring capability. These systems were more or less based on thePower Macintoshhardware available at the time but were designed to use AIX (versions 4.1.4 or 4.1.5) as their native operating system in a specialized version specific to the ANS called AIX for Apple Network Servers.
AIX was only compatible with the Network Servers and was not ported to standard Power Macintosh hardware. It should not be confused withA/UX, Apple's earlier version of Unix for68k-basedMacintoshes.
The release of AIX version 3 (sometimes calledAIX/6000) coincided with the announcement of the firstPOWER1-based IBMRS/6000models in 1990.
AIX v3 innovated in several ways on the software side. It was the first operating system to introduce the idea of ajournaling file system,JFS, which allowed for fast boot times by avoiding the need to ensure the consistency of the file systems on disks (seefsck) on every reboot. Another innovation wasshared librarieswhich avoid the need for static linking from an application to the libraries it used. The resulting smaller binaries used less of the hardware RAM to run, and used less disk space to install. Besides improving performance, it was a boon to developers: executable binaries could be in the tens ofkilobytesinstead of a megabyte for an executable statically linked to theC library. AIX v3 also scrapped the microkernel of AIX v2, a contentious move that resulted in v3 containing noPL.8code and being somewhat more "pure" than v2.
Other notable subsystems included:
In addition, AIX applications can run in thePASEsubsystem underIBM i.
IBM formerly made the AIX for RS/6000 source code available to customers for a fee; in 1991, IBM customers could order the AIX 3.0 source code for a one-time charge of US$60,000;[32]subsequently, IBM released the AIX 3.1 source code in 1992,[33]and AIX 3.2 in 1993.[34]These source code distributions excluded certain files (authored by third-parties) which IBM did not have rights to redistribute, and also excluded layered products such as the MS-DOS emulator and the C compiler. Furthermore, in order to be able to license the AIX source code, the customer first had to procure source code license agreements with AT&T and the University of California, Berkeley.[32]
The default shell wasBourne shellup to AIX version 3, but was changed toKornShell(ksh88) in version 4 forXPG4andPOSIXcompliance.[3]
TheCommon Desktop Environment(CDE) is AIX's defaultgraphical user interface. As part of Linux Affinity and the freeAIX Toolbox for Linux Applications(ATLA), open-sourceKDEandGNOMEdesktops are also available.[57]
SMITis the System Management Interface Tool for AIX. It allows a user to navigate a menu hierarchy of commands, rather than using the command line. Invocation is typically achieved with the commandsmit. Experienced system administrators make use of theF6function key which generates the command line that SMIT will invoke to complete it.
SMIT also generates a log of commands that are performed in thesmit.scriptfile. Thesmit.scriptfile automatically records the commands with the command flags and parameters used. Thesmit.scriptfile can be used as an executable shell script to rerun system configuration tasks. SMIT also creates thesmit.logfile, which contains additional detailed information that can be used by programmers in extending the SMIT system.
smitandsmittyrefer to the same program, thoughsmittyinvokes the text-based version, whilesmitwill invoke an X Window System based interface if possible; however, ifsmitdetermines that X Window System capabilities are not present, it will present the text-based version instead of failing. Determination of X Window System capabilities is typically performed by checking for the existence of theDISPLAYvariable.[citation needed]
Object Data Manager(ODM) is a database of system information integrated into AIX,[58][59]analogous to theregistryinMicrosoft Windows.[60]A good understanding of the ODM is essential for managing AIX systems.[61]
Data managed in ODM is stored and maintained asobjectswith associatedattributes.[62]Interaction with ODM is possible viaapplication programming interface(API)libraryfor programs, andcommand-line utilitiessuch asodmshow,odmget,odmadd,odmchangeandodmdeleteforshell scriptsand users.SMITand its associated AIX commands can also be used to query and modify information in the ODM.[63]ODM is stored on disk usingBerkeley DBfiles.[64]
Example of information stored in the ODM database are:
|
https://en.wikipedia.org/wiki/IBM_AIX
|
Computer security(alsocybersecurity,digital security, orinformation technology (IT) security) is a subdiscipline within the field ofinformation security. It consists of the protection ofcomputer software,systemsandnetworksfromthreatsthat can lead to unauthorized information disclosure, theft or damage tohardware,software, ordata, as well as from the disruption or misdirection of theservicesthey provide.[1][2]
The significance of the field stems from the expanded reliance oncomputer systems, theInternet,[3]andwireless network standards. Its importance is further amplified by the growth ofsmart devices, includingsmartphones,televisions, and the various devices that constitute theInternet of things(IoT). Cybersecurity has emerged as one of the most significant new challenges facing the contemporary world, due to both the complexity ofinformation systemsand the societies they support. Security is particularly crucial for systems that govern large-scale systems with far-reaching physical effects, such aspower distribution,elections, andfinance.[4][5]
Although many aspects of computer security involve digital security, such as electronicpasswordsandencryption,physical securitymeasures such asmetal locksare still used to prevent unauthorized tampering. IT security is not a perfect subset ofinformation security, and therefore does not align completely with thesecurity convergenceschema.
A vulnerability refers to a flaw in the structure, execution, functioning, or internal oversight of a computer or system that compromises its security. Most of the vulnerabilities that have been discovered are documented in theCommon Vulnerabilities and Exposures(CVE) database.[6]Anexploitablevulnerability is one for which at least one workingattackorexploitexists.[7]Actors maliciously seeking vulnerabilities are known asthreats. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited usingautomated toolsor customized scripts.[8][9]
Various people or parties are vulnerable to cyber attacks; however, different groups are likely to experience different types of attacks more than others.[10]
In April 2023, theUnited KingdomDepartment for Science, Innovation & Technology released a report on cyber attacks over the previous 12 months.[11]They surveyed 2,263 UK businesses, 1,174 UK registered charities, and 554 education institutions. The research found that "32% of businesses and 24% of charities overall recall any breaches or attacks from the last 12 months." These figures were much higher for "medium businesses (59%), large businesses (69%), and high-income charities with £500,000 or more in annual income (56%)."[11]Yet, although medium or large businesses are more often the victims, since larger companies have generally improved their security over the last decade,small and midsize businesses(SMBs) have also become increasingly vulnerable as they often "do not have advanced tools to defend the business."[10]SMBs are most likely to be affected by malware, ransomware, phishing,man-in-the-middle attacks, and Denial-of Service (DoS) Attacks.[10]
Normal internet users are most likely to be affected by untargeted cyberattacks.[12]These are where attackers indiscriminately target as many devices, services, or users as possible. They do this using techniques that take advantage of the openness of the Internet. These strategies mostly includephishing,ransomware,water holingand scanning.[12]
To secure a computer system, it is important to understand the attacks that can be made against it, and thesethreatscan typically be classified into one of the following categories:
Abackdoorin a computer system, acryptosystem, or analgorithmis any secret method of bypassing normalauthenticationor security controls. These weaknesses may exist for many reasons, including original design or poor configuration.[13]Due to the nature of backdoors, they are of greater concern to companies and databases as opposed to individuals.
Backdoors may be added by an authorized party to allow some legitimate access or by an attacker for malicious reasons.Criminalsoften usemalwareto install backdoors, giving them remote administrative access to a system.[14]Once they have access, cybercriminals can "modify files, steal personal information, install unwanted software, and even take control of the entire computer."[14]
Backdoors can be difficult to detect, as they often remain hidden within the source code or system firmware, and discovering them typically requires access to that code or intimate knowledge of theoperating systemof the computer.
Denial-of-service attacks(DoS) are designed to make a machine or network resource unavailable to its intended users.[15]Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a singleIP addresscan be blocked by adding a new firewall rule, many forms ofdistributed denial-of-service(DDoS) attacks are possible, where the attack comes from a large number of points. In this case, defending against these attacks is much more difficult. Such attacks can originate from thezombie computersof abotnetor from a range of other possible techniques, includingdistributed reflective denial-of-service(DRDoS), where innocent systems are fooled into sending traffic to the victim.[15]With such attacks, the amplification factor makes the attack easier for the attacker because they have to use little bandwidth themselves. To understand why attackers may carry out these attacks, see the 'attacker motivation' section.
A direct-access attack is when an unauthorized user (an attacker) gains physical access to a computer, most likely to directly copy data from it or steal information.[16]Attackers may also compromise security by making operating system modifications, installingsoftware worms,keyloggers,covert listening devicesor using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from aCD-ROMor other bootable media.Disk encryptionand theTrusted Platform Modulestandard are designed to prevent these attacks.
Direct service attackers are related in concept todirect memory attackswhich allow an attacker to gain direct access to a computer's memory.[17]The attacks "take advantage of a feature of modern computers that allows certain devices, such as external hard drives, graphics cards, or network cards, to access the computer's memory directly."[17]
Eavesdroppingis the act of surreptitiously listening to a private computer conversation (communication), usually between hosts on a network. It typically occurs when a user connects to a network where traffic is not secured or encrypted and sends sensitive business data to a colleague, which, when listened to by an attacker, could be exploited.[18]Data transmitted across anopen networkallows an attacker to exploit a vulnerability and intercept it via various methods.
Unlikemalware, direct-access attacks, or other forms of cyber attacks, eavesdropping attacks are unlikely to negatively affect the performance of networks or devices, making them difficult to notice.[18]In fact, "the attacker does not need to have any ongoing connection to the software at all. The attacker can insert the software onto a compromised device, perhaps by direct insertion or perhaps by a virus or other malware, and then come back some time later to retrieve any data that is found or trigger the software to send the data at some determined time."[19]
Using avirtual private network(VPN), which encrypts data between two points, is one of the most common forms of protection against eavesdropping. Using the best form of encryption possible for wireless networks is best practice, as well as usingHTTPSinstead of an unencryptedHTTP.[20]
Programs such asCarnivoreandNarusInSighthave been used by theFederal Bureau of Investigation(FBI) and NSA to eavesdrop on the systems ofinternet service providers. Even machines that operate as a closed system (i.e., with no contact with the outside world) can be eavesdropped upon by monitoring the faintelectromagnetictransmissions generated by the hardware.TEMPESTis a specification by the NSA referring to these attacks.
Malicious software (malware) is any software code or computer program "intentionally written to harm a computer system or its users."[21]Once present on a computer, it can leak sensitive details such as personal information, business information and passwords, can give control of the system to the attacker, and can corrupt or delete data permanently.[22][23]
Man-in-the-middle attacks(MITM) involve a malicious attacker trying to intercept, surveil or modify communications between two parties by spoofing one or both party's identities and injecting themselves in-between.[24]Types of MITM attacks include:
Surfacing in 2017, a new class of multi-vector,[25]polymorphic[26]cyber threats combine several types of attacks and change form to avoid cybersecurity controls as they spread.
Multi-vector polymorphic attacks, as the name describes, are both multi-vectored and polymorphic.[27]Firstly, they are a singular attack that involves multiple methods of attack. In this sense, they are "multi-vectored (i.e. the attack can use multiple means of propagation such as via the Web, email and applications." However, they are also multi-staged, meaning that "they can infiltrate networks and move laterally inside the network."[27]The attacks can be polymorphic, meaning that the cyberattacks used such as viruses, worms or trojans "constantly change ("morph") making it nearly impossible to detect them using signature-based defences."[27]
Phishingis the attempt of acquiring sensitive information such as usernames, passwords, and credit card details directly from users by deceiving the users.[28]Phishing is typically carried out byemail spoofing,instant messaging,text message, or on aphonecall. They often direct users to enter details at a fake website whoselook and feelare almost identical to the legitimate one.[29]The fake website often asks for personal information, such as login details and passwords. This information can then be used to gain access to the individual's real account on the real website.
Preying on a victim's trust, phishing can be classified as a form ofsocial engineering. Attackers can use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices[30]to individuals showing that they recently purchased music, apps, or others, and instructing them to click on a link if the purchases were not authorized. A more strategic type of phishing is spear-phishing which leverages personal or organization-specific details to make the attacker appear like a trusted source. Spear-phishing attacks target specific individuals, rather than the broad net cast by phishing attempts.[31]
Privilege escalationdescribes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level.[32]For example, a standard computer user may be able to exploit avulnerabilityin the system to gain access to restricted data; or even becomerootand have full unrestricted access to a system. The severity of attacks can range from attacks simply sending an unsolicited email to aransomware attackon large amounts of data. Privilege escalation usually starts withsocial engineeringtechniques, oftenphishing.[32]
Privilege escalation can be separated into two strategies, horizontal and vertical privilege escalation:
Any computational system affects its environment in some form. This effect it has on its environment can range from electromagnetic radiation, to residual effect on RAM cells which as a consequence make aCold boot attackpossible, to hardware implementation faults that allow for access or guessing of other values that normally should be inaccessible. In Side-channel attack scenarios, the attacker would gather such information about a system or network to guess its internal state and as a result access the information which is assumed by the victim to be secure. The target information in a side channel can be challenging to detect due to its low amplitude when combined with other signals[33]
Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords, card numbers, etc. or grant physical access by, for example, impersonating a senior executive, bank, a contractor, or a customer.[34]This generally involves exploiting people's trust, and relying on theircognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. One of the main techniques of social engineering arephishingattacks.
In early 2016, theFBIreported that suchbusiness email compromise(BEC) scams had cost US businesses more than $2 billion in about two years.[35]
In May 2016, theMilwaukee BucksNBAteam was the victim of this type of cyber scam with a perpetrator impersonating the team's presidentPeter Feigin, resulting in the handover of all the team's employees' 2015W-2tax forms.[36]
Spoofing is an act of pretending to be a valid entity through the falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. Spoofing is closely related tophishing.[37][38]There are several types of spoofing, including:
In 2018, the cybersecurity firmTrellixpublished research on the life-threatening risk of spoofing in the healthcare industry.[40]
Tamperingdescribes amalicious modificationor alteration of data. It is an intentional but unauthorized act resulting in the modification of a system, components of systems, its intended behavior, or data. So-calledEvil Maid attacksand security services planting ofsurveillancecapability into routers are examples.[41]
HTMLsmuggling allows an attacker tosmugglea malicious code inside a particular HTML or web page.[42]HTMLfiles can carry payloads concealed as benign, inert data in order to defeatcontent filters. These payloads can be reconstructed on the other side of the filter.[43]
When a target user opens the HTML, the malicious code is activated; the web browser thendecodesthe script, which then unleashes the malware onto the target's device.[42]
Employee behavior can have a big impact oninformation securityin organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness toward information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."[44]
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.[45]Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cybersecurity incidents involved internal actors within a company.[46]Research shows information security culture needs to be improved continuously. In "Information Security Culture from Analysis to Change", authors commented, "It's a never-ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.[47]
In computer security, acountermeasureis an action, device, procedure or technique that reduces a threat, a vulnerability, or anattackby eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.[48][49][50]
Some common countermeasures are listed in the following sections:
Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered a main feature.
The UK government's National Cyber Security Centre separates secure cyber design principles into five sections:[51]
These design principles of security by design can include some of the following techniques:
Security architecture can be defined as the "practice of designing computer systems to achieve security goals."[52]These goals have overlap with the principles of "security by design" explored above, including to "make initial compromise of the system difficult," and to "limit the impact of any compromise."[52]In practice, the role of a security architect would be to ensure the structure of a system reinforces the security of the system, and that new changes are safe and meet the security requirements of the organization.[53][54]
Similarly, Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:[55]
Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization.
A state of computer security is the conceptual ideal, attained by the use of three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
Today, computer security consists mainly of preventive measures, likefirewallsor anexit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as theInternet. They can be implemented as software running on the machine, hooking into thenetwork stack(or, in the case of mostUNIX-based operating systems such asLinux, built into the operating systemkernel) to provide real-time filtering and blocking.[56]Another implementation is a so-calledphysical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
Some organizations are turning tobig dataplatforms, such asApache Hadoop, to extend data accessibility andmachine learningto detectadvanced persistent threats.[58]
In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security.[59]To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.[60]
Vulnerability management is the cycle of identifying, fixing or mitigatingvulnerabilities,[61]especially in software andfirmware. Vulnerability management is integral to computer security andnetwork security.
Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities,[62]such as open ports, insecure software configuration, and susceptibility to malware. For these tools to be effective, they must be kept up to date with every update the vendor releases; such updates typically add checks for newly disclosed vulnerabilities.
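As a minimal illustration of one check such a scanner performs, the Python sketch below probes a handful of TCP ports on a placeholder host to see which accept connections; real scanners go much further, matching service versions against vulnerability databases, and probing systems without permission may be unlawful:

# Minimal sketch of the open-port check a vulnerability scanner might start with.
# The host and port list are placeholders; only scan systems you are authorized to test.
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port is open.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

if __name__ == "__main__":
    print(open_ports("127.0.0.1", [22, 80, 443, 8080]))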
Beyond vulnerability scanning, many organizations contract outside security auditors to run regularpenetration testsagainst their systems to identify vulnerabilities. In some sectors, this is a contractual requirement.[63]
The act of assessing and reducing vulnerabilities to cyber attacks is commonly referred to asinformation technology security assessments. They aim to assess systems for risk and to predict and test for their vulnerabilities. Whileformal verificationof the correctness of computer systems is possible,[64][65]it is not yet common. Operating systems formally verified includeseL4,[66]andSYSGO'sPikeOS[67][68]– but these make up a very small percentage of the market.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates and by hiring people with expertise in security. Large companies with significant threats can hire Security Operations Centre (SOC) Analysts. These are specialists in cyber defences, with their role ranging from "conducting threat analysis to investigating reports of any new issues and preparing and testing disaster recovery plans."[69]
Whilst no measures can completely guarantee the prevention of an attack, these measures can help mitigate the damage of possible attacks. The effects of data loss/damage can be also reduced by carefulbacking upandinsurance.
Outside of formal assessments, there are various methods of reducing vulnerabilities.Two factor authenticationis a method for mitigating unauthorized access to a system or sensitive information.[70]It requiressomething you know:a password or PIN, andsomething you have: a card, dongle, cellphone, or another piece of hardware. This increases security as an unauthorized person needs both of these to gain access.
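The one-time codes produced by phone apps or hardware tokens in such schemes are commonly time-based one-time passwords (TOTP, RFC 6238). The Python sketch below shows the underlying computation under common default parameters (a shared Base32 secret, 30-second steps, 6 digits, HMAC-SHA1); the secret shown is a placeholder, and the defaults are assumptions rather than a description of any specific product:

# Minimal TOTP sketch in the style of RFC 6238; 30-second steps, 6 digits and
# HMAC-SHA1 are common defaults and are assumed here. The secret is a placeholder.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server and the user's device share the secret; both compute the same code for
# the current time window, so possession of the device serves as the second factor.
print(totp("JBSWY3DPEHPK3PXP"))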
Protecting against social engineering and direct computer access (physical) attacks can only happen by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk by improving people's knowledge of how to protect themselves and by increasing people's awareness of threats.[71]However, even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
Inoculation, derived frominoculation theory, seeks to prevent social engineering and other fraudulent tricks and traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.[72]
Hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such asdongles,trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
One use of the term computer security refers to technology that is used to implement secure operating systems. Using secure operating systems is a good way of ensuring computer security. These are systems that have achieved certification from an external security-auditing organization; the most popular evaluation is the Common Criteria (CC).[86]
In software engineering,secure codingaims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems aresecure by design. Beyond this, formal verification aims to prove thecorrectnessof thealgorithmsunderlying a system;[87]important forcryptographic protocolsfor example.
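One frequently cited secure-coding practice is preventing SQL injection by passing untrusted input as query parameters rather than splicing it into SQL text. The sketch below, using Python's built-in sqlite3 module and an invented users table, contrasts the two approaches:

# Secure-coding illustration: parameterized queries vs. string concatenation.
# The "users" table and its contents are made up for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Unsafe: attacker-controlled input is spliced directly into the SQL text.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'").fetchall()

# Safer: the driver passes the value separately, so it is never parsed as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # returns rows despite the bogus name
print(safe)    # returns no rows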
Within computer systems, two of the mainsecurity modelscapable of enforcing privilege separation areaccess control lists(ACLs) androle-based access control(RBAC).
Anaccess-control list(ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
Role-based access control is an approach to restricting system access to authorized users,[88][89][90]used by the majority of enterprises with more than 500 employees,[91]and can implementmandatory access control(MAC) ordiscretionary access control(DAC).
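The core idea behind RBAC can be sketched in a few lines: permissions attach to roles, roles attach to users, and an access check succeeds only if one of the user's roles carries the requested permission. The role and permission names below are invented for illustration:

# Minimal RBAC sketch: users get roles, roles get permissions.
# Role and permission names here are invented for illustration.
ROLE_PERMISSIONS = {
    "auditor":  {"report:read"},
    "operator": {"report:read", "ticket:create"},
    "admin":    {"report:read", "ticket:create", "user:manage"},
}
USER_ROLES = {"dana": {"auditor"}, "lee": {"operator", "admin"}}

def is_authorized(user: str, permission: str) -> bool:
    # A request is allowed if any of the user's roles carries the permission.
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("dana", "user:manage"))   # False
print(is_authorized("lee", "user:manage"))    # True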
A further approach,capability-based securityhas been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is theE language.
The end-user is widely recognized as the weakest link in the security chain[92]and it is estimated that more than 90% of security incidents and breaches involve some kind of human error.[93][94]Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.[95]
As the human component of cyber risk is particularly relevant in determining the global cyber risk[96]an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential[97]in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.
The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers[98]to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks.
Related to end-user training,digital hygieneorcyber hygieneis a fundamental principle relating to information security and, as the analogy withpersonal hygieneshows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks.[99]Cyber hygiene should also not be mistaken forproactive cyber defence, a military term.[100]
The most common acts of digital hygiene can include updating malware protection, cloud back-ups, passwords, and ensuring restricted admin rights and network firewalls.[101]As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline[102]or education.[103]It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal or collective digital security. As such, these measures can be performed by laypeople, not just security experts.
Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the termcomputer viruswas coined almost simultaneously with the creation of the first working computer viruses,[104]the termcyber hygieneis a much later invention, perhaps as late as 2000[105]by Internet pioneerVint Cerf. It has since been adopted by theCongress[106]andSenateof the United States,[107]the FBI,[108]EUinstitutions[99]and heads of state.[100]
Responding to attemptedsecurity breachesis often very difficult for a variety of reasons, including:
Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatorysecurity breach notification laws.
The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there are an increasing number of systems at risk.
The computer systems of financial regulators and financial institutions like theU.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets forcybercriminalsinterested in manipulating markets and making illicit gains.[109]Websites and apps that accept or storecredit card numbers, brokerage accounts, andbank accountinformation are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on theblack market.[110]In-store payment systems andATMshave also been tampered with in order to gather customer account data andPINs.
TheUCLAInternet Report: Surveying the Digital Future (2000) found that the privacy of personal data created barriers to online sales and that more than nine out of 10 internet users were somewhat or very concerned aboutcredit cardsecurity.[111]
The most common web technologies for improving security between browsers and websites are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security). Together with identity management and authentication services and domain name services, they allow companies and consumers to engage in secure communications and commerce. Several versions of SSL and TLS are commonly used today in applications such as web browsing, e-mail, internet faxing, instant messaging, and VoIP (voice-over-IP). There are various interoperable implementations of these technologies, including at least one implementation that is open source. Open source allows anyone to view the application's source code, and look for and report vulnerabilities.
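As a small illustration of how an application consumes TLS, the following Python sketch (using the standard library ssl module and a placeholder host name) opens a connection that verifies the server's certificate and host name before any application data is exchanged:

# Sketch of a certificate-verified TLS connection using Python's standard library.
# "example.org" is just a placeholder host.
import socket, ssl

context = ssl.create_default_context()   # verifies the server certificate and host name
with socket.create_connection(("example.org", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.org") as tls_sock:
        print(tls_sock.version())                 # e.g. "TLSv1.3"
        print(tls_sock.getpeercert()["subject"])  # identity asserted by the certificate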
The credit card companiesVisaandMasterCardcooperated to develop the secureEMVchip which is embedded in credit cards. Further developments include theChip Authentication Programwhere banks give customers hand-held card readers to perform online secure transactions. Other developments in this arena include the development of technology such as Instant Issuance which has enabled shoppingmall kiosksacting on behalf of banks to issue on-the-spot credit cards to interested customers.
Computers control functions at many utilities, including coordination oftelecommunications, thepower grid,nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but theStuxnetworm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, theComputer Emergency Readiness Team, a division of theDepartment of Homeland Security, investigated 79 hacking incidents at energy companies.[112]
Theaviationindustry is very reliant on a series of complex systems which could be attacked.[113]A simple power outage at one airport can cause repercussions worldwide,[114]much of the system relies on radio transmissions which could be disrupted,[115]and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore.[116]There is also potential for attack from within an aircraft.[117]
Implementing fixes in aerospace systems poses a unique challenge because efficient air transportation is heavily affected by weight and volume. Improving security by adding physical devices to airplanes could increase their unloaded weight, and could potentially reduce cargo or passenger capacity.[118]
In Europe, with the (Pan-European Network Service)[119]and NewPENS,[120]and in the US with the NextGen program,[121]air navigation service providersare moving to create their own dedicated networks.
Many modern passports are nowbiometric passports, containing an embeddedmicrochipthat stores a digitized photograph and personal information such as name, gender, and date of birth. In addition, more countries[which?]are introducingfacial recognition technologyto reduceidentity-related fraud. The introduction of the ePassport has assisted border officials in verifying the identity of the passport holder, thus allowing for quick passenger processing.[122]Plans are under way in the US, theUK, andAustraliato introduce SmartGate kiosks with both retina andfingerprint recognitiontechnology.[123]The airline industry is moving from the use of traditional paper tickets towards the use ofelectronic tickets(e-tickets). These have been made possible by advances in online credit card transactions in partnership with the airlines. Long-distance bus companies[which?]are also switching over to e-ticketing transactions today.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity,air traffic controloutages, loss of aircraft, and even loss of life.
Desktop computers and laptops are commonly targeted to gather passwords or financial account information or to construct a botnet to attack another target.Smartphones,tablet computers,smart watches, and othermobile devicessuch asquantified selfdevices likeactivity trackershave sensors such as cameras, microphones, GPS receivers, compasses, andaccelerometerswhich could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.[124]
The increasing number ofhome automationdevices such as theNest thermostatare also potential targets.[124]
Today many healthcare providers andhealth insurancecompanies use the internet to provide enhanced products and services. Examples are the use oftele-healthto potentially offer better quality and access to healthcare, or fitness trackers to lower insurance premiums.[citation needed]Patient records are increasingly being placed on secure in-house networks, alleviating the need for extra storage space.[125]
Large corporations are common targets. In many cases attacks are aimed at financial gain throughidentity theftand involvedata breaches. Examples include the loss of millions of clients' credit card and financial details byHome Depot,[126]Staples,[127]Target Corporation,[128]andEquifax.[129]
Medical records have been targeted for use in general identity theft, health insurance fraud, and impersonating patients to obtain prescription drugs for recreational purposes or resale.[130]Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.[131]
Not all attacks are financially motivated, however: security firmHBGary Federalhad a serious series of attacks in 2011 fromhacktivistgroupAnonymousin retaliation for the firm's CEO claiming to have infiltrated their group,[132][133]andSony Pictureswashacked in 2014with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.[134][135]
Vehicles are increasingly computerized, with engine timing,cruise control,anti-lock brakes, seat belt tensioners, door locks,airbagsandadvanced driver-assistance systemson many models. Additionally,connected carsmay use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network.[136]Self-driving carsare expected to be even more complex. All of these systems carry some security risks, and such issues have gained wide attention.[137][138][139]
Simple examples of risk include a maliciouscompact discbeing used as an attack vector,[140]and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internalcontroller area network, the danger is much greater[136]– and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.[141][142]
Manufacturers are reacting in numerous ways, withTeslain 2016 pushing out some security fixesover the airinto its cars' computer systems.[143]In the area of autonomous vehicles, in September 2016 theUnited States Department of Transportationannounced some initial safety standards, and called for states to come up with uniform policies.[144][145][146]
Additionally, e-Drivers' licenses are being developed using the same technology. For example, Mexico's licensing authority (ICV) has used a smart card platform to issue the first e-Drivers' licenses to the city ofMonterrey, in the state ofNuevo León.[147]
Shipping companies[148]have adoptedRFID(Radio Frequency Identification) technology as an efficient, digitally secure,tracking device. Unlike abarcode, RFID can be read up to 20 feet away. RFID is used byFedEx[149]andUPS.[150]
Government and military computer systems are commonly attacked by activists[151][152][153]and foreign powers.[154][155][156][157]Local and regional government infrastructure, such as traffic light controls, police and intelligence agency communications, personnel records, and student records, is also targeted.[158]
TheFBI,CIA, andPentagon, all utilize secure controlled access technology for any of their buildings. However, the use of this form of technology is spreading into the entrepreneurial world. More and more companies are taking advantage of the development of digitally secure controlled access technology. GE's ACUVision, for example, offers a single panel platform for access control, alarm monitoring and digital recording.[159]
TheInternet of things(IoT) is the network of physical objects such as devices, vehicles, and buildings that areembeddedwithelectronics,software,sensors, andnetwork connectivitythat enables them to collect and exchange data.[160]Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.[161][162]
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,[163][164]it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat.[165]If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.[166]
An attack aimed at physical infrastructure or human lives is often called a cyber-kinetic attack. As IoT devices and appliances become more widespread, the prevalence and potential damage of cyber-kinetic attacks can increase substantially.
Medical deviceshave either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment[167]and implanted devices includingpacemakers[168]andinsulin pumps.[169]There are many reports of hospitals and hospital organizations getting hacked, includingransomwareattacks,[170][171][172][173]Windows XPexploits,[174][175]viruses,[176][177]and data breaches of sensitive data stored on hospital servers.[178][171][179][180]On 28 December 2016 the USFood and Drug Administrationreleased its recommendations for how medicaldevice manufacturersshould maintain the security of Internet-connected devices – but no structure for enforcement.[181][182]
In distributed generation systems, the risk of a cyber attack is real, according toDaily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility,Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."[183]
Perhaps the most widely known digitally secure telecommunication device is theSIM(Subscriber Identity Module) card, a device that is embedded in most of the world's cellular devices before any service can be obtained. The SIM card is just the beginning of this digitally secure environment.
The Smart Card Web Servers draft standard (SCWS) defines the interfaces to anHTTP serverin asmart card.[184]Tests are being conducted to secure OTA ("over-the-air") payment and credit card information from and to a mobile phone.
Combination SIM/DVD devices are being developed through Smart Video Card technology which embeds aDVD-compliantoptical discinto the card body of a regular SIM card.
Other telecommunication developments involving digital security includemobile signatures, which use the embedded SIM card to generate a legally bindingelectronic signature.
Serious financial damage has been caused bysecurity breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable tovirusand worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."[185]
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classicGordon-Loeb Modelanalyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., theexpected valueof the loss resulting from a cyber/informationsecurity breach).[186]
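One widely cited consequence of the Gordon-Loeb analysis is that the optimal amount to spend on protecting an information set never exceeds roughly 1/e (about 37%) of the expected loss. The arithmetic can be illustrated with invented figures; the probability and loss estimates below are placeholders, not data:

# Illustrative arithmetic only; the breach probability and loss figures are invented.
# The Gordon-Loeb result caps optimal security spending at about 1/e of the expected loss.
import math

breach_probability = 0.15        # assumed annual likelihood of a breach
loss_if_breached = 2_000_000     # assumed cost of a breach, in dollars

expected_loss = breach_probability * loss_if_breached
spending_cap = expected_loss / math.e   # ~36.8% of the expected loss

print(f"Expected loss:        ${expected_loss:,.0f}")
print(f"Upper bound on spend: ${spending_cap:,.0f}")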
As withphysical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers orvandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for theKGB, as recounted byClifford StollinThe Cuckoo's Egg.
Attackers' motivations vary across all types of attacks, from pleasure to political goals.[15]For example, hacktivists may target a company or organization that carries out activities they do not agree with, aiming to create bad publicity for it by crashing its website.
High capability hackers, often with larger backing or state sponsorship, may attack based on the demands of their financial backers. Such attacks tend to be more serious. An example was the 2015 Ukraine power grid hack, which reportedly combined spear-phishing, destruction of files, and denial-of-service attacks to carry out the full attack.[187][188]
Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas.[189]The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to greater risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between hacker motivations and those of nation state actors seeking to attack based on an ideological preference.[190]
A key aspect of threat modeling for any system is identifying the motivations behind potential attacks and the individuals or groups likely to carry them out. The level and detail of security measures will differ based on the specific system being protected. For instance, a home personal computer, a bank, and a classified military network each face distinct threats, despite using similar underlying technologies.[191]
Computer security incident managementis an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as adata breachor system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.[192]Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution.
There are four key components of a computer security incident response plan:
Some illustrative examples of different types of computer security breaches are given below.
In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internetcomputer worm.[194]The software was traced back to 23-year-oldCornell Universitygraduate studentRobert Tappan Morriswho said "he wanted to count how many machines were connected to the Internet".[194]
In 1994, over a hundred intrusions were made by unidentified crackers into theRome Laboratory, the US Air Force's main command and research facility. Usingtrojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks ofNational Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[195]
In early 2007, American apparel and home goods companyTJXannounced that it was the victim of anunauthorized computer systems intrusion[196]and that the hackers had accessed a system that stored data oncredit card,debit card,check, and merchandise return transactions.[197]
In 2010, the computer worm known asStuxnetreportedly ruined almost one-fifth of Iran'snuclear centrifuges.[198]It did so by disrupting industrialprogrammable logic controllers(PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program[199][200][201][202]– although neither has publicly admitted this.
In early 2013, documents provided byEdward Snowdenwere published byThe Washington PostandThe Guardian[203][204]exposing the massive scale ofNSAglobal surveillance. There were also indications that the NSA may have inserted a backdoor in aNISTstandard for encryption.[205]This standard was later withdrawn due to widespread criticism.[206]The NSA additionally were revealed to have tapped the links betweenGoogle's data centers.[207]
A Ukrainian hacker known asRescatorbroke intoTarget Corporationcomputers in 2013, stealing roughly 40 million credit cards,[208]and thenHome Depotcomputers in 2014, stealing between 53 and 56 million credit card numbers.[209]Warnings were delivered at both corporations, but ignored; physical security breaches usingself checkout machinesare believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existingantivirus softwarehad administrators responded to the warnings. The size of the thefts has resulted in major attention from state and Federal United States authorities and the investigation is ongoing.
In April 2015, theOffice of Personnel Managementdiscovered it had been hackedmore than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office.[210]The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States.[211]Data targeted in the breach includedpersonally identifiable informationsuch asSocial Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check.[212][213]It is believed the hack was perpetrated by Chinese hackers.[214]
In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently.[215]When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7GB and the second 20GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained in operation.
In May 2021, a ransomware attack took down the Colonial Pipeline, the largest fuel pipeline in the U.S., and led to shortages across the East Coast.[216]
International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals - and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece ofmalwareor form ofcyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute.[217][218]Provingattribution for cybercrimes and cyberattacksis also a major problem for all law enforcement agencies. "Computer virusesswitch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world."[217]The use of techniques such asdynamic DNS,fast fluxandbullet proof serversadd to the difficulty of investigation and enforcement.
The role of the government is to makeregulationsto force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the nationalpower-grid.[219]
The government's regulatory role incyberspaceis complicated. For some, cyberspace was seen as avirtual spacethat was to remain free of government intervention, as can be seen in many of today's libertarianblockchainandbitcoindiscussions.[220]
Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through."[221]On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.[222]
On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges tointernational peace. According to UN Secretary-GeneralAntónio Guterres, new technologies are too often used to violate rights.[223]
Many different teams and organizations exist, including:
On 14 April 2016, theEuropean Parliamentand theCouncil of the European Unionadopted theGeneral Data Protection Regulation(GDPR). The GDPR, which came into force on 25 May 2018, grants individuals within the European Union (EU) and the European Economic Area (EEA) the right to theprotection of personal data. The regulation requires that any entity that processes personal data incorporate data protection by design and by default. It also requires that certain organizations appoint a Data Protection Officer (DPO).
The IT security association TeleTrusT, an international competence network for IT security, has existed in Germany since June 1986.
Most countries have their own computer emergency response team to protect network security.
Since 2010, Canada has had a cybersecurity strategy.[229][230]This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure.[231]The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online.[230][231]There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.[232][233]
TheCanadian Cyber Incident Response Centre(CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond & recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors.[234]It posts regular cybersecurity bulletins[235]& operates an online reporting tool where individuals and organizations can report a cyber incident.[236]
To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations,[237]and launched the Cyber Security Cooperation Program.[238][239]They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.[240]
Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.[231]
The Australian federal government announced an $18.2 million investment to fortify the cybersecurity resilience of small and medium enterprises (SMEs) and enhance their capabilities in responding to cyber threats. This financial backing is an integral component of the 2023-2030 Australian Cyber Security Strategy. A substantial allocation of $7.2 million is earmarked for the establishment of a voluntary cyber health check program, enabling businesses to conduct a comprehensive and tailored self-assessment of their cybersecurity posture.
The health check serves as a diagnostic tool, enabling enterprises to gauge the robustness of their cybersecurity against Australia's cyber security regulations. It also gives them access to a repository of educational resources and materials, helping them build the skills needed for a stronger cybersecurity posture. The initiative was jointly announced by Minister for Cyber Security Clare O'Neil and Minister for Small Business Julie Collins.[241]
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.[242]
TheNational Cyber Security Policy 2013is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data".CERT- Inis the nodal agency which monitors the cyber threats in the country. The post ofNational Cyber Security Coordinatorhas also been created in thePrime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000 Update in 2013.[243]
Following cyberattacks in the first half of 2013, when the government, news media, television stations, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011,[244]and 2012, but Pyongyang denies the accusations.[245]
With the release of its National Cyber Strategy, the United States has its first fully formed cyber plan in 15 years.[246]In this policy, the US says it will: protect the country by keeping networks, systems, functions, and data safe; promote American prosperity by building a strong digital economy and encouraging strong domestic innovation; preserve peace and security by making it easier for the US, working with allies and partners, to deter and punish the malicious use of cyber tools; and expand American influence around the world to support the main ideas behind an open, safe, reliable, and compatible Internet.[247]
The new U.S. cyber strategy[248]seeks to allay some of those concerns by promoting responsible behavior incyberspace, urging nations to adhere to a set of norms, both through international law and voluntary standards. It also calls for specific measures to harden U.S. government networks from attacks, like the June 2015 intrusion into theU.S. Office of Personnel Management(OPM), which compromised the records of about 4.2 million current and former government employees. And the strategy calls for the U.S. to continue to name and shame bad cyber actors, calling them out publicly for attacks when possible, along with the use of economic sanctions and diplomatic pressure.[249]
The key legislation is the Computer Fraud and Abuse Act of 1986, codified at 18 U.S.C. § 1030. It prohibits unauthorized access or damage of protected computers as defined in 18 U.S.C. § 1030(e)(2). Although various other measures have been proposed,[250][251]none have succeeded.
In 2013,executive order13636Improving Critical Infrastructure Cybersecuritywas signed, which prompted the creation of theNIST Cybersecurity Framework.
In response to theColonial Pipeline ransomware attack[252]PresidentJoe Bidensigned Executive Order 14028[253]on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response.
TheGeneral Services Administration(GSA) has[when?]standardized thepenetration testservice as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
TheDepartment of Homeland Securityhas a dedicated division responsible for the response system,risk managementprogram and requirements for cybersecurity in the United States called theNational Cyber Security Division.[254][255]The division is home to US-CERT operations and the National Cyber Alert System.[255]The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.[256]
The third priority of the FBI is to: "Protect the United States against cyber-based attacks and high-technology crimes",[257]and they, along with theNational White Collar Crime Center(NW3C), and theBureau of Justice Assistance(BJA) are part of the multi-agency task force, TheInternet Crime Complaint Center, also known as IC3.[258]
In addition to its own specific duties, the FBI participates alongside non-profit organizations such asInfraGard.[259][260]
TheComputer Crime and Intellectual Property Section(CCIPS) operates in theUnited States Department of Justice Criminal Division. The CCIPS is in charge of investigatingcomputer crimeandintellectual propertycrime and is specialized in the search and seizure ofdigital evidencein computers andnetworks.[261]In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."[262]
TheUnited States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners."[263]It has no role in the protection of civilian networks.[264][265]
The U.S.Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.[266]
TheFood and Drug Administrationhas issued guidance for medical devices,[267]and theNational Highway Traffic Safety Administration[268]is concerned with automotive cybersecurity. After being criticized by theGovernment Accountability Office,[269]and following successful attacks on airports and claimed attacks on airplanes, theFederal Aviation Administrationhas devoted funding to securing systems on board the planes of private manufacturers, and theAircraft Communications Addressing and Reporting System.[270]Concerns have also been raised about the futureNext Generation Air Transportation System.[271]
The US Department of Defense (DoD) issued DoD Directive 8570 in 2004, supplemented by DoD Directive 8140, requiring all DoD employees and all DoD contract personnel involved in information assurance roles and activities to earn and maintain various industry Information Technology (IT) certifications. The aim is to ensure that all DoD personnel involved in network infrastructure defense have minimum levels of IT-industry-recognized knowledge, skills and abilities (KSA). Andersson and Reimers (2019) report these certifications range from CompTIA's A+ and Security+ through (ISC)²'s CISSP, etc.[272]
Computer emergency response teamis a name given to expert groups that handle computer security incidents. In the US, two distinct organizations exist, although they do work closely together.
In the context ofU.S. nuclear power plants, theU.S. Nuclear Regulatory Commission (NRC)outlines cybersecurity requirements under10 CFR Part 73, specifically in §73.54.[274]
TheNuclear Energy Institute's NEI 08-09 document,Cyber Security Plan for Nuclear Power Reactors,[275]outlines a comprehensive framework forcybersecurityin thenuclear power industry. Drafted with input from theU.S. NRC, this guideline is instrumental in aidinglicenseesto comply with theCode of Federal Regulations (CFR), which mandates robust protection of digital computers and equipment and communications systems at nuclear power plants against cyber threats.[276]
There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton fromThe Christian Science Monitorwrote in a 2015 article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.[277]
This has led to new terms such ascyberwarfareandcyberterrorism. TheUnited States Cyber Commandwas created in 2009[278]and many other countrieshave similar forces.
There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.[279][280][281]
Cybersecurity is a fast-growing field of IT concerned with reducing organizations' risk of hacks or data breaches.[282]According to research from the Enterprise Strategy Group, 46% of organizations said that they had a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015.[283]Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail.[284]However, the use of the term cybersecurity is more prevalent in government job descriptions.[285]
Typical cybersecurity job titles and descriptions include:[286]
Student programs are also available for people interested in beginning a career in cybersecurity.[290][291]Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts.[292][293]A wide range of certified courses are also available.[294]
In the United Kingdom, a nationwide set of cybersecurity forums, known as theU.K Cyber Security Forum, were established supported by the Government's cybersecurity strategy[295]in order to encourage start-ups and innovation and to address the skills gap[296]identified by theU.K Government.
In Singapore, theCyber Security Agencyhas issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by theInfocomm Media Development Authority(IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.[297]
The following terms used with regards to computer security are explained below:
Since theInternet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject in both our professional and personal lives. Cybersecurity and cyber threats have been consistently present for the last 60 years of technological change. In the 1970s and 1980s, computer security was mainly limited toacademiauntil the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of organized attacks such asdistributed denial of service.[301]This led to the formalization of cybersecurity as a professional discipline.[302]
TheApril 1967 sessionorganized byWillis Wareat theSpring Joint Computer Conference, and the later publication of theWare Report, were foundational moments in the history of the field of computer security.[303]Ware's work straddled the intersection of material, cultural, political, and social concerns.[303]
A 1977NISTpublication[304]introduced theCIA triadof confidentiality, integrity, and availability as a clear and simple way to describe key security goals.[305]While still relevant, many more elaborate frameworks have since been proposed.[306][307]
However, in the 1970s and 1980s, there were no grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. More often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not use them for financial gain. By the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.[308]
One of the earliest examples of an attack on a computer network was thecomputer wormCreeperwritten by Bob Thomas atBBN, which propagated through theARPANETin 1971.[309]The program was purely experimental in nature and carried no malicious payload. A later program,Reaper, was created byRay Tomlinsonin 1972 and used to destroy Creeper.[citation needed]
Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage.[310]The group hacked into American defense contractors, universities, and military base networks and sold gathered information to the Soviet KGB. The group was led byMarkus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990.
In 1988, one of the first computer worms, called theMorris worm, was distributed via the Internet. It gained significant mainstream media attention.[311]
Netscape started developing the protocol SSL shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, the first widely popular web browser, in 1993.[312][313]Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities.[312]However, in 1995, Netscape launched Version 2.0.[314]
TheNational Security Agency(NSA) is responsible for theprotectionof U.S. information systems and also for collecting foreign intelligence.[315]The agency analyzes commonly used software and system configurations to find security flaws, which it can use for offensive purposes against competitors of the United States.[316]
NSA contractors created and sold click-and-shoot attack tools to US agencies and close allies, but eventually, the tools made their way to foreign adversaries.[317]In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea.[citation needed]NSA employees and contractors have been recruited at high salaries by adversaries, anxious to compete in cyberwarfare.[citation needed]In 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in its own cyberwarfare capability, which it began using against the United States.[316]
|
https://en.wikipedia.org/wiki/Cyber_self-defense
|
Phrase structure rulesare a type ofrewrite ruleused to describe a given language'ssyntaxand are closely associated with the early stages oftransformational grammar, proposed byNoam Chomskyin 1957.[1]They are used to break down a naturallanguagesentence into its constituent parts, also known assyntactic categories, including both lexical categories (parts of speech) andphrasalcategories. A grammar that uses phrase structure rules is a type ofphrase structure grammar. Phrase structure rules as they are commonly employed operate according to theconstituencyrelation, and a grammar that employs phrase structure rules is therefore aconstituency grammar; as such, it stands in contrast todependency grammars, which are based on thedependencyrelation.[2]
Phrase structure rules are usually of the following form:

A → B C

meaning that the constituent A is separated into the two subconstituents B and C. Some examples for English are as follows:

S → NP VP
NP → (Det) N
N → (AP) N (PP)
The first rule reads: an S (sentence) consists of an NP (noun phrase) followed by a VP (verb phrase). The second rule reads: a noun phrase consists of an optional Det (determiner) followed by an N (noun). The third rule means that an N (noun) can be preceded by an optional AP (adjective phrase) and followed by an optional PP (prepositional phrase). The round brackets indicate optional constituents.
Beginning with the sentence symbol S, applying the phrase structure rules successively, and finally applying replacement rules to substitute actual words for the abstract symbols, it is possible to generate many proper sentences of English (or whichever language the rules are specified for). If the rules are correct, then any sentence produced in this way ought to be grammatically (syntactically) correct. It is also to be expected that the rules will generate syntactically correct but semantically nonsensical sentences, such as the following well-known example:

Colorless green ideas sleep furiously
This sentence was constructed byNoam Chomskyas an illustration that phrase structure rules are capable of generating syntactically correct but semantically incorrect sentences. Phrase structure rules break sentences down into their constituent parts. These constituents are often represented astree structures(dendrograms). The tree for Chomsky's sentence can be rendered as follows:
A constituent is any word or combination of words that is dominated by a single node. Thus each individual word is a constituent. Further, the subject NPColorless green ideas, the minor NPgreen ideas, and the VPsleep furiouslyare constituents. Phrase structure rules and the tree structures that are associated with them are a form ofimmediate constituent analysis.
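The top-down generation process described above can be sketched in a few lines of Python; the tiny rule set and lexicon below are invented for illustration and, in the spirit of Chomsky's example, the output is syntactically well-formed but often nonsensical:

# Toy top-down generation from phrase structure rules; the rule set and lexicon
# are invented for illustration, not a full grammar of English.
import random

RULES = {                 # category -> one expansion (a real grammar lists alternatives)
    "S":  ["NP", "VP"],
    "NP": ["AP", "N"],
    "AP": ["A", "A"],
    "VP": ["V", "Adv"],
}
LEXICON = {
    "A":   ["colorless", "green", "furious"],
    "N":   ["ideas", "sentences"],
    "V":   ["sleep", "dream"],
    "Adv": ["furiously", "quietly"],
}

def generate(symbol: str) -> list[str]:
    if symbol in LEXICON:                          # lexical category: substitute a word
        return [random.choice(LEXICON[symbol])]
    words = []
    for child in RULES[symbol]:                    # phrasal category: expand top-down
        words.extend(generate(child))
    return words

print(" ".join(generate("S")))   # e.g. "colorless green ideas sleep furiously"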
Intransformational grammar, systems of phrase structure rules are supplemented by transformation rules, which act on an existing syntactic structure to produce a new one (performing such operations asnegation,passivization, etc.). These transformations are not strictly required for generation, as the sentences they produce could be generated by a suitably expanded system of phrase structure rules alone, but transformations provide greater economy and enable significant relations between sentences to be reflected in the grammar.
An important aspect of phrase structure rules is that they view sentence structure from the top down. The category on the left of the arrow is a greater constituent and the immediate constituents to the right of the arrow are lesser constituents. Constituents are successively broken down into their parts as one moves down a list of phrase structure rules for a given sentence. This top-down view of sentence structure stands in contrast to much work done in modern theoretical syntax. InMinimalism[3]for instance, sentence structure is generated from the bottom up. The operationMergemerges smaller constituents to create greater constituents until the greatest constituent (i.e. the sentence) is reached. In this regard, theoretical syntax abandoned phrase structure rules long ago, although their importance forcomputational linguisticsseems to remain intact.
Phrase structure rules as they are commonly employed result in a view of sentence structure that isconstituency-based. Thus, grammars that employ phrase structure rules areconstituency grammars(=phrase structure grammars), as opposed todependency grammars,[4]which view sentence structure asdependency-based. What this means is that for phrase structure rules to be applicable at all, one has to pursue a constituency-based understanding of sentence structure. The constituency relation is a one-to-one-or-more correspondence. For every word in a sentence, there is at least one node in the syntactic structure that corresponds to that word. The dependency relation, in contrast, is a one-to-one relation; for every word in the sentence, there is exactly one node in the syntactic structure that corresponds to that word. The distinction is illustrated with the following trees:
The constituency tree on the left could be generated by phrase structure rules. The sentence S is broken down into smaller and smaller constituent parts. The dependency tree on the right could not, in contrast, be generated by phrase structure rules (at least not as they are commonly interpreted).
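The contrast between the two correspondences can also be shown with simple data structures. The following minimal sketch, using the illustrative two-word clauseideas sleep, shows that a constituency analysis introduces phrasal nodes above the words (one-to-one-or-more), while a dependency analysis keeps exactly one node per word (one-to-one).

```python
# Constituency: each phrasal node (S, NP, VP) is a node in addition to the words.
constituency = ("S", [("NP", ["ideas"]), ("VP", ["sleep"])])

# Dependency: exactly one node per word; structure lives in head -> dependent arcs.
dependency_nodes = ["sleep", "ideas"]
dependency_arcs = [("sleep", "ideas")]        # the verb governs its subject

def count_nodes(node):
    """Count nodes in a (label, children) constituency tree."""
    label, children = node if isinstance(node, tuple) else (node, [])
    return 1 + sum(count_nodes(child) for child in children)

print(count_nodes(constituency))   # 5 nodes for 2 words
print(len(dependency_nodes))       # 2 nodes for 2 words
```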
A number of representational phrase structure theories of grammar never acknowledged phrase structure rules, but have pursued instead an understanding of sentence structure in terms of the notion ofschema. Here phrase structures are not derived from rules that combine words, but from the specification or instantiation of syntactic schemata or configurations, often expressing some kind of semantic content independently of the specific words that appear in them. This approach is essentially equivalent to a system of phrase structure rules combined with a noncompositionalsemantictheory, since grammatical formalisms based on rewriting rules are generally equivalent in power to those based on substitution into schemata.
So in this type of approach, instead of being derived from the application of a number of phrase structure rules, the sentenceColorless green ideas sleep furiouslywould be generated by filling the words into the slots of a schema having the following structure:
And which would express the following conceptual content:
Though they are non-compositional, such models are monotonic. This approach is highly developed withinConstruction grammar[5]and has had some influence inHead-Driven Phrase Structure Grammar[6]andlexical functional grammar,[7]the latter two clearly qualifying as phrase structure grammars.
|
https://en.wikipedia.org/wiki/Phrase_structure_rules
|
Inprogrammingandsoftware development,fuzzingorfuzz testingis an automatedsoftware testingtechnique that involves providing invalid, unexpected, orrandom dataas inputs to acomputer program. The program is then monitored for exceptions such ascrashes, failing built-in codeassertions, or potentialmemory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, such as in afile formatorprotocoland distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to exposecorner casesthat have not been properly dealt with.
For the purpose of security, input that crosses atrust boundaryis often the most useful.[1]For example, it is more important to fuzz code that handles a file uploaded by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
The term "fuzz" originates from a 1988 class project[2]in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at theUniversity of Wisconsin, whose results were subsequently published in 1990.[3][4]To fuzz test aUNIXutility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available.[5]This early fuzzing would now be called black box, generational, unstructured (dumb or "classic") fuzzing.
According to Prof. Barton Miller, "In the process of writing the project description, I needed to give this kind of testing a name. I wanted a name that would evoke the feeling of random, unstructured data. After trying out several ideas, I settled on the term fuzz."[4]
A key contribution of this early work was a simple (almost simplistic) oracle: a program failed its test if it crashed or hung under the random input and was considered to have passed otherwise. While test oracles can be challenging to construct, the oracle for this early fuzz testing was simple and universally applicable.
In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure for security-critical components of theChromium web browser.[6]Security researchers can upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer.
In September 2014,Shellshock[7]was disclosed as a family ofsecurity bugsin the widely usedUNIXBashshell; most vulnerabilities of Shellshock were found using the fuzzerAFL.[8](Many Internet-facing services, such as some web server deployments, use Bash to process certain requests, allowing an attacker to cause vulnerable versions of Bash toexecute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.[9])
In April 2015, Hanno Böck showed how the fuzzer AFL could have found the 2014 Heartbleed vulnerability.[10][11](TheHeartbleedvulnerability was disclosed in April 2014. It is a serious vulnerability that allows adversaries to decipher otherwiseencrypted communication. The vulnerability was accidentally introduced intoOpenSSLwhich implementsTLSand is used by the majority of the servers on the internet.Shodanreported 238,000 machines still vulnerable in April 2016;[12]200,000 in January 2017.[13])
In August 2016, theDefense Advanced Research Projects Agency(DARPA) held the finals of the firstCyber Grand Challenge, a fully automatedcapture-the-flagcompetition that lasted 11 hours.[14]The objective was to develop automatic defense systems that can discover,exploit, andcorrectsoftware flaws inreal-time. Fuzzing was used as an effective offense strategy to discover flaws in the software of the opponents. It showed tremendous potential in the automation of vulnerability detection. The winner was a system called "Mayhem"[15]developed by the team ForAllSecure led byDavid Brumley.
In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security critical bugs in software.[16]
In December 2016, Google announced OSS-Fuzz which allows for continuous fuzzing of several security-critical open-source projects.[17]
At Black Hat 2018, Christopher Domas demonstrated the use of fuzzing to expose the existence of a hiddenRISCcore in a processor.[18]This core was able to bypass existing security checks to executeRing 0commands from Ring 3.
In September 2020,MicrosoftreleasedOneFuzz, aself-hostedfuzzing-as-a-service platform that automates the detection ofsoftware bugs.[19]It supportsWindowsand Linux.[20]It has been archived three years later on November 1st, 2023.[21]
Testing programs with random inputs dates back to the 1950s when data was still stored onpunched cards.[22]Programmers would use punched cards that were pulled from the trash or card decks of random numbers as input to computer programs. If an execution revealed undesired behavior, abughad been detected.
The execution of random inputs is also calledrandom testingormonkey testing.
In 1981, Duran and Ntafos formally investigated the effectiveness of testing a program with random inputs.[23][24]While random testing had been widely perceived to be the worst means of testing a program, the authors could show that it is a cost-effective alternative to more systematic testing techniques.
In 1983,Steve Cappsat Apple developed "The Monkey",[25]a tool that would generate random inputs forclassic Mac OSapplications, such asMacPaint.[26]The figurative "monkey" refers to theinfinite monkey theoremwhich states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will eventually type out the entire works of Shakespeare. In the case of testing, the monkey would write the particular sequence of inputs that would trigger a crash.
In 1991, the crashme tool was released, which was intended to test the robustness of Unix andUnix-likeoperating systemsby randomly executing system calls with randomly chosen parameters.[27]
A fuzzer can be categorized in several ways:[28][1]
A mutation-based fuzzer leverages an existing corpus of seed inputs during fuzzing. It generates inputs by modifying (or rathermutating) the provided seeds.[29]For example, when fuzzing the image librarylibpng, the user would provide a set of validPNGimage files as seeds while a mutation-based fuzzer would modify these seeds to produce semi-valid variants of each seed. The corpus of seed files may contain thousands of potentially similar inputs. Automated seed selection (or test suite reduction) allows users to pick the best seeds in order to maximize the total number of bugs found during a fuzz campaign.[30]
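As a rough illustration, the following Python sketch shows the kind of byte-level mutations a mutation-based fuzzer might apply to a seed; the particular mix of operations and their probabilities are illustrative assumptions and do not reproduce any specific tool's mutation schedule.

```python
import random

INTERESTING = [0x00, 0x7F, 0x80, 0xFF]   # values that often trigger edge cases

def mutate(seed, n_mutations=8):
    """Apply a few random byte-level mutations to a seed input."""
    data = bytearray(seed)
    for _ in range(n_mutations):
        if not data:
            break
        pos = random.randrange(len(data))
        op = random.random()
        if op < 0.4:                               # flip a random bit
            data[pos] ^= 1 << random.randrange(8)
        elif op < 0.8:                             # substitute an "interesting" byte
            data[pos] = random.choice(INTERESTING)
        else:                                      # delete a small block
            del data[pos:pos + random.randint(1, 4)]
    return bytes(data)

seed = b"\x89PNG\r\n\x1a\n" + b"\x00" * 32         # stand-in for a real PNG seed file
candidates = [mutate(seed) for _ in range(5)]      # semi-valid variants of the seed
```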
A generation-based fuzzer generates inputs from scratch. For instance, a smart generation-based fuzzer[31]takes the input model that was provided by the user to generate new inputs. Unlike mutation-based fuzzers, a generation-based fuzzer does not depend on the existence or quality of a corpus of seed inputs.
Some fuzzers have the capability to do both, to generate inputs from scratch and to generate inputs by mutation of existing seeds.[32]
Typically, fuzzers are used to generate inputs for programs that take structured inputs, such as afile, a sequence of keyboard or mouseevents, or a sequence ofmessages. This structure distinguishes valid input that is accepted and processed by the program from invalid input that is quickly rejected by the program. What constitutes a valid input may be explicitly specified in an input model. Examples of input models areformal grammars,file formats,GUI-models, andnetwork protocols. Even items not normally considered as input can be fuzzed, such as the contents ofdatabases,shared memory,environment variablesor the precise interleaving ofthreads. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected from theparserand "invalid enough" so that they might stresscorner casesand exercise interesting program behaviours.
A smart (model-based,[32]grammar-based,[31][33]or protocol-based[34]) fuzzer leverages the input model to generate a greater proportion of valid inputs. For instance, if the input can be modelled as anabstract syntax tree, then a smart mutation-based fuzzer[33]would employ randomtransformationsto move complete subtrees from one node to another. If the input can be modelled by aformal grammar, a smart generation-based fuzzer[31]would instantiate theproduction rulesto generate inputs that are valid with respect to the grammar. However, generally the input model must be explicitly provided, which is difficult to do when the model is proprietary, unknown, or very complex. If a large corpus of valid and invalid inputs is available, agrammar inductiontechnique, such asAngluin's L* algorithm, would be able to generate an input model.[35][36]
A dumb fuzzer[37][38]does not require the input model and can thus be employed to fuzz a wider variety of programs. For instance,AFLis a dumb mutation-based fuzzer that modifies a seed file byflipping random bits, by substituting random bytes with "interesting" values, and by moving or deleting blocks of data. However, a dumb fuzzer might generate a lower proportion of valid inputs and stress theparsercode rather than the main components of a program. The disadvantage of dumb fuzzers can be illustrated by means of the construction of a validchecksumfor acyclic redundancy check(CRC). A CRC is anerror-detecting codethat ensures that theintegrityof the data contained in the input file is preserved duringtransmission. A checksum is computed over the input data and recorded in the file. When the program processes the received file and the recorded checksum does not match the re-computed checksum, then the file is rejected as invalid. Now, a fuzzer that is unaware of the CRC is unlikely to generate the correct checksum. However, there are attempts to identify and re-compute a potential checksum in the mutated input, once a dumb mutation-based fuzzer has modified the protected data.[39]
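The checksum problem can be made concrete with a small sketch. The file layout here (a payload followed by a four-byte big-endian CRC-32) is a hypothetical format; the point is only that a dumb mutation invalidates the checksum, and a post-mutation fix-up step restores it so the input is not rejected outright.

```python
import struct
import zlib

def with_crc32(payload):
    """Append a big-endian CRC-32 of the payload (hypothetical file layout)."""
    return payload + struct.pack(">I", zlib.crc32(payload) & 0xFFFFFFFF)

original = with_crc32(b"hello world")

# A dumb mutation flips a byte in the protected data, breaking the checksum...
mutated_payload = bytearray(original[:-4])
mutated_payload[0] ^= 0xFF

# ...so the fuzzer re-computes and re-appends the CRC before sending the input.
repaired = with_crc32(bytes(mutated_payload))
```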
Typically, a fuzzer is considered more effective if it achieves a higher degree ofcode coverage. The rationale is, if a fuzzer does not exercise certain structural elements in the program, then it is also not able to revealbugsthat are hiding in these elements. Some program elements are considered more critical than others. For instance, a division operator might cause adivision by zeroerror, or asystem callmay crash the program.
Ablack-boxfuzzer[37][33]treats the program as ablack boxand is unaware of internal program structure. For instance, arandom testingtool that generates inputs at random is considered a blackbox fuzzer. Hence, a blackbox fuzzer can execute several hundred inputs per second, can be easily parallelized, and can scale to programs of arbitrary size. However, blackbox fuzzers may only scratch the surface and expose "shallow" bugs. Hence, there are attempts to develop blackbox fuzzers that can incrementally learn about the internal structure (and behavior) of a program during fuzzing by observing the program's output given an input. For instance, LearnLib employsactive learningto generate anautomatonthat represents the behavior of a web application.
Awhite-boxfuzzer[38][32]leveragesprogram analysisto systematically increasecode coverageor to reach certain critical program locations. For instance, SAGE[40]leveragessymbolic executionto systematically explore different paths in the program (a technique known asconcolic execution).
If theprogram's specificationis available, a whitebox fuzzer might leverage techniques frommodel-based testingto generate inputs and check the program outputs against the program specification.
A whitebox fuzzer can be very effective at exposing bugs that hide deep in the program. However, the time used for analysis (of the program or its specification) can become prohibitive. If the whitebox fuzzer takes too long to generate each input, a blackbox fuzzer will be more efficient.[41]Hence, there are attempts to combine the efficiency of blackbox fuzzers and the effectiveness of whitebox fuzzers.[42]
Agray-boxfuzzer leveragesinstrumentationrather than program analysis to glean information about the program. For instance, AFL and libFuzzer utilize lightweight instrumentation to tracebasic blocktransitions exercised by an input. This leads to a reasonable performance overhead but informs the fuzzer about the increase in code coverage during fuzzing, which makes gray-box fuzzers extremely efficient vulnerability detection tools.[43]
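A minimal sketch of this feedback loop follows. The instrumented target is simulated by a hypothetical function that reports which branch transitions an input exercised, and the keep-a-mutant-only-if-it-adds-coverage rule is a simplification of what AFL and libFuzzer actually do.

```python
import random

def run_with_coverage(data):
    """Stand-in for an instrumented target: returns the set of basic-block
    transitions ("edges") exercised by the input. Hypothetical target logic."""
    edges = {"entry"}
    if data.startswith(b"FUZZ"):
        edges.add("entry->header_ok")
        if len(data) > 8 and data[4] == 0x7F:
            edges.add("header_ok->deep_branch")
    return edges

def graybox_fuzz(seed, iterations=10_000):
    """Keep any mutant that exercises a new edge as an additional seed."""
    corpus, seen = [seed], run_with_coverage(seed)
    for _ in range(iterations):
        parent = bytearray(random.choice(corpus))
        parent[random.randrange(len(parent))] = random.randrange(256)
        child = bytes(parent)
        coverage = run_with_coverage(child)
        if not coverage <= seen:      # new coverage -> promote to the corpus
            seen |= coverage
            corpus.append(child)
    return corpus

print(len(graybox_fuzz(b"FUZZ\x00\x00\x00\x00\x00")))
```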
Fuzzing is used mostly as an automated technique to exposevulnerabilitiesin security-critical programs that might beexploitedwith malicious intent.[6][16][17]More generally, fuzzing is used to demonstrate the presence of bugs rather than their absence. Running a fuzzing campaign for several weeks without finding a bug does not prove the program correct.[44]After all, the program may still fail for an input that has not been executed, yet; executing a program for all inputs is prohibitively expensive. If the objective is to prove a program correct for all inputs, aformal specificationmust exist and techniques fromformal methodsmust be used.
In order to expose bugs, a fuzzer must be able to distinguish expected (normal) from unexpected (buggy) program behavior. However, a machine cannot always distinguish a bug from a feature. In automatedsoftware testing, this is also called thetest oracleproblem.[45][46]
Typically, a fuzzer distinguishes between crashing and non-crashing inputs, a simple and objective measure that can be applied in the absence ofspecifications.Crashescan be easily identified and might indicate potential vulnerabilities (e.g.,denial of serviceorarbitrary code execution). However, the absence of a crash does not indicate the absence of a vulnerability. For instance, a program written inCmay or may not crash when an input causes abuffer overflow. Rather the program's behavior isundefined.
To make a fuzzer more sensitive to failures other than crashes, sanitizers can be used to inject assertions that crash the program when a failure is detected.[47][48]Different sanitizers target different kinds of bugs, for example memory-safety violations, uses of uninitialized memory, data races, undefined behaviour, and memory leaks.
Fuzzing can also be used to detect "differential" bugs if areference implementationis available. For automatedregression testing,[49]the generated inputs are executed on twoversionsof the same program. For automateddifferential testing,[50]the generated inputs are executed on two implementations of the same program (e.g.,lighttpdandhttpdare both implementations of a web server). If the two variants produce different output for the same input, then one may be buggy and should be examined more closely.
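The following minimal sketch illustrates the differential idea with two hypothetical implementations of the same parsing function; any input on which they disagree flags at least one of them as suspect. The function names and the injected discrepancy are inventions for the example.

```python
import random
import string

def parse_scheme_v1(s):
    """Hypothetical reference implementation: lower-cases the URL scheme."""
    return s.split(":", 1)[0].lower()

def parse_scheme_v2(s):
    """Hypothetical second implementation: forgets to lower-case."""
    return s.split(":", 1)[0]

for _ in range(1000):
    candidate = "".join(random.choice(string.ascii_letters + ":") for _ in range(12))
    if parse_scheme_v1(candidate) != parse_scheme_v2(candidate):
        print("disagreement on:", repr(candidate))   # one of the two is buggy
        break
```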
Static program analysisanalyzes a program without actually executing it. This might lead tofalse positiveswhere the tool reports problems with the program that do not actually exist. Fuzzing in combination withdynamic program analysiscan be used to try to generate an input that actually witnesses the reported problem.[51]
Modern web browsers undergo extensive fuzzing. TheChromiumcode ofGoogle Chromeis continuously fuzzed by the Chrome Security Team with 15,000 cores.[52]ForMicrosoft Edge [Legacy]andInternet Explorer,Microsoftperformed fuzzed testing with 670 machine-years during product development, generating more than 400 billionDOMmanipulations from 1 billion HTML files.[53][52]
A fuzzer produces a large number of inputs in a relatively short time. For instance, in 2016 the Google OSS-fuzz project produced around 4trillioninputs a week.[17]Hence, many fuzzers provide atoolchainthat automates otherwise manual and tedious tasks which follow the automated generation of failure-inducing inputs.
Automated bug triage is used to group a large number of failure-inducing inputs byroot causeand to prioritize each individual bug by severity. A fuzzer produces a large number of inputs, and many of the failure-inducing ones may effectively expose the samesoftware bug. Only some of these bugs aresecurity-criticaland should bepatchedwith higher priority. For instance theCERT Coordination Centerprovides the Linux triage tools which group crashing inputs by the producedstack traceand lists each group according to their probability to beexploitable.[54]The Microsoft Security Research Centre (MSEC) developed the "!exploitable" tool which first creates ahashfor a crashing input to determine its uniqueness and then assigns an exploitability rating (such as "exploitable", "probably exploitable", "probably not exploitable", or "unknown").[55]
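A minimal sketch of bucketing crashes by a hash over the top stack frames is shown below; the frame names, the three-frame window, and the data are illustrative assumptions, and real triage tools apply much richer heuristics before rating exploitability.

```python
import hashlib
from collections import defaultdict

def bucket_key(stack_trace, top_frames=3):
    """Hash the top frames so crashes with the same root cause collide."""
    return hashlib.sha1("|".join(stack_trace[:top_frames]).encode()).hexdigest()[:12]

# Hypothetical (failing input, stack trace) pairs produced by a fuzzing run.
crashes = [
    (b"input-1", ["png_read_chunk", "png_parse", "main"]),
    (b"input-2", ["png_read_chunk", "png_parse", "main"]),   # same root cause
    (b"input-3", ["free_buffer", "cleanup", "main"]),
]

buckets = defaultdict(list)
for failing_input, trace in crashes:
    buckets[bucket_key(trace)].append(failing_input)

print(len(buckets))   # 2 distinct buckets for 3 crashing inputs
```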
Previously unreported, triaged bugs might be automaticallyreportedto abug tracking system. For instance, OSS-Fuzz runs large-scale, long-running fuzzing campaigns for several security-critical software projects where each previously unreported, distinct bug is reported directly to a bug tracker.[17]The OSS-Fuzz bug tracker automatically informs themaintainerof the vulnerable software and checks in regular intervals whether the bug has been fixed in the most recentrevisionusing the uploaded minimized failure-inducing input.
Automated input minimization (or test case reduction) is an automateddebuggingtechnique to isolate that part of the failure-inducing input that is actually inducing the failure.[56][57]If the failure-inducing input is large and mostly malformed, it might be difficult for a developer to understand what exactly is causing the bug. Given the failure-inducing input, an automated minimization tool would remove as many input bytes as possible while still reproducing the original bug. For instance,Delta Debuggingis an automated input minimization technique that employs an extendedbinary search algorithmto find such a minimal input.[58]
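The following is a minimal sketch in the spirit of delta debugging: it repeatedly tries to drop chunks of the failing input while a caller-supplied predicate confirms that the failure still reproduces. Real ddmin manages granularity and complements more carefully; the predicate and the sample input here are hypothetical.

```python
def minimize(data, still_fails):
    """Greedily remove chunks of `data` while `still_fails(candidate)` holds."""
    chunk = len(data) // 2 or 1
    while chunk >= 1:
        i, shrunk = 0, False
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]
            if candidate and still_fails(candidate):
                data, shrunk = candidate, True    # keep the smaller failing input
            else:
                i += chunk                        # this chunk is needed; move on
        if not shrunk:
            chunk //= 2                           # refine the granularity
    return data

# Usage with a hypothetical bug that triggers whenever b"BUG" appears:
failing_input = b"xxxxBUGyyyyzzzz"
print(minimize(failing_input, lambda d: b"BUG" in d))   # -> b"BUG"
```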
The following is a list of fuzzers described as "popular", "widely used", or similar in the academic literature.[59][60]
|
https://en.wikipedia.org/wiki/Fuzzing
|
Rigetti Computing, Inc.is aBerkeley, California-based developer of superconducting quantumintegrated circuitsused forquantum computers. Rigetti also develops a cloud platform called Forest that enables programmers to write quantum algorithms.[2]
Rigetti Computing was founded in 2013 byChad Rigetti, a physicist with a background in quantum computing fromIBMwho studied underMichel Devoret.[2][3]The company emerged from startup incubatorY Combinatorin 2014 as a so-called "spaceshot" company.[4][5]Later that year, Rigetti also participated in The Alchemist Accelerator, a venture capital programme.[5]
By February 2016, Rigetti created its firstquantum processor, a three-qubitchip made using aluminum circuits on a silicon wafer.[6]That same year, Rigetti raisedSeries Afunding of US$24 million in a round led byAndreessen Horowitz. In November, the company secured Series B funding of $40 million in a round led by investment firm Vy Capital, along with additional funding fromAndreessen Horowitzand other investors. Y Combinator also participated in both rounds.[5]
By spring of 2017, Rigetti had advanced to testing eight-qubit quantum computers.[3]In June, the company announced the release of Forest 1.0, a quantum computing platform designed to enable developers to create quantum algorithms.[2]
In October 2021, Rigetti announced plans to go public via aSPAC merger, with an estimated valuation of around US$1.5 billion.[7][8]This deal was expected to raise an additional US$458 million, bringing the total funding to US$658 million.[7]The funds were intended to accelerate the company's growth, including scaling its quantum processors from 80 qubits to 1,000 qubits by 2024, and to 4,000 by 2026.[9]The SPAC deal closed on 2 March 2022, and Rigetti began trading on the NASDAQ under the ticker symbol RGTI.[10]
In December 2022, Subodh Kulkarni became president and CEO of the company.[11]
In July 2023 Rigetti launched a single-chip 84qubitquantum processorthat can scale to even larger systems.[12]
Rigetti Computing is a full-stack quantum computing company, a term that indicates that the company designs and fabricates quantum chips, integrates them with a controlling architecture, and develops software for programmers to use to build algorithms for the chips.[13]
The company hosts a cloud computing platform called Forest, which gives developers access to quantum processors so they can write quantum algorithms for testing purposes. The computing platform is based on a custom instruction language the company developed calledQuil, which stands for Quantum Instruction Language. Quil facilitates hybrid quantum/classical computing, and programs can be built and executed using open sourcePythontools.[13][14]As of June 2017, the platform allows coders to write quantum algorithms for a simulation of a quantum chip with 36 qubits.[2]
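A minimal sketch of this hybrid workflow with the open-source pyQuil tools might look as follows. It prepares and samples a Bell state on a simulated two-qubit backend; exact class and method names vary between pyQuil releases, so this should be read as an illustration of the programming model rather than as the canonical Forest API.

```python
from pyquil import Program, get_qc
from pyquil.gates import H, CNOT, MEASURE

program = Program()
readout = program.declare("ro", "BIT", 2)   # classical registers for results
program += H(0)                             # superposition on qubit 0
program += CNOT(0, 1)                       # entangle qubits 0 and 1
program += MEASURE(0, readout[0])
program += MEASURE(1, readout[1])
program.wrap_in_numshots_loop(100)          # repeat the circuit 100 times

qc = get_qc("2q-qvm")                       # a simulated 2-qubit backend (QVM)
results = qc.run(qc.compile(program))       # classical compile step, then execute
print(results)                              # result format depends on pyQuil version
```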
The company operates a rapid prototyping fabrication ("fab") lab called Fab-1, designed to quickly create integrated circuits. Lab engineers design and generate experimental designs for 3D-integrated quantum circuits for qubit-based quantum hardware.[13]
The company was recognized in 2016 byX-PrizefounderPeter Diamandisas being one of the three leaders in the quantum computing space, along with IBM andGoogle.[15]MIT Technology Reviewnamed the company one of the 50 smartest companies of 2017.[16]
Rigetti Computing is headquartered in Berkeley, California, where it hosts developmental systems and cooling equipment.[15]The company also operates its Fab-1 manufacturing facility in nearby Fremont.[2]
|
https://en.wikipedia.org/wiki/Rigetti_Computing
|
Incombinatorialmathematics, alarge setofpositive integers {\displaystyle A=\{a_{1},a_{2},a_{3},\dots \}} is one such that theinfinite sumof the reciprocals {\displaystyle {\tfrac {1}{a_{1}}}+{\tfrac {1}{a_{2}}}+{\tfrac {1}{a_{3}}}+\cdots } diverges. Asmall setis any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges.
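Two standard examples make the distinction concrete: the set of all positive integers is large, since the harmonic series {\displaystyle \textstyle \sum _{n=1}^{\infty }{\frac {1}{n}}} diverges, whereas the set of perfect squares is small, since {\displaystyle \textstyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}={\frac {\pi ^{2}}{6}}} converges (the Basel problem). It is also a classical result, due to Euler, that the primes form a large set.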
Large sets appear in theMüntz–Szász theoremand in theErdős conjecture on arithmetic progressions.
Paul Erdősconjecturedthat all large sets contain arbitrarily longarithmetic progressions. He offered a prize of $3000 for a proof, more than for any of hisother conjectures, and joked that this prize offer violated the minimum wage law.[1]The question is still open.
It is not known how to identify whether a given set is large or small in general. As a result, there are many sets which are not known to be either large or small.
|
https://en.wikipedia.org/wiki/Large_set_(combinatorics)
|
Asocial networkis asocial structureconsisting of a set ofsocialactors (such asindividualsor organizations), networks ofdyadicties, and othersocial interactionsbetween actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures.[1]The study of these structures usessocial network analysisto identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks.
Social networks and the analysis of them is an inherentlyinterdisciplinaryacademic field which emerged fromsocial psychology,sociology,statistics, andgraph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations".[2]Jacob Morenois credited with developing the firstsociogramsin the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s and theories and methods of social networks became pervasive in thesocial and behavioral sciencesby the 1980s.[1][3]Social network analysisis now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with othercomplex networks, it forms part of the nascent field ofnetwork science.[4][5]
The social network is atheoreticalconstructuseful in thesocial sciencesto study relationships between individuals,groups,organizations, or even entiresocieties(social units, seedifferentiation). The term is used to describe asocial structuredetermined by suchinteractions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. Anaxiomof the social network approach to understandingsocial interactionis that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is thatindividual agencyis often ignored[6]although this may not be the case in practice (seeagent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations,network analyticsare useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited toanthropology,biology,communication studies,economics,geography,information science,organizational studies,social psychology,sociology, andsociolinguistics.
In the late 1890s, bothÉmile DurkheimandFerdinand Tönniesforeshadowed the idea of social networks in their theories and research ofsocial groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7]Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8]Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups.[9]
Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11]Inpsychology, in the 1930s,Jacob L. Morenobegan systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (seesociometry). Inanthropology, the foundation for social network theory is the theoretical andethnographicwork ofBronislaw Malinowski,[12]Alfred Radcliffe-Brown,[13][14]andClaude Lévi-Strauss.[15]A group of social anthropologists associated withMax Gluckmanand theManchester School, includingJohn A. Barnes,[16]J. Clyde MitchellandElizabeth Bott Spillius,[17][18]often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6]Concomitantly, British anthropologistS. F. Nadelcodified a theory of social structure that was influential in later network analysis.[19]Insociology, the early (1930s) work ofTalcott Parsonsset the stage for taking a relational approach to understanding social structure.[20][21]Later, drawing upon Parsons' theory, the work of sociologistPeter Blauprovides a strong impetus for analyzing the relational ties of social units with his work onsocial exchange theory.[22][23][24]
By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologistHarrison Whiteand his students at theHarvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time wereCharles Tilly, who focused on networks in political and community sociology and social movements, andStanley Milgram, who developed the "six degrees of separation" thesis.[25]Mark Granovetter[26]andBarry Wellman[27]are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30]
Beginning in the late 1990s, social network analysis experienced work by sociologists, political scientists, and physicists such asDuncan J. Watts,Albert-László Barabási,Peter Bearman,Nicholas A. Christakis,James H. Fowler, and others, developing and applying new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks.
In general, social networks areself-organizing,emergent, andcomplex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33]These patterns become more apparent as network size increases. However, a global network analysis[34]of, for example, allinterpersonal relationshipsin the world is not feasible and is likely to contain so muchinformationas to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis.[35][36]The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Althoughlevels of analysisare not necessarilymutually exclusive, there are three general levels into which networks may fall:micro-level,meso-level, andmacro-level.
At the micro-level, social network research typically begins with an individual,snowballingas social relationships are traced, or may begin with a small group of individuals in a particular social context.
Dyadic level: Adyadis a social relationship between two individuals. Network research on dyads may concentrate onstructureof the relationship (e.g. multiplexity, strength),social equality, and tendencies towardreciprocity/mutuality.
Triadic level: Add one individual to a dyad, and you have atriad. Research at this level may concentrate on factors such asbalanceandtransitivity, as well associal equalityand tendencies towardreciprocity/mutuality.[35]In thebalance theoryofFritz Heiderthe triad is the key to social dynamics. The discord in a rivalrouslove triangleis an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory ofsigned graphs.
Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego." Egonetwork analysis focuses on network characteristics, such as size, relationship strength, density,centrality,prestigeand roles such asisolates, liaisons, andbridges.[37]Such analyses are most commonly used in the fields ofpsychologyorsocial psychology,ethnographickinshipanalysis or othergenealogicalstudies of relationships between individuals. A minimal computational sketch of several of these measures appears after this list.
Subset level:Subsetlevels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus ondistanceand reachability,cliques,cohesivesubgroups, or othergroup actionsorbehavior.[38]
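For example, several of the micro-level measures mentioned above can be computed directly with the networkx Python library; in the minimal sketch below the ego network and tie names are purely illustrative.

```python
import networkx as nx

# A toy ego network: the ego plus three alters, with one tie among the alters.
g = nx.Graph()
g.add_edges_from([
    ("ego", "alice"), ("ego", "bob"), ("ego", "carol"),
    ("alice", "bob"),
])

print("network size (degree of ego):", g.degree("ego"))        # 3 alters
print("density:", round(nx.density(g), 2))                      # realized / possible ties
print("degree centrality of ego:", nx.degree_centrality(g)["ego"])
```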
In general, meso-level theories begin with apopulationsize that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.[39]
Organizations: Formalorganizationsaresocial groupsthat distribute tasks for a collectivegoal.[40]Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms offormalorinformalrelationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures.[40]Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.[41]
Randomly distributed networks:Exponential random graph modelsof social networks became state-of-the-art methods of social network analysis in the 1980s. This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including generaldegree-based structural effects as well asreciprocityandtransitivity, and at the node-level,homophilyandattribute-based activity and popularity effects, as derived from explicit hypotheses aboutdependenciesamong network ties.Parametersare given in terms of the prevalence of smallsubgraphconfigurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.[42]
Scale-free networks: Ascale-free networkis anetworkwhosedegree distributionfollows apower law, at leastasymptotically. Innetwork theorya scale-free ideal network is arandom networkwith adegree distributionthat unravels the size distribution of social groups.[43]Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them, however, in general, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness ofverticeswith adegreethat greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is theclustering coefficientdistribution, which decreases as the node degree increases. This distribution also follows apower law.[44]TheBarabásimodel of network evolution is an example of a scale-free network.
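As a rough illustration, a preferential-attachment network can be generated with networkx's built-in Barabási–Albert generator and its heavy-tailed degree distribution inspected; the parameters below are arbitrary choices for the sketch.

```python
import networkx as nx
from collections import Counter

g = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)   # grow by preferential attachment
degrees = [d for _, d in g.degree()]
histogram = Counter(degrees)

print("maximum degree (a 'hub'):", max(degrees))
print("share of nodes at the minimum degree:",
      histogram[min(degrees)] / g.number_of_nodes())
```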
Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such aseconomicor otherresourcetransferinteractions over a largepopulation.
Large-scale networks:Large-scale networkis a term somewhat synonymous with "macro-level." It is primarily used insocialandbehavioralsciences, and ineconomics. Originally, the term was used extensively in thecomputer sciences(seelarge-scale network mapping).
Complex networks: Most larger social networks display features ofsocial complexity, which involves substantial non-trivial features ofnetwork topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see,complexity science,dynamical systemandchaos theory), as dobiological, andtechnological networks. Suchcomplex networkfeatures include a heavy tail in thedegree distribution, a highclustering coefficient,assortativityor disassortativity among vertices,community structure(seestochastic block model), andhierarchical structure. In the case ofagency-directednetworks these features also includereciprocity, triad significance profile (TSP, seenetwork motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such aslatticesandrandom graphs, do not show these features.[45]
Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these areGraph theory,Balance theory, Social comparison theory, and more recently, theSocial identity approach.[46]
Few complete theories have been produced from social network analysis. Two that have arestructural role theoryandheterophily theory.
The basis of Heterophily Theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties".[47]
In the context of networks,social capitalexists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections.[48]Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters.[49]When two separate clusters possess non-redundant information, there is said to be a structural hole between them.[49]Thus, a network that bridgesstructural holeswill provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes.[49]
Networks rich in structural holes are a form of social capital in that they offerinformationbenefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters.[49]For example, inbusiness networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory ofweak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction.[50]
Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist.[51][52]Other work examines how network grouping of artists can affect an individual artist's auction performance.[53]An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career.
In J.A. Barnes' day, a "community" referred to a specific geographic location and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed throughtelecommunicationsdevices andsocial network services. Such devices and services require extensive and ongoing maintenance and analysis, often usingnetwork sciencemethods.Community developmentstudies, today, also make extensive use of such methods.
Complex networksrequire methods specific to modelling and interpretingsocial complexityandcomplex adaptive systems, including techniques ofdynamic network analysis.
Mechanisms such asDual-phase evolutionexplain how temporal changes in connectivity contribute to the formation of structure in social networks.
The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants incollective actionssuch asprotests; promotion of peaceful behavior,social norms, andpublic goodswithincommunitiesthrough networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats.[54]
Incriminologyandurban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength.[55]
Diffusion of ideas and innovationsstudies focus on the spread and use of ideas from one actor to another or onecultureand another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., byNicholas Christakisand collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages,[56][57]Indian slums,[58]or in the lab.[59]Still other experiments have documented the experimental induction of social contagion of voting behavior,[60]emotions,[61]risk perception,[62]and commercial products.[63]
Indemography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users.) For example, respondent driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents.[64][65]
The field ofsociologyfocuses almost entirely on networks of outcomes of social interactions. More narrowly,economic sociologyconsiders behavioral interactions of individuals and groups throughsocial capitaland social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.[66]
Analysis of social networks is increasingly incorporated intohealth care analytics, not only inepidemiologicalstudies but also in models ofpatient communicationand education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations andsystems.[67]
Human ecologyis aninterdisciplinaryandtransdisciplinarystudy of the relationship betweenhumansand theirnatural,social, andbuilt environments. The scientific philosophy of human ecology has a diffuse history with connections togeography,sociology,psychology,anthropology,zoology, and naturalecology.[68][69]
In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo,[70]De Nooy,[71]Senekal,[72]andLotker,[73]to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings ofEven-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped usingvisualizationfrom SNA.
Research studies offormalorinformal organizationrelationships,organizational communication,economics,economic sociology, and otherresourcetransfers. Social networks have also been used to examine how organizations interact with each other, characterizing the manyinformal connectionsthat link executives together, as well as associations and connections between individual employees at different organizations.[74]Many organizational social network studies focus onteams.[75]Withinteamnetwork studies, research assesses, for example, the predictors and outcomes ofcentralityand power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affectorganizational commitment,[76]organizational identification,[37]interpersonal citizenship behaviour.[77]
Social capitalis a form ofeconomicandcultural capitalin which social networks are central,transactionsare marked byreciprocity,trust, andcooperation, andmarketagentsproducegoods and servicesnot mainly for themselves, but for acommon good.Social capitalis split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations.[78]This dimension is highly connected to the relational dimension which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties which is mainly illustrated by the level of trust accorded to the network of organizations.[78]The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions.[78]
Social capitalis a sociological concept about the value ofsocial relationsand the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.[79][80][81]In a dynamic framework, higher activity in a network feeds into higher social capital which itself encourages more activity.[79][82]
This research cluster focuses on brand image and the effectiveness of promotional strategies, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces many commercial applications, as the main goal of any study is to understandconsumer behaviourand drive sales.
In manyorganizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities.[48]Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economistJohn Stuart Mill, writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress."[83]Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This in turn, helps an individual's career development and advancement.
A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking.[84]In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms.[85]By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts.
There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations.[86]However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted.[48]Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career.
Computer networkscombined with social networking software produce a new medium for social interaction. A relationship over a computerizedsocial networking servicecan be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged. In acomputer-mediated communicationcontext, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise ofelectronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world.[87]Social network analysismethods have become essential to examining these types of computer mediated communication.
In addition, the sheer size and the volatile nature ofsocial mediahas given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data.[88]
Based on the pattern ofhomophily, ties between people are most likely to occur between nodes that are most similar to each other, or within neighbourhoodsegregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social Networks can both be used to simulate the process of homophily but it can also serve as a measure of level of exposure of different groups to each other within a current social network of individuals in a certain area.[89]
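One simple way to quantify homophily in practice is the attribute assortativity coefficient: values near +1 indicate that ties form almost exclusively within groups, while negative values indicate ties that mostly cross group boundaries. The following minimal sketch uses networkx on a toy network whose "group" attribute is an illustrative assumption.

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([(0, 1), (1, 2), (3, 4), (4, 5), (2, 3)])
nx.set_node_attributes(
    g, {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}, name="group")

# Mostly within-group ties with a single bridging tie -> a positive coefficient.
print(nx.attribute_assortativity_coefficient(g, "group"))
```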
|
https://en.wikipedia.org/wiki/Social_networking
|
Classified informationis confidential material that a government deems to besensitive information, which must be protected from unauthorized disclosure and requires special handling and dissemination controls. Access is restricted bylawor regulation to particular groups of individuals with the necessarysecurity clearanceand aneed to know.
A formalsecurity clearanceis required to view or handle classified material. The clearance process requires a satisfactory background investigation. Documents and other information must be properly marked by the author with one of several (hierarchical) levels of sensitivity—e.g. Confidential (C), Secret (S), and Top Secret (TS). All classified documents require designation markings, usually located on the cover sheet and in the header and footer of each page. The choice of level is based on an impact assessment; governments have their own criteria, including how to determine the classification of an information asset and rules on how to protect information classified at each level. This process often includes security clearances for personnel handling the information. Mishandling of the material can incur criminal penalties.
Somecorporationsand non-government organizations also assign levels of protection to their private information, either from a desire to protecttrade secrets, or because of laws and regulations governing various matters such aspersonal privacy, sealed legal proceedings and the timing of financial information releases.
With the passage of time much classified information can become less sensitive, and may be declassified and made public. Since the late twentieth century there has beenfreedom of information legislationin some countries, whereby the public is deemed to have the right to all information that is not considered to be damaging if released. Sometimes documents are released with information still considered confidential obscured (redacted).
The question exists among some political science and legal experts whether the definition of classified ought to be information that would cause injury to the cause of justice, human rights, etc., rather than information that would cause injury to the national interest; to distinguish when classifying information is in the collective best interest of a just society, or merely the best interest of a society acting unjustly to protect its people, government, or administrative officials from legitimate recourses consistent with a fair and justsocial contract.
The purpose of classification is to protect information. Higher classifications protect information that might endangernational security. Classification formalises what constitutes a "state secret" and accords different levels of protection based on the expected damage the information might cause in the wrong hands.
However, classified information is frequently "leaked" to reporters by officials for political purposes. Several U.S. presidents have leaked sensitive information to influence public opinion.[2][3]
Former government intelligence officials are usually able to retain their security clearance, but it is a privilege not a right, with the President being the grantor.[4]
Although the classification systems vary from country to country, most have levels corresponding to the following British definitions (from the highest level to lowest).
Top Secret is the highest level of classified information.[5] Information is often further compartmented, so that access requires a specific code word in addition to a Top Secret clearance; this restricts especially important information to a smaller group.[6] Such material would cause "exceptionally grave damage" to national security if made publicly available.[7] Prior to 1942, the United Kingdom and other members of the British Empire used Most Secret, but this was later changed to match the United States' category name of Top Secret in order to simplify Allied interoperability. The unauthorized disclosure of Top Secret (TS) information is expected to cause exceptionally grave damage to national security.
The Washington Postreported in an investigation entitled "Top Secret America" that, as of 2010, "An estimated 854,000 people ... hold top-secret security clearances" in the United States.[8]
It is desired that no document be released which refers to experiments with humans and might have adverse effect on public opinion or result in legal suits. Documents covering such work field should be classified "secret".
Secretmaterial would cause "serious damage" to national security if it were publicly available.[11]
In the United States, operational "Secret" information can be marked with an additional "LimDis", to limit distribution.
Confidentialmaterial would cause "damage" or be prejudicial to national security if publicly available.[12]
Restricted material would cause "undesirable effects" if publicly available. Some countries do not have such a classification in public sectors, such as commercial industries. Such a level is also known as "Private Information".
Official (equivalent to U.S. DOD classification Controlled Unclassified Information or CUI) material forms the generality of government business, public service delivery and commercial activity. This includes a diverse range of information, of varying sensitivities, and with differing consequences resulting from compromise or loss. Official information must be secured against a threat model that is broadly similar to that faced by a large private company.
The Official Sensitive classification replaced the Restricted classification in April 2014 in the UK; Official indicates the previously used Unclassified marking.[13]
Unclassified is technically not a classification level, though it is a feature of some classification schemes, used for government documents that do not merit a particular classification or that have been declassified. This is because the information is low-impact, and therefore does not require any special protection, such as vetting of personnel.
A plethora of pseudo-classifications exist under this category.[citation needed]
Clearance is a general classification that comprises a variety of rules controlling the level of permission required to view some classified information, and how it must be stored, transmitted, and destroyed. Additionally, access is restricted on a "need to know" basis. Simply possessing a clearance does not automatically authorize the individual to view all material classified at that level or below that level. The individual must present a legitimate "need to know" in addition to the proper level of clearance.
In addition to the general risk-based classification levels, additional compartmented constraints on access exist, such as (in the U.S.) Special Intelligence (SI), which protects intelligence sources and methods, No Foreign dissemination (NoForn), which restricts dissemination to U.S. nationals, and Originator Controlled dissemination (OrCon), which ensures that the originator can track possessors of the information. Information in these compartments is usually marked with specific keywords in addition to the classification level.
Government information about nuclear weapons often has an additional marking to show it contains such information (CNWDI).
When a government agency shares information with an agency or group of another country's government, they will generally employ a special classification scheme that both parties have previously agreed to honour.
For example, the marking Atomal is applied to U.S. Restricted Data or Formerly Restricted Data and United Kingdom Atomic information that has been released to NATO. Atomal information is marked COSMIC Top Secret Atomal (CTSA), NATO Secret Atomal (NSAT), or NATO Confidential Atomal (NCA). BALK and BOHEMIA are also used.
For example, sensitive information shared amongst NATO allies has four levels of security classification, from most to least classified:[14][15]
A special case exists with regard to NATO Unclassified (NU) information. Documents with this marking are NATO property (copyright) and must not be made public without NATO permission.
COSMIC is an acronym for "Control of Secret Material in an International Command".[17]
Most countries employ some sort of classification system for certain government information. For example, inCanada, information that the U.S. would classify SBU (Sensitive but Unclassified) is called "protected" and further subcategorised into levels A, B, and C.
On 19 July 2011, the National Security (NS) classification marking scheme and the Non-National Security (NNS) classification marking scheme in Australia were unified into one structure.
As of 2018, the policy detailing how Australian government entities handle classified information is defined in the Protective Security Policy Framework (PSPF). The PSPF is published by the Attorney-General's Department and covers security governance, information security, personnel security, and physical security. A security classification can be applied to the information itself or to an asset that holds information, e.g., a USB drive or laptop.[23]
The Australian Government uses four security classifications: OFFICIAL: Sensitive, PROTECTED, SECRET and TOP SECRET. The relevant security classification is based on the likely damage resulting from compromise of the information's confidentiality.
All other information from business operations and services requires a routine level of protection and is treated as OFFICIAL. Information that does not form part of official duty is treated as UNOFFICIAL.
OFFICIAL and UNOFFICIAL are not security classifications and are not mandatory markings.
Caveats are a warning that the information has special protections in addition to those indicated by the security classification of PROTECTED or higher (or in the case of the NATIONAL CABINET caveat, OFFICIAL: Sensitive or higher). Australia has four caveats:
Codewords are primarily used within the national security community. Each codeword identifies a special need-to-knowcompartment.
Foreign government markings are applied to information created by Australian agencies from foreign source information. Foreign government marking caveats require protection at least equivalent to that required by the foreign government providing the source information.
Special handling instructions are used to indicate particular precautions for information handling. They include:
A releasability caveat restricts information based oncitizenship. The three in use are:
Additionally, the PSPF outlines Information Management Markers (IMM) as a way for entities to identify information that is subject to non-security related restrictions on access and use. These are:
There are three levels of document classification under Brazilian Law No. 12.527, the Access to Information Act:[24] ultrassecreto (top secret), secreto (secret) and reservado (restricted).
A top secret (ultrassecreto) government-issued document may be classified for a period of 25 years, which may be extended up to another 25 years.[25]Thus, no document remains classified for more than 50 years. This is mandated by the 2011 Information Access Law (Lei de Acesso à Informação), a change from the previous rule, under which documents could have their classification time length renewed indefinitely, effectively shuttering state secrets from the public. The 2011 law applies retroactively to existing documents.
The government of Canada employs two main types of sensitive information designation: Classified and Protected. The access and protection of both types of information is governed by the Security of Information Act, effective 24 December 2001, replacing the Official Secrets Act 1981.[26] To access the information, a person must have the appropriate security clearance and the need to know.
In addition, the caveat "Canadian Eyes Only" is used to restrict access to Classified or Protected information only to Canadian citizens with the appropriate security clearance and need to know.[27]
SOI is not a classification of data per se. It is defined under the Security of Information Act, and unauthorised release of such information constitutes a higher breach of trust, with a penalty of up to life imprisonment if the information is shared with a foreign entity or terrorist group.
SOIs include:
In February 2025, the Department of National Defence announced a new category of Persons Permanently Bound to Security (PPBS). The protection would apply to some units, sections or elements, and select positions (both current and former), with access to sensitive Special Operational Information (SOI) for national defense and intelligence work. If a unit or organization routinely handles SOI, all members of that unit will be automatically bound to secrecy. If an individual has direct access to SOI, deemed to be integral to national security, that person may be recommended for PPBS designation. The designation is for life, punishable by imprisonment.[28]
Classified information can be designated Top Secret, Secret or Confidential. These classifications are only used on matters of national interest.
Protected information is not classified. It pertains to any sensitive information that does not relate to national security and cannot be disclosed under the access and privacy legislation because of the potential injury to particular public or private interests.[29][30]
Federal Cabinet (King's Privy Council for Canada) papers are either protected (e.g., overhead slides prepared to make presentations to Cabinet) or classified (e.g., draft legislation, certain memos).[31]
The Criminal Law of the People's Republic of China (which is not operative in the special administrative regions of Hong Kong and Macau) makes it a crime to release a state secret. Regulation and enforcement is carried out by the National Administration for the Protection of State Secrets.
Under the 1989 "Law on Guarding State Secrets",[32]state secrets are defined as those that concern:
Secrets can be classified into three categories:
In France, classified information is defined by article 413-9 of the Penal Code.[34]The three levels of military classification are
Less sensitive information is "protected". The levels are
A further caveat,spécial France(reserved France) restricts the document to French citizens (in its entirety or by extracts). This is not a classification level.
Declassification of documents can be done by the Commission consultative du secret de la défense nationale (CCSDN), an independent authority. Transfer of classified information is done with double envelopes, the outer layer being plastic-coated and numbered, and the inner one in strong paper. Reception of the document involves examination of the physical integrity of the container and registration of the document. In foreign countries, the document must be transferred through specialised military mail or diplomatic bag. Transport is done by an authorised conveyor or habilitated person for mail under 20 kg. The letter must bear a seal mentioning "Par Valise Accompagnee-Sacoche". Once a year, ministers conduct an inventory of classified information and of the media that carry it, overseen by the competent authorities.
Once their usage period is expired, documents are transferred to archives, where they are either destroyed (by incineration, crushing, or overvoltage), or stored.
In case of unauthorized release of classified information, the competent authorities are the Ministry of the Interior, the Haut fonctionnaire de défense et de sécurité ("high civil servant for defence and security") of the relevant ministry, and the General Secretary for National Defence. Violation of such secrets is an offence punishable by seven years of imprisonment and a 100,000-euro fine; if the offence is committed through imprudence or negligence, the penalties are three years of imprisonment and a 45,000-euro fine.
The Security Bureau is responsible for developing policies regarding the protection and handling of confidential government information. In general, the system used in Hong Kong is very similar to the UK system, developed from the colonial era of Hong Kong.
Four classifications exists in Hong Kong, from highest to lowest in sensitivity:[35]
Restricted documents are not classified per se, but only those who have a need to know will have access to such information, in accordance with the Personal Data (Privacy) Ordinance.[36]
New Zealand uses the Restricted classification, which is lower than Confidential. People may be given access to Restricted information on the strength of an authorisation by their Head of department, without being subjected to the background vetting associated with Confidential, Secret and Top Secret clearances. New Zealand's security classifications and the national-harm requirements associated with their use are roughly similar to those of the United States.
In addition to national security classifications there are two additional security classifications, In Confidence and Sensitive, which are used to protect information of a policy and privacy nature. There are also a number of information markings used within ministries and departments of the government, to indicate, for example, that information should not be released outside the originating ministry.
Because of strict privacy requirements around personal information, personnel files are controlled in all parts of the public and private sectors. Information relating to the security vetting of an individual is usually classified at the In Confidence level.
InRomania, classified information is referred to as "state secrets" (secrete de stat) and is defined by the Penal Code as "documents and data that manifestly appear to have this status or have been declared or qualified as such by decision of Government".[37]There are three levels of classification: "Secret" (Secret/S), "Top Secret" (Strict Secret/SS), and "Top Secret of Particular Importance" (Strict secret de interes deosebit/SSID).[38]The levels are set by theRomanian Intelligence Serviceand must be aligned with NATO regulations—in case of conflicting regulations, the latter are applied with priority. Dissemination of classified information to foreign agents or powers is punishable by up to life imprisonment, if such dissemination threatens Romania's national security.[39]
In theRussian Federation, a state secret (Государственная тайна) is information protected by the state on its military, foreign policy, economic, intelligence, counterintelligence, operational and investigative and other activities, dissemination of which could harm state security.
The Swedish classification has been updated due to increased NATO/PfP cooperation. All classified defence documents will now have both a Swedish classification (Kvalificerat hemlig,Hemlig,KonfidentiellorBegränsat Hemlig), and an English classification (Top Secret, Secret, Confidential, or Restricted).[citation needed]The termskyddad identitet, "protected identity", is used in the case of protection of a threatened person, basically implying "secret identity", accessible only to certain members of the police force and explicitly authorised officials.
At the federal level, classified information in Switzerland is assigned one of three levels, which are from lowest to highest: Internal, Confidential, Secret.[40]Respectively, these are, in German,Intern,Vertraulich,Geheim; in French,Interne,Confidentiel,Secret; in Italian,Ad Uso Interno,Confidenziale,Segreto. As in other countries, the choice of classification depends on the potential impact that the unauthorised release of the classified document would have on Switzerland, the federal authorities or the authorities of a foreign government.
According to the Ordinance on the Protection of Federal Information, information is classified as Internal if its "disclosure to unauthorised persons may be disadvantageous to national interests."[40]Information classified as Confidential could, if disclosed, compromise "the free formation of opinions and decision-making ofthe Federal Assemblyorthe Federal Council," jeopardise national monetary/economic policy, put the population at risk or adversely affect the operations of theSwiss Armed Forces. Finally, the unauthorised release of Secret information could seriously compromise the ability of either the Federal Assembly or the Federal Council to function or impede the ability of the Federal Government or the Armed Forces to act.
According to the related regulations inTurkey, there are four levels of document classification:[41]çok gizli(top secret),gizli(secret),özel(confidential) andhizmete özel(restricted). The fifth istasnif dışı, which means unclassified.
Until 2013, theUnited Kingdomused five levels of classification—from lowest to highest, they were: Protect, Restricted, Confidential, Secret and Top Secret (formerly Most Secret). TheCabinet Officeprovides guidance on how to protect information, including thesecurity clearancesrequired for personnel. Staff may be required to sign to confirm their understanding and acceptance of theOfficial Secrets Acts 1911 to 1989, although the Act applies regardless of signature. Protect is not in itself a security protective marking level (such as Restricted or greater), but is used to indicate information which should not be disclosed because, for instance, the document contains tax, national insurance, or other personal information.
Government documents without a classification may be marked as Unclassified or Not Protectively Marked.[42]
This system was replaced by theGovernment Security Classifications Policy, which has a simpler model: Top Secret, Secret, and Official from April 2014.[13]Official Sensitive is a security marking which may be followed by one of three authorised descriptors: Commercial, LocSen (location sensitive) or Personal. Secret and Top Secret may include a caveat such as UK Eyes Only.
Scientific discoveries may also be classified via the D-Notice system if they are deemed to have applications relevant to national security. These may later emerge as technology improves; for example, the specialised processors and routing engines used in graphics cards are loosely based on top secret military chips designed for code breaking and image processing.
They may or may not have safeguards built in to generate errors when specific tasks are attempted and this is invariably independent of the card's operating system.[citation needed]
The U.S. classification system is currently established underExecutive Order 13526and has three levels of classification—Confidential, Secret, and Top Secret. The U.S. had a Restricted level duringWorld War IIbut no longer does. U.S. regulations state that information received from other countries at the Restricted level should be handled as Confidential. A variety of markings are used for material that is not classified, but whose distribution is limited administratively or by other laws, e.g.,For Official Use Only(FOUO), orsensitive but unclassified(SBU). The Atomic Energy Act of 1954 provides for the protection of information related to the design of nuclear weapons. The term "Restricted Data" is used to denote certain nuclear technology. Information about the storage, use or handling of nuclear material or weapons is marked "Formerly Restricted Data". These designations are used in addition to level markings (Confidential, Secret and Top Secret). Information protected by the Atomic Energy Act is protected by law and information classified under the Executive Order is protected by Executive privilege.
The U.S. government insists it is "not appropriate" for a court to question whether any document is legally classified.[43]In the1973 trial of Daniel Ellsberg for releasing the Pentagon Papers, the judge did not allow any testimony from Ellsberg, claiming it was "irrelevant", because the assigned classification could not be challenged. The charges against Ellsberg were ultimately dismissed after it was revealed that the government had broken the law in secretly breaking into the office of Ellsberg's psychiatrist and in tapping his telephone without a warrant. Ellsberg insists that the legal situation in the U.S. in 2014 is worse than it was in 1973, andEdward Snowdencould not get a fair trial.[44]TheState Secrets Protection Actof 2008 might have given judges the authority to review such questionsin camera, but the bill was not passed.[43]
When a government agency acquires classified information through covert means, or designates a program as classified, the agency asserts "ownership" of that information and considers any public availability of it to be a violation of that ownership, even if the same information was acquired independently through "parallel reporting" by the press or others. For example, although the CIA drone program has been widely discussed in public since the early 2000s, and reporters personally observed and reported on drone missile strikes, the CIA still considers the very existence of the program to be classified in its entirety, and any public discussion of it technically constitutes exposure of classified information. "Parallel reporting" was an issue in determining what constitutes "classified" information during the Hillary Clinton email controversy, when Assistant Secretary of State for Legislative Affairs Julia Frifield noted, "When policy officials obtain information from open sources, 'think tanks,' experts, foreign government officials, or others, the fact that some of the information may also have been available through intelligence channels does not mean that the information is necessarily classified."[45][46][47]
A comparative table of national classification markings (see the table source below) lists each country's equivalents of Top Secret, Secret, Confidential and Restricted. Examples include the Philippine (Tagalog) levels, from most to least classified, Matinding Lihim, Mahigpit na Lihim, Lihim and Ipinagbabawal; a Foreign Service marking Fortroligt (thin black border); and a note that US, French, EU and Japanese "Confidential" markings are to be handled as SECRET.[49]
Table source: US Department of Defense (January 1995). "National Industrial Security Program - Operating Manual (DoD 5220.22-M)" (PDF), pp. B1-B3 (PDF pages 121-123). Archived (PDF) from the original on 27 July 2019. Retrieved 27 July 2019.
Private corporations often require written confidentiality agreements and conduct background checks on candidates for sensitive positions.[53] In the U.S., the Employee Polygraph Protection Act prohibits private employers from requiring lie detector tests, but there are a few exceptions. Policies dictating methods for marking and safeguarding company-sensitive information (e.g. "IBM Confidential") are common and some companies have more than one level. Such information is protected under trade secret laws. New product development teams are often sequestered and forbidden to share information about their efforts with un-cleared fellow employees, the original Apple Macintosh project being a famous example. Other activities, such as mergers and financial report preparation, generally involve similar restrictions. However, corporate security generally lacks the elaborate hierarchical clearance and sensitivity structures and the harsh criminal sanctions that give government classification systems their particular tone.
The Traffic Light Protocol[54][55] was developed by the Group of Eight countries to enable the sharing of sensitive information between government agencies and corporations. This protocol has now been accepted as a model for trusted information exchange by over 30 other countries. The protocol provides for four "information sharing levels" for the handling of sensitive information.
|
https://en.wikipedia.org/wiki/Classified_information#Canada
|
Radio-frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. An RFID system consists of a tiny radio transponder called a tag, a radio receiver, and a transmitter. When triggered by an electromagnetic interrogation pulse from a nearby RFID reader device, the tag transmits digital data, usually an identifying inventory number, back to the reader. This number can be used to track inventory goods.[1]
Passive tags are powered by energy from the RFID reader's interrogating radio waves. Active tags are powered by a battery and thus can be read at a greater range from the RFID reader, up to hundreds of meters.
Unlike a barcode, the tag does not need to be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method of automatic identification and data capture (AIDC).[2]
RFID tags are used in many industries. For example, an RFID tag attached to an automobile during production can be used to track its progress through theassembly line,[citation needed]RFID-tagged pharmaceuticals can be tracked through warehouses,[citation needed]andimplanting RFID microchipsin livestock and pets enables positive identification of animals.[3]Tags can also be used in shops to expedite checkout, and toprevent theftby customers and employees.[4]
Since RFID tags can be attached to physical money, clothing, and possessions, or implanted in animals and people, the possibility of reading personally-linked information withoutconsenthas raised seriousprivacyconcerns.[5]These concerns resulted in standard specifications development addressing privacy and security issues.
In 2014, the world RFID market was worth US$8.89billion, up from US$7.77 billion in 2013 and US$6.96 billion in 2012. This figure includes tags, readers, and software/services for RFID cards, labels, fobs, and all other form factors. The market value is expected to rise from US$12.08 billion in 2020 to US$16.23 billion by 2029.[6]
In 1945, Leon Theremin invented the "Thing", a listening device for the Soviet Union which retransmitted incident radio waves with the added audio information. Sound waves vibrated a diaphragm which slightly altered the shape of the resonator, which modulated the reflected radio frequency. Even though this device was a covert listening device, rather than an identification tag, it is considered to be a predecessor of RFID because it was passive, being energised and activated by waves from an outside source.[7]
Similar technology, such as the Identification friend or foe transponder, was routinely used by the Allies and Germany in World War II to identify aircraft as friendly or hostile. Transponders are still used by most powered aircraft.[8] An early work exploring RFID is the landmark 1948 paper by Harry Stockman,[9] who predicted that "Considerable research and development work has to be done before the remaining basic problems in reflected-power communication are solved, and before the field of useful applications is explored."
Mario Cardullo's device, patented on January 23, 1973, was the first true ancestor of modern RFID,[10] as it was a passive radio transponder with memory.[11] The initial device was passive, powered by the interrogating signal, and was demonstrated in 1971 to the New York Port Authority and other potential users. It consisted of a transponder with 16-bit memory for use as a toll device. The basic Cardullo patent covers the use of radio frequency (RF), sound and light as transmission carriers. The original business plan presented to investors in 1969 showed uses in transportation (automotive vehicle identification, automatic toll system, electronic license plate, electronic manifest, vehicle routing, vehicle performance monitoring), banking (electronic chequebook, electronic credit card), security (personnel identification, automatic gates, surveillance) and medical (identification, patient history).[10]
In 1973, an early demonstration ofreflected power(modulated backscatter) RFID tags, both passive and semi-passive, was performed by Steven Depp, Alfred Koelle and Robert Freyman at theLos Alamos National Laboratory.[12]The portable system operated at 915 MHz and used 12-bit tags. This technique is used by the majority of today's UHFID and microwave RFID tags.[13]
In 1983, the first patent to be associated with the abbreviation RFID was granted toCharles Walton.[14]
In 1996, the first patent for a batteryless RFID passive tag with limited interference was granted to David Everett, John Frech, Theodore Wright, and Kelly Rodriguez.[15]
A radio-frequency identification system uses tags, or labels, attached to the objects to be identified. Two-way radio transmitter-receivers called interrogators or readers send a signal to the tag and read its response.[16]
RFID tags are made out of three pieces: an integrated circuit (chip) that stores and processes information, an antenna for receiving and transmitting the signal, and a substrate that holds the pieces together.
The tag information is stored in non-volatile memory.[17] The RFID tag includes either fixed or programmable logic for processing the transmission and sensor data.[citation needed]
RFID tags can be either passive, active or battery-assisted passive. An active tag has an on-board battery and periodically transmits its ID signal.[17]A battery-assisted passive tag has a small battery on board and is activated when in the presence of an RFID reader. A passive tag is cheaper and smaller because it has no battery; instead, the tag uses the radio energy transmitted by the reader. However, to operate a passive tag, it must be illuminated with a power level roughly a thousand times stronger than an active tag for signal transmission.[18]
Tags may either be read-only, having a factory-assigned serial number that is used as a key into a database, or may be read/write, where object-specific data can be written into the tag by the system user. Field programmable tags may be write-once, read-multiple; "blank" tags may be written with an electronic product code by the user.[19]
The RFID tag receives the message and then responds with its identification and other information. This may be only a unique tag serial number, or may be product-related information such as a stock number, lot or batch number, production date, or other specific information. Since tags have individual serial numbers, the RFID system design can discriminate among several tags that might be within the range of the RFID reader and read them simultaneously.
RFID systems can be classified by the type of tag and reader, giving three types depending on how the tag and the reader are powered.[20]
Fixed readers are set up to create a specific interrogation zone which can be tightly controlled. This allows a highly defined reading area for when tags go in and out of the interrogation zone. Mobile readers may be handheld or mounted on carts or vehicles.
Signaling between the reader and the tag is done in several different incompatible ways, depending on the frequency band used by the tag. Tags operating on LF and HF bands are, in terms of radio wavelength, very close to the reader antenna because they are only a small percentage of a wavelength away. In this near-field region, the tag is closely coupled electrically with the transmitter in the reader. The tag can modulate the field produced by the reader by changing the electrical loading the tag represents. By switching between lower and higher relative loads, the tag produces a change that the reader can detect. At UHF and higher frequencies, the tag is more than one radio wavelength away from the reader, requiring a different approach: the tag can backscatter a signal. Active tags may contain functionally separated transmitters and receivers, and the tag need not respond on a frequency related to the reader's interrogation signal.[27]
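The near-field/far-field distinction above can be made concrete with a back-of-the-envelope calculation: the near-field region is commonly taken to extend to roughly one wavelength divided by 2π from the antenna (an approximation, not a figure from this article). The sketch below computes this boundary for typical LF, HF and UHF carrier frequencies.

```python
# Rough sketch: wavelength and approximate near-field boundary (wavelength / 2*pi)
# for common RFID bands. The boundary rule of thumb is an assumption for illustration.

import math

C = 299_792_458  # speed of light, m/s

for name, freq_hz in [("LF 125 kHz", 125e3), ("HF 13.56 MHz", 13.56e6), ("UHF 915 MHz", 915e6)]:
    wavelength = C / freq_hz
    near_field_m = wavelength / (2 * math.pi)
    print(f"{name}: wavelength ~ {wavelength:.2f} m, near-field boundary ~ {near_field_m:.2f} m")
```

Run over these bands, the boundary comes out at hundreds of meters for LF and a few meters for HF (so tags at reading distance are inductively coupled in the near field), but only a few centimeters at UHF, which is why UHF tags rely on backscatter instead.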
An Electronic Product Code (EPC) is one common type of data stored in a tag. When written into the tag by an RFID printer, the tag contains a 96-bit string of data. The first eight bits are a header which identifies the version of the protocol. The next 28 bits identify the organization that manages the data for this tag; the organization number is assigned by the EPCGlobal consortium. The next 24 bits are an object class, identifying the kind of product. The last 36 bits are a unique serial number for a particular tag. These last two fields are set by the organization that issued the tag. Rather like a URL, the total electronic product code number can be used as a key into a global database to uniquely identify a particular product.[28]
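Since the paragraph above fully specifies the field widths (8 + 28 + 24 + 36 = 96 bits), a short sketch can show how such a value would be split into its fields; the function name and example value are illustrative, not taken from the EPC specification.

```python
# Sketch based on the field widths described above; the hex value is made up.

def parse_epc96(value: int) -> dict:
    """Split a 96-bit EPC integer into the four fields described in the text."""
    return {
        "header":       (value >> 88) & 0xFF,          # 8 bits: protocol version
        "manager":      (value >> 60) & 0x0FFFFFFF,    # 28 bits: issuing organization
        "object_class": (value >> 36) & 0xFFFFFF,      # 24 bits: kind of product
        "serial":       value & 0xFFFFFFFFF,           # 36 bits: unique serial number
    }

epc = int("30700048440663802E185523", 16)  # hypothetical 96-bit EPC, hex-encoded
print(parse_epc96(epc))
```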
Often more than one tag will respond to a tag reader. For example, many individual products with tags may be shipped in a common box or on a common pallet. Collision detection is important to allow reading of data. Two different types of protocols are used to "singulate" a particular tag, allowing its data to be read in the midst of many similar tags. In a slotted Aloha system, the reader broadcasts an initialization command and a parameter that the tags individually use to pseudo-randomly delay their responses. When using an "adaptive binary tree" protocol, the reader sends an initialization symbol and then transmits one bit of ID data at a time; only tags with matching bits respond, and eventually only one tag matches the complete ID string.[29]
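A toy simulation can illustrate the slotted-Aloha style of singulation described above: the reader opens a number of slots, each tag picks one at random, and only slots containing a single respondent yield a successful read. The slot count, tag names and round structure below are illustrative assumptions, not part of any RFID standard.

```python
# Toy anti-collision simulation in the slotted-Aloha style sketched above.

import random

random.seed(1)  # reproducible run for the example

def inventory_round(unread_tags, num_slots):
    """One round: each tag picks a slot; only uncollided slots are read."""
    slots = {}
    for tag in unread_tags:
        slots.setdefault(random.randrange(num_slots), []).append(tag)
    return {tags[0] for tags in slots.values() if len(tags) == 1}

tags = {f"tag{i:03d}" for i in range(20)}
rounds = 0
while tags:
    rounds += 1
    tags -= inventory_round(tags, num_slots=16)
print(f"all tags singulated after {rounds} rounds")
```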
Both methods have drawbacks when used with many tags or with multiple overlapping readers.[citation needed]
"Bulk reading" is a strategy for interrogating multiple tags at the same time, but lacks sufficient precision for inventory control. A group of objects, all of them RFID tagged, are read completely from one single reader position at one time. However, as tags respond strictly sequentially, the time needed for bulk reading grows linearly with the number of labels to be read. This means it takes at least twice as long to read twice as many labels. Due to collision effects, the time required is greater.[30]
A group of tags has to be illuminated by the interrogating signal just like a single tag. This is not a challenge concerning energy, but with respect to visibility; if any of the tags are shielded by other tags, they might not be sufficiently illuminated to return a sufficient response. The response conditions for inductively coupledHFRFID tags and coil antennas in magnetic fields appear better than for UHF or SHF dipole fields, but then distance limits apply and may prevent success.[citation needed][31]
Under operational conditions, bulk reading is not reliable. Bulk reading can be a rough guide for logistics decisions, but due to a high proportion of reading failures, it is not (yet)[when?]suitable for inventory management. However, when a single RFID tag might be seen as not guaranteeing a proper read, multiple RFID tags, where at least one will respond, may be a safer approach for detecting a known grouping of objects. In this respect, bulk reading is afuzzymethod for process support. From the perspective of cost and effect, bulk reading is not reported as an economical approach to secure process control in logistics.[32]
RFID tags are easy to conceal or incorporate in other items. For example, in 2009 researchers at Bristol University successfully glued RFID micro-transponders to live ants in order to study their behavior.[33] This trend towards increasingly miniaturized RFIDs is likely to continue as technology advances.
Hitachi holds the record for the smallest RFID chip, at 0.05 mm × 0.05 mm. This is 1/64th the size of the previous record holder, the mu-chip.[34]Manufacture is enabled by using thesilicon-on-insulator(SOI) process. These dust-sized chips can store 38-digit numbers using 128-bitRead Only Memory(ROM).[35]A major challenge is the attachment of antennas, thus limiting read range to only millimeters.
In early 2020, MIT researchers demonstrated aterahertzfrequency identification (TFID) tag that is barely 1 square millimeter in size. The devices are essentially a piece of silicon that are inexpensive, small, and function like larger RFID tags. Because of the small size, manufacturers could tag any product and track logistics information for minimal cost.[36][37]
An RFID tag can be affixed to an object and used to track tools, equipment, inventory, assets, people, or other objects.
RFID offers advantages over manual systems or use ofbarcodes. The tag can be read if passed near a reader, even if it is covered by the object or not visible. The tag can be read inside a case, carton, box or other container, and unlike barcodes, RFID tags can be read hundreds at a time; barcodes can only be read one at a time using current devices. Some RFID tags, such as battery-assisted passive tags, are also able to monitor temperature and humidity.[38]
In 2011, the cost of passive tags started at US$0.09 each; special tags, meant to be mounted on metal or withstand gamma sterilization, could cost up to US$5. Active tags for tracking containers, medical assets, or monitoring environmental conditions in data centers started at US$50 and could be over US$100 each.[39]Battery-Assisted Passive (BAP) tags were in the US$3–10 range.[citation needed]
RFID can be used in a variety of applications,[40][41]such as:
In 2010, three factors drove a significant increase in RFID usage: decreased cost of equipment and tags, increased performance to a reliability of 99.9%, and a stable international standard around HF and UHF passive RFID. The adoption of these standards were driven by EPCglobal, a joint venture betweenGS1and GS1 US, which were responsible for driving global adoption of the barcode in the 1970s and 1980s. The EPCglobal Network was developed by theAuto-ID Center.[45]
RFID provides a way for organizations to identify and manage stock, tools and equipment (asset tracking), etc. without manual data entry. Manufactured products such as automobiles or garments can be tracked through the factory and through shipping to the customer. Automatic identification with RFID can be used for inventory systems. Many organisations require that their vendors place RFID tags on all shipments to improve supply chain management.[citation needed] Warehouse management systems[clarification needed] incorporate this technology to speed up the receiving and delivery of products and reduce the cost of labor needed in their warehouses.[46]
RFID is used foritem-level taggingin retail stores. This can enable more accurate and lower-labor-cost supply chain and store inventory tracking, as is done atLululemon, though physically locating items in stores requires more expensive technology.[47]RFID tags can be used at checkout; for example, at some stores of the French retailerDecathlon, customers performself-checkoutby either using a smartphone or putting items into a bin near the register that scans the tags without having to orient each one toward the scanner.[47]Some stores use RFID-tagged items to trigger systems that provide customers with more information or suggestions, such as fitting rooms atChaneland the "Color Bar" atKendra Scottstores.[47]
Item tagging can also provide protection against theft by customers and employees by usingelectronic article surveillance(EAS). Tags of different types can be physically removed with a special tool or deactivated electronically when payment is made.[48]On leaving the shop, customers have to pass near an RFID detector; if they have items with active RFID tags, an alarm sounds, both indicating an unpaid-for item, and identifying what it is.
Casinos can use RFID to authenticatepoker chips, and can selectively invalidate any chips known to be stolen.[49]
RFID tags are widely used inidentification badges, replacing earliermagnetic stripecards. These badges need only be held within a certain distance of the reader to authenticate the holder. Tags can also be placed on vehicles, which can be read at a distance, to allow entrance to controlled areas without having to stop the vehicle and present a card or enter an access code.[citation needed]
In 2010, Vail Resorts began using UHF Passive RFID tags in ski passes.[50]
Facebook is using RFID cards at most of their live events to allow guests to automatically capture and post photos.[citation needed][when?]
Automotive brands have adopted RFID for social media product placement more quickly than other industries. Mercedes was an early adopter in 2011 at thePGA Golf Championships,[51]and by the 2013 Geneva Motor Show many of the larger brands were using RFID for social media marketing.[52][further explanation needed]
To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.[53][when?]
Yard management, shipping and freight and distribution centers use RFID tracking. In therailroadindustry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the type, origin, destination, etc. of the commodities being carried.[54]
In commercial aviation, RFID is used to support maintenance on commercial aircraft. RFID tags are used to identify baggage and cargo at several airports and airlines.[55][56]
Some countries are using RFID for vehicle registration and enforcement.[57]RFID can help detect and retrieve stolen cars.[58][59]
RFID is used inintelligent transportation systems. InNew York City, RFID readers are deployed at intersections to trackE-ZPasstags as a means for monitoring the traffic flow. The data is fed through the broadband wireless infrastructure to the traffic management center to be used inadaptive traffic controlof the traffic lights.[60]
Where ship, rail, or highway tanks are being loaded, a fixed RFID antenna contained in a transfer hose can read an RFID tag affixed to the tank, positively identifying it.[61]
At least one company has introduced RFID to identify and locate underground infrastructure assets such asgaspipelines,sewer lines, electrical cables, communication cables, etc.[62]
The first RFID passports ("E-passport") were issued byMalaysiain 1998. In addition to information also contained on the visual data page of the passport, Malaysian e-passports record the travel history (time, date, and place) of entry into and exit out of the country.[citation needed]
Other countries that insert RFID in passports include Norway (2005),[63]Japan (March 1, 2006), mostEUcountries (around 2006), Singapore (2006), Australia, Hong Kong, the United States (2007), the United Kingdom and Northern Ireland (2006), India (June 2008), Serbia (July 2008), Republic of Korea (August 2008), Taiwan (December 2008), Albania (January 2009), The Philippines (August 2009), Republic of Macedonia (2010), Argentina (2012), Canada (2013), Uruguay (2015)[64]and Israel (2017).
Standards for RFID passports are determined by theInternational Civil Aviation Organization(ICAO), and are contained in ICAO Document 9303, Part 1, Volumes 1 and 2 (6th edition, 2006). ICAO refers to theISO/IEC 14443RFID chips in e-passports as "contactless integrated circuits". ICAO standards provide for e-passports to be identifiable by a standard e-passport logo on the front cover.
Since 2006, RFID tags included in newUnited States passportsstore the same information that is printed within the passport, and include a digital picture of the owner.[65]The United States Department of State initially stated the chips could only be read from a distance of 10 centimetres (3.9 in), but after widespread criticism and a clear demonstration that special equipment can read the test passports from 10 metres (33 ft) away,[66]the passports were designed to incorporate a thin metal lining to make it more difficult for unauthorized readers toskiminformation when the passport is closed. The department will also implementBasic Access Control(BAC), which functions as apersonal identification number(PIN) in the form of characters printed on the passport data page. Before a passport's tag can be read, this PIN must be entered into an RFID reader. The BAC also enables the encryption of any communication between the chip and interrogator.[67]
In many countries, RFID tags can be used to pay for mass transit fares on bus, trains, or subways, or to collect tolls on highways.
Somebike lockersare operated with RFID cards assigned to individual users. A prepaid card is required to open or enter a facility or locker and is used to track and charge based on how long the bike is parked.[citation needed]
TheZipcarcar-sharing service uses RFID cards for locking and unlocking cars and for member identification.[68]
In Singapore, RFID replaces paper Season Parking Ticket (SPT).[69]
RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, since the outbreak ofmad-cow disease, RFID has become crucial inanimal identificationmanagement. Animplantable RFID tagortranspondercan also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or "chips" on animals.[70]TheCanadian Cattle Identification Agencybegan using RFID tags as a replacement for barcode tags. Currently, CCIA tags are used inWisconsinand by United States farmers on a voluntary basis. TheUSDAis currently developing its own program.
RFID tags are required for all cattle sold in Australia and in some states, sheep and goats as well.[71]
Biocompatible microchip implants that use RFID technology are being routinely implanted in humans. The first-ever human to receive an RFID microchip implant was American artist Eduardo Kac in 1997.[72][73] Kac implanted the microchip live on television (and also live on the Internet) in the context of his artwork Time Capsule.[74] A year later, British professor of cybernetics Kevin Warwick had an RFID chip implanted in his arm by his general practitioner, George Boulos.[75][76] In 2004, the 'Baja Beach Club' operated by Conrad Chase in Barcelona[77] and Rotterdam offered implanted chips to identify their VIP customers, who could in turn use it to pay for service. In 2009, British scientist Mark Gasson had an advanced glass capsule RFID device surgically implanted into his left hand and subsequently demonstrated how a computer virus could wirelessly infect his implant and then be transmitted on to other systems.[78]
TheFood and Drug Administrationin the United States approved the use of RFID chips in humans in 2004.[79]
There is controversy regarding human applications of implantable RFID technology including concerns that individuals could potentially be tracked by carrying an identifier unique to them. Privacy advocates have protested against implantable RFID chips, warning of potential abuse. Some are concerned this could lead to abuse by an authoritarian government, to removal of freedoms,[80]and to the emergence of an "ultimatepanopticon", a society where all citizens behave in a socially accepted manner because others might be watching.[81]
On July 22, 2006, Reuters reported that two hackers, Newitz and Westhues, at a conference in New York City demonstrated that they could clone the RFID signal from a human implanted RFID chip, indicating that the device was not as secure as was previously claimed.[82]
The UFO religion Universe People is notorious online for its vocal opposition to human RFID chipping, which it claims is a saurian attempt to enslave the human race; one of its web domains is "dont-get-chipped".[83][84][85]
Adoption of RFID in the medical industry has been widespread and very effective.[86]Hospitals are among the first users to combine both active and passive RFID.[87]Active tags track high-value, or frequently moved items, and passive tags track smaller, lower cost items that only need room-level identification.[88]Medical facility rooms can collect data from transmissions of RFID badges worn by patients and employees, as well as from tags assigned to items such as mobile medical devices.[89]TheU.S. Department of Veterans Affairs (VA)recently announced plans to deploy RFID in hospitals across America to improve care and reduce costs.[90]
Since 2004, a number of U.S. hospitals have begun implanting patients with RFID tags and using RFID systems; the systems are typically used for workflow and inventory management.[91][92][93]The use of RFID to prevent mix-ups betweenspermandovainIVFclinics is also being considered.[94]
In October 2004, the FDA approved the USA's first RFID chips that can be implanted in humans. The 134 kHz RFID chips, from VeriChip Corp., can incorporate personal medical information and could save lives and limit injuries from errors in medical treatments, according to the company. Anti-RFID activists Katherine Albrecht and Liz McIntyre discovered an FDA Warning Letter that spelled out health risks.[95] According to the FDA, these include "adverse tissue reaction", "migration of the implanted transponder", "failure of implanted transponder", "electrical hazards" and "magnetic resonance imaging [MRI] incompatibility."
Libraries have used RFID to replace the barcodes on library items. The tag can contain identifying information or may just be a key into a database. An RFID system may replace or supplement bar codes and may offer another method of inventory management and self-service checkout by patrons. It can also act as asecuritydevice, taking the place of the more traditionalelectromagnetic security strip.[96]
It is estimated that over 30 million library items worldwide now contain RFID tags, including some in theVatican LibraryinRome.[97]
Since RFID tags can be read through an item, there is no need to open a book cover or DVD case to scan an item, and a stack of books can be read simultaneously. Book tags can be read while books are in motion on a conveyor belt, which reduces staff time. This can all be done by the borrowers themselves, reducing the need for library staff assistance. With portable readers, inventories could be done on a whole shelf of materials within seconds.[98] However, as of 2008, this technology remained too costly for many smaller libraries, and the conversion period has been estimated at 11 months for an average-size library. A 2004 Dutch estimate was that a library which lends 100,000 books per year should plan on a cost of €50,000 (borrow- and return-stations: 12,500 each, detection porches 10,000 each; tags 0.36 each). Because RFID takes a large burden off staff, it could also mean that fewer staff will be needed, resulting in layoffs,[97] but that has so far not happened in North America, where recent surveys have not returned a single library that cut staff because of adding RFID.[citation needed][99] In fact, library budgets are being reduced for personnel and increased for infrastructure, making it necessary for libraries to add automation to compensate for the reduced staff size.[citation needed][99] Also, the tasks that RFID takes over are largely not the primary tasks of librarians.[citation needed][99] A finding in the Netherlands is that borrowers are pleased with the fact that staff are now more available for answering questions.[citation needed][99]
Privacy concerns have been raised surrounding library use of RFID.[100][101]Because some RFID tags can be read up to 100 metres (330 ft) away, there is some concern over whether sensitive information could be collected from an unwilling source. However, library RFID tags do not contain any patron information,[102]and the tags used in the majority of libraries use a frequency only readable from approximately 10 feet (3.0 m).[96]Another concern is that a non-library agency could potentially record the RFID tags of every person leaving the library without the library administrator's knowledge or consent. One simple option is to let the book transmit a code that has meaning only in conjunction with the library's database. Another possible enhancement would be to give each book a new code every time it is returned. In future, should readers become ubiquitous (and possibly networked), then stolen books could be traced even outside the library. Tag removal could be made difficult if the tags are so small that they fit invisibly inside a (random) page, possibly put there by the publisher.[citation needed]
RFID technologies are now[when?]also implemented in end-user applications in museums.[103]An example was the custom-designed temporary research application, "eXspot", at theExploratorium, a science museum inSan Francisco,California. A visitor entering the museum received an RF tag that could be carried as a card. The eXspot system enabled the visitor to receive information about specific exhibits. Aside from the exhibit information, the visitor could take photographs of themselves at the exhibit. It was also intended to allow the visitor to take data for later analysis. The collected information could be retrieved at home from a "personalized" website keyed to the RFID tag.[104]
In 2004, school authorities in the Japanese city of Osaka made a decision to start chipping children's clothing, backpacks, and student IDs in a primary school.[105] Later, in 2007, a school in Doncaster, England, piloted a monitoring system designed to keep tabs on pupils by tracking radio chips in their uniforms.[106][when?] St Charles Sixth Form College in west London, England, starting in 2008, uses an RFID card system to check in and out of the main gate, to both track attendance and prevent unauthorized entrance. Similarly, Whitcliffe Mount School in Cleckheaton, England, uses RFID to track pupils and staff in and out of the building via a specially designed card. In the Philippines, during 2012, some schools already[when?] use RFID in IDs for borrowing books.[107][unreliable source?] Gates in those particular schools also have RFID scanners for buying items at school shops and canteens. RFID is also used in school libraries, and to sign in and out for student and teacher attendance.[99]
RFID for timing racesbegan in the early 1990s with pigeon racing, introduced by the companyDeister Electronicsin Germany. RFID can provide race start and end timings for individuals in large races where it is impossible to get accurate stopwatch readings for every entrant.[citation needed]
In races using RFID, racers wear tags that are read by antennas placed alongside the track or on mats across the track. UHF tags provide accurate readings with specially designed antennas. Rush error,[clarification needed]lap count errors and accidents at race start are avoided, as anyone can start and finish at any time without being in a batch mode.[clarification needed]
The design of the chip and of the antenna controls the range from which it can be read. Short-range compact chips are twist-tied to the shoe or strapped to the ankle with hook-and-loop fasteners. The chips must pass within about 400 mm of the mat, giving very good temporal resolution. Alternatively, a chip plus a very large (125 mm square) antenna can be incorporated into the bib number worn on the athlete's chest at a height of about 1.25 m (4.1 ft).[citation needed]
Passive and active RFID systems are used in off-road events such asOrienteering,Enduroand Hare and Hounds racing. Riders have a transponder on their person, normally on their arm. When they complete a lap they swipe or touch the receiver which is connected to a computer and log their lap time.[citation needed]
RFID is being[when?]adapted by many recruitment agencies which have a PET (physical endurance test) as their qualifying procedure, especially in cases where the candidate volumes may run into millions (Indian Railway recruitment cells, police and power sector).
A number ofski resortshave adopted RFID tags to provide skiers hands-free access toski lifts. Skiers do not have to take their passes out of their pockets. Ski jackets have a left pocket into which the chip+card fits. This nearly contacts the sensor unit on the left of the turnstile as the skier pushes through to the lift. These systems were based on high frequency (HF) at 13.56MHz. The bulk of ski areas in Europe, from Verbier to Chamonix, use these systems.[108][109][110]
The NFL in the United States equips players with RFID chips that measure speed, distance and direction traveled by each player in real time. Currently, cameras stay focused on the quarterback; however, numerous plays are happening simultaneously on the field. The RFID chip will provide new insight into these simultaneous plays.[111] The chip triangulates the player's position within six inches and will be used to digitally broadcast replays. The RFID chip will make individual player information accessible to the public. The data will be available via the NFL 2015 app.[112] The RFID chips are manufactured by Zebra Technologies, which tested them in 18 stadiums the previous year to track vector data.[113]
RFID tags are often a complement, but not a substitute, forUniversal Product Code(UPC) orEuropean Article Number(EAN) barcodes. They may never completely replace barcodes, due in part to their higher cost and the advantage of multiple data sources on the same object. Also, unlike RFID labels, barcodes can be generated and distributed electronically by e-mail or mobile phone, for printing or display by the recipient. An example is airlineboarding passes. The newEPC, along with several other schemes, is widely available at reasonable cost.
The storage of data associated with tracking items will require manyterabytes. Filtering and categorizing RFID data is needed to create useful information. It is likely that goods will be tracked by the pallet using RFID tags, and at package level with UPC or EAN from unique barcodes.
A unique identity is a mandatory requirement for RFID tags, regardless of the choice of numbering scheme. RFID tag data capacity is large enough that each individual tag can have a unique code, while current barcodes are limited to a single type code for a particular product. The uniqueness of RFID tags means that a product may be tracked as it moves from location to location while being delivered to a person. This may help to combat theft and other forms of product loss. The tracing of products is an important feature that is well supported with RFID tags containing a unique identity of the tag and the serial number of the object. This may help companies cope with quality deficiencies and resulting recall campaigns, but also contributes to concern about tracking and profiling of persons after the sale.
Since around 2007, there has been increasing development in the use of RFID[when?]in thewaste managementindustry. RFID tags are installed on waste collection carts, linking carts to the owner's account for easy billing and service verification.[114]The tag is embedded into a garbage and recycle container, and the RFID reader is affixed to the garbage and recycle trucks.[115]RFID also measures a customer's set-out rate and provides insight as to the number of carts serviced by each waste collection vehicle. This RFID process replaces traditional "pay as you throw" (PAYT)municipal solid wasteusage-pricing models.
Active RFID tags have the potential to function as low-cost remote sensors that broadcasttelemetryback to a base station. Applications of tagometry data could include sensing of road conditions by implantedbeacons, weather reports, and noise level monitoring.[116]
Passive RFID tags can also report sensor data. For example, theWireless Identification and Sensing Platformis a passive tag that reports temperature, acceleration and capacitance to commercial Gen2 RFID readers.
It is possible that active or battery-assisted passive (BAP) RFID tags could broadcast a signal to an in-store receiver to determine whether the RFID tag – and by extension, the product it is attached to – is in the store.[citation needed]
To avoid injuries to humans and animals, RF transmission needs to be controlled.[117]A number of organizations have set standards for RFID, including theInternational Organization for Standardization(ISO), theInternational Electrotechnical Commission(IEC),ASTM International, theDASH7Alliance andEPCglobal.[118]
Several specific industries have also set guidelines, including the Financial Services Technology Consortium (FSTC) for tracking IT Assets with RFID, the Computer Technology Industry AssociationCompTIAfor certifying RFID engineers, and theInternational Air Transport Association(IATA) for luggage in airports.[citation needed]
Every country can set its own rules forfrequency allocationfor RFID tags, and not all radio bands are available in all countries. These frequencies are known as theISM bands(Industrial Scientific and Medical bands). The return signal of the tag may still causeinterferencefor other radio users.[citation needed]
In North America, UHF can be used unlicensed for 902–928 MHz (±13 MHz from the 915 MHz center frequency), but restrictions exist for transmission power.[citation needed]In Europe, RFID and other low-power radio applications are regulated byETSIrecommendationsEN 300 220andEN 302 208, andEROrecommendation 70 03, allowing RFID operation with somewhat complex band restrictions from 865–868 MHz.[citation needed]Readers are required to monitor a channel before transmitting ("Listen Before Talk"); this requirement has led to some restrictions on performance, the resolution of which is a subject of current[when?]research. The North American UHF standard is not accepted in France as it interferes with its military bands.[citation needed]On July 25, 2012, Japan changed its UHF band to 920 MHz, more closely matching the United States' 915 MHz band, establishing an international standard environment for RFID.[citation needed]
In some countries, a site license is needed, which needs to be applied for at the local authorities, and can be revoked.[citation needed]
As of 31 October 2014, regulations are in place in 78 countries representing approximately 96.5% of the world's GDP, and work on regulations was in progress in three countries representing approximately 1% of the world's GDP.[119]
Standardsthat have been made regarding RFID include:
In order to ensure global interoperability of products, several organizations have set up additional standards forRFID testing. These standards include conformance, performance and interoperability tests.[citation needed]
EPC Gen2 is short forEPCglobal UHF Class 1 Generation 2.
EPCglobal, a joint venture betweenGS1and GS1 US, is working on international standards for the use of mostly passive RFID and theElectronic Product Code(EPC) in the identification of many items in thesupply chainfor companies worldwide.
One of the missions of EPCglobal was to simplify the Babel of protocols prevalent in the RFID world in the 1990s. Two tag air interfaces (the protocol for exchanging information between a tag and a reader) were defined (but not ratified) by EPCglobal prior to 2003. These protocols, commonly known as Class 0 and Class 1, saw significant commercial implementation in 2002–2005.[121]
In 2004, the Hardware Action Group created a new protocol, the Class 1 Generation 2 interface, which addressed a number of problems that had been experienced with Class 0 and Class 1 tags. The EPC Gen2 standard was approved in December 2004. This was approved after a contention fromIntermecthat the standard may infringe a number of their RFID-related patents. It was decided that the standard itself does not infringe their patents, making the standard royalty free.[122]The EPC Gen2 standard was adopted with minor modifications as ISO 18000-6C in 2006.[123]
In 2007, the lowest cost of Gen2 EPC inlay was offered by the now-defunct company SmartCode, at a price of $0.05 apiece in volumes of 100 million or more.[124]
Not every successful reading of a tag (an observation) is useful for business purposes. A large amount of data may be generated that is not useful for managing inventory or other applications. For example, a customer moving a product from one shelf to another, or a pallet load of articles that passes several readers while being moved in a warehouse, are events that do not produce data that are meaningful to an inventory control system.[125]
Event filtering is required to reduce this data inflow to a meaningful depiction of moving goods passing a threshold. Various concepts[example needed]have been designed, mainly offered asmiddlewareperforming the filtering from noisy and redundant raw data to significant processed data.[citation needed]
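As an illustration of the kind of filtering such middleware performs, the following is a minimal sketch (in Python, not any vendor's actual middleware API) that collapses repeated sightings of the same tag at the same reader within a short smoothing window into a single observation; the window length and data layout are illustrative assumptions.

# A minimal sketch of RFID event filtering: raw reads are collapsed so that
# repeated sightings of the same tag at the same reader within a smoothing
# window produce a single "observation" event.
from collections import namedtuple

RawRead = namedtuple("RawRead", "tag_id reader_id timestamp")

def filter_reads(raw_reads, window_seconds=5.0):
    """Collapse redundant reads of the same tag at the same reader."""
    last_seen = {}           # (tag_id, reader_id) -> time of the last sighting
    events = []
    for read in sorted(raw_reads, key=lambda r: r.timestamp):
        key = (read.tag_id, read.reader_id)
        if key not in last_seen or read.timestamp - last_seen[key] > window_seconds:
            events.append(read)          # a meaningful observation
        last_seen[key] = read.timestamp  # refresh the window on every sighting
    return events

Real middleware typically adds further stages such as reader coordination, tag-presence state machines and business-rule dispatch, but a deduplication step of this kind is the core of turning raw reads into meaningful events.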
The frequencies used for UHF RFID in the USA are as of 2007 incompatible with those of Europe or Japan. Furthermore, no emerging standard has yet become as universal as thebarcode.[126]To address international trade concerns, it is necessary to use a tag that is operational within all of the international frequency domains.
A primary RFID security concern is the illicit tracking of RFID tags. Tags, which are world-readable, pose a risk to both personal location privacy and corporate/military security. Such concerns have been raised with respect to theUnited States Department of Defense's recent[when?]adoption of RFID tags forsupply chain management.[127]More generally, privacy organizations have expressed concerns in the context of ongoing efforts to embed electronic product code (EPC) RFID tags in general-use products. This is mostly as a result of the fact that RFID tags can be read, and legitimate transactions with readers can be eavesdropped on, from non-trivial distances. RFID used in access control,[128]payment and eID (e-passport) systems operate at a shorter range than EPC RFID systems but are also vulnerable toskimmingand eavesdropping, albeit at shorter distances.[129]
A second method of prevention is by using cryptography.Rolling codesandchallenge–response authentication(CRA) are commonly used to foil monitor-repetition of the messages between the tag and reader, as any messages that have been recorded would prove to be unsuccessful on repeat transmission.[clarification needed]Rolling codes rely upon the tag's ID being changed after each interrogation, while CRA uses software to ask for acryptographicallycoded response from the tag. The protocols used during CRA can besymmetric, or may usepublic key cryptography.[130]
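A minimal sketch of the challenge-response idea is shown below, assuming a symmetric secret shared between tag and reader and an HMAC as the keyed function; this illustrates the principle only and is not the cryptography mandated by any particular RFID standard.

# A minimal sketch of challenge-response authentication (CRA), assuming a
# shared secret key provisioned on the tag.
import hmac, hashlib, os

SHARED_KEY = b"per-tag-secret"          # hypothetical provisioned key

def tag_response(challenge: bytes) -> bytes:
    # The tag computes a keyed digest of the reader's fresh challenge.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def reader_authenticates_tag() -> bool:
    challenge = os.urandom(16)          # fresh nonce, so replayed responses fail
    response = tag_response(challenge)  # sent back over the air
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

Because the reader issues a fresh random challenge on each interrogation, a previously recorded response is useless later, which is exactly the property that defeats simple replay of sniffed messages.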
While a variety of secure protocols have been suggested for RFID tags, in order to support long read range at low cost, many RFID tags have barely enough power available to support very low-power and therefore simple security protocols such as cover-coding.[131]
Unauthorized reading of RFID tags presents a risk to privacy and to business secrecy.[132]Unauthorized readers can potentially use RFID information to identify or track packages, persons, carriers, or the contents of a package.[130]Several prototype systems are being developed to combat unauthorized reading, including RFID signal interruption,[133]as well as the possibility of legislation, and 700 scientific papers have been published on this matter since 2002.[134]There are also concerns that the database structure ofObject Naming Servicemay be susceptible to infiltration, similar todenial-of-service attacks, after the EPCglobal Network ONS root servers were shown to be vulnerable.[135]
Microchip–induced tumours have been noted during animal trials.[136][137]
In an effort to prevent the passive "skimming" of RFID-enabled cards or passports, the U.S.General Services Administration(GSA) issued a set of test procedures for evaluating electromagnetically opaque sleeves.[138]For shielding products to be in compliance with FIPS-201 guidelines, they must meet or exceed this published standard; compliant products are listed on the website of the U.S. CIO's FIPS-201 Evaluation Program.[139]The United States government requires that when new ID cards are issued, they must be delivered with an approved shielding sleeve or holder.[140]Although many wallets and passport holders are advertised to protect personal information, there is little evidence that RFID skimming is a serious threat; data encryption and use ofEMVchips rather than RFID makes this sort of theft rare.[141][142]
There are contradictory opinions as to whether aluminum can prevent reading of RFID chips. Some people claim that aluminum shielding, essentially creating aFaraday cage, does work.[143]Others claim that simply wrapping an RFID card in aluminum foil only makes transmission more difficult and is not completely effective at preventing it.[144]
Shielding effectiveness depends on the frequency being used. Low-frequency (LowFID) tags, like those used in implantable devices for humans and pets, are relatively resistant to shielding, although thick metal foil will prevent most reads. High-frequency (HighFID) tags (13.56 MHz smart cards and access badges) are sensitive to shielding and are difficult to read when within a few centimetres of a metal surface. Ultra-high-frequency (UHFID) tags (pallets and cartons) are difficult to read when placed within a few millimetres of a metal surface, although their read range is actually increased when they are spaced 2–4 cm from a metal surface due to positive reinforcement of the reflected wave and the incident wave at the tag.[145]
The use of RFID has engendered considerable controversy and some consumer privacy advocates have initiated product boycotts. Consumer privacy experts Katherine Albrecht and Liz McIntyre are two prominent critics of the "spychip" technology, and two main privacy concerns are commonly raised regarding RFID.[citation needed]
Most concerns revolve around the fact that RFID tags affixed to products remain functional even after the products have been purchased and taken home; thus, they may be used forsurveillanceand other purposes unrelated to their supply chain inventory functions.[146]
The RFID Network responded to these fears in the first episode of their syndicated cable TV series, saying that they are unfounded, and let RF engineers demonstrate how RFID works.[147]They provided images of RF engineers driving an RFID-enabled van around a building and trying to take an inventory of items inside. They also discussed satellite tracking of a passive RFID tag.
The concerns raised may be addressed in part by use of theClipped Tag. The Clipped Tag is an RFID tag designed to increase privacy for the purchaser of an item. The Clipped Tag has been suggested byIBMresearchersPaul Moskowitzand Guenter Karjoth. After the point of sale, a person may tear off a portion of the tag. This allows the transformation of a long-range tag into a proximity tag that still may be read, but only at short range – less than a few inches or centimeters. The modification of the tag may be confirmed visually. The tag may still be used later for returns, recalls, or recycling.
However, read range is a function of both the reader and the tag itself. Improvements in technology may increase read ranges for tags. Tags may be read at longer ranges than they are designed for by increasing reader power. The limit on read distance then becomes the signal-to-noise ratio of the signal reflected from the tag back to the reader. Researchers at two security conferences have demonstrated that passive Ultra-HighFID tags normally read at ranges of up to 30 feet can be read at ranges of 50 to 69 feet using suitable equipment.[148][149]
In January 2004, privacy advocates from CASPIAN and the German privacy groupFoeBuDwere invited to the METRO Future Store in Germany, where an RFID pilot project was implemented. It was uncovered by accident that METRO "Payback" customerloyalty cardscontained RFID tags with customer IDs, a fact that was disclosed neither to customers receiving the cards, nor to this group of privacy advocates. This happened despite assurances by METRO that no customer identification data was tracked and all RFID usage was clearly disclosed.[150]
During the UNWorld Summit on the Information Society(WSIS) in November 2005,Richard Stallman, the founder of thefree software movement, protested the use of RFID security cards by covering his card with aluminum foil.[151]
In 2004–2005, theFederal Trade Commissionstaff conducted a workshop and review of RFID privacy concerns and issued a report recommending best practices.[152]
RFID was one of the main topics of the 2006Chaos Communication Congress(organized by theChaos Computer ClubinBerlin) and triggered a large press debate. Topics included electronic passports, Mifare cryptography and the tickets for the FIFA World Cup 2006. Talks showed how the first real-world mass application of RFID at the 2006 FIFA Football World Cup worked. The groupmonochromstaged a "Hack RFID" song.[153]
Some individuals have grown to fear the loss of rights due to RFID human implantation.
By early 2007, Chris Paget of San Francisco, California, showed that RFID information could be pulled from aUS passport cardby using only $250 worth of equipment. This suggests that with the information captured, it would be possible to clone such cards.[154]
According to ZDNet, critics believe that RFID will lead to tracking individuals' every movement and will be an invasion of privacy.[155]In the bookSpyChips: How Major Corporations and Government Plan to Track Your Every Moveby Katherine Albrecht and Liz McIntyre, one is encouraged to "imagine a world of no privacy. Where your every purchase is monitored and recorded in a database and your every belonging is numbered. Where someone many states away or perhaps in another country has a record of everything you have ever bought. What's more, they can be tracked and monitored remotely".[156]
According to an RSA laboratories FAQ, RFID tags can be destroyed by a standard microwave oven;[157]however, some types of RFID tags, particularly those constructed to radiate using large metallic antennas (in particular RF tags andEPCtags), may catch fire if subjected to this process for too long (as would any metallic item inside a microwave oven). This simple method cannot safely be used to deactivate RFID features in electronic devices, or those implanted in living tissue, because of the risk of damage to the "host". However the time required is extremely short (a second or two of radiation) and the method works in many other non-electronic and inanimate items, long before heat or fire become of concern.[158]
Some RFID tags implement a "kill command" mechanism to permanently and irreversibly disable them. This mechanism can be applied if the chip itself is trusted or the mechanism is known by the person that wants to "kill" the tag.
UHF RFID tags that comply with the EPC2 Gen 2 Class 1 standard usually support this mechanism, while protecting the chip from being killed with a password.[159]Guessing or cracking this needed 32-bit password for killing a tag would not be difficult for a determined attacker.[160]
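A rough back-of-the-envelope calculation illustrates why a 32-bit kill password offers little protection against a determined attacker; the guess rates below are illustrative assumptions, not measured figures for any particular reader.

# Back-of-the-envelope sketch of why a 32-bit kill password is weak.
keyspace = 2 ** 32                      # ~4.3 billion possible passwords
for rate in (1_000, 100_000):           # assumed guesses per second
    seconds = keyspace / rate
    print(f"{rate:>7} guesses/s -> about {seconds / 86_400:.1f} days to exhaust")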
|
https://en.wikipedia.org/wiki/Radio-frequency_identification
|
Incomputer security, athreatis a potential negative action or event enabled by avulnerabilitythat results in an unwanted impact to a computer system or application.
A threat can be either a negative "intentional" event (i.e. hacking: an individual cracker or a criminal organization) or an "accidental" negative event (e.g. the possibility of a computer malfunctioning, or the possibility of a natural disaster event such as an earthquake, a fire, or a tornado) or otherwise a circumstance, capability, action, or event (incident is often used as a blanket term).[1] A threat actor is an individual or group that can perform the threat action, such as exploiting a vulnerability to actualise a negative impact. An exploit is the means by which a threat actor takes advantage of a vulnerability to cause an incident.
A more comprehensive definition, tied to anInformation assurancepoint of view, can be found in "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" byNISTofUnited States of America[2]
National Information Assurance Glossarydefinesthreatas:
ENISAgives a similar definition:[3]
The Open Groupdefinesthreatas:[4]
Factor analysis of information riskdefinesthreatas:[5]
National Information Assurance Training and Education Centergives a more articulated definition ofthreat:[6][7]
The term "threat" relates to some other basic security terms as shown in the following diagram:[1]A resource (both physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise theconfidentiality,integrityoravailabilityproperties of resources (potentially different than the vulnerable one) of the organization and others involved parties (customers, suppliers).The so-calledCIA triadis the basis ofinformation security.
Theattackcan beactivewhen it attempts to alter system resources or affect their operation: so it compromises Integrity or Availability. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources: so it compromises Confidentiality.[1]
OWASP(see figure) depicts the same phenomenon in slightly different terms: a threat agent through an attack vector exploits a weakness (vulnerability) of the system and the relatedsecurity controlscausing a technical impact on an IT resource (asset) connected to a business impact.
A set of policies concerned with information security management, the Information security management system (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to carry out a security strategy set up following the rules and regulations applicable in a country. Countermeasures are also called security controls; when applied to the transmission of information they are named security services.[8]
The overall picture represents therisk factorsof the risk scenario.[9]
The widespread dependence on computer systems, and the consequent rise in the impact of a successful attack, has led to a new term, cyberwarfare.
Nowadays many real attacks exploit psychology at least as much as technology. Phishing, pretexting and other methods are called social engineering techniques.[10] Web 2.0 applications, specifically social network services, can be a means to get in touch with people in charge of system administration or even system security, inducing them to reveal sensitive information.[11] One famous case is Robin Sage.[12]
The most widespread documentation on computer insecurity is about technical threats such as computer viruses, trojans and other malware, but a serious study to apply cost-effective countermeasures can only be conducted following a rigorous IT risk analysis in the framework of an ISMS: a purely technical approach will leave out psychological attacks, which are increasing threats.
Threats can be classified according to their type and origin:[13]
Note that a threat type can have multiple origins.
Recent trends in computer threats show an increase in ransomware attacks, supply chain attacks, and fileless malware. Ransomware attacks involve the encryption of a victim's files and a demand for payment to restore access. Supply chain attacks target the weakest links in a supply chain to gain access to high-value targets. Fileless malware attacks use techniques that allow malware to run in memory, making it difficult to detect.[14]
Below are a few common emerging threats:
Microsoft published a mnemonic, STRIDE,[15] from the initials of the threat groups: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege.
Microsoft previously rated the risk of security threats using five categories in a classification called DREAD, a risk assessment model. The model is considered obsolete by Microsoft.
The categories were: Damage potential, Reproducibility, Exploitability, Affected users, and Discoverability.
The DREAD name comes from the initials of the five categories listed.
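A minimal sketch of how a DREAD-style rating is often applied in practice is shown below, assuming each category is scored from 1 to 10 and the overall risk is taken as the plain average; this scoring scale is a common simplification rather than part of any formal Microsoft specification.

# A minimal DREAD-style rating sketch: each category is scored 1-10 and the
# overall risk is the plain average of the five scores.
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    ratings = [damage, reproducibility, exploitability, affected_users, discoverability]
    return sum(ratings) / len(ratings)

print(dread_score(8, 10, 7, 10, 10))    # e.g. 9.0 -> treat as high risk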
The spread over a network of threats can lead to dangerous situations. In military and civil fields, threat level has been defined: for exampleINFOCONis a threat level used by the US. Leadingantivirus softwarevendors publish global threat level on their websites.[16][17]
The termThreat Agentis used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company.[18]
Threat agents are individuals within a threat population; practically anyone and anything can, under the right circumstances, be a threat agent – the well-intentioned, but inept, computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable.[5]
Threat agents can take one or more of the following actions against an asset:[5]
Each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization's productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that does not play a critical role in productivity would not directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it is the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will be driven primarily by that agent's motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than they are to steal an easilypawnedasset like a laptop.[5]
It is important to separate the concept of the event in which a threat agent gets in contact with the asset (even virtually, i.e. through the network) from the event in which a threat agent acts against the asset.[5]
OWASP collects a list of potential threat agents in order to help system designers and programmers avoid inserting vulnerabilities into the software.[18]
Threat Agent = Capabilities + Intentions + Past Activities
These individuals and groups can be classified as follows:[18]
Threat sources are those who wish a compromise to occur. It is a term used to distinguish them from threat agents/actors who are those who carry out the attack and who may be commissioned or persuaded by the threat source to knowingly or unknowingly carry out the attack.[19]
Threat actionis an assault on system security.A completesecurity architecturedeals with both intentional acts (i.e. attacks) and accidental events.[20]
Various kinds of threat actions are defined as subentries under "threat consequence".
Threat analysisis the analysis of the probability of occurrences and consequences of damaging actions to a system.[1]It is the basis ofrisk analysis.
Threat modelingis a process that helps organizations identify and prioritize potential threats to their systems. It involves analyzing the system's architecture, identifying potential threats, and prioritizing them based on their impact and likelihood. By using threat modeling, organizations can develop a proactive approach to security and prioritize their resources to address the most significant risks.[21]
Threat intelligenceis the practice of collecting and analyzing information about potential and current threats to an organization. This information can include indicators of compromise, attack techniques, and threat actor profiles. By using threat intelligence, organizations can develop a better understanding of the threat landscape and improve their ability to detect and respond to threats.[22]
Threat consequenceis a security violation that results from a threat action.[1]Includes disclosure, deception, disruption, and usurpation.
The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence.[1]Threat actions that are accidental events are marked by "*".
A threat landscape (or threat environment) is a collection of threats in a particular domain or context, with information on identified vulnerable assets, threats, risks, threat actors and observed trends.[23][24]
Threats should be managed by operating an ISMS, performing all theIT risk managementactivities foreseen by laws, standards and methodologies.
Very large organizations tend to adoptbusiness continuity managementplans in order to protect, maintain and recover business-critical processes and systems. Some of these plans are implemented bycomputer security incident response team(CSIRT).
Threat management must identify, evaluate, and categorize threats. There are two primary methods ofthreat assessment:
Many organizations perform only a subset of these methods, adopting countermeasures based on a non-systematic approach, resulting incomputer insecurity.
Informationsecurity awarenessis a significant market. There has been a lot of software developed to deal with IT threats, including bothopen-source softwareandproprietary software.[25]
Threat management involves a wide variety of threats, including physical threats like flood and fire. While the ISMS risk assessment process does incorporate threat management for cyber threats such as remote buffer overflows, the risk assessment process does not include processes such as threat intelligence management or response procedures.
Cyber threat management (CTM) is emerging as the best practice for managing cyber threats beyond the basic risk assessment found in ISMS. It enables early identification of threats, data-driven situational awareness, accurate decision-making, and timely threat mitigating actions.[26]
CTM includes:
Cyber threat huntingis "the process of proactively and iteratively searching through networks to detect and isolate advanced threats that evade existing security solutions."[27]This is in contrast to traditional threat management measures, such asfirewalls,intrusion detection systems, andSIEMs, which typically involve an investigationafterthere has been a warning of a potential threat, or an incident has occurred.
Threat hunting can be a manual process, in which a security analyst sifts through various data information using their knowledge and familiarity with the network to create hypotheses about potential threats. To be even more effective and efficient, however, threat hunting can be partially automated, or machine-assisted, as well. In this case, the analyst utilizes software that harnesses machine learning and user and entity behaviour analytics (UEBA) to inform the analyst of potential risks. The analyst then investigates these potential risks, tracking suspicious behaviour in the network. Thus hunting is an iterative process, meaning that it must be continuously carried out in a loop, beginning with a hypothesis. There are three types of hypotheses: analytics-driven, situational-awareness-driven, and intelligence-driven.
The analyst researches their hypothesis by going through vast amounts of data about the network. The results are then stored so that they can be used to improve the automated portion of the detection system and to serve as a foundation for future hypotheses.
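As a concrete illustration of the machine-assisted, analytics-driven side of hunting described above, the following minimal sketch flags hosts whose latest daily event count deviates strongly from their own historical baseline; the z-score threshold and data layout are illustrative assumptions, not the behaviour of any real UEBA product.

# A minimal sketch of analytics-assisted hunting: flag hosts whose latest
# daily event count deviates strongly from their own baseline.
import statistics

def anomalous_hosts(daily_counts, z_threshold=3.0):
    """daily_counts: dict host -> list of event counts, last value = today."""
    flagged = []
    for host, counts in daily_counts.items():
        baseline, today = counts[:-1], counts[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0
        if (today - mean) / stdev > z_threshold:
            flagged.append(host)        # hypothesis: host may warrant investigation
    return flagged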
TheSANS Institutehas conducted research and surveys on the effectiveness of threat hunting to track and disrupt cyber adversaries as early in their process as possible. According to a survey performed in 2019, "61% [of the respondents] report at least an 11% measurable improvement in their overall security posture" and 23.6% of the respondents have experienced a 'significant improvement' in reducing thedwell time.[29]
To protect against computer threats, it is essential to keep software up to date, use strong and unique passwords, and be cautious when clicking on links or downloading attachments. Additionally, using antivirus software and regularly backing up data can help mitigate the impact of a threat.
|
https://en.wikipedia.org/wiki/Threat_(computer)
|
Bio-inspired computing, short forbiologically inspired computing, is a field of study which seeks to solve computer science problems using models of biology. It relates toconnectionism,social behavior, andemergence. Withincomputer science, bio-inspired computing relates to artificial intelligence and machine learning. Bio-inspired computing is a major subset ofnatural computation.
Early Ideas
The ideas behind biological computing trace back to 1936 and the first description of an abstract computer, which is now known as a Turing machine. Turing first described the abstract construct using a biological specimen. Turing imagined a mathematician that has three important attributes.[1] He always has a pencil with an eraser, an unlimited number of papers and a working set of eyes. The eyes allow the mathematician to see and perceive any symbols written on the paper while the pencil allows him to write and erase any symbols that he wants. Lastly, the unlimited paper serves as memory, allowing him to store anything he wants. Using these ideas he was able to describe an abstraction of the modern digital computer. However, Turing mentioned that anything that can perform these functions can be considered such a machine, and he even said that electricity should not be required to describe digital computation and machine thinking in general.[2]
Neural Networks
First described in 1943 by Warren McCulloch and Walter Pitts, neural networks are a prevalent example of biological systems inspiring the creation of computer algorithms.[3] They first mathematically described that a system of simplistic neurons was able to produce simple logical operations such as logical conjunction, disjunction and negation. They further showed that a system of neural networks can be used to carry out any calculation that requires finite memory. Around 1970 the research around neural networks slowed down, and many consider a 1969 book by Marvin Minsky and Seymour Papert as the main cause.[4][5] Their book showed that neural network models of the time could only model systems based on Boolean functions that are true only after a certain threshold value. Such functions are also known as threshold functions. The book also showed that a large number of systems cannot be represented this way, meaning that a large number of systems cannot be modeled by such networks. Another book, by David Rumelhart and James McClelland in 1986, brought neural networks back to the spotlight by demonstrating the back-propagation algorithm, which allowed the development of multi-layered neural networks that did not adhere to those limits.[6]
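The threshold-unit idea at the root of this history can be illustrated with a few lines of code: a single McCulloch-Pitts-style neuron with fixed weights suffices for conjunction, disjunction and negation. The sketch below is a simplified illustration, not a reconstruction of their exact formalism.

# A minimal McCulloch-Pitts-style threshold neuron realizing AND, OR and NOT.
def neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    neuron([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0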
Ant Colonies
Douglas Hofstadter in 1979 described an idea of a biological system capable of performing intelligent calculations even though the individuals comprising the system might not be intelligent.[7] More specifically, he gave the example of an ant colony that can carry out intelligent tasks together even though each individual ant cannot, exhibiting something called "emergent behavior." Azimi et al. in 2009 showed that what they described as the "ant colony" algorithm, a clustering algorithm, is able to output the number of clusters and produce highly competitive final clusters comparable to those of other traditional algorithms.[8] Lastly, Hölder and Wilson in 2009 concluded, using historical data, that ants have evolved to function as a single "superorganism" colony.[9] This was a very important result, since it suggested that group selection evolutionary algorithms coupled together with algorithms similar to the "ant colony" algorithm can potentially be used to develop more powerful algorithms.
Some areas of study in biologically inspired computing, and their biological counterparts:
Bio-inspired algorithms that work on a population of possible solutions, in the context of evolutionary algorithms or of swarm intelligence algorithms, are subdivided into Population Based Bio-Inspired Algorithms (PBBIA).[10] They include Evolutionary Algorithms, Particle Swarm Optimization, Ant colony optimization algorithms and Artificial bee colony algorithms.
Bio-inspired computing can be used to train a virtual insect. The insect is trained to navigate in an unknown terrain for finding food equipped with six simple rules:
The virtual insect controlled by the trainedspiking neural networkcan find food after training in any unknown terrain.[11]After several generations of rule application it is usually the case that some forms of complex behaviouremerge. Complexity gets built upon complexity until the result is something markedly complex, and quite often completely counterintuitive from what the original rules would be expected to produce (seecomplex systems). For this reason, when modeling theneural network, it is necessary to accurately model anin vivonetwork, by live collection of "noise" coefficients that can be used to refine statistical inference and extrapolation as system complexity increases.[12]
Natural evolution is a good analogy to this method–the rules of evolution (selection,recombination/reproduction,mutationand more recentlytransposition) are in principle simple rules, yet over millions of years have produced remarkably complex organisms. A similar technique is used ingenetic algorithms.
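A minimal genetic algorithm sketch along these lines is shown below, applying selection, recombination and mutation to maximize the number of 1-bits in a string; population size, mutation rate and the toy fitness function are illustrative choices.

# A minimal genetic algorithm: selection, recombination, mutation on bit strings.
import random

def evolve(bits=20, pop_size=30, generations=50, mutation_rate=0.02):
    fitness = lambda ind: sum(ind)
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, bits)
            child = a[:cut] + b[cut:]                   # recombination
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                    # mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())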
Brain-inspired computing refers to computational models and methods that are mainly based on the mechanism of the brain, rather than completely imitating the brain. The goal is to enable the machine to realize various cognitive abilities and coordination mechanisms of human beings in a brain-inspired manner, and finally achieve or exceed Human intelligence level.
Artificial intelligence researchers are now aware of the benefits of learning from the brain's information processing mechanisms, and progress in brain science and neuroscience provides the necessary basis for artificial intelligence to learn from them. Brain and neuroscience researchers are also trying to apply the understanding of brain information processing to a wider range of scientific fields. The development of the discipline benefits from the push of information technology and smart technology; in turn, brain science and neuroscience will also inspire the next generation of information technology.
Advances in brain and neuroscience, especially with the help of new technologies and new equipment, support researchers to obtain multi-scale, multi-type biological evidence of the brain through different experimental methods, and are trying to reveal the structure of bio-intelligence from different aspects and functional basis. From the microscopic neurons, synaptic working mechanisms and their characteristics, to the mesoscopicnetwork connection model, to the links in the macroscopic brain interval and their synergistic characteristics, the multi-scale structure and functional mechanisms of brains derived from these experimental and mechanistic studies will provide important inspiration for building a future brain-inspired computing model.[13]
Broadly speaking, brain-inspired chip refers to a chip designed with reference to the structure of human brain neurons and the cognitive mode of human brain. Obviously, the "neuromorphicchip" is a brain-inspired chip that focuses on the design of the chip structure with reference to the human brain neuron model and its tissue structure, which represents a major direction of brain-inspired chip research. Along with the rise and development of “brain plans” in various countries, a large number of research results on neuromorphic chips have emerged, which have received extensive international attention and are well known to the academic community and the industry. For example, EU-backedSpiNNakerand BrainScaleS, Stanford'sNeurogrid, IBM'sTrueNorth, and Qualcomm'sZeroth.
TrueNorth is a brain-inspired chip that IBM has been developing for nearly 10 years. The US DARPA program has been funding IBM to develop pulsed neural network chips for intelligent processing since 2008. In 2011, IBM first developed two cognitive silicon prototypes by simulating brain structures that could learn and process information like the brain. Each neuron of a brain-inspired chip is cross-connected with massive parallelism. In 2014, IBM released a second-generation brain-inspired chip called "TrueNorth." Compared with the first-generation brain-inspired chips, the performance of the TrueNorth chip increased dramatically: the number of neurons increased from 256 to 1 million, the number of programmable synapses increased from 262,144 to 256 million, and synaptic operation proceeds with a total power consumption of 70 mW, or 20 mW per square centimeter. At the same time, the volume of each TrueNorth core is only 1/15 of that of the first generation of brain chips. At present, IBM has developed a prototype of a neuron computer that uses 16 TrueNorth chips with real-time video processing capabilities.[14] The extremely high specifications and performance of the TrueNorth chip caused a great stir in the academic world upon its release.
In 2012, the Institute of Computing Technology of the Chinese Academy of Sciences (CAS) and the French Inria collaborated to develop the first chip in the world to support a deep neural network processor architecture, the "Cambrian" chip.[15] The technology won best-paper awards at leading international conferences in the field of computer architecture, ASPLOS and MICRO, and its design method and performance have been recognized internationally. The chip can be regarded as an outstanding representative of the research direction of brain-inspired chips.
The human brain is a product of evolution. Although its structure and information processing mechanism are constantly optimized, compromises in the evolution process are inevitable. The cranial nervous system is a multi-scale structure. There are still several important problems in the mechanism of information processing at each scale, such as the fine connection structure of neuron scales and the mechanism of brain-scale feedback. Therefore, even a comprehensive calculation of the number of neurons and synapses is only 1/1000 of the size of the human brain, and it is still very difficult to study at the current level of scientific research.[16]Recent advances in brain simulation linked individual variability in human cognitiveprocessing speedandfluid intelligenceto thebalance of excitation and inhibitioninstructural brain networks,functional connectivity,winner-take-all decision-makingandattractorworking memory.[17]
In future research on cognitive brain computing models, it is necessary to model the brain's information processing system based on the results of multi-scale brain neural system data analysis, construct a brain-inspired multi-scale neural network computing model, and simulate the brain's multi-modality at multiple scales, including intelligent behavioral abilities such as perception, self-learning and memory, and choice. Machine learning algorithms are not flexible and require high-quality sample data that is manually labeled on a large scale. Training models requires a lot of computational overhead. Brain-inspired artificial intelligence still lacks advanced cognitive ability and inferential learning ability.
Most of the existing brain-inspired chips are still based on von Neumann architecture research, and most chip manufacturing materials are still traditional semiconductor materials. The neural chip borrows only the most basic unit of brain information processing; more fundamental mechanisms, such as the fusion of storage and computation, the pulse-discharge mechanism, the connection mechanism between neurons, and the interplay between information-processing units at different scales, have not yet been integrated into the study of brain-inspired computing architectures. An important international trend is now to develop neural computing components such as memristors, memory capacitors, and sensory sensors based on new materials such as nanomaterials, thereby supporting the construction of more complex brain-inspired computing architectures. The development of brain-inspired computers and large-scale brain computing systems based on brain-inspired chips also requires a corresponding software environment to support their wide application.
|
https://en.wikipedia.org/wiki/Biologically_inspired_computing
|
Inmathematics, aself-avoiding walk(SAW) is asequenceof moves on alattice(alattice path) that does not visit the same point more than once. This is a special case of thegraph theoreticalnotion of apath. Aself-avoiding polygon(SAP) is a closed self-avoiding walk on a lattice. Very little is known rigorously about the self-avoiding walk from a mathematical perspective, although physicists have provided numerous conjectures that are believed to be true and are strongly supported by numerical simulations.
Incomputational physics, a self-avoiding walk is a chain-like path inR2orR3with a certain number of nodes, typically a fixed step length and has the property that it doesn't cross itself or another walk. A system of SAWs satisfies the so-calledexcluded volumecondition. In higher dimensions, the SAW is believed to behave much like the ordinaryrandom walk.
SAWs and SAPs play a central role in the modeling of thetopologicalandknot-theoreticbehavior of thread- and loop-like molecules such asproteins. Indeed, SAWs may have first been introduced by the chemistPaul Flory[1][dubious–discuss]in order to model the real-life behavior of chain-like entities such assolventsandpolymers, whose physical volume prohibits multiple occupation of the same spatial point.
SAWs arefractals. For example, ind= 2thefractal dimensionis 4/3, ford= 3it is close to 5/3 while ford≥ 4the fractal dimension is2. The dimension is called the uppercritical dimensionabove which excluded volume is negligible. A SAW that does not satisfy the excluded volume condition was recently studied to model explicitsurface geometryresulting from expansion of a SAW.[2][clarification needed]
The properties of SAWs cannot be calculated analytically, so numericalsimulationsare employed. Thepivot algorithmis a common method forMarkov chain Monte Carlosimulations for the uniformmeasureonn-step self-avoiding walks. The pivot algorithm works by taking a self-avoiding walk and randomly choosing a point on this walk, and then applyingsymmetricaltransformations (rotations and reflections) on the walk after thenth step to create a new walk.
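A minimal sketch of the pivot algorithm on the square lattice is shown below: a pivot site is chosen at random, a random lattice symmetry is applied to the portion of the walk after that site, and the move is accepted only if the resulting walk is still self-avoiding (otherwise the old walk is kept as the next state of the Markov chain). The walk length and number of steps are illustrative choices.

# A minimal pivot-algorithm sketch for self-avoiding walks on the square lattice.
import random

SYMMETRIES = [                     # non-identity symmetries of Z^2 about the origin
    lambda x, y: (-y, x), lambda x, y: (-x, -y), lambda x, y: (y, -x),
    lambda x, y: (-x, y), lambda x, y: (x, -y),
    lambda x, y: (y, x),  lambda x, y: (-y, -x),
]

def pivot_step(walk):
    """walk: list of (x, y) points of an n-step SAW starting at (0, 0)."""
    k = random.randrange(1, len(walk) - 1)      # pivot site (not an endpoint)
    px, py = walk[k]
    sym = random.choice(SYMMETRIES)
    head = walk[: k + 1]
    tail = [tuple(p + q for p, q in zip((px, py), sym(x - px, y - py)))
            for (x, y) in walk[k + 1:]]         # transform the tail about the pivot
    new_walk = head + tail
    return new_walk if len(set(new_walk)) == len(new_walk) else walk

walk = [(i, 0) for i in range(20)]              # start from a straight rod
for _ in range(10_000):
    walk = pivot_step(walk)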
Calculating the number of self-avoiding walks in any given lattice is a commoncomputational problem. There is currently no known formula, although there are rigorous methods of approximation.[3][4]
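The difficulty is easy to see from a direct enumeration: the brute-force sketch below reproduces the known counts for the square lattice (c1 = 4, c2 = 12, c3 = 36, c4 = 100, ...), but its running time grows exponentially with n, which is why only approximation methods are available for long walks.

# Brute-force count of n-step self-avoiding walks on the square lattice.
def count_saws(n, point=(0, 0), visited=None):
    visited = visited or {(0, 0)}
    if n == 0:
        return 1
    x, y = point
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

print([count_saws(n) for n in range(1, 7)])   # [4, 12, 36, 100, 284, 780]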
One of the phenomena associated with self-avoiding walks and statistical physics models in general is the notion of universality, that is, independence of macroscopic observables from microscopic details, such as the choice of the lattice. One important quantity that appears in conjectures for universal laws is the connective constant, defined as follows. Let cn denote the number of n-step self-avoiding walks. Since every (n + m)-step self-avoiding walk can be decomposed into an n-step self-avoiding walk and an m-step self-avoiding walk, it follows that {\displaystyle c_{n+m}\leq c_{n}c_{m}}. Therefore, the sequence {\displaystyle \{\log c_{n}\}} is subadditive and we can apply Fekete's lemma to show that the following limit exists: {\displaystyle \mu =\lim _{n\to \infty }c_{n}^{1/n}}.
μ is called the connective constant; since cn depends on the particular lattice chosen for the walk, so does μ. The exact value of μ is only known for the hexagonal lattice, found by Stanislav Smirnov and Hugo Duminil-Copin, where it is equal to {\displaystyle {\sqrt {2+{\sqrt {2}}}}}.[5]
For other lattices, μ has only been approximated numerically, and is believed not to even be an algebraic number. It is conjectured that[6] {\displaystyle c_{n}\approx \mu ^{n}n^{\frac {11}{32}}}
asn→ ∞, whereμdepends on the lattice, but the power law correctionn1132{\displaystyle n^{\frac {11}{32}}}does not; in other words, this law is believed to be universal.
Self-avoiding walks have also been studied in the context ofnetwork theory.[7]In this context, it is customary to treat the SAW as a dynamical process, such that in every time-step a walker randomly hops between neighboring nodes of the network. The walk ends when the walker reaches a dead-end state, such that it can no longer progress to newly un-visited nodes. It was recently found that onErdős–Rényinetworks, the distribution of path lengths of such dynamically grown SAWs can be calculated analytically, and follows theGompertz distribution.[8]For arbitrary networks, the distribution of path lengths of the walk, thedegree distributionof the non-visited network and thefirst-hitting-timedistribution to a node can be obtained by solving a set of coupled recurrence equations.[9]
Consider the uniform measure onn-step self-avoiding walks in the full plane. It is currently unknown whether the limit of the uniform measure asn→ ∞induces a measure on infinite full-plane walks. However,Harry Kestenhas shown that such a measure exists for self-avoiding walks in the half-plane. One important question involving self-avoiding walks is the existence and conformal invariance of thescaling limit, that is, the limit as the length of the walk goes to infinity and the mesh of the lattice goes to zero. Thescaling limitof the self-avoiding walk is conjectured to be described bySchramm–Loewner evolutionwith parameterκ=8/3.
|
https://en.wikipedia.org/wiki/Self-avoiding_walk
|
Mathematical puzzlesmake up an integral part ofrecreational mathematics. They have specific rules, but they do not usually involve competition between two or more players. Instead, to solve such apuzzle, the solver must find a solution that satisfies the given conditions. Mathematical puzzles require mathematics to solve them.Logic puzzlesare a common type of mathematical puzzle.
Conway's Game of Lifeandfractals, as two examples, may also be considered mathematical puzzles even though the solver interacts with them only at the beginning by providing a set of initial conditions. After these conditions are set, the rules of the puzzle determine all subsequent changes and moves. Many of the puzzles are well known because they were discussed byMartin Gardnerin his "Mathematical Games" column in Scientific American. Mathematical puzzles are sometimes used to motivate students in teaching elementary schoolmath problemsolving techniques.[1]Creative thinking– or "thinking outside the box" – often helps to find the solution.
The fields ofknot theoryandtopology, especially their non-intuitive conclusions, are often seen as a part of recreational mathematics.
|
https://en.wikipedia.org/wiki/Mathematical_puzzle
|
Asystem on a chip(SoC) is anintegrated circuitthat combines most or all key components of acomputerorelectronic systemonto a single microchip.[1]Typically, an SoC includes acentral processing unit(CPU) withmemory,input/output, anddata storagecontrol functions, along with optional features like agraphics processing unit(GPU),Wi-Ficonnectivity, and radio frequency processing. This high level of integration minimizes the need for separate, discrete components, thereby enhancingpower efficiencyand simplifying device design.
High-performance SoCs are often paired with dedicated memory, such asLPDDR, and flash storage chips, such aseUFSoreMMC, which may be stacked directly on top of the SoC in apackage-on-package(PoP) configuration or placed nearby on the motherboard. Some SoCs also operate alongside specialized chips, such ascellular modems.[2]
Fundamentally, SoCs integrate one or more processor cores with critical peripherals. This comprehensive integration is conceptually similar to how a microcontroller is designed, but provides far greater computational power. This unified design delivers lower power consumption and a reduced semiconductor die area compared to traditional multi-chip architectures, though at the cost of reduced modularity and component replaceability.
SoCs are ubiquitous in mobile computing, where compact, energy-efficient designs are critical. They power smartphones, tablets, and smartwatches, and are increasingly important in edge computing, where real-time data processing occurs close to the data source. By driving the trend toward tighter integration, SoCs have reshaped the design landscape for modern computing devices.[3][4]
In general, there are three distinguishable types of SoCs:
SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches, and netbooks as well asembedded systemsand in applications where previouslymicrocontrollerswould be used.
Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability andmean time between failure, and SoCs offer more advanced functionality and computing power than microcontrollers.[5]Applications includeAI acceleration, embeddedmachine vision,[6]data collection,telemetry,vector processingandambient intelligence. Often embedded SoCs target theinternet of things, multimedia, networking, telecommunications andedge computingmarkets. Some examples of SoCs for embedded applications include theSTMicroelectronicsSTM32, theRaspberry Pi LtdRP2040, and theAMDZynq 7000.
Mobile computing based SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. With increasing memory sizes, high-end SoCs often come without memory and flash storage on the chip; instead, the memory and flash memory are placed right next to, or above (package on package), the SoC.[7] Some examples of mobile computing SoCs include:
In 1992,Acorn Computersproduced theA3010, A3020 and A4000 range of personal computerswith the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous AcornARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top-boxes, as well as later Acorn personal computers.
Tablet and laptop manufacturers have learned lessons from embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighterintegrationof hardware andfirmwaremodules, andLTEand otherwireless networkcommunications integrated on chip (integratednetwork interface controllers).[10]
On modern laptops and mini PCs, the low-power variants ofAMD RyzenandIntel Coreprocessors use SoC design integrating CPU, IGPU, chipset and other processors in a single package. However, such x86 processors still require external memory and storage chips.
An SoC consists of hardwarefunctional units, includingmicroprocessorsthat runsoftware code, as well as acommunications subsystemto connect, control, direct and interface between these functional modules.
An SoC must have at least oneprocessor core, but typically an SoC has more than one core. Processor cores can be amicrocontroller,microprocessor(μP),[11]digital signal processor(DSP) orapplication-specific instruction set processor(ASIP) core.[12]ASIPs haveinstruction setsthat are customized for anapplication domainand designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. TheARM architectureis a common choice for SoC processor cores because some ARM-architecture cores aresoft processorsspecified asIP cores.[11]
SoCs must havesemiconductor memoryblocks to perform their computation, as domicrocontrollersand otherembedded systems. Depending on the application, SoC memory may form amemory hierarchyandcache hierarchy. In the mobile computing market, this is common, but in manylow-powerembedded microcontrollers, this is not necessary. Memory technologies for SoCs includeread-only memory(ROM),random-access memory(RAM), Electrically Erasable Programmable ROM (EEPROM) andflash memory.[11]As in other computer systems, RAM can be subdivided into relatively faster but more expensivestatic RAM(SRAM) and the slower but cheaperdynamic RAM(DRAM). When an SoC has acachehierarchy, SRAM will usually be used to implementprocessor registersand cores'built-in cacheswhereas DRAM will be used formain memory. "Main memory" may be specific to a single processor (which can bemulti-core) when the SoChas multiple processors, in this case it isdistributed memoryand must be sent via§ Intermodule communicationon-chip to be accessed by a different processor.[12]For further discussion of multi-processing memory issues, seecache coherenceandmemory latency.
SoCs include external interfaces, typically for communication protocols. These are often based upon industry standards such as USB, Ethernet, USART, SPI, HDMI, I²C, CSI, etc. These interfaces will differ according to the intended application. Wireless networking protocols such as Wi-Fi, Bluetooth, 6LoWPAN and near-field communication may also be supported.
When needed, SoCs include analog interfaces, including analog-to-digital and digital-to-analog converters, often for signal processing. These may be able to interface with different types of sensors or actuators, including smart transducers. They may interface with application-specific modules or shields.[nb 1] Or they may be internal to the SoC, such as when an analog sensor is built into the SoC and its readings must be converted to digital signals for mathematical processing.
Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution.[12]: 4 DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions.
Typical DSP instructions include multiply-accumulate, fast Fourier transform, fused multiply-add, and convolutions.
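To make the multiply-accumulate idea concrete, here is a minimal Python sketch of a FIR filter expressed as repeated multiply-accumulate steps; the tap coefficients and input samples are arbitrary illustrative values, not drawn from any particular SoC or DSP.

```python
# Minimal sketch of a FIR filter as repeated multiply-accumulate (MAC) steps.
# Coefficients and input samples are arbitrary illustrative values.

def fir_filter(samples, coeffs):
    """Convolve an input stream with filter coefficients using MAC operations."""
    outputs = []
    history = [0.0] * len(coeffs)          # delay line holding recent samples
    for x in samples:
        history = [x] + history[:-1]       # shift the new sample in
        acc = 0.0
        for h, c in zip(history, coeffs):  # one MAC per coefficient (tap)
            acc += h * c                   # multiply-accumulate
        outputs.append(acc)
    return outputs

if __name__ == "__main__":
    taps = [0.25, 0.5, 0.25]               # simple low-pass-like taps (illustrative)
    signal = [0, 1, 2, 3, 2, 1, 0]
    print(fir_filter(signal, taps))
```

On a DSP core with a dedicated MAC unit, the inner loop above collapses to one instruction per tap, which is why multiply-accumulate is singled out for hardware support.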
As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops.
SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits.
SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.[13]
Historically, a shared global computer bus typically connected the different components, also called "blocks", of the SoC.[13] A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard.
Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures.
Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip.[13]: xiii
In the late 2010s, a trend of SoCs implementing communications subsystems in terms of a network-like topology instead of bus-based protocols has emerged. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost.[13]: xiii This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks.[13]: xiii
Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention. Network-on-chip architectures take inspiration from communication protocols like TCP and the Internet protocol suite for on-chip communication,[13] although they typically have fewer network layers. Optimal network-on-chip network architectures are an ongoing area of much research interest. NoC architectures range from traditional distributed computing network topologies such as torus, hypercube, meshes and tree networks to genetic algorithm scheduling to randomized algorithms such as random walks with branching and randomized time to live (TTL).
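As a rough illustration of how deterministic routing on a 2D mesh NoC can work, the following Python sketch implements dimension-ordered (XY) routing; the mesh size and node coordinates are invented for the example and do not describe any specific NoC.

```python
# Sketch of dimension-ordered (XY) routing on a 2D mesh network-on-chip.
# Node coordinates and the 4x4 mesh size are illustrative assumptions.

def xy_route(src, dst):
    """Return the list of (x, y) hops from src to dst, routing X first, then Y."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                     # move along the X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # then along the Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

if __name__ == "__main__":
    # Route a flit from core (0, 0) to core (3, 2) on a 4x4 mesh.
    hops = xy_route((0, 0), (3, 2))
    print(hops)
    print("hop count:", len(hops) - 1)
```

XY routing is attractive in practice because it is simple and deadlock-free on a mesh, although it cannot adapt around congested or faulty links the way more elaborate NoC routing schemes can.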
Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limited floorplanning choices as the number of cores in SoCs increases, so as three-dimensional integrated circuits (3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3D NoCs.[13]
A system on a chip consists of both the hardware, described in § Structure, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations (§ Optimization goals) and constraints.
Most SoCs are developed from pre-qualified hardware component IP core specifications for the hardware elements and execution units, collectively "blocks", described above, together with software device drivers that may control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using computer-aided design tools, specifically electronic design automation tools; the software modules are integrated using a software integrated development environment.
SoC components are also often designed in high-level programming languages such as C++, MATLAB or SystemC and converted to RTL designs through high-level synthesis (HLS) tools such as C to HDL or flow to HDL.[14] HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high-level language commonly known to computer engineers, in a manner independent of time scales, which are typically specified in HDL.[15] Other components can remain software and be compiled and embedded onto soft-core processors included in the SoC as modules in HDL as IP cores.
Once the architecture of the SoC has been defined, any new hardware elements are written in an abstract hardware description language termed register transfer level (RTL), which defines the circuit behavior, or synthesized into RTL from a high-level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is called glue logic.
Chips are verified for logical correctness before being sent to a semiconductor foundry. This process is called functional verification and it accounts for a significant portion of the time and energy expended in the chip design life cycle, often quoted as 70%.[16][17] With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer.
Traditionally, engineers have employed simulation acceleration, emulation or prototyping on reprogrammable hardware to verify and debug hardware and software for SoC designs prior to the finalization of the design, known as tape-out. Field-programmable gate arrays (FPGAs) are favored for prototyping SoCs because FPGA prototypes are reprogrammable, allow debugging and are more flexible than application-specific integrated circuits (ASICs).[18][19]
With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive at over US$1 million.[citation needed]
FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus[20]are used to insert probes in the FPGA RTL that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer.
In parallel, the hardware elements are grouped and passed through a process of logic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as a netlist describing the design as a physical circuit and its interconnections. These netlists are combined with the glue logic connecting the components to produce the schematic description of the SoC as a circuit which can be printed onto a chip. This process is known as place and route and precedes tape-out in the event that the SoCs are produced as application-specific integrated circuits (ASICs).
SoCs must optimize power use, area on die, communication, positioning for locality between modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization were not necessary, the engineers would use a multi-chip module architecture without accounting for the area use, power consumption or performance of the system to the same extent.
Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hard combinatorial optimization problem, and can indeed be NP-hard fairly easily. Therefore, sophisticated optimization algorithms are often required and it may be practical to use approximation algorithms or heuristics in some cases. Additionally, most SoC designs contain multiple variables to optimize simultaneously, so Pareto efficient solutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing trade-offs in system design.
For broader coverage of trade-offs and requirements analysis, see requirements engineering.
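To illustrate what Pareto-efficient design points look like in practice, here is a small hedged Python sketch that filters hypothetical (power, latency) design candidates down to the non-dominated set; every number in it is invented for illustration.

```python
# Sketch: extract the Pareto-efficient set from hypothetical (power, latency) design points.
# Lower is better for both objectives; all numbers are invented for illustration.

def pareto_front(points):
    """Return the points not dominated by any other point (minimization in all objectives)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

if __name__ == "__main__":
    # (power in mW, latency in ns) for imaginary design alternatives
    candidates = [(120, 30), (100, 45), (150, 25), (100, 30), (90, 60)]
    print(pareto_front(candidates))     # [(150, 25), (100, 30), (90, 60)]
```

Each surviving point trades one objective against the other, which is exactly the situation the paragraph above describes: no single design is best in every metric, so the designer chooses among the non-dominated alternatives.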
SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power. SoC systems often require long battery life (such as smartphones), can potentially spend months or years without a power source while needing to maintain autonomous function, and often are limited in power use by a high number of embedded SoCs being networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the total cost of ownership of the SoC. Finally, waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the integral of power consumed with respect to time, and the average rate of power consumption is the product of current and voltage. Equivalently, by Ohm's law, power is current squared times resistance or voltage squared divided by resistance:
P=IV=V2R=I2R{\displaystyle P=IV={\frac {V^{2}}{R}}={I^{2}}{R}}
SoCs are frequently embedded in portable devices such as smartphones, GPS navigation devices, digital watches (including smartwatches) and netbooks. Customers want long battery lives for mobile computing devices, another reason that power consumption must be minimized in SoCs. Multimedia applications are often executed on these devices, including video games, video streaming and image processing, all of which have grown in computational complexity in recent years with user demands and expectations for higher-quality multimedia. Computation is more demanding as expectations move towards 3D video at high resolution with multiple standards, so SoCs performing multimedia tasks must be a computationally capable platform while being low power enough to run off a standard mobile battery.[12]: 3
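As a hedged numerical illustration of the relation above, the following Python snippet evaluates power, resistance and energy for an invented operating point; the voltage, current and run-time figures are placeholders rather than measurements of any real SoC.

```python
# Illustrative power/energy calculation using P = I*V = V^2/R = I^2*R.
# Voltage, current and duration are invented placeholder values.

voltage = 1.0          # volts
current = 0.5          # amperes
duration = 3600.0      # seconds of operation (one hour)

power = current * voltage            # watts
resistance = voltage / current       # ohms, from Ohm's law V = I*R
energy = power * duration            # joules (integral of constant power over time)

print(f"P = {power:.2f} W, R = {resistance:.2f} ohm, E = {energy:.0f} J")
# Cross-check the equivalent forms of P:
print(voltage**2 / resistance, current**2 * resistance)   # both equal power
```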
SoCs are optimized to maximize power efficiency in performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such as edge computing, distributed processing and ambient intelligence require a certain level of computational performance, but power is limited in most SoC environments.
SoC designs are optimized to minimize waste heat output on the chip. As with other integrated circuits, heat generated due to high power density is the bottleneck to further miniaturization of components.[21]: 1 The power densities of high-speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability, causing stress migration, decreased mean time between failures, electromigration, wire bonding failures, metastability and other performance degradation of the SoC over time.[21]: 2–9
In particular, most SoCs are in a small physical area or volume and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high transistor counts on modern devices, oftentimes a layout of sufficient throughput and high transistor density is physically realizable from fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume.[21]: 1
These thermal effects force SoC and other chip designers to apply conservative design margins, creating less performant devices to mitigate the risk of catastrophic failure. Due to increased transistor densities as length scales get smaller, each process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling.[21]: 1
SoCs are optimized to maximize computational and communications throughput.
SoCs are optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the Boolean satisfiability problem.
For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints.
Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets.
Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce § Latency and increase § Throughput for embedded software running on an SoC's § Processor cores. Not every important computing activity in a SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources.
Software running on SoCs often schedules tasks according to network scheduling and randomized scheduling algorithms.
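As one hedged example of such scheduling logic, the Python sketch below simulates earliest-deadline-first (EDF) selection of pending tasks on a single core; EDF is used here only as a representative policy, and the task names, runtimes and deadlines are invented.

```python
# Sketch of earliest-deadline-first (EDF) task selection on a single core.
# Task names, runtimes and deadlines are invented illustrative values.

import heapq

def edf_schedule(tasks):
    """tasks: list of (deadline, runtime, name). Returns (name, finish time, met deadline?)."""
    ready = list(tasks)
    heapq.heapify(ready)                 # min-heap keyed on deadline
    clock, order = 0, []
    while ready:
        deadline, runtime, name = heapq.heappop(ready)
        clock += runtime
        order.append((name, clock, clock <= deadline))
    return order

if __name__ == "__main__":
    demo = [(9, 3, "sensor_read"), (4, 2, "audio_frame"), (12, 4, "net_tx")]
    for name, finish, ok in edf_schedule(demo):
        print(f"{name}: finishes at t={finish}, deadline met: {ok}")
```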
Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs.[12]
SoCs are often analyzed through probabilistic models, queueing networks, and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes.
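A small hedged example of this kind of queueing argument: the snippet below applies Little's law (L = λW) to an imagined NoC router buffer; the arrival rate and latency figures are illustrative assumptions, not measurements.

```python
# Little's law: average occupancy L = arrival rate (lambda) * average time in system (W).
# The arrival rate and latency numbers are illustrative assumptions.

arrival_rate = 2.0e9      # flits per second arriving at a NoC router port
avg_latency = 5.0e-9      # average time a flit spends queued plus in service, in seconds

avg_occupancy = arrival_rate * avg_latency   # expected number of flits resident in the buffer
print(f"Average buffer occupancy: {avg_occupancy:.1f} flits")   # 10.0 flits

# A designer might size the buffer with headroom above this long-run average.
buffer_slots = int(avg_occupancy * 1.5)
print(f"Provisioned buffer depth: {buffer_slots} slots")
```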
SoCs are often modeled with Markov chains, both discrete time and continuous time variants. Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors to allow design decisions to be optimized for the common case.
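The following hedged Python sketch shows one way such a steady-state analysis can be carried out numerically, iterating a small made-up transition matrix over SoC power states until the distribution settles; the states and probabilities are purely illustrative.

```python
# Sketch: steady-state distribution of a discrete-time Markov chain over SoC power states.
# The states and transition probabilities are purely illustrative.

states = ["active", "idle", "sleep"]
# transition[i][j] = probability of moving from states[i] to states[j] each time step
transition = [
    [0.70, 0.25, 0.05],
    [0.30, 0.50, 0.20],
    [0.10, 0.30, 0.60],
]

dist = [1.0, 0.0, 0.0]                    # start fully in the "active" state
for _ in range(1000):                     # iterate until (approximate) convergence
    dist = [sum(dist[i] * transition[i][j] for i in range(3)) for j in range(3)]

for name, p in zip(states, dist):
    print(f"{name}: {p:.3f}")             # long-run fraction of time in each state
```

The long-run fractions produced this way are what feed into "optimize for the common case" decisions such as sizing power rails or choosing which state transitions deserve fast wake-up hardware.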
SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology.[22] The netlists described above are used as the basis for the physical design (place and route) flow to convert the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity.
When all known bugs have been rectified and these have been re-verified and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing.
SoCs can be fabricated by several technologies, including:
ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower volume designs, but after enough units of production ASICs reduce the total cost of ownership.[23]
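As a rough sketch of that total-cost-of-ownership comparison, the Python snippet below computes the break-even production volume between an FPGA-based and an ASIC-based implementation; all cost figures are invented placeholders and real values vary enormously.

```python
# Break-even volume between FPGA and ASIC implementations.
# All cost figures are invented placeholders; real numbers vary enormously.

fpga_unit_cost = 80.0          # per-unit cost of the FPGA-based design (little up-front NRE)
asic_unit_cost = 8.0           # per-unit cost of the ASIC once in production
asic_nre = 2_000_000.0         # non-recurring engineering cost for the ASIC (masks, tooling)

# Total cost: FPGA = fpga_unit_cost * n, ASIC = asic_nre + asic_unit_cost * n
break_even = asic_nre / (fpga_unit_cost - asic_unit_cost)
print(f"ASIC becomes cheaper beyond roughly {break_even:,.0f} units")

for n in (10_000, 30_000, 100_000):
    fpga_total = fpga_unit_cost * n
    asic_total = asic_nre + asic_unit_cost * n
    print(f"n={n:>7}: FPGA ${fpga_total:,.0f} vs ASIC ${asic_total:,.0f}")
```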
SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well.
However, like most very-large-scale integration (VLSI) designs, the total cost[clarification needed] is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields[clarification needed] and higher non-recurring engineering costs.
When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package. When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler.[24] Another reason SiP may be preferred is that waste heat may be too high in an SoC for a given purpose because functional components are too close together; in an SiP, heat will dissipate better from different functional modules since they are physically further apart.
Some examples of systems on a chip are:
SoC research and development often compares many options. Benchmarks, such as COSMIC,[25] are developed to help such evaluations.
|
https://en.wikipedia.org/wiki/Multiprocessor_system_on_a_chip
|
A biordered set (otherwise known as a boset) is a mathematical object that occurs in the description of the structure of the set of idempotents in a semigroup.
The set of idempotents in a semigroup is a biordered set and every biordered set is the set of idempotents of some semigroup.[1][2] A regular biordered set is a biordered set with an additional property. The set of idempotents in a regular semigroup is a regular biordered set, and every regular biordered set is the set of idempotents of some regular semigroup.[1]
The concept and the terminology were developed by K S S Nambooripad in the early 1970s.[3][4][1] In 2002, Patrick Jordan introduced the term boset as an abbreviation of biordered set.[5] The defining properties of a biordered set are expressed in terms of two quasiorders defined on the set and hence the name biordered set.
According to Mohan S. Putcha, "The axioms defining a biordered set are quite complicated. However, considering the general nature of semigroups, it is rather surprising that such a finite axiomatization is even possible."[6]Since the publication of the original definition of the biordered set by Nambooripad, several variations in the definition have been proposed. David Easdown simplified the definition and formulated the axioms in a special arrow notation invented by him.[7]
If X and Y are sets and ρ ⊆ X × Y, let ρ(y) = { x ∈ X : x ρ y }.
Let E be a set in which a partial binary operation, indicated by juxtaposition, is defined. If DE is the domain of the partial binary operation on E then DE is a relation on E and (e, f) is in DE if and only if the product ef exists in E. The following relations can be defined in E:
If T is any statement about E involving the partial binary operation and the above relations in E, one can define the left-right dual of T, denoted by T*. If DE is symmetric then T* is meaningful whenever T is.
The set E is called a biordered set if the following axioms and their duals hold for arbitrary elements e, f, g, etc. in E.
In M(e, f) = ωl(e) ∩ ωr(f) (the M-set of e and f in that order), define a relation ≺{\displaystyle \prec } by
Then the set
is called the sandwich set of e and f in that order.
We say that a biordered set E is an M-biordered set if M(e, f) ≠ ∅ for all e and f in E.
Also, E is called a regular biordered set if S(e, f) ≠ ∅ for all e and f in E.
In 2012 Roman S. Gigoń gave a simple proof that M-biordered sets arise from E-inversive semigroups.[8][clarification needed]
A subset F of a biordered set E is a biordered subset (subboset) of E if F is a biordered set under the partial binary operation inherited from E.
For any e in E the sets ωr(e), ωl(e) and ω(e) are biordered subsets of E.[1]
A mapping φ : E → F between two biordered sets E and F is a biordered set homomorphism (also called a bimorphism) if for all (e, f) in DE we have (eφ)(fφ) = (ef)φ.
Let V be a vector space and
where V = A ⊕ B means that A and B are subspaces of V and V is the internal direct sum of A and B.
The partial binary operation ⋆ on E defined by
makes E a biordered set. The quasiorders in E are characterised as follows:
The set E of idempotents in a semigroup S becomes a biordered set if a partial binary operation is defined in E as follows: ef is defined in E if and only if ef = e or ef = f or fe = e or fe = f holds in S. If S is a regular semigroup then E is a regular biordered set.
As a concrete example, let S be the semigroup of all mappings of X = { 1, 2, 3 } into itself. Let the symbol (abc) denote the map for which 1 → a, 2 → b, and 3 → c. The set E of idempotents in S contains the following elements:
The following table (taking composition of mappings in the diagram order) describes the partial binary operation in E. An X in a cell indicates that the corresponding multiplication is not defined.
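As a hedged sketch of how this example can be worked out mechanically, the following Python code enumerates the idempotents of S and, for each pair (e, f), checks whether the partial product ef is defined according to the condition above; composition is taken in diagram order, applying the left-hand map first.

```python
# Sketch: reconstruct the idempotents of S (all maps on {1,2,3}) and the pairs (e, f)
# for which the partial product ef is defined, per the condition stated above.
# Maps are written as tuples (a, b, c) meaning 1 -> a, 2 -> b, 3 -> c.

from itertools import product

def compose(f, g):
    """Diagram-order composition: apply f first, then g."""
    return tuple(g[f[x - 1] - 1] for x in (1, 2, 3))

all_maps = list(product((1, 2, 3), repeat=3))
idempotents = [e for e in all_maps if compose(e, e) == e]
print("idempotents:", idempotents)        # includes the identity (1, 2, 3) and the constant maps

defined = 0
for e, f in product(idempotents, repeat=2):
    ef, fe = compose(e, f), compose(f, e)
    if ef in (e, f) or fe in (e, f):      # condition for ef to be defined in E
        defined += 1
print("defined products:", defined, "out of", len(idempotents) ** 2)
```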
|
https://en.wikipedia.org/wiki/Biordered_set
|
Sustainable management takes the concepts from sustainability and synthesizes them with the concepts of management. Sustainability has three branches: the environment, the needs of present and future generations, and the economy. Using these branches, it creates the ability of a system to thrive by maintaining economic viability while also nourishing the needs of present and future generations by limiting resource depletion.
Sustainable management is needed because it is an important part of the ability to successfully maintain the quality of life on our planet. Sustainable management can be applied to all aspects of our lives. For example, the practices of a business should be sustainable if it wishes to stay in business, because an unsustainable business will, by definition, eventually be unable to compete. Communities need sustainable management, because if a community is to prosper, then its management must be sustainable. Forests and natural resources need sustainable management if they are to remain usable by our generation and future generations. Our personal lives also need to be managed sustainably, whether by making decisions that help sustain our immediate surroundings and environment, or by managing our emotional and physical well-being. Sustainable management can be applied to many things, as it can be applied as both a literal and an abstract concept; its meaning changes depending on what it is applied to.
Managers' strategies reflect the mindset of the times. This being the case, it has been a problem for the evolution of sustainable management practices for two reasons. The first reason is that sustainable norms are continually changing. For example, things considered unthinkable a few years ago are now standard practices. And the second reason is that in order to practice sustainable management, one has to be forward thinking, not only in the short term, but also in the long term.
Management behavior is a reflection of how accepted conceptions of behavior are defined. This means that forces and beliefs outside of the given program push along the management. The manager can take some credit for the cultural changes in his or her program, but overall the organization's culture reflects dominant conceptions of the public at that time. This is exemplified through the managerial actions taken during the time periods that led up to the present day. These examples are given below:
This was a time period in which, even though there were outside concerns about the environment, industries were able to resist pressures and make their own definitions and regulations.[1] Environmentalists were not viewed as credible sources of information during this time and were usually discredited.
The norms of this period radically shifted with the creation of the U.S. Environmental Protection Agency (EPA) in 1970. The EPA became the mediator between the environmentalists and the industry, although the two sides never met.[1] During this period, the environment for the majority of industry and business management teams was only important in terms of compliance with law.[1] In 1974 a conference board survey found that the majority of companies still treated environmental management as a threat.[1] The survey noted a widespread tendency in most of industry to treat pollution control expenditures as non-recoverable investments.[1] According to the consensus, environmental protection was considered at best a necessary evil, and at worst a temporary nuisance.[1]
By 1982, the EPA had lost its credibility, but at the same time activism became more influential, and there was an increase in the funding and memberships of major non-governmental organizations (NGOs).[1] Industry gradually became more cooperative with government and new managerial structures were implemented to achieve compliance with regulations.[1]
During this period, industry progressed to a proactive stance on environmental protection.[1] With this attitude, the issue became one that they felt qualified to manage on their own. Although there was advancement in organizational power, concern for the environment still kept being pushed down the hierarchy of important things to do.[1]
In 1995 Harvard professor Michael Porter wrote in the Harvard Business Review that environmental protection was not a threat to the corporate enterprise but rather an opportunity, one that could increase competitive advantage in the marketplace.[1] Before 2000, companies generally regarded green buildings as interesting experiments but unfeasible projects in the real business world.[2] Since then several factors, including the ones listed below, have caused major shifts in thinking.[2] The creation of reliable building rating and performance measurement systems for new construction and renovation has helped change corporate perceptions about green building. In 2000, the Washington, D.C.–based United States Green Building Council launched its rigorous Leadership in Energy and Environmental Design (LEED) program.[2] Hundreds of US and international studies have proven the financial advantages of going green: lower utility costs and higher employee productivity.[2] Green building materials, mechanical systems, and furnishings have become more widely available, and prices have dropped considerably.[2] As changes are made to the norms of what is acceptable from a management perspective, it becomes increasingly apparent that sustainable management is the new norm of the future. Currently, there are many programs, organizations, communities, and businesses that follow sustainable management plans. These new entities are pressing forward with the help of changing social norms and management initiatives.
A manager is a person who is held responsible for planning things that will benefit the situation they are controlling. To be a manager of sustainability, one needs to be able to control issues and plan solutions that will be sustainable, so that what is put into place can continue for future generations. The job of a sustainable manager is like other management positions, but additionally they have to manage systems so that those systems are able to support and sustain themselves. Whether managing groups, businesses, families, communities, organizations, agriculture, or the environment, managers can all use sustainable management to improve their productivity, environment, and atmosphere, among other things. Some practical skills that are needed to be able to perform the job include:
Recently, colleges and universities have even added new programs offering Bachelor of Science and Master of Science degrees in sustainable management.
In business, time and time again, environmentalists are seen facing off against industry, and there is usually very little meeting in the middle or compromise. When these two sides agree to disagree, the result is a more powerful message, one that more people can understand and embrace.
Organizations need to face the fact that the boundaries of accountability are moving fast. The trend towards sustainable management means that organizations are beginning to implement a systems-wide approach that links the various parts of the business with the greater environment at large.
As sustainable management institutions adapt, it becomes imperative that they project an image of sustainable responsibility for the public to see. This is because firms are socially based organizations. But this can be a double-edged sword, because sometimes they end up focusing too much on their image rather than on actually implementing what they are trying to project to the public; this is called greenwashing. It is important that the execution of sustainable management practices is not put aside while the firm tries to appeal to the public with its sustainable management "practices."
Additionally, companies must make the connection between sustainability as a vision and sustainability as a practice. Managers need to think systematically and realistically about the application of traditional business principles to environmental problems. By melding the two concepts together, new ideas of business principles emerge and can enable some companies (those with the right industry structure, competitive position, and managerial skills) to deliver increased value to shareholders while making improvements in their environmental performance.[4]
Any corporation can become green on a standard budget.[2] By focusing on the big picture, a company can generate more savings and better performance. By using planning, design, and construction based on sustainable values, sustainable management strives to gain LEED points by reducing the footprint of the facility and by sustainably planning the site with a focus on these three core ideas.[2] To complete a successful green building, or business, the management also applies cost-benefit analysis in order to allocate funds appropriately.
The economic system, like all systems, is subject to the laws of thermodynamics, which define the limit at which the Earth can successfully process energy and wastes.[5] Managers need to understand that their values are critical factors in their decisions. Many current business values are based on unrealistic economic assumptions; adopting new economic models that take the Earth into account in the decision-making process is at the core of sustainable management.[5] This new management addresses the interrelatedness of the ecosystem and the economic system.[5]
The strategic vision that is based on the core values of the firm guides the firm's decision-making processes at all levels. Thus, sustainable management requires finding out which business activities fit into the Earth's carrying capacity, and also defining the optimal levels of those activities.[5] Sustainability values form the basis of strategic management, inform the assessment of the costs and benefits of the firm's operations, and are measured against the survival needs of the planet's stakeholders.[5] Sustainability is the core value because it supports a strategic vision of firms in the long term by integrating economic profits with the responsibility to protect the whole environment.[5]
Changing industrial processes so that they actually replenish and magnify the stock of natural capital is another component of sustainable management. One way managers have figured out how to do this is by using a service model of business.[6] This focuses on building relationships with customers, instead of focusing on making and selling products.[6] This type of model represents a fundamental change in the way businesses behave. It allows managers to be aware of the lifecycle of their products by leaving the responsibility with the company to take care of the product throughout its life cycle.[6] The service model, because the product remains the responsibility of the business, creates an avenue in which managers can see ways to reduce the use of resources through recycling and product construction.
For communities to be able to improve, sustainable management needs to be in practice. If a community relies on the resources in the surrounding area, then those resources need to be used in a sustainable manner to ensure an indefinite supply. A community needs to work together to be productive, and when there is a need to get things done, management needs to take the lead. If sustainable management is in practice in a community, then people will want to stay in that community, and other people will recognize its success and want to live in a similar environment as their own unsustainable towns fail. Part of a sustainable management system in a community is the education, the cooperation, and the responsiveness of the people that live in the community.[7]
There are new ideas about how a community can be sustainable. These include urban planning approaches that allow people to move about a city in ways that are more sustainable for the environment. If management plans a community that allows people to move without cars, it helps make the community sustainable by increasing mass transit or other modes of transportation. People would spend less time in traffic while improving the environment, and on occasion get exercise.[8]
Sustainable management provides plans that can improve multiple parts of people's lives, the environment, and future generations. If a community sets goals, then people are more likely to reduce energy use, water use, and waste, but a community cannot set goals unless it has the management in place to set them.[9]
A part of sustainable management for a community is communicating the ideals and plans for an area to the people who will be carrying out the plan. It is important to note that sustainable management is not sustainable if the person managing a situation does not communicate what needs to be improved, how it should be improved, why it is important to them, and how they are involved in the process.
For a person to be responsible for their actions is a part of managing, and that is part of managing oneself sustainably. To manage oneself sustainably there are many factors to consider, because a person needs to be able to see what they are doing unsustainably, and how to become sustainable. Using plastic bags at a checkout line is unsustainable because it creates pollutants, but using reusable biodegradable bags can resolve the problem. This is not only environmentally sustainable, but it also improves the physical and mental sustainability of the person that uses the reusable bags. It is a physical improvement because people do not have to live with the countless plastic bags on the Earth and the pollution that comes with them. It is also an improvement to mental sustainability, because the person that uses the reusable bags has the feeling of accomplishment that comes from doing the right thing. Deciding to buy local food to make the community stronger through community sustainable management can also be emotionally, environmentally, and physically rewarding.
In Figure 1,[9] Mckenzie shows how a person can look at a behavior that they are doing, determine whether it is sustainable or not, and decide what they could replace the bad behavior with. Education of an individual would be the first step toward deciding to manage their life sustainably. To manage a person's life, the benefits need to be high and the barriers low. Good managing would come up with a competing behavior that has no barriers to it, and coming up with a competing behavior that has no barriers involves good problem solving.
Figure 2[9] from Mckenzie is an example of what a person might try to change in their life to make it more sustainable. Walking instead of taking the taxi helps the environment, but it also loses time that could be spent with family. The bus is in the middle between walking and taking a taxi, but another option that is not on the list is riding a bike. Good sustainable management would include all the options that are possible, and new options that were not available before. These figures are tools that can be used to help people manage their lives sustainably, but there are other ways people can think about their lives to become more sustainable.
There are very practical needs for sustainable management of forests. Since forests provide many resources to people and to the world, management of forests is critical to keep those resources available. To be able to manage a forest, knowledge of how the natural systems work is needed. If a manager knows how the natural system works, then when the manager makes plans for how resources are to be removed from the forest, the manager will know how the resources can be removed without damaging the forest. Since many forests are under the management of the government of the region, the forests are not truly functioning the way the ecosystem naturally developed and is meant to function. An example is the pine flatwoods in Florida. To maintain that ecosystem, frequent burnings of the forest need to happen. Fires are a natural part of the ecosystem, but since wildfires can spread to communities near the forest, the communities request control of the wildfires. To maintain a flatwoods forest, controlled burning or prescribed burning is part of the management needed to sustain the forest.[10]
|
https://en.wikipedia.org/wiki/Sustainable_management
|
In the Eastern Orthodox Church, the Catholic Church,[1] and in the teachings of the Church Fathers which undergird the theology of those communions, economy or oeconomy (Greek: οἰκονομία, oikonomia) has several meanings.[2] The basic meaning of the word is "handling" or "disposition" or "management" of a thing, or more literally "housekeeping", usually assuming or implying good or prudent handling (as opposed to poor handling) of the matter at hand. In short, economia is a discretionary deviation from the letter of the law in order to adhere to the spirit of the law and charity. This is in contrast to legalism, or akribia (Greek: ακριβεια), which is strict adherence to the letter of the law of the church.
The divine economy, in Eastern Orthodoxy, not only refers to God's actions to bring about the world's salvation and redemption, but to all of God's dealings with, and interactions with, the world, including the Creation.[3][verification needed]
According toLossky,theology(literally, "words about God" or "teaching about God") was concerned with all that pertains to God alone, in himself, i.e. the teaching on theTrinity, thedivine attributes, and so on; but it was not concerned with anything pertaining to the creation or the redemption. Lossky writes: "The distinction betweenοικονομια[economy] andθεολογια[theology] [...] remains common to most of the GreekFathersand to all of theByzantinetradition.θεολογια[...] means, in the fourth century, everything which can be said of God considered in Himself, outside of His creative and redemptive economy. To reach this 'theology' properly so-called, one therefore must go beyond [...] God as Creator of the universe, in order to be able to extricate the notion of the Trinity from the cosmological implications proper to the 'economy.' "[3]
TheEcumenical Patriarchateconsiders that through "extreme oikonomia [economy]", those who arebaptizedin theOriental Orthodox, Roman Catholic,Lutheran,Old Catholic,Moravian,Anglican,Methodist,Reformed,Presbyterian,Church of the Brethren,Assemblies of God, orBaptisttraditions can be received into the Eastern Orthodox Church through the sacrament ofChrismationand not throughre-baptism.[4]
In the canon law of the Eastern Orthodox Church, the notions of akriveia and economia (economy) also exist. Akriveia, which is harshness, "is the strict application (sometimes even extension) of the penance given to an unrepentant and habitual offender." Economia, which is sweetness, "is a judicious relaxation of the penance when the sinner shows remorse and repentance."[5]
According to the Catechism of the Catholic Church:[6]
The Fathers of the Church distinguish between theology (theologia) and economy (oikonomia). "Theology" refers to the mystery of God's inmost life within the Blessed Trinity and "economy" to all the works by which God reveals himself and communicates his life. Through the oikonomia the theologia is revealed to us; but conversely, the theologia illuminates the whole oikonomia. God's works reveal who he is in himself; the mystery of his inmost being enlightens our understanding of all his works. So it is, analogously, among human persons. A person discloses himself in his actions, and the better we know a person, the better we understand his actions.
|
https://en.wikipedia.org/wiki/Economy_(religion)
|
A zap is a form of political direct action that came into use in the 1970s in the United States. Popularized by the early gay liberation group Gay Activists Alliance, a zap was a raucous public demonstration designed to embarrass a public figure or celebrity while calling the attention of both gays and straights to issues of gay rights.
Although American homophile organizations had engaged in public demonstrations as early as 1959, these demonstrations tended to be peaceful picket lines. Following the 1969 Stonewall riots, considered the flashpoint of the modern gay liberation movement, younger, more radical gay activists were less interested in the staid tactics of the previous generation. Zaps targeted politicians and other public figures and many addressed the portrayal of gay people in the popular media. LGBT and AIDS activist groups continued to use zap-like tactics into the 1990s and beyond.
Beginning in 1959,[1] and continuing for the next ten years, gay people occasionally demonstrated against discriminatory attitudes toward and treatment of homosexuals. Although these sometimes took the form of sit-ins,[2] and on at least two occasions riots,[1][3] for the most part these were picket lines. Many of these pickets were organized by Eastern affiliates of such groups as the Mattachine Society chapters out of New York City and Washington, D.C., Philadelphia's Janus Society and the New York chapter of Daughters of Bilitis. These groups acted under the collective name East Coast Homophile Organizations (ECHO).[4] Organized pickets tended to be in large urban population centers because these centers were where the largest concentration of homophile activists were located.[5] Picketers at ECHO-organized events were required to follow strict dress codes. Men had to wear ties, preferably with a jacket. Women were required to wear skirts. The dress code was imposed by Mattachine Society Washington founder Frank Kameny, with the goal of portraying homosexuals as "presentable and 'employable'".[6]
On June 28, 1969, the patrons of the Stonewall Inn, a gay bar located in New York City's Greenwich Village, resisted a police raid. Gay people returned to the Stonewall and the surrounding neighborhood for the next several nights for additional confrontations.[7] Although there had been two smaller riots — in Los Angeles in 1959 and San Francisco in 1966 — it is the Stonewall riots that have come to be seen as the flashpoint of a new gay liberation movement.[8][9]
In the weeks and months following Stonewall, a dramatic increase in gay political organizing took place. Among the many groups that formed was the Gay Activists Alliance, which focused more exclusively on organizing around gay issues and less on the general leftist political perspective taken by such other new groups as the Gay Liberation Front and Red Butterfly.[10] GAA member Marty Robinson is credited with developing the zap following a March 7, 1970, police raid on a gay bar called the Snake Pit.[11] Police arrested 167 patrons. One, an Argentine national named Diego Viñales, so feared the possibility of deportation that he leapt from a second-story window of the police station, impaling himself on the spikes of an iron fence.[12]
The Snake Pit incident truly outraged us, and we put out a leaflet saying that, in effect, regardless of how you looked at it, Diego Viñales was pushed out the window and we were determined to stop it....There was no division for us between the political and personal. We were never given the option to make that division. We lived it. So we decided that people on the other side of the power structure were going to have the same thing happen to them. The wall that they had built protecting themselves from the personal consequences of their political decisions was going to be torn down and politics was going to become personal for them.[13]
Zaps typically included sudden onset against vulnerable targets, noisiness, verbal assaults and media attention. Tactics included sit-ins, disruptive actions and street confrontations.[14]
GAA founding memberArthur Bellexplained the philosophy of the zap, which he described as "political theater for educating the gay masses":
Gays who have as yet no sense of gay pride see a zap on television or read about it in the press. First they are vaguely disturbed at the demonstrators for "rocking the boat"; eventually, when they see how the straight establishment responds, they feel anger. This anger gradually focuses on the heterosexual oppressors, and the gays develop a sense of class-consciousness. And the no-longer-closeted gays realize that assimilation into the heterosexual mainstream is no answer: gays must unite among themselves, organize their common resources for collective action, and resist.[15]
Thus, obtaining media coverage of the zap became more important than the subject of the zap itself.[16]It was precisely this anti-assimilationist attitude that led some mainstream gay people and groups to oppose zapping as a strategy. TheNational Gay Task Force'smedia director, Ronald Gold, despite having been involved in early GAA zaps, came to urge GAA not to engage in the tactic. As zaps and other activism began opening doors for nascent gay organizations like NGTF and the Gay Media Task Force, these groups became more invested in negotiating with the people within the mainstream power structures rather than in maintaining a tactic they saw as being of the outsider.[17]
One area of special interest to GAA was how LGBT people were portrayed on television and on film. There were very few gay characters on television in the 1960s and early 1970s, and many of them were negative. Several in particular, including episodes of Marcus Welby, M.D. in 1973 and 1974 and a 1974 episode of Police Woman, were deemed especially egregious, with their presentation of homosexuality as a mental illness, gays as child molesters and lesbians as psychotic killers echoing similar portrayals in a trend that dated back to before 1961.
In response to the 1973 Welby episode, "The Other Martin Loring", a GAA representative tried to negotiate with ABC,[18] but when negotiations failed GAA zapped the network on February 16, 1973, picketing ABC's New York City headquarters and sending 30–40 members to occupy the office of ABC president Leonard Goldenson. Executives offered to meet with two GAA representatives but GAA insisted that all protesters be present. The network refused. All but six of the zappers then left; the final six were arrested but charges were later dropped.[19]
When NBC aired "Flowers of Evil", an episode ofPolice Womanabout a trio of lesbians murdering nursing home residents for their money, it was met with a zap byLesbian Feminist Liberation. LFL, which had split from GAA over questions of lack of male attention to women's issues, zapped NBC's New York office on November 19, occupying the office of vice presidentHerminio Traviesasovernight. NBC agreed not to rerun the episode.[20]LFL had earlier zapped an episode ofThe Dick Cavett Showon which anti-feminist authorGeorge Gilderwas the guest.[21]
Zaps could sometimes involve physical altercations and vandalism. GAA co-founder Morty Manford got into scuffles with security and administration during his successful effort to found the student club Gay People at Columbia University in 1971, as well as at a famous protest against homophobia at the eliteInner Circleevent in 1972 (which led Morty's motherJeanne Manfordto foundPFLAG).[22][23]GAA was later associated with a series of combative "super-zaps" against homophobic politicians and anti-gay business owners in the summer of 1977. On one occasion activists threw eggs and firecrackers at the home of Adam Walinsky, a state official who had denounced new gay rights legislation for New York, and cut the phone lines of his house. AlthoughTimemagazine derided them as "Gay goons", and Walinsky won an injunction against protests near his home, the actions succeeded in keeping the conservative backlash of the late-1970s out of New York state.[24][25][26]
ActivistMark Segalwas a very active zapper, usually acting alone, sometimes with a compatriot operating under the name "Gay Raiders". His guerilla zaps frequently drew national news coverage, sometimes from the target of the zaps themselves. Some of his more successful zaps include: chaining himself to a railing at a taping ofThe Tonight Show Starring Johnny Carsonin early March 1973;[27]handcuffing himself and a friend to a camera at a 7 May 1973 taping ofThe Mike Douglas Showafter producers cancelled a planned discussion of gay issues;[28][29]disrupting a live broadcast ofThe Today Showon 26 October 1973[30](resulting in an off-camera interview withBarbara Walters, who explained the reason for the zap);[31]and interruptingWalter Cronkiteduring a live newscast of theCBS Evening Newson 11 December 1973 by rushing the set with a sign readingGays Protest CBS Prejudice(after a brief interruption, Cronkite reported the zap).[32]
Politicians and other public figures were also the targets of zaps. New York MayorJohn Lindsaywas an early and frequent GAA target, with GAA insisting that Lindsay take a public stance on gay rights issues. Lindsay, elected as a liberal Republican, preferred quiet coalition building and also feared that publicly endorsing gay rights would damage his chances at the Presidency; he refused to speak publicly in favor of gay rights and refused to meet with GAA to discuss passing a citywide anti-discrimination ordinance.[16]The group's first zap, on April 13, 1970,[33]involved infiltrating opening night of the 1970Metropolitan Operaseason, shouting gay slogans when the mayor and his wife made their entrance.[34]Lindsay was zapped again on April 19 as he taped an episode of his weekly television program,With Mayor Lindsay. Approximately 40 GAA members obtained tickets to the taping. Some GAA members rushed the stage calling for the mayor to endorse gay rights; others called out comments from the audience, booed, stomped their feet and otherwise disrupted taping. One notable exchange came when the mayor noted it was illegal to blow car horns in New York, drawing the response "It's illegal to blow a lot of things!"[35]When Lindsay announced his candidacy for the Presidency in the 1972 election, GAA saw the opportunity to bring gay issues to national attention and demanded of each potential candidate a pledge to support anti-discrimination. Lindsay was among those who responded favorably.[clarification needed][36]
Zapping migrated to the West Coast as early as 1970, when a coalition of several Los Angeles groups targetedBarney's Beanery. Barney's had long displayed a wooden sign at its bar reading "FAGOTS [sic] – STAY OUT". Although there were few reports of actual anti-gay discrimination at Barney's, activists found the sign's presence galling and refused to patronize the place, even when gay gatherings were held there. On February 7, over 100 people converged on Barney's. They engaged in picketing and leafletting outside and occupied tables for long periods inside with small orders.[note 1]The owner of Barney's not only refused to take down the sign, he put up more signs made of cardboard, harassed the gay customers inside, refused service to them, ordered them out of the restaurant and eventually assaulted a customer and called the sheriff. After several hours and consultation with the sheriff's department, the original wooden sign was taken down and stored out of sight and the new cardboard signs were removed and distributed among the demonstrators.[37][note 2]
Encouraged by GAA co-founderArthur Bell, in his capacity as a columnist forThe Village Voice, activists employed zaps againstWilliam Friedkinand the cast and crew of the 1980 filmCruising. In 1979,Cruisingopponents blew whistles, shined lights into camera lenses and otherwise disrupted filming to protest how the gay community and the leather sub-culture in particular were being portrayed.[38]
Emerging activist groups in other countries adopted the zap as a tactic. The British GLFzapped the Festival of Light, a morality campaign, in 1971. GLF memberPeter Tatchellhas continued to engage in zaps in the intervening decades, both singly and in association with such organizations as the British GLF andOutRage!. In Australia, Sydney Gay Liberation perpetrated a series of zaps beginning in 1973, including engaging in public displays of affection, leafletting and sitting in at a pub rumored to be refusing service to gay customers. Gay Activists Alliance in Adelaide zapped a variety of targets, including a gynecologist perceived to be anti-lesbian, a religious conference atParkin-Wesley Collegeand politicians and public figures such asSteele Hall,Ernie Sigley, John Court andMary Whitehouse.[39]
In response to theAIDSepidemic, the direct action groupAIDS Coalition To Unleash Power(ACT UP) formed in 1987. ACT UP adopted a zap-like form of direct action reminiscent of the earlier GAA-style zaps. Some of these included: a March 24, 1987 "die-in" onWall Street, in which 250 people demonstrated against what they saw as price gouging for anti-HIV drugs; the October 1988 attempted shut-down of theFood and Drug Administrationheadquarters inRockville, Maryland, to protest perceived foot-dragging in approving new AIDS treatments; and perhaps most notoriously,Stop the Church, a December 12, 1989, demonstration in and aroundSt. Patrick's Cathedralin opposition to the Catholic Church's opposition to condom use to prevent the spread of HIV.[40]
Queer Nationformed in 1990 and adopted the militant tactics of ACT UP and applied them more generally to LGBT issues. Queer Nation members were known for entering social spaces like straight bars and clubs and engaging in straight-identified behaviour like playingspin the bottleto make the point that most public spaces were straight spaces. QN would stage "kiss-ins" in public places like shopping malls or sidewalks, both as a shock tactic directed at heterosexuals and to point out that gay people should be able to engage in the same public behaviours as straight people. Echoing the disruption a decade earlier during the filming ofCruising, Queer Nation and other direct action groups disrupted filming ofBasic Instinctover what they believed were negative portrayals of lesbian and bisexual women.[41]
|
https://en.wikipedia.org/wiki/Zap_(action)
|
Inmathematical analysis,asymptotic analysis, also known asasymptotics, is a method of describinglimitingbehavior.
As an illustration, suppose that we are interested in the properties of a functionf(n)asnbecomes very large. Iff(n) =n2+ 3n, then asnbecomes very large, the term3nbecomes insignificant compared ton2. The functionf(n)is said to be "asymptotically equivalentton2, asn→ ∞". This is often written symbolically asf(n) ~n2, which is read as "f(n)is asymptotic ton2".
An example of an important asymptotic result is theprime number theorem. Letπ(x)denote theprime-counting function(which is not directly related to the constantpi), i.e.π(x)is the number ofprime numbersthat are less than or equal tox. Then the theorem states thatπ(x)∼xlnx.{\displaystyle \pi (x)\sim {\frac {x}{\ln x}}.}
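For a rough numerical sense of the theorem, the following sketch (plain Python with a simple Sieve of Eratosthenes; the cutoff of 10^6 and the checkpoints are arbitrary choices) prints the ratio π(x)/(x/ln x) at a few values of x. The ratio creeps toward 1, though the convergence is famously slow.

```python
import math

def primes_up_to(n):
    """Return a boolean sieve where sieve[k] is True iff k is prime."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return sieve

limit = 10 ** 6
sieve = primes_up_to(limit)
checkpoints = {10 ** k for k in range(2, 7)}
pi = 0
for x in range(2, limit + 1):
    if sieve[x]:
        pi += 1                      # pi now equals the number of primes <= x
    if x in checkpoints:
        approx = x / math.log(x)
        print(f"x = {x:>8}  pi(x) = {pi:>7}  pi(x)/(x/ln x) = {pi / approx:.4f}")
```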
Asymptotic analysis is commonly used incomputer scienceas part of theanalysis of algorithmsand is often expressed there in terms ofbig O notation.
Formally, given functionsf(x)andg(x), we define a binary relationf(x)∼g(x)(asx→∞){\displaystyle f(x)\sim g(x)\quad ({\text{as }}x\to \infty )}if and only if (de Bruijn 1981, §1.4)limx→∞f(x)g(x)=1.{\displaystyle \lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1.}
The symbol~is thetilde. The relation is anequivalence relationon the set of functions ofx; the functionsfandgare said to beasymptotically equivalent. Thedomainoffandgcan be any set for which the limit is defined: e.g. real numbers, complex numbers, positive integers.
The same notation is also used for other ways of passing to a limit: e.g.x→ 0,x↓ 0,|x| → 0. The way of passing to the limit is often not stated explicitly, if it is clear from the context.
Although the above definition is common in the literature, it is problematic ifg(x)is zero infinitely often asxgoes to the limiting value. For that reason, some authors use an alternative definition. The alternative definition, inlittle-o notation, is thatf~gif and only iff(x)=g(x)(1+o(1)).{\displaystyle f(x)=g(x)(1+o(1)).}
This definition is equivalent to the prior definition ifg(x)is not zero in someneighbourhoodof the limiting value.[1][2]
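A minimal numerical illustration of the definition, using the example f(n) = n² + 3n ~ n² from the introduction: the ratio f/g tends to 1, and equivalently f = g·(1 + o(1)), with the o(1) term (here 3/n) vanishing.

```python
# Check both forms of the definition for f(n) = n^2 + 3n and g(n) = n^2.
def f(n):
    return n ** 2 + 3 * n

def g(n):
    return n ** 2

for n in [10, 100, 1_000, 10_000]:
    ratio = f(n) / g(n)
    little_o_term = ratio - 1        # the o(1) correction, equal to 3/n here
    print(f"n = {n:>6}  f/g = {ratio:.6f}  f/g - 1 = {little_o_term:.6f}")
```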
Iff∼g{\displaystyle f\sim g}anda∼b{\displaystyle a\sim b}, then, under some mild conditions,[further explanation needed]the following hold:fr∼gr{\displaystyle f^{r}\sim g^{r}}for every realr;f⋅a∼g⋅b{\displaystyle f\cdot a\sim g\cdot b}; andf/a∼g/b{\displaystyle f/a\sim g/b}.
Such properties allow asymptotically equivalent functions to be freely exchanged in many algebraic expressions.
Also, if we further haveg∼h{\displaystyle g\sim h}, then, because asymptotic equivalence is atransitive relation, we also havef∼h{\displaystyle f\sim h}.
Anasymptotic expansionof a functionf(x)is in practice an expression of that function in terms of aseries, thepartial sumsof which do not necessarily converge, but such that taking any initial partial sum provides an asymptotic formula forf. The idea is that successive terms provide an increasingly accurate description of the order of growth off.
In symbols, it means we havef∼g1,{\displaystyle f\sim g_{1},}but alsof−g1∼g2{\displaystyle f-g_{1}\sim g_{2}}andf−g1−⋯−gk−1∼gk{\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}}for each fixedk. In view of the definition of the∼{\displaystyle \sim }symbol, the last equation meansf−(g1+⋯+gk)=o(gk){\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k})}in thelittle o notation, i.e.,f−(g1+⋯+gk){\displaystyle f-(g_{1}+\cdots +g_{k})}is much smaller thangk.{\displaystyle g_{k}.}
The relationf−g1−⋯−gk−1∼gk{\displaystyle f-g_{1}-\cdots -g_{k-1}\sim g_{k}}takes its full meaning ifgk+1=o(gk){\displaystyle g_{k+1}=o(g_{k})}for allk, which means thegk{\displaystyle g_{k}}form anasymptotic scale. In that case, some authors mayabusivelywritef∼g1+⋯+gk{\displaystyle f\sim g_{1}+\cdots +g_{k}}to denote the statementf−(g1+⋯+gk)=o(gk).{\displaystyle f-(g_{1}+\cdots +g_{k})=o(g_{k}).}One should however be careful that this is not a standard use of the∼{\displaystyle \sim }symbol, and that it does not correspond to the definition given in§ Definition.
In the present situation, this relationgk=o(gk−1){\displaystyle g_{k}=o(g_{k-1})}actually follows from combining stepskandk−1; by subtractingf−g1−⋯−gk−2=gk−1+o(gk−1){\displaystyle f-g_{1}-\cdots -g_{k-2}=g_{k-1}+o(g_{k-1})}fromf−g1−⋯−gk−2−gk−1=gk+o(gk),{\displaystyle f-g_{1}-\cdots -g_{k-2}-g_{k-1}=g_{k}+o(g_{k}),}one getsgk+o(gk)=o(gk−1),{\displaystyle g_{k}+o(g_{k})=o(g_{k-1}),}i.e.gk=o(gk−1).{\displaystyle g_{k}=o(g_{k-1}).}
In case the asymptotic expansion does not converge, for any particular value of the argument there will be a particular partial sum which provides the best approximation and adding additional terms will decrease the accuracy. This optimal partial sum will usually have more terms as the argument approaches the limit value.
Asymptotic expansions often occur when an ordinary series is used in a formal expression that forces the taking of values outside of its domain of convergence. For example, we might start with the ordinary series11−w=∑n=0∞wn{\displaystyle {\frac {1}{1-w}}=\sum _{n=0}^{\infty }w^{n}}
The expression on the left is valid on the entire complex planew≠1{\displaystyle w\neq 1}, while the right hand side converges only for|w|<1{\displaystyle |w|<1}. Multiplying bye−w/t{\displaystyle e^{-w/t}}and integrating both sides yields∫0∞e−wt1−wdw=∑n=0∞tn+1∫0∞e−uundu{\displaystyle \int _{0}^{\infty }{\frac {e^{-{\frac {w}{t}}}}{1-w}}\,dw=\sum _{n=0}^{\infty }t^{n+1}\int _{0}^{\infty }e^{-u}u^{n}\,du}
The integral on the left hand side can be expressed in terms of theexponential integral. The integral on the right hand side, after the substitutionu=w/t{\displaystyle u=w/t}, may be recognized as thegamma function. Evaluating both, one obtains the asymptotic expansione−1tEi(1t)=∑n=0∞n!tn+1{\displaystyle e^{-{\frac {1}{t}}}\operatorname {Ei} \left({\frac {1}{t}}\right)=\sum _{n=0}^{\infty }n!\;t^{n+1}}
Here, the right hand side is clearly not convergent for any non-zero value oft. However, by keepingtsmall, and truncating the series on the right to a finite number of terms, one may obtain a fairly good approximation to the value ofEi(1/t){\displaystyle \operatorname {Ei} (1/t)}. Substitutingx=−1/t{\displaystyle x=-1/t}and noting thatEi(x)=−E1(−x){\displaystyle \operatorname {Ei} (x)=-E_{1}(-x)}results in the asymptotic expansion given earlier in this article.
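The divergence-yet-usefulness of such a series can also be seen numerically. The sketch below uses the closely related exponential integral E1(x), the integral of e^(−t)/t from x to infinity, whose asymptotic expansion (e^(−x)/x)·Σ (−1)^n n!/x^n shows the same behaviour and whose reference value is easy to estimate by direct quadrature; the quadrature parameters and the choice x = 8 are arbitrary. The absolute error of the truncated series shrinks until roughly n ≈ x terms are kept and then grows again, illustrating the optimal truncation described above.

```python
import math

def E1(x, upper=60.0, steps=600_000):
    """Crude composite-trapezoid estimate of E1(x), the integral of exp(-t)/t from x to infinity."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * math.exp(-(x + s)) / (x + s)
    return total * h

x = 8.0
reference = E1(x)
partial = 0.0
for n in range(17):
    partial += (-1) ** n * math.factorial(n) / x ** n
    approx = math.exp(-x) / x * partial
    print(f"terms kept = {n + 1:>2}   approximation = {approx: .6e}   abs error = {abs(approx - reference):.1e}")
print(f"quadrature reference = {reference:.6e}")
```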
Inmathematical statistics, anasymptotic distributionis a hypothetical distribution that is in a sense the "limiting" distribution of a sequence of distributions. A distribution is an ordered set of random variablesZifori= 1, …,n, for some positive integern. An asymptotic distribution allowsito range without bound, that is,nis infinite.
A special case of an asymptotic distribution is when the late entries go to zero—that is, theZigo to 0 asigoes to infinity. Some instances of "asymptotic distribution" refer only to this special case.
This is based on the notion of anasymptoticfunction which cleanly approaches a constant value (theasymptote) as the independent variable goes to infinity; "clean" in this sense meaning that for any desired closeness epsilon there is some value of the independent variable after which the function never differs from the constant by more than epsilon.
Anasymptoteis a straight line that a curve approaches arbitrarily closely. Informally, one may speak of the curve meeting the asymptote "at infinity" although this is not a precise definition. In the equationy=1x,{\displaystyle y={\frac {1}{x}},}ybecomes arbitrarily small in magnitude asxincreases, so the curve approaches the liney= 0.
Asymptotic analysis is used in severalmathematical sciences. Instatistics, asymptotic theory provides limiting approximations of theprobability distributionofsample statistics, such as thelikelihood ratiostatisticand theexpected valueof thedeviance. Asymptotic theory does not provide a method of evaluating the finite-sample distributions of sample statistics, however. Non-asymptotic bounds are provided by methods ofapproximation theory.
Examples of applications are the following.
Asymptotic analysis is a key tool for exploring theordinaryandpartialdifferential equations which arise in themathematical modellingof real-world phenomena.[3]An illustrative example is the derivation of theboundary layer equationsfrom the fullNavier-Stokes equationsgoverning fluid flow. In many cases, the asymptotic expansion is in power of a small parameter,ε: in the boundary layer case, this is thenondimensionalratio of the boundary layer thickness to a typical length scale of the problem. Indeed, applications of asymptotic analysis in mathematical modelling often[3]center around a nondimensional parameter which has been shown, or assumed, to be small through a consideration of the scales of the problem at hand.
Asymptotic expansions typically arise in the approximation of certain integrals (Laplace's method,saddle-point method,method of steepest descent) or in the approximation of probability distributions (Edgeworth series). TheFeynman graphsinquantum field theoryare another example of asymptotic expansions which often do not converge.
De Bruijn illustrates the use of asymptotics in the following dialog between Dr. N.A., a Numerical Analyst, and Dr. A.A., an Asymptotic Analyst:
N.A.: I want to evaluate my functionf(x){\displaystyle f(x)}for large values ofx{\displaystyle x}, with a relative error of at most 1%.
A.A.:f(x)=x−1+O(x−2)(x→∞){\displaystyle f(x)=x^{-1}+\mathrm {O} (x^{-2})\qquad (x\to \infty )}.
N.A.: I am sorry, I don't understand.
A.A.:|f(x)−x−1|<8x−2(x>104).{\displaystyle |f(x)-x^{-1}|<8x^{-2}\qquad (x>10^{4}).}
N.A.: But my value ofx{\displaystyle x}is only 100.
A.A.: Why did you not say so? My evaluations give
|f(x)−x−1|<57000x−2(x>100).{\displaystyle |f(x)-x^{-1}|<57000x^{-2}\qquad (x>100).}
N.A.: This is no news to me. I know already that0<f(100)<1{\displaystyle 0<f(100)<1}.
A.A.: I can gain a little on some of my estimates. Now I find that
|f(x)−x−1|<20x−2(x>100).{\displaystyle |f(x)-x^{-1}|<20x^{-2}\qquad (x>100).}
N.A.: I asked for 1%, not for 20%.
A.A.: It is almost the best thing I possibly can get. Why don't you take larger values ofx{\displaystyle x}?
N.A.: !!! I think it's better to ask my electronic computing machine.
Machine: f(100) = 0.01137 42259 34008 67153
A.A.: Haven't I told you so? My estimate of 20% was not far off from the 14% of the real error.
N.A.: !!! . . . !
Some days later, Miss N.A. wants to know the value of f(1000), but her machine would take a month of computation to give the answer. She returns to her Asymptotic Colleague, and gets a fully satisfactory reply.[4]
|
https://en.wikipedia.org/wiki/Asymptotic_analysis
|
Thepartition functionorconfiguration integral, as used inprobability theory,information theoryanddynamical systems, is a generalization of the definition of apartition function in statistical mechanics. It is a special case of anormalizing constantin probability theory, for theBoltzmann distribution. The partition function occurs in many problems of probability theory because, in situations where there is a natural symmetry, its associatedprobability measure, theGibbs measure, has theMarkov property. This means that the partition function occurs not only in physical systems with translation symmetry, but also in such varied settings as neural networks (theHopfield network), and applications such asgenomics,corpus linguisticsandartificial intelligence, which employMarkov networks, andMarkov logic networks. The Gibbs measure is also the unique measure that has the property of maximizing theentropyfor a fixed expectation value of the energy; this underlies the appearance of the partition function inmaximum entropy methodsand the algorithms derived therefrom.
The partition function ties together many different concepts, and thus offers a general framework in which many different kinds of quantities may be calculated. In particular, it shows how to calculateexpectation valuesandGreen's functions, forming a bridge toFredholm theory. It also provides a natural setting for theinformation geometryapproach to information theory, where theFisher information metriccan be understood to be acorrelation functionderived from the partition function; it happens to define aRiemannian manifold.
When the setting for random variables is oncomplex projective spaceorprojective Hilbert space, geometrized with theFubini–Study metric, the theory ofquantum mechanicsand more generallyquantum field theoryresults. In these theories, the partition function is heavily exploited in thepath integral formulation, with great success, leading to many formulas nearly identical to those reviewed here. However, because the underlying measure space is complex-valued, as opposed to the real-valuedsimplexof probability theory, an extra factor ofiappears in many formulas. Tracking this factor is troublesome, and is not done here. This article focuses primarily on classical probability theory, where the sum of probabilities total to one.
Given a set ofrandom variablesXi{\displaystyle X_{i}}taking on valuesxi{\displaystyle x_{i}}, and some sort ofpotential functionorHamiltonianH(x1,x2,…){\displaystyle H(x_{1},x_{2},\dots )}, the partition function is defined as
Z(β)=∑xiexp(−βH(x1,x2,…)){\displaystyle Z(\beta )=\sum _{x_{i}}\exp \left(-\beta H(x_{1},x_{2},\dots )\right)}
The functionHis understood to be a real-valued function on the space of states{X1,X2,…}{\displaystyle \{X_{1},X_{2},\dots \}}, whileβ{\displaystyle \beta }is a real-valued free parameter (conventionally, theinverse temperature). The sum over thexi{\displaystyle x_{i}}is understood to be a sum over all possible values that each of the random variablesXi{\displaystyle X_{i}}may take. Thus, the sum is to be replaced by anintegralwhen theXi{\displaystyle X_{i}}are continuous, rather than discrete. Thus, one writes
Z(β)=∫exp(−βH(x1,x2,…))dx1dx2⋯{\displaystyle Z(\beta )=\int \exp \left(-\beta H(x_{1},x_{2},\dots )\right)\,dx_{1}\,dx_{2}\cdots }
for the case of continuously-varyingXi{\displaystyle X_{i}}.
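As a concrete sketch of the definition, the following minimal Python example uses an invented toy system (three variables x_i taking values ±1 with a simple nearest-neighbour Hamiltonian; none of these choices is prescribed by the text) and evaluates Z(β) by brute-force enumeration of the 2³ configurations. Dividing each Boltzmann weight by Z then gives a normalised distribution, anticipating the Gibbs measure discussed below.

```python
import itertools
import math

# Hypothetical toy system for illustration only.
states = list(itertools.product((-1, +1), repeat=3))

def H(x):
    # nearest-neighbour coupling, chosen arbitrarily
    return -sum(x[i] * x[i + 1] for i in range(len(x) - 1))

def Z(beta):
    # literal sum over all configurations, as in the definition above
    return sum(math.exp(-beta * H(x)) for x in states)

beta = 0.7
z = Z(beta)
print(f"Z({beta}) = {z:.6f}")

# Boltzmann weights divided by Z form a properly normalised probability
# distribution over configurations.
probabilities = [math.exp(-beta * H(x)) / z for x in states]
print("sum of probabilities =", round(sum(probabilities), 12))
```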
WhenHis anobservable, such as a finite-dimensionalmatrixor an infinite-dimensionalHilbert spaceoperatoror element of aC-star algebra, it is common to express the summation as atrace, so that
Z(β)=tr(exp(−βH)){\displaystyle Z(\beta )=\operatorname {tr} \left(\exp \left(-\beta H\right)\right)}
WhenHis infinite-dimensional, then, for the above notation to be valid, the argument must betrace class, that is, of a form such that the summation exists and is bounded.
The number of variablesXi{\displaystyle X_{i}}need not becountable, in which case the sums are to be replaced byfunctional integrals. Although there are many notations for functional integrals, a common one would be
Z=∫Dφexp(−βH[φ]){\displaystyle Z=\int {\mathcal {D}}\varphi \exp \left(-\beta H[\varphi ]\right)}
Such is the case for thepartition function in quantum field theory.
A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as agenerating functionforcorrelation functions. This is discussed in greater detail below.
The role or meaning of the parameterβ{\displaystyle \beta }can be understood in a variety of different ways. In classical thermodynamics, it is aninverse temperature. More generally, one would say that it is the variable that isconjugateto some (arbitrary) functionH{\displaystyle H}of the random variablesX{\displaystyle X}. The wordconjugatehere is used in the sense of conjugategeneralized coordinatesinLagrangian mechanics, thus, properlyβ{\displaystyle \beta }is aLagrange multiplier. It is not uncommonly called thegeneralized force. All of these concepts have in common the idea that one value is meant to be kept fixed, as others, interconnected in some complicated way, are allowed to vary. In the current case, the value to be kept fixed is theexpectation valueofH{\displaystyle H}, even as many differentprobability distributionscan give rise to exactly this same (fixed) value.
For the general case, one considers a set of functions{Hk(x1,…)}{\displaystyle \{H_{k}(x_{1},\dots )\}}that each depend on the random variablesXi{\displaystyle X_{i}}. These functions are chosen because one wants to hold their expectation values constant, for one reason or another. To constrain the expectation values in this way, one applies the method ofLagrange multipliers. In the general case,maximum entropy methodsillustrate the manner in which this is done.
Some specific examples are in order. In basic thermodynamics problems, when using thecanonical ensemble, the use of just one parameterβ{\displaystyle \beta }reflects the fact that there is only one expectation value that must be held constant: the average energy (due toconservation of energy). For chemistry problems involving chemical reactions, thegrand canonical ensembleprovides the appropriate foundation, and there are two Lagrange multipliers. One is to hold the energy constant, and another, thefugacity, is to hold the particle count constant (as chemical reactions involve the recombination of a fixed number of atoms).
For the general case, one has
Z(β)=∑xiexp(−∑kβkHk(xi)){\displaystyle Z(\beta )=\sum _{x_{i}}\exp \left(-\sum _{k}\beta _{k}H_{k}(x_{i})\right)}
withβ=(β1,β2,…){\displaystyle \beta =(\beta _{1},\beta _{2},\dots )}a point in a space.
For a collection of observablesHk{\displaystyle H_{k}}, one would write
Z(β)=tr[exp(−∑kβkHk)]{\displaystyle Z(\beta )=\operatorname {tr} \left[\,\exp \left(-\sum _{k}\beta _{k}H_{k}\right)\right]}
As before, it is presumed that the argument oftristrace class.
The correspondingGibbs measurethen provides a probability distribution such that the expectation value of eachHk{\displaystyle H_{k}}is a fixed value. More precisely, one has
∂∂βk(−logZ)=⟨Hk⟩=E[Hk]{\displaystyle {\frac {\partial }{\partial \beta _{k}}}\left(-\log Z\right)=\langle H_{k}\rangle =\mathrm {E} \left[H_{k}\right]}
with the angle brackets⟨Hk⟩{\displaystyle \langle H_{k}\rangle }denoting the expected value ofHk{\displaystyle H_{k}}, andE[⋅]{\displaystyle \operatorname {E} [\,\cdot \,]}being a common alternative notation. A precise definition of this expectation value is given below.
Although the value ofβ{\displaystyle \beta }is commonly taken to be real, it need not be, in general; this is discussed in the sectionNormalizationbelow. The values ofβ{\displaystyle \beta }can be understood to be the coordinates of points in a space; this space is in fact amanifold, as sketched below. The study of these spaces as manifolds constitutes the field ofinformation geometry.
The potential function itself commonly takes the form of a sum:
H(x1,x2,…)=∑sV(s){\displaystyle H(x_{1},x_{2},\dots )=\sum _{s}V(s)\,}
where the sum oversis a sum over some subset of thepower setP(X) of the setX={x1,x2,…}{\displaystyle X=\lbrace x_{1},x_{2},\dots \rbrace }. For example, instatistical mechanics, such as theIsing model, the sum is over pairs of nearest neighbors. In probability theory, such asMarkov networks, the sum might be over thecliquesof a graph; so, for the Ising model and otherlattice models, the maximal cliques are edges.
The fact that the potential function can be written as a sum usually reflects the fact that it is invariant under theactionof agroup symmetry, such astranslational invariance. Such symmetries can be discrete or continuous; they materialize in thecorrelation functionsfor the random variables (discussed below). Thus a symmetry in the Hamiltonian becomes a symmetry of the correlation function (and vice versa).
This symmetry has a critically important interpretation in probability theory: it implies that theGibbs measurehas theMarkov property; that is, it is independent of the random variables in a certain way, or, equivalently, the measure is identical on theequivalence classesof the symmetry. This leads to the widespread appearance of the partition function in problems with the Markov property, such asHopfield networks.
The value of the expressionexp(−βH(x1,x2,…)){\displaystyle \exp \left(-\beta H(x_{1},x_{2},\dots )\right)}
can be interpreted as a likelihood that a specificconfigurationof values(x1,x2,…){\displaystyle (x_{1},x_{2},\dots )}occurs in the system. Thus, given a specific configuration(x1,x2,…){\displaystyle (x_{1},x_{2},\dots )},
P(x1,x2,…)=1Z(β)exp(−βH(x1,x2,…)){\displaystyle P(x_{1},x_{2},\dots )={\frac {1}{Z(\beta )}}\exp \left(-\beta H(x_{1},x_{2},\dots )\right)}
is theprobabilityof the configuration(x1,x2,…){\displaystyle (x_{1},x_{2},\dots )}occurring in the system, which is now properly normalized so that0≤P(x1,x2,…)≤1{\displaystyle 0\leq P(x_{1},x_{2},\dots )\leq 1}, and such that the sum over all configurations totals to one. As such, the partition function can be understood to provide ameasure(aprobability measure) on theprobability space; formally, it is called theGibbs measure. It generalizes the narrower concepts of thegrand canonical ensembleandcanonical ensemblein statistical mechanics.
There exists at least one configuration(x1,x2,…){\displaystyle (x_{1},x_{2},\dots )}for which the probability is maximized; this configuration is conventionally called theground state. If the configuration is unique, the ground state is said to benon-degenerate, and the system is said to beergodic; otherwise the ground state isdegenerate. The ground state may or may not commute with the generators of the symmetry; if it commutes, it is said to be aninvariant measure. When it does not commute, the symmetry is said to bespontaneously broken.
Conditions under which a ground state exists and is unique are given by theKarush–Kuhn–Tucker conditions; these conditions are commonly used to justify the use of the Gibbs measure in maximum-entropy problems.[citation needed]
The values taken byβ{\displaystyle \beta }depend on themathematical spaceover which the random field varies. Thus, real-valued random fields take values on asimplex: this is the geometrical way of saying that the sum of probabilities must total to one. For quantum mechanics, the random variables range overcomplex projective space(or complex-valuedprojective Hilbert space), where the random variables are interpreted asprobability amplitudes. The emphasis here is on the wordprojective, as the amplitudes are still normalized to one. The normalization for the potential function is theJacobianfor the appropriate mathematical space: it is 1 for ordinary probabilities, andifor Hilbert space; thus, inquantum field theory, one seesitH{\displaystyle itH}in the exponential, rather thanβH{\displaystyle \beta H}. The partition function is very heavily exploited in thepath integral formulationof quantum field theory, to great effect. The theory there is very nearly identical to that presented here, aside from this difference, and the fact that it is usually formulated on four-dimensional space-time, rather than in a general way.
The partition function is commonly used as aprobability-generating functionforexpectation valuesof various functions of the random variables. So, for example, takingβ{\displaystyle \beta }as an adjustable parameter, the derivative oflog(Z(β)){\displaystyle \log(Z(\beta ))}with respect toβ{\displaystyle \beta }
E[H]=⟨H⟩=−∂log(Z(β))∂β{\displaystyle \operatorname {E} [H]=\langle H\rangle =-{\frac {\partial \log(Z(\beta ))}{\partial \beta }}}
gives the average (expectation value) ofH. In physics, this would be called the averageenergyof the system.
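A quick numerical check of ⟨H⟩ = −∂ log Z/∂β on the same kind of invented toy system as in the earlier sketch (three ±1 variables with a nearest-neighbour Hamiltonian); the derivative is approximated by a central finite difference, so the agreement is only up to discretisation error.

```python
import itertools
import math

states = list(itertools.product((-1, +1), repeat=3))

def H(x):
    return -sum(x[i] * x[i + 1] for i in range(len(x) - 1))

def Z(beta):
    return sum(math.exp(-beta * H(x)) for x in states)

beta, eps = 0.7, 1e-6
# direct expectation value under the Gibbs weights
direct = sum(H(x) * math.exp(-beta * H(x)) for x in states) / Z(beta)
# central finite difference of -log Z
finite_diff = -(math.log(Z(beta + eps)) - math.log(Z(beta - eps))) / (2 * eps)
print(f"direct <H>        = {direct:.8f}")
print(f"-d(log Z)/d(beta) = {finite_diff:.8f}")
```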
Given the definition of the probability measure above, the expectation value of any functionfof the random variablesXmay now be written as expected: so, for discrete-valuedX, one writes⟨f⟩=∑xif(x1,x2,…)P(x1,x2,…)=1Z(β)∑xif(x1,x2,…)exp(−βH(x1,x2,…)){\displaystyle {\begin{aligned}\langle f\rangle &=\sum _{x_{i}}f(x_{1},x_{2},\dots )P(x_{1},x_{2},\dots )\\&={\frac {1}{Z(\beta )}}\sum _{x_{i}}f(x_{1},x_{2},\dots )\exp \left(-\beta H(x_{1},x_{2},\dots )\right)\end{aligned}}}
The above notation makes sense for a finite number of discrete random variables. In more general settings, the summations should be replaced with integrals over aprobability space.
Thus, for example, theentropyis given by
S=−kB⟨lnP⟩=−kB∑xiP(x1,x2,…)lnP(x1,x2,…)=kB(β⟨H⟩+logZ(β)){\displaystyle {\begin{aligned}S&=-k_{\text{B}}\langle \ln P\rangle \\[1ex]&=-k_{\text{B}}\sum _{x_{i}}P(x_{1},x_{2},\dots )\ln P(x_{1},x_{2},\dots )\\&=k_{\text{B}}\left(\beta \langle H\rangle +\log Z(\beta )\right)\end{aligned}}}
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use inmaximum entropy methods.
The pointsβ{\displaystyle \beta }can be understood to form a space, and specifically, amanifold. Thus, it is reasonable to ask about the structure of this manifold; this is the task ofinformation geometry.
Taking second derivatives of logZ{\displaystyle \log Z}with respect to the Lagrange multipliers gives rise to a positive semi-definitecovariance matrixgij(β)=∂2∂βi∂βjlogZ(β)=⟨(Hi−⟨Hi⟩)(Hj−⟨Hj⟩)⟩{\displaystyle g_{ij}(\beta )={\frac {\partial ^{2}}{\partial \beta ^{i}\partial \beta ^{j}}}\log Z(\beta )=\langle \left(H_{i}-\langle H_{i}\rangle \right)\left(H_{j}-\langle H_{j}\rangle \right)\rangle }This matrix is positive semi-definite, and may be interpreted as ametric tensor, specifically, aRiemannian metric. Equipping the space of Lagrange multipliers with a metric in this way turns it into aRiemannian manifold.[1]The study of such manifolds is referred to asinformation geometry; the metric above is theFisher information metric. Here,β{\displaystyle \beta }serves as a coordinate on the manifold. It is interesting to compare the above definition to the simplerFisher information, from which it is inspired.
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:gij(β)=⟨(Hi−⟨Hi⟩)(Hj−⟨Hj⟩)⟩=∑xP(x)(Hi−⟨Hi⟩)(Hj−⟨Hj⟩)=∑xP(x)(Hi+∂logZ∂βi)(Hj+∂logZ∂βj)=∑xP(x)∂logP(x)∂βi∂logP(x)∂βj{\displaystyle {\begin{aligned}g_{ij}(\beta )&=\left\langle \left(H_{i}-\left\langle H_{i}\right\rangle \right)\left(H_{j}-\left\langle H_{j}\right\rangle \right)\right\rangle \\&=\sum _{x}P(x)\left(H_{i}-\left\langle H_{i}\right\rangle \right)\left(H_{j}-\left\langle H_{j}\right\rangle \right)\\&=\sum _{x}P(x)\left(H_{i}+{\frac {\partial \log Z}{\partial \beta _{i}}}\right)\left(H_{j}+{\frac {\partial \log Z}{\partial \beta _{j}}}\right)\\&=\sum _{x}P(x){\frac {\partial \log P(x)}{\partial \beta ^{i}}}{\frac {\partial \log P(x)}{\partial \beta ^{j}}}\\\end{aligned}}}
where we've writtenP(x){\displaystyle P(x)}forP(x1,x2,…){\displaystyle P(x_{1},x_{2},\dots )}and the summation is understood to be over all values of all random variablesXk{\displaystyle X_{k}}. For continuous-valued random variables, the summations are replaced by integrals, of course.
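A numerical sketch, again on an invented toy system (three ±1 variables, two arbitrary observables H_1 and H_2, arbitrary multiplier values): the matrix of second derivatives of log Z with respect to the Lagrange multipliers, estimated by finite differences, reproduces the covariance matrix of the observables, i.e. the metric discussed above.

```python
import itertools
import math

states = list(itertools.product((-1, +1), repeat=3))
obs = [
    lambda x: -sum(x[i] * x[i + 1] for i in range(len(x) - 1)),  # H_1: coupling term
    lambda x: -sum(x),                                           # H_2: "field" term
]

def log_Z(beta):
    return math.log(sum(
        math.exp(-sum(b * h(x) for b, h in zip(beta, obs))) for x in states))

def covariance(beta):
    w = [math.exp(-sum(b * h(x) for b, h in zip(beta, obs))) for x in states]
    z = sum(w)
    p = [wi / z for wi in w]
    mean = [sum(pi * h(x) for pi, x in zip(p, states)) for h in obs]
    return [[sum(pi * (obs[i](x) - mean[i]) * (obs[j](x) - mean[j])
                 for pi, x in zip(p, states)) for j in range(2)] for i in range(2)]

def hessian_log_Z(beta, eps=1e-3):
    e = [(1, 0), (0, 1)]
    def f(shift):
        return log_Z([b + s * eps for b, s in zip(beta, shift)])
    def entry(i, j):
        # central finite-difference approximation of the mixed second derivative
        return (f([e[i][k] + e[j][k] for k in range(2)])
                - f([e[i][k] - e[j][k] for k in range(2)])
                - f([-e[i][k] + e[j][k] for k in range(2)])
                + f([-e[i][k] - e[j][k] for k in range(2)])) / (4 * eps ** 2)
    return [[entry(i, j) for j in range(2)] for i in range(2)]

beta = [0.4, 0.2]
print("Hessian of log Z :", [[round(v, 5) for v in row] for row in hessian_log_Z(beta)])
print("covariance matrix:", [[round(v, 5) for v in row] for row in covariance(beta)])
```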
Curiously, theFisher information metriccan also be understood as the flat-spaceEuclidean metric, after appropriate change of variables, as described in the main article on it. When theβ{\displaystyle \beta }are complex-valued, the resulting metric is theFubini–Study metric. When written in terms ofmixed states, instead ofpure states, it is known as theBures metric.
By introducing artificial auxiliary functionsJk{\displaystyle J_{k}}into the partition function, it can then be used to obtain the expectation value of the random variables. Thus, for example, by writing
Z(β,J)=Z(β,J1,J2,…)=∑xiexp(−βH(x1,x2,…)+∑nJnxn){\displaystyle {\begin{aligned}Z(\beta ,J)&=Z(\beta ,J_{1},J_{2},\dots )\\&=\sum _{x_{i}}\exp \left(-\beta H(x_{1},x_{2},\dots )+\sum _{n}J_{n}x_{n}\right)\end{aligned}}}
one then hasE[xk]=⟨xk⟩=∂∂JklogZ(β,J)|J=0{\displaystyle \operatorname {E} [x_{k}]=\langle x_{k}\rangle =\left.{\frac {\partial }{\partial J_{k}}}\log Z(\beta ,J)\right|_{J=0}}
as the expectation value ofxk{\displaystyle x_{k}}. In thepath integral formulationofquantum field theory, these auxiliary functions are commonly referred to assource fields.
Multiple differentiations lead to theconnected correlation functionsof the random variables. Thus the correlation functionC(xj,xk){\displaystyle C(x_{j},x_{k})}between variablesxj{\displaystyle x_{j}}andxk{\displaystyle x_{k}}is given by:
C(xj,xk)=∂∂Jj∂∂JklogZ(β,J)|J=0{\displaystyle C(x_{j},x_{k})=\left.{\frac {\partial }{\partial J_{j}}}{\frac {\partial }{\partial J_{k}}}\log Z(\beta ,J)\right|_{J=0}}
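An illustrative check of the source-field device, on an invented three-variable toy system: with auxiliary terms J_n x_n added to the exponent, the first derivative of log Z at J = 0 gives ⟨x_k⟩, and the mixed second derivative gives the connected correlation ⟨x_j x_k⟩ − ⟨x_j⟩⟨x_k⟩. Derivatives are approximated by central finite differences, so the agreement is approximate.

```python
import itertools
import math

states = list(itertools.product((-1, +1), repeat=3))
beta = 0.7

def H(x):
    return -sum(x[i] * x[i + 1] for i in range(len(x) - 1))

def log_Z(J):
    return math.log(sum(
        math.exp(-beta * H(x) + sum(Jn * xn for Jn, xn in zip(J, x)))
        for x in states))

def average(f):
    # direct Gibbs average at J = 0, for comparison
    w = [math.exp(-beta * H(x)) for x in states]
    z = sum(w)
    return sum(wi / z * f(x) for wi, x in zip(w, states))

eps = 1e-4

def first_derivative(k):
    plus, minus = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    plus[k], minus[k] = eps, -eps
    return (log_Z(plus) - log_Z(minus)) / (2 * eps)

def mixed_second_derivative(j, k):
    def shifted(sj, sk):
        J = [0.0, 0.0, 0.0]
        J[j] += sj * eps
        J[k] += sk * eps
        return log_Z(J)
    return (shifted(1, 1) - shifted(1, -1) - shifted(-1, 1) + shifted(-1, -1)) / (4 * eps ** 2)

print("<x_0> via sources:", round(first_derivative(0), 6),
      "  direct:", round(average(lambda x: x[0]), 6))
connected = average(lambda x: x[0] * x[2]) - average(lambda x: x[0]) * average(lambda x: x[2])
print("C(x_0, x_2) via sources:", round(mixed_second_derivative(0, 2), 6),
      "  direct:", round(connected, 6))
```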
For the case whereHcan be written as aquadratic forminvolving adifferential operator, that is, as
H=12∑nxnDxn{\displaystyle H={\frac {1}{2}}\sum _{n}x_{n}Dx_{n}}
then the partition function can be understood to be a sum orintegralover Gaussians. The correlation functionC(xj,xk){\displaystyle C(x_{j},x_{k})}can be understood to be theGreen's functionfor the differential operator (generally giving rise toFredholm theory). In the quantum field theory setting, such functions are referred to aspropagators; higher-order correlators are called n-point functions; working with them defines theeffective actionof a theory.
When the random variables are anti-commutingGrassmann numbers, then the partition function can be expressed as a determinant of the operatorD. This is done by writing it as aBerezin integral(also called Grassmann integral).
Partition functions are used to discusscritical scaling,universalityand are subject to therenormalization group.
|
https://en.wikipedia.org/wiki/Partition_function_(mathematics)
|
Inprobabilityandstatistics, amixture distributionis theprobability distributionof arandom variablethat is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the selected random variable is realized. The underlying random variables may be random real numbers, or they may berandom vectors(each having the same dimension), in which case the mixture distribution is amultivariate distribution.
In cases where each of the underlying random variables iscontinuous, the outcome variable will also be continuous and itsprobability density functionis sometimes referred to as amixture density. Thecumulative distribution function(and theprobability density functionif it exists) can be expressed as aconvex combination(i.e. a weighted sum, with non-negative weights that sum to 1) of other distribution functions and density functions. The individual distributions that are combined to form the mixture distribution are called themixture components, and the probabilities (or weights) associated with each component are called themixture weights. The number of components in a mixture distribution is often restricted to being finite, although in some cases the components may becountably infinitein number. More general cases (i.e. anuncountableset of component distributions), as well as the countable case, are treated under the title ofcompound distributions.
A distinction needs to be made between arandom variablewhose distribution function or density is the sum of a set of components (i.e. a mixture distribution) and a random variable whose value is the sum of the values of two or more underlying random variables, in which case the distribution is given by theconvolutionoperator. As an example, the sum of twojointly normally distributedrandom variables, each with different means, will still have a normal distribution. On the other hand, a mixture density created as a mixture of two normal distributions with different means will have two peaks provided that the two means are far enough apart, showing that this distribution is radically different from a normal distribution.
Mixture distributions arise in many contexts in the literature and arise naturally where astatistical populationcontains two or moresubpopulations. They are also sometimes used as a means of representing non-normal distributions. Data analysis concerningstatistical modelsinvolving mixture distributions is discussed under the title ofmixture models, while the present article concentrates on simple probabilistic and statistical properties of mixture distributions and how these relate to properties of the underlying distributions.
Given a finite set of probability density functionsp1(x), ...,pn(x), or corresponding cumulative distribution functionsP1(x),...,Pn(x)andweightsw1, ...,wnsuch thatwi≥ 0and∑wi= 1, the mixture distribution can be represented by writing either the density,f, or the distribution function,F, as a sum (which in both cases is a convex combination):F(x)=∑i=1nwiPi(x),{\displaystyle F(x)=\sum _{i=1}^{n}\,w_{i}\,P_{i}(x),}f(x)=∑i=1nwipi(x).{\displaystyle f(x)=\sum _{i=1}^{n}\,w_{i}\,p_{i}(x).}This type of mixture, being a finite sum, is called afinite mixture,and in applications, an unqualified reference to a "mixture density" usually means a finite mixture. The case of a countably infinite set of components is covered formally by allowingn=∞{\displaystyle n=\infty \!}.
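As an illustrative sketch (the two normal components and their weights, means and standard deviations are all invented for the example), the density of a finite mixture is just the weighted sum of the component densities, and a draw from the mixture is obtained by first picking a component according to the weights and then sampling from that component:

```python
import math
import random

weights = [0.3, 0.7]
means = [-2.0, 1.0]
sds = [0.5, 1.5]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x):
    # convex combination of the component densities
    return sum(w * normal_pdf(x, m, s) for w, m, s in zip(weights, means, sds))

def sample_mixture():
    # two-stage sampling: choose a component, then draw from it
    i = random.choices(range(len(weights)), weights=weights)[0]
    return random.gauss(means[i], sds[i])

print("f(0) =", round(mixture_pdf(0.0), 6))
print("five draws:", [round(sample_mixture(), 3) for _ in range(5)])
```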
Where the set of component distributions isuncountable, the result is often called acompound probability distribution. The construction of such distributions has a formal similarity to that of mixture distributions, with either infinite summations or integrals replacing the finite summations used for finite mixtures.
Consider a probability density functionp(x;a)for a variablex, parameterized bya. That is, for each value ofain some setA,p(x;a)is a probability density function with respect tox. Given a probability density functionw(meaning thatwis nonnegative and integrates to 1), the function
f(x)=∫Aw(a)p(x;a)da{\displaystyle f(x)=\int _{A}\,w(a)\,p(x;a)\,da}
is again a probability density function forx. A similar integral can be written for the cumulative distribution function. Note that the formulae here reduce to the case of a finite or infinite mixture if the densitywis allowed to be ageneralized functionrepresenting the "derivative" of the cumulative distribution function of adiscrete distribution.
The mixture components are often not arbitrary probability distributions, but instead are members of aparametric family(such as normal distributions), with different values for a parameter or parameters. In such cases, assuming that it exists, the density can be written in the form of a sum as:f(x;a1,…,an)=∑i=1nwip(x;ai){\displaystyle f(x;a_{1},\ldots ,a_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i})}for one parameter, orf(x;a1,…,an,b1,…,bn)=∑i=1nwip(x;ai,bi){\displaystyle f(x;a_{1},\ldots ,a_{n},b_{1},\ldots ,b_{n})=\sum _{i=1}^{n}\,w_{i}\,p(x;a_{i},b_{i})}for two parameters, and so forth.
A generallinear combinationof probability density functions is not necessarily a probability density, since it may be negative or it may integrate to something other than 1. However, aconvex combinationof probability density functions preserves both of these properties (non-negativity and integrating to 1), and thus mixture densities are themselves probability density functions.
LetX1, ...,Xndenote random variables from thencomponent distributions, and letXdenote a random variable from the mixture distribution. Then, for any functionH(·)for whichE[H(Xi)]{\displaystyle \operatorname {E} [H(X_{i})]}exists, and assuming that the component densitiespi(x)exist,
E[H(X)]=∫−∞∞H(x)∑i=1nwipi(x)dx=∑i=1nwi∫−∞∞pi(x)H(x)dx=∑i=1nwiE[H(Xi)].{\displaystyle {\begin{aligned}\operatorname {E} [H(X)]&=\int _{-\infty }^{\infty }H(x)\sum _{i=1}^{n}w_{i}p_{i}(x)\,dx\\&=\sum _{i=1}^{n}w_{i}\int _{-\infty }^{\infty }p_{i}(x)H(x)\,dx=\sum _{i=1}^{n}w_{i}\operatorname {E} [H(X_{i})].\end{aligned}}}
Thejth moment about zero (i.e. choosingH(x) =xj) is simply a weighted average of thej-th moments of the components. Moments about the meanH(x) = (x − μ)jinvolve a binomial expansion:[1]
E[(X−μ)j]=∑i=1nwiE[(Xi−μi+μi−μ)j]=∑i=1nwi∑k=0j(jk)(μi−μ)j−kE[(Xi−μi)k],{\displaystyle {\begin{aligned}\operatorname {E} \left[{\left(X-\mu \right)}^{j}\right]&=\sum _{i=1}^{n}w_{i}\operatorname {E} \left[{\left(X_{i}-\mu _{i}+\mu _{i}-\mu \right)}^{j}\right]\\&=\sum _{i=1}^{n}w_{i}\sum _{k=0}^{j}{\binom {j}{k}}{\left(\mu _{i}-\mu \right)}^{j-k}\operatorname {E} \left[{\left(X_{i}-\mu _{i}\right)}^{k}\right],\end{aligned}}}
whereμidenotes the mean of thei-th component.
In the case of a mixture of one-dimensional distributions with weightswi, meansμiand variancesσi2, the total mean and variance will be:E[X]=μ=∑i=1nwiμi,{\displaystyle \operatorname {E} [X]=\mu =\sum _{i=1}^{n}w_{i}\mu _{i},}E[(X−μ)2]=σ2=E[X2]−μ2(standard variance reformulation)=(∑i=1nwiE[Xi2])−μ2=∑i=1nwi(σi2+μi2)−μ2(σi2=E[Xi2]−μi2⟹E[Xi2]=σi2+μi2){\displaystyle {\begin{aligned}\operatorname {E} \left[(X-\mu )^{2}\right]&=\sigma ^{2}\\&=\operatorname {E} [X^{2}]-\mu ^{2}&({\text{standard variance reformulation}})\\&=\left(\sum _{i=1}^{n}w_{i}\operatorname {E} \left[X_{i}^{2}\right]\right)-\mu ^{2}\\&=\sum _{i=1}^{n}w_{i}(\sigma _{i}^{2}+\mu _{i}^{2})-\mu ^{2}&(\sigma _{i}^{2}=\operatorname {E} [X_{i}^{2}]-\mu _{i}^{2}\implies \operatorname {E} [X_{i}^{2}]=\sigma _{i}^{2}+\mu _{i}^{2})\end{aligned}}}
These relations highlight the potential of mixture distributions to display non-trivial higher-order moments such asskewnessandkurtosis(fat tails) and multi-modality, even in the absence of such features within the components themselves. Marron and Wand (1992) give an illustrative account of the flexibility of this framework.[2]
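A quick Monte Carlo check of the mean and variance formulas above, on an invented two-component normal mixture (all parameters and the sample size are arbitrary):

```python
import random

weights, means, sds = [0.3, 0.7], [-2.0, 1.0], [0.5, 1.5]

# closed-form mixture mean and variance from the formulas above
mu = sum(w * m for w, m in zip(weights, means))
var = sum(w * (s ** 2 + m ** 2) for w, m, s in zip(weights, means, sds)) - mu ** 2

samples = []
for _ in range(200_000):
    i = random.choices(range(2), weights=weights)[0]
    samples.append(random.gauss(means[i], sds[i]))
sample_mu = sum(samples) / len(samples)
sample_var = sum((x - sample_mu) ** 2 for x in samples) / len(samples)

print(f"formula:     mean = {mu:.4f}  variance = {var:.4f}")
print(f"Monte Carlo: mean = {sample_mu:.4f}  variance = {sample_var:.4f}")
```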
The question ofmultimodalityis simple for some cases, such as mixtures ofexponential distributions: all such mixtures areunimodal.[3]However, for the case of mixtures ofnormal distributions, it is a complex one. Conditions for the number of modes in a multivariate normal mixture are explored by Ray & Lindsay[4]extending earlier work on univariate[5][6]and multivariate[7]distributions.
Here the problem of evaluation of the modes of anncomponent mixture in aDdimensional space is reduced to identification of critical points (local minima, maxima andsaddle points) on amanifoldreferred to as theridgeline surface, which is the image of the ridgeline functionx∗(α)=[∑i=1nαiΣi−1]−1×[∑i=1nαiΣi−1μi],{\displaystyle x^{*}(\alpha )=\left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\right]^{-1}\times \left[\sum _{i=1}^{n}\alpha _{i}\Sigma _{i}^{-1}\mu _{i}\right],}whereα{\displaystyle \alpha }belongs to the(n−1){\displaystyle (n-1)}-dimensional standardsimplex:Sn={α∈Rn:αi∈[0,1],∑i=1nαi=1}{\displaystyle {\mathcal {S}}_{n}=\left\{\alpha \in \mathbb {R} ^{n}:\alpha _{i}\in [0,1],\sum _{i=1}^{n}\alpha _{i}=1\right\}}andΣi∈RD×D,μi∈RD{\displaystyle \Sigma _{i}\in \mathbb {R} ^{D\times D},\,\mu _{i}\in \mathbb {R} ^{D}}correspond to the covariance and mean of thei-th component. Ray & Lindsay[4]consider the case in whichn−1<D{\displaystyle n-1<D}showing a one-to-one correspondence of modes of the mixture and those on theridge elevation functionh(α)=q(x∗(α)){\displaystyle h(\alpha )=q(x^{*}(\alpha ))}thus one may identify the modes by solvingdh(α)dα=0{\displaystyle {\frac {dh(\alpha )}{d\alpha }}=0}with respect toα{\displaystyle \alpha }and determining the valuex∗(α){\displaystyle x^{*}(\alpha )}.
Using graphical tools, the potential multi-modality of mixtures with number of componentsn∈{2,3}{\displaystyle n\in \{2,3\}}is demonstrated; in particular it is shown that the number of modes may exceedn{\displaystyle n}and that the modes may not be coincident with the component means. For two components they develop a graphical tool for analysis by instead solving the aforementioned differential with respect to the first mixing weightw1{\displaystyle w_{1}}(which also determines the second mixing weight throughw2=1−w1{\displaystyle w_{2}=1-w_{1}}) and expressing the solutions as a functionΠ(α),α∈[0,1]{\displaystyle \Pi (\alpha ),\,\alpha \in [0,1]}so that the number and location of modes for a given value ofw1{\displaystyle w_{1}}corresponds to the number of intersections of the graph on the lineΠ(α)=w1{\displaystyle \Pi (\alpha )=w_{1}}. This in turn can be related to the number of oscillations of the graph and therefore to solutions ofdΠ(α)dα=0{\displaystyle {\frac {d\Pi (\alpha )}{d\alpha }}=0}leading to an explicit solution for the case of a two component mixture withΣ1=Σ2=Σ{\displaystyle \Sigma _{1}=\Sigma _{2}=\Sigma }(sometimes called ahomoscedasticmixture) given by1−α(1−α)dM(μ1,μ2,Σ)2{\displaystyle 1-\alpha (1-\alpha )d_{M}(\mu _{1},\mu _{2},\Sigma )^{2}}wheredM(μ1,μ2,Σ)=(μ2−μ1)TΣ−1(μ2−μ1){\textstyle d_{M}(\mu _{1},\mu _{2},\Sigma )={\sqrt {(\mu _{2}-\mu _{1})^{\mathsf {T}}\Sigma ^{-1}(\mu _{2}-\mu _{1})}}}is theMahalanobis distancebetweenμ1{\displaystyle \mu _{1}}andμ2{\displaystyle \mu _{2}}.
Since the above is quadratic it follows that in this instance there are at most two modes irrespective of the dimension or the weights.
For normal mixtures with generaln>2{\displaystyle n>2}andD>1{\displaystyle D>1}, a lower bound for the maximum number of possible modes, and – conditionally on the assumption that the maximum number is finite – an upper bound are known. For those combinations ofn{\displaystyle n}andD{\displaystyle D}for which the maximum number is known, it matches the lower bound.[8]
Simple examples can be given by a mixture of two normal distributions. (SeeMultimodal distribution#Mixture of two normal distributionsfor more details.)
Given an equal (50/50) mixture of two normal distributions with the same standard deviation and different means (homoscedastic), the overall distribution will exhibit lowkurtosisrelative to a single normal distribution – the means of the subpopulations fall on the shoulders of the overall distribution. If sufficiently separated, namely by twice the (common) standard deviation, so|μ1−μ2|>2σ,{\displaystyle \left|\mu _{1}-\mu _{2}\right|>2\sigma ,}these form abimodal distribution, otherwise it simply has a wide peak.[9]The variation of the overall population will also be greater than the variation of the two subpopulations (due to spread from different means), and thus exhibitsoverdispersionrelative to a normal distribution with fixed variationσ, though it will not be overdispersed relative to a normal distribution with variation equal to variation of the overall population.
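A rough numerical check of this separation criterion (the grid spacing, range and the particular separations tested are arbitrary choices): counting strict local maxima of the mixture density on a fine grid shows one mode just below a separation of 2σ and two modes just above it.

```python
import math

def mix_pdf(x, delta):
    # equal-weight mixture of normals at -delta/2 and +delta/2 with sigma = 1
    return 0.5 * (math.exp(-0.5 * (x + delta / 2) ** 2)
                  + math.exp(-0.5 * (x - delta / 2) ** 2)) / math.sqrt(2 * math.pi)

for delta in [1.0, 1.9, 2.1, 3.0]:          # separation |mu1 - mu2| in units of sigma
    xs = [i * 0.001 - 6 for i in range(12001)]
    ys = [mix_pdf(x, delta) for x in xs]
    modes = sum(1 for i in range(1, len(ys) - 1)
                if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
    print(f"|mu1 - mu2| = {delta} sigma  ->  {modes} mode(s)")
```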
Alternatively, given two subpopulations with the same mean and different standard deviations, the overall population will exhibit high kurtosis, with a sharper peak and heavier tails (and correspondingly shallower shoulders) than a single distribution.
The following example is adapted from Hampel,[10]who creditsJohn Tukey.
Consider the mixture distribution defined byF(x)=(1−10−10)(standard normal)+10−10(standard Cauchy).{\displaystyle F(x)=(1-10^{-10})\,({\text{standard normal}})+10^{-10}\,({\text{standard Cauchy}}).}
The mean ofi.i.d.observations fromF(x)behaves "normally" except for exorbitantly large samples, although the mean ofF(x)does not even exist.
Mixture densities are complicated densities expressible in terms of simpler densities (the mixture components), and are used both because they provide a good model for certain data sets (where different subsets of the data exhibit different characteristics and can best be modeled separately), and because they can be more mathematically tractable, because the individual mixture components can be more easily studied than the overall mixture density.
Mixture densities can be used to model astatistical populationwithsubpopulations, where the mixture components are the densities on the subpopulations, and the weights are the proportions of each subpopulation in the overall population.
Mixture densities can also be used to modelexperimental erroror contamination – one assumes that most of the samples measure the desired phenomenon, with some samples from a different, erroneous distribution.
Parametric statistics that assume no error often fail on such mixture densities – for example, statistics that assume normality often fail disastrously in the presence of even a fewoutliers– and instead one usesrobust statistics.
Inmeta-analysisof separate studies,study heterogeneitycauses distribution of results to be a mixture distribution, and leads tooverdispersionof results relative to predicted error. For example, in astatistical survey, themargin of error(determined by sample size) predicts thesampling errorand hence dispersion of results on repeated surveys. The presence of study heterogeneity (studies have differentsampling bias) increases the dispersion relative to the margin of error.
|
https://en.wikipedia.org/wiki/Mixture_distribution
|
Subitizingis the rapid, accurate, and effortless ability to perceive small quantities of items in aset, typically when there are four or fewer items, without relying on linguistic or arithmetic processes. The term refers to the sensation of instantly knowing how many objects are in the visual scene when their number falls within the subitizing range.[1]
Sets larger than about four to five items cannot be subitized unless the items appear in a pattern with which the person is familiar (such as the six dots on one face of a die). Large, familiar sets might becountedone-by-one (or the person might calculate the number through a rapid calculation if they can mentally group the elements into a few small sets). A person could alsoestimatethe number of a large set—a skill similar to, but different from, subitizing. The term subitizing was coined in 1949 by E. L. Kaufman et al.,[1]and is derived from the Latin adjectivesubitus(meaning "sudden").
The accuracy, speed, and confidence with which observers make judgments of the number of items are critically dependent on the number of elements to be enumerated. Judgments made for displays composed of around one to four items are rapid,[2]accurate,[3]and confident.[4]However, once there are more than four items to count, judgments are made with decreasing accuracy and confidence.[1]In addition, response times rise in a dramatic fashion, with an extra 250–350ms added for each additional item within the display beyond about four.[5]
While the increase in response time for each additional element within a display is 250–350ms per item outside the subitizing range, there is still a significant, albeit smaller, increase of 40–100ms per item within the subitizing range.[2]A similar pattern of reaction times is found in young children, although with steeper slopes for both the subitizing range and the enumeration range.[6]This suggests there is no span ofapprehensionas such, if this is defined as the number of items which can be immediately apprehended by cognitive processes, since there is an extra cost associated with each additional item enumerated. However, the relative differences in costs associated with enumerating items within the subitizing range are small, whether measured in terms of accuracy, confidence, orspeed of response. Furthermore, the values of all measures appear to differ markedly inside and outside the subitizing range.[1]So, while there may be no span of apprehension, there appear to be real differences in the ways in which a small number of elements is processed by the visual system (i.e. approximately four or fewer items), compared with larger numbers of elements (i.e. approximately more than four items).
A 2006 study demonstrated that subitizing and counting are not restricted to visual perception, but also extend to tactile perception, when observers had to name the number of stimulated fingertips.[7]A 2008 study also demonstrated subitizing and counting in auditory perception.[8]Even though the existence of subitizing in tactile perception has been questioned,[9]this effect has been replicated many times and can be therefore considered as robust.[10][11][12]The subitizing effect has also been obtained in tactile perception with congenitally blind adults.[13]Together, these findings support the idea that subitizing is a general perceptual mechanism extending to auditory and tactile processing.
As the derivation of the term "subitizing" suggests, the feeling associated with making a number judgment within the subitizing range is one of immediately being aware of the displayed elements.[3]When the number of objects presented exceeds the subitizing range, this feeling is lost, and observers commonly report an impression of shifting their viewpoint around the display, until all the elements presented have been counted.[1]The ability of observers to count the number of items within a display can be limited, either by the rapid presentation and subsequent masking of items,[14]or by requiring observers to respond quickly.[1]Both procedures have little, if any, effect on enumeration within the subitizing range. These techniques may restrict the ability of observers to count items by limiting the degree to which observers can shift their "zone of attention"[15]successively to different elements within the display.
Atkinson, Campbell, and Francis[16]demonstrated that visualafterimagescould be employed in order to achieve similar results. Using a flashgun to illuminate a line of white disks, they were able to generate intense afterimages in dark-adapted observers. Observers were required to verbally report how many disks had been presented, both at 10s and at 60s after the flashgun exposure. Observers reported being able to see all the disks presented for at least 10s, and being able to perceive at least some of the disks after 60s. Unlike simply displaying the images for 10- and 60-second intervals, presenting them as afterimages means that eye movements cannot be used for counting: when the subjects move their eyes, the images move with them. Despite having ample time to enumerate the disks, observers made consistent enumeration errors in both the 10s and 60s conditions whenever the number presented fell outside the subitizing range (i.e., 5–12 disks). In contrast, no errors occurred within the subitizing range (i.e., 1–4 disks), in either the 10s or 60s conditions.[17]
The work on theenumerationof afterimages[16][17]supports the view that different cognitive processes operate for the enumeration of elements inside and outside the subitizing range, and as such raises the possibility that subitizing and counting involve different brain circuits. However,functional imagingresearch has been interpreted both to support different[18]and shared processes.[19]
Further evidence supporting the view that subitizing and counting may involve functionally and anatomically distinct brain areas comes from patients withsimultanagnosia, one of the key components ofBálint's syndrome.[20]Patients with this disorder suffer from an inability to perceive visual scenes properly, being unable to localize objects in space, either by looking at the objects, pointing to them, or by verbally reporting their position.[20]Despite these dramatic symptoms, such patients are able to correctly recognize individual objects.[21]Crucially, people with simultanagnosia are unable to enumerate objects outside the subitizing range, either failing to count certain objects, or alternatively counting the same object several times.[22]
However, people with simultanagnosia have no difficulty enumerating objects within the subitizing range.[23]The disorder is associated with bilateral damage to theparietal lobe, an area of the brain linked with spatial shifts of attention.[18]These neuropsychological results are consistent with the view that the process of counting, but not that of subitizing, requires active shifts of attention. However, recent research has questioned this conclusion by finding that attention also affects subitizing.[24]
A further source of research on the neural processes of subitizing compared to counting comes frompositron emission tomography(PET) research on normal observers. Such research compares the brain activity associated with enumeration processes inside (i.e., 1–4 items) for subitizing, and outside (i.e., 5–8 items) for counting.[18][19]
Such research finds that within the subitizing and counting range activation occurs bilaterally in the occipital extrastriate cortex and superior parietal lobe/intraparietal sulcus. This has been interpreted as evidence that shared processes are involved.[19]However, the existence of further activations during counting in the right inferior frontal regions, and theanterior cingulatehave been interpreted as suggesting the existence of distinct processes during counting related to the activation of regions involved in the shifting of attention.[18]
Historically, many systems have attempted to use subitizing to identify full or partial quantities. In the twentieth century, mathematics educators started to adopt some of these systems, as reviewed in the examples below, but often switched to more abstract color-coding to represent quantities up to ten.
In the 1990s, babies three weeks old were shown to differentiate between 1–3 objects, that is, to subitize.[22]A more recent meta-study summarizing five different studies concluded that infants are born with an innate ability to differentiate quantities within a small range, which increases over time.[25]By the age of seven that ability increases to 4–7 objects. Some practitioners claim that with training, children are capable of subitizing 15+ objects correctly.[citation needed]
The hypothesized use ofyupana, an Inca counting system, placed up to five counters in connected trays for calculations.
In each place value, the Chineseabacususes four or five beads to represent units, which are subitized, and one or two separate beads, which symbolize fives. This allows multi-digit operations such as carrying and borrowing to occur without subitizing beyond five.
European abacuses use ten beads in each register, but usually separate them into fives by color.
The idea of instant recognition of quantities has been adopted by several pedagogical systems, such asMontessori,CuisenaireandDienes. However, these systems only partially use subitizing, attempting to make all quantities from 1 to 10 instantly recognizable. To achieve it, they code quantities by color and length of rods or bead strings representing them. Recognizing such visual or tactile representations and associating quantities with them involves different mental operations from subitizing.
One of the most basic applications is indigit groupingin large numbers, which allow one to tell the size at a glance, rather than having to count. For example, writing one million (1000000) as 1,000,000 (or 1.000.000 or1000000) or one (short) billion (1000000000) as 1,000,000,000 (or other forms, such as 1,00,00,00,000 in theIndian numbering system) makes it much easier to read. This is particularly important in accounting and finance, as an error of a single decimal digit changes the amount by a factor of ten. This is also found in computerprogramming languagesforliteralvalues, some of which usedigit separators.
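For instance, Python (like a number of other languages) accepts underscores as digit separators in numeric literals, and its string formatting can insert thousands separators for display; a small illustration:

```python
population = 1_000_000          # same value as 1000000, but grouped into subitizable chunks
print(population == 1000000)    # True
print(f"{population:,}")        # 1,000,000
print(f"{10**9:,}")             # 1,000,000,000
```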
Dice,playing cardsand other gaming devices traditionally split quantities into subitizable groups with recognizable patterns. The behavioural advantage of this grouping method has been scientifically investigated by Ciccione andDehaene,[26]who showed that counting performances are improved if the groups share the same amount of items and the same repeated pattern.
A comparable application is to split up binary and hexadecimal number representations, telephone numbers, bank account numbers (e.g.,IBAN, social security numbers, number plates, etc.) into groups ranging from 2 to 5 digits separated by spaces, dots, dashes, or other separators. This is done to support overseeing completeness of a number when comparing or retyping. This practice of grouping characters also supports easier memorization of large numbers and character structures.
There is at least one game that can be played online to self assess one's ability to subitize.[27]
|
https://en.wikipedia.org/wiki/Subitizing_and_counting
|
Inprogramming language theory,semanticsis the rigorous mathematical study of the meaning ofprogramming languages.[1]Semantics assignscomputationalmeaning to validstringsin aprogramming language syntax. It is closely related to, and often crosses over with, thesemantics of mathematical proofs.
Semanticsdescribes the processes a computer follows whenexecutinga program in that specific language. This can be done by describing the relationship between the input and output of a program, or giving an explanation of how the program will be executed on a certainplatform, thereby creating amodel of computation.
In 1967,Robert W. Floydpublished the paperAssigning meanings to programs; his chief aim was "a rigorous standard for proofs about computer programs, includingproofs of correctness, equivalence, and termination".[2][3]Floyd further wrote:[2]
A semantic definition of a programming language, in our approach, is founded on asyntacticdefinition. It must specify which of the phrases in a syntactically correct program representcommands, and whatconditionsmust be imposed on an interpretation in the neighborhood of each command.
In 1969,Tony Hoarepublished a paper onHoare logicseeded by Floyd's ideas, now sometimes collectively calledaxiomatic semantics.[4][5]
In the 1970s, the termsoperational semanticsanddenotational semanticsemerged.[5]
The field of formal semantics encompasses all of the following:
It has close links with other areas ofcomputer sciencesuch asprogramming language design,type theory,compilersandinterpreters,program verificationandmodel checking.
There are many approaches to formal semantics; these belong to three major classes: denotational semantics, operational semantics, and axiomatic semantics.
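As a rough, hypothetical illustration of the operational style (not drawn from any particular formal definition; the class and function names are invented for the example), the following Python sketch gives a big-step evaluator for a tiny expression language:

```python
from dataclasses import dataclass

# Abstract syntax for a tiny expression language.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: "Num | Add"
    right: "Num | Add"

def evaluate(expr) -> int:
    """Big-step operational semantics: map each syntactic phrase to the value it computes."""
    if isinstance(expr, Num):
        return expr.value                                      # a literal evaluates to itself
    if isinstance(expr, Add):
        return evaluate(expr.left) + evaluate(expr.right)      # evaluate sub-terms, then add
    raise ValueError("unknown syntactic form")

# (1 + 2) + 4 evaluates to 7
print(evaluate(Add(Add(Num(1), Num(2)), Num(4))))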
Apart from the choice between denotational, operational, or axiomatic approaches, most variations in formal semantic systems arise from the choice of supporting mathematical formalism.[citation needed]
Some variations of formal semantics include the following:
For a variety of reasons, one might wish to describe the relationships between different formal semantics, for example, to prove that two different styles of semantics for the same language agree on the behavior they assign to programs.
It is also possible to relate multiple semantics throughabstractionsvia the theory ofabstract interpretation.[citation needed]
|
https://en.wikipedia.org/wiki/Formal_semantics_of_programming_languages
|
Anillegal opcode, also called anunimplemented operation,[1]unintended opcode[2]orundocumented instruction, is aninstructionto aCPUthat is not mentioned in any official documentation released by the CPU's designer or manufacturer, but which nevertheless has an effect. Illegal opcodes were common on older CPUs designed during the 1970s, such as theMOS Technology6502,Intel8086, and theZilogZ80. Unlike modern processors, those older processors had a very limited transistor budget, so to save space their designers often omitted the circuitry that would detect invalid opcodes and generate atrapto an error handler. The operation of many of these opcodes happens as aside effectof the wiring oftransistorsin the CPU, and usually combines functions of the CPU that were not intended to be combined. On both old and modern processors, there are also instructions intentionally included in the processor by the manufacturer but not documented in any official specification.
While most accidental illegal instructions have useless or even highly undesirable effects (such as crashing the computer), some can have useful functions in certain situations. Such instructions were sometimes exploited incomputer gamesof the 1970s and 1980s to speed up certain time-critical sections. Another common use was in the ongoing battle betweencopy protectionimplementations andcracking. Here, they were a form ofsecurity through obscurity, and their secrecy usually did not last very long.
A danger associated with the use of illegal instructions was that, given the fact that the manufacturer does not guarantee their existence and function, they might disappear or behave differently with any change of the CPU internals or any new revision of the CPU, rendering programs that use them incompatible with the newer revisions. For example, a number of olderApple IIgames did not work correctly on the newerApple IIc, because the latter used a newer CPU revision –65C02– that did away with illegal opcodes.
Later CPUs, such as the80186,80286,68000and its descendants, do not have illegal opcodes that are widely known/used. Ideally, the CPU will behave in a well-defined way when it finds an unknown opcode in the instruction stream, such as triggering a certainexceptionorfaultcondition. Theoperating system's exception or fault handler will then usually terminate the application that caused the fault, unless the program had previously established its own exception/fault handler, in which case that handler would receive control. Another, less common way of handling illegal instructions is by defining them to do nothing except take up time and space (equivalent to the CPU's officialNOPinstruction); this method is used by theTMS9900and65C02processors, among others. Alternatively, unknown instructions can be emulated in software (e.g.LOADALL), or even "new" pseudo-instructions can be implemented. SomeBIOSes, memory managers, and operating systems take advantage of this, for example, to let V86 tasks communicate with the underlying system, i.e. BOP (from "BIOS Operation") utilized by the WindowsNTVDM.[3]
In spite of Intel's guarantee against such instructions, research using techniques such asfuzzinguncovered a vast number of undocumented instructions in x86 processors as late as 2018.[4]Some of these instructions are shared across processor manufacturers, indicating that Intel andAMDare both aware of the instruction and its purpose, despite it not appearing in any official specification. Other instructions are specific to manufacturers or specific product lines. The purpose of the majority of x86 undocumented instructions is unknown.
Today, the details of these instructions are mainly of interest for exactemulationof older systems.
|
https://en.wikipedia.org/wiki/Unintended_instructions
|
ASD-STE100 Simplified Technical English(STE) is acontrolled natural languagedesigned to simplify and clarify technical documentation. It was originally developed during the 1980s by the European Association of Aerospace Industries (AECMA), at the request of the European airline industry, which wanted a standardized form of English for aircraft maintenance documentation that could be easily understood by non-native English speakers. It has since been adopted in many other fields outside the aerospace, defense, and maintenance domains for its clear, consistent, and comprehensive nature. The current edition of the STE Standard, published in January 2025, consists of 53 writing rules and a dictionary of approximately 900 approved words.
The first attempts at controlled English were made as early as the 1930s, and again in the 1970s, withBasic English,[1]Caterpillar Fundamental English[2][3]and Eastman Kodak's KISL.[4]In 1979, aerospace documentation was written in American English (Boeing, Douglas, Lockheed, etc.), in British English (Hawker Siddeley,British Aircraft Corporation, etc.) and by companies whose native language was not English (Fokker,Aeritalia,Aerospatiale, and some of the companies that formedAirbusat the time).
Because European airlines needed to translate parts of their maintenance documentation into other languages for local mechanics, the European Airline industry approached AECMA (the European Association of Aerospace Industries) to investigate the possibility of using a controlled or standardized form of English, with a strong focus onreadabilityand comprehensibility. In 1983, after an investigation into the different types of controlled languages that existed in other industries, AECMA decided to produce its own controlled English. The AIA (Aerospace Industries Association of America) was also invited to participate in this project. The result of this collaborative work was the release of the AECMA Document, PSC-85-16598 (known as the AECMA Simplified English Guide) in 1985. Subsequently, several changes, issues and revisions were released up to the present issue (Issue 9).
After the merger of AECMA with two other associations to form theAerospace, Security and Defence Industries Association of Europe(ASD) in 2004, the document was renamedASD Simplified Technical English, Specification ASD-STE100. Thus, STE evolved from Guide to Specification. With Issue 9, it has transitioned to international Standard. This change in designation (the subtitle of the document is Standard for Technical Documentation) is not just a reclassification, but a significant step that reinforces the global applicability of STE.
ASD-STE100 is maintained by the Simplified Technical English Maintenance Group (STEMG), a working group of ASD, formed in 1983. The copyright of ASD-STE100 is fully owned by ASD.[5][6]
Due to the ever-evolving nature of technology and technical language, the STEMG also relies on user feedback for suggested changes and updates.[7]Starting from Issue 6 in 2013, the Standard became free of charge. Over the years, 18,981 official copies of Issues 6, 7, and 8 were distributed. Since Issue 9 was released in January 2025, almost 1,000 official copies have been distributed (distribution log updated March 2025). Usually, a new issue is released every three years.
A free official copy of the ASD-STE100 Standard can be requested through theASD-STE100 websiteand through ASD.
Simplified Technical English can:
The ASD-STE100 Simplified Technical English Standard consists of two parts: a set of writing rules (Part 1) and a controlled dictionary of approved words (Part 2).
The writing rules cover aspects of grammar and style. The rules also differentiate between two types of texts: procedures and descriptions.
Each page of the ASD-STE100 dictionary is arranged in four columns: Word (part of speech), Approved meaning/ALTERNATIVES, STE EXAMPLE, and Non-STE example.
Explanation of the four columns:
Word (part of speech)– This column has information on the word and its part of speech. Every approved word in STE is only permitted as a specific part of speech. For example, the word "test" is only approved as a noun (the test) but not as a verb (to test). There are few exceptions to the "One word, one part of speech, one meaning" principle.
Approved meaning/ALTERNATIVES – This column gives the approved meaning (or definition) of an approved word in STE. Approved words, such as "ACCESS" and "ACCIDENT", are written in uppercase. The text in these definitions is not written in STE. If a meaning is not given in the dictionary, one cannot use the word in that meaning; an alternative word must be used. For words that are not approved (they are written in lowercase, such as "acceptance" and "accessible"), this column gives approved alternatives that one can use to replace them. These alternatives are in uppercase, and they are only suggestions. It is possible that the suggested alternative for an unapproved word has a different part of speech. Usually, the first suggested alternative has the same part of speech as the word that is not approved.
STE EXAMPLE– This column shows how to use the approved word or how to use the approved alternative (usually a word-for-word replacement). It also shows how to keep the same meaning with a different construction. The wording given in the STE examples is not mandatory. It shows only one method to write a text with approved words. One can frequently use different constructions with other approved words and keep the same meaning.
Non-STE example – This column (text in lowercase) gives examples that show how the word that is not approved is frequently used in standard technical writing. The examples also help one to understand how to use the approved alternatives or different constructions to give the same information. For approved words, this column is empty unless there is a help symbol (lightbulb) related to other meanings or restrictions.
The dictionary includes entries for words that are approved and for words that are not approved. The approved words can only be used according to their specified meaning. For example, the word "close" (v) can only be used in one of two meanings: to move something (for example, a door) to a position that stops an opening, or to complete an electrical circuit.
The verb can expressto close a doororto close a circuit, but it cannot be used with other connotations (e.g.,to close a meetingorto close a business). The adjective "close" appears in the dictionary as a word that is not approved with the suggested approved alternative "NEAR" (prep). Thus, STE does not allowdo not go close to the landing gear, but it does allowdo not go near the landing gear. In addition to the general STE vocabulary listed in the dictionary, Section 1, Words, gives specific guidelines for using technical nouns and technical verbs that writers need to describe technical information. For example, nouns, multi-word nouns, or verbs such asgrease,discoloration,propeller,aural warning system,overhead panel,to ream, andto drillare not listed in the dictionary, but they qualify as approved terms according to Part 1, Section 1 (specifically, writing rules 1.5 and 1.12).
"Simplified Technical English" is sometimes used as a generic term for a controlled natural language. The standard started as an industry-regulated writing standard for aircraft maintenance documentation, but it has become a requirement for an increasing number of military land vehicles, seacraft, and weapons programs. Although it was not initially intended for use as a general writing standard, it has been successfully adopted by other industries and for a wide range of document types. The US government'sPlain Englishlacks the strict vocabulary restrictions of the aerospace standard, but represents an attempt at a more general writing standard.[9]
Since 1986, STE has been a requirement of the ATA Specification i2200 (formerly ATA100) and ATA104 (Training). STE is also a requirement of theS1000DSpecification. The European Defence Standards Reference (EDSTAR) recommends STE as one of thebest practicestandards for writing technical documentation to be applied for defense contracting by all EDA (European Defence Agency) participating member states.
Today, the success of STE is such that other industries use it beyond its initial purpose for maintenance documentation and outside the aerospace and defense domains. At the end of Issue 8 distribution in December 2024, the STE distribution log showed that 64% of users came from outside these two industries. STE is successfully applied in the automotive, renewable energies, and offshore logistics sectors, and is further expanding within medical devices and the pharmaceutical sector. STE interest is also increasing within the academic world, including the disciplines ofinformation engineering,applied linguistics, andcomputational linguistics.
Several unrelated software products exist to support the application of STE, but the STEMG and ASD do not endorse or certify these products.[10]
Boeing developed the Boeing Simplified English Checker (BSEC). This linguistic-based checker uses a sophisticated 350-rule English parser, which is augmented with special functions that check for violations of the Simplified Technical English specification.[11]
HyperSTE is a plugin tool offered by Etteplan to check content for adherence to the rules and grammar of the standard.
Congree offers a Simplified Technical English Checker based on linguistic algorithms. It supports all rules of Simplified Technical English issue 7 that are relevant to the text composition and provides an integrated Simplified Technical English dictionary.[12]
The TechScribe term checker for ASD-STE100 helps writers to find text that does not conform to ASD-STE100.[13]
|
https://en.wikipedia.org/wiki/Simplified_Technical_English
|
Thenull coalescing operatoris abinary operatorthat is part of the syntax for a basicconditional expressionin severalprogramming languages, such as (in alphabetical order):C#[1]since version 2.0,[2]Dart[3]since version 1.12.0,[4]PHPsince version 7.0.0,[5]Perlsince version 5.10 aslogical defined-or,[6]PowerShellsince 7.0.0,[7]andSwift[8]asnil-coalescing operator. It is most commonly written asx ?? y, but varies across programming languages.
While its behavior differs between implementations, the null coalescing operator generally returns the result of its left-most operand if it exists and is notnull, and otherwise returns the right-most operand. This behavior allows a default value to be defined for cases where a more specific value is not available.
Like the binaryElvis operator, usually written asx ?: y, the null coalescing operator is ashort-circuiting operatorand thus does not evaluate the second operand if its value is not used, which is significant if its evaluation hasside-effects.
InBourne shell(and derivatives), "Ifparameteris unset or null, the expansion ofwordis substituted. Otherwise, the value ofparameteris substituted":[9]
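This is the ${parameter:-word} form of parameter expansion. A minimal sketch (the variable and fallback values are illustrative):

```sh
unset name
echo "${name:-guest}"   # prints "guest" because name is unset
name="alice"
echo "${name:-guest}"   # prints "alice" because name is set and non-null
```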
InC#, the null coalescing operator is??.
It is most often used to simplify expressions as follows:
For example, if one wishes to implement some C# code to give a page a default title if none is present, one may use a single ?? statement instead of a more verbose conditional expression or an explicit if/else statement; all three forms are sketched below.
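A minimal sketch of the three forms (the variable names pageTitle and suppliedTitle follow the description below; the default string is illustrative):

```csharp
// Three alternative ways to write the same assignment, shown together for comparison.

// Concise form using the null coalescing operator:
string pageTitle = suppliedTitle ?? "Default Title";

// More verbose form using the conditional (ternary) operator:
string pageTitle = (suppliedTitle == null) ? "Default Title" : suppliedTitle;

// Equivalent if/else form:
string pageTitle;
if (suppliedTitle != null)
{
    pageTitle = suppliedTitle;
}
else
{
    pageTitle = "Default Title";
}
```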
The three forms result in the same value being stored into the variable namedpageTitle.
suppliedTitleis referenced only once when using the??operator, and twice in the other two code examples.
The operator can also be used multiple times in the same expression:
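A sketch with illustrative nullable variables (first, second and third are assumed to be of type int?):

```csharp
// Evaluation stops at the first operand that is not null.
int? number = first ?? second ?? third;
```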
Once a non-null value is assigned to number, or it reaches the final value (which may or may not be null), the expression is completed.
If, for example, a variable should be changed to another value only when its current value evaluates to null, the ??= null coalescing assignment operator (available since C# 8.0) can be used as a more concise version of reassigning the result of ?? back to the same variable.
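A minimal sketch (the variable and default value are illustrative):

```csharp
// Concise null coalescing assignment (C# 8.0 and later):
pageTitle ??= "Default Title";

// Longer equivalent:
pageTitle = pageTitle ?? "Default Title";
```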
In combination with thenull-conditional operator?.or the null-conditional element access operator?[]the null coalescing operator can be used to provide a default value if an object or an object's member is null. For example, the following will return the default title if either thepageobject is null orpageis not null but itsTitleproperty is:
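A sketch of such a statement (the page object and its Title property are as described above; the default string is illustrative):

```csharp
string pageTitle = page?.Title ?? "Default Title";
```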
As ofColdFusion11,[10]Railo4.1,[11]CFMLsupports the null coalescing operator as a variation of the ternary operator,?:. It is functionally and syntactically equivalent to its C# counterpart, above. Example:
Missing values inApache FreeMarkerwill normally cause exceptions. However, both missing and null values can be handled, with an optional default value:[12]
or, to leave the output blank:
JavaScript's nearest operator is??, the "nullish coalescing operator", which was added to the standard inECMAScript's 11th edition.[13]In earlier versions, it could be used via aBabelplugin, and inTypeScript. It evaluates its left-hand operand and, if the result value isnot"nullish" (nullorundefined), takes that value as its result; otherwise, it evaluates the right-hand operand and takes the resulting value as its result.
In the following example,awill be assigned the value ofbif the value ofbis notnullorundefined, otherwise it will be assigned 3.
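A minimal sketch of such an assignment (assuming b is declared in the surrounding scope):

```javascript
const a = b ?? 3;
```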
Before the nullish coalescing operator, programmers would use the logical OR operator (||). But where??looks specifically fornullorundefined, the||operator looks for anyfalsyvalue:null,undefined,"",0,NaN, and of course,false.
In the following example,awill be assigned the value ofbif the value ofbistruthy, otherwise it will be assigned 3.
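A minimal sketch of the older pattern (again assuming b is declared elsewhere):

```javascript
const a = b || 3;
```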
Kotlinuses the?:operator.[14]This is an unusual choice of symbol, given that?:is typically used for theElvis operator, not null coalescing, but it was inspired by Groovy, where null is considered false.
InObj-C, the nil coalescing operator is?:. It can be used to provide a default for nil references, and is a shorthand for a conditional expression that repeats the tested value.
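A minimal sketch (the variable names are illustrative):

```objc
// Shorthand using the ?: operator:
NSString *name = suppliedName ?: @"Default";

// The same as writing:
NSString *name = suppliedName ? suppliedName : @"Default";
```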
InPerl(starting with version 5.10), the operator is//and the equivalent Perl code is:
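A sketch of the usage, assigning the result to an illustrative variable (the operand names match the description below):

```perl
my $result = $possibly_null_value // $value_if_null;
```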
Thepossibly_null_valueis evaluated asnullornot-null(in Perl terminology,undefinedordefined). On the basis of the evaluation, the expression returns eithervalue_if_nullwhenpossibly_null_valueis null, orpossibly_null_valueotherwise. In the absence ofside-effectsthis is similar to the wayternary operators(?:statements) work in languages that support them. The above Perl code is equivalent to the use of the ternary operator below:
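A sketch of the equivalent ternary form:

```perl
my $result = defined($possibly_null_value) ? $possibly_null_value : $value_if_null;
```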
This operator's most common usage is to minimize the amount of code used for a simple null check.
Perl additionally has a//=assignment operator, which applies the defined-or logic in place: assigning with //= is largely equivalent to reassigning the result of // back to the same variable.
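A minimal sketch of the two forms (with illustrative variables $a and $b):

```perl
$a //= $b;       # assignment form
$a = $a // $b;   # largely equivalent expanded form
```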
This operator differs from Perl's older||and||=operators in that it considersdefinedness,nottruth. Thus they behave differently on values that are false but defined, such as 0 or "" (a zero-length string):
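A sketch of the difference (illustrative variables):

```perl
my $count = 0;
my $x = $count // 5;   # $x is 0 (0 is defined, so // keeps it)
my $y = $count || 5;   # $y is 5 (0 is false, so || replaces it)
```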
PHP 7.0 introduced[15]a null-coalescing operator with the??syntax. This checks strictly for NULL or a non-existent variable/array index/property. In this respect, it acts similarly to PHP'sisset()pseudo-function:
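A sketch of typical usage (the array index and default value are illustrative):

```php
// Equivalent to: isset($_GET['user']) ? $_GET['user'] : 'nobody'
$username = $_GET['user'] ?? 'nobody';
```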
Version 7.4 of PHP introduced the Null Coalescing Assignment Operator with the??=syntax:[16]
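A minimal sketch (illustrative variable and default):

```php
// Assigns 'nobody' only if $username is null or not set:
$username ??= 'nobody';
```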
Since PowerShell 7, the??null coalescing operator provides this functionality.[7]
SinceRversion 4.4.0 the%||%operator is included in base R (previously it was a feature of some packages likerlang).[17]
While there is no null in Rust, tagged unions such as Result<T, E> and Option<T> are used for the same purpose.
Any type implementing the Try trait can be unwrapped.
unwrap_or()serves a similar purpose as the null coalescing operator in other languages. Alternatively,unwrap_or_else()can be used to use the result of a function as a default value.
In Oracle'sPL/SQL, theNVL() function provides the same outcome:
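A sketch of typical usage (the column and table names are illustrative):

```sql
SELECT NVL(middle_name, 'n/a') FROM employees;
```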
InSQL Server/Transact-SQLthere is the ISNULL function that follows the same prototype pattern:
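A sketch of typical usage (illustrative names):

```sql
SELECT ISNULL(middle_name, 'n/a') FROM employees;
```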
Care should be taken not to confuseISNULLwithIS NULL– the latter serves to evaluate whether some contents are defined to beNULLor not.
The ANSI SQL-92 standard includes the COALESCE function implemented inOracle,[18]SQL Server,[19]PostgreSQL,[20]SQLite[21]andMySQL.[22]The COALESCE function returns the first argument that is not null; if all arguments are null, it returns null.
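A sketch of typical usage (illustrative names):

```sql
SELECT COALESCE(nickname, first_name, 'unknown') FROM employees;
```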
The difference between ISNULL and COALESCE is that the type returned by ISNULL is the type of the leftmost value while COALESCE returns the type of the first non-null value.
InSwift, the nil coalescing operator is??. It is used to provide a default when unwrapping anoptional type:
For example, if one wishes to implement some Swift code to give a page a default title if none is present, one may use a single ?? statement instead of a more verbose forced-unwrapping conditional; both forms are sketched below.
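A minimal sketch (the variable names mirror the C# example above; the default string is illustrative):

```swift
// Two alternative forms of the same assignment.

// Concise form using the nil coalescing operator:
let pageTitle = suppliedTitle ?? "Default Title"

// More verbose equivalent:
let pageTitle = suppliedTitle != nil ? suppliedTitle! : "Default Title"
```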
|
https://en.wikipedia.org/wiki/Null_coalescing_operator
|
Dependency grammar(DG) is a class of moderngrammaticaltheories that are all based on the dependency relation (as opposed to theconstituency relationofphrase structure) and that can be traced back primarily to the work ofLucien Tesnière. Dependency is the notion that linguistic units, e.g. words, are connected to each other by directed links. The (finite) verb is taken to be the structural center of clause structure. All other syntactic units (words) are either directly or indirectly connected to the verb in terms of the directed links, which are calleddependencies. Dependency grammar differs fromphrase structure grammarin that while it can identify phrases it tends to overlook phrasal nodes. A dependency structure is determined by the relation between a word (ahead) and its dependents. Dependency structures are flatter than phrase structures in part because they lack afiniteverb phraseconstituent, and they are thus well suited for the analysis of languages with free word order, such asCzechorWarlpiri.
The notion of dependencies between grammatical units has existed since the earliest recorded grammars, e.g.Pāṇini, and the dependency concept therefore arguably predates that of phrase structure by many centuries.[1]Ibn Maḍāʾ, a 12th-centurylinguistfromCórdoba, Andalusia, may have been the first grammarian to use the termdependencyin the grammatical sense that we use it today. In early modern times, the dependency concept seems to have coexisted side by side with that of phrase structure, the latter having entered Latin, French, English and other grammars from the widespread study ofterm logicof antiquity.[2]Dependency is also concretely present in the works ofSámuel Brassai(1800–1897), a Hungarian linguist,Franz Kern(1830–1894), a German philologist, and ofHeimann Hariton Tiktin(1850–1936), a Romanian linguist.[3]
Modern dependency grammars, however, begin primarily with the work of Lucien Tesnière. Tesnière was a Frenchman, apolyglot, and a professor of linguistics at the universities in Strasbourg and Montpellier. His major workÉléments de syntaxe structuralewas published posthumously in 1959 – he died in 1954. The basic approach to syntax he developed has at least partially influenced the work of others in the 1960s, although it is not clear in what way these works were inspired by other sources.[4]A number of other dependency-based grammars have gained prominence since those early works.[5]DG has generated a lot of interest in Germany[6]in both theoretical syntax and language pedagogy. In recent years, the great development surrounding dependency-based theories has come fromcomputational linguisticsand is due, in part, to the influential work thatDavid Haysdid in machine translation at theRAND Corporationin the 1950s and 1960s. Dependency-based systems are increasingly being used to parse natural language and generatetree banks. Interest in dependency grammar is growing at present, international conferences on dependency linguistics being a relatively recent development (Depling 2011,Depling 2013,Depling 2015,Depling 2017,Depling 2019).
Dependency is a one-to-one correspondence: for every element (e.g. word or morph) in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (or morph) grammars. All that exist are the elements and the dependencies that connect the elements into a structure. This situation should be compared withphrase structure. Phrase structure is a one-to-one-or-more correspondence, which means that, for every element in a sentence, there are one or more nodes in the structure that correspond to that element. The result of this difference is that dependency structures are minimal[7]compared to their phrase structure counterparts, since they tend to contain many fewer nodes.
These trees illustrate two possible ways to render the dependency and phrase structure relations (see below). This dependency tree is an "ordered" tree, i.e. it reflects actual word order. Many dependency trees abstract away from linear order and focus just on hierarchical order, which means they do not show actual word order. This constituency (= phrase structure) tree follows the conventions ofbare phrase structure(BPS), whereby the words themselves are employed as the node labels.
The distinction between dependency and phrase structure grammars derives in large part from the initial division of the clause. The phrase structure relation derives from an initial binary division, whereby the clause is split into a subjectnoun phrase(NP) and apredicateverb phrase(VP). This division is certainly present in the basic analysis of the clause that we find in the works of, for instance,Leonard BloomfieldandNoam Chomsky. Tesnière, however, argued vehemently against this binary division, preferring instead to position the verb as the root of all clause structure. Tesnière's stance was that the subject-predicate division stems fromterm logicand has no place in linguistics.[8]The importance of this distinction is that if one acknowledges the initial subject-predicate division in syntax is real, then one is likely to go down the path of phrase structure grammar, while if one rejects this division, then one must consider the verb as the root of all structure, and so go down the path of dependency grammar.
The following frameworks are dependency-based:
Link grammaris similar to dependency grammar, but link grammar does not include directionality between the linked words, and thus does not describe head-dependent relationships. Hybrid dependency/phrase structure grammar uses dependencies between words, but also includes dependencies between phrasal nodes – see for example theQuranic Arabic Dependency Treebank. The derivation trees oftree-adjoining grammarare dependency structures, although the full trees of TAG are rendered in terms of phrase structure, so in this regard, it is not clear whether TAG should be viewed more as a dependency or phrase structure grammar.
There are major differences between the grammars just listed. In this regard, the dependency relation is compatible with other major tenets of theories of grammar. Thus like phrase structure grammars, dependency grammars can be mono- or multistratal, representational or derivational, construction- or rule-based.
There are various conventions that DGs employ to represent dependencies. The following schemata (in addition to the tree above and the trees further below) illustrate some of these conventions:
The representations in (a–d) are trees, whereby the specific conventions employed in each tree vary. Solid lines aredependency edgesand lightly dotted lines areprojection lines. The only difference between tree (a) and tree (b) is that tree (a) employs the category class to label the nodes whereas tree (b) employs the words themselves as the node labels.[9]Tree (c) is a reduced tree insofar as the string of words below and projection lines are deemed unnecessary and are hence omitted. Tree (d) abstracts away from linear order and reflects just hierarchical order.[10]The arrow arcs in (e) are an alternative convention used to show dependencies and are favored byWord Grammar.[11]The brackets in (f) are seldom used, but are nevertheless quite capable of reflecting the dependency hierarchy; dependents appear enclosed in more brackets than their heads. And finally, the indentations like those in (g) are another convention that is sometimes employed to indicate the hierarchy of words.[12]Dependents are placed underneath their heads and indented. Like tree (d), the indentations in (g) abstract away from linear order.
The point of these conventions is that they are just that, namely conventions. They do not influence the basic commitment to dependency as the relation that groups syntactic units.
The dependency representations above (and further below) show syntactic dependencies. Indeed, most work in dependency grammar focuses on syntactic dependencies. Syntactic dependencies are, however, just one of three or four types of dependencies.Meaning–text theory, for instance, emphasizes the role of semantic and morphological dependencies in addition to syntactic dependencies.[13]A fourth type, prosodic dependencies, can also be acknowledged. Distinguishing between these types of dependencies can be important, in part because if one fails to do so, the likelihood that semantic, morphological, and/or prosodic dependencies will be mistaken for syntactic dependencies is great. The following four subsections briefly sketch each of these dependency types. During the discussion, the existence of syntactic dependencies is taken for granted and used as an orientation point for establishing the nature of the other three dependency types.
Semantic dependencies are understood in terms ofpredicatesand theirarguments.[14]The arguments of a predicate are semantically dependent on that predicate. Often, semantic dependencies overlap with and point in the same direction as syntactic dependencies. At times, however, semantic dependencies can point in the opposite direction of syntactic dependencies, or they can be entirely independent of syntactic dependencies. The hierarchy of words in the following examples show standard syntactic dependencies, whereas the arrows indicate semantic dependencies:
The two argumentsSamandSallyin tree (a) are dependent on the predicatelikes, whereby these arguments are also syntactically dependent onlikes. What this means is that the semantic and syntactic dependencies overlap and point in the same direction (down the tree). Attributive adjectives, however, are predicates that take their head noun as their argument, hencebigis a predicate in tree (b) that takesbonesas its one argument; the semantic dependency points up the tree and therefore runs counter to the syntactic dependency. A similar situation obtains in (c), where the preposition predicateontakes the two argumentsthe pictureandthe wall; one of these semantic dependencies points up the syntactic hierarchy, whereas the other points down it. Finally, the predicateto helpin (d) takes the one argumentJimbut is not directly connected toJimin the syntactic hierarchy, which means that semantic dependency is entirely independent of the syntactic dependencies.
Morphological dependencies obtain between words or parts of words.[15]When a given word or part of a word influences the form of another word, then the latter is morphologically dependent on the former. Agreement and concord are therefore manifestations of morphological dependencies. Like semantic dependencies, morphological dependencies can overlap with and point in the same direction as syntactic dependencies, overlap with and point in the opposite direction of syntactic dependencies, or be entirely independent of syntactic dependencies. The arrows are now used to indicate morphological dependencies.
The pluralhousesin (a) demands the plural of the demonstrative determiner, hencetheseappears, notthis, which means there is a morphological dependency that points down the hierarchy fromhousestothese. The situation is reversed in (b), where the singular subjectSamdemands the appearance of the agreement suffix-son the finite verbworks, which means there is a morphological dependency pointing up the hierarchy fromSamtoworks. The type of determiner in the German examples (c) and (d) influences the inflectional suffix that appears on the adjectivealt. When the indefinite articleeinis used, the strong masculine ending-erappears on the adjective. When the definite articlederis used, in contrast, the weak ending-eappears on the adjective. Thus since the choice of determiner impacts the morphological form of the adjective, there is a morphological dependency pointing from the determiner to the adjective, whereby this morphological dependency is entirely independent of the syntactic dependencies. Consider further the following French sentences:
The masculine subjectle chienin (a) demands the masculine form of the predicative adjectiveblanc, whereas the feminine subjectla maisondemands the feminine form of this adjective. A morphological dependency that is entirely independent of the syntactic dependencies therefore points again across the syntactic hierarchy.
Morphological dependencies play an important role intypological studies. Languages are classified as mostlyhead-marking(Sam work-s) or mostlydependent-marking(these houses), whereby most if not all languages contain at least some minor measure of both head and dependent marking.[16]
Prosodic dependencies are acknowledged in order to accommodate the behavior ofclitics.[17]A clitic is a syntactically autonomous element that is prosodically dependent on a host. A clitic is therefore integrated into the prosody of its host, meaning that it forms a single word with its host. Prosodic dependencies exist entirely in the linear dimension (horizontal dimension), whereas standard syntactic dependencies exist in the hierarchical dimension (vertical dimension). Classic examples of clitics in English are reduced auxiliaries (e.g.-ll,-s,-ve) and the possessive marker-s. The prosodic dependencies in the following examples are indicated with hyphens and the lack of a vertical projection line:
A hyphen that appears on the left of the clitic indicates that the clitic is prosodically dependent on the word immediately to its left (He'll,There's), whereas a hyphen that appears on the right side of the clitic (not shown here) indicates that the clitic is prosodically dependent on the word that appears immediately to its right. A given clitic is often prosodically dependent on its syntactic dependent (He'll,There's) or on its head (would've). At other times, it can depend prosodically on a word that is neither its head nor its immediate dependent (Florida's).
Syntactic dependencies are the focus of most work in DG, as stated above. How the presence and the direction of syntactic dependencies are determined is of course often open to debate. In this regard, it must be acknowledged that the validity of syntactic dependencies in the trees throughout this article is being taken for granted. However, these hierarchies are such that many DGs can largely support them, although there will certainly be points of disagreement. The basic question about how syntactic dependencies are discerned has proven difficult to answer definitively. One should acknowledge in this area, however, that the basic task of identifying and discerning the presence and direction of the syntactic dependencies of DGs is no easier or harder than determining the constituent groupings of phrase structure grammars. A variety of heuristics are employed to this end, basictests for constituentsbeing useful tools; the syntactic dependencies assumed in the trees in this article are grouping words together in a manner that most closely matches the results of standard permutation, substitution, and ellipsis tests for constituents.Etymologicalconsiderations also provide helpful clues about the direction of dependencies. A promising principle upon which to base the existence of syntactic dependencies is distribution.[18]When one is striving to identify the root of a given phrase, the word that is most responsible for determining the distribution of that phrase as a whole is its root.
Traditionally, DGs have had a different approach to linear order (word order) than phrase structure grammars. Dependency structures are minimal compared to their phrase structure counterparts, and these minimal structures allow one to focus intently on the two ordering dimensions.[19]Separating the vertical dimension (hierarchical order) from the horizontal dimension (linear order) is easily accomplished. This aspect of dependency structures has allowed DGs, starting with Tesnière (1959), to focus on hierarchical order in a manner that is hardly possible for phrase structure grammars. For Tesnière, linear order was secondary to hierarchical order insofar as hierarchical order preceded linear order in the mind of a speaker. The stemmas (trees) that Tesnière produced reflected this view; they abstracted away from linear order to focus almost entirely on hierarchical order. Many DGs that followed Tesnière adopted this practice, that is, they produced tree structures that reflect hierarchical order alone, e.g.
The traditional focus on hierarchical order generated the impression that DGs have little to say about linear order, and it has contributed to the view that DGs are particularly well-suited to examine languages with free word order. A negative result of this focus on hierarchical order, however, is that there is a dearth of DG explorations of particular word order phenomena, such as of standarddiscontinuities. Comprehensive dependency grammar accounts oftopicalization,wh-fronting,scrambling, andextrapositionare mostly absent from many established DG frameworks. This situation can be contrasted with phrase structure grammars, which have devoted tremendous effort to exploring these phenomena.
The nature of the dependency relation does not, however, prevent one from focusing on linear order. Dependency structures are as capable of exploring word order phenomena as phrase structures. The following trees illustrate this point; they represent one way of exploring discontinuities using dependency structures. The trees suggest the manner in which common discontinuities can be addressed. An example from German is used to illustrate a scramblingdiscontinuity:
The a-trees on the left showprojectivityviolations (= crossing lines), and the b-trees on the right demonstrate one means of addressing these violations. The displaced constituent takes on a word as itsheadthat is not itsgovernor. The words in red mark thecatena(=chain) of words that extends from the root of the displaced constituent to thegovernorof that constituent.[20]Discontinuities are then explored in terms of these catenae. The limitations on topicalization,wh-fronting, scrambling, and extraposition can be explored and identified by examining the nature of the catenae involved.
Traditionally, DGs have treated the syntactic functions (= grammatical functions,grammatical relations) as primitive. They posit an inventory of functions (e.g. subject, object, oblique, determiner, attribute, predicative, etc.). These functions can appear as labels on the dependencies in the tree structures, e.g.[21]
The syntactic functions in this tree are shown in green: ATTR (attribute), COMP-P (complement of preposition), COMP-TO (complement of to), DET (determiner), P-ATTR (prepositional attribute), PRED (predicative), SUBJ (subject), TO-COMP (to complement). The functions chosen and abbreviations used in the tree here are merely representative of the general stance of DGs toward the syntactic functions. The actual inventory of functions and designations employed vary from DG to DG.
As a primitive of the theory, the status of these functions is very different from that in some phrase structure grammars. Traditionally, phrase structure grammars derive the syntactic functions from the constellation. For instance, the object is identified as the NP appearing inside finite VP, and the subject as the NP appearing outside of finite VP. Since DGs reject the existence of a finite VP constituent, they were never presented with the option to view the syntactic functions in this manner. The issue is a question of what comes first: traditionally, DGs take the syntactic functions to be primitive and they then derive the constellation from these functions, whereas phrase structure grammars traditionally take the constellation to be primitive and they then derive the syntactic functions from the constellation.
This question about what comes first (the functions or the constellation) is not an inflexible matter. The stances of both grammar types (dependency and phrase structure) are not narrowly limited to the traditional views. Dependency and phrase structure are both fully compatible with both approaches to the syntactic functions. Indeed, monostratal systems, that are solely based on dependency or phrase structure, will likely reject the notion that the functions are derived from the constellation or that the constellation is derived from the functions. They will take both to be primitive, which means neither can be derived from the other.
|
https://en.wikipedia.org/wiki/Dependency_grammar
|
Indigital electronics, aNAND(NOT AND)gateis alogic gatewhich produces an output which is false only if all its inputs are true; thus its output iscomplementto that of anAND gate. A LOW (0) output results only if all the inputs to the gate are HIGH (1); if any input is LOW (0), a HIGH (1) output results. A NAND gate is made using transistors and junction diodes. ByDe Morgan's laws, a two-input NAND gate's logic may be expressed as $\overline{A} \lor \overline{B} = \overline{A \cdot B}$, making a NAND gate equivalent toinvertersfollowed by anOR gate.
The NAND gate is significant because anyBoolean functioncan be implemented by using a combination of NAND gates. This property is called "functional completeness". It shares this property with theNOR gate. Digital systems employing certain logic circuits take advantage of NAND's functional completeness.
NAND gates with two or more inputs are available asintegrated circuitsintransistor–transistor logic,CMOS, and otherlogic families.
There are three symbols for NAND gates: theMIL/ANSIsymbol, theIECsymbol and the deprecatedDINsymbol sometimes found on old schematics. The ANSI symbol for the NAND gate is a standard AND gate with an inversion bubble connected.
The functionNAND(a1,a2, ...,an)islogically equivalenttoNOT(a1ANDa2AND ... ANDan).
One way of expressing A NAND B is $\overline{A \land B}$, where the symbol $\land$ signifies AND and the bar signifies the negation of the expression under it: in essence, simply $\lnot (A \land B)$.
The basic implementations can be understood from the image on the left below: If either of the switches S1 or S2 is open, thepull-up resistorR will set the output signal Q to 1 (high). If S1 and S2 are both closed, the pull-up resistor will be overridden by the switches, and the output will be 0 (low).
In thedepletion-load NMOS logicrealization in the middle below, the switches are the transistors T2 and T3, and the transistor T1 fulfills the function of the pull-up resistor.
In theCMOSrealization on the right below, the switches are then-typetransistors T3 and T4, and the pull-up resistor is made up of thep-typetransistors T1 and T2, which form the complement of transistors T3 and T4.
In CMOS, NAND gates are more efficient thanNOR gates. This is due to the faster charge mobility in n-MOSFETs compared to p-MOSFETs, so that the parallel connection of two p-MOSFETs (T1 and T2) realised in the NAND gate is more favourable than their series connection in the NOR gate. For this reason, NAND gates are generally preferred over NOR gates in CMOS circuits.[1]
NAND gates are basic logic gates, and as such they are recognised inTTLandCMOSICs.
The standard,4000 series,CMOSICis the 4011, which includes four independent, two-input, NAND gates. These devices are available from many semiconductor manufacturers. These are usually available in both through-holeDILandSOICformats. Datasheets are readily available in mostdatasheet databases.
The standard two-, three-, four- and eight-input NAND gates are available in the TTL 7400 series as the 7400 (quad two-input), 7410 (triple three-input), 7420 (dual four-input) and 7430 (single eight-input) devices.
The NAND gate has the property offunctional completeness, which it shares with theNOR gate. That is, any other logic function (AND, OR, etc.) can be implemented using only NAND gates.[2]An entire processor can be created using NAND gates alone. In TTL ICs using multiple-emittertransistors, it also requires fewer transistors than a NOR gate.
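As a rough illustration of functional completeness (a software sketch, not a circuit description), the basic gates can be composed from a single NAND function; Python is used here only for demonstration:

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# NOT, AND and OR built only from NAND:
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))   # De Morgan: A OR B = NOT(NOT A AND NOT B)

# Truth-table check for OR:
for a in (False, True):
    for b in (False, True):
        assert or_(a, b) == (a or b)
```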
As NOR gates are also functionally complete, if no specific NAND gates are available, one can be made fromNORgates usingNOR logic.[2]
|
https://en.wikipedia.org/wiki/NAND_gate
|
TheBlue Brain Projectwas a Swiss brain research initiative that aimed to create adigital reconstructionof the mouse brain. The project was founded in May 2005 by the Brain Mind Institute ofÉcole Polytechnique Fédérale de Lausanne(EPFL) in Switzerland. The project ended in December 2024. Its mission was to use biologically-detailed digital reconstructions andsimulations of the mammalian brainto identify the fundamental principles of brain structure and function.
The project was headed by the founding directorHenry Markram—who also launched the EuropeanHuman Brain Project—and was co-directed by Felix Schürmann, Adriana Salvatore andSean Hill. Using aBlue Genesupercomputerrunning Michael Hines'sNEURON, the simulation involved a biologically realistic model ofneurons[1][2][3]and an empirically reconstructed modelconnectome.
There were a number of collaborations, including theCajal Blue Brain, which is coordinated by theSupercomputing and Visualization Center of Madrid(CeSViMa), and others run by universities and independent laboratories.
In 2006, the project made its first model of aneocortical columnwith simplified neurons.[4]In November 2007, it completed an initial model of the rat neocortical column. This marked the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.[5][4][6]
Neocortical columns are considered by some researchers to be the smallest functional units of theneocortex,[7][8]and they are thought to be responsible for higher functions such asconscious thought. In humans, each column is about 2 mm (0.079 in) in length, has a diameter of 0.5 mm (0.020 in) and contains about 60,000 neurons.Ratneocortical columns are very similar in structure but contain only 10,000 neurons and 10⁸ synapses.
In 2009, Henry Markram claimed that a "detailed, functional artificial human brain can be built within the next 10 years".[9]He conceived theHuman Brain Project, to which the Blue Brain Project contributed,[4]and which became funded in 2013 by the European Union with up to $1.3 billion.[10]
In 2015, the project simulated part of a rat brain with 30,000 neurons.[11]Also in 2015, scientists atÉcole Polytechnique Fédérale de Lausanne(EPFL) developed a quantitative model of the previously unknown relationship between the neurons and theastrocytes. This model describes the energy management of the brain through the function of the neuro-glial vascular unit (NGV). The additional layer of neuron andglial cellsis being added to Blue Brain Project models to improve functionality of the system.[12]
In 2017, Blue Brain Project discovered thatneural cliquesconnected to one another in up to eleven dimensions. The project's director suggested that the difficulty of understanding the brain is partly because the mathematics usually applied for studyingneural networkscannot detect that many dimensions. The Blue Brain Project was able to model these networks usingalgebraic topology.[13]
In 2018, Blue Brain Project released its first digital 3D brain cell atlas[14]which, according toScienceDaily, is like "going from hand-drawn maps to Google Earth", providing information about major cell types, numbers, and positions in 737 regions of the brain.[15]
In 2019, Idan Segev, one of thecomputational neuroscientistsworking on the Blue Brain Project, gave a talk titled: "Brain in the computer: what did I learn from simulating the brain." In his talk, he mentioned that the whole cortex for the mouse brain was complete and virtualEEGexperiments would begin soon. He also mentioned that the model had become too heavy on the supercomputers they were using at the time, and that they were consequently exploring methods in which every neuron could be represented as anartificial neural network(see citation for details).[16]
In 2022, scientists at the Blue Brain Project used algebraic topology to create an algorithm, Topological Neuronal Synthesis, that generates a large number of unique cells using only a few examples, synthesizing millions of unique neuronal morphologies. This allows them to replicate both healthy and diseased states of the brain. In a paper Kenari et al. were able to digitally synthesize dendritic morphologies from the mouse brain using this algorithm. They mapped entire brain regions from just a few reference cells. Since it is open source, this will enable the modelling of brain diseases and eventually, the algorithm could lead to digital twins of brains.[17]
The Blue Brain Project has developed a number of software to reconstruct and to simulate the mouse brain. All software tools mentioned below areopen source softwareand available for everyone onGitHub.[18][19][20][21][22][23]
Blue Brain Nexus[24][25][26]is a data integration platform which uses aknowledge graphto enable users to search, deposit, and organise data. It stands on theFAIR dataprinciples to provide flexible data management solutions beyond neuroscience studies.
BluePyOpt[27]is a tool that is used to build electrical models of single neurons. For this, it usesevolutionary algorithmsto constrain the parameters to experimental electrophysiological data. Attempts to reconstruct single neurons using BluePyOpt are reported by Rosanna Migliore,[28]and Stefano Masori.[29]
CoreNEURON[30]is a supplemental tool toNEURON, which allows large scale simulation by boosting memory usage and computational speed.
NeuroMorphoVis[31]is a visualisation tool for morphologies of neurons.
SONATA[32]is a joint effort between Blue Brain Project andAllen Institute for Brain Science, to develop a standard for data format, which realises a multiple platform working environment with greater computational memory and efficiency.
The project was funded primarily by theSwiss governmentand theFuture and Emerging Technologies(FET) Flagship grant from theEuropean Commission,[33]and secondarily by grants and donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because it was still a prototype and IBM was interested in exploring how applications would perform on the machine. BBP was viewed as a validation of theBlue Genesupercomputer concept.[34]
Although the Blue Brain Project is often associated with theHuman Brain Project(HBP), it is important to distinguish between the two. While the Blue Brain Project was a key participant of the HBP, much of the criticism regarding targets and management issues actually pertains to theHuman Brain Projectrather than the Blue Brain Project itself.[35][36]
Voices raised as early as September 2014 highlighted concerns over the trajectory of the Human Brain Project, noting challenges in meeting its high-level goals and questioning its organizational structure and the project's key promoter, Professor Henry Markram.[37][38]In 2016, the HBP underwent a restructuring with resources originally earmarked for brain simulation redistributed to support a wider array of neuroscience research groups. Since then, scientists and engineers from the Blue Brain Project have contributed to various aspects of the HBP, including the Neuroinformatics, EBRAINS, Neurorobotics, and High-Performance Computing Platforms.[39]This distinction is important because some of the criticism directed at the initial incarnation of HBP may have been misattributed to the Blue Brain Project due to their shared leadership and early involvement in the initiative.
The Cajal Blue Brain Project is coordinated by theTechnical University of Madridled byJavier de Felipeand uses the facilities of theSupercomputing and Visualization Center of Madridand its supercomputerMagerit.[40]TheCajal Institutealso participates in this collaboration. The main lines of research currently being pursued atCajal Blue Braininclude neurological experimentation and computer simulations.[41]Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.[42]
Noah Huttoncreated the documentary filmIn Silicoover a 10-year period. The film was released in April 2021.[43]The film covers the "shifting goals and landmarks"[44]of the Blue Brain Project as well as the drama, "In the end, this isn’t about science. It’s about the universals of power, greed, ego, and fame."[45][46]
|
https://en.wikipedia.org/wiki/Blue_Brain_Project
|
Innatural language processing, asentence embeddingis a representation of a sentence as avectorof numbers which encodes meaningful semantic information.[1][2][3][4][5][6][7]
State of the art embeddings are based on the learned hidden layer representation of dedicated sentence transformer models.BERTpioneered an approach involving the use of a dedicated [CLS] token prepended to the beginning of each sentence inputted into the model; the final hidden state vector of this token encodes information about the sentence and can be fine-tuned for use in sentence classification tasks. In practice however, BERT's sentence embedding with the [CLS] token achieves poor performance, often worse than simply averaging non-contextual word embeddings.SBERTlater achieved superior sentence embedding performance[8]by fine tuning BERT's [CLS] token embeddings through the usage of asiamese neural networkarchitecture on the SNLI dataset.
Other approaches are loosely based on the idea ofdistributional semanticsapplied to sentences.Skip-Thoughttrains an encoder-decoder structure for the task of neighboring sentences predictions; this has been shown to achieve worse performance than approaches such asInferSentor SBERT.
An alternative direction is to aggregate word embeddings, such as those returned byWord2vec, into sentence embeddings. The most straightforward approach is to simply compute the average of word vectors, known as continuous bag-of-words (CBOW).[9]However, more elaborate solutions based on word vector quantization have also been proposed. One such approach is the vector of locally aggregated word embeddings (VLAWE),[10]which demonstrated performance improvements in downstream text classification tasks.
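A rough sketch of the averaging approach (the word_vectors lookup table is a stand-in for any pretrained word embedding model such as Word2vec):

```python
import numpy as np

def sentence_embedding(sentence: str, word_vectors: dict[str, np.ndarray], dim: int) -> np.ndarray:
    """Average the word vectors of the tokens in the sentence (CBOW-style)."""
    vectors = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    if not vectors:
        return np.zeros(dim)   # fall back to a zero vector for out-of-vocabulary sentences
    return np.mean(vectors, axis=0)
```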
In recent years, sentence embedding has seen a growing level of interest due to its applications in natural language queryable knowledge bases through the usage of vector indexing for semantic search.LangChainfor instance utilizes sentence transformers for purposes of indexing documents. In particular, an index is generated by computing embeddings for chunks of documents and storing (document chunk, embedding) tuples. Then given a query in natural language, the embedding for the query can be generated. A top-k similarity search algorithm is then used between the query embedding and the document chunk embeddings to retrieve the most relevant document chunks as context information forquestion answeringtasks. This approach is also known formally asretrieval-augmented generation.[11]
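A rough sketch of this retrieval step, assuming a hypothetical embed() function that maps a text to a vector (for example, a sentence transformer model):

```python
import numpy as np

def top_k_chunks(query: str, chunks: list[str], embed, k: int = 3) -> list[str]:
    """Return the k document chunks whose embeddings are most similar to the query."""
    chunk_vecs = np.stack([embed(c) for c in chunks])          # precomputed in a real index
    q = embed(query)
    # Cosine similarity between the query and every chunk embedding.
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    best = np.argsort(-sims)[:k]
    return [chunks[i] for i in best]
```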
Though not as predominant as BERTScore, sentence embeddings are commonly used for sentence similarity evaluation; for example, optimizing aLarge language model's generation parameters is often performed by comparing candidate sentences against reference sentences. By using the cosine similarity of the sentence embeddings of candidate and reference sentences as the evaluation function, a grid-search algorithm can be utilized to automatehyperparameter optimization[citation needed].
A way of testing sentence encodings is to apply them on Sentences Involving Compositional Knowledge (SICK) corpus[12]for both entailment (SICK-E) and relatedness (SICK-R).
In[13]the best results are obtained using aBiLSTM networktrained on theStanford Natural Language Inference (SNLI) Corpus. ThePearson correlation coefficientfor SICK-R is 0.885 and the result for SICK-E is 86.3. A slight improvement over previous scores is presented in:[14]SICK-R: 0.888 and SICK-E: 87.8 using a concatenation of bidirectionalGated recurrent unit.
|
https://en.wikipedia.org/wiki/Sentence_embedding
|
Li-Fi(also written asLiFi) is awireless communicationtechnology which utilizes light to transmit data and position between devices. The term was first introduced byHarald Haasduring a 2011TEDGlobaltalk inEdinburgh.[1]
Li-Fi is a light communication system that is capable of transmittingdataat high speeds over thevisible light,ultraviolet, andinfraredspectrums. In its present state, onlyLED lampscan be used for the transmission of data in visible light.[2]
In terms of itsend user, the technology is similar toWi-Fi– the key technical difference being that Wi-Fi usesradio frequencyto induce a voltage in an antenna to transmit data, whereas Li-Fi uses the modulation of light intensity to transmit data. Li-Fi is able to function in areas otherwise susceptible toelectromagnetic interference(e.g.aircraft cabins, hospitals, or the military).[3]
Li-Fi is a derivative ofoptical wireless communications(OWC) technology, which uses light fromlight-emitting diodes(LEDs) as a medium to deliver network, mobile, high-speed communication in a similar manner toWi-Fi.[4]The Li-Fi market was projected to have acompound annual growth rateof 82% from 2013 to 2018 and to be worth over $6 billion per year by 2018.[5]However, the market has not developed as such and Li-Fi remains a niche technology.[6]
Visible light communications(VLC) works by switching the current to the LEDs off and on at a very high speed, beyond the human eye's ability to notice.[7]Technologies that allow roaming between various Li-Fi cells, also known as handover, may allow seamless transition between such cells. The light waves cannot penetrate walls which translates to a much shorter range, and a lowerhackingpotential, relative to Wi-Fi.[8][9]Direct line of sight is not always necessary for Li-Fi to transmit a signal and light reflected off walls can achieve 70Mbit/s.[10][11]
Li-Fi can potentially be useful in electromagnetic sensitive areas without causingelectromagnetic interference.[8][12][9]Both Wi-Fi and Li-Fi transmit data over theelectromagnetic spectrum, but whereas Wi-Fi utilizes radio waves, Li-Fi uses visible, ultraviolet, and infrared light.[13]Researchers have reached data rates of over 224 Gbit/s,[14]which was much faster than typical fastbroadbandin 2013.[15][16]Li-Fi was expected to be ten times cheaper than Wi-Fi.[17]The first commercially available Li-Fi system was presented at the 2014Mobile World Congressin Barcelona.
Although Li-Fi LEDs would have to be kept on to transmit data, they could be dimmed to below human visibility while still emitting enough light to carry data.[17]This is also a major bottleneck of the technology when based on the visible spectrum, as it is restricted to the illumination purpose and not ideally adjusted to a mobile communication purpose, given that other sources of light, for example the sun, will interfere with the signal.[18]
Since Li-Fi's short wave range is unable to penetrate walls, transmitters would need to be installed in every room of a building to ensure even Li-Fi distribution. The high installation costs associated with this requirement to achieve a level of practicality of the technology is one of the potential downsides.[5][7][19]
The initial research on Visible Light Communication (VLC) was published bythe Fraunhofer Institute for Telecommunicationsin September 2009, showcasing data rates of 125 Mbit/s over a 5 m distance using a standard white LED.[20]In 2010, transmission rates were already increased to 513 Mbit/s using the DMT modulation format.[21]
During his 2011 TED Global Talk, ProfessorHarald Haas, a Mobile Communications expert at theUniversity of Edinburgh, introduced the term "Li-Fi" while discussing the concept of "wireless data from every light".[22]
The general term "visible light communication" (VLC), whose history dates back to the 1880s, includes any use of the visible light portion of the electromagnetic spectrum to transmit information. The D-Light project, funded from January 2010 to January 2012 at Edinburgh's Institute for Digital Communications, was instrumental in advancing this technology, with Haas also contributing to the establishment of a company for its commercialization.[23][24]
In October 2011, theFraunhoferIPMS research organization and industry partners formed theLi-Fi Consortium, to promote high-speed optical wireless systems and to overcome the limited amount of radio-based wireless spectrum available by exploiting a completely different part of the electromagnetic spectrum.[25]
The practical demonstration of VLC technology using Li-Fi[26]took place in 2012, with transmission rates exceeding 1 Gbit/s achieved under laboratory conditions.[27]In 2013, laboratory tests achieved speeds of up to 10 Gbit/s. By August 2013, data rates of approximately 1.6 Gbit/s were demonstrated over a single color LED.[28]A significant milestone was reached in September 2013 when it was stated that Li-Fi, or VLC systems in general, did not absolutely require line-of-sight conditions.[29]In October 2013, it was reported Chinese manufacturers were working on Li-Fi development kits.[30]
In April 2014, the Russian company Stins Coman announced the BeamCaster Li-Fi wireless local network, capable of data transfer speeds up to 1.25gigabytesper second (GB/s). They foresee boosting speeds up to 5 GB/s in the near future.[31]In the same year, Sisoft, a Mexican company, set a new record by transferring data at speeds of up to 10 GB/s across a light spectrum emitted by LED lamps.[32]
The advantages of operating detectors such as APDs inGeiger-modeassingle photon avalanche diode(SPAD) were demonstrated in May 2014, highlighting enhanced energy efficiency and receiver sensitivity.[33]This operational mode also facilitatedquantum-limitedsensitivity, enabling receivers to detect weak signals from considerable distances.[34]
In June 2018, Li-Fi successfully underwent testing at aBMWplant inMunichfor industrial applications under the auspices of the Fraunhofer Heinrich-Hertz-Institute.[35]
In August 2018,Kyle AcademyinScotland, piloted the usage within its premises, enabling students to receive data through rapid on–off transitions of room lighting.[36]
In June 2019, Oledcomm, a French company, showcased its Li-Fi technology at the 2019Paris Air Show.[37]
Like Wi-Fi, Li-Fi is wireless and usessimilar 802.11 protocols, but it also usesultraviolet,infrared, andvisible light communication.[38]
One part of VLC is modeled after communication protocols established by theIEEE802 workgroup. However, theIEEE 802.15.7 standard is out-of-date: it fails to consider the latest technological developments in the field of optical wireless communications, specifically with the introduction of opticalorthogonal frequency-division multiplexing(O-OFDM) modulation methods which have been optimized for data rates, multiple-access, and energy efficiency.[39]The introduction of O-OFDM means that a new drive for standardization of optical wireless communications is required.[citation needed]
Nonetheless, the IEEE 802.15.7 standard defines thephysical layer(PHY) andmedia access control(MAC) layer. The standard is able to deliver enough data rates to transmit audio, video, and multimedia services. It takes into account optical transmission mobility, its compatibility with artificial lighting present in infrastructures, and the interference which may be generated by ambient lighting.
The MAC layer permits using the link with the other layers as with theTCP/IPprotocol.[citation needed]
The standard defines three PHY layers with different rates:
The modulation formats recognized for PHY I and PHY II areon–off keying(OOK) and variablepulse-position modulation(VPPM). TheManchester codingused for the PHY I and PHY II layers includes the clock inside the transmitted data by representing a logic 0 with an OOK symbol "01" and a logic 1 with an OOK symbol "10", all with a DC component. The DC component avoids light extinction in case of an extended run of logic 0's.[citation needed]
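As a toy illustration (not an implementation of the IEEE 802.15.7 PHY), the sketch below maps a bit string to the Manchester-coded OOK symbol stream described above; note how even a run of logic 0's keeps the LED on half of the time, preserving the DC component.

```python
def manchester_ook_encode(bits: str) -> list:
    """Map each logical bit to its Manchester-coded OOK symbol pair:
    0 -> '01' (LED off then on), 1 -> '10' (LED on then off)."""
    symbol = {"0": [0, 1], "1": [1, 0]}
    stream = []
    for b in bits:
        stream.extend(symbol[b])
    return stream

print(manchester_ook_encode("0001"))   # [0, 1, 0, 1, 0, 1, 1, 0]
```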
In July 2023, the IEEE published the802.11bbstandard for light-based networking, intended to provide a vendor-neutral standard for the Li-Fi market.
Many experts foresee a movement towards Li-Fi in homes because of its potential for faster speeds and the security benefits inherent in how the technology works. Because the light carries the data, the network can be contained in a single physical room or building, reducing the possibility of a remote network attack. Though this has more implications in enterprise and other sectors, home usage may be pushed forward with the rise of home automation that requires large volumes of data to be transferred through the local network.[41]
Mostremotely operated underwater vehicles(ROVs) are controlled by wired connections. The length of their cabling places a hard limit on their operational range, and other potential factors such as the cable's weight and fragility may be restrictive. Since light can travel through water, Li-Fi based communications could offer much greater mobility.[42][unreliable source]Li-Fi's utility is limited by the distance light can penetrate water. Significant amounts of light do not penetrate further than 200 meters. Past 1000 meters, no light penetrates.[43]
Efficient communication of data is possible in airborne environments such as a commercialpassenger aircraftutilizing Li-Fi. Using this light-based data transmission will not interfere with equipment on the aircraft that relies onradio waves, such as itsradar.[44]
Increasingly, medical facilities are using remote examinations and even procedures. Li-Fi systems could offer a better way to transmit low latency, high volume data across networks.[citation needed]Besides providing a higher speed, light waves also have reduced effects onmedical instruments. An example of this would be the possibility of wireless devices being used inMRIsand similar radio-sensitive procedures.[44]Another application of LiFi in hospitals is localisation of assets and personnel.[45]
Vehiclescould communicate with one another via front and back lights to increase road safety. Street lights and traffic signals could also provide information about current road situations.[46]
Due to the specific properties of light, the optical beams can be bundled especially well in comparison to radio-based devices, allowing highly directional Li-Fi systems to be implemented. Devices have been developed for outdoor use that make it more difficult to access the data due to their low beam angle, thus increasing the security of the transmission. These can be used, for example, for building-to-building communication or for networking small radio cells.
Wherever data has to be transmitted in industrial areas, Li-Fi is capable of replacingslip rings, sliding contacts, and short cables, such asIndustrial Ethernet. Due to the real-time capability of Li-Fi (which is often required for automation processes), it is also an alternative to common industrialWireless LANstandards. Fraunhofer IPMS, a research organization inGermany, states that they have developed a component which is very appropriate for industrial applications with time-sensitive data transmission.[47]
Street lampscan be used to display advertisements for nearby businesses or attractions oncellular devicesas an individual passes through. A customer walking into a store and passing through the store's front lights can show current sales and promotions on the customer's cellular device.[48]
In warehousing, indoor positioning and navigation is a crucial element. 3D positioning helpsrobotsto get a more detailed and realistic visual experience. Visible light from LED bulbs is used to send messages to the robots and other receivers and hence can be used to calculate the positioning of the objects.[49]
|
https://en.wikipedia.org/wiki/Li-Fi
|
This is a list ofnotableapplications(apps) that run on theAndroid platformwhich meet guidelines forfree softwareandopen-source software.
There are a number of third-party maintained lists of open-source Android applications, including:
|
https://en.wikipedia.org/wiki/List_of_free_and_open-source_Android_applications
|
Advanced planning and scheduling(APS, also known asadvanced manufacturing) refers to amanufacturing management processby whichraw materialsand production capacity are optimally allocated to meet demand.[1]APS is especially well-suited to environments where simpler planning methods cannot adequately address complex trade-offs between competing priorities. Production scheduling is intrinsically very difficult due to the (approximately)factorialdependence of the size of the solution space on the number of items/products to be manufactured.
Traditionalproduction planningandschedulingsystems (such asmanufacturing resource planning) use a stepwise procedure to allocate material and production capacity. This approach is simple but cumbersome, and does not readily adapt to changes in demand, resource capacity or material availability. Materials and capacity are planned separately, and many systems do not consider material or capacity constraints, leading to infeasible plans. However, attempts to change to newer systems have not always been successful, which has prompted calls to combine management philosophy with manufacturing practice.
Unlike previous systems, APS simultaneously plans and schedules production based on available materials, labor and plant capacity.
APS has commonly been applied where one or more of the following conditions are present:
Advanced planning & scheduling software enables manufacturing scheduling and advanced scheduling optimization within these environments.
|
https://en.wikipedia.org/wiki/Advanced_planning_and_scheduling
|
Instochastic processes,chaos theoryandtime series analysis,detrended fluctuation analysis(DFA) is a method for determining the statisticalself-affinityof a signal. It is useful for analysingtime seriesthat appear to belong-memoryprocesses (divergingcorrelation time, e.g. power-law decayingautocorrelation function) or1/f noise.
The obtained exponent is similar to theHurst exponent, except that DFA may also be applied to signals whose underlying statistics (such as mean and variance) or dynamics arenon-stationary(changing with time). It is related to measures based upon spectral techniques such asautocorrelationandFourier transform.
Penget al. introduced DFA in 1994 in a paper that has been cited over 3,000 times as of 2022[1]and represents an extension of the (ordinary)fluctuation analysis(FA), which is affected by non-stationarities.
Systematic studies of the advantages and limitations of the DFA method were performed by PCh Ivanov et al. in a series of papers focusing on the effects of different types of nonstationarities in real-world signals: (1) types of trends;[2](2) random outliers/spikes, noisy segments, signals composed of parts with different correlation;[3](3) nonlinear filters;[4](4) missing data;[5](5) signal coarse-graining procedures[6]and comparing DFA performance with moving average techniques[7](cumulative citations > 4,000).Datasetsgenerated to test DFA are available on PhysioNet.[8]
Given: atime seriesx1,x2,...,xN{\displaystyle x_{1},x_{2},...,x_{N}}.
Compute its average value⟨x⟩=1N∑t=1Nxt{\displaystyle \langle x\rangle ={\frac {1}{N}}\sum _{t=1}^{N}x_{t}}.
Sum it into a processXt=∑i=1t(xi−⟨x⟩){\displaystyle X_{t}=\sum _{i=1}^{t}(x_{i}-\langle x\rangle )}. This is thecumulative sum, orprofile, of the original time series. For example, the profile of ani.i.d.white noiseis a standardrandom walk.
Select a setT={n1,...,nk}{\displaystyle T=\{n_{1},...,n_{k}\}}of integers, such thatn1<n2<⋯<nk{\displaystyle n_{1}<n_{2}<\cdots <n_{k}}, the smallestn1≈4{\displaystyle n_{1}\approx 4}, the largestnk≈N{\displaystyle n_{k}\approx N}, and the sequence is roughly distributed evenly in log-scale:log(n2)−log(n1)≈log(n3)−log(n2)≈⋯{\displaystyle \log(n_{2})-\log(n_{1})\approx \log(n_{3})-\log(n_{2})\approx \cdots }. In other words, it is approximately ageometric progression.[9]
For eachn∈T{\displaystyle n\in T}, divide the sequenceXt{\displaystyle X_{t}}into consecutive segments of lengthn{\displaystyle n}. Within each segment, compute theleast squaresstraight-line fit (thelocal trend). LetY1,n,Y2,n,...,YN,n{\displaystyle Y_{1,n},Y_{2,n},...,Y_{N,n}}be the resulting piecewise-linear fit.
Compute theroot-mean-square deviationfrom the local trend (local fluctuation):F(n,i)=1n∑t=in+1in+n(Xt−Yt,n)2.{\displaystyle F(n,i)={\sqrt {{\frac {1}{n}}\sum _{t=in+1}^{in+n}\left(X_{t}-Y_{t,n}\right)^{2}}}.}And their root-mean-square is the total fluctuation:F(n)=1N/n∑i=0N/n−1F(n,i)2.{\displaystyle F(n)={\sqrt {{\frac {1}{N/n}}\sum _{i=0}^{N/n-1}F(n,i)^{2}}}.}
(IfN{\displaystyle N}is not divisible byn{\displaystyle n}, then one can either discard the remainder of the sequence, or repeat the procedure on the reversed sequence, then take their root-mean-square.[10])
Make thelog-log plotlogn−logF(n){\displaystyle \log n-\log F(n)}.[11][12]
A straight line of slopeα{\displaystyle \alpha }on the log-log plot indicates a statisticalself-affinityof formF(n)∝nα{\displaystyle F(n)\propto n^{\alpha }}. SinceF(n){\displaystyle F(n)}monotonically increases withn{\displaystyle n}, we always haveα>0{\displaystyle \alpha >0}.
The scaling exponentα{\displaystyle \alpha }is a generalization of theHurst exponent, with the precise value giving information about the series self-correlations:
Because the expected displacement in anuncorrelated random walkof length N grows likeN{\displaystyle {\sqrt {N}}}, an exponent of12{\displaystyle {\tfrac {1}{2}}}would correspond to uncorrelated white noise. When the exponent is between 0 and 1, the result isfractional Gaussian noise.
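A compact numpy sketch of the procedure above (DFA1, i.e. a linear detrend in each window); window sizes and the handling of the trailing remainder are simplified for clarity. For white noise, the fitted slope should come out close to 0.5, as stated above.

```python
import numpy as np

def dfa(x, window_sizes):
    """Detrended fluctuation analysis (DFA1) of a 1-D time series x."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                      # cumulative sum (profile)
    F = []
    for n in window_sizes:
        n_seg = len(profile) // n
        sq_dev = []
        for i in range(n_seg):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            sq_dev.append(np.mean((seg - trend) ** 2))     # F(n, i)^2
        F.append(np.sqrt(np.mean(sq_dev)))                 # total fluctuation F(n)
    return np.array(window_sizes), np.array(F)

ns, F = dfa(np.random.randn(10000), [16, 32, 64, 128, 256, 512])
alpha = np.polyfit(np.log(ns), np.log(F), 1)[0]
print(round(alpha, 2))    # approximately 0.5 for uncorrelated white noise
```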
Though the DFA algorithm always produces a positive numberα{\displaystyle \alpha }for any time series, it does not necessarily imply that the time series is self-similar.Self-similarityrequires the log-log graph to be sufficiently linear over a wide range ofn{\displaystyle n}. Furthermore, a combination of techniques includingmaximum likelihood estimation(MLE), rather than least-squares has been shown to better approximate the scaling, or power-law, exponent.[13]
Also, there are many scaling exponent-like quantities that can be measured for a self-similar time series, including the divider dimension andHurst exponent. Therefore, the DFA scaling exponentα{\displaystyle \alpha }is not afractal dimension, and does not have certain desirable properties that theHausdorff dimensionhas, though in certain special cases it is related to thebox-counting dimensionfor the graph of a time series.
The standard DFA algorithm given above removes a linear trend in each segment. If we remove a degree-n polynomial trend in each segment, it is called DFAn, or higher order DFA.[14]
SinceXt{\displaystyle X_{t}}is a cumulative sum ofxt−⟨x⟩{\displaystyle x_{t}-\langle x\rangle }, a linear trend inXt{\displaystyle X_{t}}is a constant trend inxt−⟨x⟩{\displaystyle x_{t}-\langle x\rangle }, which is a constant trend inxt{\displaystyle x_{t}}(visible as short sections of "flat plateaus"). In this regard, DFA1 removes the mean from segments of the time seriesxt{\displaystyle x_{t}}before quantifying the fluctuation.
Similarly, a degree n trend inXt{\displaystyle X_{t}}is a degree (n-1) trend inxt{\displaystyle x_{t}}. For example, DFA2 removes linear trends from segments of the time seriesxt{\displaystyle x_{t}}before quantifying the fluctuation, DFA3 removes parabolic trends fromxt{\displaystyle x_{t}}, and so on.
The HurstR/S analysisremoves constant trends in the original sequence and thus, in its detrending it is equivalent to DFA1.
DFA can be generalized by computingFq(n)=(1N/n∑i=1N/nF(n,i)q)1/q{\displaystyle F_{q}(n)=\left({\frac {1}{N/n}}\sum _{i=1}^{N/n}F(n,i)^{q}\right)^{1/q}}then making the log-log plot oflogn−logFq(n){\displaystyle \log n-\log F_{q}(n)}, If there is a strong linearity in the plot oflogn−logFq(n){\displaystyle \log n-\log F_{q}(n)}, then that slope isα(q){\displaystyle \alpha (q)}.[15]DFA is the special case whereq=2{\displaystyle q=2}.
Multifractal systems scale as a functionFq(n)∝nα(q){\displaystyle F_{q}(n)\propto n^{\alpha (q)}}. Essentially, the scaling exponents need not be independent of the scale of the system. In particular, DFA measures the scaling-behavior of the second moment-fluctuations.
Kantelhardt et al. intended this scaling exponent as a generalization of the classical Hurst exponent. The classical Hurst exponent corresponds toH=α(2){\displaystyle H=\alpha (2)}for stationary cases, andH=α(2)−1{\displaystyle H=\alpha (2)-1}for nonstationary cases.[15][16][17]
The DFA method has been applied to many systems, e.g. DNA sequences;[18][19]heartbeat dynamics in sleep and wake,[20]sleep stages,[21][22]rest and exercise,[23]and across circadian phases;[24][25]locomotor gate and wrist dynamics,[26][27][28][29]neuronal oscillations,[17]speech pathology detection,[30]and animal behavior pattern analysis.[31][32]
In the case of power-law decaying auto-correlations, thecorrelation functiondecays with an exponentγ{\displaystyle \gamma }:C(L)∼L−γ{\displaystyle C(L)\sim L^{-\gamma }\!\ }.
In addition thepower spectrumdecays asP(f)∼f−β{\displaystyle P(f)\sim f^{-\beta }\!\ }.
The three exponents are related by:[18]γ=2−2α{\displaystyle \gamma =2-2\alpha },β=2α−1{\displaystyle \beta =2\alpha -1}, andγ=1−β{\displaystyle \gamma =1-\beta }.
The relations can be derived using theWiener–Khinchin theorem. The relation of DFA to the power spectrum method has been well studied.[33]
Thus,α{\displaystyle \alpha }is tied to the slope of the power spectrumβ{\displaystyle \beta }and is used to describe thecolor of noiseby this relationship:α=(β+1)/2{\displaystyle \alpha =(\beta +1)/2}.
Forfractional Gaussian noise(FGN), we haveβ∈[−1,1]{\displaystyle \beta \in [-1,1]}, and thusα∈[0,1]{\displaystyle \alpha \in [0,1]}, andβ=2H−1{\displaystyle \beta =2H-1}, whereH{\displaystyle H}is theHurst exponent.α{\displaystyle \alpha }for FGN is equal toH{\displaystyle H}.[34]
Forfractional Brownian motion(FBM), we haveβ∈[1,3]{\displaystyle \beta \in [1,3]}, and thusα∈[1,2]{\displaystyle \alpha \in [1,2]}, andβ=2H+1{\displaystyle \beta =2H+1}, whereH{\displaystyle H}is theHurst exponent.α{\displaystyle \alpha }for FBM is equal toH+1{\displaystyle H+1}.[16]In this context, FBM is the cumulative sum or theintegralof FGN, thus, the exponents of their
power spectra differ by 2.
|
https://en.wikipedia.org/wiki/Detrended_fluctuation_analysis
|
Many countries around the world maintainmarinesandnaval infantrymilitary units. Even if only a few nations have the capabilities to launch major amphibious assault operations, most marines and naval infantry forces are able to carry out limitedamphibious landings, riverine andcoastal warfaretasks. The list includes also army units specifically trained to operate as marines or naval infantry forces, and navy units with specialized naval security and boarding tasks.
TheMarine Fusiliers Regimentsare the marine infantry regiments of theAlgerian Navyand they are specialised inamphibious warfare.[1]
The RFM have about 7000 soldiers in their ranks.
Within the Algerian navy there are 8 regiments of marine fusiliers:
Future marine fusiliers andmarine commandosare trained in:
The IDF's35th Parachute Brigade "Flying Serpent"is aparatroopersbrigade that also exercises sea landing capabilities.
The Italian Army'sCavalry Brigade "Pozzuolo del Friuli"forms with theItalian Navy's 3rd Naval Division andSan Marco Marine BrigadetheItalian military's National Sea Projection Capability (Forza di proiezione dal mare).
Additionally the 17th Anti-aircraft Artillery Regiment "Sforzesca" provides air-defense assets:
|
https://en.wikipedia.org/wiki/List_of_marines_and_similar_forces
|
Āryabhata's sine tableis a set of twenty-four numbers given in the astronomical treatiseĀryabhatiyacomposed by the fifth centuryIndian mathematicianand astronomerĀryabhata(476–550 CE), for the computation of thehalf-chordsof a certain set of arcs of a circle. The set of numbers appears in verse 12 in Chapter 1Dasagitikaof Aryabhatiya and is the first table of sines.[1][2]It is not a table in the modern sense of a mathematical table; that is, it is not a set of numbers arranged into rows and columns.[3][4][5]Āryabhaṭa's table is also not a set of values of the trigonometric sine function in a conventional sense; it is a table of thefirst differencesof the values oftrigonometric sinesexpressed inarcminutes, and because of this the table is also referred to asĀryabhaṭa's table of sine-differences.[6][7]
Āryabhaṭa's table was the first sine table ever constructed in thehistory of mathematics.[8]The now lost tables ofHipparchus(c. 190 BC – c. 120 BC) andMenelaus(c. 70–140 CE) andthose ofPtolemy(c. AD 90 – c. 168) were all tables ofchordsand not of half-chords.[8]Āryabhaṭa's table remained as the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table. These endeavors culminated in the eventual discovery of thepower series expansionsof the sine and cosine functions byMadhava of Sangamagrama(c. 1350 – c. 1425), the founder of theKerala school of astronomy and mathematics, and the tabulation of asine table by Madhavawith values accurate to seven or eight decimal places.
Some historians of mathematics have argued that the sine table given in Āryabhaṭiya was an adaptation of earlier such tables constructed by mathematicians and astronomers of ancient Greece.[9]David Pingree, one of America's foremost historians of the exact sciences in antiquity, was an exponent of such a view. Assuming this hypothesis,G. J. Toomer[10][11][12]writes, "Hardly any documentation exists for the earliest arrival of Greek astronomical models in India, or for that matter what those models would have looked like. So it is very difficult to ascertain the extent to which what has come down to us represents transmitted knowledge, and what is original with Indian scientists. ... The truth is probably a tangled mixture of both."[13]
The values encoded in Āryabhaṭa's Sanskrit verse can be decoded using thenumerical schemeexplained inĀryabhaṭīya, and the decoded numbers are listed in the table below. In the table, the angle measures relevant to Āryabhaṭa's sine table are listed in the second column. The third column contains the list of the numbers contained in the Sanskrit verse given above inDevanagariscript. For the convenience of users unable to read Devanagari, these word-numerals are reproduced in the fourth column inISO 15919transliteration. The next column contains these numbers in theHindu-Arabic numerals. Āryabhaṭa's numbers are the first differences in the values of sines. The corresponding value of sine (or more precisely, ofjya) can be obtained by summing up the differences up to that difference. Thus the value ofjyacorresponding to
18° 45′ is the sum 225 + 224 + 222 + 219 + 215 = 1105. For assessing the accuracy of Āryabhaṭa's computations, the modern values ofjyas are given in the last column of the table.
In the Indian mathematical tradition, the sine ( orjya) of an angle is not a ratio of numbers. It is the length of a certain line segment, a certain half-chord. The radius of the base circle is basic parameter for the construction of such tables. Historically, several tables have been constructed using different values for this parameter. Āryabhaṭa has chosen the number 3438 as the value of radius of the base circle for the computation of his sine table. The rationale of the choice of this parameter is the idea of measuring the circumference of a circle in angle measures. In astronomical computations distances are measured indegrees,minutes,seconds, etc. In this measure, the circumference of a circle is 360° = (60 × 360) minutes = 21600 minutes. The radius of the circle, the measure of whose circumference is 21600 minutes, is 21600 / 2π minutes. Computing this using the valueπ= 3.1416 known toAryabhataone gets the radius of the circle as 3438 minutes approximately. Āryabhaṭa's sine table is based on this value for the radius of the base circle. It has not yet been established who is the first ever to use this value for the base radius. ButAryabhatiyais the earliest surviving text containing a reference to this basic constant.[14]
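The choice R = 3438 and the differences quoted earlier can be checked with a short calculation; the sketch below assumes the conventional reading of the table as first differences of R·sin θ sampled every 3°45′ (225 arcminutes).

```python
import math

R = 3438      # radius in arcminutes, roughly 21600 / (2 * pi)
step = 225    # 3 degrees 45 minutes, expressed in arcminutes
jya = [round(R * math.sin(math.radians(k * step / 60))) for k in range(25)]
differences = [jya[k + 1] - jya[k] for k in range(24)]

print(differences[:5])       # [225, 224, 222, 219, 215], as in the table
print(sum(differences[:5]))  # 1105, the jya of 18 deg 45 min
```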
The second section of Āryabhaṭiya, titled Ganitapāda, contains a stanza indicating a method for the computation of the sine table. There are several ambiguities in correctly interpreting the meaning of this verse. For example, the following is a translation of the verse given by Katz wherein the words in square brackets are insertions of the translator and not translations of texts in the verse.[14]
This may be referring to the fact that the second derivative of the sine function is equal to the negative of the sine function.
|
https://en.wikipedia.org/wiki/Aryabhata%27s_sine_table
|
Incomputer science,locality-sensitive hashing(LSH) is afuzzy hashingtechnique that hashes similar input items into the same "buckets" with high probability.[1](The number of buckets is much smaller than the universe of possible input items.)[1]Since similar items end up in the same buckets, this technique can be used fordata clusteringandnearest neighbor search. It differs fromconventional hashing techniquesin thathash collisionsare maximized, not minimized. Alternatively, the technique can be seen as a way toreduce the dimensionalityof high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
Hashing-based approximatenearest-neighbor searchalgorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).[2][3]
Locality-preserving hashing was initially devised as a way to facilitatedata pipeliningin implementations ofmassively parallelalgorithms that userandomized routinganduniversal hashingto reduce memorycontentionandnetwork congestion.[4][5]
A finite familyF{\displaystyle {\mathcal {F}}}of functionsh:M→S{\displaystyle h\colon M\to S}is defined to be anLSH family[1][6][7]for a metric spaceM=(M,d){\displaystyle {\mathcal {M}}=(M,d)}, a thresholdr>0{\displaystyle r>0}, an approximation factorc>1{\displaystyle c>1}, and probabilitiesp1{\displaystyle p_{1}}andp2{\displaystyle p_{2}},
if it satisfies the following condition. For any two pointsa,b∈M{\displaystyle a,b\in M}and a hash functionh{\displaystyle h}chosen uniformly at random fromF{\displaystyle {\mathcal {F}}}: ifd(a,b)≤r{\displaystyle d(a,b)\leq r}, thenh(a)=h(b){\displaystyle h(a)=h(b)}with probability at leastp1{\displaystyle p_{1}}; and ifd(a,b)≥cr{\displaystyle d(a,b)\geq cr}, thenh(a)=h(b){\displaystyle h(a)=h(b)}with probability at mostp2{\displaystyle p_{2}}.
Such a familyF{\displaystyle {\mathcal {F}}}is called(r,cr,p1,p2){\displaystyle (r,cr,p_{1},p_{2})}-sensitive.
Alternatively[8]it is possible to define an LSH family on a universe of itemsUendowed with a similarity functionϕ:U×U→[0,1]{\displaystyle \phi \colon U\times U\to [0,1]}. In this setting, a LSH scheme is a family ofhash functionsHcoupled with aprobability distributionDoverHsuch that a functionh∈H{\displaystyle h\in H}chosen according toDsatisfiesPr[h(a)=h(b)]=ϕ(a,b){\displaystyle Pr[h(a)=h(b)]=\phi (a,b)}for eacha,b∈U{\displaystyle a,b\in U}.
Given a(d1,d2,p1,p2){\displaystyle (d_{1},d_{2},p_{1},p_{2})}-sensitive familyF{\displaystyle {\mathcal {F}}}, we can construct new familiesG{\displaystyle {\mathcal {G}}}by either the AND-construction or OR-construction ofF{\displaystyle {\mathcal {F}}}.[1]
To create an AND-construction, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis constructed fromkrandom functionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}. We then say that for a hash functiong∈G{\displaystyle g\in {\mathcal {G}}},g(x)=g(y){\displaystyle g(x)=g(y)}if and only if allhi(x)=hi(y){\displaystyle h_{i}(x)=h_{i}(y)}fori=1,2,…,k{\displaystyle i=1,2,\ldots ,k}. Since the members ofF{\displaystyle {\mathcal {F}}}are independently chosen for anyg∈G{\displaystyle g\in {\mathcal {G}}},G{\displaystyle {\mathcal {G}}}is a(d1,d2,p1k,p2k){\displaystyle (d_{1},d_{2},p_{1}^{k},p_{2}^{k})}-sensitive family.
To create an OR-construction, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis constructed fromkrandom functionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}. We then say that for a hash functiong∈G{\displaystyle g\in {\mathcal {G}}},g(x)=g(y){\displaystyle g(x)=g(y)}if and only ifhi(x)=hi(y){\displaystyle h_{i}(x)=h_{i}(y)}for one or more values ofi. Since the members ofF{\displaystyle {\mathcal {F}}}are independently chosen for anyg∈G{\displaystyle g\in {\mathcal {G}}},G{\displaystyle {\mathcal {G}}}is a(d1,d2,1−(1−p1)k,1−(1−p2)k){\displaystyle (d_{1},d_{2},1-(1-p_{1})^{k},1-(1-p_{2})^{k})}-sensitive family.
LSH has been applied to several problem domains, including:
One of the easiest ways to construct an LSH family is by bit sampling.[7]This approach works for theHamming distanceoverd-dimensional vectors{0,1}d{\displaystyle \{0,1\}^{d}}. Here, the familyF{\displaystyle {\mathcal {F}}}of hash functions is simply the family of all the projections of points on one of thed{\displaystyle d}coordinates, i.e.,F={h:{0,1}d→{0,1}∣h(x)=xifor somei∈{1,…,d}}{\displaystyle {\mathcal {F}}=\{h\colon \{0,1\}^{d}\to \{0,1\}\mid h(x)=x_{i}{\text{ for some }}i\in \{1,\ldots ,d\}\}}, wherexi{\displaystyle x_{i}}is thei{\displaystyle i}th coordinate ofx{\displaystyle x}. A random functionh{\displaystyle h}fromF{\displaystyle {\mathcal {F}}}simply selects a random bit from the input point. This family has the following parameters:P1=1−R/d{\displaystyle P_{1}=1-R/d},P2=1−cR/d{\displaystyle P_{2}=1-cR/d}.
That is, any two vectorsx,y{\displaystyle x,y}with Hamming distance at mostR{\displaystyle R}collide under a randomh{\displaystyle h}with probability at leastP1{\displaystyle P_{1}}.
Anyx,y{\displaystyle x,y}with Hamming distance at leastcR{\displaystyle cR}collide with probability at mostP2{\displaystyle P_{2}}.
SupposeUis composed of subsets of some ground set of enumerable itemsSand the similarity function of interest is theJaccard indexJ. Ifπis a permutation on the indices ofS, forA⊆S{\displaystyle A\subseteq S}leth(A)=mina∈A{π(a)}{\displaystyle h(A)=\min _{a\in A}\{\pi (a)\}}. Each possible choice ofπdefines a single hash functionhmapping input sets to elements ofS.
Define the function familyHto be the set of all such functions and letDbe theuniform distribution. Given two setsA,B⊆S{\displaystyle A,B\subseteq S}the event thath(A)=h(B){\displaystyle h(A)=h(B)}corresponds exactly to the event that the minimizer ofπoverA∪B{\displaystyle A\cup B}lies insideA∩B{\displaystyle A\cap B}. Ashwas chosen uniformly at random,Pr[h(A)=h(B)]=J(A,B){\displaystyle Pr[h(A)=h(B)]=J(A,B)\,}and(H,D){\displaystyle (H,D)\,}define an LSH scheme for the Jaccard index.
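A small sketch of the min-wise hashing scheme just described, using explicit random permutations of a small ground set; for realistically large universes one would use (approximate) min-wise independent hash functions instead, as discussed below.

```python
import random

def minhash_signature(item_set, permutations):
    """One MinHash value per permutation: the minimum image of the set."""
    return [min(perm[x] for x in item_set) for perm in permutations]

random.seed(0)
ground_set = list(range(100))
permutations = []
for _ in range(128):
    order = ground_set[:]
    random.shuffle(order)
    permutations.append({x: i for i, x in enumerate(order)})

A = set(range(0, 60))
B = set(range(30, 90))
sig_a = minhash_signature(A, permutations)
sig_b = minhash_signature(B, permutations)

estimate = sum(a == b for a, b in zip(sig_a, sig_b)) / len(permutations)
print(estimate, len(A & B) / len(A | B))   # the estimate approximates the Jaccard index
```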
Because thesymmetric grouponnelements has sizen!, choosing a trulyrandom permutationfrom the full symmetric group is infeasible for even moderately sizedn. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosenπ. It has been established that a min-wise independent family of permutations is at least of sizelcm{1,2,…,n}≥en−o(n){\displaystyle \operatorname {lcm} \{\,1,2,\ldots ,n\,\}\geq e^{n-o(n)}},[20]and that this bound is tight.[21]
Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence are introduced: restricted min-wise independent permutations families, and approximate min-wise independent families.
Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at mostk.[22]Approximate min-wise independence differs from the property by at most a fixedε.[23]
Nilsimsais a locality-sensitive hashing algorithm used inanti-spamefforts.[24]The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that the Nilsimsa satisfies three requirements:
Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.[25]
TLSHis a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications.[18]The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.
An implementation of TLSH is available asopen-source software.[26]
The random projection method of LSH due toMoses Charikar[8]calledSimHash(also sometimes called arccos[27]) uses an approximation of thecosine distancebetween vectors. The technique was used to approximate the NP-completemax-cutproblem.[8]
The basic idea of this technique is to choose a randomhyperplane(defined by a normal unit vectorr) at the outset and use the hyperplane to hash input vectors.
Given an input vectorvand a hyperplane defined byr, we leth(v)=sgn(v⋅r){\displaystyle h(v)=\operatorname {sgn}(v\cdot r)}. That is,h(v)=±1{\displaystyle h(v)=\pm 1}depending on which side of the hyperplanevlies. This way, each possible choice of a random hyperplanercan be interpreted as a hash functionh(v){\displaystyle h(v)}.
For two vectorsu,vwith angleθ(u,v){\displaystyle \theta (u,v)}between them, it can be shown that
Since the ratio betweenθ(u,v)π{\displaystyle {\frac {\theta (u,v)}{\pi }}}and1−cos(θ(u,v)){\displaystyle 1-\cos(\theta (u,v))}is at least 0.439 whenθ(u,v)∈[0,π]{\displaystyle \theta (u,v)\in [0,\pi ]},[8][28]the probability of two vectors being on different sides of the random hyperplane is approximately proportional to thecosine distancebetween them.
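A sketch of the random-projection (SimHash) construction: each random hyperplane contributes one sign bit, and the fraction of differing bits between two signatures approximates θ(u, v)/π. The parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_planes = 50, 256
hyperplanes = rng.standard_normal((n_planes, d))   # random hyperplane normals

def simhash_bits(v):
    """One sign bit per hyperplane: which side of the hyperplane v lies on."""
    return np.sign(hyperplanes @ v)

u = rng.standard_normal(d)
v = u + 0.3 * rng.standard_normal(d)               # a vector similar to u

hamming_fraction = np.mean(simhash_bits(u) != simhash_bits(v))
theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
print(hamming_fraction, theta / np.pi)             # the two should be close
```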
The hash function[29]ha,b(υ):Rd→N{\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }}):{\mathcal {R}}^{d}\to {\mathcal {N}}}maps ad-dimensional vectorυ{\displaystyle {\boldsymbol {\upsilon }}}onto the set of integers. Each hash function in the family is indexed by a choice of randoma{\displaystyle \mathbf {a} }andb{\displaystyle b}, wherea{\displaystyle \mathbf {a} }is ad-dimensional vector with entries chosen independently from astable distributionandb{\displaystyle b}is a real number chosen uniformly from the range [0,r]. For a fixeda,b{\displaystyle \mathbf {a} ,b}the hash functionha,b{\displaystyle h_{\mathbf {a} ,b}}is given byha,b(υ)=⌊a⋅υ+br⌋{\displaystyle h_{\mathbf {a} ,b}({\boldsymbol {\upsilon }})=\left\lfloor {\frac {\mathbf {a} \cdot {\boldsymbol {\upsilon }}+b}{r}}\right\rfloor }.
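A sketch of this hash family with the entries of a drawn from a standard Gaussian, which is a 2-stable distribution and therefore makes the scheme sensitive to Euclidean distance; the parameter values are illustrative.

```python
import numpy as np

class PStableHash:
    """h(v) = floor((a . v + b) / r), with a ~ N(0, 1)^d and b ~ Uniform[0, r]."""
    def __init__(self, d, r, rng):
        self.a = rng.standard_normal(d)
        self.b = rng.uniform(0, r)
        self.r = r

    def __call__(self, v):
        return int(np.floor((self.a @ v + self.b) / self.r))

rng = np.random.default_rng(0)
h = PStableHash(d=10, r=4.0, rng=rng)
v = rng.standard_normal(10)
w = v + 0.01 * rng.standard_normal(10)   # a nearby point, likely in the same bucket
print(h(v), h(w))
```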
Other construction methods for hash functions have been proposed to better fit the data.[30]In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.
Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have highersemantic similarity.[31]The hashcodes are found via training of anartificial neural networkorgraphical model.[citation needed]
One of the main applications of LSH is to provide a method for efficient approximatenearest neighbor searchalgorithms. Consider an LSH familyF{\displaystyle {\mathcal {F}}}. The algorithm has two main parameters: the width parameterkand the number of hash tablesL.
In the first step, we define a new familyG{\displaystyle {\mathcal {G}}}of hash functionsg, where each functiongis obtained by concatenatingkfunctionsh1,…,hk{\displaystyle h_{1},\ldots ,h_{k}}fromF{\displaystyle {\mathcal {F}}}, i.e.,g(p)=[h1(p),…,hk(p)]{\displaystyle g(p)=[h_{1}(p),\ldots ,h_{k}(p)]}. In other words, a random hash functiongis obtained by concatenatingkrandomly chosen hash functions fromF{\displaystyle {\mathcal {F}}}. The algorithm then constructsLhash tables, each corresponding to a different randomly chosen hash functiong.
In the preprocessing step we hash allnd-dimensional points from the data setSinto each of theLhash tables. Given that the resulting hash tables have onlynnon-zero entries, one can reduce the amount of memory used per each hash table toO(n){\displaystyle O(n)}using standardhash functions.
Given a query pointq, the algorithm iterates over theLhash functionsg. For eachgconsidered, it retrieves the data points that are hashed into the same bucket asq. The process is stopped as soon as a point within distancecRfromqis found.
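A minimal sketch of the table construction and query procedure just described, re-using the Gaussian projection hash as the base family; k, L and r are illustrative choices, and for simplicity the query ranks all retrieved candidates by true distance rather than stopping early.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, k, L, r = 20, 4, 8, 2.0

def make_g():
    """g concatenates k random projection hashes into one bucket key."""
    A = rng.standard_normal((k, d))
    B = rng.uniform(0, r, size=k)
    return lambda p: tuple(np.floor((A @ p + B) / r).astype(int))

gs = [make_g() for _ in range(L)]
data = rng.standard_normal((1000, d))

# Preprocessing: hash every point into each of the L tables.
tables = [defaultdict(list) for _ in range(L)]
for idx, p in enumerate(data):
    for g, table in zip(gs, tables):
        table[g(p)].append(idx)

def query(q, top=5):
    candidates = set()
    for g, table in zip(gs, tables):
        candidates.update(table.get(g(q), []))
    return sorted(candidates, key=lambda i: np.linalg.norm(data[i] - q))[:top]

print(query(data[0] + 0.05 * rng.standard_normal(d)))   # should include index 0
```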
Given the parameterskandL, the algorithm has the following performance guarantees:
For a fixed approximation ratioc=1+ϵ{\displaystyle c=1+\epsilon }and probabilitiesP1{\displaystyle P_{1}}andP2{\displaystyle P_{2}}, one can setk=⌈lognlog1/P2⌉{\displaystyle k=\left\lceil {\tfrac {\log n}{\log 1/P_{2}}}\right\rceil }andL=⌈P1−k⌉=O(nρP1−1){\displaystyle L=\lceil P_{1}^{-k}\rceil =O(n^{\rho }P_{1}^{-1})}, whereρ=logP1logP2{\displaystyle \rho ={\tfrac {\log P_{1}}{\log P_{2}}}}. Then one obtains the following performance guarantees:
Whentis large, it is possible to reduce the hashing time fromO(nρ){\displaystyle O(n^{\rho })}.
This was shown by[32]and[33]which gave
It is also sometimes the case that the factor1/P1{\displaystyle 1/P_{1}}can be very large.
This happens for example withJaccard similaritydata, where even the most similar neighbor often has a quite low Jaccard similarity with the query.
In[34]it was shown how to reduce the query time toO(nρ/P11−ρ){\displaystyle O(n^{\rho }/P_{1}^{1-\rho })}(not including hashing costs) and similarly the space usage.
|
https://en.wikipedia.org/wiki/Locality-sensitive_hashing
|
Educational psychologyis the branch ofpsychologyconcerned with the scientific study of humanlearning. The study of learning processes, from bothcognitiveandbehavioralperspectives, allows researchers to understand individual differences inintelligence,cognitivedevelopment,affect,motivation, self-regulation, and self-concept, as well as their role inlearning. The field of educational psychology relies heavily on quantitative methods, including testing and measurement, to enhance educational activities related to instructional design,classroom management, and assessment, which serve to facilitate learning processes in various educational settings across the lifespan.[1]
Educational psychology can in part be understood through its relationship with other disciplines. It is informed primarily bypsychology, bearing a relationship to that discipline analogous to the relationship betweenmedicineandbiology. It is also informed byneuroscience. Educational psychology in turn informs a wide range of specialties within educational studies, includinginstructional design,educational technology,curriculum development,organizational learning,special education,classroom management, and student motivation. Educational psychology both draws from and contributes tocognitive scienceand thelearning theory. In universities, departments of educational psychology are usually housed within faculties of education, possibly accounting for the lack of representation of educational psychology content in introductory psychology textbooks.[2]
The field of educational psychology involves the study ofmemory, conceptual processes, and individual differences (via cognitive psychology) in conceptualizing new strategies for learning processes in humans. Educational psychology has been built upon theories ofoperant conditioning,functionalism,structuralism,constructivism,humanistic psychology,Gestalt psychology, andinformation processing.[1]
Educational psychology has seen rapid growth and development as a profession in the last twenty years.[3]School psychologybegan with the concept of intelligence testing, leading to provisions for special education students, who could not follow the regular classroom curriculum in the early part of the 20th century.[3]Another main focus of school psychology was to help close the gap for children of colour, as the fight against racial inequality and segregation was still very prominent during the early to mid-1900s. However, "school psychology" itself is a fairly new profession built upon the practices and theories of several psychologists among many different fields. Educational psychologists are working side by side with psychiatrists, social workers, teachers, speech and language therapists, and counselors in an attempt to understand the questions being raised when combining behavioral, cognitive, and social psychology in the classroom setting.[3]
As a field of study, educational psychology is fairly new and was not considered a specific practice until the 20th century. Reflections on everyday teaching and learning allowed some individuals throughout history to elaborate on developmental differences in cognition, the nature of instruction, and the transfer of knowledge and learning. These topics are important to education and, as a result, they are important in understanding human cognition, learning, and social perception.[4]
Some of the ideas and issues pertaining to educational psychology date back to the time ofPlatoandAristotle.Philosophersas well assophistsdiscussed the purpose ofeducation, training of the body and the cultivation of psycho-motor skills, the formation of good character, the possibilities and limits of moraleducation. Some other educational topics they spoke about were the effects of music, poetry, and the other arts on the development of the individual, role of the teacher, and the relations between teacher and student.[4]Plato saw knowledge acquisition as an innate ability, which evolves through experience and understanding of the world. This conception of human cognition has evolved into a continuing argument ofnature vs. nurturein understanding conditioning and learning today.Aristotle, on the other hand, ascribed to the idea of knowledge by association orschema. His fourlaws of associationincluded succession, contiguity, similarity, and contrast. His studies examined recall and facilitated learning processes.[5]
John Lockeis considered one of the most influential philosophers in post-renaissance Europe, a time period that began around the mid-1600s. Locke is considered the "Father of English Psychology". One of Locke's most important works was written in 1690, namedAn Essay Concerning Human Understanding. In this essay, he introduced the term "tabula rasa" meaning "blank slate." Locke explained that learning was attained through experience only and that we are all born without knowledge.[6]
He followed by contrasting Plato's theory of innate learning processes. Locke believed the mind was formed by experiences, not innate ideas. Locke introduced this idea as "empiricism", or the understanding that knowledge is only built on knowledge and experience.[7]
In the late 1600s, John Locke advanced the hypothesis that people learn primarily from external forces. He believed that the mind was like a blank tablet (tabula rasa), and that successions of simple impressions give rise to complex ideas through association and reflection. Locke is credited with establishing "empiricism" as a criterion for testing the validity of knowledge, thus providing a conceptual framework for later development of experimental methodology in the natural and social sciences.[8]
In the 18th century the philosopherJean-Jacques Rousseauespoused a set of theories which would become highly influential in the field of education, particularly through hisphilosophical novelEmile, or On Education. Despite stating that the book should not be used as a practical guide to nurturing children, the pedagogical approach outlined in it was lauded byEnlightenmentcontemporaries includingImmanuel KantandJohann Wolfgang von Goethe. Rousseau advocated achild-centeredapproach to education, and that the age of the child should be accounted for in choosing what and how to teach them. In particular he insisted on the primacy ofexperiential education, in order to develop the child's ability to reason autonomously. Rousseau's philosophy influenced educational reformers includingJohann Bernhard Basedow, whose practice in his model school thePhilanthropinumdrew upon his ideas, as well asJohann Heinrich Pestalozzi. More generally Rousseau's thinking had significant direct and indirect influence on the development of pedagogy in Germany, Switzerland and the Netherlands. In addition, Jean Piaget's stage-based approach to child development has been observed to have parallels to Rousseau's theories.[9]
Philosophers of education such as Juan Vives, Johann Pestalozzi, Friedrich Fröbel, and Johann Herbart had examined, classified and judged the methods of education centuries before the beginnings of psychology in the late 1800s.
Juan Vives(1493–1540) proposed induction as the method of study and believed in the directobservationand investigation of the study ofnature. His studies focused on humanisticlearning, which opposed scholasticism and was influenced by a variety of sources includingphilosophy,psychology,politics,religion, andhistory.[10]He was one of the first prominent thinkers to emphasize that the location of a school is important tolearning.[11]He suggested that a school should be located away from disturbing noises; the air quality should be good and there should be plenty of food for the students and teachers.[11]Vives emphasized the importance of understanding individual differences of the students and suggested practice as an important tool for learning.[11]
Vives introduced his educational ideas in his writing, "De anima et vita" in 1538. In this publication, Vives exploresmoral philosophyas a setting for his educational ideals; with this, he explains that the different parts of the soul (similar to that of Aristotle's ideas) are each responsible for different operations, which function distinctively. The first book covers the different "souls": "The Vegetative Soul"; this is the soul ofnutrition, growth, and reproduction, "The Sensitive Soul", which involves the five external senses; "The Cogitative soul", which includes internal senses andcognitivefacilities. The second book involves functions of the rational soul: mind, will, and memory. Lastly, the third book explains the analysis of emotions.[12]
Johann Pestalozzi(1746–1827), a Swiss educational reformer, emphasized the child rather than the content of the school.[13]Pestalozzi fostered an educational reform backed by the idea that early education was crucial for children, and could be manageable for mothers. Eventually, this experience with early education would lead to a "wholesome person characterized by morality."[14]Pestalozzi has been acknowledged for opening institutions for education, writing books for mother's teaching home education, and elementary books for students, mostly focusing on the kindergarten level. In his later years, he published teaching manuals and methods of teaching.[14]
During the time ofThe Enlightenment, Pestalozzi's ideals introduced "educationalization". This created the bridge between social issues and education by introducing the idea of social issues to be solved through education. Horlacher describes the most prominent example of this during The Enlightenment to be "improving agricultural production methods."[14]
Johann Herbart(1776–1841) is considered the father of educationalpsychology.[15]He believed thatlearningwas influenced by interest in the subject and the teacher.[15]He thought that teachers should consider the students' existing mental sets—what they already know—when presenting new information or material.[15]Herbart came up with what are now known as the formal steps. The 5 steps that teachers should use are:
There were three major figures in educational psychology in this period: William James, G. Stanley Hall, and John Dewey. These three men distinguished themselves in general psychology and educational psychology, which overlapped significantly at the end of the 19th century.[4]
The period of 1890–1920 is considered the golden era of educational psychology, when aspirations of the new discipline rested on the application of the scientific methods of observation and experimentation to educational problems. From 1840 to 1920, 37 million people immigrated to the United States.[10]This created an expansion of elementary schools and secondary schools. The increase in immigration also provided educational psychologists the opportunity to use intelligence testing to screen immigrants at Ellis Island.[10]Darwinisminfluenced the beliefs of the prominent educational psychologists.[10]Even in the earliest years of the discipline, educational psychologists recognized the limitations of this new approach. The pioneering American psychologistWilliam Jamescommented that:
Psychology is a science, and teaching is an art; and sciences never generate arts directly out of themselves. An intermediate inventive mind must make that application, by using its originality".[16]
James is the father of psychology in America, but he also made contributions to educational psychology. In his famous series of lecturesTalks to Teachers on Psychology, published in 1899, Jamesdefines educationas "the organization of acquired habits of conduct and tendencies to behavior".[16]He states that teachers should "train the pupil to behavior"[16]so that he fits into the social and physical world. Teachers should also realize the importance of habit and instinct. They should present information that is clear and interesting and relate this new information and material to things the student already knows about.[16]He also addresses important issues such as attention, memory, and association of ideas.
Alfred BinetpublishedMental Fatiguein 1898, in which he attempted to apply the experimental method to educational psychology.[10]In this experimental method he advocated for two types of experiments, experiments done in the lab and experiments done in the classroom. In 1904 he was appointed the Minister of Public Education.[10]This is when he began to look for a way to distinguish children with developmental disabilities.[10]Binet strongly supported special education programs because he believed that "abnormality" could be cured.[10]The Binet-Simon test was the first intelligence test and was the first to distinguish between "normal children" and those with developmental disabilities.[10]Binet believed that it was important to study individual differences between age groups and children of the same age.[10]He also believed that it was important for teachers to take into account individual students' strengths and also the needs of the classroom as a whole when teaching and creating a good learning environment.[10]He also believed that it was important to train teachers in observation so that they would be able to see individual differences among children and adjust the curriculum to the students.[10]Binet also emphasized that practice of material was important. In 1916Lewis Termanrevised the Binet-Simon so that the average score was always 100.[15]The test became known as the Stanford-Binet and was one of the most widely used tests of intelligence. Terman, unlike Binet, was interested in using intelligence test to identify gifted children who had high intelligence.[10]In his longitudinal study of gifted children, who became known as the Termites, Terman found that gifted children become gifted adults.[15]
Edward Thorndike(1874–1949) supported the scientific movement in education. He based teaching practices on empirical evidence and measurement.[10]Thorndike developed the theory ofinstrumental conditioningor the law of effect. The law of effect states that associations are strengthened when it is followed by something pleasing and associations are weakened when followed by something not pleasing. He also found thatlearningis done a little at a time or in increments, learning is an automatic process and its principles apply to all mammals. Thorndike's research withRobert Woodworthon the theory of transfer found that learning one subject will only influence your ability to learn another subject if the subjects are similar.[10]This discovery led to less emphasis on learning theclassicsbecause they found that studying the classics does not contribute to overall general intelligence.[10]Thorndike was one of the first to say that individual differences incognitivetasks were due to how many stimulus-response patterns a person had rather than general intellectual ability.[10]He contributed word dictionaries that werescientificallybased to determine the words and definitions used.[10]The dictionaries were the first to take into consideration the users' maturity level.[10]He also integrated pictures and easier pronunciation guide into each of the definitions.[10]Thorndike contributedarithmeticbooks based onlearning theory. He made all the problems more realistic and relevant to what was being studied, not just to improve the generalintelligence.[10]He developed tests that were standardized to measure performance in school-related subjects.[10]His biggest contribution to testing was the CAVD intelligence test which used a multidimensional approach to intelligence and was the first to use a ratio scale.[10]His later work was on programmed instruction, mastery learning, and computer-based learning:
If, by a miracle of mechanical ingenuity, a book could be so arranged that only to him who had done what was directed on page one would page two become visible, and so on, much that now requires personal instruction could be managed by print.[17]
John Dewey(1859–1952) had a major influence on the development ofprogressive educationin the United States. He believed that the classroom should prepare children to be good citizens and facilitate creative intelligence.[10]He pushed for the creation of practical classes that could be applied outside of a school setting.[10]He also thought that education should be student-oriented, not subject-oriented. For Dewey, education was a social experience that helped bring together generations of people. He stated that students learn by doing. He believed in an active mind that was able to be educated through observation, problem-solving, and enquiry. In his 1910 bookHow We Think, he emphasizes that material should be provided in a way that is stimulating and interesting to the student since it encourages original thought and problem-solving.[18]He also stated that material should be relative to the student's own experience.[18]
"The material furnished by way of information should be relevant to a question that is vital in the students own experience"[18]
Jean Piaget (1896–1980) was one of the most influential researchers in developmental psychology during the 20th century. He developed the theory of cognitive development.[10] The theory stated that intelligence developed in four different stages. The stages are the sensorimotor stage from birth to 2 years old, the preoperational stage from 2 to 7 years old, the concrete operational stage from 7 to 10 years old, and the formal operational stage from 12 years old and up.[10] He also believed that learning was constrained by the child's cognitive development. Piaget influenced educational psychology because he was the first to believe that cognitive development was important and something that should be paid attention to in education.[10] Most of the research on Piagetian theory was carried out by American educational psychologists.
The number of people receiving a high school and college education increased dramatically from 1920 to 1960.[10]Because very few jobs were available to teens coming out of eighth grade, there was an increase in high school attendance in the 1930s.[10]The progressive movement in the United States took off at this time and led to the idea ofprogressive education. John Flanagan, an educational psychologist, developed tests for combat trainees and instructions in combat training.[10]In 1954 the work of Kenneth Clark and his wife on the effects of segregation on black and white children was influential in the Supreme Court caseBrown v. Board of Education.[15]From the 1960s to present day, educational psychology has switched from a behaviorist perspective to a more cognitive-based perspective because of the influence and development ofcognitive psychologyat this time.[10]
Jerome Bruner is notable for integrating Piaget's cognitive approaches into educational psychology.[10] He advocated for discovery learning, where teachers create a problem-solving environment that allows the student to question, explore and experiment.[10] In his book The Process of Education, Bruner stated that the structure of the material and the cognitive abilities of the person are important in learning.[10] He emphasized the importance of the subject matter. He also believed that how the subject was structured was important for the student's understanding of the subject and that it was the goal of the teacher to structure the subject in a way that was easy for the student to understand.[10] In the early 1960s, Bruner went to Africa to teach math and science to school children, which influenced his view of schooling as a cultural institution. Bruner was also influential in the development of MACOS, Man: a Course of Study, which was an educational program that combined anthropology and science.[10] The program explored human evolution and social behavior. He also helped with the development of the Head Start program. He was interested in the influence of culture on education and looked at the impact of poverty on educational development.[10]
Benjamin Bloom(1903–1999) spent over 50 years at theUniversity of Chicago, where he worked in the department of education.[10]He believed that all students can learn. He developed thetaxonomy of educational objectives.[10]The objectives were divided into three domains: cognitive, affective, and psychomotor. The cognitive domain deals with how we think.[19]It is divided into categories that are on a continuum from easiest to more complex.[19]The categories are knowledge or recall, comprehension, application, analysis, synthesis, and evaluation.[19]The affective domain deals with emotions and has 5 categories.[19]The categories are receiving phenomenon, responding to that phenomenon, valuing, organization, and internalizing values.[19]The psychomotor domain deals with the development of motor skills, movement, and coordination and has 7 categories that also go from simplest to most complex.[19]The 7 categories of the psychomotor domain are perception, set, guided response, mechanism, complex overt response, adaptation, and origination.[19]The taxonomy provided broad educational objectives that could be used to help expand the curriculum to match the ideas in the taxonomy.[10]The taxonomy is considered to have a greater influence internationally than in the United States. Internationally, the taxonomy is used in every aspect of education from the training of the teachers to the development of testing material.[10]Bloom believed in communicating clear learning goals and promoting an active student. He thought that teachers should provide feedback to the students on their strengths and weaknesses.[10]Bloom also did research on college students and their problem-solving processes. He found that they differ in understanding the basis of the problem and the ideas in the problem. He also found that students differ in process of problem-solving in their approach and attitude toward the problem.[10]
Nathaniel Gage(1917–2008) is an important figure in educational psychology as his research focused on improving teaching and understanding the processes involved in teaching.[10]He edited the bookHandbook of Research on Teaching(1963), which helped develop early research in teaching and educational psychology.[10]Gage founded the Stanford Center for Research and Development in Teaching, which contributed research on teaching as well as influencing the education of important educational psychologists.[10]
Applied behavior analysis, a research-based science utilizing behavioral principles ofoperant conditioning, is effective in a range of educational settings.[20]For example, teachers can alter student behavior by systematically rewarding students who follow classroom rules with praise, stars, or tokens exchangeable for sundry items.[21][22]Despite the demonstrated efficacy of awards in changing behavior, their use in education has been criticized by proponents ofself-determination theory, who claim that praise and other rewards undermineintrinsic motivation. There is evidence that tangible rewards decrease intrinsic motivation in specific situations, such as when the student already has a high level of intrinsic motivation to perform the goal behavior.[23]But the results showing detrimental effects are counterbalanced by evidence that, in other situations, such as when rewards are given for attaining a gradually increasing standard of performance, rewards enhance intrinsic motivation.[24][25]Many effective therapies have been based on the principles of applied behavior analysis, includingpivotal response therapywhich is used to treatautism spectrum disorders.[citation needed]
Among current educational psychologists, the cognitive perspective is more widely held than the behavioral perspective, perhaps because it admits causally related mental constructs such astraits,beliefs,memories,motivations, andemotions.[26]Cognitive theories claim that memory structures determine how information isperceived,processed, stored,retrievedandforgotten. Among the memory structures theorized by cognitive psychologists are separate but linked visual and verbal systems described byAllan Paivio'sdual coding theory. Educational psychologists have useddual coding theoryandcognitive loadtheory to explain how people learn frommultimediapresentations.[27]
Thespaced learningeffect, acognitivephenomenon strongly supported by psychological research, has broad applicability withineducation.[29]For example, students have been found to perform better on a test of knowledge about a text passage when a second reading of the passage is delayed rather than immediate (see figure).[28]Educational psychology research has confirmed the applicability to the education of other findings from cognitive psychology, such as the benefits of usingmnemonicsfor immediate and delayed retention of information.[30]
Problem solving, according to prominent cognitive psychologists, is fundamental to learning and is an important research topic in educational psychology. A student is thought to interpret a problem by assigning it to a schema retrieved from long-term memory. A problem students run into while reading is called "activation." This is when the student's representations of the text are present during working memory. This causes the student to read through the material without absorbing the information and being able to retain it. When the reader's representations of the text are absent from working memory, they experience something called "deactivation." When deactivation occurs, the student has an understanding of the material and is able to retain information. If deactivation occurs during the first reading, the reader does not need to undergo deactivation in the second reading. The reader will only need to reread to get a "gist" of the text to spark their memory. When the problem is assigned to the wrong schema, the student's attention is subsequently directed away from features of the problem that are inconsistent with the assigned schema.[31] The critical step of finding a mapping between the problem and a pre-existing schema is often cited as supporting the centrality of analogical thinking to problem-solving.
Each person has an individual profile of characteristics, abilities, and challenges that result from predisposition, learning, and development. These manifest as individual differences inintelligence,creativity,cognitive style,motivation, and the capacity to process information, communicate, and relate to others. The most prevalent disabilities found among school age children areattention deficit hyperactivity disorder(ADHD),learning disability,dyslexia, andspeech disorder. Less common disabilities includeintellectual disability,hearing impairment,cerebral palsy,epilepsy, andblindness.[32]
Although theories ofintelligencehave been discussed by philosophers sincePlato,intelligence testingis an invention of educational psychology and is coincident with the development of that discipline. Continuing debates about the nature of intelligence revolve on whether it can be characterized by a singlefactorknown asgeneral intelligence,[33]multiple factors (e.g.,Gardner'stheory of multiple intelligences[34]), or whether it can be measured at all. In practice, standardized instruments such as theStanford-Binet IQ testand theWISC[35]are widely used in economically developed countries to identify children in need of individualized educational treatment. Children classified asgiftedare often provided with accelerated or enriched programs. Children with identified deficits may be provided with enhanced education in specific skills such asphonological awareness. In addition to basic abilities, the individual's personalitytraitsare also important, with people higher inconscientiousnessandhopeattaining superior academic achievements, even after controlling for intelligence and past performance.[36]
Developmental psychology, and especially the psychology of cognitive development, opens a special perspective for educational psychology. This is so because education and the psychology of cognitive development converge on a number of crucial assumptions. First, the psychology of cognitive development defines human cognitive competence at successive phases of development. Education aims to help students acquire knowledge and develop skills that are compatible with their understanding and problem-solving capabilities at different ages. Thus, knowing the students' level on a developmental sequence provides information on the kind and level of knowledge they can assimilate, which, in turn, can be used as a frame for organizing the subject matter to be taught at different school grades. This is the reason whyPiaget's theory of cognitive developmentwas so influential for education, especially mathematics and science education.[37]In the same direction, theneo-Piagetian theories of cognitive developmentsuggest that in addition to the concerns above, sequencing of concepts and skills in teaching must take account of the processing andworking memorycapacities that characterize successive age levels.[38][39]
Second, the psychology ofcognitive developmentinvolves understanding howcognitivechange takes place and recognizing the factors and processes which enable cognitive competence to develop.Educationalso capitalizes oncognitivechange, because the construction of knowledge presupposes effective teaching methods that would move the student from a lower to a higher level of understanding. Mechanisms such as reflection on actual ormentalactions vis-à-vis alternative solutions to problems, tagging new concepts or solutions to symbols that help one recall and mentally manipulate them are just a few examples of how mechanisms of cognitive development may be used to facilitate learning.[39][40]
Finally, the psychology of cognitive development is concerned with individual differences in the organization of cognitive processes and abilities, in their rate of change, and in their mechanisms of change. The principles underlying intra- and inter-individual differences could be educationally useful, because knowing how students differ in regard to the various dimensions of cognitive development, such as processing and representational capacity, self-understanding and self-regulation, and the various domains of understanding, such as mathematical, scientific, or verbal abilities, would enable the teacher to cater for the needs of the different students so that no one is left behind.[39][41]
Constructivism is a category of learning theory in which emphasis is placed on the agency and prior "knowing" and experience of the learner, and often on the social and cultural determinants of the learning process. Educational psychologists distinguish individual (or psychological) constructivism, identified with Piaget's theory of cognitive development, from social constructivism. The social constructivist paradigm views the context in which the learning occurs as central to the learning itself.[42] It regards learning as a process of enculturation. People learn by exposure to the culture of practitioners. They observe and practice the behavior of practitioners and 'pick up relevant jargon, imitate behavior, and gradually start to act in accordance with the norms of the practice'.[43] So, a student learns to become a mathematician through exposure to mathematicians using tools to solve mathematical problems. Thus, in order to master a particular domain of knowledge it is not enough for students to learn the concepts of the domain. They should be exposed to the use of the concepts in authentic activities by the practitioners of the domain.[43]
A dominant influence on the social constructivist paradigm isLev Vygotsky's work on sociocultural learning, describing how interactions with adults, more capable peers, and cognitive tools are internalized to form mental constructs. "Zone of Proximal Development" (ZPD) is a term Vygotsky used to characterize an individual's mental development. He believed that tasks individuals can do on their own do not give a complete understanding of their mental development. He originally defined the ZPD as “the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers.”[44]He cited a famous example to make his case. Two children in school who originally can solve problems at an eight-year-old developmental level (that is, typical for children who were age 8) might be at different developmental levels. If each child received assistance from an adult, one was able to perform at a nine-year-old level and one was able to perform at a twelve-year-old level. He said “This difference between twelve and eight, or between nine and eight, is what we callthe zone of proximal development.”[44]He further said that the ZPD “defines those functions that have not yet matured but are in the process of maturation, functions that will mature tomorrow but are currently in an embryonic state.”[44]The zone is bracketed by the learner's current ability and the ability they can achieve with the aid of an instructor of some capacity.
Vygotsky viewed the ZPD as a better way to explain the relation between children's learning and cognitive development. Prior to the ZPD, the relation between learning and development could be boiled down to the following three major positions: 1) Development always precedes learning (e.g.,constructivism): children first need to meet a particular maturation level before learning can occur; 2) Learning and development cannot be separated, but instead occur simultaneously (e.g.,behaviorism): essentially, learning is development; and 3) learning and development are separate, but interactive processes (e.g.,gestaltism): one process always prepares the other process, and vice versa. Vygotsky rejected these three major theories because he believed that learning should always precede development in the ZPD. According to Vygotsky, through the assistance of a more knowledgeable other, a child can learn skills or aspects of a skill that go beyond the child's actual developmental or maturational level. The lower limit of ZPD is the level of skill reached by the child working independently (also referred to as the child's developmental level). The upper limit is the level of potential skill that the child can reach with the assistance of a more capable instructor. In this sense, the ZPD provides a prospective view of cognitive development, as opposed to a retrospective view that characterizes development in terms of a child's independent capabilities. The advancement through and attainment of the upper limit of the ZPD is limited by the instructional and scaffolding-related capabilities of the more knowledgeable other (MKO). The MKO is typically assumed to be an older, more experienced teacher or parent, but often can be a learner's peer or someone their junior. The MKO need not even be a person, it can be a machine or book, or other source of visual and/or audio input.[45]
Elaborating on Vygotsky's theory,Jerome Brunerand other educational psychologists developed the important concept ofinstructional scaffolding, in which the social or information environment offers supports for learning that are gradually withdrawn as they become internalized.[46]
Jean Piagetwas interested in how an organism adapts to its environment. Piaget hypothesized that infants are born with aschemaoperating at birth that he called "reflexes". Piaget identified four stages in cognitive development. The four stages are sensorimotor stage, pre-operational stage, concrete operational stage, and formal operational stage.[47]
To understand the characteristics of learners inchildhood,adolescence,adulthood, andold age, educational psychology develops and applies theories of humandevelopment.[48]Often represented as stages through which people pass as they mature, developmental theories describe changes in mental abilities (cognition), social roles, moral reasoning, and beliefs about the nature of knowledge.
For example, educational psychologists have conducted research on the instructional applicability ofJean Piaget's theory of development, according to which children mature through four stages of cognitive capability. Piaget hypothesized that children are not capable of abstract logical thought until they are older than about 11 years, and therefore younger children need to be taught using concrete objects and examples. Researchers have found that transitions, such as from concrete to abstract logical thought, do not occur at the same time in all domains. A child may be able to think abstractly about mathematics but remain limited to concrete thought when reasoning about human relationships. Perhaps Piaget's most enduring contribution is his insight that people actively construct their understanding through a self-regulatory process.[32]
Piaget proposed a developmental theory ofmoral reasoningin which children progress from a naïve understanding ofmoralitybased on behavior and outcomes to a more advanced understanding based on intentions. Piaget's views of moral development were elaborated byLawrence Kohlberginto astage theory of moral development. There is evidence that the moral reasoning described in stage theories is not sufficient to account for moral behavior. For example, other factors such asmodeling(as described by thesocial cognitive theory of morality) are required to explainbullying.
Rudolf Steiner's model ofchild developmentinterrelates physical, emotional, cognitive, and moral development[49]in developmental stages similar to those later described byPiaget.[50]
Developmental theories are sometimes presented not as shifts between qualitatively different stages, but as gradual increments on separate dimensions. Development of epistemological beliefs (beliefs about knowledge) has been described in terms of gradual changes in people's belief in: certainty and permanence of knowledge, fixedness of ability, and credibility of authorities such as teachers and experts. People develop more sophisticated beliefs about knowledge as they gain in education and maturity.[51]
Motivationis an internal state that activates, guides and sustains behavior. Motivation can have several impacting effects on how students learn and how they behave towards subject matter:[52]
Educational psychology research on motivation is concerned with thevolitionorwillthat students bring to a task, their level of interest andintrinsic motivation, the personally heldgoalsthat guide their behavior, and their belief about the causes of their success or failure. As intrinsic motivation deals with activities that act as their own rewards, extrinsic motivation deals with motivations that are brought on by consequences or punishments. A form ofattribution theorydeveloped byBernard Weiner[53]describes how students' beliefs about the causes of academic success or failure affect their emotions and motivations. For example, when students attribute failure to lack of ability, and ability is perceived as uncontrollable, they experience the emotions ofshameandembarrassmentand consequently decrease effort and show poorer performance. In contrast, when students attribute failure to lack of effort, and effort is perceived as controllable, they experience the emotion ofguiltand consequently increase effort and show improved performance.[53]
Theself-determination theory(SDT) was developed by psychologistsEdward Deciand Richard Ryan. SDT focuses on the importance ofintrinsic and extrinsic motivationin driving human behavior and posits inherent growth and development tendencies. It emphasizes the degree to which an individual's behavior is self-motivated and self-determined. When applied to the realm of education, the self-determination theory is concerned primarily with promoting in students an interest in learning, a value of education, and a confidence in their own capacities and attributes.[54]
Motivational theories also explain howlearners' goalsaffect the way they engage with academic tasks.[55]Those who havemastery goalsstrive to increase their ability and knowledge. Those who haveperformance approach goalsstrive for high grades and seek opportunities to demonstrate their abilities. Those who haveperformance avoidancegoals are driven by fear of failure and avoid situations where their abilities are exposed. Research has found that mastery goals are associated with many positive outcomes such as persistence in the face of failure, preference for challenging tasks,creativity, andintrinsic motivation. Performance avoidance goals are associated with negative outcomes such as poorconcentrationwhile studying, disorganized studying, less self-regulation, shallow information processing, andtest anxiety. Performance approach goals are associated with positive outcomes, and some negative outcomes such as an unwillingness to seek help and shallow information processing.[55]
Locus of controlis a salient factor in the successful academic performance of students. During the 1970s and '80s,Cassandra B. Whytedid significant educational research studying locus of control as related to the academic achievement of students pursuing higher education coursework. Much of her educational research and publications focused upon the theories ofJulian B. Rotterin regard to the importance of internal control and successful academic performance.[56]Whyte reported that individuals who perceive and believe that their hard work may lead to more successful academic outcomes, instead of depending on luck or fate, persist and achieve academically at a higher level. Therefore, it is important to provide education and counseling in this regard.[57]
Instructional design, the systematic design of materials, activities, and interactive environments for learning, is broadly informed by educational psychology theories and research. For example, in defining learning goals or objectives, instructional designers often use ataxonomy of educational objectivescreated byBenjamin Bloomand colleagues.[58]Bloom also researchedmastery learning, an instructional strategy in which learners only advance to a new learning objective after they have mastered its prerequisite objectives. Bloom[59]discovered that a combination of mastery learning with one-to-one tutoring is highly effective, producing learning outcomes far exceeding those normally achieved in classroom instruction.Gagné, another psychologist, had earlier developed an influential method oftask analysisin which a terminal learning goal is expanded into a hierarchy of learning objectives[60]connected by prerequisite relationships.
The following list of technological resources incorporate computer-aided instruction and intelligence for educational psychologists and their students:
Technology is essential to the field of educational psychology, not only for the psychologist themselves as far as testing, organization, and resources, but also for students. Educational psychologists who reside in the K-12 setting focus most of their time on special education students. It has been found that students with disabilities learning through technology such as iPad applications and videos are more engaged and motivated to learn in the classroom setting. Liu et al. explain that learning-based technology allows for students to be more focused, and learning is more efficient with learning technologies. The authors explain that learning technology also allows for students with social-emotional disabilities to participate in distance learning.[61]
Research onclassroom managementandpedagogyis conducted to guide teaching practice and form a foundation for teacher education programs. The goals of classroom management are to create an environment conducive to learning and to develop students' self-management skills. More specifically, classroom management strives to create positive teacher-student and peer relationships, manage student groups to sustain on-task behavior, and use counseling and other psychological methods to aid students who present persistent psychosocial problems.[63]
Introductory educational psychology is a commonly required area of study in most North American teacher education programs. When taught in that context, its content varies, but it typically emphasizes learning theories (especially cognitively oriented ones), issues about motivation, assessment of students' learning, and classroom management. A developingWikibook about educational psychologygives more detail about the educational psychology topics that are typically presented in preservice teacher education.
In order to become an educational psychologist, students can complete an undergraduate degree of their choice. They then must go to graduate school to study education psychology, counseling psychology, or school counseling. Most students today are also receiving theirdoctoraldegrees in order to hold the "psychologist" title. Educational psychologists work in a variety of settings. Some work in university settings where they carry out research on the cognitive and social processes of human development, learning and education. Educational psychologists may also work as consultants in designing and creating educational materials, classroom programs and online courses. Educational psychologists who work in K–12 school settings (closely related areschool psychologistsin the US and Canada) are trained at themaster'sand doctoral levels. In addition to conducting assessments, school psychologists provide services such as academic and behavioral intervention, counseling, teacher consultation, and crisis intervention. However, school psychologists are generally more individual-oriented towards students.[64]
Many high schools and colleges are increasingly offering educational psychology courses, with some colleges offering it as a general education requirement. Similarly, colleges offer students opportunities to obtain a Ph.D. in educational psychology.
Within the UK, students must hold a degree that is accredited by the British Psychological Society (either undergraduate or at the master's level) before applying for a three-year doctoral course that involves further education, placement, and a research thesis.
In recent years, many university training programs in the US have included curriculum that focuses on issues of race, gender, disability, trauma, and poverty, and how those issues affect learning and academic outcomes. A growing number of universities offer specialized certificates that allow professionals to work and study in these fields (i.e. autism specialists, trauma specialists).
Employment for psychologists in the United States, anticipated to grow by 18–26%, is expected to grow faster than most occupations in 2014. One in four psychologists is employed in educational settings. In the United States, the median salary for psychologists in primary and secondary schools is US$58,360 as of May 2004.[65]
In recent decades, the participation of women as professional researchers in North American educational psychology has risen dramatically.[66]
Unlike in some other fields of educational research, quantitative methods are the predominant mode of inquiry in educational psychology, though qualitative and mixed-methods studies are also common.[67] Educational psychology, as much as any other field of psychology, relies on a balance of observational, correlational, and experimental study designs. Given the complexities of modeling dependent data and psychological variables in school settings, educational psychologists have been at the forefront of the development of several common statistical tools, including psychometric methods, meta-analysis, regression discontinuity and latent variable modeling.
|
https://en.wikipedia.org/wiki/Educational_psychology
|
In combinatorics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets; symbolically expressed as

{\displaystyle |A\cup B|=|A|+|B|-|A\cap B|,}
whereAandBare two finite sets and |S| indicates thecardinalityof a setS(which may be considered as the number of elements of the set, if the set isfinite). The formula expresses the fact that the sum of the sizes of the two sets may be too large since some elements may be counted twice. The double-counted elements are those in theintersectionof the two sets and the count is corrected by subtracting the size of the intersection.
The inclusion–exclusion principle, being a generalization of the two-set case, is perhaps more clearly seen in the case of three sets, which for the sets A, B and C is given by

{\displaystyle |A\cup B\cup C|=|A|+|B|+|C|-|A\cap B|-|A\cap C|-|B\cap C|+|A\cap B\cap C|.}
This formula can be verified by counting how many times each region in theVenn diagramfigure is included in the right-hand side of the formula. In this case, when removing the contributions of over-counted elements, the number of elements in the mutual intersection of the three sets has been subtracted too often, so must be added back in to get the correct total.
Generalizing the results of these examples gives the principle of inclusion–exclusion. To find the cardinality of the union ofnsets:
The name comes from the idea that the principle is based on over-generousinclusion, followed by compensatingexclusion.
This concept is attributed toAbraham de Moivre(1718),[1]although it first appears in a paper ofDaniel da Silva(1854)[2]and later in a paper byJ. J. Sylvester(1883).[3]Sometimes the principle is referred to as the formula of Da Silva or Sylvester, due to these publications. The principle can be viewed as an example of thesieve methodextensively used innumber theoryand is sometimes referred to as thesieve formula.[4]
As finite probabilities are computed as counts relative to the cardinality of theprobability space, the formulas for the principle of inclusion–exclusion remain valid when the cardinalities of the sets are replaced by finite probabilities. More generally, both versions of the principle can be put under the common umbrella ofmeasure theory.
In a very abstract setting, the principle of inclusion–exclusion can be expressed as the calculation of the inverse of a certain matrix.[5]This inverse has a special structure, making the principle an extremely valuable technique in combinatorics and related areas of mathematics. AsGian-Carlo Rotaput it:[6]
"One of the most useful principles of enumeration in discrete probability and combinatorial theory is the celebrated principle of inclusion–exclusion. When skillfully applied, this principle has yielded the solution to many a combinatorial problem."
In its general formula, the principle of inclusion–exclusion states that for finite sets A1, ..., An, one has the identity

{\displaystyle {\biggl |}\bigcup _{i=1}^{n}A_{i}{\biggr |}=\sum _{i=1}^{n}|A_{i}|-\sum _{1\leq i<j\leq n}|A_{i}\cap A_{j}|+\sum _{1\leq i<j<k\leq n}|A_{i}\cap A_{j}\cap A_{k}|-\cdots +(-1)^{n+1}|A_{1}\cap \cdots \cap A_{n}|.}

This can be compactly written as

{\displaystyle {\biggl |}\bigcup _{i=1}^{n}A_{i}{\biggr |}=\sum _{k=1}^{n}(-1)^{k+1}\sum _{1\leq i_{1}<\cdots <i_{k}\leq n}|A_{i_{1}}\cap \cdots \cap A_{i_{k}}|}

or

{\displaystyle {\biggl |}\bigcup _{i=1}^{n}A_{i}{\biggr |}=\sum _{\emptyset \neq J\subseteq \{1,\ldots ,n\}}(-1)^{|J|+1}{\biggl |}\bigcap _{j\in J}A_{j}{\biggr |}.}
In words, to count the number of elements in a finite union of finite sets, first sum the cardinalities of the individual sets, then subtract the number of elements that appear in at least two sets, then add back the number of elements that appear in at least three sets, then subtract the number of elements that appear in at least four sets, and so on. This process always ends since there can be no elements that appear in more than the number of sets in the union. (For example, ifn=4,{\displaystyle n=4,}there can be no elements that appear in more than4{\displaystyle 4}sets; equivalently, there can be no elements that appear in at least5{\displaystyle 5}sets.)
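As an illustration (not part of the original article), the following Python sketch applies this alternating-sum procedure directly and checks the result against an explicitly computed union; the function name and the example sets are chosen here for demonstration only.

```python
from itertools import combinations

def union_size_inclusion_exclusion(sets):
    """|A_1 ∪ ... ∪ A_n| as the alternating sum of intersection sizes."""
    total = 0
    for k in range(1, len(sets) + 1):
        sign = (-1) ** (k + 1)          # add for odd k, subtract for even k
        for combo in combinations(sets, k):
            total += sign * len(set.intersection(*combo))
    return total

A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
assert union_size_inclusion_exclusion([A, B, C]) == len(A | B | C)  # both equal 7
```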
In applications it is common to see the principle expressed in its complementary form. That is, lettingSbe a finiteuniversal setcontaining all of theAiand lettingAi¯{\displaystyle {\bar {A_{i}}}}denote the complement ofAiinS, byDe Morgan's lawswe have
As another variant of the statement, letP1, ...,Pnbe a list of properties that elements of a setSmay or may not have, then the principle of inclusion–exclusion provides a way to calculate the number of elements ofSthat have none of the properties. Just letAibe the subset of elements ofSwhich have the propertyPiand use the principle in its complementary form. This variant is due toJ. J. Sylvester.[1]
Notice that if only the first m < n sums on the right (in the general form of the principle) are taken into account, then the result is an overestimate if m is odd and an underestimate if m is even.
A more complex example is the following.
Suppose there is a deck ofncards numbered from 1 ton. Suppose a card numberedmis in the correct position if it is themthcard in the deck. How many ways,W, can the cards be shuffled with at least 1 card being in the correct position?
Begin by defining set Am, which is all of the orderings of cards with the mth card correct. Then the number of orders, W, with at least one card being in the correct position, m, is

{\displaystyle W=|A_{1}\cup A_{2}\cup \cdots \cup A_{n}|.}
Apply the principle of inclusion–exclusion,
Each valueAm1∩⋯∩Amp{\displaystyle A_{m_{1}}\cap \cdots \cap A_{m_{p}}}represents the set of shuffles having at leastpvaluesm1, ...,mpin the correct position. Note that the number of shuffles with at leastpvalues correct only depends onp, not on the particular values ofm{\displaystyle m}. For example, the number of shuffles having the 1st, 3rd, and 17th cards in the correct position is the same as the number of shuffles having the 2nd, 5th, and 13th cards in the correct positions. It only matters that of thencards, 3 were chosen to be in the correct position. Thus there are(np){\textstyle {n \choose p}}equal terms in thepthsummation (seecombination).
{\displaystyle |A_{1}\cap \cdots \cap A_{p}|} is the number of orderings having p elements in the correct position, which is equal to the number of ways of ordering the remaining n − p elements, or (n − p)!. Thus we finally get:

{\displaystyle W=\sum _{p=1}^{n}(-1)^{p+1}{\binom {n}{p}}(n-p)!.}
A permutation where no card is in the correct position is called a derangement. Taking n! to be the total number of permutations, the probability Q that a random shuffle produces a derangement is given by

{\displaystyle Q=1-{\frac {W}{n!}}=\sum _{k=0}^{n}{\frac {(-1)^{k}}{k!}},}
a truncation ton+ 1 terms of theTaylor expansionofe−1. Thus the probability of guessing an order for a shuffled deck of cards and being incorrect about every card is approximatelye−1or 37%.
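A short Python check (an illustration added here, not part of the article) of the derangement count and of how quickly the ratio approaches e⁻¹:

```python
from math import comb, factorial, exp

def derangements(n):
    """Number of permutations of n items with no fixed point, by inclusion–exclusion."""
    return sum((-1) ** k * comb(n, k) * factorial(n - k) for k in range(n + 1))

n = 8
d = derangements(n)
print(d)                          # 14833 out of 8! = 40320 permutations
print(d / factorial(n), exp(-1))  # 0.3678819..., 0.3678794...
```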
The situation that appears in the derangement example above occurs often enough to merit special attention,[7] namely, when the sizes of the intersection sets appearing in the formulas for the principle of inclusion–exclusion depend only on the number of sets in the intersections and not on which sets appear. More formally, if the intersection
has the same cardinality, sayαk= |AJ|, for everyk-element subsetJof {1, ...,n}, then
Or, in the complementary form, where the universal setShas cardinalityα0,
Given afamily (repeats allowed) of subsetsA1,A2, ...,Anof a universal setS, the principle of inclusion–exclusion calculates the number of elements ofSin none of these subsets. A generalization of this concept would calculate the number of elements ofSwhich appear in exactly some fixedmof these sets.
Let N = [n] = {1,2,...,n}. If we define {\displaystyle A_{\emptyset }=S}, then, using the notation of the previous section, the principle of inclusion–exclusion states that the number of elements of S contained in none of the Ai is:
IfIis a fixed subset of the index setN, then the number of elements which belong toAifor alliinIand for no other values is:[8]
Define the sets
We seek the number of elements in none of theBkwhich, by the principle of inclusion–exclusion (withB∅=AI{\displaystyle B_{\emptyset }=A_{I}}), is
The correspondenceK↔J=I∪Kbetween subsets ofN\Iand subsets ofNcontainingIis a bijection and ifJandKcorrespond under this map thenBK=AJ, showing that the result is valid.
In probability, for events A1, ..., An in a probability space {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )}, the inclusion–exclusion principle becomes for n = 2

{\displaystyle \mathbb {P} (A_{1}\cup A_{2})=\mathbb {P} (A_{1})+\mathbb {P} (A_{2})-\mathbb {P} (A_{1}\cap A_{2}),}

for n = 3

{\displaystyle \mathbb {P} (A_{1}\cup A_{2}\cup A_{3})=\mathbb {P} (A_{1})+\mathbb {P} (A_{2})+\mathbb {P} (A_{3})-\mathbb {P} (A_{1}\cap A_{2})-\mathbb {P} (A_{1}\cap A_{3})-\mathbb {P} (A_{2}\cap A_{3})+\mathbb {P} (A_{1}\cap A_{2}\cap A_{3}),}

and in general

{\displaystyle \mathbb {P} {\biggl (}\bigcup _{i=1}^{n}A_{i}{\biggr )}=\sum _{i=1}^{n}\mathbb {P} (A_{i})-\sum _{i<j}\mathbb {P} (A_{i}\cap A_{j})+\sum _{i<j<k}\mathbb {P} (A_{i}\cap A_{j}\cap A_{k})-\cdots +(-1)^{n-1}\mathbb {P} (A_{1}\cap \cdots \cap A_{n}),}

which can be written in closed form as

{\displaystyle \mathbb {P} {\biggl (}\bigcup _{i=1}^{n}A_{i}{\biggr )}=\sum _{k=1}^{n}{\Bigl (}(-1)^{k-1}\sum _{I\subseteq \{1,\ldots ,n\},\,|I|=k}\mathbb {P} (A_{I}){\Bigr )},}

where the last sum runs over all subsets I of the indices 1, ..., n which contain exactly k elements, and

{\displaystyle A_{I}:=\bigcap _{i\in I}A_{i}}
denotes the intersection of all thoseAiwith index inI.
According to theBonferroni inequalities, the sum of the first terms in the formula is alternately an upper bound and a lower bound for theLHS. This can be used in cases where the full formula is too cumbersome.
For a generalmeasure space(S,Σ,μ) andmeasurablesubsetsA1, ...,Anoffinite measure, the above identities also hold when the probability measureP{\displaystyle \mathbb {P} }is replaced by the measureμ.
If, in the probabilistic version of the inclusion–exclusion principle, the probability of the intersection AI only depends on the cardinality of I, meaning that for every k in {1, ..., n} there is an ak such that

{\displaystyle a_{k}=\mathbb {P} (A_{I})\quad {\text{for every }}I\subseteq \{1,\ldots ,n\}{\text{ with }}|I|=k,}

then the above formula simplifies to

{\displaystyle \mathbb {P} {\biggl (}\bigcup _{i=1}^{n}A_{i}{\biggr )}=\sum _{k=1}^{n}(-1)^{k-1}{\binom {n}{k}}a_{k}}

due to the combinatorial interpretation of the binomial coefficient {\textstyle {\binom {n}{k}}}. For example, if the events {\displaystyle A_{i}} are independent and identically distributed, then {\displaystyle \mathbb {P} (A_{i})=p} for all i, and we have {\displaystyle a_{k}=p^{k}}, in which case the expression above simplifies to

{\displaystyle \mathbb {P} {\biggl (}\bigcup _{i=1}^{n}A_{i}{\biggr )}=1-(1-p)^{n}.}
(This result can also be derived more simply by considering the intersection of the complements of the eventsAi{\displaystyle A_{i}}.)
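For instance (an added numerical check, not from the article), the alternating sum and the complement formula agree for independent events with a common probability p; the numbers below are illustrative only.

```python
from math import comb

def union_prob(n, p):
    """P(A_1 ∪ ... ∪ A_n) for n independent events of probability p, by inclusion–exclusion."""
    return sum((-1) ** (k + 1) * comb(n, k) * p ** k for k in range(1, n + 1))

n, p = 5, 0.3
print(union_prob(n, p))   # ≈ 0.83193
print(1 - (1 - p) ** n)   # ≈ 0.83193, via the intersection of the complements
```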
An analogous simplification is possible in the case of a general measure space(S,Σ,μ){\displaystyle (S,\Sigma ,\mu )}and measurable subsetsA1,…,An{\displaystyle A_{1},\dots ,A_{n}}of finite measure.
There is another formula used inpoint processes. LetS{\displaystyle S}be a finite set andP{\displaystyle P}be a random subset ofS{\displaystyle S}. LetA{\displaystyle A}be any subset ofS{\displaystyle S}, then
{\displaystyle {\begin{aligned}\mathbb {P} (P=A)&=\mathbb {P} (P\supset A)-\sum _{j_{1}\in S\setminus A}\mathbb {P} (P\supset A\cup \{j_{1}\})\\&\quad +\sum _{\substack{j_{1},j_{2}\in S\setminus A\\j_{1}\neq j_{2}}}\mathbb {P} (P\supset A\cup \{j_{1},j_{2}\})+\dots \\&\quad +(-1)^{|S|-|A|}\mathbb {P} (P\supset S)\\&=\sum _{A\subset J\subset S}(-1)^{|J|-|A|}\mathbb {P} (P\supset J).\end{aligned}}}
The principle is sometimes stated in the form[9] that says that if

{\displaystyle g(A)=\sum _{S\subseteq A}f(S)}

then

{\displaystyle f(A)=\sum _{S\subseteq A}(-1)^{|A|-|S|}g(S)\qquad (2)}
The combinatorial and the probabilistic version of the inclusion–exclusion principle are instances of (2).
Takem_={1,2,…,m}{\displaystyle {\underline {m}}=\{1,2,\ldots ,m\}},f(m_)=0{\displaystyle f({\underline {m}})=0}, and
respectively for allsetsS{\displaystyle S}withS⊊m_{\displaystyle S\subsetneq {\underline {m}}}. Then we obtain
respectively for all setsA{\displaystyle A}withA⊊m_{\displaystyle A\subsetneq {\underline {m}}}. This is becauseelementsa{\displaystyle a}of∩i∈m_∖AAi{\displaystyle \cap _{i\in {\underline {m}}\smallsetminus A}A_{i}}can becontainedin otherAi{\displaystyle A_{i}}(Ai{\displaystyle A_{i}}withi∈A{\displaystyle i\in A}) as well, and the∩∖∪{\displaystyle \cap \smallsetminus \cup }-formularuns exactly through all possible extensions of the sets{Ai∣i∈m_∖A}{\displaystyle \{A_{i}\mid i\in {\underline {m}}\smallsetminus A\}}with otherAi{\displaystyle A_{i}}, countinga{\displaystyle a}only for the set that matches the membership behavior ofa{\displaystyle a}, ifS{\displaystyle S}runs through allsubsetsofA{\displaystyle A}(as in the definition ofg(A){\displaystyle g(A)}).
Sincef(m_)=0{\displaystyle f({\underline {m}})=0}, we obtain from (2) withA=m_{\displaystyle A={\underline {m}}}that
and by interchanging sides, the combinatorial and the probabilistic version of the inclusion–exclusion principle follow.
If one sees a number {\displaystyle n} as a set of its prime factors, then (2) is a generalization of the Möbius inversion formula for square-free natural numbers. Therefore, (2) is seen as the Möbius inversion formula for the incidence algebra of the partially ordered set of all subsets of A.
For a generalization of the full version of Möbius inversion formula, (2) must be generalized tomultisets. For multisets instead of sets, (2) becomes
whereA−S{\displaystyle A-S}is the multiset for which(A−S)⊎S=A{\displaystyle (A-S)\uplus S=A}, and
Notice thatμ(A−S){\displaystyle \mu (A-S)}is just the(−1)|A|−|S|{\displaystyle (-1)^{|A|-|S|}}of (2) in caseA−S{\displaystyle A-S}is a set.
Substituteg(S)=∑T⊆Sf(T){\displaystyle g(S)=\sum _{T\subseteq S}f(T)}on the right hand side of (3). Notice thatf(A){\displaystyle f(A)}appears once on both sides of (3). So we must show that for allT{\displaystyle T}withT⊊A{\displaystyle T\subsetneq A}, the termsf(T){\displaystyle f(T)}cancel out on the right hand side of (3). For that purpose, take a fixedT{\displaystyle T}such thatT⊊A{\displaystyle T\subsetneq A}and take an arbitrary fixeda∈A{\displaystyle a\in A}such thata∉T{\displaystyle a\notin T}.
Notice thatA−S{\displaystyle A-S}must be a set for eachpositiveornegativeappearance off(T){\displaystyle f(T)}on the right hand side of (3) that is obtained by way of the multisetS{\displaystyle S}such thatT⊆S⊆A{\displaystyle T\subseteq S\subseteq A}. Now each appearance off(T){\displaystyle f(T)}on the right hand side of (3) that is obtained by way ofS{\displaystyle S}such thatA−S{\displaystyle A-S}is a set that containsa{\displaystyle a}cancels out with the one that is obtained by way of the correspondingS{\displaystyle S}such thatA−S{\displaystyle A-S}is a set that does not containa{\displaystyle a}. This gives the desired result.
The inclusion–exclusion principle is widely used and only a few of its applications can be mentioned here.
A well-known application of the inclusion–exclusion principle is to the combinatorial problem of counting allderangementsof a finite set. Aderangementof a setAis abijectionfromAinto itself that has no fixed points. Via the inclusion–exclusion principle one can show that if the cardinality ofAisn, then the number of derangements is [n! /e] where [x] denotes thenearest integertox; a detailed proof is availablehereand also seethe examples sectionabove.
The first occurrence of the problem of counting the number of derangements is in an early book on games of chance:Essai d'analyse sur les jeux de hazardby P. R. de Montmort (1678 – 1719) and was known as either "Montmort's problem" or by the name he gave it, "problème des rencontres."[10]The problem is also known as thehatcheck problem.
The number of derangements is also known as thesubfactorialofn, written !n. It follows that if all bijections are assigned the same probability then the probability that a random bijection is a derangement quickly approaches 1/easngrows.
The principle of inclusion–exclusion, combined with De Morgan's law, can be used to count the cardinality of the intersection of sets as well. Let {\displaystyle {\overline {A_{k}}}} represent the complement of Ak with respect to some universal set A such that {\displaystyle A_{k}\subseteq A} for each k. Then we have

{\displaystyle {\biggl |}\bigcap _{i=1}^{n}A_{i}{\biggr |}={\biggl |}{\overline {\bigcup _{i=1}^{n}{\overline {A_{i}}}}}{\biggr |}=|A|-{\biggl |}\bigcup _{i=1}^{n}{\overline {A_{i}}}{\biggr |},}
thereby turning the problem of finding an intersection into the problem of finding a union.
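A small Python sketch of this complement trick (added for illustration; the universe and the sets are arbitrary examples):

```python
def intersection_size_via_union(universe, sets):
    """|A_1 ∩ ... ∩ A_n| = |A| - |complement(A_1) ∪ ... ∪ complement(A_n)|, by De Morgan."""
    complements = [universe - s for s in sets]
    return len(universe) - len(set().union(*complements))

U = set(range(10))
A, B, C = {0, 1, 2, 3, 4, 5}, {2, 3, 4, 5, 6}, {4, 5, 6, 7, 8}
assert intersection_size_via_union(U, [A, B, C]) == len(A & B & C)  # both equal 2
```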
The inclusion exclusion principle forms the basis of algorithms for a number of NP-hard graph partitioning problems, such asgraph coloring.[11]
A well known application of the principle is the construction of thechromatic polynomialof a graph.[12]
The number ofperfect matchingsof abipartite graphcan be calculated using the principle.[13]
Given finite sets A and B, how many surjective functions (onto functions) are there from A to B? Without any loss of generality we may take A = {1, ..., k} and B = {1, ..., n}, since only the cardinalities of the sets matter. By using S as the set of all functions from A to B, and defining, for each i in B, the property Pi as "the function misses the element i in B" (i is not in the image of the function), the principle of inclusion–exclusion gives the number of onto functions between A and B as:[14]

{\displaystyle \sum _{j=0}^{n}{\binom {n}{j}}(-1)^{j}(n-j)^{k}.}
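The formula is easy to verify by brute force for small sets; the following Python sketch (added as an illustration, with freely chosen sizes) compares it against direct enumeration of all functions.

```python
from math import comb
from itertools import product

def surjections(k, n):
    """Number of onto functions from a k-element set to an n-element set."""
    return sum((-1) ** j * comb(n, j) * (n - j) ** k for j in range(n + 1))

k, n = 5, 3
# Enumerate every function {1..k} -> {1..n} and keep those that hit all n values.
brute = sum(1 for f in product(range(n), repeat=k) if len(set(f)) == n)
assert surjections(k, n) == brute  # both equal 150
```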
Apermutationof the setS= {1, ...,n} where each element ofSis restricted to not being in certain positions (here the permutation is considered as an ordering of the elements ofS) is called apermutation with forbidden positions. For example, withS= {1,2,3,4}, the permutations with the restriction that the element 1 can not be in positions 1 or 3, and the element 2 can not be in position 4 are: 2134, 2143, 3124, 4123, 2341, 2431, 3241, 3421, 4231 and 4321. By lettingAibe the set of positions that the elementiis not allowed to be in, and the propertyPito be the property that a permutation puts elementiinto a position inAi, the principle of inclusion–exclusion can be used to count the number of permutations which satisfy all the restrictions.[15]
In the given example, there are 12 = 2(3!) permutations with property P1, 6 = 3! permutations with property P2, and no permutations have properties P3 or P4, as there are no restrictions for these two elements. The number of permutations satisfying the restrictions is thus:

{\displaystyle 4!-(12+6+0+0)+4=24-18+4=10.}
The final 4 in this computation is the number of permutations having both propertiesP1andP2. There are no other non-zero contributions to the formula.
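The count of 10 can be confirmed by direct enumeration; the following Python sketch (an added illustration of this particular example) checks every permutation of {1, 2, 3, 4} against the two restrictions.

```python
from itertools import permutations

# Forbidden positions (1-indexed): element 1 may not occupy positions 1 or 3,
# element 2 may not occupy position 4.
forbidden = {1: {1, 3}, 2: {4}}

def allowed(perm):
    return all(pos not in forbidden.get(elem, set())
               for pos, elem in enumerate(perm, start=1))

print(sum(1 for perm in permutations([1, 2, 3, 4]) if allowed(perm)))  # 10
```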
The Stirling numbers of the second kind, S(n,k), count the number of partitions of a set of n elements into k non-empty subsets (indistinguishable boxes). An explicit formula for them can be obtained by applying the principle of inclusion–exclusion to a very closely related problem, namely, counting the number of partitions of an n-set into k non-empty but distinguishable boxes (ordered non-empty subsets). Using the universal set consisting of all partitions of the n-set into k (possibly empty) distinguishable boxes, A1, A2, ..., Ak, and the properties Pi meaning that the partition has box Ai empty, the principle of inclusion–exclusion gives an answer for the related result. Dividing by k! to remove the artificial ordering gives the Stirling number of the second kind:[16]

{\displaystyle S(n,k)={\frac {1}{k!}}\sum _{j=0}^{k}(-1)^{j}{\binom {k}{j}}(k-j)^{n}.}
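As a quick consistency check (added here, not part of the article), the explicit inclusion–exclusion formula can be compared with the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1):

```python
from math import comb, factorial

def stirling2(n, k):
    """S(n, k) via inclusion–exclusion, dividing by k! to make the boxes indistinguishable."""
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def stirling2_rec(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2_rec(n - 1, k) + stirling2_rec(n - 1, k - 1)

assert all(stirling2(n, k) == stirling2_rec(n, k)
           for n in range(1, 9) for k in range(1, n + 1))
```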
A rook polynomial is thegenerating functionof the number of ways to place non-attackingrookson aboard Bthat looks like a subset of the squares of acheckerboard; that is, no two rooks may be in the same row or column. The boardBis any subset of the squares of a rectangular board withnrows andmcolumns; we think of it as the squares in which one is allowed to put a rook. Thecoefficient,rk(B) ofxkin the rook polynomialRB(x) is the number of wayskrooks, none of which attacks another, can be arranged in the squares ofB. For any boardB, there is a complementary boardB′{\displaystyle B'}consisting of the squares of the rectangular board that are not inB. This complementary board also has a rook polynomialRB′(x){\displaystyle R_{B'}(x)}with coefficientsrk(B′).{\displaystyle r_{k}(B').}
It is sometimes convenient to be able to calculate the highest coefficient of a rook polynomial in terms of the coefficients of the rook polynomial of the complementary board. Without loss of generality we can assume thatn≤m, so this coefficient isrn(B). The number of ways to placennon-attacking rooks on the completen×m"checkerboard" (without regard as to whether the rooks are placed in the squares of the boardB) is given by thefalling factorial:
LettingPibe the property that an assignment ofnnon-attacking rooks on the complete board has a rook in columniwhich is not in a square of the boardB, then by the principle of inclusion–exclusion we have:[17]
Euler's totient or phi function, φ(n), is an arithmetic function that counts the number of positive integers less than or equal to n that are relatively prime to n. That is, if n is a positive integer, then φ(n) is the number of integers k in the range 1 ≤ k ≤ n which have no common factor with n other than 1. The principle of inclusion–exclusion is used to obtain a formula for φ(n). Let S be the set {1, ..., n} and define the property Pi to be that a number in S is divisible by the prime number pi, for 1 ≤ i ≤ r, where the prime factorization of n is

{\displaystyle n=p_{1}^{a_{1}}p_{2}^{a_{2}}\cdots p_{r}^{a_{r}}.}

Then,[18]

{\displaystyle \varphi (n)=n-\sum _{i}{\frac {n}{p_{i}}}+\sum _{i<j}{\frac {n}{p_{i}p_{j}}}-\cdots =n\prod _{i=1}^{r}{\Bigl (}1-{\frac {1}{p_{i}}}{\Bigr )}.}
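A small Python sketch (added for illustration; 360 = 2³·3²·5 is an arbitrary example) that starts from n, subtracts the multiples of each prime factor, applies the inclusion–exclusion corrections, and checks the result directly:

```python
from itertools import combinations
from math import gcd, prod

def totient(n, prime_factors):
    """φ(n): subtract multiples of each prime factor, correcting by inclusion–exclusion."""
    count = n
    for k in range(1, len(prime_factors) + 1):
        for combo in combinations(prime_factors, k):
            count += (-1) ** k * (n // prod(combo))
    return count

print(totient(360, [2, 3, 5]))                            # 96
print(sum(1 for k in range(1, 361) if gcd(k, 360) == 1))  # 96, by direct count
```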
The Dirichlet hyperbola method re-expresses a sum of amultiplicative functionf(n){\displaystyle f(n)}by selecting a suitableDirichlet convolutionf=g∗h{\displaystyle f=g\ast h}, recognizing that the sum
can be recast as a sum over thelattice pointsin a region bounded byx≥1{\displaystyle x\geq 1},y≥1{\displaystyle y\geq 1}, andxy≤n{\displaystyle xy\leq n}, splitting this region into two overlapping subregions, and finally using the inclusion–exclusion principle to conclude that
In many cases where the principle could give an exact formula (in particular, countingprime numbersusing thesieve of Eratosthenes), the formula arising does not offer useful content because the number of terms in it is excessive. If each term individually can be estimated accurately, the accumulation of errors may imply that the inclusion–exclusion formula is not directly applicable. Innumber theory, this difficulty was addressed byViggo Brun. After a slow start, his ideas were taken up by others, and a large variety ofsieve methodsdeveloped. These for example may try to find upper bounds for the "sieved" sets, rather than an exact formula.
LetA1, ...,Anbe arbitrary sets andp1, ...,pnreal numbers in the closed unit interval[0, 1]. Then, for every even numberkin {0, ...,n}, theindicator functionssatisfy the inequality:[19]
Choose an element contained in the union of all sets and let {\displaystyle A_{1},A_{2},\dots ,A_{t}} be the individual sets containing it. (Note that t > 0.) Since the element is counted precisely once by the left-hand side of equation (1), we need to show that it is counted precisely once by the right-hand side. On the right-hand side, the only non-zero contributions occur when all the subsets in a particular term contain the chosen element, that is, all the subsets are selected from {\displaystyle A_{1},A_{2},\dots ,A_{t}}. The contribution is one for each of these sets (plus or minus depending on the term) and therefore is just the (signed) number of these subsets used in the term. We then have:

{\displaystyle {\binom {t}{1}}-{\binom {t}{2}}+\cdots +(-1)^{t+1}{\binom {t}{t}}=\sum _{k=1}^{t}(-1)^{k+1}{\binom {t}{k}}.}

By the binomial theorem,

{\displaystyle 0=(1-1)^{t}=\sum _{k=0}^{t}{\binom {t}{k}}(-1)^{k}.}

Using the fact that {\displaystyle {\binom {t}{0}}=1} and rearranging terms, we have

{\displaystyle 1=\sum _{k=1}^{t}(-1)^{k+1}{\binom {t}{k}},}
and so, the chosen element is counted only once by the right-hand side of equation (1).
An algebraic proof can be obtained using indicator functions (also known as characteristic functions). The indicator function of a subset S of a set X is the function

{\displaystyle \mathbf {1} _{S}(x)={\begin{cases}1&{\text{if }}x\in S\\0&{\text{if }}x\notin S.\end{cases}}}
If {\displaystyle A} and {\displaystyle B} are two subsets of {\displaystyle X}, then

{\displaystyle \mathbf {1} _{A}\cdot \mathbf {1} _{B}=\mathbf {1} _{A\cap B}.}
Let A denote the union {\textstyle \bigcup _{i=1}^{n}A_{i}} of the sets A1, ..., An. To prove the inclusion–exclusion principle in general, we first verify the identity

{\displaystyle \mathbf {1} _{A}=\sum _{k=1}^{n}(-1)^{k-1}\sum _{I\subseteq \{1,\ldots ,n\},\,|I|=k}\mathbf {1} _{A_{I}}\qquad (4)}

for indicator functions, where:

{\displaystyle A_{I}=\bigcap _{i\in I}A_{i}.}
The following function

{\displaystyle (\mathbf {1} _{A}-\mathbf {1} _{A_{1}})(\mathbf {1} _{A}-\mathbf {1} _{A_{2}})\cdots (\mathbf {1} _{A}-\mathbf {1} _{A_{n}})}
is identically zero because: ifxis not inA, then all factors are 0−0 = 0; and otherwise, ifxdoes belong to someAm, then the correspondingmthfactor is 1−1=0. By expanding the product on the left-hand side, equation (4) follows.
To prove the inclusion–exclusion principle for the cardinality of sets, sum the equation (4) over allxin the union ofA1, ...,An. To derive the version used in probability, take theexpectationin (4). In general,integratethe equation (4) with respect toμ. Always use linearity in these derivations.
This article incorporates material from principle of inclusion–exclusion onPlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
|
Derivative-free optimization (sometimes referred to as blackbox optimization) is a discipline in mathematical optimization that does not use derivative information in the classical sense to find optimal solutions: sometimes information about the derivative of the objective function f is unavailable, unreliable or impractical to obtain. For example, f might be non-smooth, or time-consuming to evaluate, or in some way noisy, so that methods that rely on derivatives or approximate them via finite differences are of little use. The problem of finding optimal points in such situations is referred to as derivative-free optimization, and algorithms that do not use derivatives or finite differences are called derivative-free algorithms.[1]
The problem to be solved is to numerically optimize an objective functionf:A→R{\displaystyle f\colon A\to \mathbb {R} }for somesetA{\displaystyle A}(usuallyA⊂Rn{\displaystyle A\subset \mathbb {R} ^{n}}), i.e. findx0∈A{\displaystyle x_{0}\in A}such that without loss of generalityf(x0)≤f(x){\displaystyle f(x_{0})\leq f(x)}for allx∈A{\displaystyle x\in A}.
When applicable, a common approach is to iteratively improve a parameter guess by local hill-climbing in the objective function landscape. Derivative-based algorithms use derivative information of f to find a good search direction, since for example the gradient gives the direction of steepest ascent. Derivative-based optimization is efficient at finding local optima for continuous-domain smooth single-modal problems. However, such methods can have problems when, e.g., A is disconnected or (mixed-)integer, or when f is expensive to evaluate, non-smooth, or noisy, so that (numeric approximations of) derivatives do not provide useful information. A slightly different problem arises when f is multi-modal, in which case local derivative-based methods only give local optima and might miss the global one.
In derivative-free optimization, various methods are employed to address these challenges using only function values off{\displaystyle f}, but no derivatives. Some of these methods can be proved to discover optima, but some are rather metaheuristic, since the problems are in general more difficult to solve compared toconvex optimization. For these, the ambition is rather to efficiently find "good" parameter values which can be near-optimal given enough resources, but optimality guarantees can typically not be given. The challenges are diverse, so no single algorithm can be expected to work well for all kinds of problems.
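As a minimal illustration of the idea — a simple stochastic local search rather than any particular published algorithm — the following Python sketch improves a parameter guess using only evaluations of f; the sphere function is just a stand-in objective:

import random

def sphere(x):
    # Stand-in objective; in practice f may be noisy, non-smooth, or expensive.
    return sum(xi * xi for xi in x)

def random_search(f, x0, step=1.0, iters=5000, shrink=0.99):
    """Derivative-free local search: propose random perturbations, keep improvements."""
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        candidate = [xi + random.uniform(-step, step) for xi in best_x]
        fc = f(candidate)
        if fc < best_f:      # accept only improvements; no derivative information is used
            best_x, best_f = candidate, fc
        else:
            step *= shrink   # slowly reduce the search radius after failed proposals
    return best_x, best_f

x, fx = random_search(sphere, x0=[3.0, -2.0, 1.5])
print(x, fx)  # typically approaches the minimizer [0, 0, 0]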
Notable derivative-free optimization algorithms include:
Benchmarks exist for blackbox optimization algorithms; see, for example, the bbob-biobj tests.[2]
|
https://en.wikipedia.org/wiki/Derivative-free_optimization
|
This is a list of notableprogramming languages, grouped by type.
The groupings are overlapping, not mutually exclusive; a language can be listed in multiple groupings.
Agent-oriented programming allows the developer to build, extend and usesoftware agents, which are abstractions of objects that can message other agents.
Array programming(also termedvectorormultidimensional) languages generalize operations on scalars to apply transparently tovectors,matrices, andhigher-dimensional arrays.
Aspect-oriented programming enables developers to add new functionality to code, known as "advice", without modifying that code itself; rather, it uses apointcutto implement the advice into code blocks.
Assembly languagesdirectly correspond to amachine language(seebelow), so machine code instructions appear in a form understandable by humans, although there may not be a one-to-one mapping between an individual statement and an individual instruction. Assembly languages let programmers use symbolic addresses, which theassemblerconverts to absolute orrelocatableaddresses. Most assemblers also supportmacrosandsymbolic constants.
Anauthoring languageis a programming language designed for use by a non-computer expert to easily create tutorials, websites, and other interactive computer programs.
Command-line interface (CLI) languages are also called batch languages or job control languages. Examples:
These are languages typically processed bycompilers, though theoretically any language can be compiled or interpreted.
Aconcatenative programming languageis apoint-freecomputerprogramming languagein which all expressions denotefunctions, and thejuxtapositionofexpressionsdenotesfunction composition.
Message passinglanguages provide language constructs forconcurrency. The predominant paradigm for concurrency in mainstream languages such asJavaisshared memoryconcurrency. Concurrent languages that make use of message passing have generally been inspired by process calculi such ascommunicating sequential processes(CSP) or theπ-calculus.
Aconstraint programminglanguage is adeclarative programminglanguage where relationships between variables are expressed asconstraints. Execution proceeds by attempting to find values for the variables which satisfy all declared constraints.
Acurly bracketorcurly bracelanguage has syntax that defines a block as the statements betweencurly brackets, a.k.a. braces,{}. This syntax originated withBCPL(1966), and was popularized byC. Many curly bracket languagesdescend from or are strongly influenced by C. Examples:
Dataflow programminglanguages rely on a (usually visual) representation of the flow of data to specify the program. Frequently used for reacting to discrete events or for processing streams of data. Examples of dataflow languages include:
Data-oriented languages provide powerful ways of searching and manipulating the relations that have been described as entity relationship tables which map one set of things into other sets.[citation needed]Examples of data-oriented languages include:
Decision tablescan be used as an aid to clarifying the logic before writing a program in any language, but in the 1960s a number of languages were developed where the main logic is expressed directly in the form of a decision table, including:
Declarative languagesexpress the logic of a computation without describing its control flow in detail.Declarative programmingstands in contrast toimperative programmingvia imperative programming languages, where control flow is specified by serial orders (imperatives). (Pure)functionalandlogic-basedprogramming languages are also declarative, and constitute the major subcategories of the declarative category. This section lists additional examples not in those subcategories.
Source embeddable languages embed small pieces of executable code inside a piece of free-form text, often a web page.
Client-side embedded languages are limited by the abilities of the browser or intended client. They aim to provide dynamism to web pages without the need to recontact the server.
Server-side embedded languages are much more flexible, since almost any language can be built into a server. The aim of having fragments of server-side code embedded in a web page is to generate additional markup dynamically; the code itself disappears when the page is served, to be replaced by its output.
The above examples are particularly dedicated to this purpose. A large number of other languages, such asErlang,Scala,Perl,RingandRubycan be adapted (for instance, by being made intoApachemodules).
A wide variety of dynamic or scripting languages can be embedded in compiled executable code. Basically, object code for the language'sinterpreterneeds to be linked into the executable. Source code fragments for the embedded language can then be passed to an evaluation function as strings. Application control languages can be implemented this way, if the source code is input by the user. Languages with small interpreters are preferred.
Languages developed primarily for the purpose of teaching and learning of programming.
Anesoteric programming languageis a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke.
Extension programming languagesare languages embedded into another program and used to harness its features in extension scripts.
Fourth-generation programming languagesarehigh-level programming languagesbuilt arounddatabasesystems. They are generally used in commercial environments.
Functional programminglanguages define programs and subroutines as mathematical functions and treat them as first-class. Many so-called functional languages are "impure", containing imperative features. Many functional languages are tied to mathematical calculation tools. Functional languages include:
In electronics, ahardware description language(HDL) is a specialized computer language used to describe the structure, design, and operation of electronic circuits, and most commonly, digital logic circuits. The two most widely used and well-supported HDL varieties used in industry areVerilogandVHDL. Hardware description languages include:
Imperative programming languages may be multi-paradigm and appear in other classifications. Here is a list of programming languages that follow theimperative paradigm:
Interactive mode languages, also known asREPLs (read–eval–print loops), act as a kind of shell: expressions or statements can be entered one at a time, and the result of their evaluation is seen immediately.
Interpreted languagesare programming languages in which programs may be executed from source code form, by an interpreter. Theoretically, any language can be compiled or interpreted, so the terminterpreted languagegenerally refers to languages that are usually interpreted rather than compiled.
Iterative languages are built around, or offer,generators.
Garbage Collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory that was allocated by the program but is no longer used.
Some programming languages without the inherent ability to manually manage memory, likeCython,[25]Swift,[c]andScala[26](Scala Native only), are able to import or call functions likemallocandfreefromCthrough aforeign function interface.
List-based languages are a type ofdata-structured languagethat are based on thelistdata structure.
Little languages[29]serve a specialized problem domain.
Logic-basedlanguages specify a set of attributes that a solution must have, rather than a set of steps to obtain a solution.
Notable languages following thisprogramming paradigminclude:
Machine languagesare directly executable by a computer's CPU. They are typically formulated as bit patterns, usually represented inoctalorhexadecimal. Each bit pattern causes the circuits in the CPU to execute one of the fundamental operations of the hardware. The activation of specific electrical inputs (e.g., CPU package pins for microprocessors), and logical settings for CPU state values, control the processor's computation. Individual machine languages are specific to a family of processors; machine-language code for one family of processors cannot run directly on processors in another family unless the processors in question have additional hardware to support it (for example, DEC VAX processors included a PDP-11 compatibility mode). They are (essentially) always defined by the CPU developer, not by 3rd parties.[e]The symbolic version, the processor'sassembly language, is also defined by the developer, in most cases. Some commonly used machine codeinstruction setsare:
Macrolanguages transform one source code file into another. A "macro" is essentially a short piece of text that expands into a longer one (not to be confused withhygienic macros), possibly with parameter substitution. They are often used topreprocesssource code. Preprocessors can also supply facilities likefile inclusion.
Macro languages may be restricted to acting on specially labeled code regions (pre-fixed with a#in the case of the C preprocessor). Alternatively, they may not, but in this case it is still often undesirable to (for instance) expand a macro embedded in astring literal, so they still need a rudimentary awareness of syntax. That being the case, they are often still applicable to more than one language. Contrast with source-embeddable languages likePHP, which are fully featured.
Scripting languagessuch asTclandECMAScript(ActionScript,ECMAScript for XML,JavaScript,JScript) have been embedded into applications. These are sometimes called "macro languages", although in a somewhat different sense to textual-substitution macros likem4.
Metaprogrammingis the writing of programs that write or manipulate other programs, including themselves, as their data or that do part of the work that is otherwise done atrun timeduringcompile time. In many cases, this allows programmers to get more done in the same amount of time as they would take to write all the code manually.
Multiparadigm languagessupport more than oneprogramming paradigm. They allow aprogramto use more than oneprogrammingstyle. The goal is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way.
Several general-purpose programming languages, such asCandPython, are also used for technical computing; this list focuses on languages used almost exclusively for technical computing.
Class-basedobject-oriented programminglanguages supportobjectsdefined by their class. Class definitions include member data.Message passingis a key concept, if not the main concept, in object-oriented languages.
Polymorphic functions parameterized by the class of some of their arguments are typically calledmethods. In languages withsingle dispatch, classes typically also include method definitions. In languages withmultiple dispatch, methods are defined bygeneric functions. There are exceptions wheresingle dispatchmethods aregeneric functions(e.g.Bigloo's object system).
Prototype-based languagesare object-oriented languages where the distinction between classes and instances has been removed:
Off-side rulelanguages denote blocks of code by theirindentation.
Procedural programminglanguages are based on the concept of the unit and scope (the data viewing range) of an executable code statement. A procedural program is composed of one or more units or modules, either user coded or provided in a code library; each module is composed of one or more procedures, also called a function, routine, subroutine, or method, depending on the language. Examples of procedural languages include:
Reflective programminglanguages let programs examine and possibly modify their high-level structure at runtime or compile-time. This is most common in high-level virtual machine programming languages likeSmalltalk, and less common in lower-level programming languages likeC. Languages and platforms supporting reflection:
Rule-based languages instantiate rules when activated by conditions in a set of data. Of all possible activations, some set is selected and the statements belonging to those rules execute. Rule-based languages include:[citation needed]
Stack-based languages are a type ofdata-structured languagethat are based on thestackdata structure.
Synchronous programming languagesare optimized for programming reactive systems, systems that are often interrupted and must respond quickly. Many such systems are also calledrealtime systems, and are used often inembedded systems.
Examples:
Ashading languageis a graphics programming language adapted to programming shader effects. Such language forms usually consist of special data types, like "color" and "normal". Due to the variety of target markets for 3D computer graphics, different shading languages have been developed.
They provide both higher hardware abstraction and a more flexible programming model than previous paradigms which hardcoded transformation and shading equations. This gives the programmer greater control over the rendering process and delivers richer content at lower overhead.
Shading languages used in offline rendering produce maximum image quality. Processing such shaders is time-consuming. The computational power required can be expensive because of their ability to produce photorealistic results.
These languages assist with generatinglexical analyzersandparsersforcontext-free grammars.
System programming languagesare for low-level tasks like memory management or task management. A system programming language usually refers to a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software.
System software is computer software designed to operate and control the computer hardware, and to provide a platform for running application software. System software includes software categories such as operating systems, utility software, device drivers, compilers, and linkers. Examples of system languages include:
Transformation languagesserve the purpose of transforming (translating) source code specified in a certain formal language into a defined destination format code. It is most commonly used in intermediate components of more complex super-systems in order to adopt internal results for input into a succeeding processing routine.
Visual programming languageslet users specify programs in a two-(or more)-dimensional way, instead of as one-dimensional text strings, via graphic layouts of various types. Somedataflow programminglanguages are also visual languages.
Computer scientistNiklaus Wirthdesigned and implemented several influential languages.
These are languages based on or that operate onXML.
|
https://en.wikipedia.org/wiki/List_of_programming_languages_by_type#Numerical_analysis
|
Anevil maid attackis an attack on an unattended device, in which an attacker with physical access alters it in some undetectable way so that they can later access the device, or the data on it.
The name refers to the scenario where amaidcould subvert a device left unattended in a hotel room – but the concept itself also applies to situations such as a device being intercepted while in transit, or taken away temporarily by airport or law enforcement personnel.
In a 2009 blog post, security analystJoanna Rutkowskacoined the term "Evil Maid Attack" due to hotel rooms being a common place where devices are left unattended.[1][2]The post detailed a method for compromising the firmware on an unattended computer via an external USB flash drive – and therefore bypassingTrueCryptdisk encryption.[2]
D. Defreez, a computer security professional, first mentioned the possibility of an evil maid attack on Android smartphones in 2011.[1]He talked about the WhisperCore Android distribution and its ability to provide disk encryption for Androids.[1]
In 2007, former U.S. Commerce SecretaryCarlos Gutierrezwas allegedly targeted by an evil maid attack during a business trip to China.[3]He left his computer unattended during a trade talk in Beijing, and he suspected that his device had been compromised.[3]Although the allegations have yet to be confirmed or denied, the incident caused the U.S. government to be more wary of physical attacks.[3]
In 2009,SymantecCTO Mark Bregman was advised by several U.S. agencies to leave his devices in the U.S. before travelling to China.[4]He was instructed to buy new ones before leaving and dispose of them when he returned so that any physical attempts to retrieve data would be ineffective.[4]
The attack begins when the victim leaves their device unattended.[5]The attacker can then proceed to tamper with the system. If the victim's device does not have password protection or authentication, an intruder can turn on the computer and immediately access the victim's information.[6]However, if the device is password protected, as with full diskencryption, the firmware of the device needs to be compromised, usually done with an external drive.[6]The compromised firmware then provides the victim with a fake password prompt identical to the original.[6]Once the password is input, the compromised firmware sends the password to the attacker and removes itself after a reboot.[6]In order to successfully complete the attack, the attacker must return to the device once it has been unattended a second time to steal the now-accessible data.[5][7]
Another method of attack is through aDMA attackin which an attacker accesses the victim's information through hardware devices that connect directly to the physical address space.[6]The attacker simply needs to connect to the hardware device in order to access the information.
An evil maid attack can also be done by replacing the victim's device with an identical device.[1]If the original device has abootloaderpassword, then the attacker only needs to acquire a device with an identical bootloader password input screen.[1]If the device has alock screen, however, the process becomes more difficult as the attacker must acquire the background picture to put on the lock screen of the mimicking device.[1]In either case, when the victim inputs their password on the false device, the device sends the password to the attacker, who is in possession of the original device.[1]The attacker can then access the victim's data.[1]
LegacyBIOSis considered insecure against evil maid attacks.[8]Its architecture is old, updates andOption ROMsareunsigned, and configuration is unprotected.[8]Additionally, it does not supportsecure boot.[8]These vulnerabilities allow an attacker to boot from an external drive and compromise the firmware.[8]The compromised firmware can then be configured tosend keystrokesto the attacker remotely.[8]
Unified Extensible Firmware Interface(UEFI) provides many necessary features for mitigating evil maid attacks.[8]For example, it offers a framework for secure boot, authenticated variables at boot-time, andTPMinitialization security.[8]Despite these available security measures, platform manufacturers are not obligated to use them.[8]Thus, security issues may arise when these unused features allow an attacker to exploit the device.[8]
Manyfull disk encryptionsystems, such as TrueCrypt andPGP Whole Disk Encryption, are susceptible to evil maid attacks due to their inability to authenticate themselves to the user.[9]An attacker can still modify disk contents despite the device being powered off and encrypted.[9]The attacker can modify the encryption system's loader codes to steal passwords from the victim.[9]
The ability to create a communication channel between the bootloader and the operating system, in order to remotely steal the password for a disk protected byFileVault2, has also been explored.[10]On a macOS system, this attack has additional implications due to "password forwarding" technology, in which a user's account password also serves as the FileVault password, enabling an additional attack surface through privilege escalation.
In 2019 a vulnerability named "Thunderclap" in IntelThunderboltports found on many PCs was announced which could allow a rogue actor to gain access to the system viadirect memory access(DMA). This is possible despite use of an input/outputmemory management unit(IOMMU).[11][12]This vulnerability was largely patched by vendors. This was followed in 2020 by "Thunderspy" which is believed to be unpatchable and allows similar exploitation of DMA to gain total access to the system bypassing all security features.[13]
Any unattended device can be vulnerable to a network evil maid attack.[1]If the attacker knows the victim's device well enough, they can replace the victim's device with an identical model with a password-stealing mechanism.[1]Thus, when the victim inputs their password, the attacker will instantly be notified of it and be able to access the stolen device's information.[1]
One approach is to detect that someone is close to, or handling the unattended device.
Proximity alarms, motion detector alarms, and wireless cameras can be used to alert the victim when an attacker is near their device, thereby nullifying the surprise factor of an evil maid attack.[14]TheHaven Android appwas created in 2017 byEdward Snowdento do such monitoring, and transmit the results to the user's smartphone.[15]
In the absence of the above,tamper-evident technologyof various kinds can be used to detect whether the device has been taken apart – including the low-cost solution of putting glitter nail polish over the screw holes.[16]
After an attack has been suspected, the victim can have their device checked to see if any malware was installed, but this is challenging. Suggested approaches are checking the hashes of selected disk sectors and partitions.[2]
If the device is under surveillance at all times, an attacker cannot perform an evil maid attack.[14]If left unattended, the device may also be placed inside a lockbox so that an attacker will not have physical access to it.[14]However, there will be situations, such as a device being taken away temporarily by airport or law enforcement personnel where this is not practical.
Basic security measures such as having the latest up-to-date firmware and shutting down the device before leaving it unattended prevent an attack from exploiting vulnerabilities in legacy architecture and allowing external devices into open ports, respectively.[5]
CPU-based disk encryption systems, such asTRESORand Loop-Amnesia, prevent data from being vulnerable to a DMA attack by ensuring it does not leak into system memory.[17]
TPM-based secure boothas been shown to mitigate evil maid attacks by authenticating the device to the user.[18]It does this by unlocking itself only if the correct password is given by the user and if it measures that no unauthorized code has been executed on the device.[18]These measurements are done by root of trust systems, such as Microsoft'sBitLockerand Intel's TXT technology.[9]The Anti Evil Maid program builds upon TPM-based secure boot and further attempts to authenticate the device to the user.[1]
|
https://en.wikipedia.org/wiki/Evil_maid_attack
|
Rouché's theorem, named afterEugène Rouché, states that for any twocomplex-valuedfunctionsfandgholomorphicinside some regionK{\displaystyle K}with closed contour∂K{\displaystyle \partial K}, if|g(z)| < |f(z)|on∂K{\displaystyle \partial K}, thenfandf+ghave the same number of zeros insideK{\displaystyle K}, where each zero is counted as many times as itsmultiplicity. This theorem assumes that the contour∂K{\displaystyle \partial K}is simple, that is, without self-intersections. Rouché's theorem is an easy consequence of a stronger symmetric Rouché's theorem described below.
The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomialz5+3z3+7{\displaystyle z^{5}+3z^{3}+7}has exactly 5 zeros in the disk|z|<2{\displaystyle |z|<2}since|3z3+7|≤31<32=|z5|{\displaystyle |3z^{3}+7|\leq 31<32=|z^{5}|}for every|z|=2{\displaystyle |z|=2}, andz5{\displaystyle z^{5}}, the dominating part, has five zeros in the disk.
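This particular count is easy to confirm numerically. The following sketch is an illustration, not a proof; it uses NumPy to compute the roots of z^5 + 3z^3 + 7 and checks that all of them lie in the disk |z| < 2, as Rouché's theorem predicts:

import numpy as np

# Coefficients of z^5 + 3z^3 + 7 in descending powers, as numpy.roots expects.
coeffs = [1, 0, 3, 0, 0, 7]
roots = np.roots(coeffs)

inside = [r for r in roots if abs(r) < 2]
print(len(inside))                  # 5: every root lies in the disk |z| < 2
print(max(abs(r) for r in roots))   # largest root modulus, strictly less than 2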
It is possible to provide an informal explanation of Rouché's theorem.
LetCbe a closed, simple curve (i.e., not self-intersecting). Leth(z) =f(z) +g(z). Iffandgare both holomorphic on the interior ofC, thenhmust also be holomorphic on the interior ofC. Then, with the conditions imposed above, Rouché's theorem in its original (and not symmetric) form says that

If |f(z)| > |h(z) −f(z)| for everyzonC, thenfandhhave the same number of zeros in the interior ofC.

Notice that the condition |f(z)| > |h(z) −f(z)| means that for anyz, the distance fromf(z) to the origin is larger than the length ofh(z) −f(z); in other words, for each point on the curve traced byf(z), the segment joining it to the origin is longer than the segment joining it to the corresponding point onh(z). Informally we can say that the curvef(z) is always closer to the curveh(z) than it is to the origin.
The previous paragraph shows thath(z) must wind around the origin exactly as many times asf(z). The index of both curves around zero is therefore the same, so by theargument principle,f(z)andh(z)must have the same number of zeros insideC.
One popular, informal way to summarize this argument is as follows: If a person were to walk a dog on a leash around and around a tree, such that the distance between the person and the tree is always greater than the length of the leash, then the person and the dog go around the tree the same number of times.
Consider the polynomialz2+2az+b2{\displaystyle z^{2}+2az+b^{2}}witha>b>0{\displaystyle a>b>0}. By thequadratic formulait has two zeros at−a±a2−b2{\displaystyle -a\pm {\sqrt {a^{2}-b^{2}}}}. Rouché's theorem can be used to obtain some hint about their positions. Since|z2+b2|≤2b2<2a|z|for all|z|=b,{\displaystyle |z^{2}+b^{2}|\leq 2b^{2}<2a|z|{\text{ for all }}|z|=b,}
Rouché's theorem says that the polynomial has exactly one zero inside the disk|z|<b{\displaystyle |z|<b}. Since−a−a2−b2{\displaystyle -a-{\sqrt {a^{2}-b^{2}}}}is clearly outside the disk, we conclude that the zero is−a+a2−b2{\displaystyle -a+{\sqrt {a^{2}-b^{2}}}}.
In general, consider a polynomialf(z)=anzn+⋯+a0{\displaystyle f(z)=a_{n}z^{n}+\cdots +a_{0}}. If|ak|rk>∑j≠k|aj|rj{\displaystyle |a_{k}|r^{k}>\sum _{j\neq k}|a_{j}|r^{j}}for somer>0{\displaystyle r>0}and somek∈{0,…,n}{\displaystyle k\in \{0,\dots ,n\}}, then by Rouché's theorem the polynomial has exactlyk{\displaystyle k}roots (counted with multiplicity) insideB(0,r){\displaystyle B(0,r)}.
This sort of argument can be useful in locating residues when one applies Cauchy'sresidue theorem.
Rouché's theorem can also be used to give a short proof of thefundamental theorem of algebra. Letp(z)=a0+a1z+a2z2+⋯+anzn,an≠0{\displaystyle p(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{n}z^{n},\quad a_{n}\neq 0}and chooseR>0{\displaystyle R>0}so large that:|a0+a1z+⋯+an−1zn−1|≤∑j=0n−1|aj|Rj<|an|Rn=|anzn|for|z|=R.{\displaystyle |a_{0}+a_{1}z+\cdots +a_{n-1}z^{n-1}|\leq \sum _{j=0}^{n-1}|a_{j}|R^{j}<|a_{n}|R^{n}=|a_{n}z^{n}|{\text{ for }}|z|=R.}Sinceanzn{\displaystyle a_{n}z^{n}}hasn{\displaystyle n}zeros inside the disk|z|<R{\displaystyle |z|<R}(becauseR>0{\displaystyle R>0}), it follows from Rouché's theorem thatp{\displaystyle p}also has the same number of zeros inside the disk.
One advantage of this proof over the others is that it shows not only that a polynomial must have a zero but the number of its zeros is equal to its degree (counting, as usual, multiplicity).
Another use of Rouché's theorem is to prove theopen mapping theoremfor analytic functions. We refer to the article for the proof.
A stronger version of Rouché's theorem was published byTheodor Estermannin 1962.[1]It states: letK⊂G{\displaystyle K\subset G}be a bounded region with continuous boundary∂K{\displaystyle \partial K}. Two holomorphic functionsf,g∈H(G){\displaystyle f,\,g\in {\mathcal {H}}(G)}have the same number of roots (counting multiplicity) inK{\displaystyle K}, if the strict inequality|f(z)−g(z)|<|f(z)|+|g(z)|(z∈∂K){\displaystyle |f(z)-g(z)|<|f(z)|+|g(z)|\qquad \left(z\in \partial K\right)}holds on the boundary∂K.{\displaystyle \partial K.}
The original version of Rouché's theorem then follows from this symmetric version, applied to the pair of functionsf+g,f{\displaystyle f+g,f}: the hypothesis|g(z)|<|f(z)|{\displaystyle |g(z)|<|f(z)|}on∂K{\displaystyle \partial K}gives|(f(z)+g(z))−f(z)|=|g(z)|<|f(z)|≤|f(z)|+|f(z)+g(z)|{\displaystyle |(f(z)+g(z))-f(z)|=|g(z)|<|f(z)|\leq |f(z)|+|f(z)+g(z)|}, using the trivial inequality|f(z)+g(z)|≥0{\displaystyle |f(z)+g(z)|\geq 0}(in fact this inequality is strict, sincef(z)+g(z)=0{\displaystyle f(z)+g(z)=0}for somez∈∂K{\displaystyle z\in \partial K}would imply|g(z)|=|f(z)|{\displaystyle |g(z)|=|f(z)|}, contradicting the hypothesis).
The statement can be understood intuitively as follows.
By considering−g{\displaystyle -g}in place ofg{\displaystyle g}, the condition can be rewritten as|f(z)+g(z)|<|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|<|f(z)|+|g(z)|}forz∈∂K{\displaystyle z\in \partial K}.
Since|f(z)+g(z)|≤|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|\leq |f(z)|+|g(z)|}always holds by the triangle inequality, this is equivalent to saying that|f(z)+g(z)|≠|f(z)|+|g(z)|{\displaystyle |f(z)+g(z)|\neq |f(z)|+|g(z)|}on∂K{\displaystyle \partial K}, which in turn means that forz∈∂K{\displaystyle z\in \partial K}the functionsf(z){\displaystyle f(z)}andg(z){\displaystyle g(z)}are non-vanishing andargf(z)≠argg(z){\displaystyle \arg {f(z)}\neq \arg {g(z)}}.
Intuitively, if the values off{\displaystyle f}andg{\displaystyle g}never pass through the origin and never point in the same direction asz{\displaystyle z}circles along∂K{\displaystyle \partial K}, thenf(z){\displaystyle f(z)}andg(z){\displaystyle g(z)}must wind around the origin the same number of times.
LetC:[0,1]→C{\displaystyle C\colon [0,1]\to \mathbb {C} }be a simple closed curve whose image is the boundary∂K{\displaystyle \partial K}. The hypothesis implies thatfhas no roots on∂K{\displaystyle \partial K}, hence by theargument principle, the numberNf(K) of zeros offinKis12πi∮Cf′(z)f(z)dz=12πi∮f∘Cdzz=Indf∘C(0),{\displaystyle {\frac {1}{2\pi i}}\oint _{C}{\frac {f'(z)}{f(z)}}\,dz={\frac {1}{2\pi i}}\oint _{f\circ C}{\frac {dz}{z}}=\mathrm {Ind} _{f\circ C}(0),}i.e., thewinding numberof the closed curvef∘C{\displaystyle f\circ C}around the origin; similarly forg. The hypothesis ensures thatg(z) is not a negative real multiple off(z) for anyz=C(x), thus 0 does not lie on the line segment joiningf(C(x)) tog(C(x)), andHt(x)=(1−t)f(C(x))+tg(C(x)){\displaystyle H_{t}(x)=(1-t)f(C(x))+tg(C(x))}is ahomotopybetween the curvesf∘C{\displaystyle f\circ C}andg∘C{\displaystyle g\circ C}avoiding the origin. The winding number is homotopy-invariant: the functionI(t)=IndHt(0)=12πi∮Htdzz{\displaystyle I(t)=\mathrm {Ind} _{H_{t}}(0)={\frac {1}{2\pi i}}\oint _{H_{t}}{\frac {dz}{z}}}is continuous and integer-valued, hence constant. This showsNf(K)=Indf∘C(0)=Indg∘C(0)=Ng(K).{\displaystyle N_{f}(K)=\mathrm {Ind} _{f\circ C}(0)=\mathrm {Ind} _{g\circ C}(0)=N_{g}(K).}
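The winding-number integral used in this proof can also be approximated numerically. The sketch below, an illustration under the assumption of a circular contour, evaluates (1/2πi)∮ f′(z)/f(z) dz for the earlier example f(z) = z^5 + 3z^3 + 7 on |z| = 2 and recovers the zero count of 5:

import numpy as np

f = lambda z: z**5 + 3 * z**3 + 7
df = lambda z: 5 * z**4 + 9 * z**2

# Sample the circle |z| = 2; f has no zeros on it, so the integrand is well defined.
n = 20000
theta = 2.0 * np.pi * np.arange(n) / n
z = 2.0 * np.exp(1j * theta)

# On the circle, dz = i z dθ, so the contour integral becomes an ordinary integral in θ.
integral = np.sum(df(z) / f(z) * 1j * z) * (2.0 * np.pi / n)
print(round((integral / (2j * np.pi)).real))  # 5 zeros inside |z| < 2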
|
https://en.wikipedia.org/wiki/Rouch%C3%A9%27s_theorem
|
Abeaconis an intentionally conspicuous device designed to attract attention to aspecific location. A common example is thelighthouse, which draws attention to a fixed point that can be used to navigate around obstacles or into port. More modern examples include a variety ofradio beaconsthat can be read onradio direction findersin all weather, andradar transpondersthat appear onradardisplays.
Beacons can also be combined with semaphoric or other indicators to provide importantinformation, such as the status of an airport, by the colour and rotational pattern of itsairport beacon, or of pending weather as indicated on aweather beaconmounted at the top of a tall building or similar site. When used in such fashion, beacons can be considered a form ofoptical telegraphy.
Beacons help guidenavigatorsto their destinations. Types of navigational beacons includeradarreflectors,radio beacons, sonic and visual signals. Visual beacons range from small, single-pile structures to largelighthousesor light stations and can be located on land or on water. Lighted beacons are calledlights; unlighted beacons are calleddaybeacons.Aerodrome beaconsare used to indicate locations of airports and helipads.[1]
In the United States, a series of beacons were constructed across the country in the 1920s and 1930s to help guide pilots deliveringair mail. They were placed about 25 miles apart from each other, and included large concrete arrows with accompanying lights to illuminate them.[2]
Handheld beacons are also employed inaircraft marshalling, and are used by the marshal to deliver instructions to the crew of aircraft as they move around an active airport, heliport or aircraft carrier.[citation needed]
Historically, beacons were fires lit at well-known locations on hills or high places, used either aslighthousesfornavigation at sea, or for signalling over land that enemy troops were approaching, in order to alert defenses. As signals, beacons are an ancient form ofoptical telegraphand were part of arelay league.
Systems of this kind have existed for centuries over much of the world. The ancient Greeks called themphryctoriae, while beacons figure on several occasions on thecolumn of Trajan.
In imperial China, sentinels on and near theGreat Wall of Chinaused a sophisticated system of daytime smoke and nighttime flame to send signals along long chains of beacon towers.[3]
Legend has it thatKing You of Zhouplayed a trick multiple times in order to amuse his often melancholy concubine, ordering beacon towers lit to fool his vassals and soldiers. But when enemies, led by theMarquess of Shen, really arrived at the wall, the towers were lit yet no defenders came, leading to King You's death and the collapse of the Western Zhou dynasty.[3][4][5]China's system of beacon towers, however, was not extant prior to theHan dynasty.
Thucydideswrote that during thePeloponnesian War, thePeloponnesianswho were inCorcyrawere informed by night-time beacon signals of the approach of sixty Athenian vessels fromLefkada.[6]
In the 10th century, during theArab–Byzantine wars, theByzantine Empireused abeacon systemto transmit messages from the border with theAbbasid Caliphate, acrossAnatoliato theimperial palacein the Byzantine capital,Constantinople. It was devised byLeo the Mathematicianfor EmperorTheophilos, but either abolished or radically curtailed by Theophilos' son and successor,Michael III.[7]Beacons were later used in Greece as well, while the surviving parts of the beacon system in Anatolia seem to have been reactivated in the 12th century by EmperorManuel I Komnenos.[7]
In the Nordic countries,hill fortsand beacon networks were important for warning against invasions.[8]In Sweden and Finland, these beacons, known asvårdkasarorböte, formed an extensive coastal warning system from the Late Iron Age and through the Middle Ages. Beacons were strategically placed on high ground for visibility, constructed fromtar-rich wood to ensure bright flames. They were mentioned in medieval laws likeUpplandslagenand described by Swedish writerOlaus Magnusin 1555 as tools for mobilising armed defenders during crises.[8]In Finland, similar beacons calledvainovalkeat("persecution fires") orvartiotulet("guard fires") warned settlements of raids.[9]
InWales, theBrecon Beaconswere named for beacons used to warn of approaching English raiders. In England, the most famous examples are the beacons used inElizabethan Englandto warn of the approachingSpanish Armada. Many hills in England were named Beacon Hill after such beacons. In England the authority to erect beacons originally lay with the King and later was delegated to theLord High Admiral. The money due for the maintenance of beacons was calledBeaconagiumand was levied by the sheriff of each county.[10]In theScottish borderscountry, a system of beacon fires was at one time established to warn of incursions by the English.Humeand Eggerstone castles and Soltra Edge were part of this network.[11]
In Spain, the border ofGranadain the territory of theCrown of Castilehad a complex beacon network to warn against Moorish raiders and military campaigns.[12]Due to the progressive advance of the borders throughout the process of the Reconquista, the entire Spanish geography is full of defensive lines of castles, towers and fortifications, visually connected to each other, which served as fortified beacons. Some examples are the Route of the Vinalopó castles or the distribution of the castles in Jaén.
In later centuries, advancements in technology, such as thetelegraph, rendered beacon systems obsolete for rapid communication.[13]The use of such beacons transitioned from practical communication to symbolic and ceremonial roles,[14]where the lighting of beacons was repurposed to mark significant national events.
Beacons were lit across the United Kingdom to celebrate Queen Victoria'sDiamond Jubileein 1897, Queen Elizabeth II'sPlatinum Jubilee in 2022,[15]and to commemorate events such as the 70th anniversary ofVE Day, and the 80th anniversary of theD-Day landingsin 2024.[14]
South Korea maintains a daily ceremonial beacon lighting atNamsanBeacon Mound inSeoul, where visitors witness a reenactment of the traditionalbongsuceremony, which historically signaled emergencies.[16]
Infraredstrobes and other infrared beacons have increasingly been used in modern combat when operating at night as they can only be seen throughnight vision goggles. As a result, they are often used to mark friendly positions as a form ofIFFto prevent friendly fire and improve coordination. Soldiers will typically affix them to theirhelmetsor other gear so they are easily visible to others using night vision including other infantry, ground vehicles, and aerial platforms (drones, helicopters, planes, etc.).[17]
Passive markers include IR patches, which reflect infrared light, andchemlights. The earliest such beacons were often IR chemlights taped to helmets.
As time went on, more sophisticated options began to emerge: electronically powered infrared strobes with specific mounting solutions for attaching to helmets or load-bearing equipment. These devices may have settings that allow either a constant IR output or flashing (strobed) IR light, hence the name.[18]
Advancements in near-peer technology, however, present risk since if friendly units can see the strobe with night vision so could enemies with night vision capabilities. As a result, some in the American military have stressed that efforts should be made to improve training regarding light discipline (IR and visible) and other means of reducing a unit's visible signature.[17]
Vehicular beacons are rotating or flashing lights affixed to the top of a vehicle to attract the attention of surrounding vehicles and pedestrians.Emergency vehiclessuch as fire engines, ambulances, police cars, tow trucks, construction vehicles, and snow-removal vehicles carry beacon lights.
The color of the lamps varies by jurisdiction; typical colors are blue and/or red for police, fire, and medical-emergency vehicles; amber for hazards (slow-moving vehicles, wide loads, tow trucks, security personnel, construction vehicles, etc.); green for volunteer firefighters or for medical personnel, and violet for funerary vehicles. Beacons may be constructed withhalogen bulbssimilar to those used in vehicleheadlamps, xenonflashtubes, orLEDs.[19]Incandescent and xenon light sources require the vehicle's engine to continue running to ensure that the battery is not depleted when the lights are used for a prolonged period. The low power consumption of LEDs allows the vehicle's engine to remain turned off while the lights operate.
Beacons have also allegedly been abused byshipwreckers. An illicit fire at a wrong position would be used to direct a ship againstshoalsorbeaches, so that its cargo could be looted after the ship sank or ran aground. There are, however, no historically substantiated occurrences of such intentional shipwrecking.
In wireless networks, abeaconis a type offramewhich is sent by the access point (or WiFi router) to indicate that it is on.
Bluetooth based beacons periodically send out a data packet and this could be used by software to identify the beacon location. This is typically used byindoor navigation and positioningapplications.[20]
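As an illustration of the kind of payload such beacons broadcast, the Python sketch below parses the widely used iBeacon manufacturer-data layout (16-byte proximity UUID, 2-byte major, 2-byte minor, 1-byte signed calibrated power); the parsing helper and the sample bytes are invented for the example:

import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple iBeacon manufacturer-specific data, a common BLE beacon layout."""
    company_id, beacon_type, length = struct.unpack_from("<HBB", mfg_data, 0)
    if company_id != 0x004C or beacon_type != 0x02 or length != 0x15:
        raise ValueError("not an iBeacon advertisement")
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack_from(">HHb", mfg_data, 20)
    return proximity_uuid, major, minor, tx_power

# Example payload: company ID 0x004C (little-endian), type 0x02, length 0x15,
# then a sample UUID, major = 1, minor = 2, calibrated power = -59 dBm.
sample = bytes.fromhex(
    "4c000215" "f7826da64fa24e988024bc5b71e0893e" "0001" "0002" "c5"
)
print(parse_ibeacon(sample))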
Beaconingis the process that allows a network to self-repair network problems. The stations on the network notify the other stations on the ring when they are not receiving the transmissions. Beaconing is used in Token ring and FDDI networks.
InAeschylus' tragedyAgamemnon,[21]a chain of eight beacons staffed by so-calledlampadóphoroiinformsClytemnestrainArgos, within a single night's time, thatTroyhas just fallen under her husband King Agamemnon's control, after a famousten-year siege.
InJ. R. R. Tolkien'shigh fantasynovel,The Lord of the Rings, aseries of beaconsalerts the entire realm ofGondorwhen the kingdom is under attack. These beacon posts were staffed by messengers who would carry word of their lighting to eitherRohanorBelfalas.[22]InPeter Jackson'sfilm adaptation of the novel, the beacons serve as a connection between the two realms of Rohan and Gondor, alerting one another directly when they require military aid, as opposed to relying on messengers as in the novel.
The Beaconwas an influential Caribbean magazine published in Trinidad in the 1930s.New Beacon Books, the first Caribbean publishing house in England, founded in London in 1966, was named after theBeaconjournal.[23]
Beacons are sometimes used in retail to send digital coupons or invitations to customers passing by.[24][25]
An infrared beacon (IR beacon) transmits a modulated light beam in the infrared spectrum, which can be identified easily and positively. A line of sight clear of obstacles between the transmitter and the receiver is essential. IR beacons have a number of applications inroboticsand inCombat Identification(CID).
Infrared beacons are the key infrastructure for the Universal Traffic Management System (UTMS) in Japan. They perform two-way communication with travelling vehicles based on highly directional infrared communication technology and have a vehicle detecting capability to provide more accurate traffic information.[26]
A sonar beacon is an underwater device which transmits sonic or ultrasonic signals for the purpose of providing bearing information. The most common type is that of a rugged watertight sonar transmitter attached to a submarine and capable of operating independently of the electrical system of the boat. It can be used in cases of emergencies to guide salvage vessels to the location of a disabled submarine.[27]
|
https://en.wikipedia.org/wiki/Beacons
|
Dependency hellis acolloquial termfor the frustration of some software users who have installedsoftware packageswhich havedependencieson specificversionsof other software packages.[1]
The dependency issue arises when several packages have dependencies on the samesharedpackages or libraries, but they depend on different and incompatible versions of the shared packages. If the shared package or library can only be installed in a single version, the user may need to address the problem by obtaining newer or older versions of the dependent packages. This, in turn, may break other dependencies and push the problem to another set of packages.
Dependency hell takes several forms:
On specificcomputing platforms, "dependency hell" often goes by a platform-specific name, generally derived from the name of the components involved (for example, "DLL hell" on Microsoft Windows).
|
https://en.wikipedia.org/wiki/Dependency_hell
|
Stratisis auser-spaceconfigurationdaemonthat configures and monitors existing components fromLinux's underlying storage components oflogical volume management(LVM) andXFSfilesystem viaD-Bus.
Stratis is not a user-levelfilesystemlike theFilesystem in Userspace(FUSE) system. The Stratis configuration daemon was originally developed byRed Hatto have feature parity withZFSandBtrfs. The hope was that, because the Stratis configuration daemon runs in userland, it would reach maturity more quickly than the years of kernel-level development required by the ZFS and Btrfs filesystems.[2][3]It is built upon the enterprise-tested components LVM and XFS, with over a decade of enterprise deployments, and on the lessons learned from System Storage Manager inRed Hat Enterprise Linux7.[4]
Stratis provides ZFS/Btrfs-style features by integrating layers of existing technology: Linux'sdevice mappersubsystem, and the XFS filesystem. Thestratisddaemon manages collections of block devices, and provides a D-BusAPI. Thestratis-cliDNFpackageprovides a command-line toolstratis, which itself uses the D-Bus API to communicate withstratisd.
|
https://en.wikipedia.org/wiki/Stratis_(configuration_daemon)
|
Simicsis afull-system simulatoror virtual platform used to run unchanged production binaries of the target hardware. Simics was originally developed by theSwedish Institute of Computer Science(SICS), and then spun off toVirtutechfor commercial development in 1998. Virtutech was acquired byIntelin 2010. Currently, Simics is provided byIntelin a public release[1]and sold commercially byWind River Systems, which was in the past a subsidiary of Intel.
Simics contains bothinstruction set simulatorsand hardware models, and is or has been used to simulate systems such asAlpha,ARM(32- and 64-bit),IA-64,MIPS(32- and 64-bit),MSP430,PowerPC(32- and 64-bit),RISC-V(32- and 64-bit),SPARC-V8 and V9, andx86andx86-64CPUs.
Many different operating systems have been run on various simulated virtual platforms, includingLinux,MS-DOS,Windows,VxWorks,OSE,Solaris,FreeBSD,QNX,RTEMS,UEFI, andZephyr.
TheNetBSDAMD64 port was initially developed using Simics before the public release of the chip.[2]The purpose of simulation in Simics is often to develop software for a particular type of hardware without requiring access to that precise hardware, using Simics as avirtual platform. This can be applied both to pre-release and pre-silicon software development for future hardware, as well as to existing hardware. Intel uses Simics to provide its ecosystem with access to future platforms months or years ahead of the hardware launch.[3]
The current version of Simics is 6 which was released publicly in 2019.[4][5]Simics runs on 64-bit x86-64 machines runningMicrosoft WindowsandLinux(32-bit support was dropped with the release of Simics 5, since 64-bit provides significant performance advantages and is universally available on current hardware). The previous version, Simics 5, was released in 2015.[6]
Simics has the ability to execute a system in forward and reverse direction.[7]Reverse debuggingcan illuminate how an exceptional condition orbugoccurred. When executing an OS such asLinuxin reverse using Simics, previously deleted files reappear when the deletion point is passed in reverse and scrolling and other graphical display and console updates occur backwards as well.
Simics is built for high performance execution of full-system models, and uses bothbinary translationandhardware-assisted virtualizationto increase simulation speed. It is natively multithreaded and can simulate multiple target (or guest) processors and boards using multiple host threads. It has been used to run simulations containing hundreds of target processors.
Thisemulation-related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Simics
|
FreeOTPis afree and open-sourceauthenticatorbyRedHat. It implementsmulti-factor authenticationusingHOTPandTOTP.Tokenscan be added by scanning aQR codeor by manually entering the token configuration. It is licensed under theApache 2.0 license, and supportsAndroidandiOS.[4][5][6]
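As a rough sketch of what such an authenticator computes, the Python snippet below implements the TOTP construction (RFC 6238, built on HOTP/RFC 4226) with the common defaults of HMAC-SHA1, 30-second time steps, and 6 digits; the Base32 secret is a made-up example of the kind delivered in a scanned QR code:

import base64, hashlib, hmac, struct, time

def totp(secret_base32, digits=6, period=30, t=None):
    """Time-based one-time password (RFC 6238) built on HOTP (RFC 4226)."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # the displayed code changes every 30 seconds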
This mobile software article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/FreeOTP
|