In game theory, an extensive-form game is a specification of a game allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the (possibly imperfect) information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Extensive-form representations differ from normal form in that they provide a more complete description of the game in question, whereas normal form simply boils down the game into a payoff matrix.
Some authors, particularly in introductory textbooks, initially define the extensive-form game as being just a game tree with payoffs (no imperfect or incomplete information), and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as (ultimately) constructed here. This general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from Hart (1992), an n-player extensive-form game thus consists of the following:
A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. At any rational player's node, the player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge except (in general) the player doesn't know which one is being followed. (An outside observer knowing every other player's choices up to that point, and the realization of Nature's moves, can determine the edge precisely.) A pure strategy for a player thus consists of a selection—choosing precisely one class of outgoing edges for every information set (of his). In a game of perfect information, the information sets are singletons. It's less evident how payoffs should be interpreted in games with Chance nodes. It is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome; this assumption entails that every rational player will evaluate an a priori random outcome by its expected utility.
The above presentation, while precisely defining the mathematical structure over which the game is played, elides, however, the more technical discussion of formalizing statements about how the game is played, like "a player cannot distinguish between nodes in the same information set when making a decision". These can be made precise using epistemic modal logic; see Shoham & Leyton-Brown (2009, chpt. 13) for details.
A perfect-information two-player game over a game tree (as defined in combinatorial game theory and artificial intelligence) can be represented as an extensive-form game with outcomes (i.e. win, lose, or draw). Examples of such games include tic-tac-toe, chess, and infinite chess.[1][2] A game over an expectminimax tree, like that of backgammon, has no imperfect information (all information sets are singletons) but has moves of chance. For example, poker has both moves of chance (the cards being dealt) and imperfect information (the cards secretly held by other players). (Binmore 2007, chpt. 2)
A complete extensive-form representation specifies:
The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players (e.g. 2,1 represents a payoff of 2 to player 1 and a payoff of 1 to player 2). The label on each edge of the graph is the name of the action that edge represents.
The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D'. The payoffs are as specified in the tree. There are four outcomes represented by the four terminal nodes of the tree: (U,U'), (U,D'), (D,U') and (D,D'). The payoffs associated with each outcome respectively are as follows: (0,0), (2,1), (1,2) and (3,1).
If player 1 plays D, player 2 will play U' to maximise their payoff and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises their payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so will play U and player 2 will play D'. This is the subgame perfect equilibrium.
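This backward-induction reasoning is mechanical, so it can be checked in code. The following is a minimal Python sketch (the tree encoding and function names are ours, not from the article) that solves the example game:

```python
# Backward induction on the example game tree.
# A decision node is (player, {action: subtree}); a leaf is a payoff tuple.
tree = ("1", {
    "U": ("2", {"U'": (0, 0), "D'": (2, 1)}),
    "D": ("2", {"U'": (1, 2), "D'": (3, 1)}),
})

def backward_induction(node):
    """Return (payoffs, play) when every player maximises their own payoff."""
    if isinstance(node[1], dict):          # decision node
        player, actions = node
        i = int(player) - 1                # index of the mover's payoff
        best = None
        for action, child in actions.items():
            payoffs, play = backward_induction(child)
            if best is None or payoffs[i] > best[0][i]:
                best = (payoffs, [action] + play)
        return best
    return node, []                        # leaf: payoffs, no further moves

print(backward_induction(tree))            # ((2, 1), ['U', "D'"])
```

The printed play (U, D') with payoff (2,1) matches the subgame perfect equilibrium described above.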
An advantage of representing the game in this way is that it is clear what the order of play is. The tree shows clearly that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this. One player does not always observe the choice of another (for example, moves may be simultaneous or a move may be hidden). An information set is a set of decision nodes such that:
In extensive form, an information set is indicated by a dotted line connecting all nodes in that set or sometimes by a loop drawn around all the nodes in that set.
If a game has an information set with more than one member, that game is said to have imperfect information. A game with perfect information is such that at any stage of the game, every player knows exactly what has taken place earlier in the game; i.e. every information set is a singleton set.[1][2] Any game without perfect information has imperfect information.
The game on the right is the same as the above game except that player 2 does not know what player 1 does when they come to play. The first game described has perfect information; the game on the right does not. If both players are rational and both know that both players are rational and everything that is known by any player is known to be known by every player (i.e. player 1 knows player 2 knows that player 1 is rational and player 2 knows this, etc. ad infinitum), play in the first game will be as follows: player 1 knows that if they play U, player 2 will play D' (because for player 2 a payoff of 1 is preferable to a payoff of 0) and so player 1 will receive 2. However, if player 1 plays D, player 2 will play U' (because to player 2 a payoff of 2 is better than a payoff of 1) and player 1 will receive 1. Hence, in the first game, the equilibrium will be (U, D') because player 1 prefers to receive 2 to 1 and so will play U and so player 2 will play D'.
In the second game it is less clear: player 2 cannot observe player 1's move. Player 1 would like to fool player 2 into thinking they have played U when they have actually played D so that player 2 will play D' and player 1 will receive 3. In fact in the second game there is a perfect Bayesian equilibrium where player 1 plays D and player 2 plays U' and player 2 holds the belief that player 1 will definitely play D. In this equilibrium, every strategy is rational given the beliefs held and every belief is consistent with the strategies played. Notice how the imperfection of information changes the outcome of the game.
To more easily solve this game for the Nash equilibrium,[3] it can be converted to the normal form.[4] Given this is a simultaneous/sequential game, player one and player two each have two strategies.[5]
We will have a two by two matrix with a unique payoff for each combination of moves. Using the normal form game, it is now possible to solve the game and identify dominant strategies for both players.
These preferences can be marked within the matrix, and any box where both players have a preference provides a Nash equilibrium. This particular game has a single solution of (D, U') with a payoff of (1,2).
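Checking for pure-strategy Nash equilibria in a small matrix is a simple best-response test. Here is a short Python sketch (illustrative only) for the 2×2 game above:

```python
# Normal form of the imperfect-information game.
# payoffs[(row, col)] = (payoff to player 1, payoff to player 2)
payoffs = {
    ("U", "U'"): (0, 0), ("U", "D'"): (2, 1),
    ("D", "U'"): (1, 2), ("D", "D'"): (3, 1),
}
rows, cols = ("U", "D"), ("U'", "D'")

def is_nash(r, c):
    """No unilateral deviation may strictly improve the deviator's payoff."""
    p1, p2 = payoffs[(r, c)]
    return (all(payoffs[(r2, c)][0] <= p1 for r2 in rows)
            and all(payoffs[(r, c2)][1] <= p2 for c2 in cols))

print([rc for rc in payoffs if is_nash(*rc)])   # [('D', "U'")]
```

The single equilibrium (D, U') with payoff (1,2) agrees with the solution stated above.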
In games with infinite action spaces and imperfect information, non-singleton information sets are represented, if necessary, by inserting a dotted line connecting the (non-nodal) endpoints behind the arc described below or by dashing the arc itself. In the Stackelberg competition described below, if the second player had not observed the first player's move the game would no longer fit the Stackelberg model; it would be Cournot competition.
It may be the case that a player does not know exactly what the payoffs of the game are or of what type their opponents are. This sort of game has incomplete information. In extensive form it is represented as a game with complete but imperfect information using the so-called Harsanyi transformation. This transformation introduces to the game the notion of nature's choice or God's choice. Consider a game consisting of an employer considering whether to hire a job applicant. The job applicant's ability might be one of two things: high or low. Their ability level is random; they either have low ability with probability 1/3 or high ability with probability 2/3. In this case, it is convenient to model nature as another player of sorts who chooses the applicant's ability according to those probabilities. Nature however does not have any payoffs. Nature's choice is represented in the game tree by a non-filled node. Edges coming from a nature's choice node are labelled with the probability of the event it represents occurring.
The game on the left is one of complete information (all the players and payoffs are known to everyone) but of imperfect information (the employer doesn't know what nature's move was). The initial node is in the centre and it is not filled, so nature moves first. Nature selects with the same probability the type of player 1 (which in this game is tantamount to selecting the payoffs in the subgame played), either t1 or t2. Player 1 has distinct information sets for these; i.e. player 1 knows what type they are (this need not be the case). However, player 2 does not observe nature's choice. They do not know the type of player 1; however, in this game they do observe player 1's actions; i.e., apart from nature's move, there is perfect information. Indeed, it is now appropriate to alter the above definition of perfect information: at every stage in the game, every player knows what has been played by the other players. In the case of private information, every player knows what has been played by nature. Information sets are represented as before by broken lines.
In this game, if nature selects t1 as player 1's type, the game played will be like the very first game described, except that player 2 does not know it (and the very fact that this cuts through their information sets disqualifies it from subgame status). There is one separating perfect Bayesian equilibrium; i.e. an equilibrium in which different types do different things.
If both types play the same action (pooling), an equilibrium cannot be sustained. If both play D, player 2 can only form the belief that they are on either node in the information set with probability 1/2 (because this is the chance of seeing either type). Player 2 maximises their payoff by playing D'. However, if they play D', type 2 would prefer to play U. This cannot be an equilibrium. If both types play U, player 2 again forms the belief that they are at either node with probability 1/2. In this case player 2 plays D', but then type 1 prefers to play D.
If type 1 plays U and type 2 plays D, player 2 will play D' whatever action they observe, but then type 1 prefers D. Hence the only equilibrium has type 1 playing D, type 2 playing U, and player 2 playing U' if they observe D and randomising if they observe U. Through their actions, player 1 has signalled their type to player 2.
Formally, a finite game in extensive form is a structure $\Gamma = \langle \mathcal{K}, \mathbf{H}, [(\mathbf{H}_i)_{i\in\mathcal{I}}], \{A(H)\}_{H\in\mathbf{H}}, a, \rho, u \rangle$ where:
For all $H \in \mathbf{H}$ and all $v \in H$, the restriction $a_v : s(v) \rightarrow A(H)$ of $a$ to $s(v)$ is a bijection, with $s(v)$ the set of successor nodes of $v$.
It may be that a player has an infinite number of possible actions to choose from at a particular decision node. The device used to represent this is an arc joining two edges protruding from the decision node in question. If the action space is a continuum between two numbers, the lower and upper delimiting numbers are placed at the bottom and top of the arc respectively, usually with a variable that is used to express the payoffs. The infinite number of decision nodes that could result are represented by a single node placed in the centre of the arc. A similar device is used to represent action spaces that, whilst not infinite, are large enough to prove impractical to represent with an edge for each action.
The tree on the left represents such a game, either with infinite action spaces (any real number between 0 and 5000) or with very large action spaces (perhaps any integer between 0 and 5000). This would be specified elsewhere. Here, it will be supposed that it is the former and, for concreteness, it will be supposed it represents two firms engaged in Stackelberg competition. The payoffs to the firms are represented on the left, with $q_1$ and $q_2$ as the strategies they adopt and $c_1$ and $c_2$ as some constants (here marginal costs to each firm). The subgame perfect Nash equilibria of this game can be found by taking the first partial derivative of each payoff function with respect to the follower's (firm 2) strategy variable ($q_2$) and finding its best response function, $q_2(q_1) = \tfrac{5000 - q_1 - c_2}{2}$. The same process can be done for the leader except that in calculating its profit, it knows that firm 2 will play the above response and so this can be substituted into its maximisation problem. It can then solve for $q_1$ by taking the first derivative, yielding $q_1^* = \tfrac{5000 + c_2 - 2c_1}{2}$. Feeding this into firm 2's best response function, $q_2^* = \tfrac{5000 + 2c_1 - 3c_2}{4}$, and $(q_1^*, q_2^*)$ is the subgame perfect Nash equilibrium.
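The derivation can be verified symbolically. A sketch using sympy, assuming the linear inverse demand P = 5000 − q1 − q2 implied by the formulas above:

```python
import sympy as sp

q1, q2, c1, c2 = sp.symbols("q1 q2 c1 c2")

profit2 = (5000 - q1 - q2 - c2) * q2            # follower's profit
br2 = sp.solve(sp.diff(profit2, q2), q2)[0]     # best response: (5000 - q1 - c2)/2

profit1 = (5000 - q1 - br2 - c1) * q1           # leader anticipates the follower
q1_star = sp.solve(sp.diff(profit1, q1), q1)[0] # (5000 + c2 - 2*c1)/2
q2_star = sp.simplify(br2.subs(q1, q1_star))    # (5000 + 2*c1 - 3*c2)/4

print(br2, q1_star, q2_star)
```

The outputs reproduce the best-response function and the subgame perfect equilibrium quantities given above.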
Source: https://en.wikipedia.org/wiki/Extensive-form_game
Game classification is the classification of games, forming a game taxonomy. Many different methods of classifying games exist.
There are four basic approaches to classifying the games used in physical education:[1]
Games are further divided by level of physical activity into three categories: soft active sports, medium active sports, and highly active sports.
There are several methods of classifying video games, alongside the system of video game genres commonly used by retailers and player communities.
Solomon[3] puts forward a "commonsense, but broad" classification of video games, into simulations (the game reflects reality), abstract games (the game itself is the focus of interest), and sports. In addition to these, he points out that games (in general, not just video games) fall into classes according to the number of players. Games with two players encompass board games such as chess. Games with multiple players encompass card games such as poker, and marketed family games such as Monopoly and Scrabble. Puzzles and Solitaire are one-player games. He also includes zero-player games, such as Conway's Game of Life, although acknowledging that others argue that such games do not constitute a game, because they lack any element of competition. He asserts that such zero-player games are nonetheless games because they are used recreationally.
Another method, developed by Wright,[4] divides games into the following categories: educational or informative, sports, sensorimotor (e.g. action games, video games, fighting and shoot 'em up games, and driving and racing simulators), other vehicular simulators (not covered by driving and racing), strategy games (e.g. adventure games, war games, strategic simulations, role-playing games, and puzzles), and "other".[5]
A third method, developed by Funk and Buchman,[6]and refined by others, classifies electronic games into six categories: general entertainment (no fighting or destruction), educational (learning or problem-solving), fantasy violence (cartoon characters that must fight or destroy things, and risk being killed, to achieve a goal), human violence (like fantasy violence, but with human rather than cartoon characters), nonviolent sports (no fighting or destruction), and sports violence (fighting or destruction involved).[5]
Games can be categorized by the source of uncertainty confronting the players:[7][8]
Based on these three causes, three classes of games arise:
Game theory classifies games according to several criteria: whether a game is a symmetric game or an asymmetric one, what a game's "sum" is (zero-sum, constant-sum, and so forth), whether a game is a sequential game or a simultaneous one, whether a game comprises perfect information or imperfect information, and whether a game is determinate.
Source: https://en.wikipedia.org/wiki/Game_classification
Grundy's game is a two-player mathematical game of strategy. The starting configuration is a single heap of objects, and the two players take turns splitting a single heap into two heaps of different sizes. The game ends when only heaps of size two and smaller remain, none of which can be split unequally. The game is usually played as a normal play game, which means that the last person who can make an allowed move wins.
A normal play game starting with a single heap of 8 is a win for the first player provided they start by splitting the heap into heaps of 7 and 1:
Player 2 now has three choices: splitting the 7-heap into 6 + 1, 5 + 2, or 4 + 3. In each of these cases, player 1 can ensure that on the next move he hands back to his opponent a heap of size 4 plus heaps of size 2 and smaller:
Now player 2 has to split the 4-heap into 3 + 1, and player 1 subsequently splits the 3-heap into 2 + 1:
The game can be analysed using the Sprague–Grundy theorem. This requires the heap sizes in the game to be mapped onto equivalent nim heap sizes. This mapping is captured in the On-Line Encyclopedia of Integer Sequences as OEIS:A002188.
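The values themselves are easy to recompute. Here is a short Python sketch using the mex ("minimum excludant") rule from Sprague–Grundy theory: a heap's nim-value is the least value not reachable as the XOR of the nim-values of the two halves, over all unequal splits:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n):
    """Nim-value of a single heap of n objects in Grundy's game."""
    # Legal moves: split n into a + b with a < b (unequal, both nonempty).
    reachable = {grundy(a) ^ grundy(n - a) for a in range(1, (n + 1) // 2)}
    g = 0                       # mex: least non-negative value not reachable
    while g in reachable:
        g += 1
    return g

print([grundy(n) for n in range(1, 13)])
# [0, 0, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1] -- a heap of 8 has nim-value 2,
# a nonzero value, so it is a first-player win, as in the example above.
```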
Using this mapping, the strategy for playing the game Nim can also be used for Grundy's game. Whether the sequence of nim-values of Grundy's game ever becomes periodic is an unsolved problem. Elwyn Berlekamp, John Horton Conway and Richard Guy have conjectured[1] that the sequence does become periodic eventually, but despite the calculation of the first 2^35 values by Achim Flammenkamp, the question has not been resolved.
Source: https://en.wikipedia.org/wiki/Grundy%27s_game
A positional game[1][2] in game theory is a kind of combinatorial game for two players. It is described by:
During the game, players alternately claim previously-unclaimed positions, until one of the players wins. If all positions in $X$ are taken while no player wins, the game is considered a draw.
The classic example of a positional game is tic-tac-toe. In it, $X$ contains the 9 squares of the game-board, $\mathcal{F}$ contains the 8 lines that determine a victory (3 horizontal, 3 vertical and 2 diagonal), and the winning criterion is: the first player who holds an entire winning-set wins. Other examples of positional games are Hex and the Shannon switching game.
For every positional game there are exactly three options: either the first player has a winning strategy, or the second player has a winning strategy, or both players have strategies to enforce a draw.[2]: 7 The main question of interest in the study of these games is which of these three options holds in any particular game.
A positional game is finite, deterministic and has perfect information; therefore, in theory it is possible to create the full game tree and determine which of these three options holds. In practice, however, the game tree might be enormous. Therefore, positional games are usually analyzed via more sophisticated combinatorial techniques.
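For small boards, though, the exhaustive approach does work. Below is a minimal Python sketch (our own encoding, not from the literature) of a brute-force solver for a strong positional game, demonstrated on tic-tac-toe:

```python
from functools import lru_cache

X = frozenset(range(9))            # positions: the 9 squares
F = [frozenset(s) for s in         # winning sets: the 8 lines
     [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]]

@lru_cache(maxsize=None)
def value(p1, p2, turn):
    """+1 / -1 / 0: first player wins / loses / draws under best play."""
    free = X - p1 - p2
    outcomes = []
    for pos in sorted(free):
        mine = (p1 if turn == 1 else p2) | {pos}
        if any(w <= mine for w in F):              # claimed a full winning set
            return turn
        out = value(mine, p2, -1) if turn == 1 else value(p1, mine, 1)
        if out == turn:                            # a winning continuation exists
            return turn
        outcomes.append(out)
    if not outcomes:                               # board full, nobody won
        return 0
    return 0 if 0 in outcomes else -turn           # prefer a draw to a loss

print(value(frozenset(), frozenset(), 1))          # 0: tic-tac-toe is a draw
```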
Often, the input to a positional game is considered a hypergraph. In this case:
There are many variants of positional games, differing in their rules and their winning criteria.
The following table lists some specific positional games that were widely studied in the literature.
Source: https://en.wikipedia.org/wiki/Positional_game
Sylver coinage is a mathematical game for two players, invented by John H. Conway.[1] The two players take turns naming positive integers that are not the sum of nonnegative multiples of previously named integers. The player who names 1 loses. For instance, if player A opens with 2, B can win by naming 3, as A is then forced to name 1.[2] Sylver coinage is an example of a game using misère play, because the player who is last able to move loses.
Sylver coinage is named after James Joseph Sylvester,[2][3] who proved that if a and b are relatively prime positive integers, then (a − 1)(b − 1) − 1 is the largest number that is not a sum of nonnegative multiples of a and b.[4] Thus, if a and b are the first two moves in a game of sylver coinage, this formula gives the largest number that can still be played. More generally, if the greatest common divisor of the moves played so far is g, then only finitely many multiples of g can remain to be played, and after they are all played then g must decrease on the next move. Therefore, every game of sylver coinage must eventually end.[2] When a sylver coinage game has only a finite number of remaining moves, the largest number that can still be played is called the Frobenius number, and finding this number is called the coin problem.[5]
A sample game between A and B:
Each of A's moves was to a winning position.
Unlike many similar mathematical games, sylver coinage has not been completely solved, mainly because many positions have infinitely many possible moves. Furthermore, the main theorem that identifies a class of winning positions, due to R. L. Hutchings, guarantees that such a position has a winning strategy but does not identify the strategy. Hutchings's theorem states that any of the prime numbers 5, 7, 11, 13, …, wins as a first move, but very little is known about the subsequent winning moves: these are the only winning openings known.[2][5]
When the greatest common divisor of the moves that have been made so far is 1, the remaining set of numbers that can be played will be a finite set, and can be described mathematically as the set of gaps of a numerical semigroup. Some of these finite positions, including all of the positions after the second player has responded to one of Hutchings' winning moves, allow a special move that Sicherman calls an "ender".
An ender is a number that may only be played immediately: playing any other number would rule it out. If an ender exists, it is always the largest number that can still be played. For instance, after the moves (4,5), the largest number that can still be played is 11. Playing 11 cannot rule out any smaller numbers, but playing any of the smaller available numbers (1, 2, 3, 6, or 7) would rule out playing 11, so 11 is an ender. When an ender exists, the next player can win by following a strategy-stealing argument. If one of the non-ender moves can win, the next player takes that winning move. And if none of the non-ender moves wins, then the next player can win by playing the ender and forcing the other player to make one of the other non-winning moves. However, although this argument proves that the next player can win, it does not identify a winning strategy for the player. After playing a prime number that is 5 or larger as a first move, the first player in a game of sylver coinage can always win by following this (non-constructive) ender strategy on their next turn.[2][3]
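Both the finite set of remaining numbers and the ender are easy to compute. A small Python sketch (function name ours), using the (4, 5) position from the example:

```python
def gaps(moves, limit=10_000):
    """Numbers not representable as nonnegative combinations of `moves`.

    Assumes gcd(moves) == 1, so the gap set is finite; `limit` merely
    bounds the dynamic-programming table and must exceed every gap.
    """
    reachable = [False] * (limit + 1)
    reachable[0] = True
    for n in range(1, limit + 1):
        reachable[n] = any(n >= m and reachable[n - m] for m in moves)
    return [n for n in range(1, limit + 1) if not reachable[n]]

print(gaps([4, 5]))    # [1, 2, 3, 6, 7, 11] -- 11 is the ender
# Sylvester's formula for two coprime moves: (4 - 1) * (5 - 1) - 1 = 11
```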
If there are any other winning openings, they must be 3-smooth numbers (numbers of the form 2^i3^j for non-negative integers i and j).
For, if any number n that is not of this form and is not prime is played, then the second player can win by choosing a large prime factor of n.
The first few 3-smooth numbers, 1, 2, 3, 4, 6, 8, 9, and 12, are all losing openings, for which complete strategies are known by which the second player can win.
By Dickson's lemma (applied to the pairs of exponents (i, j) of these numbers), only finitely many 3-smooth numbers can be winning openings, but it is not known whether any of them are.[2][5] In 2017, Conway (2017) offered a $1000 prize for determining who wins in the first unsolved case, the opening move 16, as part of a set of prize problems also including Conway's 99-graph problem, the minimum spacing of Danzer sets, and the thrackle conjecture.[6]
Source: https://en.wikipedia.org/wiki/Sylver_coinage
Wythoff's game is a two-player mathematical subtraction game, played with two piles of counters. Players take turns removing counters from one or both piles; when removing counters from both piles, the numbers of counters removed from each pile must be equal. The game ends when one player removes the last counter or counters, thus winning.
An equivalent description of the game is that a single chess queen is placed somewhere on a large grid of squares, and each player can move the queen towards the lower left corner of the grid: south, west, or southwest, any number of steps. The winner is the player who moves the queen into the corner. The two Cartesian coordinates of the queen correspond to the sizes of the two piles in the formulation of the game involving removing counters from piles.
Martin Gardner, in his March 1977 "Mathematical Games" column in Scientific American, claims that the game was played in China under the name 捡石子 jiǎn shízǐ ("picking stones").[1] The Dutch mathematician W. A. Wythoff published a mathematical analysis of the game in 1907.[2]
Any position in the game can be described by a pair of integers (n, m) with n ≤ m, describing the size of both piles in the position or the coordinates of the queen. The strategy of the game revolves around cold positions and hot positions: in a cold position, the player whose turn it is to move will lose with best play, while in a hot position, the player whose turn it is to move will win with best play. The optimal strategy from a hot position is to move to any reachable cold position.
The classification of positions into hot and cold can be carried out recursively with the following three rules:
For instance, all positions of the form (0, m) and (m, m) with m > 0 are hot, by rule 2. However, the position (1,2) is cold, because the only positions that can be reached from it, (0,1), (0,2), (1,0) and (1,1), are all hot. The cold positions (n, m) with the smallest values of n and m are (0, 0), (1, 2), (3, 5), (4, 7), (6, 10) and (8, 13). (sequences A066096 and A090909 in the OEIS; also see OEIS:A072061)
For the misère version of this game, (0, 1) and (2, 2) are cold positions, and a position (n, m) with m, n > 2 is cold if and only if (n, m) in the normal game is cold.
Wythoff discovered that the cold positions follow a regular pattern determined by the golden ratio. Specifically, if k is any natural number and
$n_k = \lfloor k\varphi \rfloor$, $m_k = \lfloor k\varphi^2 \rfloor = n_k + k$,
where φ is the golden ratio and we are using the floor function, then (n_k, m_k) is the kth cold position. These two sequences of numbers are recorded in the Online Encyclopedia of Integer Sequences as OEIS:A000201 and OEIS:A001950, respectively.
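The recursive classification and the golden-ratio formula can be cross-checked against each other; a Python sketch:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def cold_positions(limit):
    """All cold positions (n, m), n <= m <= limit, found recursively."""
    cold = set()
    for n in range(limit + 1):
        for m in range(n, limit + 1):
            # (n, m) is cold iff no single legal move reaches a cold position.
            hot = (any((x, m) in cold for x in range(n))                      # first pile
                   or any(tuple(sorted((n, y))) in cold for y in range(m))    # second pile
                   or any((n - k, m - k) in cold for k in range(1, n + 1)))   # both piles
            if not hot:
                cold.add((n, m))
    return cold

# Beatty-sequence formula: n_k = floor(k*phi), m_k = n_k + k
formula = {(math.floor(k * PHI), math.floor(k * PHI) + k) for k in range(7)}
print(sorted(formula))                 # (0,0), (1,2), (3,5), (4,7), (6,10), ...
print(formula <= cold_positions(15))   # True
```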
The two sequences n_k and m_k are the Beatty sequences associated with the equation
$\frac{1}{x} + \frac{1}{y} = 1$ (here with x = φ and y = φ²).
As is true in general for pairs of Beatty sequences, these two sequences are complementary: each positive integer appears exactly once in either sequence.
Source: https://en.wikipedia.org/wiki/Wythoff%27s_game
In mathematics, a topological game is an infinite game of perfect information played between two players on a topological space. Players choose objects with topological properties such as points, open sets, closed sets and open coverings. Time is generally discrete, but the plays may have transfinite lengths, and extensions to continuum time have been put forth. The conditions for a player to win can involve notions like topological closure and convergence.
It turns out that some fundamental topological constructions have a natural counterpart in topological games; examples of these are the Baire property, Baire spaces, completeness and convergence properties, separation properties, covering and base properties, continuous images, Suslin sets, and singular spaces. At the same time, some topological properties that arise naturally in topological games can be generalized beyond a game-theoretic context: by virtue of this duality, topological games have been widely used to describe new properties of topological spaces, and to put known properties under a different light. There are also close links with selection principles.
The term topological game was first introduced by Claude Berge,[1][2][3] who defined the basic ideas and formalism in analogy with topological groups. A different meaning for topological game, the concept of "topological properties defined by games", was introduced in the paper of Rastislav Telgársky,[4] and later "spaces defined by topological games";[5] this approach is based on analogies with matrix games, differential games and statistical games, and defines and studies topological games within topology. After more than 35 years, the term "topological game" became widespread, and appeared in several hundreds of publications. The survey paper of Telgársky[6] emphasizes the origin of topological games from the Banach–Mazur game.
There are two other meanings of topological games, but these are used less frequently.
Many frameworks can be defined for infinite positional games of perfect information.
The typical setup is a game between two players, I and II, who alternately pick subsets of a topological space X. In the nth round, player I plays a subset I_n of X, and player II responds with a subset J_n. There is a round for every natural number n, and after all rounds are played, player I wins if the sequence
I_0, J_0, I_1, J_1, …
satisfies some property, and otherwise player II wins.
The game is defined by the target property and the allowed moves at each step. For example, in the Banach–Mazur game BM(X), the allowed moves are nonempty open subsets of the previous move, and player I wins if $\bigcap_n I_n \neq \emptyset$.
This typical setup can be modified in various ways. For example, instead of being a subset of X, each move might consist of a pair (I, p) where $I \subseteq X$ and $p \in X$. Alternatively, the sequence of moves might have length some ordinal number other than ω.
The first topological game studied was the Banach–Mazur game, which is a motivating example of the connections between game-theoretic notions and topological properties.
Let Y be a topological space, and let X be a subset of Y, called the winning set. Player I begins the game by picking a nonempty open subset $I_0 \subseteq Y$, and player II responds with a nonempty open subset $J_0 \subseteq I_0$. Play continues in this fashion, with players alternately picking a nonempty open subset of the previous play. After an infinite sequence of moves, one for each natural number, the game is finished, and I wins if and only if $X \cap \bigcap_n J_n \neq \emptyset$.
The game-theoretic and topological connections demonstrated by the game include:
Some other notable topological games are:
Many more games have been introduced over the years, to study, among others: the Kuratowski coreduction principle; separation and reduction properties of sets in close projective classes; Luzin sieves; invariant descriptive set theory; Suslin sets; the closed graph theorem; webbed spaces; MP-spaces; the axiom of choice; computable functions. Topological games have also been related to ideas in mathematical logic, model theory, infinitely-long formulas, infinite strings of alternating quantifiers, ultrafilters, partially ordered sets, and the chromatic number of infinite graphs.
For a longer list and a more detailed account see the 1987 survey paper of Telgársky.[6]
Source: https://en.wikipedia.org/wiki/Topological_game
Zugzwang (from German 'compulsion to move'; pronounced [ˈtsuːktsvaŋ]) is a situation found in chess and other turn-based games wherein one player is put at a disadvantage because of their obligation to make a move; a player is said to be "in zugzwang" when any legal move will worsen their position.[1]
Although the term is used less precisely in games such as chess, it is used specifically in combinatorial game theory to denote a move that directly changes the outcome of the game from a win to a loss.[2][3] Putting the opponent in zugzwang is a common way to help the superior side win a game, and in some cases it is necessary in order to make the win possible.[4] More generally, the term can also be used to describe a situation where none of the available options lead to a good outcome.[5][6][7]
The term zugzwang was used in German chess literature in 1858 or earlier,[8] and the first known use of the term in English was by World Champion Emanuel Lasker in 1905.[9] The concept of zugzwang was known to chess players many centuries before the term was coined, appearing in an endgame study published in 1604 by Alessandro Salvio, one of the first writers on the game, and in shatranj studies dating back to the early 9th century, over 1000 years before the first known use of the term. International chess notation uses the symbol "⊙" to indicate a zugzwang position.
Positions with zugzwang occur fairly often in chess endgames, especially in king and pawn endgames and elementary checkmates (such as a rook and king against a lone king). According to John Nunn, positions of reciprocal zugzwang are surprisingly important in the analysis of endgames.[10][11]
The word comes from German Zug 'move' + Zwang 'compulsion', so that Zugzwang means 'being forced to make a move'. Originally the term was used interchangeably with the term Zugpflicht 'obligation to make a move' as a general game rule. Games like chess and checkers have "zugzwang" (or "zugpflicht"): a player must always make a move on their turn even if this is to their disadvantage. Over time, the term became especially associated with chess.
According to chess historian Edward Winter, the term had been in use in German chess circles in the 19th century.[8]
Pages 353–358 of the September 1858 Deutsche Schachzeitung had an unsigned article "Zugzwang, Zugwahl und Privilegien". Friedrich Amelung employed the terms Zugzwang, Tempozwang and Tempozugzwang on pages 257–259 of the September 1896 issue of the same magazine. When a perceived example of zugzwang occurred in the third game of the 1896–97 world championship match between Steinitz and Lasker, after 34...Rg8, the Deutsche Schachzeitung (December 1896, page 368) reported that "White has died of zugzwang".
The earliest known use of the term zugzwang in English was on page 166 of the February 1905 issue of Lasker's Chess Magazine.[9] The term did not become common in English-language chess sources until the 1930s, after the publication of the English translation of Nimzowitsch's My System in 1929.[8]
The concept of zugzwang, if not the term, must have been known to players for many centuries. Zugzwang is required to win the elementary (and common) king and rook versus king endgame,[12] and the king and rook (or differently-named pieces with the same powers) have been chess pieces since the earliest versions of the game.[13]
Other than basic checkmates, the earliest published use of zugzwang may be in this study by Zairab Katai, which was published sometime between 813 and 833, discussing shatranj. After
puts Black in zugzwang, since the black king must abandon its attack on the white rook and thus allow the white king to trap the knight: 3...Kc4 4.Kg3 (or Kg4) Kd4 5.Re1 and White wins.[14]
The concept of zugzwang is also seen in the 1585 endgame study by Giulio Cesare Polerio, published in 1604 by Alessandro Salvio, one of the earliest writers on the game.[15] The only way for White to win is 1.Ra1 Kxa1 2.Kc2, placing Black in zugzwang. The only legal move is 2...g5, whereupon White promotes a pawn first and then checkmates with 3.hxg5 h4 4.g6 h3 5.g7 h2 6.g8=Q h1=Q 7.Qg7#.[16]
Joseph Bertin refers to zugzwang in The Noble Game of Chess (1735), wherein he documents 19 rules about chess play. His 18th rule is: "To play well the latter end of a game, you must calculate who has the move, on which the game always depends."[17]
François-André Danican Philidor wrote in 1777 of the position illustrated that after White plays 36.Kc3, Black "is obliged to move his rook from his king, which gives you an opportunity of taking his rook by a double check [sic], or making him mate".[18] Lasker explicitly cited a mirror image of this position (White: king on f3, queen on h4; Black: king on g1, rook on g2) as an example of zugzwang in Lasker's Manual of Chess.[19] The British master George Walker analyzed a similar position in the same endgame, giving a maneuver (triangulation) that resulted in the superior side reaching the initial position, but now with the inferior side on move and in zugzwang. Walker wrote of the superior side's decisive move: "throwing the move upon Black, in the initial position, and thereby winning".[20]
Paul Morphy is credited with composing the position illustrated "while still a young boy". After 1.Ra6, Black is in zugzwang and must allow mate on the next move with 1...bxa6 2.b7# or 1...B (moves) 2.Rxa7#.[21]
There are three types of chess positions: either none, one, or both of the players would be at a disadvantage if it were their turn to move. The great majority of positions are of the first type. In chess literature, most writers call positions of the second type zugzwang, and the third type reciprocal zugzwang or mutual zugzwang. Some writers call the second type a squeeze and the third type zugzwang.[22]
Normally in chess, having tempo is desirable because the player who is to move has the advantage of being able to choose a move that improves their situation. Zugzwang typically occurs when "the player to move cannot do anything without making an important concession".[23][24]
Zugzwang most often occurs in the endgame when the number of pieces, and so the number of possible moves, is reduced, and the exact move chosen is often critical.[25] The first diagram shows the simplest possible example of zugzwang. If it is White's move, they must either stalemate Black with 1.Kc6 or abandon the pawn, allowing 1...Kxc7 with a draw. If it is Black's move, the only legal move is 1...Kb7, which allows White to win with 2.Kd7 followed by queening the pawn on the next move.
The second diagram is another simple example. Black, on move, must allow White to play Kc5 or Ke5, when White wins one or more pawns and can advance their own pawn toward promotion. White, on move, must retreat their king, when Black is out of danger.[26] The squares d4 and d6 are corresponding squares. Whenever the white king is on d4 with White to move, the black king must be on d6 to prevent the advance of the white king.
In many cases, the player having the move can put the other player in zugzwang by using triangulation. This often occurs in king and pawn endgames. Pieces other than the king can also triangulate to achieve zugzwang, such as in the Philidor position. Zugzwang is a mainstay of chess compositions and occurs frequently in endgame studies.
Some zugzwang positions occurred in the second game of the 1971 candidates match between Bobby Fischer and Mark Taimanov.[27] In the position in the diagram, Black is in zugzwang because he would rather not move, but he must: a king move would lose the knight, while a knight move would allow the passed pawn to advance.[28] The game continued:
and Black is again in zugzwang. The game ended shortly (because the pawn will slip through and promote):[29]
In the position shown, White has just gotten his king to a6, where it attacks the black pawn on b6, tying down the black king to defend it. White now needs to get his bishop to f7 or e8 to attack the pawn on g6. Play continued:
Now the bishop is able to make a waiting move. It is able to do so while maintaining access to f7, so that it can reach e8 safely, where it attacks the pawn on g6 and restricts the black king from c6.
and Black is in zugzwang. Knights are unable to lose a tempo,[30] so moving the knight would allow the bishop to capture the kingside pawns. The black king must give way.
and White has a winning position. Either one of White's queenside pawns will promote, or the white king will attack and win the black kingside pawns and a kingside pawn will promote. Black resigned seven moves later.[31][32][33] Andy Soltis says that this is "perhaps Fischer's most famous endgame".[34]
This position from a 1988 game between Vitaly Tseshkovsky and Glenn Flear at Wijk aan Zee shows an instance of "zugzwang" where the obligation to move makes the defense more difficult, but it does not mean the loss of the game. A draw by agreement was reached eleven moves later.[35][36]
A special case of zugzwang is reciprocal zugzwang or mutual zugzwang, which is a position such that whoever is to move is in zugzwang. Studying positions of reciprocal zugzwang is important in the analysis of endgames.[10][11] A position of mutual zugzwang is closely related to a game with a Conway value of zero in game theory.[37]
In a position with reciprocal zugzwang, only the player to move is actually in zugzwang. However, the player who is not in zugzwang must play carefully because one inaccurate move can cause them to be put in zugzwang. That is in contrast to regular zugzwang, because the superior side usually has a waiting move or can triangulate to put the opponent in zugzwang.[11]
The diagram shows a position of reciprocal zugzwang. If Black is to move, 1...Kd7 is forced, which loses because White will move 2.Kb7, promote the pawn, and win. If White is to move the result is a draw, as White must either stalemate Black with 1.Kc6 or allow Black to capture the pawn. Since each side would be in zugzwang if it were their move, it is a reciprocal zugzwang.[39][40]
An extreme type of reciprocal zugzwang, called trébuchet, is shown in the diagram. It is also called a full-point mutual zugzwang because it will result in a loss for the player in zugzwang, resulting in a full point for the opponent.[41] Whoever is to move in this position must abandon their own pawn, thus allowing the opponent to capture it and proceed to promote their own pawn, resulting in an easily winnable position.[42]
Corresponding squares are squares of mutual zugzwang. When there is only one pair of corresponding squares, they are called mined squares.[43] A player will fall into zugzwang if they move their king onto the square and their opponent is able to move onto the corresponding square. In the diagram here, if either king moves onto the square marked with the dot of the same color, it falls into zugzwang if the other king moves into the mined square near them.[44]
Zugzwang usually works in favor of the stronger side, but sometimes it aids the defense. In this position based on a game between Zoltán Varga and Péter Ács, it saves the game for the defense:
Reciprocal zugzwang.
Reciprocal zugzwang again.
Reciprocal zugzwang again.
This position is a draw and the players agreed to a draw a few moves later.[45]
Alex Angos notes that, "As the number of pieces on the board increases, the probability for zugzwang to occur decreases."[46] As such, zugzwang is very rarely seen in the middlegame.[47]
The game Fritz Sämisch–Aron Nimzowitsch, Copenhagen 1923,[48] is often called the "Immortal Zugzwang Game". According to Nimzowitsch, writing in the Wiener Schachzeitung in 1925, this term originated in "Danish chess circles".[8] Some consider the final position to be an extremely rare instance of zugzwang occurring in the middlegame.[49] It ended with White resigning in the position in the diagram.
White has a few pawn moves which do not lose material, but eventually he will have to move one of his pieces. If he plays 1.Rc1 or Rd1, then 1...Re2 traps White's queen; 1.Kh2 fails to 1...R5f3, also trapping the queen, since White cannot play 2.Bxf3 because the bishop is pinned to the king; 1.g4 runs into 1...R5f3 2.Bxf3? Rh2 mate. Angos analyzes 1.a3 a5 2.axb4 axb4 3.h4 Kh8 (waiting) 4.b3 Kg8 and White has run out of waiting moves and must lose material. Best in this line is 5.Nc3!? bxc3 6.Bxc3, which just leaves Black with a serious positional advantage and an extra pawn.[50] Other moves lose material in more obvious ways.
However, since Black would win even without the zugzwang,[51] it is debatable whether the position is true zugzwang. Even if White could pass his move he would still lose, albeit more slowly, after 1...R5f3 2.Bxf3 Rxf3, trapping the queen and thus winning queen and bishop for two rooks.[52] Wolfgang Heidenfeld thus considers it a misnomer to call this a true zugzwang position.[53] See also Immortal Zugzwang Game § Objections to the sobriquet.
This game between Wilhelm Steinitz and Emanuel Lasker in the 1896–97 World Chess Championship[54] is an early example of zugzwang in the middlegame. After Lasker's 34...Re8–g8!, Steinitz had no playable moves, and resigned.[55][56][57][58] White's bishop cannot move because that would allow the crushing ...Rg2+. The queen cannot move without abandoning either its defense of the bishop on g5 or of the g2 square, where it is preventing ...Qg2#. Attempting to push the f-pawn to promotion with 35.f6 loses the bishop: 35...Rxg5 36.f7 Rg2+, forcing mate. The move 35.Kg1 allows 35...Qh1+ 36.Kf2 Qg2+ followed by capturing the bishop. The rook cannot leave the first rank, as that would allow 35...Qh1#. Rook moves along the first rank other than 35.Rg1 allow 35...Qxf5, when 36.Bxh4 is impossible because of 36...Rg2+; for example, 35.Rd1 Qxf5 36.d5 Bd7, winning. That leaves only 35.Rg1, when Black wins with 35...Rxg5! 36.Qxg5 (36.Rxg5? Qh1#) Qd6+ 37.Rg3 hxg3+ 38.Qxg3 Be8 39.h4 Qxg3+ 40.Kxg3 b5! 41.axb5 a4! and Black queens first.[55] Colin Crouch calls the final position, "An even more perfect middlegame zugzwang than ... Sämisch–Nimzowitsch ... in the final position Black has no direct threats, and no clear plan to improve the already excellent positioning of his pieces, and yet any move by White loses instantly."[59]
Soltis writes that his "candidate for the ideal zugzwang game" is the following game (Soltis 1978, p. 55), Podgaets–Dvoretsky, USSR 1974: 1. d4 c5 2. d5 e5 3. e4 d6 4. Nc3 Be7 5. Nf3 Bg4 6. h3 Bxf3 7. Qxf3 Bg5! 8. Bb5+ Kf8! Black exchanges off his bad bishop, but does not allow White to do the same. 9. Bxg5 Qxg5 10. h4 Qe7 11. Be2 h5 12. a4 g6 13. g3 Kg7 14. 0-0 Nh6 15. Nd1 Nd7 16. Ne3 Rhf8 17. a5 f5 18. exf5 e4! 19. Qg2 Nxf5 20. Nxf5+ Rxf5 21. a6 b6 22. g4? hxg4 23. Bxg4 Rf4 24. Rae1 Ne5! 25. Rxe4 Rxe4 26. Qxe4 Qxh4 27. Bf3 Rf8!! 28. Bh1 If instead 28.Qxh4 then 28...Nxf3+ followed by 29...Nxh4 leaves Black a piece ahead. 28... Ng4 29. Qg2 (first diagram) Rf3!! 30. c4 Kh6!! (second diagram) Now all of White's piece moves allow checkmate or ...Rxf2 with a crushing attack (e.g. 31.Qxf3 Qh2#; 31.Rb1 Rxf2 32.Qxg4 Qh2#). That leaves only moves of White's b-pawn, which Black can ignore, e.g. 31.b3 Kg7 32.b4 Kh6 33.bxc5 bxc5 and White has run out of moves.[60] 0–1
In this 1959 game[61] between future World Champion Bobby Fischer and Héctor Rossetto, 33.Bb3! puts Black in zugzwang.[62] If Black moves the king, White plays Rb8, winning a piece (...Rxc7 Rxf8); if Black moves the rook, 33...Ra8 or Re8, then not only does White gain a queen with 34.c8=Q+, but the black rook will also be lost after 35.Qxa8, 35.Qxe8 or 35.Rxe7+ (depending on Black's move); if Black moves the knight, Be6 will win Black's rook. That leaves only pawn moves, and they quickly run out.[63] The game concluded:
Jonathan Rowson coined the term Zugzwang Lite to describe a situation, sometimes arising in symmetrical opening variations, where White's "extra move" is a burden.[65] He cites as an example of this phenomenon Hodgson versus Arkell at Newcastle 2001. The position diagrammed arose after 1. c4 c5 2. g3 g6 3. Bg2 Bg7 4. Nc3 Nc6 5. a3 a6 6. Rb1 Rb8 7. b4 cxb4 8. axb4 b5 9. cxb5 axb5 (see diagram). Here Rowson remarks,
Both sides want to push their d-pawn and play Bf4/...Bf5, but White has to go first so Black gets to play ...d5 before White can play d4. This doesn't matter much, but it already points to the challenge that White faces here; his most natural continuations allow Black to play the moves he wants to. I would therefore say that White is in 'Zugzwang Lite' and that he remains in this state for several moves.
The game continued 10. Nf3 d5 11. d4 Nf6 12. Bf4 Rb6 13. 0-0 Bf5 14. Rb3 0-0 15. Ne5 Ne4 16. h3 h5!? 17. Kh2. The position is still almost symmetrical, and White can find nothing useful to do with his extra move. Rowson whimsically suggests 17.h4!?, forcing Black to be the one to break the symmetry. 17... Re8! Rowson notes that this is a useful waiting move, covering e7, which needs protection in some lines, and possibly supporting an eventual ...e5 (as Black in fact played on his 22nd move). White cannot copy it, since after 18.Re1? Nxf2 Black would win a pawn. After 18. Be3?! Nxe5! 19. dxe5 Rc6! Black seized the initiative and went on to win in 14 more moves.
Another instance of Zugzwang Lite occurred in Lajos Portisch–Mikhail Tal, Candidates Match 1965, again from the Symmetrical Variation of the English Opening, after 1. Nf3 c5 2. c4 Nc6 3. Nc3 Nf6 4. g3 g6 5. Bg2 Bg7 6. 0-0 0-0 7. d3 a6 8. a3 Rb8 9. Rb1 b5 10. cxb5 axb5 11. b4 cxb4 12. axb4 d6 13. Bd2 Bd7 (see diagram). Soltis wrote, "It's ridiculous to think Black's position is better. But Mikhail Tal said it is easier to play. By moving second he gets to see White's move and then decide whether to match it."[66] 14. Qc1 Here, Soltis wrote that Black could maintain equality by keeping the symmetry: 14...Qc8 15.Bh6 Bh3. Instead, he plays to prove that White's queen is misplaced by breaking the symmetry. 14... Rc8! 15. Bh6 Nd4! Threatening 15...Nxe2+. 16. Nxd4 Bxh6 17. Qxh6 Rxc3 18. Qd2 Qc7 19. Rfc1 Rc8 Although the pawn structure is still symmetrical, Black's control of the c-file gives him the advantage.[66] Black ultimately reached an endgame two pawns up, but White managed to hold a draw in 83 moves.[67]
Soltis listed some endgames in which zugzwang is required to win:
Positions where the stronger side can win in the ending of king and pawn versus king also generally require zugzwang to win.[a]
Source: https://en.wikipedia.org/wiki/Zugzwang
od is a command on various operating systems for displaying ("dumping") data in various human-readable output formats. The name is an acronym for "octal dump", since it defaults to printing in the octal data format.
The od program can display output in a variety of formats, including octal, hexadecimal, decimal, and ASCII. It is useful for visualizing data that is not in a human-readable format, like the executable code of a program, or where the primary form is ambiguous (e.g. some Latin, Greek and Cyrillic characters looking similar).
od is one of the earliest Unix programs, having appeared in Version 1 AT&T Unix. It is also specified in the POSIX standards. The implementation of od used on Linux systems is usually provided by GNU Core Utilities.
Since it predates the Bourne shell, its existence causes an inconsistency in the do loop syntax. Other loops and logical blocks are opened by the name and closed by the reversed name, e.g. if ... fi and case ... esac, but od's existence necessitates do ... done.
The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.[1] The od command has also been ported to the IBM i operating system.[2]
Normally a dump of an executable file is very long. The head program prints out the first few lines of the output. Here is an example of a dump of the "Hello world" program, piped through head.
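The original dump output is not reproduced here. As an illustration of the byte-level octal format (offsets in octal, bytes as three octal digits, 16 bytes per line), here is a small Python sketch that roughly mimics `od -b` on a short input; the helper name is ours:

```python
def od_b(data: bytes) -> str:
    """Roughly mimic `od -b`: octal offsets, 3-digit octal byte values."""
    lines = []
    for off in range(0, len(data), 16):
        chunk = " ".join(f"{b:03o}" for b in data[off:off + 16])
        lines.append(f"{off:07o} {chunk}")
    lines.append(f"{len(data):07o}")   # od ends with the final offset
    return "\n".join(lines)

print(od_b(b"Hello world\n"))
# 0000000 110 145 154 154 157 040 167 157 162 154 144 012
# 0000014
```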
Here is an example of od used to diagnose the output of echo, where the user types Ctrl+V Ctrl+I and Ctrl+V Ctrl+C after writing "Hello", to literally insert a tab and a ^C character.
Source: https://en.wikipedia.org/wiki/Od_(Unix)
The aviation transponder interrogation modes are the standard formats of pulsed sequences from an interrogating Secondary Surveillance Radar (SSR) or similar Automatic Dependent Surveillance-Broadcast (ADS-B) system. The reply format is usually referred to as a "code" from a transponder, which is used to determine detailed information from a suitably equipped aircraft.
In its simplest form, a "Mode" or interrogation type is generally determined by pulse spacing between two or more interrogation pulses. Various modes exist from Mode 1 to 5 for military use, to Mode A, B, C and D, and Mode S for civilian use.
Several different RF communication protocols have been standardized for aviation transponders:
Mode A and Mode C are implemented using the air traffic control radar beacon system as the physical layer, whereas Mode S is implemented as a standalone backwards-compatible protocol. ADS-B can operate using Mode S-ES or the Universal Access Transceiver as its transport layer:[3]
When the transponder receives an interrogation request, it broadcasts the configured transponder code (or "squawk code"). This is referred to as "Mode 3A" or more commonly, Mode A. A separate type of response called "Ident" can be initiated from the airplane by pressing a button on the transponder control panel.
A Mode A transponder code response can be augmented by a pressure altitude response, which is then referred to as Mode C operation.[2] Pressure altitude is obtained from an altitude encoder, either a separate self-contained unit mounted in the aircraft or an integral part of the transponder. The altitude information is passed to the transponder using a modified form of the Gray code called a Gillham code.
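Gillham code is Gray-like in that adjacent altitude steps differ in only one bit, so a single misread bit cannot produce a wildly wrong altitude. As a simplified illustration only (real Gillham decoding uses specific C/A/B/D pulse bits and a reflected 100-ft sub-code, which this sketch does not implement), decoding a standard binary-reflected Gray code in Python looks like this:

```python
def gray_decode(g: int) -> int:
    """Decode a standard binary-reflected Gray code to a plain integer."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [n ^ (n >> 1) for n in range(8)]   # Gray-encode 0..7
print(codes)                                # [0, 1, 3, 2, 6, 7, 5, 4]
print([gray_decode(c) for c in codes])      # [0, 1, 2, 3, 4, 5, 6, 7]
```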
Mode A and C responses are used to help air traffic controllers identify a particular aircraft's position and altitude on a radar screen, in order to maintain separation.[2]
Another mode called Mode S (Select) is designed to help avoid over-interrogation of the transponder (having many radars in busy areas) and to allow automatic collision avoidance. Mode S transponders are compatible with Mode A and Mode C Secondary Surveillance Radar (SSR) systems.[2] This is the type of transponder that is used for TCAS or ACAS II (Airborne Collision Avoidance System) functions, and is required to implement the extended squitter broadcast, one means of participating in ADS-B systems. A TCAS-equipped aircraft must have a Mode S transponder, but not all Mode S transponders include TCAS. Likewise, a Mode S transponder is required to implement 1090ES extended squitter ADS-B Out, but there are other ways to implement ADS-B Out (in the U.S. and China). The format of Mode S messages is documented in ICAO Doc 9688, Manual on Mode S Specific Services.[4]
Upon interrogation, Mode S transponders transmit information about the aircraft to the SSR system, to TCAS receivers on board aircraft and to the ADS-B SSR system. This information includes the call sign of the aircraft and/or the aircraft's permanent ICAO 24-bit address (which is represented for human interface purposes as six hexadecimal characters). One of the hidden features of Mode S transponders is that they are backwards compatible; an aircraft equipped with a Mode S transponder can still be used to send replies to Mode A or C interrogations. This feature can be activated by a specific type of interrogation sequence called inter-mode.
Mode S equipped aircraft are assigned a unique ICAO 24-bit address or (informally) Mode S "hex code" upon national registration, and this address becomes a part of the aircraft's Certificate of Registration. Normally, the address is never changed; however, the transponders are reprogrammable and, occasionally, are moved from one aircraft to another (presumably for operational or cost purposes), either by maintenance or by changing the appropriate entry in the aircraft's flight management system.
There are 16,777,214 (2^24 − 2) unique ICAO 24-bit addresses (hex codes) available.[5][6] The ICAO 24-bit address can be represented in three digital formats: hexadecimal, octal, and binary. These addresses are used to provide a unique identity normally allocated to an individual aircraft or registration.
As an example, following is the ICAO 24-bit address assigned to the Shuttle Carrier Aircraft with the registration N905NA:[7][8]
These are all the same 24-bit address of the Shuttle Carrier Aircraft, represented in different numeral systems (see above).
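Since the table of values is not reproduced above, here is a sketch of the three representations using a placeholder 24-bit address (0xA0B1C2 is an arbitrary example, not a real assignment):

```python
addr = 0xA0B1C2                 # placeholder 24-bit ICAO address

assert 0 < addr < 2**24 - 1     # 16,777,214 usable values (2^24 - 2)
print(f"hex:    {addr:06X}")    # A0B1C2
print(f"octal:  {addr:08o}")    # 50130702
print(f"binary: {addr:024b}")   # 101000001011000111000010
```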
An issue with Mode S transponders arises when pilots enter the wrongflight identitycode into the Mode S transponder.[9]In this case, the capabilities ofACAS IIand Mode SSSRcan be degraded.[10]
In 2009 the ICAO published an "extended" form of Mode S with more message formats to use withADS-B;[11]it was further refined in 2012.[12]Countries implementing ADS-B can require the use of either the extended squitter mode of a suitably-equipped Mode S transponder, or theUATtransponder on 978 MHz.
Mode S data has the potential to contain the aircraft's movement vectors in relation to the Earth and its atmosphere. The difference between these two vectors is the wind acting on the aircraft.[13] Deriving winds (and temperatures, from the Mach number and true airspeed) was developed simultaneously by Siebren de Haan of the KNMI and Edmund Stone of the Met Office.[14] Over the UK, the number of aircraft observations has increased from approximately 7,500 per day from AMDAR to over 10 million per day. The Met Office, together with KNMI and FlightRadar24, is actively developing an expanded capability including data from every continent other than Antarctica.[15] | https://en.wikipedia.org/wiki/Aviation_transponder_interrogation_modes
The air traffic control radar beacon system (ATCRBS) is a system used in air traffic control (ATC) to enhance surveillance radar monitoring and separation of air traffic. It consists of a rotating ground antenna and transponders in aircraft. The ground antenna sweeps a narrow vertical beam of microwaves around the airspace. When the beam strikes an aircraft, the transponder transmits a return signal giving information such as altitude and the squawk code, a four-digit code assigned to each aircraft that enters a region. Information about the aircraft is then entered into the system and subsequently added to the controller's screen for display when queried; this information can include the flight number designation and the altitude of the aircraft. ATCRBS thus assists ATC surveillance radars by acquiring information about the aircraft being monitored and providing it to the radar controllers, who can use it to identify radar returns from aircraft (known as targets) and to distinguish those returns from ground clutter.
The system consists of transponders, installed in aircraft, and secondary surveillance radars (SSRs), installed at air traffic control facilities. The SSR is sometimes co-located with the primary surveillance radar, or PSR; these two radar systems work in conjunction to produce a synchronized surveillance picture. The SSR transmits interrogations and listens for any replies. Transponders that receive an interrogation decode it, decide whether to reply, and then respond with the requested information when appropriate. Note that in common informal usage the term "SSR" is sometimes used to refer to the entire ATCRBS system; however, in technical publications the term properly refers only to the ground radar itself.
An ATC ground station consists of two radar systems and their associated support components. The most prominent component is the PSR, also referred to as skin-paint radar because it shows not synthetic or alphanumeric target symbols but bright (or colored) blips or areas on the radar screen produced by RF energy reflected from the target's "skin". This is a non-cooperative process; no additional avionics are needed. The radar detects and displays reflective objects within its operating range. Weather radar data is displayed in skin-paint mode. The primary surveillance radar is subject to the radar equation, under which returned signal strength drops off as the fourth power of the distance to the target. Objects detected using the PSR are known as primary targets.
The second system is the secondary surveillance radar, or SSR, which depends on a cooperating transponder installed on the aircraft being tracked. The transponder emits a signal when it is interrogated by the secondary radar. In a transponder-based system, signals drop off as the inverse square of the distance to the target, instead of the fourth power as in primary radars. As a result, effective range is greatly increased for a given power level. The transponder can also send encoded information about the aircraft, such as identity and altitude.
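The practical effect of that scaling is easy to quantify. A back-of-envelope sketch (the 40 dB margin below is an assumed example figure, not from the text): whatever power margin a receiver has above its detection threshold, the one-way transponder link converts it into range as the square root, while the two-way skin-paint echo only gets the fourth root.

```python
# How far each link can stretch a given power margin before falling below
# the detection threshold: 1/R**2 (SSR link) versus 1/R**4 (PSR echo).
def range_gain(power_margin_db: float) -> tuple:
    margin = 10.0 ** (power_margin_db / 10.0)
    return margin ** (1 / 2), margin ** (1 / 4)   # SSR gain, PSR gain

ssr_gain, psr_gain = range_gain(40.0)             # assumed 40 dB margin
print(f"SSR range x{ssr_gain:.0f}, PSR range x{psr_gain:.0f}")  # x100 vs x10
```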
The SSR is equipped with a main antenna and, at many older sites, an omnidirectional "omni" antenna. Newer antennas are grouped as a left and a right antenna, and each side connects to a hybrid device which combines the signals into sum and difference channels. Still other sites have both the sum and difference antennas and an omni antenna. Surveillance aircraft, e.g. AWACS, have only the sum and difference antennas, but these can also be space-stabilized by phase-shifting the beam down or up when the aircraft pitches or rolls. The SSR antenna is typically fitted to the PSR antenna, so they point in the same direction as the antennas rotate. The omnidirectional antenna is mounted near and high, usually on top of the radome if so equipped. Mode S interrogators require the sum and difference channels to provide the monopulse capability to measure the off-boresight angle of the transponder reply.
The SSR repetitively transmits interrogations as the rotating radar antenna scans the sky. The interrogation specifies what type of information a replying transponder should send by using a system of modes. There have been a number of modes used historically, but four are in common use today: mode 1, mode 2, mode 3/A, and mode C. Mode 1 is used to sort military targets during phases of a mission. Mode 2 is used to identify military aircraft missions. Mode 3/A is used to identify each aircraft in the radar's coverage area. Mode C is used to request/report an aircraft's altitude.
Two other modes, mode 4 and mode S, are not considered part of the ATCRBS system, but they use the same transmit and receive hardware. Mode 4 is used by military aircraft for the Identification Friend or Foe (IFF) system. Mode S is a discrete selective interrogation, rather than a general broadcast, that facilitates TCAS for civilian aircraft. Mode S transponders ignore interrogations not addressed with their unique identity code, reducing channel congestion. At a typical SSR radar installation, ATCRBS, IFF, and mode S interrogations will all be transmitted in an interlaced fashion. Some military facilities and/or aircraft also utilize mode S.
Returns from both radars at the ground station are transmitted to the ATC facility using a microwave link, a coaxial link, or (with newer radars) a digitizer and a modem. Once received at the ATC facility, a computer system known as a radar data processor associates the reply information with the proper primary target and displays it next to the target on the radar scope.
The equipment installed in the aircraft is considerably simpler, consisting of the transponder itself, usually mounted in the instrument panel or avionics rack, and a small L-band UHF antenna mounted on the bottom of the fuselage. Many commercial aircraft also have an antenna on the top of the fuselage, and either or both antennas can be selected by the flight crew.
Typical installations also include an altitude encoder, a small device connected to both the transponder and the aircraft's static system. It provides the aircraft's pressure altitude to the transponder, so that the information can be relayed to the ATC facility. The encoder uses 11 wires to pass altitude information to the transponder in the form of a Gillham code, a modified binary Gray code.
The transponder has a small required set of controls and is simple to operate. It has a method to enter the four-digit transponder code, also known as a beacon code or squawk code, and a control to transmit an ident, which is done at the controller's request (see SPI pulse below). Transponders typically have four operating modes: Off, Standby, On (Mode A), and Alt (Mode C). On and Alt differ only in that the On mode inhibits transmitting any altitude information. Standby keeps the unit powered and warmed up but inhibits any replies.
The steps involved in performing an ATCRBS interrogation are as follows. First, the ATCRBS interrogator periodically interrogates aircraft on a frequency of 1030 MHz, through a rotating or scanning antenna at the radar's assigned pulse repetition frequency (PRF), typically 450–500 interrogations per second. Once transmitted, an interrogation travels through space at the speed of light in the direction the antenna is pointing until an aircraft is reached.
When the aircraft receives the interrogation, the aircraft transponder sends a reply on 1090 MHz after a 3.0 μs delay, carrying the requested information. The interrogator's processor then decodes the reply and identifies the aircraft. The range of the aircraft is determined from the delay between the interrogation and the reply. The azimuth of the aircraft is determined from the antenna bearings at which the first and last replies were received; this window of azimuth values is then divided by two to give the calculated "centroid" azimuth. Errors in this algorithm cause the aircraft to jitter across the controller's scope, an effect referred to as "track jitter". The jitter problem makes software tracking algorithms problematic, and is the reason monopulse was implemented.
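Range follows directly from the reply timing: subtract the fixed 3.0 μs transponder turnaround delay from the measured interrogation-to-reply interval, and what remains is a two-way trip at the speed of light. A minimal sketch (the 500 μs reply time is an assumed example value):

```python
C = 299_792_458.0      # speed of light, m/s
XPDR_DELAY = 3.0e-6    # fixed transponder turnaround delay, s

def slant_range_m(interrogation_to_reply_s: float) -> float:
    """Slant range implied by the total interrogation-to-reply time."""
    return (interrogation_to_reply_s - XPDR_DELAY) * C / 2.0

print(f"{slant_range_m(500e-6) / 1852:.1f} NM")  # a 500 us round trip: ~40.2 NM
```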
Interrogations consist of three pulses, 0.8 μs in duration, referred to as P1, P2 and P3. The timing between pulses P1 and P3 determines the mode (or question) of the interrogation, and thus what the nature of the reply should be. P2 is used in side-lobe suppression, explained later.
Mode 3/A uses a P1-to-P3 spacing of 8.0 μs and is used to request the beacon code, which was assigned to the aircraft by the controller to identify it. Mode C uses a spacing of 21 μs and requests the aircraft's pressure altitude, provided by the altitude encoder. Mode 2 uses a spacing of 5 μs and requests the aircraft to transmit its military identification code. The latter is assigned only to military aircraft, so only a small percentage of aircraft actually reply to a mode 2 interrogation.
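Since the mode is carried entirely in the P1-to-P3 timing, a transponder's mode decoding reduces to matching the measured spacing against a small table. A simplified sketch using the spacings listed above (the ±0.2 μs tolerance is an assumed figure for illustration):

```python
# Nominal P1-to-P3 spacings, in microseconds, from the text above.
P1_P3_SPACING_US = {5.0: "mode 2", 8.0: "mode 3/A", 21.0: "mode C"}

def classify_interrogation(spacing_us: float, tol_us: float = 0.2) -> str:
    for nominal, mode in P1_P3_SPACING_US.items():
        if abs(spacing_us - nominal) <= tol_us:
            return mode
    return "unrecognized"

print(classify_interrogation(8.1))    # mode 3/A
print(classify_interrogation(21.05))  # mode C
```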
Replies to interrogations consist of 15 time slots, each 1.45 μs in width, encoding 12 + 1 bits of information. The reply is encoded by the presence or absence of a 0.45 μs pulse in each slot. These are labeled as follows:
F1 C1 A1 C2 A2 C4 A4 X B1 D1 B2 D2 B4 D4 F2 SPI
The F1 and F2 pulses are framing pulses and are always transmitted by the aircraft transponder. They are used by the interrogator to identify legitimate replies. They are spaced 20.3 μs apart.
The A4, A2, A1, B4, B2, B1, C4, C2, C1, D4, D2, D1 pulses constitute the "information" contained in the reply. These bits are used in different ways for each interrogation mode.
For mode A, each digit in the transponder code (A, B, C, or D) may be a number from zero to seven. These octal digits are transmitted as groups of three pulses each, with the A slots reserved for the first digit, B for the second, and so on.
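A reply is thus four octal digits spread across interleaved pulse positions. A small sketch of that packing (the function name and dictionary representation are illustrative, not from any real decoder):

```python
def encode_squawk(code: str) -> dict:
    """Map a four-digit octal squawk, e.g. '1200', onto the A/B/C/D pulses."""
    pulses = {}
    for group, digit in zip("ABCD", code):   # first digit -> A pulses, etc.
        value = int(digit, 8)                # each digit is octal (0-7)
        for weight in (1, 2, 4):
            pulses[f"{group}{weight}"] = bool(value & weight)
    return pulses

print(encode_squawk("1200"))   # only the A1 and B2 pulses are present
```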
In a mode C reply, the altitude is encoded as a Gillham code, which uses Gray code and is delivered to the transponder over the Gillham interface. The Gillham code can represent a wide range of altitudes in 100-foot (30 m) increments. The altitude transmitted is pressure altitude and is corrected for the altimeter setting at the ATC facility. If no encoder is attached, the transponder may optionally transmit only framing pulses (most modern transponders do).
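Decoding a mode C reply back into feet follows the widely published Gillham algorithm; the version below mirrors what open-source Mode S decoders typically do, and the bit ordering and helper names are assumptions for illustration (invalid C-bit combinations are not checked):

```python
def gray_to_binary(g: int) -> int:
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def gillham_altitude_ft(d2, d4, a1, a2, a4, b1, b2, b4, c1, c2, c4) -> int:
    # D2 D4 A1 A2 A4 B1 B2 B4 form a Gray code counting 500 ft steps.
    n500 = gray_to_binary(
        (d2 << 7) | (d4 << 6) | (a1 << 5) | (a2 << 4)
        | (a4 << 3) | (b1 << 2) | (b2 << 1) | b4)
    # C1 C2 C4 select the 100 ft position within the 500 ft band.
    n100 = gray_to_binary((c1 << 2) | (c2 << 1) | c4)
    if n100 == 7:
        n100 = 5
    if n500 % 2:                  # the sub-code runs backwards in odd bands
        n100 = 6 - n100
    return n500 * 500 + n100 * 100 - 1300   # minimum encodable: -1200 ft
```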
In a mode 3 reply, the information is the same as a mode A reply in that there are 4 digits transmitted between 0 and 7. The term mode 3 is utilized by the military, whereas mode A is the civilian term.
The X bit is currently only used for test targets. This bit was originally transmitted by BOMARC missiles that were used as air-launched test targets. This bit may be used by drone aircraft.
The SPI pulse is positioned 4.35 μs past the F2 pulse (three time slots) and is used as a "Special Identification Pulse". It is turned on by the identity control on the transponder in the aircraft cockpit when requested by air traffic control. When the controller requests the pilot to ident and the identity control is activated, the SPI bit is added to the reply for about 20 seconds (two to four rotations of the interrogator antenna), thereby highlighting the track on the controller's display.
The SSR's directional antenna is never perfect; inevitably it will "leak" lower levels of RF energy in off-axis directions, known as side lobes. When aircraft are close to the ground station, the side-lobe signals are often strong enough to elicit a reply from their transponders even when the antenna is not pointing at them. This can cause ghosting, where an aircraft's target may appear in more than one location on the radar scope. In extreme cases, an effect known as ring-around occurs, where the transponder replies excessively, resulting in an arc or circle of replies centered on the radar site.
To combat these effects, side-lobe suppression (SLS) is used. SLS employs a third pulse, P2, spaced 2 μs after P1. This pulse is transmitted from the omnidirectional antenna (or the antenna difference channel) at the ground station, rather than from the directional antenna (or the sum channel). The power output from the omnidirectional antenna is calibrated so that, as received by an aircraft, the P2 pulse is stronger than either P1 or P3 except when the directional antenna is pointing directly at the aircraft. By comparing the relative strengths of P2 and P1, airborne transponders can determine whether the antenna was pointing at the aircraft when the interrogation was received. The power to the difference antenna pattern (for systems so equipped) is not adjusted relative to that of the P1 and P3 pulses. Algorithms in the ground receivers delete replies on the edge of the two beam patterns.
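The airborne side of that test is essentially an amplitude comparison. A deliberately simplified sketch (real transponders apply a tolerance band rather than a strict inequality, and the amplitudes below are arbitrary units):

```python
def should_reply(p1_amplitude: float, p2_amplitude: float) -> bool:
    """Reply only when P1 dominates P2, i.e. the main beam is on us."""
    return p1_amplitude > p2_amplitude

print(should_reply(10.0, 3.0))   # True:  main-beam interrogation -> reply
print(should_reply(2.0, 5.0))    # False: side-lobe interrogation -> suppress
```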
To combat these effects, more recent installations still use side-lobe suppression, but differently. The revised SLS employs a third pulse spaced 2 μs either before P3 (a new P2 position) or after P3 (which should be called P4 and appears in the Mode S radar and TCAS specifications). This pulse is transmitted from the directional antenna at the ground station, and its power output is the same as that of the P1 and P3 pulses. The action to be taken is specified in the revised C74c as:
2.6 Decoding Performance.
c. Side-lobe Suppression. The transponder must be suppressed for a period of 35 ±10 microseconds following receipt of a pulse pair of proper spacing and suppression action must be capable of being reinitiated for the full duration within 2 microseconds after the end of any suppression period. The transponder must be suppressed with a 99 percent efficiency over a received signal amplitude range between 3 db above minimum triggering level and 50 db above that level and upon receipt of properly spaced interrogations when the received amplitude of P2 is equal to or in excess of the received amplitude of P1 and spaced 2.0 ±0.15 microsecond from P3.
Any requirement for the transponder to detect and act upon a P2 pulse 2 μs after P1 has been removed from the revised TSO C74c specification.
Most "modern" transponders (manufactured since 1973) have an "SLS" circuit which suppresses reply on receipt of any two pulses in any interrogation spaced 2.0 microseconds apart that are above the MTL Minimum Triggering Level threshold of the receiver amplitude discriminator (P1->P2 or P2->P3 or P3->P4). This approach was used to comply with the original C74c and but also complies with the provisions of the new and improved C74c.
The FAA refers to the non-responsiveness of transponders compliant with the revised TSO C74c to Mode S compatible radars and TCAS as "The Terra Problem", and over the years has issued Airworthiness Directives (ADs) against various transponder manufacturers at irregular intervals. The ghosting and ring-around problems have recurred on the more modern radars.
To combat these effects most recently, great emphasis has been placed upon software solutions. It is highly likely that one of those software algorithms was the proximate cause of a recent mid-air collision, as one airplane was reported as showing the altitude from its pre-flight paper-filed flight plan rather than the altitude assigned by the ATC controller (see the reports and observations in the ATC Controlled Airplane Passenger Study of how radar worked, referenced below).
See the reference section below for errors in performance standards for ATCRBS transponders in the US.
See the reference section below for FAA Technician Study of in-situ transponders.
The beacon code and altitude were historically displayed verbatim on the radar scope next to the target; however, modernization has extended the radar data processor with a flight data processor, or FDP. The FDP automatically assigns beacon codes to flight plans, and when that beacon code is received from an aircraft, the computer can associate it with flight-plan information to display immediately useful data, such as the aircraft callsign, the aircraft's next navigational fix, and assigned and current altitude, near the target in a data block.
The ATCRBS does not, however, display aircraft heading.[1]
Mode S, or mode select, despite also being called a mode, is actually a radically improved system intended to replace ATCRBS altogether. A few countries have mandated mode S, and many other countries, including the United States, have begun phasing out ATCRBS in favor of this system. Mode S is designed to be fully backward compatible with existing ATCRBS technology.
Mode S, despite being called a replacement transponder system for ATCRBS, is actually a data packet protocol which can be used to augment ATCRBS transponder positioning equipment (radar and TCAS).
One major improvement of Mode S is the ability to interrogate a single aircraft at a time. With old ATCRBS technology, all aircraft within the beam pattern of the interrogating station will reply. In an airspace with multiple interrogation stations, ATCRBS transponders in aircraft can be overwhelmed. By interrogating one aircraft at a time, workload on the aircraft transponder is greatly reduced.
The second major improvement is increased azimuth accuracy. With PSRs and older SSRs, the azimuth of the aircraft is determined by the half-split (centroid) method: the azimuths of the first and last replies from the aircraft are recorded as the radar beam sweeps past its position, and the midpoint between the start and stop azimuths is used as the aircraft position. With MSSR (monopulse secondary surveillance radar) and mode S, the radar can determine azimuth from a single reply. This is calculated from the RF phase of the aircraft reply, as measured by the sum and difference antenna elements, and is called monopulse. The monopulse method yields superior azimuth resolution and removes target jitter from the display.
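For contrast, the legacy half-split estimate described above amounts to taking the midpoint of the azimuth window over which replies arrived. A minimal sketch, where handling the 360° wrap is the only subtlety:

```python
def centroid_azimuth_deg(first_reply_deg: float, last_reply_deg: float) -> float:
    """Half-split azimuth: midpoint of the reply window, modulo 360."""
    span = (last_reply_deg - first_reply_deg) % 360.0
    return (first_reply_deg + span / 2.0) % 360.0

print(centroid_azimuth_deg(44.0, 48.0))   # 46.0
print(centroid_azimuth_deg(359.0, 3.0))   # 1.0 -- correct across the wrap
```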
The Mode S system also includes a more robust communications protocol for a wider variety of information exchange. As of 2009, this capability is becoming mandatory across Europe, with some states already requiring its use.
Diversity Mode S transponders may be implemented for the purpose of improving air-to-air surveillance and communications. Such systems shall employ two antennas, one mounted on the top and the other on the bottom of the aircraft. Appropriate switching and signal-processing channels to select the best antenna on the basis of the characteristics of the received interrogation signals shall also be provided. Such diversity systems, in their installed configuration, shall not result in degraded performance relative to that which would have been produced by a single system having a bottom-mounted antenna.
Mode S was developed as a solution to frequency congestion on both the uplink and downlink frequencies (1030 and 1090 MHz). The high coverage of radar service available today means that some radar sites receive transponder replies from interrogations that were initiated by other nearby radar sites. This results in FRUIT, or False Replies Unsynchronous In Time,[1] the reception of replies at a ground station that do not correspond with an interrogation. This problem has worsened with the increasing prevalence of technologies like TCAS, in which individual aircraft interrogate one another to avoid collisions.
Finally, technology improvements have made transponders increasingly affordable, such that today almost all aircraft are equipped with them. As a result, the sheer number of aircraft replying to SSRs has increased. Defruiter circuitry clears FRUIT from the display.
Mode S attempts to reduce these[which?] problems by assigning aircraft a permanent mode S address, derived from the aircraft's internationally assigned registration number.[citation needed] It then provides a mechanism by which an aircraft can be selected, or interrogated such that no other aircraft replies.
The system also has provisions for transferring arbitrary data both to and from a transponder.[citation needed] This aspect of mode S makes it a building block for many other technologies, such as TCAS 2, Traffic Information Service (TIS), and Automatic Dependent Surveillance-Broadcast.[citation needed] | https://en.wikipedia.org/wiki/Air_traffic_control_radar_beacon_system
Identification, friend or foe (IFF) is a combat identification system designed for command and control. It uses a transponder that listens for an interrogation signal and then sends a response that identifies the broadcaster. IFF systems usually use radar frequencies, but other electromagnetic frequencies, radio or infrared, may be used.[1] It enables military and civilian air traffic control interrogation systems to identify aircraft, vehicles or forces as friendly, as opposed to neutral or hostile, and to determine their bearing and range from the interrogator. IFF is used by both military and civilian aircraft. IFF was first developed during World War II, following the arrival of radar and several friendly fire incidents.
IFF can only positively identify friendly aircraft or other forces.[2][3][4][5] If an IFF interrogation receives no reply or an invalid reply, the object is not positively identified as a foe: friendly forces may not properly reply to IFF for various reasons, such as equipment malfunction, and parties in the area not involved in the combat, such as civilian light general aviation aircraft, may not carry a transponder.
IFF is a tool within the broader military activity of combat identification (CID), the characterization of objects detected in the field of combat sufficiently accurately to support operational decisions. The broadest characterization is that of friend, enemy, neutral, or unknown. CID not only can reduce friendly fire incidents but also contributes to overall tactical decision-making.[6]
With the successful deployment of radar systems for air defence during World War II, combatants were immediately confronted with the difficulty of distinguishing friendly aircraft from hostile ones; by that time, aircraft were flown at high speed and altitude, making visual identification impossible, and the targets showed up as featureless blips on the radar screen. This led to incidents such as the Battle of Barking Creek over Britain[7][8][9] and the air attack on the fortress of Koepenick over Germany.[10][11]
Already before the deployment of their Chain Home radar system (CH), the RAF had considered the problem of IFF. Robert Watson-Watt had filed patents on such systems in 1935 and 1936. By 1938, researchers at Bawdsey Manor began experiments with "reflectors" consisting of dipole antennas tuned to resonate at the primary frequency of the CH radars. When a pulse from the CH transmitter hit the aircraft, the antennas would resonate for a short time, increasing the amount of energy returned to the CH receiver. The antenna was connected to a motorized switch that periodically shorted it out, preventing it from producing a signal. This caused the return on the CH set to periodically lengthen and shorten as the antenna was turned on and off. In practice, the system was found to be too unreliable to use; the return was highly dependent on the direction the aircraft was moving relative to the CH station, and often returned little or no additional signal.[12]
It had been suspected from the start that this system would be of little use in practice. When that turned out to be the case, the RAF turned to an entirely different system that was also being planned. This consisted of a set of tracking stations using HF/DF radio direction finders. The aircraft's voice communication radios were modified to send out a 1 kHz tone for 14 seconds every minute, allowing the stations ample time to measure the aircraft's bearing. Several such stations were assigned to each "sector" of the air defence system and sent their measurements to a plotting station at sector headquarters, which used triangulation to determine the aircraft's location. Known as "pip-squeak", the system worked, but was labour-intensive and did not display its information directly to the radar operators; the information had to be forwarded to them over the telephone. A system that worked directly with the radar was clearly desirable.[13]
The first active IFF transponder (transmitter/responder) was the IFF Mark I, used experimentally in 1939. This used a regenerative receiver, which fed a small amount of the amplified output back into the input, strongly amplifying even small signals as long as they were of a single frequency (like Morse code, but unlike voice transmissions). They were tuned to the signal from the CH radar (20–30 MHz), amplifying it so strongly that it was broadcast back out the aircraft's antenna. Since the signal was received at the same time as the original reflection of the CH signal, the result was a lengthened "blip" on the CH display which was easily identifiable. In testing, it was found that the unit would often overpower the radar or produce too little signal to be seen, and at the same time, new radars were being introduced using new frequencies.
Instead of putting the Mark I into production, a new IFF Mark II was introduced in early 1940. Mark II had a series of separate tuners inside, tuned to different radar bands, that it stepped through using a motorized switch, while an automatic gain control solved the problem of it sending out too much signal. Mark II was technically complete as the war began, but a lack of sets meant it was not available in quantity, and only a small number of RAF aircraft carried it by the time of the Battle of Britain. Pip-squeak was kept in operation during this period, but as the Battle ended, IFF Mark II was quickly put into full operation. Pip-squeak was still used for areas over land where CH did not cover, as well as an emergency guidance system.[14]
Even by 1940 the complex system of Mark II was reaching its limits, while new radars were constantly being introduced. By 1941, a number of sub-models were introduced that covered different combinations of radars, common naval ones for instance, or those used by the RAF. But the introduction of radars based on the microwave-frequency cavity magnetron rendered this obsolete; there was simply no way to make a responder operating in this band using contemporary electronics.
In 1940, English engineer Freddie Williams had suggested using a single separate frequency for all IFF signals, but at the time there seemed no pressing need to change the existing system. With the introduction of the magnetron, work on this concept began at the Telecommunications Research Establishment as the IFF Mark III. This was to become the standard for the Western Allies for most of the war.
Mark III transponders were designed to respond to specific 'interrogators', rather than replying directly to received radar signals. These interrogators worked on a limited selection of frequencies, no matter what radar they were paired with. The system also allowed limited communication, including the ability to transmit a coded 'Mayday' response. The IFF sets were designed and built by Ferranti in Manchester to Williams' specifications. Equivalent sets were manufactured in the US, initially as copies of British sets, so that allied aircraft would be identified upon interrogation by each other's radar.[14]
IFF sets were obviously highly classified. Thus, many of them were wired with explosives in the event the aircrew bailed out or crash landed. Jerry Proc reports:
Alongside the switch to turn on the unit was the IFF destruct switch to prevent its capture by the enemy. Many a pilot chose the wrong switch and blew up his IFF unit. The thud of a contained explosion and the acrid smell of burning insulation in the cockpit did not deter many pilots from destroying IFF units time and time again. Eventually, the self-destruct switch was secured by a thin wire to prevent its accidental use.[15]
FuG 25a Erstling (English: Firstborn, Debut) was developed in Germany in 1940. It was tuned to the low-VHF band at 125 MHz used by the Freya radar, and an adaptor was used with the low-UHF band at 550–580 MHz used by Würzburg. Before a flight, the transceiver was set up with a selected day code of ten bits which was dialed into the unit. To start the identification procedure, the ground operator switched the pulse frequency of his radar from 3,750 Hz to 5,000 Hz. The airborne receiver decoded that and started to transmit the day code. The radar operator would then see the blip lengthen and shorten in the given code. The IFF transmitter worked on 168 MHz with a power of 400 watts (PEP).
The system included a way for ground controllers to determine whether an aircraft had the right code, but it did not include a way for the transponder to reject signals from other sources. British military scientists found a way of exploiting this by building their own IFF transmitter called Perfectos, which was designed to trigger a response from any FuG 25a system in the vicinity. When a FuG 25a responded on its 168 MHz frequency, the signal was received by the antenna system of an AI Mk. IV radar, which originally operated at 212 MHz. By comparing the strength of the signal on different antennas, the direction to the target could be determined. Mounted on Mosquitos, the Perfectos severely limited German use of the FuG 25a.
The United States Naval Research Laboratory had been working on their own IFF system since before the war. It used a single interrogation frequency, like the Mark III, but differed in that it used a separate responder frequency. Responding on a different frequency has several practical advantages, most notably that the response from one IFF cannot trigger another IFF on another aircraft. But it requires a complete transmitter for the responder side of the circuitry, in contrast to the greatly simplified regenerative system used in the British designs. This technique is now known as a cross-band transponder.
When the Mark II was revealed in 1941 during the Tizard Mission, it was decided to use it and take the time to further improve the experimental American system. The result was what became IFF Mark IV. The main difference between this and earlier models is that it worked on higher frequencies, around 600 MHz, which allowed much smaller antennas. However, this also turned out to be close to the frequencies used by the German Würzburg radar, and there were concerns that it would be triggered by that radar and the transponder responses would be picked up on its radar display. This would immediately reveal the IFF's operational frequencies.
This led to a US–British effort to make a further improved model, the Mark V, also known as the United Nations Beacon or UNB. This moved to still higher frequencies around 1 GHz but operational testing was not complete when the war ended. By the time testing was finished in 1948, the much improved Mark X was beginning its testing and Mark V was abandoned.
By 1943, Donald Barchok had filed a patent for a radar system using the abbreviation IFF in his text with only parenthetic explanation, indicating that this acronym had become an accepted term.[16] In 1945, Emile Labin and Edwin Turner filed patents for radar IFF systems where the outgoing radar signal and the transponder's reply signal could each be independently programmed with binary codes by setting arrays of toggle switches; this allowed the IFF code to be varied from day to day or even hour to hour.[17][18]
Mark X started as a purely experimental device operating at frequencies above 1 GHz; the name refers to "experimental", not "number 10". As development continued, it was decided to introduce an encoding system known as the "Selective Identification Feature", or SIF. SIF allowed the return signal to contain up to 12 pulses, representing four octal digits of 3 bits each. Depending on the timing of the interrogation signal, SIF would respond in several ways. Mode 1 indicated the type of aircraft or its mission (cargo or bomber, for instance) while Mode 2 returned a tail code.
Mark X began to be introduced in the early 1950s. This was during a period of great expansion of the civilian air transport system, and it was decided to use slightly modified Mark X sets for these aircraft as well. These sets included a new military Mode 3, which was essentially identical to Mode 2, returning a four-digit code, but used a different interrogation pulse, allowing the aircraft to identify whether the query came from a military or civilian radar. For civilian aircraft this same system was known as Mode A, and because they were identical, they are generally known as Mode 3/A.
Several new modes were also introduced during this process. Civilian modes B and D were defined but never used. Mode C responded with a 12-bit number encoded using Gillham code, which represented the altitude as (that number × 100) − 1200 feet. Radar systems can easily locate an aircraft in two dimensions, but measuring altitude is a more complex problem which, especially in the 1950s, added significantly to the cost of the radar system. By placing this function on the IFF, the same information could be returned for little additional cost, essentially that of adding a digitizer to the aircraft's altimeter.
Modern interrogators generally send out a series of challenges on Mode 3/A and then Mode C, allowing the system to combine the identity of the aircraft with its altitude and location from the radar.
The current IFF system is the Mark XII. This works on the same frequencies as Mark X, and supports all of its military and civilian modes.[citation needed]
It had long been considered a problem that the IFF responses could be triggered by any properly formed interrogation, and those signals were simply two short pulses of a single frequency. This allowed enemy transmitters to trigger the response, and using triangulation, an enemy could determine the location of the transponder. The British had already used this technique against the Germans during WWII, and it was used by the USAF against VPAF aircraft during the Vietnam War.
Mark XII differs from Mark X through the addition of the new military Mode 4. This works in a fashion similar to Mode 3/A, with the interrogator sending out a signal that the IFF responds to. There are two key differences, however.
One is that the interrogation pulse is followed by a 12-bit code similar to the ones sent back by the Mark 3 transponders. The encoded number changes day-to-day. When the number is received and decoded in the aircraft transponder, a further cryptographic encoding is applied. If the result of that operation matches the value dialled into the IFF in the aircraft, the transponder replies with a Mode 3 response as before. If the values do not match, it does not respond.
This solves the problem of the aircraft transponder replying to false interrogations, but does not completely solve the problem of locating the aircraft through triangulation. To solve this problem, a delay is added to the response signal that varies based on the code sent from the interrogator. When received by an enemy that does not see the interrogation pulse, which is generally the case as they are often below theradar horizon, this causes a random displacement of the return signal with every pulse. Locating the aircraft within the set of returns is a difficult process.
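The shape of the scheme just described, a keyed challenge-response check plus a challenge-dependent reply delay, can be sketched in miniature. To be clear, the real Mode 4 cryptosystem is classified and nothing below reflects it; HMAC, the key, and the delay derivation are all stand-ins chosen purely to illustrate the structure:

```python
import hashlib
import hmac

DAILY_KEY = b"key-of-the-day"   # stand-in for the real crypto fill

def mode4_reply_delay_us(challenge: bytes, code_in_cockpit: bytes):
    """Return a reply delay if the challenge validates, else None (stay silent)."""
    digest = hmac.new(DAILY_KEY, challenge, hashlib.sha256).digest()
    if digest[:2] != code_in_cockpit:   # mismatch: do not respond at all
        return None
    return (digest[2] % 16) * 0.1       # challenge-derived delay, microseconds

print(mode4_reply_delay_us(b"\x01\x02", b"no"))   # wrong code -> None (silence)
```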
During the 1980s, a new civilian mode, Mode S, was added that allowed greatly increased amounts of data to be encoded in the returned signal. This was used to encode the location of the aircraft from the navigation system. This is a basic part of thetraffic collision avoidance system(TCAS), which allows commercial aircraft to know the location of other aircraft in the area and avoid them without the need for ground operators.
The basic concepts from Mode S were then militarized as Mode 5, which is simply acryptographicallyencoded version of the Mode S data.
The IFF of World War II and Soviet military systems (1946 to 1991) used coded radar signals (called cross-band interrogation, or CBI) to automatically trigger the transponder in an aircraft illuminated by the radar. Radar-based aircraft identification is also called secondary surveillance radar in both military and civil usage, with primary radar bouncing an RF pulse off of the aircraft to determine position. George Charrier, working for RCA, filed for a patent for such an IFF device in 1941. It required the operator to perform several adjustments to the radar receiver to suppress the image of the natural echo on the radar receiver, so that visual examination of the IFF signal would be possible.[19]
The United States and other NATO countries started using a system called Mark XII in the late twentieth century; Britain had not until then implemented an IFF system compatible with that standard, but then developed a program for a compatible system known as successor IFF (SIFF).[20]
Beginning around 2016, most NATO member states began upgrading their Mark XII systems to Mark XIIA Mode 5 where practicable. The transition from the legacy Mode 4 to Mode 5, however, has encountered several integration challenges, such as cryptographic key management, secure enrollment procedures, and ensuring interoperability with diverse legacy hardware. To mitigate these risks, backward compatibility with Mode 4 has been maintained longer than originally planned. According to DSCA Memorandum 18-14, this strategy permits a phased, mixed-mode operation until full Mode 5 fielding is achieved.[21][22] DOT&E has overseen a series of test and evaluation efforts to ensure that Mode 5 meets NATO and DOD standards. In July 2014, DOT&E published the Mark XIIA Mode 5 Joint Operational Test Approach (JOTA) 2 Interoperability Assessment. This assessment evaluated the performance of various interrogators and transponders within a system-of-systems environment, based on a joint operational test conducted off the U.S. East Coast, and identified both successes and deficiencies that necessitated additional testing.[22] More recently, the 2021 DOT&E Mark XIIA Mode 5 Test Methodology outlined updated evaluation criteria, test procedures, and integration strategies aimed at resolving persistent issues and verifying that the new systems conform to the required operational and security standards.[23] In accordance with STANAG 4570, it is anticipated that by 2030 every interrogator and transponder within NATO will be Mode 5 capable, further standardizing operations and enhancing overall security.[citation needed]
Modes 4 and 5 are designated for use byNATOforces.
In World War I, eight submarines were sunk by friendly fire, and in World War II nearly twenty were sunk this way.[26] Still, IFF was not regarded as a high concern by the US military before the 1990s, as not many other countries possessed submarines.[dubious–discuss][27]
IFF methods analogous to aircraft IFF have been deemed unfeasible for submarines because they would make submarines easier to detect. Thus, having friendly submarines broadcast a signal, or somehow increase the submarine's signature (acoustic, magnetic, etc.), is not considered viable.[27] Instead, submarine IFF is done by carefully defining areas of operation. Each friendly submarine is assigned a patrol area, where the presence of any other submarine is deemed hostile and open to attack. Further, within these assigned areas, surface ships and aircraft refrain from any anti-submarine warfare (ASW); only the resident submarine may target other submarines in its own area. Ships and aircraft may still engage in ASW in areas that have not been assigned to any friendly submarines.[27] Navies also use databases of acoustic signatures to attempt to identify submarines, but acoustic data can be ambiguous, and several countries deploy similar classes of submarines.[28] | https://en.wikipedia.org/wiki/Selective_Identification_Feature
Identification, friend or foe(IFF) is acombat identificationsystem designed forcommand and control. It uses atransponderthat listens for aninterrogationsignal and then sends aresponsethat identifies the broadcaster. IFF systems usually useradarfrequencies, but other electromagnetic frequencies, radio or infrared, may be used.[1]It enables military and civilianair traffic controlinterrogation systems to identify aircraft, vehicles or forces as friendly, as opposed to neutral or hostile, and to determine their bearing and range from the interrogator. IFF is used by both military and civilian aircraft. IFF was first developed duringWorld War II, with the arrival of radar, and severalfriendly fireincidents.
IFF can only positively identify friendly aircraft or other forces.[2][3][4][5]If an IFF interrogation receives no reply or an invalid reply, the object is not positively identified as foe; friendly forces may not properly reply to IFF for various reasons such as equipment malfunction, and parties in the area not involved in the combat, such as civilian lightgeneral aviationaircraft may not carry a transponder.
IFF is a tool within the broader military action of combat identification (CID), the characterization of objects detected in the field of combat sufficiently accurately to support operational decisions. The broadest characterization is that of friend, enemy, neutral, or unknown. CID not only can reduce friendly fire incidents, but also contributes to overall tactical decision-making.[6]
With the successful deployment of radar systems forair defenceduringWorld War II, combatants were immediately confronted with the difficulty of distinguishing friendly aircraft from hostile ones; by that time, aircraft were flown at high speed and altitude, making visual identification impossible, and the targets showed up as featureless blips on the radar screen. This led to incidents such as theBattle of Barking Creek, over Britain,[7][8][9]and theair attack on the fortress of Koepenickover Germany.[10][11]
Already before the deployment of theirChain Home radar system(CH), theRAFhad considered the problem of IFF.Robert Watson-Watthad filed patents on such systems in 1935 and 1936. By 1938, researchers atBawdsey Manorbegan experiments with "reflectors" consisting ofdipole antennastuned to resonate to the primary frequency of the CH radars. When a pulse from the CH transmitter hit the aircraft, the antennas would resonate for a short time, increasing the amount of energy returned to the CH receiver. The antenna was connected to a motorized switch that periodically shorted it out, preventing it from producing a signal. This caused the return on the CH set to periodically lengthen and shorten as the antenna was turned on and off. In practice, the system was found to be too unreliable to use; the return was highly dependent on the direction the aircraft was moving relative to the CH station, and often returned little or no additional signal.[12]
It had been suspected from the start this system would be of little use in practice. When that turned out to be the case, the RAF turned to an entirely different system that was also being planned. This consisted of a set of tracking stations usingHF/DFradio direction finders. The aircraft voice communications radios were modified to send out a 1 kHz tone for 14 seconds every minute, allowing the stations ample time to measure the aircraft's bearing. Several such stations were assigned to each "sector" of the air defence system, and sent their measurements to a plotting station at sector headquarters, who usedtriangulationto determine the aircraft's location. Known as "pip-squeak", the system worked, but was labour-intensive and did not display its information directly to the radar operators, the information had to be forwarded to them over the telephone. A system that worked directly with the radar was clearly desirable.[13]
The first active IFFtransponder(transmitter/responder) was the IFF Mark I which was used experimentally in 1939. This used aregenerative receiver, which fed a small amount of the amplified output back into the input, strongly amplifying even small signals as long as they were of a single frequency (like Morse code, but unlike voice transmissions). They were tuned to the signal from the CH radar (20–30 MHz), amplifying it so strongly that it was broadcast back out the aircraft's antenna. Since the signal was received at the same time as the original reflection of the CH signal, the result was a lengthened "blip" on the CH display which was easily identifiable. In testing, it was found that the unit would often overpower the radar or produce too little signal to be seen, and at the same time, new radars were being introduced using new frequencies.
Instead of putting Mark I into production, a newIFF Mark IIwas introduced in early 1940. Mark II had a series of separate tuners inside tuned to different radar bands that it stepped through using a motorized switch, while anautomatic gain controlsolved the problem of it sending out too much signal. Mark II was technically complete as the war began, but a lack of sets meant it was not available in quantity and only a small number of RAF aircraft carried it by the time of theBattle of Britain. Pip-squeak was kept in operation during this period, but as the Battle ended, IFF Mark II was quickly put into full operation. Pip-squeak was still used for areas over land where CH did not cover, as well as an emergency guidance system.[14]
Even by 1940 the complex system of Mark II was reaching its limits while new radars were being constantly introduced. By 1941, a number of sub-models were introduced that covered different combinations of radars, common naval ones for instance, or those used by the RAF. But the introduction of radars based on themicrowave-frequencycavity magnetronrendered this obsolete; there was simply no way to make a responder operating in this band using contemporary electronics.
In 1940, English engineerFreddie Williamshad suggested using a single separate frequency for all IFF signals, but at the time there seemed no pressing need to change the existing system. With the introduction of the magnetron, work on this concept began at theTelecommunications Research Establishmentas theIFF Mark III. This was to become the standard for theWestern Alliesfor most of the war.
Mark III transponders were designed to respond to specific 'interrogators', rather than replying directly to received radar signals. These interrogators worked on a limited selection of frequencies, no matter what radar they were paired with. The system also allowed limited communication to be made, including the ability to transmit a coded 'Mayday' response. The IFF sets were designed and built byFerrantiinManchesterto Williams' specifications. Equivalent sets were manufactured in the US, initially as copies of British sets, so that allied aircraft would be identified upon interrogation by each other's radar.[14]
IFF sets were obviously highly classified. Thus, many of them were wired with explosives in the event the aircrew bailed out or crash landed. Jerry Proc reports:
Alongside the switch to turn on the unit was the IFF destruct switch to prevent its capture by the enemy. Many a pilot chose the wrong switch and blew up his IFF unit. The thud of a contained explosion and the acrid smell of burning insulation in the cockpit did not deter many pilots from destroying IFF units time and time again. Eventually, the self destruct switch was secured by a thin wire to prevent its accidental use."[15]
FuG 25aErstling(English: Firstborn, Debut) was developed in Germany in 1940. It was tuned to the low-VHFband at 125 MHz used by theFreya radar, and an adaptor was used with the low-UHF-banded 550–580 MHz used byWürzburg. Before a flight, the transceiver was set up with a selected day code of tenbitswhich was dialed into the unit. To start the identification procedure, the ground operator switched the pulse frequency of his radar from 3,750 Hz to 5,000 Hz. The airborne receiver decoded that and started to transmit the day code. The radar operator would then see the blip lengthen and shorten in the given code. The IFF transmitter worked on 168 MHz with a power of 400 watts (PEP).
The system included a way for ground controllers to determine whether an aircraft had the right code or not but it did not include a way for the transponder to reject signals from other sources.Britishmilitary scientists found a way of exploiting this by building their own IFF transmitter calledPerfectos, which were designed to trigger a response from any FuG 25a system in the vicinity. When an FuG 25a responded on its 168 MHz frequency, the signal was received by the antenna system from anAI Mk. IV radar, which originally operated at 212 MHz. By comparing the strength of the signal on different antennas the direction to the target could be determined. Mounted onMosquitos, the "Perfectos" severely limited German use of the FuG 25a.
TheUnited States Naval Research Laboratoryhad been working on their own IFF system since before the war. It used a single interrogation frequency, like the Mark III, but differed in that it used a separate responder frequency. Responding on a different frequency has several practical advantages, most notably that the response from one IFF cannot trigger another IFF on another aircraft. But it requires a complete transmitter for the responder side of the circuitry, in contrast to the greatly simplified regenerative system used in the British designs. This technique is now known as across-band transponder.
When the Mark II was revealed in 1941 during theTizard Mission, it was decided to use it and take the time to further improve their experimental system. The result was what became IFF Mark IV. The main difference between this and earlier models is that it worked on higher frequencies, around 600 MHz, which allowed much smaller antennas. However, this also turned out to be close to the frequencies used by the GermanWürzburg radarand there were concerns that it would be triggered by that radar and the transponder responses would be picked on its radar display. This would immediately reveal the IFF's operational frequencies.
This led to a US–British effort to make a further improved model, the Mark V, also known as the United Nations Beacon or UNB. This moved to still higher frequencies around 1 GHz but operational testing was not complete when the war ended. By the time testing was finished in 1948, the much improved Mark X was beginning its testing and Mark V was abandoned.
By 1943, Donald Barchok filed a patent for a radar system using the abbreviation IFF in his text with only parenthetic explanation, indicating that this acronym had become an accepted term.[16]In 1945, Emile Labin and Edwin Turner filed patents for radar IFF systems where the outgoing radar signal and the transponder's reply signal could each be independently programmed with a binary codes by setting arrays of toggle switches; this allowed the IFF code to be varied from day to day or even hour to hour.[17][18]
Mark X started as a purely experimental device operating at frequencies above 1 GHz;
the name refers to "experimental", not "number 10". As development continued it was decided to introduce an encoding system known as the "Selective Identification Feature", or SIF. SIF allowed the return signal to contain up to 12 pulses, representing fouroctaldigits of 3 bits each. Depending on the timing of the interrogation signal, SIF would respond in several ways. Mode 1 indicated the type of aircraft or its mission (cargo or bomber, for instance) while Mode 2 returned a tail code.
Mark X began to be introduced in the early 1950s. This was during a period of great expansion of the civilian air transport system, and it was decided to use slightly modified Mark X sets for these aircraft as well. These sets included a new military Mode 3 which was essentially identical to Mode 2, returning a four-digit code, but used a different interrogation pulse, allowing the aircraft to identify if the query was from a military or civilian radar. For civilian aircraft, this same system was known as Mode A, and because they were identical, they are generally known as Mode 3/A.
Several new modes were also introduced during this process. Civilian modes B and D were defined, but never used. Mode C responded with a 12-bit number encoded usingGillham code, which represented the altitude as (that number) x 100 feet - 1200. Radar systems can easily locate an aircraft in two dimensions, but measuring altitude is a more complex problem and, especially in the 1950s, added significantly to the cost of the radar system. By placing this function on the IFF, the same information could be returned for little additional cost, essentially that of adding a digitizer to the aircraft'saltimeter.
Modern interrogators generally send out a series of challenges on Mode 3/A and then Mode C, allowing the system to combine the identity of the aircraft with its altitude and location from the radar.
The current IFF system is the Mark XII. This works on the same frequencies as Mark X, and supports all of its military and civilian modes.[citation needed]
It had long been considered a problem that the IFF responses could be triggered by any properly formed interrogation, and those signals were simply two short pulses of a single frequency. This allowed enemy transmitters to trigger the response, and usingtriangulation, an enemy could determine the location of the transponder. The British had already used this technique against the Germans during WWII, and it was used by the USAF againstVPAFaircraft during theVietnam War.
Mark XII differs from Mark X through the addition of the new military Mode 4. This works in a fashion similar to Mode 3/A, with the interrogator sending out a signal that the IFF responds to. There are two key differences, however.
One is that the interrogation pulse is followed by a 12-bit code similar to the ones sent back by the Mark 3 transponders. The encoded number changes day-to-day. When the number is received and decoded in the aircraft transponder, a further cryptographic encoding is applied. If the result of that operation matches the value dialled into the IFF in the aircraft, the transponder replies with a Mode 3 response as before. If the values do not match, it does not respond.
This solves the problem of the aircraft transponder replying to false interrogations, but does not completely solve the problem of locating the aircraft through triangulation. To solve this problem, a delay is added to the response signal that varies based on the code sent from the interrogator. When received by an enemy that does not see the interrogation pulse, which is generally the case as they are often below theradar horizon, this causes a random displacement of the return signal with every pulse. Locating the aircraft within the set of returns is a difficult process.
During the 1980s, a new civilian mode, Mode S, was added that allowed greatly increased amounts of data to be encoded in the returned signal. This was used to encode the location of the aircraft from the navigation system. This is a basic part of thetraffic collision avoidance system(TCAS), which allows commercial aircraft to know the location of other aircraft in the area and avoid them without the need for ground operators.
The basic concepts from Mode S were then militarized as Mode 5, which is simply acryptographicallyencoded version of the Mode S data.
The IFF ofWorld War IIand Soviet military systems (1946 to 1991) used codedradarsignals (called cross-band interrogation, or CBI) to automatically trigger the aircraft's transponder in an aircraft illuminated by the radar. Radar-based aircraft identification is also calledsecondary surveillance radarin both military and civil usage, with primary radar bouncing an RF pulse off of the aircraft to determine position. George Charrier, working forRCA, filed for apatentfor such an IFF device in 1941. It required the operator to perform several adjustments to the radar receiver to suppress the image of the natural echo on the radar receiver, so that visual examination of the IFF signal would be possible.[19]
The United States and other NATO countries started using a system called Mark XII in the late twentieth century; Britain had not until then implemented an IFF system compatible with that standard, but then developed a program for a compatible system known as Successor IFF (SIFF).[20]
Beginning around 2016, most NATO member states began upgrading their Mark XII systems to Mark XIIA Mode 5 where practicable. The transition from the legacy Mode 4 to Mode 5, however, has encountered several integration challenges, such as cryptographic key management, secure enrollment procedures, and ensuring interoperability with diverse legacy hardware. To mitigate these risks, backward compatibility with Mode 4 has been maintained longer than originally planned. According to DSCA Memorandum 18-14, this strategy permits a phased, mixed-mode operation until full Mode 5 fielding is achieved.[21][22] DOT&E has overseen a series of test and evaluation efforts to ensure that Mode 5 meets NATO and DOD standards. In July 2014, DOT&E published the Mark XIIA Mode 5 Joint Operational Test Approach (JOTA) 2 Interoperability Assessment. This assessment evaluated the performance of various interrogators and transponders within a system-of-systems environment, based on a joint operational test conducted off the U.S. East Coast, and identified both successes and deficiencies that necessitated additional testing.[22] More recently, the 2021 DOT&E Mark XIIA Mode 5 Test Methodology outlined updated evaluation criteria, test procedures, and integration strategies aimed at resolving persistent issues and verifying that the new systems conform to the required operational and security standards.[23][citation needed] In accordance with STANAG 4570, it is anticipated that by 2030 every interrogator and transponder within NATO will be Mode 5 capable, further standardizing operations and enhancing overall security.[citation needed]
Modes 4 and 5 are designated for use by NATO forces.
In World War I, eight submarines were sunk by friendly fire, and in World War II nearly twenty were sunk this way.[26] Still, IFF was not regarded as a high concern by the US military before the 1990s, as few other countries possessed submarines.[dubious–discuss][27]
IFF methods analogous to aircraft IFF have been deemed unfeasible for submarines because they would make submarines easier to detect. Thus, having friendly submarines broadcast a signal, or somehow increase the submarine's signature (acoustic, magnetic, etc.), is not considered viable.[27] Instead, submarine IFF is done by carefully defining areas of operation. Each friendly submarine is assigned a patrol area, where the presence of any other submarine is deemed hostile and open to attack. Further, within these assigned areas, surface ships and aircraft refrain from any anti-submarine warfare (ASW); only the resident submarine may target other submarines in its own area. Ships and aircraft may still engage in ASW in areas that have not been assigned to any friendly submarines.[27] Navies also use databases of acoustic signatures to attempt to identify submarines, but acoustic data can be ambiguous, and several countries deploy similar classes of submarines.[28]
In aviation, a flight level (FL) is an aircraft's altitude as determined by a pressure altimeter using the International Standard Atmosphere. It is expressed in hundreds of feet or in metres. The altimeter setting used is the ISA sea-level pressure of 1013 hPa or 29.92 inHg. The actual surface pressure will vary from this at different locations and times. By using a standard pressure setting, every aircraft has the same altimeter setting, so vertical clearance can be maintained during cruise flight.[1]
Flight levels are used to ensure safe vertical separation between aircraft. Historically, altitude has been measured using an altimeter, essentially a calibrated barometer. An altimeter measures ambient air pressure, which decreases with increasing altitude following the barometric formula, and displays the corresponding altitude. If aircraft altimeters were not calibrated consistently, two aircraft could be flying at the same true altitude even though their altimeters appeared to show different altitudes.[2] Flight levels solve this by defining altitudes based on a standard altimeter setting: all aircraft operating at flight levels set 1013 hPa or 29.92 inHg. When descending through the published transition level, the altimeter is reset to the local surface pressure, so that it displays the correct altitude above sea level.
Flight levels[3] are described by a number, which is the nominal altitude, or pressure altitude, in hundreds of feet, and is always a multiple of 500 ft. Therefore, a pressure altitude of 32,000 ft (9,800 m) is referred to as "flight level 320". Where metric flight levels are used, the format is "flight level xx000 metres".
Flight levels are usually designated in writing as FLxxx, where xxx is a two- or three-digit number indicating the pressure altitude in units of 100 feet (30 m). In radio communications, FL290 would be stated as "flight level two niner zero".
While use of a standardised pressure setting facilitates separation of aircraft from each other, it does not provide the aircraft's actual altitude above sea level. Below the transition level (which varies worldwide), the altimeter is set to the local altimeter setting, which can be directly compared to the known elevation of the terrain. The pressure setting to achieve this varies with local atmospheric pressure. It is called QNH ("barometric pressure adjusted to sea level"), or "altimeter setting"; the current local value is available from various sources, including air traffic control and the local airport weather frequency or a METAR-issuing station.
The transition altitude (TA) is the altitude above sea level at which aircraft change from the use of local pressure to the use of standard pressure. When operating at or below the TA, aircraft altimeters are usually set to show the altitude above sea level.[4] Above the TA, the aircraft altimeter pressure setting is changed to the standard pressure setting of 1013 hectopascals (equivalent to millibars) or 29.92 inches of mercury, and the aircraft's altitude is then stated as a flight level instead of an altitude.
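A minimal sketch of that rule, assuming a single transition altitude and ignoring the transition layer and climb/descent details:

```python
STANDARD_HPA = 1013.25   # the ISA standard pressure setting

def altimeter_setting(altitude_ft: float, transition_altitude_ft: float,
                      local_qnh_hpa: float) -> tuple[float, str]:
    """Pressure setting and reporting convention for a given altitude:
    local QNH at or below the transition altitude, standard above it."""
    if altitude_ft <= transition_altitude_ft:
        return local_qnh_hpa, "altitude above mean sea level"
    return STANDARD_HPA, "flight level (pressure altitude / 100 ft)"

print(altimeter_setting(12000, 18000, 1008.0))  # local QNH, altitude
print(altimeter_setting(32000, 18000, 1008.0))  # 1013.25, flight level
```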
In the United States and Canada, the transition altitude is 18,000 ft (5,500 m).[5] In Europe, the transition altitude varies and can be as low as 3,000 ft (900 m). There are discussions to standardize the transition altitude within the Eurocontrol area.[6] In the United Kingdom, different airports have different transition altitudes, between 3,000 and 6,000 feet.[7]
On 25 November 2004 the Civil Aviation Authority of New Zealand raised New Zealand's transition altitude from 11,000 to 13,000 feet (3,400 to 4,000 m) and changed the transition level from FL130 to FL150.[8]
The transition level (TL) is the lowest flight level above the transition altitude. The table below shows the transition level according to transition altitude and QNH. When descending below the transition level, the pilot starts to refer to the altitude of the aircraft by setting the altimeter to the QNH for the region or airfield.
The transition layer is the airspace between the transition altitude and the transition level.
According to these definitions, the transition layer is 0–500 feet (0–150 m) thick. Aircraft are not normally assigned to fly at the transition level, as this would provide inadequate separation from traffic flying on QNH at the transition altitude. Instead, the lowest usable flight level is the transition level plus 500 ft.
However, in some countries, such as Norway,[9] the transition level is determined by adding a buffer of minimum 1,000 ft (300 m) (depending on QNH) to the transition altitude. Therefore, aircraft may be flying at both transition level and transition altitude, and still be vertically separated by at least 1,000 ft (300 m). In those areas the transition layer will be 1,000–1,500 ft (300–460 m) thick, depending on QNH.
In summary, the connection between transition altitude (TA), transition layer (TLYR), and transition level (TL) is
TL = TA + TLYR
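As a rough numeric illustration of this relation, the sketch below picks the lowest 500 ft flight level strictly above the transition altitude, assuming the common rule of thumb of roughly 27 ft of indicated altitude per hectopascal. Real transition levels are taken from published tables, not computed like this.

```python
FT_PER_HPA = 27.0   # rule-of-thumb pressure lapse near sea level (an assumption)

def transition_level_ft(transition_altitude_ft: float, qnh_hpa: float) -> int:
    """Lowest flight level (a multiple of 500 ft of pressure altitude)
    strictly above the transition altitude, i.e. TL = TA + TLYR."""
    # pressure altitude corresponding to the transition altitude today
    pressure_alt = transition_altitude_ft + (1013.25 - qnh_hpa) * FT_PER_HPA
    return (int(pressure_alt // 500) + 1) * 500

print(transition_level_ft(5000, 1013.25))  # 5500 -> FL055, a 500 ft layer
print(transition_level_ft(5000, 990.0))    # 6000 -> FL060 on a low-QNH day
```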
The semicircular rule (also known as the hemispheric rule) applies, in slightly different versions, to IFR flights inside controlled airspace in the UK and generally to IFR flights in the rest of the world.
The standard rule defines an East/West track split:
At FL 290 and above, if Reduced Vertical Separation Minima (RVSM) are not in use, 4,000 ft intervals are used to separate same-direction aircraft (instead of 2,000 ft intervals below FL 290), and only odd flight levels are assigned, independent of the direction of flight:
Conversely, RVSM-equipped aircraft are able to continue using 2,000 ft same-direction intervals as outlined in the semicircular rules. Both non-RVSM and RVSM-equipped aircraft use a separation of 4,000 ft above FL 410.
Countries where the major airways are oriented north/south (e.g., New Zealand; Italy; Portugal) have semicircular rules that define a North/South rather than an East/West track split.
In Italy, France, Portugal and recently also in Spain (AIP ENR 1.7-3), for example, southbound traffic uses odd flight levels; in New Zealand, southbound traffic uses even flight levels.
In Europe, the commonly used International Civil Aviation Organization (ICAO) separation levels are as shown in the following table:
The quadrantal rule is defunct.[11] It was used in the United Kingdom but was abolished in 2015 to bring the UK in line with the semicircular rule used around the world.[12][13]
The quadrantal rule applied to IFR flights in the UK both in and outside of controlled airspace, except that such aircraft could be flown at a level other than required by the rule if flying in conformity with instructions given by an air traffic control unit, or if complying with notified en-route holding patterns or holding procedures notified in relation to an aerodrome. The rule affected only those aircraft operating under IFR when in level flight above 3,000 ft above mean sea level, or above the appropriate transition altitude, whichever is the higher, and when below FL195 (19,500 ft above the 1013.2 hPa datum in the UK, or with the altimeter set according to the system published by the competent authority in relation to the area over which the aircraft is flying, if such aircraft is not flying over the UK).[citation needed]
The rule was non-binding upon flights operating under visual flight rules (VFR).
Minimum vertical separation between two flights abiding by the UK quadrantal rule is 500 ft (note these are in geopotential foot units). The level to be flown is determined by the magnetic track of the aircraft, as follows:[14]
Reduced vertical separation minima (RVSM) reduce the required vertical separation between FL290 and FL410 from 2,000 ft to 1,000 ft. This allows aircraft to safely fly more optimal routes, save fuel and increase airspace capacity by adding new flight levels. Only aircraft that have been certified to meet RVSM standards, with several exclusions, are allowed to fly in RVSM airspace. RVSM was introduced in the UK in March 2001. On 20 January 2002, it entered European airspace. The United States, Canada and Mexico transitioned to RVSM between FL 290 and FL 410 on 20 January 2005, and Africa on 25 September 2008.
At FL 410 and above, 4,000 ft intervals are resumed to separate same-direction aircraft, and only odd flight levels are assigned, depending on the direction of flight:
The International Civil Aviation Organization (ICAO) has recommended a transition to using the International System of Units since 1979,[15][16] with a recommendation on using metres (m) for reporting flight levels.[17] China, Mongolia, Russia and many CIS countries have used flight levels specified in metres for years. Aircraft entering these areas normally make a slight climb or descent to adjust for this, although Russia and some CIS countries started using feet above transition altitude and introduced RVSM at the same time on 17 November 2011.
The flight levels below apply to Kyrgyzstan, Kazakhstan, Tajikistan and Uzbekistan, and at 6,000 m or below in Turkmenistan (where feet are used for FL210 and above). Flight levels are read as, e.g., "flight level 7,500 metres":
In each direction, flight levels continue at intervals of 2,000 metres thereafter.
The flight levels below apply to Mongolia and the People's Republic of China, not including Hong Kong. To distinguish them from flight levels in feet, metric flight levels are read without the words "flight level", e.g. "one two thousand six hundred metres" for 12,600 m (Chinese phraseology is available only in Chinese airspace). To distinguish altitude from flight level, "on standard" or "on QNH" is added during initial clearance, such as "climb 4,800 metres on standard" or "descend 2,400 metres on QNH 1020".
RVSM was implemented in China at 16:00 UTC on 21 November 2007, and in Mongolia at 00:01 UTC on 17 November 2011. Aircraft flying in feet according to the table below will have differences between the metric readout of the onboard avionics and the ATC-cleared flight level; however, the differences will never be more than thirty metres.
In each direction, flight levels continue at intervals of 1,200 metres thereafter.
On 5 September 2011 the government of the Russian Federation issued decree No. 743,[18] pertaining to changes in the rules of use of the country's airspace. The new rules came into force on 17 November 2011, introducing a flight level system similar to the one used in the West. RVSM has also been in force since that date.
The following table applies to IFR flights:
The new system eliminated the need to perform climbs and descents when entering or leaving Russian airspace to or from jurisdictions following the Western standard.[19]
From February 2017, Russia has been changing to use QNH and feet below the transition level. The first airport to do so was ULLI (St. Petersburg).[20] Most other airports still[as of?] use QFE.
Unlike Russia, North Korea uses metres below the transition level, based on QNH.
ARINC 429,[1]the "Mark 33 Digital Information Transfer System (DITS)," is theARINCtechnical standard for the predominantavionicsdata busused on most higher-end commercial and transport aircraft.[2]It defines the physical and electrical interfaces of a two-wiredata busand a data protocol to support an aircraft's avionicslocal area network.
ARINC 429 is a data transfer standard for aircraft avionics. It uses a self-clocking, self-synchronizing data bus protocol (Tx and Rx are on separate ports). The physical connection wires are twisted pairs carrying balanced differential signaling. Data words are 32 bits in length and most messages consist of a single data word. Messages are transmitted at either 12.5 or 100 kbit/s[3] to other system elements that are monitoring the bus messages. The transmitter constantly transmits either 32-bit data words or the NULL state (0 volts). A single wire pair is limited to one transmitter and no more than 20 receivers. The protocol allows for self-clocking at the receiver end, thus eliminating the need to transmit clocking data. ARINC 429 is an alternative to MIL-STD-1553.
The ARINC 429 unit of transmission is a fixed-length 32-bit frame, which the standard refers to as a 'word'. The bits within an ARINC 429 word are serially identified from Bit Number 1 to Bit Number 32,[4] or simply Bit 1 to Bit 32. The fields and data structures of the ARINC 429 word are defined in terms of this numbering.
While it is common to illustrate serial protocol frames progressing in time from right to left, a reversed ordering is commonly practiced within the ARINC standard. Even though ARINC 429 word transmission begins with Bit 1 and ends with Bit 32, it is common to diagram[5] and describe[6][7] ARINC 429 words in the order from Bit 32 to Bit 1.
In simplest terms, while the transmission order of bits (from the first transmitted bit to the last transmitted bit) for a 32-bit frame is conventionally diagrammed as 1, 2, 3, …, 30, 31, 32,
this sequence is often diagrammed in ARINC 429 publications in the opposite direction, as 32, 31, 30, …, 3, 2, 1.
Generally, when the ARINC 429 word format is illustrated with Bit 32 to the left, the numeric representations in the data field are read with the most significant bit on the left. However, in this particular bit-order presentation, the Label field reads with its most significant bit on the right. Like CAN protocol identifier fields,[8] ARINC 429 label fields are transmitted most significant bit first. However, like UART protocol, binary-coded decimal numbers and binary numbers in the ARINC 429 data fields are generally transmitted least significant bit first.
Some equipment suppliers[9][10] publish the bit transmission order as 8, 7, 6, 5, 4, 3, 2, 1, 9, 10, …, 31, 32.
The suppliers that use this representation have in effect renumbered the bits in the Label field, converting the standard's MSB 1 bit numbering for that field to LSB 1 bit numbering. This renumbering highlights the relative reversal of "bit endianness" between the Label representation and numeric data representations as defined within the ARINC 429 standard. Of note is how the 8 7 6 5 4 3 2 1 bit numbering is similar to the 7 6 5 4 3 2 1 0 bit numbering common in digital equipment, but reversed from the 1 2 3 4 5 6 7 8 bit numbering defined for the ARINC 429 Label field.
This notional reversal also reflects historical implementation details. ARINC 429 transceivers have been implemented with 32-bit shift registers.[11] Parallel access to that shift register is often octet-oriented. As such, the bit order of the octet access is the bit order of the accessing device, which is usually LSB 0; and serial transmission is arranged such that the least significant bit of each octet is transmitted first. So, in common practice, the accessing device wrote or read a "reversed label"[12] (for example, to transmit a Label 213₈ [or 8B₁₆], the bit-reversed value D1₁₆ is written to the Label octet). Newer or "enhanced" transceivers may be configured to reverse the Label field bit order "in hardware."[13]
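The label reversal is easy to reproduce in software. The sketch below mirrors an 8-bit label and checks the example just quoted (213₈ = 8B₁₆ is written as D1₁₆):

```python
def reverse_label_bits(label: int) -> int:
    """Mirror the 8 bits of an ARINC 429 label (bit 1 <-> bit 8, and so on)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (label & 1)
        label >>= 1
    return out

assert reverse_label_bits(0o213) == 0xD1
print(f"{reverse_label_bits(0o213):02X}")   # D1
```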
Each ARINC 429 word is a 32-bit sequence that contains five fields: the parity bit (Bit 32), the sign/status matrix (SSM, Bits 31–30), the data field (Bits 29–11), the source/destination identifier (SDI, Bits 10–9), and the label (Bits 8–1).
An accompanying diagram in the source illustrates these fields with an example word for Label 260, highlighting the Label, Data and Parity fields.
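A sketch of packing one such word, assuming the field boundaries listed above and the odd parity the standard specifies; here ARINC Bit 1 is mapped to the register's least significant bit, and real labels subdivide the data field in label-specific ways:

```python
def arinc429_word(label: int, sdi: int, data: int, ssm: int) -> int:
    """Pack a 32-bit ARINC 429 word: label in bits 1-8, SDI in 9-10,
    data in 11-29, SSM in 30-31, odd parity in bit 32."""
    assert 0 <= label < 2**8 and 0 <= sdi < 2**2
    assert 0 <= data < 2**19 and 0 <= ssm < 2**2
    word = label | (sdi << 8) | (data << 10) | (ssm << 29)
    parity = (bin(word).count("1") + 1) % 2   # set so the total 1-count is odd
    return word | (parity << 31)

w = arinc429_word(label=0o260, sdi=0, data=0x12345, ssm=0b11)
print(f"{w:08X}")   # E48D14B0, which has 13 one-bits, an odd count
```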
Label guidelines are provided as part of the ARINC 429 specification for various equipment types. Each aircraft will contain a number of different systems, such as flight management computers, inertial reference systems, air data computers, radar altimeters, radios, and GPS sensors. For each type of equipment, a set of standard parameters is defined, which is common across all manufacturers and models. For example, any air data computer will provide the barometric altitude of the aircraft as label 203. This allows some degree of interchangeability of parts, as all air data computers behave, for the most part, in the same way. There are only a limited number of labels, though, and so label 203 may have some completely different meaning if sent by a GPS sensor, for example. Very commonly needed aircraft parameters, however, use the same label regardless of source. Also, as with any specification, each manufacturer has slight differences from the formal specification, such as by providing extra data above and beyond the specification, leaving out some data recommended by the specification, or other various changes.
Avionics systems must meet environmental requirements, usually stated as RTCA DO-160 environmental categories. ARINC 429 employs several physical, electrical, and protocol techniques to minimize electromagnetic interference with on-board radios and other equipment, for example interference coupled in from other transmission cables.
Its cabling is a shielded 78 Ω twisted pair.[1] ARINC signaling defines a 10 V differential between the Data A and Data B levels within the bipolar transmission (i.e. +5 V on Data A and −5 V on Data B would constitute a valid driving signal), and the specification defines acceptable voltage rise and fall times.
ARINC 429's data encoding uses a complementary differential bipolar return-to-zero (BPRZ) transmission waveform, further reducing EMI emissions from the cable itself.
When developing or troubleshooting the ARINC 429 bus, examination of hardware signals can be very important for finding problems. A protocol analyzer is useful to collect, analyze, decode and store signals.
A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method for representing numbers that uses only two symbols for the natural numbers: typically "0" (zero) and "1" (one). A binary number may also refer to a rational number that has a finite representation in the binary numeral system, that is, the quotient of an integer by a power of two.
The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, or binary digit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices, in preference to other systems, because of its simplicity and its noise immunity in physical implementation.[1]
The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot and Gottfried Leibniz. However, systems related to binary numbers have appeared earlier in multiple cultures, including ancient Egypt, China, Europe and India.
The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to the binary number system) and Horus-Eye fractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed).[2] Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of the binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC.[3]
The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by a second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC.[4]
The I Ching dates from the 9th century BC in China.[5] The binary notation in the I Ching is used to interpret its quaternary divination technique.[6]
It is based on the Taoist duality of yin and yang.[7] Eight trigrams (bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou dynasty of ancient China.[5]
The Song dynasty scholar Shao Yong (1011–1077) rearranged the hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically.[6] Viewing the least significant bit on top of single hexagrams in Shao Yong's square,[8] and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, the hexagrams can be interpreted as a sequence from 0 to 63.[9]
Etruscans divided the outer edge of divination livers into sixteen parts, each inscribed with the name of a divinity and its region of the sky. Each liver region produced a binary reading, which was combined into a final binary for divination.[10]
Divination at the ancient Greek Dodona oracle worked by drawing question tablets and "yes" and "no" pellets from separate jars. The results were then combined to make a final prophecy.[11]
The Indian scholar Pingala (c. 2nd century BC) developed a binary system for describing prosody.[12][13] He described meters in the form of short and long syllables (the latter equal in length to two short syllables).[14] They were known as laghu (light) and guru (heavy) syllables.
Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to science of meters in Sanskrit. The binary representations in Pingala's system increase towards the right, not to the left as in the binary numbers of modern positional notation.[15] In Pingala's system, the numbers start from number one, not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values.[16]
The Ifá is an African divination system. It is similar to the I Ching, but has up to 256 binary signs,[17] unlike the I Ching, which has 64. The Ifá originated in 15th-century West Africa among the Yoruba people. In 2008, UNESCO added Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity".[18][19]
The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450.[20] Slit drums with binary tones are used to encode messages across Africa and Asia.[7] Sets of binary combinations similar to the I Ching have also been used in traditional African divination systems, such as Ifá among others, as well as in medieval Western geomancy. The majority of Indigenous Australian languages use a base-2 system.[21]
In the late 13th century, Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or "Ars generalis" based on binary combinations of a number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence.[22]
In 1605, Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text.[23] Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of a twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature".[23] (See Bacon's cipher.)
In 1617, John Napier described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers.[24] Possibly the first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700.[25]
Leibniz wrote more than a hundred manuscripts on binary, most of them remaining unpublished.[26] Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics.[26]
In his first known work on binary, "On the Binary Progression" (1679), Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division using binary numbers. He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots.[26]
His best-known work on the subject appears in his article Explication de l'Arithmétique Binaire (published in 1703).
The full title of Leibniz's article is translated into English as the "Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi".[27] Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows:[27]
While corresponding in 1700 with the Jesuit priest Joachim Bouvet, who had made himself an expert on the I Ching while a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that the I Ching was an independent, parallel invention of binary notation.
Leibniz & Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophicalmathematicshe admired.[28]Of this parallel invention, Leibniz wrote in his "Explanation Of Binary Arithmetic" that "this restitution of their meaning, after such a great interval of time, will seem all the more curious."[29]
The relation was a central idea to his universal concept of a language or characteristica universalis, a popular idea that would be followed closely by his successors such as Gottlob Frege and George Boole in forming modern symbolic logic.[30] Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own religious beliefs as a Christian.[31] Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing.[32]
[A concept that] is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power. Now one can say that nothing in the world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing.
In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry.[33]
In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for the first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design.[34]
In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition.[35] Bell Labs authorized a full research program in late 1938 with Stibitz at the helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of the conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs.[36][37][38]
The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating-point numbers.[39]
Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. For example, the binary numeric value of 667 is 1010011011, and any row of ten such two-state symbols (on/off, true/false, +/−) can be read the same way.
The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values.[40] In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state is not necessarily equivalent to the numerical value of one; it depends on the architecture in use.
In keeping with the customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, or radix. The following notations are equivalent:
When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example, the binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit.
Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference.
Decimal counting uses the ten symbols 0 through 9. Counting begins with the incremental substitution of the least significant digit (rightmost digit), which is often called the first digit. When the available symbols for this position are exhausted, the least significant digit is reset to 0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows:
Binary counting follows the exact same procedure, and again the incremental substitution begins with the least significant binary digit, or bit (the rightmost one, also called the first bit), except that only the two symbols 0 and 1 are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left:
In the binary system, each bit represents an increasing power of 2, with the rightmost bit representing 2⁰, the next representing 2¹, then 2², and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as 1×2⁵ + 0×2⁴ + 0×2³ + 1×2² + 0×2¹ + 1×2⁰ = 32 + 4 + 1 = 37₁₀.
Fractions in binary arithmetic terminate only if the denominator is a power of 2. As a result, 1/10 does not have a finite binary representation (10 has prime factors 2 and 5). This causes 10 × 1/10 not to precisely equal 1 in binary floating-point arithmetic. As an example, to interpret the binary expression for 1/3 = .010101..., this means: 1/3 = 0 × 2⁻¹ + 1 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ... = 0.3125 + ... An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever.
Arithmetic in binary is much like arithmetic in other positional notation numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.
The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:
Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:
This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:
In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀).
When computers must add two numbers, the rule that x XOR y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well.
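That identity, together with the carries (x AND y) shifted left one column, is all an adder needs; a sketch for non-negative integers:

```python
def add(x: int, y: int) -> int:
    """Ripple-carry addition from the bitwise identities above:
    per-column sum = x XOR y, carries = (x AND y) << 1."""
    while y:
        carry = (x & y) << 1   # columns that overflow into the next position
        x = x ^ y              # (x + y) mod 2 in every column, no carries
        y = carry
    return x

print(add(0b01101, 0b10111))   # 36, matching the worked example above
```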
A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition". This method is particularly useful when one of the numbers contains a long stretch of ones. It is based on the simple premise that under the binary system, when given a stretch of digits composed entirely of n ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s:
Such long strings are quite common in the binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1110111110₂ (958₁₀) and 1010110011₂ (691₁₀), using the traditional carry method on the left, and the long carry method on the right:
The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added, and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives the final answer of 11001110001₂ (1649₁₀). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort.
The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference is that 1 ∨ 1 = 1, while 1 + 1 = 10.
Subtraction works in much the same way:
Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known asborrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value.
Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers, most commonly the two's complement notation. Such representations eliminate the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula: A − B = A + NOT B + 1.
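A sketch of that formula confined to an 8-bit register (the width and helper names are illustrative choices):

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1

def subtract(a: int, b: int) -> int:
    """Two's-complement subtraction: a - b = a + NOT b + 1 (mod 2**WIDTH)."""
    return (a + (~b & MASK) + 1) & MASK

def as_signed(v: int) -> int:
    """Read an 8-bit pattern as a signed value."""
    return v - (1 << WIDTH) if v & (1 << (WIDTH - 1)) else v

print(as_signed(subtract(13, 23)))   # -10
print(as_signed(subtract(23, 13)))   # 10
```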
Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit with A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.
Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication: the partial product is 0 if the digit in B is 0, and is a copy of A if the digit is 1.
For example, the binary numbers 1011 and 1010 are multiplied as follows:
Binary numbers can also be multiplied with bits after a binary point:
See also Booth's multiplication algorithm.
The binary multiplication table is the same as the truth table of the logical conjunction operation ∧.
Long division in binary is again similar to its decimal counterpart.
In the example below, the divisor is 101₂, or 5 in decimal, while the dividend is 11011₂, or 27 in decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:
The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:
Thus, the quotient of 11011₂ divided by 101₂ is 101₂, as shown on the top line, while the remainder, shown on the bottom line, is 10₂. In decimal, this corresponds to the fact that 27 divided by 5 is 5, with a remainder of 2.
Aside from long division, one can also devise the procedure so as to allow for over-subtracting from the partial remainder at each iteration, thereby leading to alternative methods which are less systematic, but more flexible as a result.
The process of taking a binary square root digit by digit is essentially the same as for a decimal square root but much simpler, due to the binary nature. First group the digits in pairs, using a leading 0 if necessary so there is an even number of digits. Now at each step, consider the answer so far, extended with the digits 01. If this can be subtracted from the current remainder, do so. Then extend the remainder with the next pair of digits. If you subtracted, the next digit of the answer is 1; otherwise it is 0.
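A sketch of this digit-by-digit procedure for integers, following the "extend the answer with 01" step literally:

```python
def binary_isqrt(n: int) -> tuple[int, int]:
    """Digit-by-digit binary square root; returns (root, remainder)."""
    root, rem = 0, 0
    top = n.bit_length() + (n.bit_length() & 1)    # pad to a pair boundary
    for shift in reversed(range(0, top, 2)):
        rem = (rem << 2) | ((n >> shift) & 0b11)   # bring down the next pair
        trial = (root << 2) | 0b01                 # answer so far, extended by 01
        root <<= 1
        if trial <= rem:                           # subtract if we can...
            rem -= trial
            root |= 1                              # ...and the next digit is 1
    return root, rem

print(binary_isqrt(0b11011))   # (5, 2): the square root of 27 is 5, remainder 2
```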
Though not directly related to the numerical interpretation of binary symbols, sequences of bits may be manipulated using Boolean logical operators. When a string of binary symbols is manipulated in this way, it is called a bitwise operation; the logical operators AND, OR, and XOR may be performed on corresponding bits in two binary numerals provided as input. The logical NOT operation may be performed on individual bits in a single binary numeral provided as input. Sometimes, such operations may be used as arithmetic short-cuts, and may have other computational benefits as well. For example, an arithmetic shift left of a binary number is the equivalent of multiplication by a (positive, integral) power of 2.
To convert from a base-10 integer to its base-2 (binary) equivalent, the number is divided by two. The remainder is the least significant bit. The quotient is again divided by two; its remainder becomes the next least significant bit. This process repeats until a quotient of one is reached. The sequence of remainders (including the final quotient of one) forms the binary value, as each remainder must be either zero or one when dividing by two. For example, (357)₁₀ is expressed as (101100101)₂.[43]
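A sketch of this repeated-division method:

```python
def to_binary(n: int) -> str:
    """Divide by two repeatedly; the remainders, read backwards, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))    # least significant bit first
    return "".join(reversed(bits))

print(to_binary(357))   # 101100101, as in the example above
```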
Conversion from base-2 to base-10 simply inverts the preceding algorithm. The bits of the binary number are used one by one, starting with the most significant (leftmost) bit. Beginning with the value 0, the prior value is doubled, and the next bit is then added to produce the next value. This can be organized in a multi-column table. For example, to convert 10010101101₂ to decimal:
The result is 1197₁₀. The first prior value of 0 is simply an initial decimal value. This method is an application of the Horner scheme.
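The same doubling (Horner) scheme in code:

```python
def from_binary(bits: str) -> int:
    """Double the running value and add each bit, most significant first."""
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(from_binary("10010101101"))   # 1197, as in the example above
```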
The fractional parts of a number are converted with similar methods. They are again based on the equivalence of shifting with doubling or halving.
In a fractional binary number such as 0.11010110101₂, the first digit is 1/2, the second (1/2)² = 1/4, etc. So if there is a 1 in the first place after the point, then the number is at least 1/2, and vice versa. Double that number and it is at least 1. This suggests the algorithm: repeatedly double the number to be converted, record if the result is at least 1, and then throw away the integer part.
For example, (1/3)₁₀ in binary is obtained by doubling repeatedly: 2/3 gives digit 0; doubling again gives 4/3, digit 1, leaving 1/3; and the cycle repeats, so the digits 01 recur: 0.010101…₂.
Thus the repeating decimal fraction 0.333... is equivalent to the repeating binary fraction 0.010101... .
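A sketch of the doubling algorithm using exact fractions, reproducing the 1/3 expansion above (and, looking ahead, the 0.1₁₀ case below):

```python
from fractions import Fraction

def fraction_bits(x: Fraction, count: int) -> str:
    """Repeatedly double; each integer part that appears is the next bit."""
    digits = []
    for _ in range(count):
        x *= 2
        whole, x = divmod(x, 1)    # the integer part is the digit
        digits.append(str(whole))
    return "0." + "".join(digits)

print(fraction_bits(Fraction(1, 3), 8))    # 0.01010101
print(fraction_bits(Fraction(1, 10), 8))   # 0.00011001
```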
Or for example, 0.1₁₀, in binary, is:
This is also a repeating binary fraction, 0.00011... . It may come as a surprise that terminating decimal fractions can have repeating expansions in binary. It is for this reason that many are surprised to discover that 1/10 + ... + 1/10 (adding 10 numbers) differs from 1 in binary floating-point arithmetic. In fact, the only binary fractions with terminating expansions are of the form of an integer divided by a power of 2, which 1/10 is not.
The final conversion is from binary to decimal fractions. The only difficulty arises with repeating fractions, but otherwise the method is to shift the fraction to an integer, convert it as above, and then divide by the appropriate power of two in the decimal base. For example:
$$\begin{aligned}
x &= 1100.1\overline{01110}\ldots \\
x \times 2^6 &= 1100101110.\overline{01110}\ldots \\
x \times 2 &= 11001.\overline{01110}\ldots \\
x \times (2^6 - 2) &= 1100010101 \\
x &= 1100010101 / 111110 \\
x &= (789/62)_{10}
\end{aligned}$$
Another way of converting from binary to decimal, often quicker for a person familiar with hexadecimal, is to do so indirectly: first converting (x in binary) into (x in hexadecimal) and then converting (x in hexadecimal) into (x in decimal).
For very large numbers, these simple methods are inefficient because they perform a large number of multiplications or divisions where one operand is very large. A simple divide-and-conquer algorithm is more effective asymptotically: given a binary number, it is divided by 10ᵏ, where k is chosen so that the quotient roughly equals the remainder; then each of these pieces is converted to decimal and the two are concatenated. Given a decimal number, it can be split into two pieces of about the same size, each of which is converted to binary, whereupon the first converted piece is multiplied by 10ᵏ and added to the second converted piece, where k is the number of decimal digits in the second, least-significant piece before conversion.
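A sketch of the binary-to-decimal direction; the 10⁹ cutoff and the digit-count estimate are arbitrary illustrative choices:

```python
def to_decimal_string(n: int) -> str:
    """Divide-and-conquer conversion of a large integer to decimal text."""
    if n < 10**9:                                # small piece: convert directly
        return str(n)
    k = (n.bit_length() * 301 // 1000) // 2      # about half the decimal digits
    q, r = divmod(n, 10**k)
    return to_decimal_string(q) + to_decimal_string(r).zfill(k)

print(to_decimal_string(2**64))   # 18446744073709551616
```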
Binary may be converted to and from hexadecimal more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2⁴, so it takes four digits of binary to represent one digit of hexadecimal, as shown in the adjacent table.
To convert a hexadecimal number into its binary equivalent, simply substitute the corresponding binary digits; for example, 5E₁₆ = 0101 1110₂.
To convert a binary number into its hexadecimal equivalent, divide it into groups of four bits. If the number of bits isn't a multiple of four, simply insert extra 0 bits at the left (called padding); for example, 10100₂ is padded to 0001 0100₂ = 14₁₆.
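Both directions of this nibble mapping, as a sketch:

```python
def binary_to_hex(bits: str) -> str:
    """Pad on the left to a multiple of four bits, then map each group."""
    bits = bits.zfill(len(bits) + (-len(bits) % 4))
    return "".join(f"{int(bits[i:i + 4], 2):X}" for i in range(0, len(bits), 4))

def hex_to_binary(digits: str) -> str:
    """Each hexadecimal digit expands to exactly four bits."""
    return "".join(f"{int(d, 16):04b}" for d in digits)

print(binary_to_hex("10100"))   # 14
print(hex_to_binary("5E"))      # 01011110
```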
To convert a hexadecimal number into its decimal equivalent, multiply the decimal equivalent of each hexadecimal digit by the corresponding power of 16 and add the resulting values; for example, 2C7₁₆ = 2 × 16² + 12 × 16¹ + 7 × 16⁰ = 711₁₀.
Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2³, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal in the table above. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.
Converting from octal to binary proceeds in the same fashion as it does for hexadecimal:
And from binary to octal:
And from octal to decimal:
Non-integers can be represented by using negative powers, which are set off from the other digits by means of a radix point (called a decimal point in the decimal system). For example, the binary number 11.01₂ means 1 × 2¹ + 1 × 2⁰ + 0 × 2⁻¹ + 1 × 2⁻², that is, 2 + 1 + 0 + 0.25.
For a total of 3.25 decimal.
All dyadic rational numbers p/2ᵃ have a terminating binary numeral: the binary representation has a finite number of terms after the radix point. Other rational numbers have binary representation, but instead of terminating, they recur, with a finite sequence of digits repeating indefinitely. For instance
$$\frac{1_{10}}{3_{10}} = \frac{1_2}{11_2} = 0.01010101\overline{01}\ldots_2 \qquad
\frac{12_{10}}{17_{10}} = \frac{1100_2}{10001_2} = 0.1011010010110100\overline{10110100}\ldots_2$$
The phenomenon that the binary representation of any rational is either terminating or recurring also occurs in other radix-based numeral systems. See, for instance, the explanation in decimal. Another similarity is the existence of alternative representations for any terminating representation, relying on the fact that 0.111111... is the sum of the geometric series 2⁻¹ + 2⁻² + 2⁻³ + ..., which is 1.
Binary numerals that neither terminate nor recur represent irrational numbers. For instance, the binary expansion of the square root of 2, being irrational, neither terminates nor recurs.
Quaternary (/kwəˈtɜːrnəri/) is a numeral system with four as its base. It uses the digits 0, 1, 2, and 3 to represent any real number. Conversion from binary is straightforward.
Four is the largest number within the subitizing range and one of two numbers that is both a square and a highly composite number (the other being thirty-six), making quaternary a convenient choice for a base at this scale. Despite being twice as large, its radix economy is equal to that of binary. However, it fares no better in the localization of prime numbers (the smallest better base being the primorial base six, senary).
Quaternary shares with all fixed-radix numeral systems many properties, such as the ability to represent any real number with a canonical representation (almost unique) and the characteristics of the representations of rational numbers and irrational numbers. See decimal and binary for a discussion of these properties.
As with the octal and hexadecimal numeral systems, quaternary has a special relation to the binary numeral system. Each of the radices four, eight, and sixteen is a power of two, so the conversion to and from binary is implemented by matching each digit with two, three, or four binary digits, or bits. For example, in quaternary, 230₄ = 10 11 00₂.
Since sixteen is a power of four, conversion between these bases can be implemented by matching each hexadecimal digit with two quaternary digits. In the above example, 2C₁₆ = 02 30₄ = 10 11 00₂.
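A sketch of these digit pairings (the helper names are illustrative):

```python
def hex_to_quaternary(digits: str) -> str:
    """Each hex digit splits into two quaternary digits (16 = 4**2)."""
    return "".join(f"{int(d, 16) >> 2}{int(d, 16) & 0b11}" for d in digits)

def quaternary_to_binary(digits: str) -> str:
    """Each quaternary digit is exactly two bits (4 = 2**2)."""
    return "".join(f"{int(d):02b}" for d in digits)

print(hex_to_quaternary("2C"))        # 0230
print(quaternary_to_binary("0230"))   # 00101100
```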
Although octal and hexadecimal are widely used in computing and computer programming in the discussion and analysis of binary arithmetic and logic, quaternary does not enjoy the same status.
Although quaternary has limited practical use, it can be helpful if it is ever necessary to perform hexadecimal arithmetic without a calculator. Each hexadecimal digit can be turned into a pair of quaternary digits. Then, arithmetic can be performed relatively easily before converting the end result back to hexadecimal. Quaternary is convenient for this purpose, since numbers have only half the digit length compared to binary, while still having very simple multiplication and addition tables with only three unique non-trivial elements.
By analogy with byte and nybble, a quaternary digit is sometimes called a crumb.
Because the base has only factors of two, many quaternary fractions have repeating digits, although these tend to be fairly simple:
Many or all of the Chumashan languages (spoken by the Native American Chumash peoples) originally used a quaternary numeral system, in which the names for numbers were structured according to multiples of four and sixteen, instead of ten. There is a surviving list of Ventureño language number words up to thirty-two written down by a Spanish priest ca. 1819.[1]
The Kharosthi numerals (from the languages of the tribes of Pakistan and Afghanistan) have a partial quaternary numeral system from one to ten.
Quaternary numbers are used in the representation of 2D Hilbert curves. Here, a real number between 0 and 1 is converted into the quaternary system. Each successive digit then indicates into which of the four sub-quadrants the number will be projected.
Parallels can be drawn between quaternary numerals and the way genetic code is represented by DNA. The four DNA nucleotides in alphabetical order, abbreviated A, C, G, and T, can be taken to represent the quaternary digits in numerical order 0, 1, 2, and 3. With this encoding, the complementary digit pairs 0↔3 and 1↔2 (binary 00↔11 and 01↔10) match the complementation of the base pairs A↔T and C↔G, and can be stored as data in a DNA sequence.[2] For example, the nucleotide sequence GATTACA can be represented by the quaternary number 2033010 (= decimal 9156 or binary 10 00 11 11 00 01 00). The human genome is 3.2 billion base pairs in length.[3]
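A sketch reproducing the GATTACA example:

```python
DNA_DIGIT = {"A": "0", "C": "1", "G": "2", "T": "3"}   # alphabetical -> numerical

def dna_to_quaternary(sequence: str) -> str:
    """Read each nucleotide as a quaternary digit."""
    return "".join(DNA_DIGIT[base] for base in sequence)

quat = dna_to_quaternary("GATTACA")
print(quat)           # 2033010
print(int(quat, 4))   # 9156, the decimal value quoted above
```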
Quaternary line codes have been used for transmission, from the invention of the telegraph to the 2B1Q code used in modern ISDN circuits.
The GDDR6X standard, developed by Nvidia and Micron, uses quaternary symbols to transmit data.[4]
Some computers have used quaternary floating-point arithmetic, including the Illinois ILLIAC II (1962)[5] and the Digital Field System DFS IV and DFS V high-resolution site survey systems.[6]
Hexadecimal (also known as base-16 or simply hex) is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen.
Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble).[1] For example, an 8-bit byte is two hexadecimal digits and its value can be written as 00 to FF in hexadecimal.
In mathematics, a subscript is typically used to specify the base. For example, the decimal value 711 would be expressed in hexadecimal as 2C7₁₆. In programming, several notations denote hexadecimal numbers, usually involving a prefix. The prefix 0x is used in C, which would denote this value as 0x2C7.
Hexadecimal is used in the transfer encoding Base 16, in which each byte of the plain text is broken into two 4-bit values and represented by two hexadecimal digits.
In most current use cases, the letters A–F or a–f represent the values 10–15, while the numerals 0–9 are used to represent their decimal values.
There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Some seven-segment displays use mixed-case 'A b C d E F' to distinguish the digits A–F from one another and from 0–9.
There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in the following hex dump, each 8-bit byte is a 2-digit hex number, with spaces between them, while the 32-bit offset at the start is an 8-digit hex number.
In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which equals 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h.
Donald Knuth introduced the use of a particular typeface to represent a particular radix in his book The TeXbook.[2] Hexadecimal representations are written there in a typewriter typeface: 5A3, C1F27ED.
In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen:
Sometimes the numbers are simply known from context to be hexadecimal.
The use of the letters A through F to represent the digits above 9 was not universal in the early history of computers.
Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute. Most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using the NATO phonetic alphabet, the Joint Army/Navy Phonetic Alphabet, or a similar ad-hoc system. In the wake of the adoption of hexadecimal among IBM System/360 programmers, Magnuson (1968)[23] suggested a pronunciation guide that gave short names to the letters of hexadecimal; for instance, "A" was pronounced "ann", B "bet", C "chris", etc.[23] Another naming system was published online by Rogers (2007)[24] that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain the numbers A–F. Examples are listed in the tables below. Yet another naming system was elaborated by Babb (2015), based on a joke in Silicon Valley.[25] The system proposed by Babb was further improved by Atkins-Bittner in 2015–2016.[26]
Others have proposed using the verbal Morse Code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), dah-dit-dit-dah (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15.
Systems of counting on digits have been devised for both binary and hexadecimal. Arthur C. Clarke suggested using each finger as an on/off bit, allowing finger counting from zero to 1023₁₀ on ten fingers.[27] Other systems have been devised for counting up to FF₁₆ (255₁₀).
The hexadecimal system can express negative numbers the same way as in decimal: −2A to represent −42₁₀, −B01D9 to represent −721369₁₀, and so on.
Hexadecimal can also be used to express the exact bit patterns used in the processor, so a sequence of hexadecimal digits may represent a signed or even a floating-point value. This way, the negative number −42₁₀ can be written as FFFF FFD6 in a 32-bit CPU register (in two's complement), as C228 0000 in a 32-bit FPU register or C045 0000 0000 0000 in a 64-bit FPU register (in the IEEE floating-point standard).
Just as decimal numbers can be represented in exponential notation, so too can hexadecimal numbers. P notation uses the letter P (or p, for "power"), whereas E (or e) serves a similar purpose in decimal E notation. The number after the P is decimal and represents the binary exponent. Increasing the exponent by 1 multiplies by 2, not 16: 20p0 = 10p1 = 8p2 = 4p3 = 2p4 = 1p5. Usually, the number is normalized so that the hexadecimal digits start with 1. (zero is usually 0 with no P).
Example: 1.3DEp42 represents 1.3DE₁₆ × 2⁴².
P notation is required by the IEEE 754-2008 binary floating-point standard and can be used for floating-point literals in the C99 edition of the C programming language.[28] Using the %a or %A conversion specifiers, this notation can be produced by implementations of the printf family of functions following the C99 specification[29] and the Single Unix Specification (IEEE Std 1003.1) POSIX standard.[30]
Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base-10 system, it is much easier to map binary to hexadecimal than to decimal because each hexadecimal digit maps to a whole number of bits (4₁₀).
This example converts 1111₂ to base ten. Since each position in a binary numeral can contain either a 1 or a 0, its value may be easily determined by its position from the right: the four positions are worth 2³ = 8, 2² = 4, 2¹ = 2, and 2⁰ = 1.
Therefore: 1111₂ = 8 + 4 + 2 + 1 = 15₁₀ = F₁₆.
With little practice, mapping 1111₂ to F₁₆ in one step becomes easy (see table in written representation). The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit.[31]
This example shows the conversion of a binary number to decimal, mapping each digit to the decimal value and adding the results: 1001011100₂ = 512 + 64 + 16 + 8 + 4 = 604₁₀.
Compare this to the conversion to hexadecimal, where each group of four digits can be considered independently and converted directly: 1001011100₂ = 0010 0101 1100 = 2 5 C = 25C₁₆.
The conversion from hexadecimal to binary is equally direct.[31]
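A short sketch of this grouping approach (assuming the input is a string of binary digits; the function name is illustrative):

    // Convert a binary string to hexadecimal by regrouping into 4-bit
    // nibbles, padding on the left to a multiple of 4 bits.
    function binToHex(bin) {
      const padded = bin.padStart(Math.ceil(bin.length / 4) * 4, "0");
      let out = "";
      for (let i = 0; i < padded.length; i += 4) {
        out += parseInt(padded.slice(i, i + 4), 2).toString(16).toUpperCase();
      }
      return out;
    }

    console.log(binToHex("1001011100")); // "25C"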
Although quaternary (base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits. In the above example 2 5 C₁₆ = 02 11 30₄.
The octal (base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, we can convert between octal and hexadecimal via an intermediate conversion to binary followed by regrouping the binary digits in groups of either three or four.
As with all bases, there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans only decimal, and for most computers only binary (which can be converted by far more efficient methods), can be easily handled with this method.
Let d be the number to represent in hexadecimal, and the series h_i h_(i−1) ... h_2 h_1 be the hexadecimal digits representing the number. The algorithm proceeds as follows: start with i = 1; let h_i be d mod 16; replace d with (d − h_i) / 16; if d is now zero, stop, otherwise increase i by one and repeat.
"16" may be replaced with any other base that may be desired.
The following is a JavaScript implementation of the above algorithm for converting any number to a hexadecimal string representation. Its purpose is to illustrate the above algorithm. To work with data seriously, however, it is much more advisable to work with bitwise operators.
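A minimal sketch of such an implementation, consistent with the algorithm above (function and variable names are illustrative; non-negative integers only):

    // Convert a non-negative integer to a hexadecimal string by repeated
    // division by 16, collecting remainders from right to left.
    function toHex(d) {
      const digits = "0123456789ABCDEF";
      if (d === 0) return "0";
      let hex = "";
      while (d > 0) {
        hex = digits[d % 16] + hex; // h_i = d mod 16
        d = Math.floor(d / 16);     // d = (d − h_i) / 16
      }
      return hex;
    }

    console.log(toHex(45997)); // "B3AD"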
It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value — before carrying out multiplication and addition to get the final representation.
For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits: B (11₁₀), 3 (3₁₀), A (10₁₀) and D (13₁₀), and then get the final result by multiplying each decimal representation by 16ᵖ (p being the corresponding hex digit position, counting from right to left, beginning with 0). In this case, we have that:
B3AD = (11 × 16³) + (3 × 16²) + (10 × 16¹) + (13 × 16⁰)
which is 45997 in base 10.
Many computer systems provide a calculator utility capable of performing conversions between the various radices, frequently including hexadecimal.
In Microsoft Windows, the Calculator, in its Programmer mode, allows conversions between hexadecimal and other common programming bases.
Elementary operations such as division can be carried out indirectly through conversion to an alternate numeral system, such as the commonly used decimal system or the binary system, where each hex digit corresponds to four binary digits.
Alternatively, one can also perform elementary operations directly within the hex system itself, by relying on its addition/multiplication tables and its corresponding standard algorithms such as long division and the traditional subtraction algorithm.
As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although repeating expansions are common since sixteen (10₁₆) has only a single prime factor: two.
For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system. Thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (4²), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers, since a larger proportion lies outside its range of finite representation.
All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.199999...₁₆ in hexadecimal. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator. For example, 0.0625₁₀ (one-sixteenth) is equivalent to 0.1₁₆, 0.09₁₂, and 0;3,45₆₀.
The expansions of some common irrational numbers in decimal and hexadecimal include:
π ≈ 3.14159265...₁₀ ≈ 3.243F6A88...₁₆
e ≈ 2.71828182...₁₀ ≈ 2.B7E15162...₁₆
√2 ≈ 1.41421356...₁₀ ≈ 1.6A09E667...₁₆
φ ≈ 1.61803398...₁₀ ≈ 1.9E3779B9...₁₆
Powers of two have very simple expansions in hexadecimal. The first sixteen powers of two are shown below.
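The expansions can be regenerated with a short loop (a sketch taking the first sixteen as 2⁰ through 2¹⁵):

    // Prints: 1, 2, 4, 8, 10, 20, 40, 80, 100, 200, 400, 800,
    // 1000, 2000, 4000, 8000 (all hexadecimal).
    for (let i = 0; i < 16; i++) {
      console.log(`2^${i} = ${(2 ** i).toString(16).toUpperCase()}`);
    }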
The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions.[32]
As with the duodecimal system, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals.[33] Some proposals unify standard measures so that they are multiples of 16.[34][35] An early such proposal was put forward by John W. Nystrom in Project of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862.[36] Nystrom among other things suggested hexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10 tims", pronounced tontim) in a day.[37]
The word hexadecimal is first recorded in 1952.[38] It is macaronic in the sense that it combines Greek ἕξ (hex) "six" with Latinate -decimal.
The all-Latin alternative sexadecimal (compare the word sexagesimal for base 60) is older, and saw at least occasional use from the late 19th century.[39] It was still in use in the 1950s in Bendix documentation.
Schwartzman (1994) argues that use of sexadecimal may have been avoided because of its suggestive abbreviation to sex.[40] Many western languages since the 1960s have adopted terms equivalent in formation to hexadecimal (e.g. French hexadécimal, Italian esadecimale, Romanian hexazecimal, Serbian хексадецимални, etc.), but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandic sextándakerfi, Russian шестнадцатеричной, etc.).
Terminology and notation did not become settled until the end of the 1960s.
In 1969, Donald Knuth argued that the etymologically correct term would be senidenary, or possibly sedenary, a Latinate term intended to convey "grouped by 16" modelled on binary, ternary, quaternary, etc.
According to Knuth's argument, the correct terms for decimal and octal arithmetic would be denary and octonary, respectively.[41] Alfred B. Taylor used senidenary in his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits".[42][43]
The now-current notation using the letters A to F established itself as the de facto standard beginning in 1966, in the wake of the publication of the Fortran IV manual for IBM System/360, which (unlike earlier variants of Fortran) recognized a standard for entering hexadecimal constants.[44] As noted above, alternative notations were used by NEC (1960) and The Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin, in his letter to the editor of the CACM, complained:
With the ridiculous choice of letters A, B, C, D, E, F as hexadecimal number symbols adding to already troublesome problems of distinguishing octal (or hex) numbers from decimal numbers (or variable names), the time is overripe for reconsideration of our number symbols. This should have been done before poor choices gelled into a de facto standard!
Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system".[19] He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (as Brahmi numerals, and later in a Hindu–Arabic numeral system), and that the recent ASCII standards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions.
Base16 (as a proper name without a space) can also refer to a binary-to-text encoding belonging to the same family as Base32, Base58, and Base64.
In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusive) is encoded using one of 16 symbols from the ASCII character set. Although any 16 symbols from the ASCII character set can be used, in practice the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers.
Base16 encoding offers several advantages: the mapping is simple (each byte corresponds to exactly two digits and each digit to exactly four bits), byte boundaries are easy to see by eye, and most programming languages already include facilities for formatting and parsing it.
The main disadvantage of Base16 encoding is space efficiency: the output is twice the size of the source data, a larger expansion than that of encodings such as Base32 or Base64.
Support for Base16 encoding is ubiquitous in modern computing. It is the basis for the W3C standard for URL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form. Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers. | https://en.wikipedia.org/wiki/Base16
The Simple Mail Transfer Protocol (SMTP) is an Internet standard communication protocol for electronic mail transmission. Mail servers and other message transfer agents use SMTP to send and receive mail messages. User-level email clients typically use SMTP only for sending messages to a mail server for relaying, and typically submit outgoing email to the mail server on port 465 or 587 per RFC 8314. For retrieving messages, IMAP (which replaced the older POP3) is standard, but proprietary servers also often implement proprietary protocols, e.g., Exchange ActiveSync.
SMTP's origins began in 1980, building on concepts implemented on the ARPANET since 1971. It has been updated, modified and extended multiple times. The protocol version in common use today has an extensible structure with various extensions for authentication, encryption, binary data transfer, and internationalized email addresses. SMTP servers commonly use the Transmission Control Protocol on port number 25 (between servers) and 587 (for submission from authenticated clients), both with or without encryption, and 465 with encryption for submission.
Various forms of one-to-one electronic messaging were used in the 1960s. Users communicated using systems developed for specific mainframe computers. As more computers were interconnected, especially in the U.S. Government's ARPANET, standards were developed to permit exchange of messages between different operating systems.
Mail on the ARPANET traces its roots to 1971: the Mail Box Protocol, which was not implemented,[1] but is discussed in RFC 196; and the SNDMSG program, which Ray Tomlinson of BBN adapted that year to send messages across two computers on the ARPANET.[2][3][4] A further proposal for a Mail Protocol was made in RFC 524 in June 1973,[5] which was not implemented.[6]
The use of the File Transfer Protocol (FTP) for "network mail" on the ARPANET was proposed in RFC 469 in March 1973.[7] Through RFC 561, RFC 680, RFC 724, and finally RFC 733 in November 1977, a standardized framework for "electronic mail" using FTP mail servers was developed.[8][9]
SMTP grew out of these standards developed during the 1970s. Ray Tomlinson discussed network mail among the International Network Working Group in INWG Protocol note 2, written in September 1974.[10] INWG discussed protocols for electronic mail in 1979,[11] which was referenced by Jon Postel in his early work on Internet email. Postel first proposed an Internet Message Protocol in 1979 as part of the Internet Experiment Note (IEN) series.[12][13][14]
In 1980, Postel and Suzanne Sluizer published RFC 772, which proposed the Mail Transfer Protocol as a replacement for the use of FTP for mail. RFC 780 of May 1981 removed all references to FTP and allocated port 57 for TCP and UDP,[15] an allocation that has since been removed by IANA. In November 1981, Postel published RFC 788, "Simple Mail Transfer Protocol".
The SMTP standard was developed around the same time as Usenet, a one-to-many communication network with some similarities.[15]
SMTP became widely used in the early 1980s. At the time, it was a complement to the Unix to Unix Copy Program (UUCP), which was better suited for handling email transfers between machines that were intermittently connected. SMTP, on the other hand, works best when both the sending and receiving machines are connected to the network all the time. Both used a store and forward mechanism and are examples of push technology. Though Usenet's newsgroups were still propagated with UUCP between servers,[16] UUCP as a mail transport has virtually disappeared[17] along with the "bang paths" it used as message routing headers.[18]
Sendmail, released with 4.1cBSD in 1983, was one of the first mail transfer agents to implement SMTP.[19] Over time, as BSD Unix became the most popular operating system on the Internet, Sendmail became the most common MTA (mail transfer agent).[20]
The original SMTP protocol supported only unauthenticated, unencrypted 7-bit ASCII text communications, susceptible to trivial man-in-the-middle attack, spoofing, and spamming, and requiring any binary data to be encoded to readable text before transmission. Due to the absence of a proper authentication mechanism, by design every SMTP server was an open mail relay. The Internet Mail Consortium (IMC) reported that 55% of mail servers were open relays in 1998,[21] but less than 1% in 2002.[22] Because of spam concerns most email providers blocklist open relays,[23] making original SMTP essentially impractical for general use on the Internet.
In November 1995, RFC 1869 defined the Extended Simple Mail Transfer Protocol (ESMTP), which established a general structure for all existing and future extensions aimed at adding in the features missing from the original SMTP. ESMTP defines consistent and manageable means by which ESMTP clients and servers can be identified, and by which servers can indicate supported extensions.
Message submission (RFC 2476) and SMTP-AUTH (RFC 2554) were introduced in 1998 and 1999, both describing new trends in email delivery. Originally, SMTP servers were typically internal to an organization, receiving mail for the organization from the outside, and relaying messages from the organization to the outside. But as time went on, SMTP servers (mail transfer agents), in practice, were expanding their roles to become message submission agents for mail user agents, some of which were now relaying mail from the outside of an organization. (e.g. a company executive wishes to send email while on a trip using the corporate SMTP server.) This issue, a consequence of the rapid expansion and popularity of the World Wide Web, meant that SMTP had to include specific rules and methods for relaying mail and authenticating users to prevent abuses such as relaying of unsolicited email (spam). Work on message submission (RFC 2476) was originally started because popular mail servers would often rewrite mail in an attempt to fix problems in it, for example, adding a domain name to an unqualified address. This behavior is helpful when the message being fixed is an initial submission, but dangerous and harmful when the message originated elsewhere and is being relayed. Cleanly separating mail into submission and relay was seen as a way to permit and encourage rewriting submissions while prohibiting rewriting relay. As spam became more prevalent, it was also seen as a way to provide authorization for mail being sent out from an organization, as well as traceability. This separation of relay and submission quickly became a foundation for modern email security practices.
As this protocol started out purely ASCII text-based, it did not deal well with binary files, or characters in many non-English languages. Standards such as Multipurpose Internet Mail Extensions (MIME) were developed to encode binary files for transfer through SMTP. Mail transfer agents (MTAs) developed after Sendmail also tended to be implemented 8-bit clean, so that the alternate "just send eight" strategy could be used to transmit arbitrary text data (in any 8-bit ASCII-like character encoding) via SMTP. Mojibake was still a problem due to differing character set mappings between vendors, although the email addresses themselves still allowed only ASCII. 8-bit-clean MTAs today tend to support the 8BITMIME extension, permitting some binary files to be transmitted almost as easily as plain text (limits on line length and permitted octet values still apply, so that MIME encoding is needed for most non-text data and some text formats). In 2012, the SMTPUTF8 extension was created to support UTF-8 text, allowing international content and addresses in non-Latin scripts like Cyrillic or Chinese.
Many people contributed to the core SMTP specifications, among them Jon Postel, Eric Allman, Dave Crocker, Ned Freed, Randall Gellens, John Klensin, and Keith Moore.
Email is submitted by a mail client (mail user agent, MUA) to a mail server (mail submission agent, MSA) using SMTP on TCP port 465 or 587. Most mailbox providers still allow submission on traditional port 25. The MSA delivers the mail to its mail transfer agent (MTA). Often, these two agents are instances of the same software launched with different options on the same machine. Local processing can be done either on a single machine, or split among multiple machines; mail agent processes on one machine can share files, but if processing is on multiple machines, they transfer messages between each other using SMTP, where each machine is configured to use the next machine as a smart host. Each process is an MTA (an SMTP server) in its own right.
The boundary MTA uses DNS to look up the MX (mail exchanger) record for the recipient's domain (the part of the email address on the right of @). The MX record contains the name of the target MTA. Based on the target host and other factors, the sending MTA selects a recipient server and connects to it to complete the mail exchange.
Message transfer can occur in a single connection between two MTAs, or in a series of hops through intermediary systems. A receiving SMTP server may be the ultimate destination, an intermediate "relay" (that is, it stores and forwards the message) or a "gateway" (that is, it may forward the message using some protocol other than SMTP). Per RFC 5321 section 2.1, each hop is a formal handoff of responsibility for the message, whereby the receiving server must either deliver the message or properly report the failure to do so.
Once the final hop accepts the incoming message, it hands it to a mail delivery agent (MDA) for local delivery. An MDA saves messages in the relevant mailbox format. As with sending, this reception can be done using one or multiple computers. An MDA may deliver messages directly to storage, or forward them over a network using SMTP or another protocol such as Local Mail Transfer Protocol (LMTP), a derivative of SMTP designed for this purpose.
Once delivered to the local mail server, the mail is stored for batch retrieval by authenticated mail clients (MUAs). Mail is retrieved by end-user applications, called email clients, using the Internet Message Access Protocol (IMAP), a protocol that both facilitates access to mail and manages stored mail, or the Post Office Protocol (POP) which typically uses the traditional mbox mail file format, or a proprietary system such as Microsoft Exchange/Outlook or Lotus Notes/Domino. Webmail clients may use either method, but the retrieval protocol is often not a formal standard.
SMTP defines message transport, not the message content. Thus, it defines the mail envelope and its parameters, such as the envelope sender, but not the header (except trace information) nor the body of the message itself. STD 10 and RFC 5321 define SMTP (the envelope), while STD 11 and RFC 5322 define the message (header and body), formally referred to as the Internet Message Format.
SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with a mail receiver by issuing command strings and supplying necessary data over a reliable ordered data stream channel, typically a Transmission Control Protocol (TCP) connection. An SMTP session consists of commands originated by an SMTP client (the initiating agent, sender, or transmitter) and corresponding responses from the SMTP server (the listening agent, or receiver) so that the session is opened, and session parameters are exchanged. A session may include zero or more SMTP transactions. An SMTP transaction consists of three command/reply sequences: MAIL, which establishes the return address (the reverse-path or bounce address); RCPT, which establishes a recipient and can be repeated for multiple recipients; and DATA, which sends the message header and body.
Besides the intermediate reply for DATA, each server's reply can be either positive (2xx reply codes) or negative. Negative replies can be permanent (5xx codes) or transient (4xx codes). A reject is a permanent failure and the client should send a bounce message to the server it received it from. A drop is a positive response followed by message discard rather than delivery.
The initiating host, the SMTP client, can be either an end-user's email client, functionally identified as a mail user agent (MUA), or a relay server's mail transfer agent (MTA), that is an SMTP server acting as an SMTP client, in the relevant session, in order to relay mail. Fully capable SMTP servers maintain queues of messages for retrying message transmissions that resulted in transient failures.
A MUA knows the outgoing mail SMTP server from its configuration. A relay server typically determines which server to connect to by looking up the MX (Mail eXchange) DNS resource record for each recipient's domain name. If no MX record is found, a conformant relaying server (not all are) instead looks up the A record. Relay servers can also be configured to use a smart host. A relay server initiates a TCP connection to the server on the "well-known port" for SMTP: port 25, or for connecting to an MSA, port 465 or 587. The main difference between an MTA and an MSA is that connecting to an MSA requires SMTP Authentication.
SMTP is a delivery protocol only. In normal use, mail is "pushed" to a destination mail server (or next-hop mail server) as it arrives. Mail is routed based on the destination server, not the individual user(s) to which it is addressed. Other protocols, such as the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP) are specifically designed for use by individual users retrieving messages and managing mailboxes. To permit an intermittently-connected mail server to pull messages from a remote server on demand, SMTP has a feature to initiate mail queue processing on a remote server (see Remote Message Queue Starting below). POP and IMAP are unsuitable protocols for relaying mail by intermittently-connected machines; they are designed to operate after final delivery, when information critical to the correct operation of mail relay (the "mail envelope") has been removed.
Remote Message Queue Starting enables a remote host to start processing of the mail queue on a server so it may receive messages destined to it by sending a corresponding command. The original TURN command was deemed insecure and was extended in RFC 1985 with the ETRN command, which operates more securely using an authentication method based on Domain Name System information.[26]
An email client needs to know the IP address of its initial SMTP server and this has to be given as part of its configuration (usually given as a DNS name). This server will deliver outgoing messages on behalf of the user.
Server administrators need to impose some control on which clients can use the server. This enables them to deal with abuse, for example spam. Two solutions have been in common use: restricting access by a client's location, and authenticating clients by credentials.
Under the location-based system, an ISP's SMTP server will not allow access by users who are outside the ISP's network. More precisely, the server may only allow access to users with an IP address provided by the ISP, which is equivalent to requiring that they are connected to the Internet using that same ISP. A mobile user may often be on a network other than that of their normal ISP, and will then find that sending email fails because the configured SMTP server choice is no longer accessible.
This system has several variations. For example, an organisation's SMTP server may only provide service to users on the same network, enforcing this by firewalling to block access by users on the wider Internet. Or the server may perform range checks on the client's IP address. These methods were typically used by corporations and institutions such as universities which provided an SMTP server for outbound mail only for use internally within the organisation. However, most of these bodies now use client authentication methods, as described below.
Where a user is mobile, and may use different ISPs to connect to the internet, this kind of usage restriction is onerous, and altering the configured outbound email SMTP server address is impractical. It is highly desirable to be able to use email client configuration information that does not need to change.
Modern SMTP servers typically require authentication of clients by credentials before allowing access, rather than restricting access by location as described earlier. This more flexible system is friendly to mobile users and allows them to have a fixed choice of configured outbound SMTP server. SMTP Authentication, often abbreviated SMTP AUTH, is an extension of SMTP whereby clients log in using an authentication mechanism.
Communication between mail servers generally uses the standard TCP port 25 designated for SMTP.
Mail clients, however, generally don't use this; instead they use specific "submission" ports. Mail services generally accept email submission from clients on one of: port 587 (submission, per RFC 6409) or port 465 (submission over implicit TLS, per RFC 8314).
Port 2525 and others may be used by some individual providers, but have never been officially supported.
Many Internet service providers now block all outgoing port 25 traffic from their customers, mainly as an anti-spam measure,[27] but also to avoid the higher costs of leaving it open, which they may otherwise recover by charging more to the few customers that require it open.
A typical example of sending a message via SMTP to two mailboxes (alice and theboss) located in the same mail domain (example.com) is reproduced in the following session exchange. (In this example, the conversation parts are prefixed with S: and C:, for server and client, respectively; these labels are not part of the exchange.)
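A representative exchange of this kind (reconstructed for illustration; the sender address, message text, and queue id are placeholders):

    S: 220 smtp.example.com ESMTP Postfix
    C: HELO relay.example.org
    S: 250 smtp.example.com, I am glad to meet you
    C: MAIL FROM:<bob@example.org>
    S: 250 Ok
    C: RCPT TO:<alice@example.com>
    S: 250 Ok
    C: RCPT TO:<theboss@example.com>
    S: 250 Ok
    C: DATA
    S: 354 End data with <CR><LF>.<CR><LF>
    C: From: "Bob Example" <bob@example.org>
    C: To: "Alice Example" <alice@example.com>
    C: Cc: theboss@example.com
    C: Subject: Test message
    C:
    C: Hello Alice.
    C: This is a test message.
    C: .
    S: 250 Ok: queued as 12345
    C: QUIT
    S: 221 Bye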
After the message sender (SMTP client) establishes a reliable communications channel to the message receiver (SMTP server), the session is opened with a greeting by the server, usually containing its fully qualified domain name (FQDN), in this case smtp.example.com. The client initiates its dialog by responding with a HELO command identifying itself in the command's parameter with its FQDN (or an address literal if none is available).[28]
The client notifies the receiver of the originating email address of the message in a MAIL FROM command. This is also the return or bounce address in case the message cannot be delivered. In this example the email message is sent to two mailboxes on the same SMTP server: one for each recipient listed in the To: and Cc: header fields. The corresponding SMTP command is RCPT TO. Each successful reception and execution of a command is acknowledged by the server with a result code and response message (e.g., 250 Ok).
The transmission of the body of the mail message is initiated with a DATA command after which it is transmitted verbatim line by line and is terminated with an end-of-data sequence. This sequence consists of a new-line (<CR><LF>), a single full stop (.), followed by another new-line (<CR><LF>). Since a message body can contain a line with just a period as part of the text, the client sends two periods every time a line starts with a period; correspondingly, the server replaces every sequence of two periods at the beginning of a line with a single one. Such an escaping method is called dot-stuffing.
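A minimal sketch of the client-side escaping (assuming CRLF line endings; the function name is illustrative):

    // Prefix an extra "." to each line that begins with one, so the lone
    // "." end-of-data marker stays unambiguous; the server strips it back.
    function dotStuff(body) {
      return body
        .split("\r\n")
        .map(line => (line.startsWith(".") ? "." + line : line))
        .join("\r\n");
    }

    dotStuff("Line one\r\n.A line starting with a period");
    // -> "Line one\r\n..A line starting with a period"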
The server's positive reply to the end-of-data, as exemplified, implies that the server has taken the responsibility of delivering the message. A message can be duplicated if there is a communication failure at this time, e.g. due to a power outage: until the sender has received that 250 Ok reply, it must assume the message was not delivered. On the other hand, after the receiver has decided to accept the message, it must assume the message has been delivered to it. Thus, during this time span, both agents have active copies of the message that they will try to deliver.[29] The probability that a communication failure occurs exactly at this step is directly proportional to the amount of filtering that the server performs on the message body, most often for anti-spam purposes. The limiting timeout is specified to be 10 minutes.[30]
The QUIT command ends the session. If the email has other recipients located elsewhere, the client would QUIT and connect to an appropriate SMTP server for subsequent recipients after the current destination(s) had been queued. The information that the client sends in the HELO and MAIL FROM commands is added (not seen in the example) as additional header fields to the message by the receiving server. It adds a Received and Return-Path header field, respectively.
Some clients are implemented to close the connection after the message is accepted (250 Ok: queued as 12345), so the last two lines may actually be omitted. This causes an error on the server when trying to send the 221 Bye reply.
Clients learn a server's supported options by using the EHLO greeting, as exemplified below, instead of the original HELO. Clients fall back to HELO only if the server does not support the EHLO greeting.[31]
Modern clients may use the ESMTP extension keyword SIZE to query the server for the maximum message size that will be accepted. Older clients and servers may try to transfer excessively sized messages that will be rejected after consuming network resources, including connect time to network links that is paid by the minute.[32]
Users can manually determine in advance the maximum size accepted by ESMTP servers. The client replaces the HELO command with the EHLO command, as in the exchange below.
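Such an exchange might look as follows (reconstructed; the client name is illustrative, while the server name and size are those referenced below):

    C: EHLO bob.example.org
    S: 250-smtp2.example.com Hello bob.example.org
    S: 250 SIZE 14680064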
Thus smtp2.example.com declares that it can accept a fixed maximum message size no larger than 14,680,064 octets (8-bit bytes).
In the simplest case, an ESMTP server declares a maximum SIZE immediately after receiving an EHLO. According to RFC 1870, however, the numeric parameter to the SIZE extension in the EHLO response is optional. Clients may instead, when issuing a MAIL FROM command, include a numeric estimate of the size of the message they are transferring, so that the server can refuse receipt of overly large messages.
Original SMTP supports only a single body of ASCII text, therefore any binary data needs to be encoded as text into that body of the message before transfer, and then decoded by the recipient. Binary-to-text encodings, such as uuencode and BinHex, were typically used.
The 8BITMIME command was developed to address this. It was standardized in 1994 as RFC 1652.[33] It facilitates the transparent exchange of e-mail messages containing octets outside the seven-bit ASCII character set by encoding them as MIME content parts, typically encoded with Base64.
On-Demand Mail Relay (ODMR) is an SMTP extension standardized in RFC 2645 that allows an intermittently-connected SMTP server to receive email queued for it when it is connected.
Original SMTP supports email addresses composed of ASCII characters only, which is inconvenient for users whose native script is not Latin-based, or who use diacritics not in the ASCII character set. This limitation was alleviated via extensions enabling UTF-8 in address names. RFC 5336 introduced the experimental UTF8SMTP command; it was later superseded by RFC 6531, which introduced the SMTPUTF8 command. These extensions provide support for multi-byte and non-ASCII characters in email addresses, such as those with diacritics and other language characters such as Greek and Chinese.[34]
Current support is limited, but there is strong interest in broad adoption of RFC 6531 and the related RFCs in countries like China that have a large user base where Latin (ASCII) is a foreign script.
Like SMTP, ESMTP is a protocol used to transport Internet mail. It is used as both an inter-server transport protocol and (with restricted behavior enforced) a mail submission protocol.
The main identification feature for ESMTP clients is to open a transmission with the command EHLO (Extended HELLO), rather than HELO (Hello, the original RFC 821 standard). A server will respond with success (code 250), failure (code 550) or error (code 500, 501, 502, 504, or 421), depending on its configuration. An ESMTP server returns the code 250 OK in a multi-line reply with its domain and a list of keywords to indicate supported extensions. An RFC 821-compliant server returns error code 500, allowing ESMTP clients to try either HELO or QUIT.
Each service extension is defined in an approved format in subsequent RFCs and registered with the Internet Assigned Numbers Authority (IANA). The first definitions were the RFC 821 optional services: SEND, SOML (Send or Mail), SAML (Send and Mail), EXPN, HELP, and TURN. The format of additional SMTP verbs, and of new parameters to MAIL and RCPT, was also set.
Some relatively common keywords (not all of them corresponding to commands) used today include 8BITMIME, AUTH, CHUNKING, DSN, ETRN, PIPELINING, SIZE, SMTPUTF8, and STARTTLS.
The ESMTP format was restated in RFC 2821 (superseding RFC 821) and updated to the latest definition in RFC 5321 in 2008. Support for the EHLO command in servers became mandatory, and HELO was designated a required fallback.
Non-standard, unregistered service extensions can be used by bilateral agreement; these services are indicated by an EHLO message keyword starting with "X", and with any additional parameters or verbs similarly marked.
SMTP commands are case-insensitive. They are presented here in capitalized form for emphasis only. An SMTP server that requires a specific capitalization method is in violation of the standard.[28]
At least the following servers advertise the 8BITMIME extension:
The following servers can be configured to advertise 8BITMIME, but do not perform conversion of 8-bit data to 7-bit when connecting to non-8BITMIME relays:
The SMTP-AUTH extension provides an access control mechanism. It consists of an authentication step through which the client effectively logs into the mail server during the process of sending mail. Servers that support SMTP-AUTH can usually be configured to require clients to use this extension, ensuring the true identity of the sender is known. The SMTP-AUTH extension is defined in RFC 4954.
SMTP-AUTH can be used to allow legitimate users to relay mail while denying relay service to unauthorized users, such as spammers. It does not necessarily guarantee the authenticity of either the SMTP envelope sender or the RFC 2822 "From:" header. For example, spoofing, in which one sender masquerades as someone else, is still possible with SMTP-AUTH unless the server is configured to limit message from-addresses to addresses this AUTHed user is authorized for.
The SMTP-AUTH extension also allows one mail server to indicate to another that the sender has been authenticated when relaying mail. In general this requires the recipient server to trust the sending server, meaning that this aspect of SMTP-AUTH is rarely used on the Internet.[citation needed]
Supporting servers include:
Mail delivery can occur both over plain text and encrypted connections; however, the communicating parties might not know in advance the other party's ability to use a secure channel.
The STARTTLS extension enables supporting SMTP servers to notify connecting clients that they support TLS-encrypted communication, and offers clients the opportunity to upgrade their connection by sending the STARTTLS command. Servers supporting the extension do not inherently gain any security benefit from its implementation on its own, as upgrading to a TLS-encrypted session is dependent on the connecting client deciding to exercise this option, hence the term opportunistic TLS.
STARTTLS is effective only against passive observation attacks, since the STARTTLS negotiation happens in plain text and an active attacker can trivially remove STARTTLS commands. This type of man-in-the-middle attack is sometimes referred to as STRIPTLS, where the encryption negotiation information sent from one end never reaches the other. In this scenario both parties take the invalid or unexpected responses as indication that the other does not properly support STARTTLS, defaulting to traditional plain-text mail transfer.[49] Note that STARTTLS is also defined for IMAP and POP3 in other RFCs, but these protocols serve different purposes: SMTP is used for communication between message transfer agents, while IMAP and POP3 are for end clients and message transfer agents.
In 2014, the Electronic Frontier Foundation began the "STARTTLS Everywhere" project which, similarly to the "HTTPS Everywhere" list, allowed relying parties to discover others supporting secure communication without prior communication. The project stopped accepting submissions on 29 April 2021, and the EFF recommended switching to DANE and MTA-STS for discovering information on peers' TLS support.[50]
RFC 8314 officially declared plain text obsolete and recommends always using TLS for mail submission and access, adding ports with implicit TLS.
RFC 7672 introduced the ability for DNS records to declare the encryption capabilities of a mail server. Utilising DNSSEC, mail server operators are able to publish a hash of their TLS certificate, thereby mitigating the possibility of unencrypted communications.[51]
Microsoft expects to enable full SMTP DANE support for Exchange Online customers by the end of 2024.[52]
A newer 2018 standard, RFC 8461, called "SMTP MTA Strict Transport Security (MTA-STS)", aims to address the problem of active adversaries by defining a protocol for mail servers to declare their ability to use secure channels, in specific files on the server and in specific DNS TXT records. The relying party regularly checks for the existence of such a record, caches it for the amount of time specified in the record, and never communicates over insecure channels until the record expires.[49] Note that MTA-STS records apply only to SMTP traffic between mail servers, while communications between a user's client and the mail server are protected by Transport Layer Security with SMTP/MSA, IMAP, POP3, or HTTPS in combination with an organizational or technical policy. Essentially, MTA-STS is a means to extend such a policy to third parties.
In April 2019, Google Mail announced support for MTA-STS.[53]
Protocols designed to securely deliver messages can fail due to misconfigurations or deliberate active interference, leading to undelivered messages or delivery over unencrypted or unauthenticated channels. RFC 8460, "SMTP TLS Reporting", describes a reporting mechanism and format for sharing statistics and specific information about potential failures with recipient domains. Recipient domains can then use this information to both detect potential attacks and diagnose unintentional misconfigurations.
In April 2019, Google Mail announced support for SMTP TLS Reporting.[53]
The original design of SMTP had no facility to authenticate senders, or check that servers were authorized to send on their behalf, with the result that email spoofing is possible, and commonly used in email spam and phishing.
Occasional proposals are made to modify SMTP extensively or replace it completely. One example of this is Internet Mail 2000, but neither it, nor any other, has made much headway in the face of the network effect of the huge installed base of classic SMTP.
Instead, mail servers now use a range of techniques, such as stricter enforcement of standards such as RFC 5322,[54][55] DomainKeys Identified Mail, Sender Policy Framework and DMARC, DNSBLs and greylisting to reject or quarantine suspicious emails.[56] | https://en.wikipedia.org/wiki/8BITMIME
Ascii85, also called Base85, is a form of binary-to-text encoding developed by Paul E. Rutter for the btoa utility. By using five ASCII characters to represent four bytes of binary data (making the encoded size 1⁄4 larger than the original, assuming eight bits per ASCII character), it is more efficient than uuencode or Base64, which use four characters to represent three bytes of data (a 1⁄3 increase, assuming eight bits per ASCII character).
Its main modern uses are in Adobe's PostScript and Portable Document Format file formats, as well as in the patch encoding for binary files used by Git.[1]
The basic need for a binary-to-text encoding comes from a need to communicate arbitrary binary data over preexisting communications protocols that were designed to carry only English-language human-readable text. Those communication protocols may only be 7-bit safe (and within that avoid certain ASCII control codes), may require line breaks at certain maximum intervals, and may not maintain whitespace. Thus, only the 94 printable ASCII characters are "safe" to use to convey data.
Eighty-five is the minimum integer value of n such that n⁵ ≥ 256⁴; so any sequence of 4 bytes can be encoded as 5 symbols, as long as at least 85 distinct symbols are available. (Five radix-85 digits can represent the integers from 0 to 4,437,053,124 inclusive, which suffice to represent all 4,294,967,296 possible 4-byte sequences.)
Characters used by the encoded text are !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstu and additionally z to mark a sequence of four zero bytes.
When encoding, each group of 4 bytes is taken as a 32-bit binary number, most significant byte first (Ascii85 uses a big-endian convention). This is converted, by repeatedly dividing by 85 and taking the remainder, into 5 radix-85 digits. Then each digit (again, most significant first) is encoded as an ASCII printable character by adding 33 to it, giving the ASCII characters 33 (!) through 117 (u).
Because all-zero data is quite common, an exception is made for the sake of data compression, and an all-zero group is encoded as a single character z instead of !!!!!.
Groups of characters that decode to a value greater than 2³² − 1 (encoded as s8W-!) will cause a decoding error, as will z characters in the middle of a group. White space between the characters is ignored and may occur anywhere to accommodate line-length limitations.
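A sketch of the per-group arithmetic (the function name is illustrative; it handles the z exception but not partial final groups):

    // Encode one 4-byte group as five base-85 characters.
    function encodeGroup(bytes) {               // bytes: four values 0–255
      let n = 0;
      for (const b of bytes) n = n * 256 + b;   // big-endian 32-bit value
      if (n === 0) return "z";                  // all-zero group shortcut
      let out = "";
      for (let i = 0; i < 5; i++) {
        out = String.fromCharCode(33 + (n % 85)) + out; // 33 is "!"
        n = Math.floor(n / 85);
      }
      return out;
    }

    console.log(encodeGroup([0x4D, 0x61, 0x6E, 0x20])); // "Man " -> "9jqo^"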
The original specification only allows a stream that is a multiple of 4 bytes to be encoded.
Encoded data may contain characters that have special meaning in many programming languages and in some text-based protocols, such as the left angle bracket (<), backslash (\), and the single and double quotes (' and "). Other base-85 encodings like Z85 and RFC 1924 are designed to be safe in source code.[2]
The original btoa program always encoded full groups (padding the source as necessary), with a prefix line of "xbtoa Begin" and a suffix line of "xbtoa End", followed by the original file length (in decimal and hexadecimal) and three 32-bit checksums. The decoder needs to use the file length to see how much of the group was padding. The initial proposal for btoa encoding used an encoding alphabet starting at the ASCII space character through "t" inclusive, but this was replaced with an encoding alphabet of "!" to "u" to avoid "problems with some mailers (stripping off trailing blanks)".[3] This program also introduced the special "z" short form for an all-zero group. Version 4.2 added a "y" exception for a group of all ASCII space characters (0x20202020).
"ZMODEM Pack-7 encoding" encodes groups of 4 octets into groups of 5 printable ASCII characters in a similar, or possibly the same, way as Ascii85 does. When a ZMODEM program sends pre-compressed 8-bit data files over 7-bit data channels, it uses "ZMODEM Pack-7 encoding".[4]
Adobe adopted the basic btoa encoding, but with slight changes, and gave it the name Ascii85. The characters used are the ASCII characters 33 (!) through 117 (u) inclusive (to represent the base-85 digits 0 through 84), together with the letter z (as a special case to represent a 32-bit 0 value), and white space is ignored. Adobe uses the delimiter "~>" to mark the end of an Ascii85-encoded string, and the string may be prefixed by "<~".[5] Adobe represents the length by truncating the final group: If the last block of source bytes contains fewer than 4 bytes, the block is padded with up to 3 null bytes before encoding. After encoding, as many bytes as were added as padding are removed from the end of the output.
The reverse is applied when decoding: The last block is padded to 5 bytes with the Ascii85 character u, and as many bytes as were added as padding are omitted from the end of the output (see example).
The padding is not arbitrary. Converting from binary to base 64 only regroups bits and does not change them or their order (a high bit in binary does not affect the low bits in the base64 representation). In converting a binary number to base 85 (85 is not a power of two), high bits do affect the low-order base-85 digits and conversely. Padding the binary low (with zero bits) while encoding and padding the base-85 value high (with u's) in decoding assures that the high-order bits are preserved (the zero padding in the binary gives enough room so that a small addition is trapped and there is no "carry" to the high bits).
In Ascii85-encoded blocks, whitespace and line-break characters may be present anywhere, including in the middle of a 5-character block, but they must be silently ignored.
Adobe's specification does not support btoa's y exception.
Take this quote from Thomas Hobbes's Leviathan:
Man is distinguished, not only by his reason, but by this singular passion from other animals, which is a lust of the mind, that by a perseverance of delight in the continued and indefatigable generation of knowledge, exceeds the short vehemence of any carnal pleasure.
Assuming that 269-character quote is provided in US-ASCII or a 100% compatible encoding to start with, it can then be re-encoded in Ascii85 as the following 337 characters (count and output shown without "<~" and "~>" pre/postfixes):[a]
For a detailed look at the re-encoding, this is the beginning of the Hobbes quote: the first four bytes, "Man " (byte values 77, 97, 110, 32), form the 32-bit value 1,298,230,816, whose base-85 digits are 24, 73, 80, 78, 61; adding 33 to each gives the five characters "9jqo^".
...and the following is the end of the quote (penultimate 4-tuple): the four bytes "sure" form the value 1,937,076,837, which encodes as "F*2M7".
As however the final 4-tuple is incomplete after the period, it must be padded with three zero bytes: the period (byte value 46) followed by three zero bytes forms the value 771,751,936, which encodes as "/cYkO".
Since three bytes of padding had to be added, the three final characters 'YkO' are omitted from the output, leaving "/c".
Decoding is done inversely, except that the last 5-tuple is padded with 'u' characters: "/c" is restored to "/cuuu" before the base-85 arithmetic is reversed.
Since the input had to be padded with three 'u' bytes, the last three bytes of the output are ignored and we end up with the original period.
The input sentence does not contain 4 consecutive zero bytes, so the example does not show the use of the 'z' abbreviation.
The Ascii85 encoding is compatible with 7-bit and 8-bit MIME, while having less overhead than Base64.
One potential compatibility issue of Ascii85 is that some of the characters it uses are significant in markup languages such as XML or SGML. To include Ascii85 data in these documents, it may be necessary to escape the quotes, angle brackets, and ampersands.
Published on April 1, 1996, the informational RFC 1924, "A Compact Representation of IPv6 Addresses", by Robert Elz suggests a base-85 encoding of IPv6 addresses as an April Fools' Day joke. This differs from the scheme used above in that he proposes a different set of 85 ASCII characters, and proposes to do all arithmetic on the 128-bit number, converting it to a single 20-digit base-85 number (internal whitespace not allowed), rather than breaking it into four 32-bit groups.
The proposed character set is, in order, 0–9, A–Z, a–z, and then the 23 characters !#$%&()*+-;<=>?@^_`{|}~. The highest possible representable address, 2¹²⁸ − 1 = 74×85¹⁹ + 53×85¹⁸ + 5×85¹⁷ + ..., would be encoded as =r54lj&NUUO~Hi%c2ym0.
This character set excludes the characters "',./:[\], making it suitable for use in JSON strings (where " and \ would require escaping). However, for SGML-based protocols, notably including XML, string escapes may still be required (to accommodate <, > and &). | https://en.wikipedia.org/wiki/Ascii85
Base36 is a binary-to-text encoding scheme that represents binary data in an ASCII string format by translating it into a radix-36 representation. The choice of 36 is convenient in that the digits can be represented using the Arabic numerals 0–9 and the Latin letters A–Z[1] (the ISO basic Latin alphabet).
Each base36 digit needs less than 6 bits of information to be represented.
Signed32- and64-bitintegerswill only hold at most 6 or 13 base-36 digits, respectively (that many base-36 digits can overflow the 32- and 64-bit integers). For example, the 64-bit signed integer maximum value of "9223372036854775807" is "1Y2P0IJ32E8E7" in base-36.
Similarly, the 32-bit signed integer maximum value of "2147483647" is "ZIK0ZJ" in base-36.
The C standard library since C89 supports base-36 numbers via the strtol and strtoul functions.[2]
In the Common Lisp standard (ANSI INCITS 226-1994), functions like parse-integer support a radix of 2 to 36.[3]
Java SE supports conversion from/to String in different bases from 2 up to 36, for example via Integer.toString(int, int) and Integer.parseInt(String, int).
Just like Java, JavaScript also supports conversion from/to String in different bases from 2 up to 36.[3]
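For example, using the built-in methods (note that toString produces lowercase digits):

    const n = 2147483647;          // 32-bit signed maximum
    const s = n.toString(36);      // "zik0zj"
    console.log(s.toUpperCase());  // "ZIK0ZJ"
    console.log(parseInt(s, 36));  // 2147483647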
PHP, like Java, supports conversion from/to String to different bases from 2 up to 36 using the base_convert function, available since PHP 4.
Go supports conversion to strings in different bases from 2 up to 36 using the built-in strconv.FormatInt() and strconv.FormatUint() functions,[4][5] and conversion from strings encoded in different bases from 2 up to 36 using the built-in strconv.ParseInt() and strconv.ParseUint() functions.[6][7]
Python allows conversions of strings from base 2 to base 36.[8]
Raku supports base 2 to base 36 for all its real numeric types with its builtins :base[9] and parse-base.[10] | https://en.wikipedia.org/wiki/Base36
The base62 encoding scheme uses 62 characters. The characters consist of the capital letters A–Z, the lower case letters a–z and the numbers 0–9. It is a binary-to-text encoding scheme that represents binary data in an ASCII string format.[1][2]
The Base62 index table assigns each of the 62 characters a value from 0 to 61, as sketched below.
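As a sketch, one common assignment (assumed here; orderings vary between implementations) gives 0–9 the values 0–9, A–Z the values 10–35, and a–z the values 36–61:

    // Encode a non-negative integer in base62 using the assumed digit order.
    const BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
    function toBase62(n) {
      if (n === 0) return "0";
      let out = "";
      while (n > 0) {
        out = BASE62[n % 62] + out; // remainder selects the next digit
        n = Math.floor(n / 62);
      }
      return out;
    }

    console.log(toBase62(62));   // "10"
    console.log(toBase62(3843)); // "zz" (61 × 62 + 61)

A decoder simply reverses the process, multiplying an accumulator by 62 and adding each digit's index. | https://en.wikipedia.org/wiki/Base62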
A uniform resource locator (URL), colloquially known as an address on the Web,[1] is a reference to a resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI),[2][3] although many people use the two terms interchangeably.[4][a] URLs occur most commonly to reference web pages (HTTP/HTTPS) but are also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications.
Most web browsers display the URL of a web page above the page in an address bar. A typical URL could have the form http://www.example.com/index.html, which indicates a protocol (http), a hostname (www.example.com), and a file name (index.html).
Uniform Resource Locators were defined in RFC 1738 in 1994 by Tim Berners-Lee, the inventor of the World Wide Web, and the URI working group of the Internet Engineering Task Force (IETF),[7] as an outcome of collaboration started at the IETF Living Documents birds of a feather session in 1992.[7][8]
The format combines the pre-existing system of domain names (created in 1985) with file path syntax, where slashes are used to separate directory and file names. Conventions already existed where server names could be prefixed to complete file paths, preceded by a double slash (//).[9]
Berners-Lee later expressed regret at the use of dots to separate the parts of the domain name within URIs, wishing he had used slashes throughout,[9] and also said that, given the colon following the first component of a URI, the two slashes before the domain name were unnecessary.[10]
Early WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers.
An early (1993) draft of the HTML Specification[11] referred to "Universal" Resource Locators. This was dropped some time between June 1994 (RFC 1630) and October 1994 (draft-ietf-uri-url-08.txt).[12] In his book Weaving the Web, Berners-Lee emphasizes his preference for the original inclusion of "universal" in the expansion rather than the word "uniform", to which it was later changed, and he gives a brief account of the contention that led to the change.
Every HTTP URL conforms to the syntax of a generic URI. The URI generic syntax consists of five components, organized hierarchically in order of decreasing significance from left to right: scheme, authority, path, query, and fragment.[13]: §3
A component is undefined if it has an associated delimiter and the delimiter does not appear in the URI; the scheme and path components are always defined.[13]: §5.2.1 A component is empty if it has no characters; the scheme component is always non-empty.[13]: §3
The authority component consists of subcomponents:
This is represented in a syntax diagram as:
The URI comprises:
A web browser will usually dereference a URL by performing an HTTP request to the specified host, by default on port number 80. URLs using the https scheme require that requests and responses be made over a secure connection to the website.
Internet users are distributed throughout the world using a wide variety of languages and alphabets, and expect to be able to create URLs in their own local alphabets. An Internationalized Resource Identifier (IRI) is a form of URL that includes Unicode characters. All modern browsers support IRIs. The parts of the URL requiring special treatment for different alphabets are the domain name and path.[18][19]
The domain name in the IRI is known as an Internationalized Domain Name (IDN). Web and Internet software automatically convert the domain name into punycode usable by the Domain Name System; for example, the Chinese URL http://例子.卷筒纸 becomes http://xn--fsqu00a.xn--3lr804guic/. The xn-- indicates that the character was not originally ASCII.[20]
The URL path name can also be specified by the user in the local writing system. If not already encoded, it is converted to UTF-8, and any characters not part of the basic URL character set are escaped as hexadecimal using percent-encoding; for example, the Japanese URL http://example.com/引き割り.html becomes http://example.com/%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html. The target computer decodes the address and displays the page.[18]
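Both conversions can be reproduced with Python's standard library; a rough sketch (the built-in idna codec implements the older IDNA 2003 rules, so results for some modern labels may differ):

    from urllib.parse import quote

    # Percent-encode a non-ASCII path as UTF-8, as in the example above.
    print(quote("/引き割り.html"))
    # /%E5%BC%95%E3%81%8D%E5%89%B2%E3%82%8A.html

    # Convert an internationalized domain label to punycode.
    print("例子".encode("idna"))
    # b'xn--fsqu00a'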
Protocol-relative links (PRL), also known as protocol-relative URLs (PRURL), are URLs that have no protocol specified. For example, //example.com will use the protocol of the current page, typically HTTP or HTTPS.[21][22] | https://en.wikipedia.org/wiki/URL
A binary clock is a clock that displays the time of day in a binary format. Originally, such clocks showed each decimal digit of sexagesimal time as a binary value, but presently binary clocks also exist which display hours, minutes, and seconds as binary numbers. Most binary clocks are digital, although analog varieties exist. True binary clocks also exist, which indicate the time by successively halving the day, instead of using hours, minutes, or seconds. Similar clocks, based on Gray-coded binary, also exist.
Most common binary clocks use six columns of LEDs to represent zeros and ones. Each column represents a single decimal digit, a format known as binary-coded decimal (BCD). The bottom row in each column represents 1 (or 2^0), with each row above representing higher powers of two, up to 2^3 (or 8).
To read each individual digit in the time, the user adds the values that each illuminated LED represents, then reads these from left to right. The first two columns represent the hour, the next two represent the minute and the last two represent the second. Since zero digits are not illuminated, the positions of each digit must be memorized if the clock is to be usable in the dark.
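A small Python sketch of the six-column BCD display described above (each column is one decimal digit, rendered here as four bits with the most significant row first):

    from datetime import datetime

    def bcd_columns(t: datetime) -> list[str]:
        """Return the six BCD columns for hours, minutes, seconds."""
        digits = f"{t.hour:02d}{t.minute:02d}{t.second:02d}"
        return [f"{int(d):04b}" for d in digits]

    print(bcd_columns(datetime(2024, 1, 1, 10, 37, 49)))
    # ['0001', '0000', '0011', '0111', '0100', '1001']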
Binary clocks that display time in binary-coded sexagesimal also exist. Instead of representing each digit of traditional sexagesimal time with one binary number, each component of traditional sexagesimal time is represented with one binary number, that is, using up to 6 bits instead of only 4.
For 24-hour binary-coded sexagesimal clocks, there are 11 or 17 LED lights to show the time: 5 LEDs to show the hours, 6 LEDs to show the minutes, and 6 LEDs to show the seconds (which aren't used in clocks with 11 LED lights).
A format also exists where hours, minutes and seconds are shown on three lines, instead of columns, as binary numbers.[1]
Less commonly, the day could be divided in binary fractions, such as ½ day, ¼ day, etc. The clock would show the time in 16 bits, where the smallest unit would be exactly 1⁄65536 day, or 675⁄512 (about 1.318) seconds.[2] An analog format of this type also exists.[3] However, it is much easier to write and express this in hexadecimal, which would be hexadecimal time. | https://en.wikipedia.org/wiki/Binary_time
The following is a comparison of notable hex editors.
ANSI is the Windows character set, OEM is the DOS character set. Both are based on ASCII. | https://en.wikipedia.org/wiki/Comparison_of_hex_editors
A disk editor is a computer program that allows its user to read, edit, and write raw data (at character or hexadecimal, byte levels) on disk drives (e.g., hard disks, USB flash disks or removable media such as floppy disks); as such, they are sometimes called sector editors, since the read/write routines built into the electronics of most disk drives require data to be read and written in chunks of sectors (usually 512 bytes). Many disk editors can also be used to edit the contents of a running computer's memory or a disk image.
Unlike hex editors, which are used to edit files, a disk editor allows access to the underlying disk structures, such as the master boot record (MBR) or GUID Partition Table (GPT), file system, and directories. On some operating systems (like Unix or Unix-like), most hex editors can act as disk editors simply by opening block devices instead of regular files. Programmers can use disk editors to understand these structures and test whether their implementation (e.g. of a file system) works correctly. Sometimes these structures are edited in order to provide examples for teaching data recovery and forensics, or in an attempt to hide data to achieve privacy or hide data from casual examiners. However, modifying such data structures gives only a weak level of protection, and data encryption is the preferred method to achieve privacy.
Some disk editors include special functions which enable more convenient ways to edit and fix file systems or other disk-specific data structures. Furthermore, some include simple file browsers that can present the disk contents for partially corrupted file systems or file systems unknown to the operating system. These features can be used, for example, for file recovery.
Disk editors for home computers of the 1980s were often included as part of utility software packages on floppies or cartridges. The latter had the advantage of being instantly available at power-on and after resets, instead of having to be loaded or reloaded on the same disk drive that later would hold the floppy to be edited (the majority of home computer users possessed only one floppy disk drive at that time). Having the disk editor on cartridge also helped the user avoid editing/damaging the disk editor application disk by mistake.
All 1980s disk editors strove to be better than DEBUG contained in DOS. DEBUG could load, edit, and write one or more sectors from a floppy or hard disk based on the BIOS. This permitted simple disk editing tasks such as saving and restoring the master boot record and other critical sectors, or even changing the active (= boot) partition in the MBR.
In an NTVDM under 1993's Windows NT, DEBUG could not access the physical drive with the MBR of the operating system and so was in essence useless as a disk editor for the system drive. The Resource Kit and the support tools for some Windows NT versions contained DSKPROBE[1] as a very simple disk editor supporting the use and modification of the partition table in the MBR and related tasks.
A partition editor (also called a partitioning utility) is a kind of utility software designed to view, create, modify or delete disk partitions. A disk partition is a logical segment of the storage space on a storage device. By partitioning a large device into several partitions, it is possible to isolate various types of data from one another, and to allow two or more operating systems to coexist simultaneously. Features and capabilities of partition editors vary, but generally they can create several partitions on a disk, or one contiguous partition on several disks, at the discretion of the user. They can also shrink a partition to allow more partitions to be created on a storage device, or delete one and expand an adjacent partition into the available space. | https://en.wikipedia.org/wiki/Disk_editor
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 14th century, when Madhava of Sangamagrama developed approximations correct to eleven and then thirteen digits. Jamshīd al-Kāshī achieved sixteen digits next. Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega).
The record of manual approximation of π is held by William Shanks, who calculated 527 decimals correctly in 1853.[1] Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of π). On April 2, 2025, the current record was established by Linus Media Group and Kioxia with Alexander Yee's y-cruncher with 300 trillion (3×10^14) digits.[2]
The best known approximations to π dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
Some Egyptologists[3] have claimed that the ancient Egyptians used an approximation of π as 22⁄7 = 3.142857 (about 0.04% too high) from as early as the Old Kingdom (c. 2700–2200 BC).[4] This claim has been met with skepticism.[5][6]
Babylonian mathematics usually approximated π to 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible).[7] The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better approximation of π as 25⁄8 = 3.125, about 0.528% below the exact value.[8][9][10][11]
At about the same time, the Egyptian Rhind Mathematical Papyrus (dated to the Second Intermediate Period, c. 1600 BCE, although stated to be a copy of an older, Middle Kingdom text) implies an approximation of π as 256⁄81 ≈ 3.16 (accurate to 0.6 percent) by calculating the area of a circle via approximation with the octagon.[5][12]
Astronomical calculations in the Shatapatha Brahmana (c. 6th century BCE) use a fractional approximation of 339⁄108 ≈ 3.139.[13]
The Mahabharata (500 BCE – 300 CE) offers an approximation of 3, in the ratios offered in Bhishma Parva verses 6.12.40–45:
...
The Moon is handed down by memory to be eleven thousand yojanas in diameter. Its peripheral circle happens to be thirty three thousand yojanas when calculated. ... The Sun is eight thousand yojanas and another two thousand yojanas in diameter. From that its peripheral circle comes to be equal to thirty thousand yojanas.
...
In the 3rd century BCE, Archimedes proved the sharp inequalities 223⁄71 < π < 22⁄7, by means of regular 96-gons (accuracies of 2·10^−4 and 4·10^−4, respectively).[15]
In the 2nd century CE, Ptolemy used the value 377⁄120, the first known approximation accurate to three decimal places (accuracy 2·10^−5).[16] It is equal to 3 + 8/60 + 30/60^2, which is accurate to two sexagesimal digits.
The Chinese mathematician Liu Hui in 263 CE computed π to between 3.141024 and 3.142708 by inscribing a 96-gon and 192-gon; the average of these two values is 3.141866 (accuracy 9·10^−5).
He also suggested that 3.14 was a good enough approximation for practical purposes. He has also frequently been credited with a later and more accurate result, π ≈ 3927⁄1250 = 3.1416 (accuracy 2·10^−6), although some scholars instead believe that this is due to the later (5th-century) Chinese mathematician Zu Chongzhi.[17] Zu Chongzhi is known to have computed π to be between 3.1415926 and 3.1415927, which was correct to seven decimal places. He also gave two other approximations of π: π ≈ 22⁄7 and π ≈ 355⁄113, which are not as accurate as his decimal result. The latter fraction is the best possible rational approximation of π using fewer than five decimal digits in the numerator and denominator. Zu Chongzhi's results surpass the accuracy reached in Hellenistic mathematics, and would remain without improvement for close to a millennium.
In Gupta-era India (6th century), mathematician Aryabhata, in his astronomical treatise Āryabhaṭīya, stated:
Add 4 to 100, multiply by 8 and add to 62,000. This is 'approximately' the circumference of a circle whose diameter is 20,000.
Approximating π to four decimal places: π ≈ 62832⁄20000 = 3.1416.[18][19][20] Aryabhata stated that his result "approximately" (āsanna, "approaching") gave the circumference of a circle. His 15th-century commentator Nilakantha Somayaji (Kerala school of astronomy and mathematics) has argued that the word means not only that this is an approximation, but that the value is incommensurable (irrational).[21]
Further progress was not made for nearly a millennium, until the 14th century, when Indian mathematician and astronomer Madhava of Sangamagrama, founder of the Kerala school of astronomy and mathematics, found the Maclaurin series for arctangent, and then two infinite series for π.[22][23][24] One of them is now known as the Madhava–Leibniz series, based on π = 4 arctan(1):
\pi = 4\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right)
The other was based on π = 6 arctan(1/√3):
\pi = \sqrt{12}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{3^{n}(2n+1)}
He used the first 21 terms of the latter to compute an approximation of π correct to 11 decimal places as 3.14159265359.
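The computation is easy to reproduce with the series based on π = 6 arctan(1/√3); a sketch in Python using the decimal module for extra precision:

    from decimal import Decimal, getcontext
    getcontext().prec = 30

    s = Decimal(0)
    for n in range(21):                       # the first 21 terms
        s += Decimal((-1) ** n) / (Decimal(3) ** n * (2 * n + 1))
    pi = Decimal(12).sqrt() * s
    print(pi)   # 3.14159265359... (11 correct decimal places)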
He also improved the formula based on arctan(1) by including a correction:
It is not known how he came up with this correction.[23] Using this he found an approximation of π to 13 decimal places of accuracy when n = 75.
Indian mathematician Bhaskara II used regular polygons with up to 384 sides to obtain a close approximation of π, calculating it as 3.141666.[25]
Jamshīd al-Kāshī (Kāshānī), a Persian astronomer and mathematician, correctly computed the fractional part of 2π to 9 sexagesimal digits in 1424,[26] and translated this into 16 decimal digits[27] after the decimal point: 2π ≈ 6.2831853071795865,
which gives 16 correct digits for π after the decimal point: π ≈ 3.1415926535897932.
He achieved this level of accuracy by calculating the perimeter of a regular polygon with 3 × 2^28 sides.[28]
In the second half of the 16th century, the French mathematician François Viète discovered an infinite product that converged on π, known as Viète's formula.
The German-Dutch mathematician Ludolph van Ceulen (circa 1600) computed the first 35 decimal places of π with a 2^62-gon. He was so proud of this accomplishment that he had them inscribed on his tombstone.[29]
In Cyclometricus (1621), Willebrord Snellius demonstrated that the perimeter of the inscribed polygon converges on the circumference twice as fast as does the perimeter of the corresponding circumscribed polygon. This was proved by Christiaan Huygens in 1654. Snellius was able to obtain seven digits of π from a 96-sided polygon.[30]
In 1656, John Wallis published the Wallis product:
\frac{\pi}{2} = \prod_{n=1}^{\infty}\frac{4n^{2}}{4n^{2}-1} = \prod_{n=1}^{\infty}\left(\frac{2n}{2n-1}\cdot\frac{2n}{2n+1}\right) = \left(\frac{2}{1}\cdot\frac{2}{3}\right)\cdot\left(\frac{4}{3}\cdot\frac{4}{5}\right)\cdot\left(\frac{6}{5}\cdot\frac{6}{7}\right)\cdot\left(\frac{8}{7}\cdot\frac{8}{9}\right)\cdots
In 1706, John Machin used Gregory's series (the Taylor series for arctangent) and the identity \tfrac{1}{4}\pi = 4\operatorname{arccot} 5 - \operatorname{arccot} 239 to calculate 100 digits of π (see § Machin-like formula below).[31][32] In 1719, Thomas de Lagny used a similar identity to calculate 127 digits (of which 112 were correct). In 1789, the Slovene mathematician Jurij Vega improved John Machin's formula to calculate the first 140 digits, of which the first 126 were correct.[33] In 1841, William Rutherford calculated 208 digits, of which the first 152 were correct.
The magnitude of such precision (152 decimal places) can be put into context by the fact that the circumference of the largest known object, the observable universe, can be calculated from its diameter (93 billion light-years) to a precision of less than one Planck length (at 1.6162×10^−35 meters, the shortest unit of length expected to be directly measurable) using π expressed to just 62 decimal places.[34]
The English amateur mathematician William Shanks calculated π to 530 decimal places in January 1853, of which the first 527 were correct (the last few likely being incorrect due to round-off errors).[1][35] He subsequently expanded his calculation to 607 decimal places in April 1853,[36] but an error introduced right at the 530th decimal place rendered the rest of his calculation erroneous; due to the nature of Machin's formula, the error propagated back to the 528th decimal place, leaving only the first 527 digits correct once again.[1] Twenty years later, Shanks expanded his calculation to 707 decimal places in April 1873.[37] Due to this being an expansion of his previous calculation, most of the new digits were incorrect as well.[1] Shanks was said to have calculated new digits all morning and would then spend all afternoon checking his morning's work. This was the longest expansion of π until the advent of the electronic digital computer three-quarters of a century later.[38]
In 1910, the Indian mathematician Srinivasa Ramanujan found several rapidly converging infinite series of π, including
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!\,(1103+26390k)}{(k!)^{4}\,396^{4k}},
which computes a further eight decimal places of π with each term in the series. His series are now the basis for the fastest algorithms currently used to calculate π. Evaluating the first term alone yields a value correct to seven decimal places: π ≈ 9801/(2206√2) ≈ 3.14159273.
See Ramanujan–Sato series.
From the mid-20th century onwards, all improvements in calculation of π have been done with the help of calculators or computers.
In 1944–45, D. F. Ferguson, with the aid of a mechanical desk calculator, found that William Shanks had made a mistake in the 528th decimal place, and that all succeeding digits were incorrect.[35][39]
In the early years of the computer, an expansion of π to 100,000 decimal places[40]: 78 was computed by Maryland mathematician Daniel Shanks (no relation to the aforementioned William Shanks) and his team at the United States Naval Research Laboratory in Washington, D.C. In 1961, Shanks and his team used two different power series for calculating the digits of π. For one, it was known that any error would produce a value slightly high, and for the other, it was known that any error would produce a value slightly low. And hence, as long as the two series produced the same digits, there was a very high confidence that they were correct. The first 100,265 digits of π were published in 1962.[40]: 80–99 The authors outlined what would be needed to calculate π to 1 million decimal places and concluded that the task was beyond that day's technology, but would be possible in five to seven years.[40]: 78
In 1989, the Chudnovsky brothers computed π to over 1 billion decimal places on the supercomputer IBM 3090 using the following variation of Ramanujan's infinite series of π:
\frac{1}{\pi} = 12\sum_{k=0}^{\infty}\frac{(-1)^{k}(6k)!\,(13591409+545140134k)}{(3k)!\,(k!)^{3}\,640320^{3k+3/2}}.
Records since then have all been accomplished using the Chudnovsky algorithm.
In 1999, Yasumasa Kanada and his team at the University of Tokyo computed π to over 200 billion decimal places on the supercomputer HITACHI SR8000/MPP (128 nodes) using another variation of Ramanujan's infinite series of π.
In November 2002, Yasumasa Kanada and a team of 9 others used the Hitachi SR8000, a 64-node supercomputer with 1 terabyte of main memory, to calculate π to roughly 1.24 trillion digits in around 600 hours (25 days).[41]
Depending on the purpose of a calculation, π can be approximated by using fractions for ease of calculation. The most notable such approximations are 22⁄7 (relative error of about 4·10^−4) and 355⁄113 (relative error of about 8·10^−8).[58][59][60] In Chinese mathematics, the fractions 22/7 and 355/113 are known as Yuelü (约率; yuēlǜ; 'approximate ratio') and Milü (密率; mìlǜ; 'close ratio').
Of some notability are legal or historical texts purportedly "defining π" to have some rational value, such as the "Indiana Pi Bill" of 1897, which stated "the ratio of the diameter and circumference is as five-fourths to four" (which would imply "π = 3.2") and a passage in the Hebrew Bible that implies that π = 3.
The so-called "Indiana Pi Bill" from 1897 has often been characterized as an attempt to "legislate the value of Pi". Rather, the bill dealt with a purported solution to the problem of geometrically "squaring the circle".[61]
The bill was nearly passed by the Indiana General Assembly in the U.S., and has been claimed to imply a number of different values for π, although the closest it comes to explicitly asserting one is the wording "the ratio of the diameter and circumference is as five-fourths to four", which would make π = 16⁄5 = 3.2, a discrepancy of nearly 2 percent. A mathematics professor who happened to be present the day the bill was brought up for consideration in the Senate, after it had passed in the House, helped to stop the passage of the bill on its second reading, after which the assembly thoroughly ridiculed it before postponing it indefinitely.
It is sometimes claimed[by whom?] that the Hebrew Bible implies that "π equals three", based on a passage in 1 Kings 7:23 and 2 Chronicles 4:2 giving measurements for the round basin located in front of the Temple in Jerusalem as having a diameter of 10 cubits and a circumference of 30 cubits.
The issue is discussed in the Talmud and in Rabbinic literature.[62] Among the many explanations and comments are these:
There is still some debate on this passage in biblical scholarship.[failed verification][64][65] Many reconstructions of the basin show a wider brim (or flared lip) extending outward from the bowl itself by several inches to match the description given in the NRSV.[66] In the succeeding verses, the rim is described as "a handbreadth thick; and the brim thereof was wrought like the brim of a cup, like the flower of a lily: it received and held three thousand baths" (NRSV), which suggests a shape that can be encompassed with a string shorter than the total length of the brim, e.g., a Lilium flower or a teacup.
Archimedes, in his Measurement of a Circle, created the first algorithm for the calculation of π based on the idea that the perimeter of any (convex) polygon inscribed in a circle is less than the circumference of the circle, which, in turn, is less than the perimeter of any circumscribed polygon. He started with inscribed and circumscribed regular hexagons, whose perimeters are readily determined. He then shows how to calculate the perimeters of regular polygons of twice as many sides that are inscribed and circumscribed about the same circle. This is a recursive procedure which would be described today as follows: Let p_k and P_k denote the perimeters of regular polygons of k sides that are inscribed and circumscribed about the same circle, respectively. Then
P_{2k} = \frac{2p_{k}P_{k}}{p_{k}+P_{k}}, \qquad p_{2k} = \sqrt{p_{k}P_{2k}}.
Archimedes uses this to successively compute P_12, p_12, P_24, p_24, P_48, p_48, P_96 and p_96.[67] Using these last values he obtains 3\tfrac{10}{71} < \pi < 3\tfrac{1}{7}.
It is not known why Archimedes stopped at a 96-sided polygon; it only takes patience to extend the computations. Heron reports in his Metrica (about 60 CE) that Archimedes continued the computation in a now lost book, but then attributes an incorrect value to him.[68]
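The doubling recurrence above is easy to run numerically; a sketch for a circle of unit diameter, starting from p₆ = 3 and P₆ = 2√3 (floating point here, whereas Archimedes had to bound the square roots by hand):

    from math import sqrt

    p, P = 3.0, 2 * sqrt(3)      # inscribed and circumscribed hexagons
    sides = 6
    while sides < 96:
        P = 2 * p * P / (p + P)  # harmonic mean: circumscribed 2k-gon
        p = sqrt(p * P)          # geometric mean: inscribed 2k-gon
        sides *= 2
    print(f"{p:.5f} < pi < {P:.5f}")   # 3.14103 < pi < 3.14271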
Archimedes uses no trigonometry in this computation and the difficulty in applying the method lies in obtaining good approximations for the square roots that are involved. Trigonometry, in the form of a table of chord lengths in a circle, was probably used by Claudius Ptolemy of Alexandria to obtain the value of π given in the Almagest (circa 150 CE).[69]
Advances in the approximation of π (when the methods are known) were made by increasing the number of sides of the polygons used in the computation. A trigonometric improvement by Willebrord Snell (1621) obtains better bounds from a pair of bounds obtained from the polygon method. Thus, more accurate results were obtained from polygons with fewer sides.[70] Viète's formula, published by François Viète in 1593, was derived by Viète using a closely related polygonal method, but with areas rather than perimeters of polygons whose numbers of sides are powers of two.[71]
The last major attempt to compute π by this method was carried out by Grienberger in 1630 who calculated 39 decimal places of π using Snell's refinement.[70]
For fast calculations, one may use formulae such as Machin's:
\frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}
together with the Taylor series expansion of the function arctan(x). This formula is most easily verified using polar coordinates of complex numbers, producing:
(5+i)^{4}\cdot(239-i) = 2^{2}\cdot13^{4}(1+i).
((x, y) = (239, 13^2) is a solution to the Pell equation x^2 − 2y^2 = −1.)
Formulae of this kind are known as Machin-like formulae. Machin's particular formula was used well into the computer era for calculating record numbers of digits of π,[40] but more recently other similar formulae have been used as well.
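A compact way to see why such formulae were practical: with integer arithmetic scaled by a power of ten, the arctangent series delivers many digits quickly. A sketch of Machin's formula in Python (illustrative, not any historical computation's actual procedure):

    def arctan_inv(x: int, digits: int) -> int:
        """arctan(1/x) scaled by 10**(digits + 10), via the Taylor series."""
        one = 10 ** (digits + 10)          # ten guard digits
        total = term = one // x
        n, sign = 3, -1
        while term:
            term = one // x ** n
            total += sign * term // n
            n, sign = n + 2, -sign
        return total

    def machin_pi(digits: int) -> str:
        pi = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
        return str(pi // 10 ** 10)         # drop the guard digits

    print(machin_pi(50))   # 314159265358979323846... (decimal point omitted)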
For instance, Shanks and his team used the following Machin-like formula in 1961 to compute the first 100,000 digits of π:[40]
\frac{\pi}{4} = 6\arctan\frac{1}{8} + 2\arctan\frac{1}{57} + \arctan\frac{1}{239}
and they used another Machin-like formula,
\frac{\pi}{4} = 12\arctan\frac{1}{18} + 8\arctan\frac{1}{57} - 5\arctan\frac{1}{239}
as a check.
The record as of December 2002 by Yasumasa Kanada of Tokyo University stood at 1,241,100,000,000 digits. The following Machin-like formulae were used for this:
\frac{\pi}{4} = 12\arctan\frac{1}{49} + 32\arctan\frac{1}{57} - 5\arctan\frac{1}{239} + 12\arctan\frac{1}{110443} — K. Takano (1982).
\frac{\pi}{4} = 44\arctan\frac{1}{57} + 7\arctan\frac{1}{239} - 12\arctan\frac{1}{682} + 24\arctan\frac{1}{12943} — F. C. M. Størmer (1896).
Other formulae that have been used to compute estimates of π include:
Liu Hui (see also Viète's formula):
Madhava:
Newton / Euler convergence transformation:[72]
Euler:
Ramanujan:
David Chudnovsky and Gregory Chudnovsky:
Ramanujan's work is the basis for the Chudnovsky algorithm, the fastest algorithm used, as of the turn of the millennium, to calculate π.
Extremely long decimal expansions of π are typically computed with iterative formulae like the Gauss–Legendre algorithm and Borwein's algorithm. The latter, found in 1985 by Jonathan and Peter Borwein, converges extremely quickly:
For y_0 = \sqrt{2}-1, \ a_0 = 6-4\sqrt{2}, and
y_{k+1} = \frac{1-f(y_{k})}{1+f(y_{k})}, \qquad a_{k+1} = a_{k}(1+y_{k+1})^{4} - 2^{2k+3}y_{k+1}(1+y_{k+1}+y_{k+1}^{2}),
where f(y) = (1-y^{4})^{1/4}, the sequence 1/a_k converges quartically to π, giving about 100 digits in three steps and over a trillion digits after 20 steps. Even though the Chudnovsky series is only linearly convergent, the Chudnovsky algorithm might be faster than the iterative algorithms in practice; that depends on technological factors such as memory sizes and access times.[73] For breaking world records, the iterative algorithms are used less commonly than the Chudnovsky algorithm since they are memory-intensive.
The first one million digits of π and 1⁄π are available from Project Gutenberg.[74][75] A former calculation record (December 2002) by Yasumasa Kanada of Tokyo University stood at 1.24 trillion digits, which were computed in September 2002 on a 64-node Hitachi supercomputer with 1 terabyte of main memory, which carries out 2 trillion operations per second, nearly twice as many as the computer used for the previous record (206 billion digits); the Takano and Størmer Machin-like formulae given above were used for this.
These approximations have so many digits that they are no longer of any practical use, except for testing new supercomputers.[76] Properties like the potential normality of π will always depend on the infinite string of digits on the end, not on any finite computation.
As well as the formulae and approximations such as 22⁄7 and 355⁄113 discussed elsewhere in this article, the following expressions have been used to estimate π:
Pi can be obtained from a circle if its radius and area are known, using the relationship A = πr^2, i.e. π = A/r^2.
If a circle with radius r is drawn with its center at the point (0, 0), any point whose distance from the origin is less than r will fall inside the circle. The Pythagorean theorem gives the distance from any point (x, y) to the center as \sqrt{x^{2}+y^{2}}.
Mathematical "graph paper" is formed by imagining a 1×1 square centered around each cell (x, y), where x and y are integers between −r and r. Squares whose center resides inside or exactly on the border of the circle can then be counted by testing whether, for each cell (x, y), \sqrt{x^{2}+y^{2}} \leq r.
The total number of cells satisfying that condition thus approximates the area of the circle, which then can be used to calculate an approximation of π. Closer approximations can be produced by using larger values of r.
Mathematically, this formula can be written:
\pi \approx \frac{1}{r^{2}}\sum_{x=-r}^{r}\sum_{y=-r}^{r}{\begin{cases}1&{\text{if }}\sqrt{x^{2}+y^{2}}\leq r\\0&{\text{if }}\sqrt{x^{2}+y^{2}}>r\end{cases}}
In other words, begin by choosing a value for r. Consider all cells (x, y) in which both x and y are integers between −r and r. Starting at 0, add 1 for each cell whose distance to the origin (0, 0) is less than or equal to r. When finished, divide the sum, representing the area of a circle of radius r, by r^2 to find the approximation of π.
For example, if r is 5, then the cells considered are:
The 12 cells (0, ±5), (±5, 0), (±3, ±4), (±4, ±3) are exactly on the circle, and 69 cells are completely inside, so the approximate area is 81, and π is calculated to be approximately 3.24 because 81/5^2 = 3.24. Results for some values of r are shown in the table below:[88]
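The r = 5 case can be checked directly; a brief Python sketch of the counting just described:

    def pi_from_lattice(r: int) -> float:
        """Count lattice cells whose centers lie in the circle of radius r."""
        inside = sum(1
                     for x in range(-r, r + 1)
                     for y in range(-r, r + 1)
                     if x * x + y * y <= r * r)
        return inside / r ** 2

    print(pi_from_lattice(5))      # 3.24  (81 cells / 25)
    print(pi_from_lattice(1000))   # approaches pi as r grows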
Similarly, the more complex approximations of π given below involve repeated calculations of some sort, yielding closer and closer approximations with increasing numbers of calculations.
Besides its simple continued fraction representation [3; 7, 15, 1, 292, 1, 1, ...], which displays no discernible pattern, π has many generalized continued fraction representations generated by a simple rule, including these two.
The remainder of the Madhava–Leibniz series can be expressed as a generalized continued fraction as follows.[89]
Note that Madhava's correction term is
The well-known values 22⁄7 and 355⁄113 are respectively the second and fourth continued fraction approximations to π.[90]
The Gregory–Leibniz series
\pi = 4\sum_{n=0}^{\infty}\frac{(-1)^{n}}{2n+1} = 4\left(\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots\right)
is the power series for arctan(x) specialized to x = 1. It converges too slowly to be of practical interest. However, the power series converges much faster for smaller values of x, which leads to formulae where π arises as the sum of small angles with rational tangents, known as Machin-like formulae.
Knowing that 4 arctan 1 = π, the formula can be simplified to get:
\pi = \sum_{n=0}^{\infty}\frac{2^{n+1}\,(n!)^{2}}{(2n+1)!}
with a convergence such that each additional 10 terms yields at least three more digits.
Another formula for π involving the arctangent function is given by
\frac{\pi}{2^{k+1}} = \arctan\frac{\sqrt{2-a_{k-1}}}{a_{k}}, \qquad k \geq 2,
where a_k = \sqrt{2+a_{k-1}} such that a_1 = \sqrt{2}. Approximations can be made by using, for example, the rapidly convergent Euler formula[92]
Alternatively, the following simple expansion series of the arctangent function can be used
where
to approximate π with even more rapid convergence. Convergence in this arctangent formula for π improves as the integer k increases.
The constant π can also be expressed by an infinite sum of arctangent functions as
and
where F_n is the n-th Fibonacci number. However, these two formulae for π converge much more slowly because of the set of arctangent functions involved in the computation.
Observing an equilateral triangle and noting that \sin\frac{\pi}{6} = \frac{1}{2}
yields
\pi = 6\arcsin\frac{1}{2} = 3\sum_{n=0}^{\infty}\frac{\binom{2n}{n}}{16^{n}(2n+1)},
with a convergence such that each additional five terms yields at least three more digits.
The Bailey–Borwein–Plouffe formula (BBP) for calculating π was discovered in 1995 by Simon Plouffe. Using a spigot algorithm, the formula can compute any particular base-16 digit of π—returning the hexadecimal value of the digit—without computing the intervening digits.[93]
In 1996, Plouffe derived an algorithm to extract the nth decimal digit of π (using base-10 math to extract a base-10 digit), which can do so with an improved speed of O(n^3 (log n)^3) time. The algorithm does not require memory for storage of a full n-digit result, so the one-millionth digit of π could in principle be computed using a pocket calculator.[94] (However, it would be quite tedious and impractical to do so.)
The calculation speed of Plouffe's formula was improved to O(n^2) by Fabrice Bellard, who derived an alternative formula (albeit only in base-2 math) for computing π.[95]
Many other expressions for π were developed and published by Indian mathematician Srinivasa Ramanujan. He worked with mathematician Godfrey Harold Hardy in England for a number of years.
Extremely long decimal expansions of π are typically computed with the Gauss–Legendre algorithm and Borwein's algorithm; the Salamin–Brent algorithm, which was invented in 1976, has also been used.
In 1997, David H. Bailey, Peter Borwein and Simon Plouffe published a paper (Bailey, 1997) on a new formula for π as an infinite series:
\pi = \sum_{k=0}^{\infty}\frac{1}{16^{k}}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right).
This formula permits one to fairly readily compute the kth binary or hexadecimal digit of π, without having to compute the preceding k − 1 digits. Bailey's website[96] contains the derivation as well as implementations in various programming languages. The PiHex project computed 64 bits around the quadrillionth bit of π (which turns out to be 0).
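The digit-extraction idea can be sketched in Python; this floating-point version follows the standard presentation of the BBP algorithm and is only reliable for modest positions before round-off dominates:

    def bbp_series(j: int, d: int) -> float:
        """Fractional part of 16**d * sum over k of 1/(16**k * (8k+j))."""
        s = 0.0
        for k in range(d + 1):                 # left sum, exact via pow-mod
            s += pow(16, d - k, 8 * k + j) / (8 * k + j)
        for k in range(d + 1, d + 15):         # rapidly vanishing tail
            s += 16.0 ** (d - k) / (8 * k + j)
        return s % 1.0

    def pi_hex_digit(d: int) -> str:
        """Hexadecimal digit of pi at position d (0-based, after the point)."""
        x = (4 * bbp_series(1, d) - 2 * bbp_series(4, d)
             - bbp_series(5, d) - bbp_series(6, d)) % 1.0
        return "%x" % int(16 * x)

    print("".join(pi_hex_digit(i) for i in range(8)))   # 243f6a88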
Fabrice Bellard further improved on BBP with his formula:[97]
Other formulae that have been used to compute estimates of π include Ramanujan's:
\frac{1}{\pi} = \frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!\,(1103+26390k)}{(k!)^{4}\,396^{4k}}.
This converges extraordinarily rapidly. Ramanujan's work is the basis for the fastest algorithms used, as of the turn of the millennium, to calculate π.
In 1988, David Chudnovsky and Gregory Chudnovsky found an even faster-converging series (the Chudnovsky algorithm):
\frac{1}{\pi} = 12\sum_{k=0}^{\infty}\frac{(-1)^{k}(6k)!\,(13591409+545140134k)}{(3k)!\,(k!)^{3}\,640320^{3k+3/2}}.
The speed of various algorithms for computing pi to n correct digits is shown below in descending order of asymptotic complexity. M(n) is the complexity of the multiplication algorithm employed.
PiHex was a project to compute three specific binary digits of π using a distributed network of several hundred computers. In 2000, after two years, the project finished computing the five trillionth (5×10^12), the forty trillionth, and the quadrillionth (10^15) bits. All three of them turned out to be 0.
Over the years, several programs have been written for calculating π to many digits on personal computers.
Most computer algebra systems can calculate π and other common mathematical constants to any desired precision.
Functions for calculating π are also included in many general libraries for arbitrary-precision arithmetic, for instance Class Library for Numbers, MPFR and SymPy.
Programs designed for calculating π may have better performance than general-purpose mathematical software. They typically implement checkpointing and efficient disk swapping to facilitate extremely long-running and memory-expensive computations.
Reprinted in Smith, David Eugene (1929). "William Jones: The First Use of π for the Circle Ratio". A Source Book in Mathematics. McGraw–Hill. pp. 346–347.
Sandifer, Edward (2006). "Why 140 Digits of Pi Matter" (PDF). Jurij baron Vega in njegov čas: Zbornik ob 250-letnici rojstva [Baron Jurij Vega and His Times: Celebrating 250 Years]. Ljubljana: DMFA. ISBN 978-961-6137-98-0. LCCN 2008467244. OCLC 448882242. Archived from the original (PDF) on 28 August 2006. "We should note that Vega's value contains an error in the 127th digit. Vega gives a 4 where there should be an [6], and all digits after that are incorrect."
Roy, Ranjan (2021) [1st ed. 2011]. Series and Products in the Development of Mathematics. Vol. 1 (2nd ed.). Cambridge University Press. pp. 215–216, 219–220.
Sandifer, Ed (2009). "Estimating π" (PDF). How Euler Did It. Reprinted in How Euler Did Even More. Mathematical Association of America. 2014. pp. 109–118.
Newton, Isaac (1971). Whiteside, Derek Thomas (ed.). The Mathematical Papers of Isaac Newton. Vol. 4, 1674–1684. Cambridge University Press. pp. 526–653.
Euler, Leonhard (1755). "§2.30". Institutiones Calculi Differentialis (in Latin). Academiae Imperialis Scientiarium Petropolitanae. p. 318. E 212.
Euler, Leonhard (1798) [written 1779]. "Investigatio quarundam serierum, quae ad rationem peripheriae circuli ad diametrum vero proxime definiendam maxime sunt accommodatae". Nova Acta Academiae Scientiarum Petropolitinae. 11: 133–149, 167–168. E 705.
Hwang Chien-Lih (2005), "An elementary derivation of Euler's series for the arctangent function", The Mathematical Gazette, 89 (516): 469–470, doi:10.1017/S0025557200178404, S2CID 123395287 | https://en.wikipedia.org/wiki/Approximations_of_%CF%80
Bellard's formula is used to calculate the nth digit of π in base 16.
Bellard's formula was discovered by Fabrice Bellard in 1997. It is about 43% faster than the Bailey–Borwein–Plouffe formula (discovered in 1995).[1][2] It has been used in PiHex, the now-completed distributed computing project.
One important application is verifying computations of all digits of pi performed by other means. Rather than having to compute all of the digits twice by two separate algorithms to ensure that a computation is correct, the final digits of a very long all-digits computation can be verified by the much faster Bellard's formula.[3]
Formula:
\pi = \frac{1}{2^{6}}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{2^{10n}}\left(-\frac{2^{5}}{4n+1}-\frac{1}{4n+3}+\frac{2^{8}}{10n+1}-\frac{2^{6}}{10n+3}-\frac{2^{2}}{10n+5}-\frac{2^{2}}{10n+7}+\frac{1}{10n+9}\right)
| https://en.wikipedia.org/wiki/Bellard%27s_formula
file is a shell command for reporting the type of data contained in a file. It is commonly supported in Unix and Unix-like operating systems.
As the command uses relatively quick-running heuristics to determine file type, it can report misleading information. The command can be fooled, for example, by including a magic number in the content even if the rest of the content does not match what the magic number indicates. The command's report therefore cannot be taken as completely trustworthy.
The Single UNIX Specification (SUS) requires the command to exhibit the following behavior with respect to the file specified via the command line:
Position-sensitive tests are normally implemented by matching various locations within the file against a textual database of magic numbers (see the Usage section). This differs from other simpler methods such as file extensions and schemes like MIME.
In the System V implementation, the Ian Darwin implementation, and the OpenBSD implementation, the command uses a database to drive the probing of the lead bytes. That database is stored as a file located in /etc/magic, /usr/share/file/magic or similar.
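The lead-byte probing can be illustrated with a toy version in Python; the signature table below is a tiny hypothetical sample, not the real magic-file format:

    MAGIC = [
        (b"\x89PNG\r\n\x1a\n", "PNG image data"),
        (b"\x7fELF",           "ELF executable"),
        (b"PK\x03\x04",        "Zip archive data"),
        (b"%PDF-",             "PDF document"),
    ]

    def sniff(path: str) -> str:
        """Report a file type from its leading bytes, file-style."""
        with open(path, "rb") as f:
            head = f.read(16)
        for signature, description in MAGIC:
            if head.startswith(signature):
                return description
        return "data"   # fallback, like file's default answer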
The file command originated in Unix Research Version 4[2] in 1973. System V brought a major update with several important changes, most notably moving the file type information into an external text file rather than compiling it into the binary itself.
Most major BSD and Linux distributions include a free, open-source implementation that was written from scratch by Ian Darwin in 1986–87.[3] It keeps file type information in a text file with a format based on that of the System V version. It was expanded by Geoff Collyer in 1989 and since then has had input from many others, including Guy Harris, Chris Lowth and Eric Fischer. From late 1993 onward, its maintenance has been organized by Christos Zoulas. The OpenBSD system has its own subset implementation written from scratch, but still uses the Darwin/Zoulas collection of magic file formatted information.
The file command was ported to the IBM i operating system.[4]
As of version 4.00 of the Ian Darwin/Christos Zoulas implementation of file, the functionality of the command is implemented in and exposed by a libmagic library that is accessible to consuming code via C (and compatible) linking.[5][6][7][8]
The SUS[9] mandates the following command-line options:
Implementations may add extra options. Ian Darwin's implementation adds -s 'special files', -k 'keep-going' and -r 'raw', among many others.[10]
For a C source code file, file main.c reports:
For a compiled executable, file program reports information like:
For a block device /dev/hda, file /dev/hda1 reports:
By default, file does not try to read a device file due to potential undesirable effects. But using the non-standard option -s (available in the Ian Darwin branch), which requests to read device files to identify content, file -s /dev/hda1 reports details such as:
Via Ian Darwin's non-standard option -k, the command does not stop after the first hit found, but looks for other matching patterns. The -r option, which is available in some versions, causes the newline character to be displayed in its raw form rather than in its octal representation. On Linux, file -k -r libmagic-dev_5.35-4_armhf.deb reports information like:
For a compressed file, file compressed.gz reports information like:
For a compressed file, file -i compressed.gz reports information like:
For a PPM file, file data.ppm reports:
For a Mach-O universal binary, file /bin/cat reports like:
For a symbolic link, file /usr/bin/vi reports:
Identifying a symbolic link is not available on all platforms, and the link will be dereferenced if -L is passed or POSIXLY_CORRECT is set. | https://en.wikipedia.org/wiki/File_(command)
A punched card sorter is a machine for sorting decks of punched cards.
Sorting was a major activity in most facilities that processed data on punched cards using unit record equipment. The work flow of many processes required decks of cards to be put into some specific order as determined by the data punched in the cards. The same deck might be sorted differently for different processing steps. A popular family of sorters, the IBM 80 series sorters, sorted input cards into one of 13 pockets depending on the holes punched in a selected column and the sorter's settings.
The basic operation of a card sorter is to take a punched card, examine a single column, and place the card into a selected pocket. There are twelve rows on a punched card, and thirteen pockets in the sorter; one pocket is for blanks, rejects, and errors. (IBM 1962)
Cards are normally passed through the sorter face down with the bottom edge ("9-edge") first. A small metal brush or optical sensor is positioned so that, as each card goes through the sorter, one column passes under the brush or optical sensor. The holes sensed in that column together with the settings of the sorter controls determine which pocket the card is to be directed to. This directing is done by slipping the card into a stack of metal strips (or chute blades) that run the length of the sorter feed mechanism. Each blade ends above one of the output pockets, and the card is thus routed to the designated pocket.[1]
Multiple-column sorting was commonly done by first sorting the least significant column, then proceeding, column by column, to the most significant column. This is called a least significant digit radix sort.
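The card procedure is exactly a least significant digit radix sort; a Python sketch of sorting three-digit "cards" with ten pockets per pass:

    def card_sort(cards: list[int], columns: int = 3) -> list[int]:
        for col in range(columns):              # least significant column first
            pockets = [[] for _ in range(10)]   # one pocket per digit 0-9
            for card in cards:
                digit = (card // 10 ** col) % 10
                pockets[digit].append(card)     # order within a pocket is kept
            cards = [c for pocket in pockets for c in pocket]
        return cards

    print(card_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]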
Numeric columns have one punch in rows 0–9, possibly a sign overpunch in rows 11–12, and can be sorted in a single pass through the sorter. Alphabetic columns have a zone punch in rows 12, 11, or 0 and a digit punch in one of the rows 1–9, and can be sorted by passing some or all of the cards through the sorter twice on that column. For more details of punched card codes see punched card § IBM 80-column format and character codes.
Several methods were used for alphabetical sorting, depending on the features provided by the particular sorter and the characteristics of the data to be sorted. A commonly used method on the 082 and earlier sorters was to sort the cards twice on the same column, first on digit rows 1–9, and then (after re-stacking) on the zone rows 12, 11, and 0. Operator switches allow zone-sorting by "switching off" rows 1–9 for the second pass of the card for each column.
Other special characters and punctuation marks were added to the card code, involving as many as three punches per column (and in 1964, with the introduction of EBCDIC, as many as six punches per column). The 083 and 084 sorters recognized these multiple-digit or multiple-zone punches, sorting them to the error pocket.
Original census sorting box, 1890, manual.[3]
Sorting cards became an issue during the 1900 agricultural census, so Herman Hollerith's company developed the 1901 Hollerith Automatic Horizontal Sorter,[4] a sorter with horizontal pockets.[5]
In 1908, he designed the more compact Hollerith 070 Vertical Sorting Machine,[6] which sorted 250 cards per minute.[3][5]
The Type 71 Vertical Sorter came out in 1928. It had 12 pockets that could hold 80 cards. It could sort 150 cards per minute.[7]
The Type 75, Model 1, 19??, 400 cards per minute[3]
The Type 75, Model 2, 19??, 250 cards per minute[3]
Card Sorters in the IBM 80 series[8] included:
In August 1957, a basic 082 rented for $55 per month; an 083 for twice that. (IBM 1957)
By 1969, only the 82, 83, and 84 were made available for rental by IBM.[10]
In the early 2020s, TCG Machines introduced a card sorting machine to process trading card game cards.[11] The punched cards and brushes in these modern sorters have been replaced with image sensors (cameras) and computer vision technology, but their form and operation remain essentially identical to that of their historical predecessors. | https://en.wikipedia.org/wiki/IBM_80_series_Card_Sorters
In computer science, a sorting algorithm is an algorithm that puts elements of a list into an order. The most frequently used orders are numerical order and lexicographical order, and either ascending or descending. Efficient sorting is important for optimizing the efficiency of other algorithms (such as search and merge algorithms) that require input data to be in sorted lists. Sorting is also often useful for canonicalizing data and for producing human-readable output.
Formally, the output of any sorting algorithm must satisfy two conditions:
Although some algorithms are designed for sequential access, the highest-performing algorithms assume data is stored in a data structure which allows random access.
From the beginning of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. Among the authors of early sorting algorithms around 1951 was Betty Holberton, who worked on ENIAC and UNIVAC.[1][2] Bubble sort was analyzed as early as 1956.[3] Asymptotically optimal algorithms have been known since the mid-20th century – new algorithms are still being invented, with the widely used Timsort dating to 2002, and the library sort being first published in 2006.
Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons (some input sequences will require a multiple of n log n comparisons, where n is the number of elements in the array to be sorted). Algorithms not based on comparisons, such as counting sort, can have better performance.
Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide-and-conquer algorithms, data structures such as heaps and binary trees, randomized algorithms, best, worst and average case analysis, time–space tradeoffs, and upper and lower bounds.
Sorting small arrays optimally (in the fewest comparisons and swaps) or fast (i.e. taking into account machine-specific details) is still an open research problem, with solutions only known for very small arrays (<20 elements). Similarly, optimal (by various definitions) sorting on a parallel machine is an open research topic.
Sorting algorithms can be classified by:
Stable sort algorithms sort equal elements in the same order that they appear in the input. For example, in the card sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. Stable sorting algorithms choose one of these, according to the following rule: if two items compare as equal (like the two 5 cards), then their relative order will be preserved, i.e. if one comes before the other in the input, it will come before the other in the output.
Stability is important to preserve order over multiple sorts on the same data set. For example, say that student records consisting of name and class section are sorted dynamically, first by name, then by class section. If a stable sorting algorithm is used in both cases, the sort-by-class-section operation will not change the name order; with an unstable sort, it could be that sorting by section shuffles the name order, resulting in a nonalphabetical list of students.
More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called thekey. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if whenever there are two records R and S with the same key, and R appears before S in the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. Stability is also not an issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original input list as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary key. For example, suppose we wish to sort a hand of cards such that the suits are in the order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are sorted by rank. This can be done by first sorting the cards by rank (using any sort), and then doing a stable sort by suit:
Within each suit, the stable sort preserves the ordering by rank that was already done. This idea can be extended to any number of keys and is utilised by radix sort. The same effect can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g., compares first by suit, and then compares by rank if the suits are the same.
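Because Python's built-in sorted() is stable (it is Timsort), the two-pass card sort described above can be written directly; a small sketch with (rank, suit) tuples:

    SUIT_ORDER = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
    hand = [(5, "hearts"), (2, "spades"), (5, "clubs"), (9, "hearts")]

    by_rank = sorted(hand, key=lambda card: card[0])            # any sort
    by_suit = sorted(by_rank, key=lambda c: SUIT_ORDER[c[1]])   # stable pass
    print(by_suit)
    # [(5, 'clubs'), (5, 'hearts'), (9, 'hearts'), (2, 'spades')]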
This analysis assumes that the length of each key is constant and that all comparisons, swaps and other operations can proceed in constant time.
Legend:
Below is a table of comparison sorts. Mathematical analysis demonstrates a comparison sort cannot perform better than O(n log n) on average.[4]
The following table describes integer sorting algorithms and other sorting algorithms that are not comparison sorts. These algorithms are not limited to Ω(n log n) unless they meet the unit-cost random-access machine model as described below.[12]
Also cannot sort non-integers.
Unlike most distribution sorts, this can sort non-integers.
Same as the LSD variant, it can sort non-integers.
Samplesort can be used to parallelize any of the non-comparison sorts, by efficiently distributing data into several buckets and then passing down sorting to several processors, with no need to merge as buckets are already sorted between each other.
Some algorithms are slow compared to those discussed above, such as the bogosort with unbounded run time and the stooge sort which has O(n^2.7) run time. These sorts are usually described for educational purposes to demonstrate how the run time of algorithms is estimated. The following table describes some sorting algorithms that are impractical for real-life use in traditional software contexts due to extremely poor performance or specialized hardware requirements.
Mostly of theoretical interest due to implementational complexity and suboptimal data moves.
Worst case is unbounded when using randomization, but a deterministic version guarantees O(n × n!) worst case.
Theoretical computer scientists have detailed other sorting algorithms that provide better than O(n log n) time complexity assuming additional constraints, including:
While there are a large number of sorting algorithms, in practical implementations a few algorithms predominate. Insertion sort is widely used for small data sets, while for large data sets an asymptotically efficient sort is used, primarily heapsort, merge sort, or quicksort. Efficient implementations generally use a hybrid algorithm, combining an asymptotically efficient algorithm for the overall sort with insertion sort for small lists at the bottom of a recursion. Highly tuned implementations use more sophisticated variants, such as Timsort (merge sort, insertion sort, and additional logic), used in Android, Java, and Python, and introsort (quicksort and heapsort), used (in variant forms) in some C++ sort implementations and in .NET.
For more restricted data, such as numbers in a fixed interval, distribution sorts such as counting sort or radix sort are widely used. Bubble sort and variants are rarely used in practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intuitively generally use insertion sorts for small sets. For larger sets, people often first bucket, such as by initial letter, and multiple bucketing allows practical sorting of very large sets. Often space is relatively cheap, such as by spreading objects out on the floor or over a large area, but operations are expensive, particularly moving an object a large distance – locality of reference is important. Merge sorts are also practical for physical objects, particularly as two hands can be used, one for each list to merge, while other algorithms, such as heapsort or quicksort, are poorly suited for human use. Other algorithms, such as library sort, a variant of insertion sort that leaves spaces, are also practical for physical use.
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on small data, due to low overhead, but not efficient on large data. Insertion sort is generally faster than selection sort in practice, due to fewer comparisons and good performance on almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes, and thus is used when write performance is a limiting factor.
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list, similar to how one puts money in one's wallet.[22] In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring shifting all following elements over by one. Shellsort is a variant of insertion sort that is more efficient for larger lists.
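A minimal in-place insertion sort in Python, as just described:

    def insertion_sort(a: list) -> None:
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:   # shift larger elements right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key                 # insert into its correct position

    data = [5, 2, 4, 6, 1, 3]
    insertion_sort(data)
    print(data)   # [1, 2, 3, 4, 5, 6]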
Selection sort is an in-place comparison sort. It has O(n^2) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity and also has performance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and repeats these steps for the remainder of the list.[23] It does no more than n swaps and thus is useful where swapping is very expensive.
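The corresponding selection sort, which performs at most one swap per pass:

    def selection_sort(a: list) -> None:
        for i in range(len(a)):
            m = min(range(i, len(a)), key=a.__getitem__)   # index of minimum
            a[i], a[m] = a[m], a[i]                        # one swap per pass

    data = [64, 25, 12, 22, 11]
    selection_sort(data)
    print(data)   # [11, 12, 22, 25, 64]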
Practical general sorting algorithms are almost always based on an algorithm with average time complexity (and generally worst-case complexity) O(n log n), of which the most common are heapsort, merge sort, and quicksort. Each has advantages and drawbacks, with the most significant being that simple implementation of merge sort uses O(n) additional space, and simple implementation of quicksort has O(n^2) worst-case complexity. These problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency on real-world data various modifications are used. First, the overhead of these algorithms becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching to insertion sort once the data is small enough. Second, the algorithms often perform poorly on already sorted data or almost sorted data – these are common in real-world data and can be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable, and stability is often a desirable property in a sort. Thus more sophisticated algorithms are often employed, such as Timsort (based on merge sort) or introsort (based on quicksort, falling back to heapsort).
Merge sorttakes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list.[24]Of the algorithms described here, this is the first that scales well to very large lists, because its worst-case running time is O(nlogn). It is also easily applied to lists, not only arrays, as it only requires sequential access, not random access. However, it has additional O(n) space complexity and involves a large number of copies in simple implementations.
Merge sort has seen a relatively recent surge in popularity for practical implementations, due to its use in the sophisticated algorithm Timsort, which is used for the standard sort routine in the programming languages Python[25] and Java (as of JDK 7[26]). Merge sort itself is the standard routine in Perl,[27] among others, and has been used in Java at least since 2000 in JDK 1.3.[28]
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree.[29] Once the data list has been made into a heap, the root node is guaranteed to be the largest (or smallest) element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heapsort to run in O(n log n) time, and this is also the worst-case complexity.
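A Python sketch with the heap kept inside the array itself (helper name illustrative): build a max-heap, then repeatedly swap the root to the end and restore the heap:

```python
def heapsort(a):
    """Sort the list a in place in O(n log n) using a binary max-heap."""
    def sift_down(root, end):
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1            # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):   # heapify the whole array
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # extract the max repeatedly
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a
```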
Recombinant sort is a non-comparison-based sorting algorithm developed by Peeyush Kumar et al. in 2020. The algorithm combines bucket sort, counting sort, radix sort, hashing, and dynamic programming techniques. It employs an n-dimensional Cartesian space mapping approach consisting of two primary phases: a Hashing cycle that maps elements to a multidimensional array using a special hash function, and an Extraction cycle that retrieves elements in sorted order. Recombinant Sort achieves O(n) time complexity for best, average, and worst cases, and can process both numerical and string data types, including mixed decimal and non-decimal numbers.[30]
Quicksort is a divide-and-conquer algorithm which relies on a partition operation: to partition an array, an element called a pivot is selected.[31][32] All elements smaller than the pivot are moved before it and all greater elements are moved after it. This can be done efficiently in linear time and in-place. The lesser and greater sublists are then recursively sorted. This yields an average time complexity of O(n log n), with low overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, quicksort is one of the most popular sorting algorithms and is available in many standard programming libraries.
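A Python sketch using the simple Lomuto partition scheme with the last element as pivot; as noted below, this naive pivot choice degrades to O(n²) on already sorted input:

```python
def quicksort(a, lo=0, hi=None):
    """Sort the list a in place by recursive partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        pivot, i = a[hi], lo
        for j in range(lo, hi):       # move smaller elements left
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]     # pivot lands in its final position
        quicksort(a, lo, i - 1)
        quicksort(a, i + 1, hi)
    return a
```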
The important caveat about quicksort is that its worst-case performance is O(n²); while this is rare, in naive implementations (choosing the first or last element as pivot) this occurs for sorted data, which is a common case. The most complex issue in quicksort is thus choosing a good pivot element, as consistently poor choices of pivots can result in drastically slower O(n²) performance, but good choice of pivots yields O(n log n) performance, which is asymptotically optimal. For example, if at each step the median is chosen as the pivot then the algorithm works in O(n log n). Finding the median, such as by the median of medians selection algorithm, is however an O(n) operation on unsorted lists and therefore adds significant overhead to sorting. In practice choosing a random pivot almost certainly yields O(n log n) performance.
If a guarantee of O(n log n) performance is important, there is a simple modification to achieve that. The idea, due to Musser, is to set a limit on the maximum depth of recursion.[33] If that limit is exceeded, then sorting is continued using the heapsort algorithm. Musser proposed that the limit should be 1 + 2⌊log₂(n)⌋, which is approximately twice the maximum recursion depth one would expect on average with a randomly ordered array.
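A rough Python sketch of the depth-limited idea; for brevity the fallback delegates to the built-in sort, standing in for the heapsort pass a real introsort would run on that subarray:

```python
import math

def introsort(a):
    """Quicksort with Musser's depth limit of 1 + 2*floor(log2(n))."""
    limit = 1 + 2 * int(math.log2(len(a))) if a else 0
    _sort(a, 0, len(a) - 1, limit)
    return a

def _sort(a, lo, hi, depth):
    if lo >= hi:
        return
    if depth == 0:
        a[lo:hi + 1] = sorted(a[lo:hi + 1])  # stand-in for heapsort
        return
    pivot, i = a[hi], lo                     # Lomuto partition
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    _sort(a, lo, i - 1, depth - 1)
    _sort(a, i + 1, hi, depth - 1)
```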
Shellsort was invented by Donald Shell in 1959.[34] It improves upon insertion sort by moving out-of-order elements more than one position at a time. The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is the greatest distance between two out-of-place elements. This means that in general insertion sort performs in O(n²), but for data that is mostly sorted, with only a few elements out of place, it performs faster. So, by first sorting elements far away, and progressively shrinking the gap between the elements to sort, the final sort computes much faster. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort.
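A Python sketch using Shell's original halving gap sequence; better gap sequences yield the improved bounds discussed below:

```python
def shellsort(a):
    """Gapped insertion sort with gaps n//2, n//4, ..., 1."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):   # insertion sort each gap-strided chain
            key, j = a[i], i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2
    return a
```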
The worst-case time complexity of Shellsort is an open problem and depends on the gap sequence used, with known complexities ranging from O(n²) to O(n^(4/3)) and Θ(n log² n). This, combined with the fact that Shellsort is in-place, only needs a relatively small amount of code, and does not require use of the call stack, makes it useful in situations where memory is at a premium, such as in embedded systems and operating system kernels.
Bubble sort, and variants such as the comb sort and cocktail sort, are simple, highly inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of analysis, but they are rarely used in practice.
Bubble sort is a simple sorting algorithm. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them. It continues doing this for each pair of adjacent elements to the end of the data set. It then starts again with the first two elements, repeating until no swaps have occurred on the last pass.[35] This algorithm's average time and worst-case performance is O(n²), so it is rarely used to sort large, unordered data sets. Bubble sort can be used to sort a small number of items (where its asymptotic inefficiency is not a high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly sorted (that is, the elements are not significantly out of place). For example, if any number of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble sort's exchange will get them in order on the first pass, the second pass will find all elements in order, so the sort will take only 2n time.
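A Python sketch with the early-exit check that gives the O(n) behavior on nearly sorted input mentioned above:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs until a clean pass."""
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        n -= 1  # the largest unsorted element has bubbled into place
    return a
```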
Comb sort is a relatively simple sorting algorithm based on bubble sort and originally designed by Włodzimierz Dobosiewicz in 1980.[36] It was later rediscovered and popularized by Stephen Lacey and Richard Box with a Byte magazine article published in April 1991. The basic idea is to eliminate turtles, or small values near the end of the list, since in a bubble sort these slow the sorting down tremendously. (Rabbits, large values around the beginning of the list, do not pose a problem in bubble sort.) It accomplishes this by initially swapping elements that are a certain distance from one another in the array, rather than only swapping elements if they are adjacent to one another, and then shrinking the chosen distance until it is operating as a normal bubble sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that swaps elements spaced a certain distance away from one another, comb sort can be thought of as the same generalization applied to bubble sort.
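A Python sketch; the shrink factor of 1.3 is the commonly cited choice rather than a requirement of the algorithm:

```python
def comb_sort(a):
    """Bubble sort with a shrinking gap, to move 'turtles' quickly."""
    n = len(a)
    gap, swapped = n, True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))  # ends as a plain bubble-sort pass
        swapped = False
        for i in range(n - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```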
Exchange sort is sometimes confused with bubble sort, although the algorithms are in fact distinct.[37][38] Exchange sort works by comparing the first element with all elements above it, swapping where needed, thereby guaranteeing that the first element is correct for the final sort order; it then proceeds to do the same for the second element, and so on. It lacks the advantage that bubble sort has of detecting in one pass if the list is already sorted, but it can be faster than bubble sort by a constant factor (one less pass over the data to be sorted; half as many total comparisons) in worst-case situations. Like any simple O(n²) sort it can be reasonably fast over very small data sets, though in general insertion sort will be faster.
Distribution sort refers to any sorting algorithm where data is distributed from their input to multiple intermediate structures which are then gathered and placed on the output. For example, both bucket sort and flashsort are distribution-based sorting algorithms. Distribution sorting algorithms can be used on a single processor, or they can be a distributed algorithm, where individual subsets are separately sorted on different processors, then combined. This allows external sorting of data too large to fit into a single computer's memory.
Counting sort is applicable when each input is known to belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and O(|S|) memory where n is the length of the input. It works by creating an integer array of size |S| and using the ith bin to count the occurrences of the ith member of S in the input. Each input is then counted by incrementing the value of its corresponding bin. Afterward, the counting array is looped through to arrange all of the inputs in order. This sorting algorithm often cannot be used because S needs to be reasonably small for the algorithm to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as n increases. It also can be modified to provide stable behavior.
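A Python sketch for integer keys in range(k); a stable variant for full records would take prefix sums of the counts instead of expanding them directly:

```python
def counting_sort(a, k):
    """Sort integers drawn from range(k) in O(n + k) time."""
    counts = [0] * k
    for x in a:
        counts[x] += 1            # tally occurrences per possible value
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)   # emit each value count times
    return out
```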
Bucket sort is a divide-and-conquer sorting algorithm that generalizes counting sort by partitioning an array into a finite number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across all buckets.
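A Python sketch assuming inputs uniformly distributed in [0, 1); the per-bucket sort is delegated to the built-in sort for brevity:

```python
def bucket_sort(a, n_buckets=10):
    """Distribute values in [0, 1) into buckets, sort each, concatenate."""
    buckets = [[] for _ in range(n_buckets)]
    for x in a:
        buckets[int(x * n_buckets)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # any sorting algorithm works per bucket
    return out
```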
Radix sort is an algorithm that sorts numbers by processing individual digits. n numbers consisting of k digits each are sorted in O(n·k) time. Radix sort can process digits of each number either starting from the least significant digit (LSD) or starting from the most significant digit (MSD). The LSD algorithm first sorts the list by the least significant digit while preserving their relative order using a stable sort. Then it sorts them by the next digit, and so on from the least significant to the most significant, ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix sort is not stable. It is common for the counting sort algorithm to be used internally by the radix sort. A hybrid sorting approach, such as using insertion sort for small bins, improves performance of radix sort significantly.
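A Python sketch of the LSD variant for non-negative integers, using a stable bucketing pass per digit:

```python
def radix_sort_lsd(a, base=10):
    """Sort non-negative integers digit by digit, least significant first."""
    if not a:
        return a
    exp = 1
    while exp <= max(a):
        buckets = [[] for _ in range(base)]
        for x in a:  # appending in order keeps each pass stable
            buckets[(x // exp) % base].append(x)
        a = [x for b in buckets for x in b]
        exp *= base
    return a
```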
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but due to the recursive way that it copies portions of the array it becomes much less practical when the array does not fit in RAM, because it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag sort".[39]
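A Python sketch of the tag-sort idea: sort a lightweight index by key and, optionally, materialize the sorted records in one pass (key is a caller-supplied function; names are illustrative):

```python
def tag_sort(records, key):
    """Sort an index instead of moving the (possibly large) records."""
    index = sorted(range(len(records)), key=lambda i: key(records[i]))
    # index[j] is the position of the j-th smallest record; materializing
    # the fully sorted list is one optional extra pass:
    return index, [records[i] for i in index]
```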
Another technique for overcoming the memory-size problem is external sorting, for example by combining two algorithms in a way that takes advantage of the strength of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted using an efficient algorithm (such as quicksort), and the results merged using a k-way merge similar to that used in merge sort. This is faster than performing either merge sort or quicksort over the entire list.[40][41]
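A Python sketch of the hybrid idea, with in-memory lists standing in for disk-resident runs; heapq.merge performs the lazy k-way merge:

```python
import heapq

def external_sort(data, chunk_size):
    """Sort RAM-sized chunks into runs, then k-way merge the runs."""
    runs = [sorted(data[i:i + chunk_size])
            for i in range(0, len(data), chunk_size)]
    return list(heapq.merge(*runs))  # lazy k-way merge of sorted runs
```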
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Related problems include approximate sorting (sorting a sequence to within a certain amount of the correct order), partial sorting (sorting only the k smallest elements of a list, or finding the k smallest elements, but unordered) and selection (computing the kth smallest element). These can be solved inefficiently by a total sort, but more efficient algorithms exist, often derived by generalizing a sorting algorithm. The most notable example is quickselect, which is related to quicksort. Conversely, some sorting algorithms can be derived by repeated application of a selection algorithm; quicksort and quickselect can be seen as the same pivoting move, differing only in whether one recurses on both sides (quicksort, divide-and-conquer) or one side (quickselect, decrease-and-conquer).
A kind of opposite of a sorting algorithm is a shuffling algorithm. These are fundamentally different because they require a source of random numbers. Shuffling can also be implemented by a sorting algorithm, namely by a random sort: assigning a random number to each element of the list and then sorting based on the random numbers. This is generally not done in practice, however, and there is a well-known simple and efficient algorithm for shuffling: the Fisher–Yates shuffle.
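A minimal Python sketch of the Fisher–Yates shuffle (function name illustrative); each pass swaps position i with a uniformly chosen position at or below it, which yields every permutation with equal probability:

```python
import random

def fisher_yates_shuffle(a):
    """Uniformly shuffle the list a in place in O(n) time."""
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)  # random index from the unfixed prefix
        a[i], a[j] = a[j], a[i]
    return a
```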
Sorting algorithms are ineffective for finding an order in many situations, usually when elements have no reliable comparison function (crowdsourced preferences like voting systems), when comparisons are very costly (sports), or when it would be impossible to pairwise compare all elements for all criteria (search engines). In these cases, the problem is usually referred to as ranking and the goal is to find the "best" result for some criteria according to probabilities inferred from comparisons or rankings. A common example is in chess, where players are ranked with the Elo rating system, and rankings are determined by a tournament system instead of a sorting algorithm. | https://en.wikipedia.org/wiki/Distribution_sort |
Kirkpatrick–Reisch sorting is a fast sorting algorithm for items with limited-size integer keys. It is notable for having an asymptotic time complexity that is better than radix sort.[1][2]
| https://en.wikipedia.org/wiki/Kirkpatrick-Reisch_sorting |
In the mathematical theory of non-standard positional numeral systems, the Komornik–Loreti constant is a mathematical constant that represents the smallest base q for which the number 1 has a unique representation, called its q-development. The constant is named after Vilmos Komornik and Paola Loreti, who defined it in 1998.[1]
Given a real number q > 1, the series

∑_{n=0}^{∞} a_n q^{−n}

is called the q-expansion, or β-expansion, of the positive real number x if, for all n ≥ 0, 0 ≤ a_n ≤ ⌊q⌋, where ⌊q⌋ is the floor function and a_n need not be an integer. Any real number x such that 0 ≤ x ≤ q⌊q⌋/(q − 1) has such an expansion, as can be found using the greedy algorithm.
The special case of x = 1, a_0 = 0, and a_n = 0 or 1 is sometimes called a q-development. a_n = 1 gives the only 2-development. However, for almost all 1 < q < 2, there are an infinite number of different q-developments. Even more surprisingly though, there exist exceptional q ∈ (1, 2) for which there exists only a single q-development. Furthermore, there is a smallest number 1 < q < 2, known as the Komornik–Loreti constant, for which there exists a unique q-development.[2]
The Komornik–Loreti constant is the value q such that

1 = ∑_{k=1}^{∞} t_k q^{−k},

where t_k is the Thue–Morse sequence, i.e., t_k is the parity of the number of 1's in the binary representation of k. It has approximate value q ≈ 1.787231650…
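A small Python sketch that recovers the constant numerically by bisecting the defining equation above; the Thue–Morse term is computed from the parity of binary 1's, and the truncation length and tolerance are arbitrary choices:

```python
def thue_morse(k):
    """t_k = parity of the number of 1's in the binary expansion of k."""
    return bin(k).count("1") % 2

def komornik_loreti(terms=200, tol=1e-12):
    """Bisection for the q in (1, 2) with sum_{k>=1} t_k * q**-k == 1.
    The left-hand sum decreases in q, so plain bisection applies."""
    def excess(q):
        return sum(thue_morse(k) / q**k for k in range(1, terms)) - 1.0
    lo, hi = 1.5, 2.0  # excess(1.5) > 0 and excess(2.0) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if excess(mid) > 0:
            lo = mid  # the sum is still above 1, so q must be larger
        else:
            hi = mid
    return (lo + hi) / 2

print(komornik_loreti())  # ~1.7872...
```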
The constant q is also the unique positive real solution to the equation
This constant is transcendental.[4] | https://en.wikipedia.org/wiki/Komornik%E2%80%93Loreti_constant |
Number systems have progressed from the use of fingers and tally marks, perhaps more than 40,000 years ago, to the use of sets of glyphs able to represent any conceivable number efficiently. The earliest known unambiguous notations for numbers emerged in Mesopotamia about 5000 or 6000 years ago.
Counting initially involves the fingers,[1] given that digit-tallying is common in number systems that are emerging today, as is the use of the hands to express the numbers five and ten.[2] In addition, the majority of the world's number systems are organized by tens, fives, and twenties, suggesting the use of the hands and feet in counting, and cross-linguistically, terms for these amounts are etymologically based on the hands and feet.[3][4] Finally, there are neurological connections between the parts of the brain that appreciate quantity and the part that "knows" the fingers (finger gnosia), and these suggest that humans are neurologically predisposed to use their hands in counting.[5][6] While finger-counting is typically not something that preserves archaeologically, some prehistoric hand stencils have been interpreted as finger-counting since of the 32 possible patterns the fingers can produce, only five (the ones typically used in counting from one to five) are found at Cosquer Cave, France.[7]
Since the capacity and persistence of the fingers are limited, finger-counting is typically supplemented by means of devices with greater capacity and persistence, including tallies made of wood or other materials.[8] Possible tally marks made by carving notches in wood, bone, and stone appear in the archaeological record at least forty thousand years ago.[9][10] These tally marks may have been used for counting time, such as numbers of days or lunar cycles, or for keeping records of quantities, such as numbers of animals or other valuable commodities. However, there is currently no diagnostic technique that can reliably determine the social purpose or use of prehistoric linear marks inscribed on surfaces, and contemporary ethnographic examples show that similar artifacts are made and used for non-numerical purposes.[11]
The Lebombo bone is a baboon fibula with incised markings discovered in the Lebombo Mountains located between South Africa and Eswatini. The bone has been dated to 42,000 years ago.[12] According to The Universal Book of Mathematics (p. 184), the Lebombo bone's 29 notches suggest that "it may have been used as a lunar phase counter, in which case African women may have been the first mathematicians, because keeping track of menstrual cycles requires a lunar calendar." However, the bone is clearly broken at one end, so the 29 notches might only represent a portion of a larger sequence.[12] Similar artifacts from contemporary societies, like those of Australia, also suggest that such notches can serve mnemonic or conventional functions, rather than meaning numbers.[11]
The Ishango bone is an artifact with a sharp piece of quartz affixed to one end, perhaps for engraving. It has been dated to 25,000 years ago.[13] The artifact was first thought to be a tally stick, as it has a series of what has been interpreted as tally marks carved in three rows running the length of the tool. The first row has been interpreted as the prime numbers between 10 and 20 (i.e., 19, 17, 13, and 11), while a second row appears to add and subtract 1 from 10 and 20 (i.e., 9, 19, 21, and 11); the third row contains amounts that might be halves and doubles, though these are inconsistent.[14] Noting the statistical probability of producing such numbers by accident, researchers like Jean de Heinzelin have suggested that the notch groupings indicate a mathematical understanding far beyond simple counting. It has also been suggested that the marks might have been made for a utilitarian purpose, like creating a better grip for the handle, or for some other non-mathematical reason. The purpose and meaning of the notches continue to be debated in academic literature.[15]
The earliest known writing for record keeping emerged from a system of accounting that used small clay tokens. The earliest artifacts claimed to be tokens are from Tell Abu Hureyra, a site in the Upper Euphrates valley in Syria dated to the 10th millennium BCE,[16] and Ganj-i-Dareh Tepe, a site in the Zagros region of Iran dated to the 9th millennium BCE.[17]
To create a record that represented "two sheep", two tokens each representing one unit were used. Different types of objects were also counted differently. Within the counting system used with most discrete objects (including animals like sheep), there was a token for one item (units), a different token for ten items (tens), a different token for six tens (sixties), etc. Tokens of different sizes and shapes were used to record higher groups of ten or six in a sexagesimal number system. Different combinations of token shapes and sizes encoded the different counting systems.[18] Archaeologist Denise Schmandt-Besserat has argued that the plain geometric tokens used for numbers were accompanied by complex tokens that identified the commodities being enumerated. For ungulates like sheep, this complex token was a flat disk marked with a quartered circle. However, the purported use of complex tokens has also been criticized on a number of grounds.[19]
To ensure that tokens were not lost or altered in their type or quantity, they were placed into clay envelopes shaped like hollow balls known as bullae (singular bulla). Ownership and witness seals were impressed on bullae surfaces, which might also be left plain. If tokens needed to be verified after the bulla containing them was sealed, the bulla had to be broken open. Around the mid-fourth millennium BCE, tokens began being pressed into a bulla's outer surface before being sealed inside, presumably to avoid the need to break open the bulla to see them. This process created external impressions on bullae surfaces that corresponded to the enclosed tokens in their sizes, shapes, and quantities. Eventually, the redundancy created by the tokens inside and impressions outside a bulla seems to have been recognized, and impressions on flat tablets became the preferred method of recording numerical information. The correspondences between impressions and tokens, and the chronology of forms they comprised, were initially noticed and published by scholars like Pierre Amiet.[20][21][22][23]
By the time that the numerical impressions provided insight into ancient numbers, the Sumerians had already developed a complex arithmetic.[24] Computations were likely performed either with tokens or by means of an abacus or counting board.[25][26]
In the mid-to-late-fourth millennium BCE, numerical impressions used with bullae were replaced by numerical tablets bearing proto-cuneiform numerals impressed into clay with a round stylus held at different angles to produce the various shapes used for numerical signs.[27] As was true of tokens and the numerical impressions on the outside of bullae, each numerical sign represented both the commodity being counted and the quantity or volume of that commodity. These numerals were soon accompanied by small pictures that identified the commodity being enumerated. The Sumerians counted different types of objects differently. As understood through analyses of early proto-cuneiform notations from the city of Uruk, there were more than a dozen different counting systems,[18] including a general system for counting most discrete objects (such as animals, tools, and people) and specialized systems for counting cheese and grain products, volumes of grain (including fractional units), land areas, and time. Object-specified counting is not unusual and has been documented for contemporary peoples around the world; such modern systems provide good insight into how the ancient Sumerian number systems likely functioned.[28]
Around 2700 BCE, the round stylus began to be replaced by a reed stylus that produced the wedge-shaped impressions that give cuneiform signs their name. As was the case with the tokens, numerical impressions, and proto-cuneiform numerals, cuneiform numerals are today sometimes ambiguous in the numerical values they represent. This ambiguity is partly because the base unit of an object-specified counting system is not always understood, and partly because the Sumerian number system lacked a convention like a decimal point to differentiate integers from fractions or higher exponents from lower ones. About 2100 BCE, a common sexagesimal number system with place-value developed and was used to aid conversions between object-specified counting systems.[29][30][31] A decimal version of the sexagesimal number system, today called Assyro-Babylonian Common, developed in the second millennium BCE, reflecting the increased influence of Semitic peoples like the Akkadians and Eblaites; while today it is less well known than its sexagesimal counterpart, it would eventually become the dominant system used throughout the region, especially as Sumerian cultural influence began to wane.[32][33]
Sexagesimal numerals were a mixed radix system that retained the alternating bases of 10 and 6 that characterized tokens, numerical impressions, and proto-cuneiform numerical signs. Sexagesimal numerals were used in commerce, as well as for astronomical and other calculations. In Arabic numerals, sexagesimal is still used today to count time (seconds per minute; minutes per hour) and angles (degrees).
The Roman numerals developed from Etruscan symbols around the middle of the 1st millennium BCE.[34] In the Etruscan system, the symbol 1 was a single vertical mark, the symbol 10 was two perpendicularly crossed tally marks, and the symbol 100 was three crossed tally marks (similar in form to a modern asterisk *); while 5 (an inverted V shape) and 50 (an inverted V split by a single vertical mark) were perhaps derived from the lower halves of the signs for 10 and 100, there is no convincing explanation as to how the Roman symbol for 100, C, was derived from its asterisk-shaped Etruscan antecedent.[35] | https://en.wikipedia.org/wiki/History_of_ancient_numeral_systems |
This is a list of Wikipedia articles on topics of numeral systems and numeric representations.
See also: computer numbering formats and number names. | https://en.wikipedia.org/wiki/List_of_numeral_system_topics |
In a positional numeral system, the radix (pl.: radices) or base is the number of unique digits, including the digit zero, used to represent numbers. For example, for the decimal system (the most common system in use today) the radix is ten, because it uses the ten digits from 0 through 9.
In any standard positional numeral system, a number is conventionally written as (x)_y with x as the string of digits and y as its base. For base ten, the subscript is usually assumed and omitted (together with the enclosing parentheses), as it is the most common way to express value. For example, (100)₁₀ is equivalent to 100 (the decimal system is implied in the latter) and represents the number one hundred, while (100)₂ (in the binary system with base 2) represents the number four.[1]
Radix is a Latin word for "root". Root can be considered a synonym for base, in the arithmetical sense.
Generally, in a system with radix b (b > 1), a string of digits d₁…dₙ denotes the number d₁bⁿ⁻¹ + d₂bⁿ⁻² + … + dₙb⁰, where 0 ≤ dᵢ < b.[1] In contrast to decimal, or radix 10, which has a ones' place, tens' place, hundreds' place, and so on, radix b would have a ones' place, then a b¹s' place, a b²s' place, etc.[2]
For example, if b = 12, a string of digits such as 59A (where the letter "A" represents the value of ten) would represent the value 5×12² + 9×12¹ + 10×12⁰ = 838 in base 10.
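A brief Python sketch of this positional evaluation (the helper name is illustrative; int(d, 36) is used only as a convenient map from digit characters to values):

```python
def digits_to_int(digits, base):
    """Evaluate a digit string in the given base, digits 0-9 then A-Z."""
    value = 0
    for d in digits:
        value = value * base + int(d, 36)  # 'A' -> 10, 'B' -> 11, ...
    return value

assert digits_to_int("59A", 12) == 838  # the base-12 example above
assert digits_to_int("100", 2) == 4
```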
Commonly used numeral systems include:
The octal and hexadecimal systems are often used in computing because of their ease as shorthand for binary. Every hexadecimal digit corresponds to a sequence of four binary digits, since sixteen is the fourth power of two; for example, hexadecimal 78₁₆ is binary 1111000₂. Similarly, every octal digit corresponds to a unique sequence of three binary digits, since eight is the cube of two.
This representation is unique. Let b be a positive integer greater than 1. Then every positive integer a can be expressed uniquely in the form

a = r_m bᵐ + r_{m−1} bᵐ⁻¹ + … + r₁b + r₀,

where m is a nonnegative integer and the r's are integers such that 0 < r_m < b and 0 ≤ r_i < b for i = 0, 1, …, m − 1.
Radices are usually natural numbers. However, other positional systems are possible, for example, golden ratio base (whose radix is a non-integer algebraic number),[5] and negative base (whose radix is negative).[6] A negative base allows the representation of negative numbers without the use of a minus sign. For example, let b = −10. Then a string of digits such as 19 denotes the (decimal) number 1 × (−10)¹ + 9 × (−10)⁰ = −1.
Different bases are especially used in connection with computers.
The commonly used bases are 10 (decimal), 2 (binary), 8 (octal), and 16 (hexadecimal).
A byte with 8 bits can represent values from 0 to 255, often expressed with leading zeros in base 2, 8 or 16 to give the same length.[7]
The first row in the tables is the base written in decimal. | https://en.wikipedia.org/wiki/Radix |
A timeline of numerals and arithmetic. | https://en.wikipedia.org/wiki/Timeline_of_numerals_and_arithmetic |
This list compiles notable works that explore the history and development of number systems across various civilizations and time periods. These works cover topics ranging from ancient numeral systems and arithmetic methods to the evolution of mathematical notations and the impact of numerals on science, trade, and culture.
Number systems have been central to the development of human civilization, enabling record-keeping, commerce, astronomy, and scientific advancement. Early systems such as tally marks and Roman numerals gradually gave way to more abstract and efficient representations like the Babylonian base-60 system and the Hindu–Arabic numerals, now standard worldwide. The invention of zero, positional notation, and symbolic mathematics has had profound philosophical and technological implications. | https://en.wikipedia.org/wiki/List_of_books_on_history_of_number_systems |
In probability theory and statistics, Wallenius' noncentral hypergeometric distribution (named after Kenneth Ted Wallenius) is a generalization of the hypergeometric distribution where items are sampled with bias.
This distribution can be illustrated as an urn model with bias. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking n balls, one by one, in such a way that the probability of taking a particular ball at a particular draw is equal to its proportion of the total weight of all balls that lie in the urn at that moment. The number of red balls x1 that we get in this experiment is a random variable with Wallenius' noncentral hypergeometric distribution.
The matter is complicated by the fact that there is more than one noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution is obtained if balls are sampled one by one in such a way that there is competition between the balls. Fisher's noncentral hypergeometric distribution is obtained if the balls are sampled simultaneously or independently of each other. Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
The difference between these two probability distributions is subtle. See the Wikipedia entry on noncentral hypergeometric distributions for a more detailed explanation.
Wallenius' distribution is particularly complicated because each ball has a probability of being taken that depends not only on its weight, but also on the total weight of its competitors. And the weight of the competing balls depends on the outcomes of all preceding draws.
This recursive dependency gives rise to a difference equation with a solution that is given in open form by the integral in the expression of the probability mass function in the table above.
Closed form expressions for the probability mass function exist (Lyons, 1980), but they are not very useful for practical calculations because of extreme numerical instability, except in degenerate cases.
Several other calculation methods are used, including recursion, Taylor expansion and numerical integration (Fog, 2007, 2008).
The most reliable calculation method is recursive calculation of f(x,n) from f(x,n−1) and f(x−1,n−1) using the recursion formula given below under properties. The probabilities of all (x,n) combinations on all possible trajectories leading to the desired point are calculated, starting with f(0,0) = 1 as shown on the figure to the right. The total number of probabilities to calculate is n(x+1) − x². Other calculation methods must be used when n and x are so big that this method is too inefficient.
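A plain Python sketch of this recursive scheme, with the one-draw transition probabilities derived directly from the urn process; this illustrates the idea only and is not the numerically tuned algorithms of Fog (2008):

```python
from functools import lru_cache

def wallenius_pmf(x, n, m1, m2, omega):
    """P(X = x) after n biased draws from an urn with m1 red balls of
    weight omega and m2 white balls of weight 1, via the recursion
    f(x, n) = f(x-1, n-1) * P(red) + f(x, n-1) * P(white)."""
    def p_red(r, d):
        # Probability the next ball is red, after r reds in d draws.
        red_weight = (m1 - r) * omega
        return red_weight / (red_weight + (m2 - (d - r)))

    @lru_cache(maxsize=None)
    def f(xx, nn):
        if xx < 0 or xx > m1 or nn - xx < 0 or nn - xx > m2:
            return 0.0
        if nn == 0:
            return 1.0  # f(0, 0) = 1
        prob = 0.0
        if xx >= 1:           # last draw was red
            prob += f(xx - 1, nn - 1) * p_red(xx - 1, nn - 1)
        if nn - 1 >= xx:      # last draw was white
            prob += f(xx, nn - 1) * (1.0 - p_red(xx, nn - 1))
        return prob

    return f(x, n)
```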
The probability that all balls have the same color is easier to calculate. See the formula below under multivariate distribution.
No exact formula for the mean is known (short of complete enumeration of all probabilities). The equation given above is reasonably accurate. This equation can be solved for μ by Newton–Raphson iteration. The same equation can be used for estimating the odds from an experimentally obtained value of the mean.
Wallenius' distribution has fewer symmetry relations than Fisher's noncentral hypergeometric distribution has. The only symmetry relates to the swapping of colors:
Unlike Fisher's distribution, Wallenius' distribution has no symmetry relating to the number of balls not taken.
The following recursion formula is useful for calculating probabilities:
Another recursion formula is also known:
The probability is limited by
where the underlined superscript indicates the falling factorial $a^{\underline{b}} = a(a-1)\ldots(a-b+1)$.
The distribution can be expanded to any number of colors c of balls in the urn. The multivariate distribution is used when there are more than two colors.
The probability mass function can be calculated by various Taylor expansion methods or by numerical integration (Fog, 2008).
The probability that all balls have the same color, j, can be calculated as:
for x_j = n ≤ m_j, where the underlined superscript denotes the falling factorial.
A reasonably good approximation to the mean can be calculated using the equation given above. The equation can be solved by defining θ so that
and solving
for θ by Newton–Raphson iteration.
The equation for the mean is also useful for estimating the odds from experimentally obtained values for the mean.
No good way of calculating the variance is known. The best known method is to approximate the multivariate Wallenius distribution by a multivariate Fisher's noncentral hypergeometric distribution with the same mean, and insert the mean as calculated above in the approximate formula for the variance of the latter distribution.
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
Colors with zero number (m_i = 0) or zero weight (ω_i = 0) can be omitted from the equations.
Colors with the same weight can be joined:
where hypg(x; n, m, N) is the (univariate, central) hypergeometric distribution probability.
The balls that are not taken in the urn experiment have a distribution that is different from Wallenius' noncentral hypergeometric distribution, due to a lack of symmetry. The distribution of the balls not taken can be called the complementary Wallenius' noncentral hypergeometric distribution.
Probabilities in the complementary distribution are calculated from Wallenius' distribution by replacing n with N−n, x_i with m_i−x_i, and ω_i with 1/ω_i. | https://en.wikipedia.org/wiki/Wallenius%27_noncentral_hypergeometric_distribution |
In probability theory and statistics, Fisher's noncentral hypergeometric distribution is a generalization of the hypergeometric distribution where sampling probabilities are modified by weight factors. It can also be defined as the conditional distribution of two or more binomially distributed variables dependent upon their fixed sum.
The distribution may be illustrated by the following urn model. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking balls randomly in such a way that the probability of taking a particular ball is proportional to its weight, but independent of what happens to the other balls. The number of balls taken of a particular color follows the binomial distribution. If the total number n of balls taken is known then the conditional distribution of the number of taken red balls for given n is Fisher's noncentral hypergeometric distribution. To generate this distribution experimentally, we have to repeat the experiment until it happens to give n balls.
If we want to fix the value of n prior to the experiment then we have to take the balls one by one until we have n balls. The balls are therefore no longer independent. This gives a slightly different distribution known as Wallenius' noncentral hypergeometric distribution. It is far from obvious why these two distributions are different. See the entry for noncentral hypergeometric distributions for an explanation of the difference between these two distributions and a discussion of which distribution to use in various situations.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
Fisher's noncentral hypergeometric distribution was first given the name extended hypergeometric distribution (Harkness, 1965), and some authors still use this name today.
The probability function, mean and variance are given in the adjacent table.
An alternative expression of the distribution has both the number of balls taken of each color and the number of balls not taken as random variables, whereby the expression for the probability becomes symmetric.
The calculation time for the probability function can be high when the sum in P₀ has many terms. The calculation time can be reduced by calculating the terms in the sum recursively relative to the term for y = x and ignoring negligible terms in the tails (Liao and Rosen, 2001).
The mean can be approximated by:
where a = ω − 1, b = m1 + n − N − (m1 + n)ω, and c = m1nω.
The variance can be approximated by:
Better approximations to the mean and variance are given by Levin (1984, 1990), McCullagh and Nelder (1989), Liao (1992), and Eisinga and Pelzer (2011). The saddlepoint methods to approximate the mean and the variance suggested by Eisinga and Pelzer (2011) offer extremely accurate results.
The following symmetry relations apply:
Recurrence relation:
The distribution is sometimes affectionately called "finchy-pig", after a common abbreviation of its name.
The univariate noncentral hypergeometric distribution may be derived alternatively as a conditional distribution in the context of two binomially distributed random variables, for example when considering the response to a particular treatment in two different groups of patients participating in a clinical trial. An important application of the noncentral hypergeometric distribution in this context is the computation of exact confidence intervals for the odds ratio comparing treatment response between the two groups.
Suppose X and Y are binomially distributed random variables counting the number of responders in two corresponding groups of size m_X and m_Y respectively, that is, X ~ B(m_X, π_X) and Y ~ B(m_Y, π_Y).
Their odds ratio is given as ω = ω_X / ω_Y, with ω_X = π_X/(1 − π_X) and ω_Y = π_Y/(1 − π_Y).
The responder prevalence π_i is fully defined in terms of the odds ω_i, i ∈ {X, Y}, which correspond to the sampling bias in the urn scheme above, i.e. π_i = ω_i/(1 + ω_i).
The trial can be summarized and analyzed in terms of the following contingency table.
In the table, n = x + y corresponds to the total number of responders across groups, and N to the total number of patients recruited into the trial. The dots denote corresponding frequency counts of no further relevance.
The sampling distribution of responders in group X conditional upon the trial outcome and prevalences, Pr(X = x | X + Y = n, m_X, m_Y, ω_X, ω_Y), is noncentral hypergeometric:
$$\begin{aligned}F(X,\omega ):&=\Pr(X=x\mid X+Y=n,m_{X},m_{Y},\omega _{X},\omega _{Y})\\&={\frac {\Pr(X=x,\ X+Y=n\mid m_{X},m_{Y},\omega _{X},\omega _{Y})}{\Pr(X+Y=n\mid m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {\Pr(X=x\mid m_{X},\omega _{X})\,\Pr(Y=n-x\mid m_{Y},\omega _{Y},X=x)}{\Pr(X+Y=n\mid m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}\pi _{X}^{x}(1-\pi _{X})^{m_{X}-x}{\binom {m_{Y}}{n-x}}\pi _{Y}^{n-x}(1-\pi _{Y})^{m_{Y}-(n-x)}}{\Pr(X+Y=n\mid m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}\omega _{X}^{x}(1-\pi _{X})^{m_{X}}{\binom {m_{Y}}{n-x}}\omega _{Y}^{n-x}(1-\pi _{Y})^{m_{Y}}}{\Pr(X+Y=n\mid m_{X},m_{Y},\omega _{X},\omega _{Y})}}\\&={\frac {{\binom {m_{X}}{x}}{\binom {m_{Y}}{n-x}}\omega ^{x}\,(1-\pi _{X})^{m_{X}}\omega _{Y}^{n}(1-\pi _{Y})^{m_{Y}}}{(1-\pi _{X})^{m_{X}}\omega _{Y}^{n}(1-\pi _{Y})^{m_{Y}}\sum _{u=\max(0,n-m_{Y})}^{\min(m_{X},n)}{\binom {m_{X}}{u}}{\binom {m_{Y}}{n-u}}\omega ^{u}}}\\&={\frac {{\binom {m_{X}}{x}}{\binom {m_{Y}}{n-x}}\omega ^{x}}{\sum _{u=\max(0,n-m_{Y})}^{\min(m_{X},n)}{\binom {m_{X}}{u}}{\binom {m_{Y}}{n-u}}\omega ^{u}}}\end{aligned}$$
Note that the denominator is essentially just the numerator, summed over all events of the joint sample space (X, Y) for which it holds that X + Y = n. Terms independent of X can be factored out of the sum and cancel out with the numerator.
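The final expression above can be evaluated directly by normalizing the weighted hypergeometric terms over the support; a short Python sketch (function name illustrative):

```python
from math import comb

def fisher_pmf(x, n, m_x, m_y, omega):
    """P(X = x | X + Y = n) for Fisher's noncentral hypergeometric
    distribution, by direct normalization over the support."""
    lo, hi = max(0, n - m_y), min(m_x, n)
    if not lo <= x <= hi:
        return 0.0
    term = lambda u: comb(m_x, u) * comb(m_y, n - u) * omega**u
    return term(x) / sum(term(u) for u in range(lo, hi + 1))
```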
The distribution can be expanded to any number of colors c of balls in the urn. The multivariate distribution is used when there are more than two colors.
The probability function and a simple approximation to the mean are given to the right. Better approximations to the mean and variance are given by McCullagh and Nelder (1989).
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
Colors with zero number (m_i = 0) or zero weight (ω_i = 0) can be omitted from the equations.
Colors with the same weight can be joined:
where hypg(x; n, m, N) is the (univariate, central) hypergeometric distribution probability.
Fisher's noncentral hypergeometric distribution is useful for models of biased sampling or biased selection where the individual items are sampled independently of each other with no competition. The bias or odds can be estimated from an experimental value of the mean. Use Wallenius' noncentral hypergeometric distribution instead if items are sampled one by one with competition.
Fisher's noncentral hypergeometric distribution is used mostly for tests in contingency tables where a conditional distribution for fixed margins is desired. This can be useful, for example, for testing or measuring the effect of a medicine. See McCullagh and Nelder (1989).
Breslow, N. E.; Day, N. E. (1980), Statistical Methods in Cancer Research, Lyon: International Agency for Research on Cancer.
Eisinga, R.; Pelzer, B. (2011), "Saddlepoint approximations to the mean and variance of the extended hypergeometric distribution" (PDF), Statistica Neerlandica, vol. 65, no. 1, pp. 22–31, doi:10.1111/j.1467-9574.2010.00468.x.
Fog, A. (2007), Random number theory.
Fog, A. (2008), "Sampling Methods for Wallenius' and Fisher's Noncentral Hypergeometric Distributions", Communications in Statistics – Simulation and Computation, vol. 37, no. 2, pp. 241–257, doi:10.1080/03610910701790236, S2CID 14904723.
Johnson, N. L.; Kemp, A. W.; Kotz, S. (2005), Univariate Discrete Distributions, Hoboken, New Jersey: Wiley and Sons.
Levin, B. (1984), "Simple Improvements on Cornfield's approximation to the mean of a noncentral Hypergeometric random variable", Biometrika, vol. 71, no. 3, pp. 630–632, doi:10.1093/biomet/71.3.630.
Levin, B. (1990), "The saddlepoint correction in conditional logistic likelihood analysis", Biometrika, vol. 77, no. 2, pp. 275–285, doi:10.1093/biomet/77.2.275, JSTOR 2336805.
Liao, J. (1992), "An Algorithm for the Mean and Variance of the Noncentral Hypergeometric Distribution", Biometrics, vol. 48, no. 3, pp. 889–892, doi:10.2307/2532354, JSTOR 2532354.
Liao, J. G.; Rosen, O. (2001), "Fast and Stable Algorithms for Computing and Sampling from the Noncentral Hypergeometric Distribution", The American Statistician, vol. 55, no. 4, pp. 366–369, doi:10.1198/000313001753272547, S2CID 121279235.
McCullagh, P.; Nelder, J. A. (1989), Generalized Linear Models, 2nd ed., London: Chapman and Hall. | https://en.wikipedia.org/wiki/Fisher%27s_noncentral_hypergeometric_distribution |
In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below.
An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems.[1]
In Ars Conjectandi (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the inverse probability problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham de Moivre and Thomas Bayes.
Bernoulli used the Latin word urna, which primarily means a clay vessel, but is also the term used in ancient Rome for a vessel of any kind for collecting ballots or lots; the present-day Italian or Spanish word for ballot box is still urna. Bernoulli's inspiration may have been lotteries, elections, or games of chance which involved drawing balls from a container, and it has been asserted that elections in medieval and renaissance Venice, including that of the doge, often included the choice of electors by lot, using balls of different colors drawn from an urn.[2]
In this basic urn model in probability theory, the urn contains x white and y black balls, well-mixed together. One ball is drawn randomly from the urn and its color observed; it is then placed back in the urn (or not), and the selection process is repeated.[3]
Possible questions that can be answered in this model are: | https://en.wikipedia.org/wiki/Urn_problem |
In the field of statistics, bias is a systematic tendency in which the methods used to gather data and estimate a sample statistic present an inaccurate, skewed or distorted (biased) depiction of reality. Statistical bias exists in numerous stages of the data collection and analysis process, including: the source of the data, the methods used to collect the data, the estimator chosen, and the methods used to analyze the data.
Data analysts can take various measures at each stage of the process to reduce the impact of statistical bias in their work. Understanding the source of statistical bias can help to assess whether the observed results are close to actuality. Issues of statistical bias have been argued to be closely linked to issues of statistical validity.[1]
Statistical bias can have significant real world implications as data is used to inform decision making across a wide variety of processes in society. Data is used to inform lawmaking, industry regulation, corporate marketing and distribution tactics, and institutional policies in organizations and workplaces. Therefore, there can be significant implications if statistical bias is not accounted for and controlled. For example, if a pharmaceutical company wishes to explore the effect of a medication on the common cold but the data sample only includes men, any conclusions made from that data will be biased towards how the medication affects men rather than people in general. That means the information would be incomplete and not useful for deciding if the medication is ready for release in the general public. In this scenario, the bias can be addressed by broadening the sample. This sampling error is only one of the ways in which data can be biased.
Bias can be differentiated from other statistical mistakes such as accuracy (instrument failure/inadequacy), lack of data, or mistakes in transcription (typos). Bias implies that the data selection may have been skewed by the collection criteria. Other forms of human-based bias emerge in data collection as well, such as response bias, in which participants give inaccurate responses to a question. Bias does not preclude the existence of any other mistakes. One may have a poorly designed sample, an inaccurate measurement device, and typos in recording data simultaneously. Ideally, all factors are controlled and accounted for.
It is also useful to recognize that the term "error" specifically refers to the outcome rather than the process (errors of rejection or acceptance of the hypothesis being tested), or to the phenomenon of random errors.[2] The terms flaw or mistake are recommended to differentiate procedural errors from these specifically defined outcome-based terms.
Statistical bias is a feature of a statistical technique or of its results whereby the expected value of the results differs from the true underlying quantitative parameter being estimated. The bias of an estimator of a parameter should not be confused with its degree of precision, as the degree of precision is a measure of the sampling error. The bias is defined as follows: let T be a statistic used to estimate a parameter θ, and let E(T) denote the expected value of T. Then,

bias(T, θ) = E(T) − θ

is called the bias of the statistic T (with respect to θ). If bias(T, θ) = 0, then T is said to be an unbiased estimator of θ; otherwise, it is said to be a biased estimator of θ.
The bias of a statistic T is always relative to the parameter θ it is used to estimate, but the parameter θ is often omitted when it is clear from the context what is being estimated.
Statistical bias comes from all stages of data analysis. The following sources of bias will be listed in each stage separately.
Selection bias involves individuals being more likely to be selected for study than others, biasing the sample. This can also be termed selection effect, sampling bias and Berksonian bias.[3]
Type I and type II errors in statistical hypothesis testing lead to wrong results.[12] A Type I error happens when the null hypothesis is correct but is rejected. For instance, suppose that the null hypothesis is that an average driving speed between 75 and 85 km/h is not considered speeding, while an average speed outside that range is considered speeding. If a driver whose average speed is 80 km/h (within the permitted range) receives a ticket, the decision maker has committed a Type I error: the average driving speed satisfies the null hypothesis, which is nevertheless rejected. On the contrary, a Type II error happens when the null hypothesis is not correct but is accepted.
Bias in hypothesis testing occurs when the power (the complement of the type II error rate) at some alternative is lower than the supremum of the Type I error rate (which is usually the significance level, α). Equivalently, if no rejection rate at any alternative is lower than the rejection rate at any point in the null hypothesis set, the test is said to be unbiased.[13]
The bias of an estimator is the difference between an estimator's expected value and the true value of the parameter being estimated. Although an unbiased estimator is theoretically preferable to a biased estimator, in practice, biased estimators with small biases are frequently used. A biased estimator may be more useful for several reasons. First, an unbiased estimator may not exist without further assumptions. Second, sometimes an unbiased estimator is hard to compute. Third, a biased estimator may have a lower value of mean squared error.
Reporting bias involves a skew in the availability of data, such that observations of a certain kind are more likely to be reported.
Depending on the type of bias present, researchers and analysts can take different steps to reduce bias on a data set. All types of bias mentioned above have corresponding measures which can be taken to reduce or eliminate their impacts.
Bias should be accounted for at every step of the data collection process, beginning with clearly defined research parameters and consideration of the team who will be conducting the research.[2] Observer bias may be reduced by implementing a blind or double-blind technique. Avoidance of p-hacking is essential to the process of accurate data collection. One way to check for bias in results afterward is rerunning analyses with different independent variables to observe whether a given phenomenon still occurs in dependent variables.[17] Careful use of language in reporting can reduce misleading phrases, such as discussion of a result "approaching" statistical significance as compared to actually achieving it.[2] | https://en.wikipedia.org/wiki/Bias_(statistics) |
In statistics, sampling bias is a bias in which a sample is collected in such a way that some members of the intended population have a lower or higher sampling probability than others. It results in a biased sample[1] of a population (or non-human factors) in which all individuals, or instances, were not equally likely to have been selected.[2] If this is not accounted for, results can be erroneously attributed to the phenomenon under study rather than to the method of sampling.
Medical sources sometimes refer to sampling bias as ascertainment bias.[3][4] Ascertainment bias has basically the same definition,[5][6] but is still sometimes classified as a separate type of bias.[5]
Sampling bias is usually classified as a subtype ofselection bias,[7]sometimes specifically termedsample selection bias,[8][9][10]but some classify it as a separate type of bias.[11]A distinction, albeit not universally accepted, of sampling bias is that it undermines theexternal validityof a test (the ability of its results to be generalized to the entire population), whileselection biasmainly addressesinternal validityfor differences or similarities found in the sample at hand. In this sense, errors occurring in the process of gathering the sample or cohort cause sampling bias, while errors in any process thereafter cause selection bias.
However, selection bias and sampling bias are often used synonymously.[12]
The study of medical conditions begins with anecdotal reports. By their nature, such reports only include those referred for diagnosis and treatment. A child who cannot function in school is more likely to be diagnosed with dyslexia than a child who struggles but passes. A child examined for one condition is more likely to be tested for and diagnosed with other conditions, skewing comorbidity statistics. As certain diagnoses become associated with behavior problems or intellectual disability, parents try to prevent their children from being stigmatized with those diagnoses, introducing further bias. Studies carefully drawn from whole populations show that many conditions are much more common, and usually much milder, than formerly believed.
Geneticists are limited in how they can obtain data from human populations. As an example, consider a human characteristic. We are interested in deciding if the characteristic is inherited as a simple Mendelian trait. Following the laws of Mendelian inheritance, if the parents in a family do not have the characteristic, but carry the allele for it, they are carriers (e.g. a non-expressive heterozygote). In this case their children will each have a 25% chance of showing the characteristic. The problem arises because we cannot tell which families have both parents as carriers (heterozygous) unless they have a child who exhibits the characteristic. The description follows the textbook by Sutton.[13]
The figure shows the pedigrees of all the possible families with two children when the parents are carriers (Aa).
The probabilities of each of the families being selected are given in the figure, with the sample frequency of affected children also given. In this simple case, the researcher will look for a frequency of 4⁄7 or 5⁄8 for the characteristic, depending on the type of truncate selection used.
An example of selection bias is called the "caveman effect". Much of our understanding of prehistoric peoples comes from caves, such as cave paintings made nearly 40,000 years ago. If there had been contemporary paintings on trees, animal skins or hillsides, they would have been washed away long ago. Similarly, evidence of fire pits, middens, burial sites, etc. are most likely to remain intact to the modern era in caves. Prehistoric people are associated with caves because that is where the data still exists, not necessarily because most of them lived in caves for most of their lives.[14]
Sampling bias is problematic because it is possible that a statistic computed from the sample is systematically erroneous. Sampling bias can lead to a systematic over- or under-estimation of the corresponding parameter in the population. Sampling bias occurs in practice as it is practically impossible to ensure perfect randomness in sampling. If the degree of misrepresentation is small, then the sample can be treated as a reasonable approximation to a random sample. Also, if the sample does not differ markedly in the quantity being measured, then a biased sample can still be a reasonable estimate.
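As a minimal illustration (an assumed toy setup, not from the source), the following sketch draws one sample uniformly at random and one in which the selection probability increases with the measured value; the biased sample systematically over-estimates the population mean:

```python
import numpy as np

rng = np.random.default_rng(4)
population = rng.normal(170.0, 10.0, size=100_000)   # e.g. heights in cm

# Biased design: selection probability grows with the value itself.
p = population - population.min()
p = p / p.sum()

random_sample = rng.choice(population, size=500, replace=False)
biased_sample = rng.choice(population, size=500, replace=False, p=p)

print("population mean:", round(population.mean(), 2))
print("random sample:  ", round(random_sample.mean(), 2))
print("biased sample:  ", round(biased_sample.mean(), 2))   # systematically high
```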
The word bias has a strong negative connotation. Indeed, biases sometimes come from deliberate intent to mislead or other scientific fraud. In statistical usage, bias merely represents a mathematical property, whether deliberate, unconscious, or due to imperfections in the instruments used for observation. While some individuals might deliberately use a biased sample to produce misleading results, more often, a biased sample is just a reflection of the difficulty in obtaining a truly representative sample, or ignorance of the bias in the process of measurement or analysis. An example of how ignorance of a bias can exist is in the widespread use of a ratio (a.k.a. fold change) as a measure of difference in biology. Because it is easier to achieve a large ratio with two small numbers with a given difference, and relatively more difficult to achieve a large ratio with two large numbers with a larger difference, large significant differences may be missed when comparing relatively large numeric measurements. Some have called this a 'demarcation bias' because the use of a ratio (division) instead of a difference (subtraction) removes the results of the analysis from science into pseudoscience (see Demarcation Problem).
Some samples use a biased statistical design which nevertheless allows the estimation of parameters. The U.S. National Center for Health Statistics, for example, deliberately oversamples from minority populations in many of its nationwide surveys in order to gain sufficient precision for estimates within these groups.[15] These surveys require the use of sample weights (see below) to produce proper estimates across all ethnic groups. Provided that certain conditions are met (chiefly that the weights are calculated and used correctly), these samples permit accurate estimation of population parameters.
A classic example of a biased sample and the misleading results it produced occurred in 1936. In the early days of opinion polling, the American Literary Digest magazine collected over two million postal surveys and predicted that the Republican candidate in the U.S. presidential election, Alf Landon, would beat the incumbent president, Franklin Roosevelt, by a large margin. The result was the exact opposite. The Literary Digest survey represented a sample collected from readers of the magazine, supplemented by records of registered automobile owners and telephone users. This sample included an over-representation of wealthy individuals, who, as a group, were more likely to vote for the Republican candidate. In contrast, a poll of only 50 thousand citizens selected by George Gallup's organization successfully predicted the result, leading to the popularity of the Gallup poll.
Another classic example occurred in the 1948 presidential election. On election night, the Chicago Tribune printed the headline DEWEY DEFEATS TRUMAN, which turned out to be mistaken. In the morning the grinning president-elect, Harry S. Truman, was photographed holding a newspaper bearing this headline. The reason the Tribune was mistaken is that their editor trusted the results of a phone survey. Survey research was then in its infancy, and few academics realized that a sample of telephone users was not representative of the general population. Telephones were not yet widespread, and those who had them tended to be prosperous and have stable addresses. (In many cities, the Bell System telephone directory contained the same names as the Social Register.) In addition, the Gallup poll that the Tribune based its headline on was over two weeks old at the time of the printing.[17]
In air quality data, pollutants (such as carbon monoxide, nitrogen monoxide, nitrogen dioxide, or ozone) frequently show high correlations, as they stem from the same chemical process(es). These correlations depend on space (i.e., location) and time (i.e., period). Therefore, a pollutant distribution is not necessarily representative of every location and every period. If a low-cost measurement instrument is calibrated with field data in a multivariate manner, more precisely by collocation next to a reference instrument, the relationships between the different compounds are incorporated into the calibration model. Relocating the measurement instrument can then produce erroneous results.[18]
A twenty-first century example is the COVID-19 pandemic, where variations in sampling bias in COVID-19 testing have been shown to account for wide variations in both case fatality rates and the age distribution of cases across countries.[19][20]
If entire segments of the population are excluded from a sample, then no adjustments can produce estimates that are representative of the entire population. But if some groups are underrepresented and the degree of underrepresentation can be quantified, then sample weights can correct the bias. However, the success of the correction depends on the selection model chosen; if certain variables are missing, the methods used to correct the bias could be inaccurate.[21]
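A minimal sketch of this correction (the group names, shares, and outcome values are hypothetical, chosen to match the two-group illustration in the next paragraph): each group's weight is its population share divided by its sample share, and estimates are then computed as weighted averages.

```python
# Population is 50% men / 50% women; the sample of 100 has 20 men, 80 women.
pop_share = {"men": 0.50, "women": 0.50}
sample_n = {"men": 20, "women": 80}
n = sum(sample_n.values())

# Weight = population share / sample share -> 2.5 for men, 0.625 for women.
weights = {g: pop_share[g] / (sample_n[g] / n) for g in sample_n}
print(weights)

# Weighted mean of a measured outcome (hypothetical group means):
group_mean = {"men": 175.0, "women": 165.0}
num = sum(weights[g] * sample_n[g] * group_mean[g] for g in sample_n)
den = sum(weights[g] * sample_n[g] for g in sample_n)
print(num / den)   # 170.0, the value a balanced 50/50 sample would give
```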
For example, a hypothetical population might include 10 million men and 10 million women. Suppose that a biased sample of 100 patients included 20 men and 80 women. A researcher could correct for this imbalance by attaching a weight of 2.5 for each male and 0.625 for each female. This would adjust any estimates to achieve the same expected value as a sample that included exactly 50 men and 50 women, unless men and women differed in their likelihood of taking part in the survey.[citation needed] | https://en.wikipedia.org/wiki/Biased_sample
In mathematics, Appell series are a set of four hypergeometric series F1, F2, F3, F4 of two variables that were introduced by Paul Appell (1880) and that generalize Gauss's hypergeometric series 2F1 of one variable. Appell established the set of partial differential equations of which these functions are solutions, and found various reduction formulas and expressions of these series in terms of hypergeometric series of one variable.
The Appell series F1 is defined for |x| < 1, |y| < 1 by the double series

F_1(a;b,b';c;x,y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n} (b)_m (b')_n}{(c)_{m+n}\, m!\, n!}\, x^m y^n,
where (q)_n is the Pochhammer symbol. For other values of x and y the function F1 can be defined by analytic continuation. It can be shown[1] that F1 admits representations in terms of hypergeometric series of one variable, consistent with the reduction formulas mentioned above.
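For numerical work inside the convergence region, the double series can be summed directly. The following sketch (my own illustration; the function names are ad hoc) truncates the series and checks it against Gauss's 2F1, to which F1 collapses when y = 0:

```python
from math import factorial
from scipy.special import poch, hyp2f1   # Pochhammer symbol and Gauss 2F1

def appell_F1(a, b1, b2, c, x, y, terms=80):
    """Appell F1 by direct truncation of the double series (needs |x|, |y| < 1)."""
    total = 0.0
    for m in range(terms):
        for n in range(terms - m):       # triangular truncation of the double sum
            total += (poch(a, m + n) * poch(b1, m) * poch(b2, n)
                      / (poch(c, m + n) * factorial(m) * factorial(n))
                      * x**m * y**n)
    return total

# Consistency check: with y = 0 the double series collapses to 2F1(a, b; c; x).
print(appell_F1(0.5, 1.0, 1.5, 2.0, 0.3, 0.0))
print(hyp2f1(0.5, 1.0, 2.0, 0.3))
```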
Similarly, the function F2 is defined for |x| + |y| < 1 by the series

F_2(a;b,b';c,c';x,y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n} (b)_m (b')_n}{(c)_m (c')_n\, m!\, n!}\, x^m y^n,

and analogous reductions can be shown[2] for F2.
Also the function F3 for |x| < 1, |y| < 1 can be defined by the series

F_3(a,a';b,b';c;x,y) = \sum_{m,n=0}^{\infty} \frac{(a)_m (a')_n (b)_m (b')_n}{(c)_{m+n}\, m!\, n!}\, x^m y^n,
and the function F4 for |x|^{1/2} + |y|^{1/2} < 1 by the series

F_4(a;b;c,c';x,y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n} (b)_{m+n}}{(c)_m (c')_n\, m!\, n!}\, x^m y^n.
Like the Gauss hypergeometric series 2F1, the Appell double series entail recurrence relations among contiguous functions. A basic set of four such relations exists for Appell's F1, and any other relation[3] valid for F1 can be derived from these four. Similarly, all recurrence relations for Appell's F3 follow from a basic set of five.
For Appell's F1, the following derivatives result from the definition by a double series:

\frac{\partial F_1}{\partial x} = \frac{ab}{c}\, F_1(a+1; b+1, b'; c+1; x, y), \qquad \frac{\partial F_1}{\partial y} = \frac{ab'}{c}\, F_1(a+1; b, b'+1; c+1; x, y).

From its definition, Appell's F1 is further found to satisfy a system of two second-order partial differential equations in x and y.
A corresponding system of partial differential equations, together with its solutions, is known for F2.
Similarly, for F3 the following derivatives result from the definition:

\frac{\partial F_3}{\partial x} = \frac{ab}{c}\, F_3(a+1, a'; b+1, b'; c+1; x, y), \qquad \frac{\partial F_3}{\partial y} = \frac{a'b'}{c}\, F_3(a, a'+1; b, b'+1; c+1; x, y),

and F3 likewise satisfies a system of second-order differential equations obtained from its definition.
A system of partial differential equations, with known solutions, exists for F4 as well.
The four functions defined by Appell's double series can be represented in terms of double integrals involving elementary functions only (Gradshteyn et al. 2015, §9.184). However, Émile Picard (1881) discovered that Appell's F1 can also be written as a one-dimensional Euler-type integral:

F_1(a;b,b';c;x,y) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)} \int_0^1 t^{a-1} (1-t)^{c-a-1} (1-xt)^{-b} (1-yt)^{-b'}\, dt, \qquad \operatorname{Re} c > \operatorname{Re} a > 0.
This representation can be verified by means of Taylor expansion of the integrand, followed by termwise integration.
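The representation also lends itself to direct numerical evaluation. A small standalone check (my own sketch, with ad hoc helper names) integrates Picard's formula with scipy and compares it with a known collapse of the series: for y = x, F1 reduces to 2F1(a, b + b'; c; x).

```python
from math import gamma
from scipy.integrate import quad
from scipy.special import hyp2f1

def F1_picard(a, b1, b2, c, x, y):
    """Appell F1 via Picard's Euler integral (valid for Re c > Re a > 0)."""
    f = lambda t: (t**(a - 1) * (1 - t)**(c - a - 1)
                   * (1 - x*t)**(-b1) * (1 - y*t)**(-b2))
    val, _ = quad(f, 0.0, 1.0)
    return gamma(c) / (gamma(a) * gamma(c - a)) * val

# Check: for y = x the double series collapses to 2F1(a, b + b'; c; x).
print(F1_picard(1.5, 1.0, 1.5, 3.0, 0.3, 0.3))
print(hyp2f1(1.5, 2.5, 3.0, 0.3))
```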
Picard's integral representation implies that the incomplete elliptic integrals F and E as well as the complete elliptic integral Π are special cases of Appell's F1:

F(\varphi, k) = \sin\varphi \; F_1\!\left(\tfrac{1}{2}; \tfrac{1}{2}, \tfrac{1}{2}; \tfrac{3}{2}; \sin^2\varphi,\, k^2 \sin^2\varphi\right),

E(\varphi, k) = \sin\varphi \; F_1\!\left(\tfrac{1}{2}; \tfrac{1}{2}, -\tfrac{1}{2}; \tfrac{3}{2}; \sin^2\varphi,\, k^2 \sin^2\varphi\right),

\Pi(n, k) = \frac{\pi}{2}\, F_1\!\left(\tfrac{1}{2}; 1, \tfrac{1}{2}; 1; n,\, k^2\right). | https://en.wikipedia.org/wiki/Appell_series
In mathematics, Humbert series are a set of seven hypergeometric series Φ1, Φ2, Φ3, Ψ1, Ψ2, Ξ1, Ξ2 of two variables that generalize Kummer's confluent hypergeometric series 1F1 of one variable and the confluent hypergeometric limit function 0F1 of one variable. The first of these double series was introduced by Pierre Humbert (1920).
The Humbert series Φ1 is defined for |x| < 1 by the double series:

\Phi_1(a, b; c; x, y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n} (b)_m}{(c)_{m+n}\, m!\, n!}\, x^m y^n,
where the Pochhammer symbol (q)_n represents the rising factorial:

(q)_n = q(q+1)\cdots(q+n-1) = \frac{\Gamma(q+n)}{\Gamma(q)},

where the second equality is true for all complex q except q = 0, −1, −2, ….
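A quick sketch (hypothetical helper names) of the two equivalent computations of the rising factorial:

```python
from math import gamma, prod

def poch_product(q, n):
    """(q)_n = q (q+1) ... (q+n-1); the empty product gives (q)_0 = 1."""
    return prod(q + k for k in range(n))

def poch_gamma(q, n):
    """Gamma-function form, valid unless q is zero or a negative integer."""
    return gamma(q + n) / gamma(q)

print(poch_product(2.5, 4), poch_gamma(2.5, 4))   # both 2.5*3.5*4.5*5.5 = 216.5625
```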
For other values of x the function Φ1 can be defined by analytic continuation.
The Humbert series Φ1 can also be written as a one-dimensional Euler-type integral:

\Phi_1(a, b; c; x, y) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)} \int_0^1 t^{a-1} (1-t)^{c-a-1} (1-xt)^{-b} e^{yt}\, dt, \qquad \operatorname{Re} c > \operatorname{Re} a > 0.
This representation can be verified by means of Taylor expansion of the integrand, followed by termwise integration.
Similarly, the function Φ2 is defined for all x, y by the series:

\Phi_2(b, b'; c; x, y) = \sum_{m,n=0}^{\infty} \frac{(b)_m (b')_n}{(c)_{m+n}\, m!\, n!}\, x^m y^n,

the function Φ3 for all x, y by the series:

\Phi_3(b; c; x, y) = \sum_{m,n=0}^{\infty} \frac{(b)_m}{(c)_{m+n}\, m!\, n!}\, x^m y^n,

the function Ψ1 for |x| < 1 by the series:

\Psi_1(a, b; c, c'; x, y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n} (b)_m}{(c)_m (c')_n\, m!\, n!}\, x^m y^n,

the function Ψ2 for all x, y by the series:

\Psi_2(a; c, c'; x, y) = \sum_{m,n=0}^{\infty} \frac{(a)_{m+n}}{(c)_m (c')_n\, m!\, n!}\, x^m y^n,

the function Ξ1 for |x| < 1 by the series:

\Xi_1(a, a', b; c; x, y) = \sum_{m,n=0}^{\infty} \frac{(a)_m (a')_n (b)_m}{(c)_{m+n}\, m!\, n!}\, x^m y^n,

and the function Ξ2 for |x| < 1 by the series:

\Xi_2(a, b; c; x, y) = \sum_{m,n=0}^{\infty} \frac{(a)_m (b)_m}{(c)_{m+n}\, m!\, n!}\, x^m y^n. | https://en.wikipedia.org/wiki/Humbert_series
In mathematics, the Kampé de Fériet function is a two-variable generalization of the generalized hypergeometric series, introduced by Joseph Kampé de Fériet.
The Kampé de Fériet function is given by

F^{p:q;k}_{l:m;n}\!\left[\begin{matrix} a_1,\ldots,a_p : b_1,\ldots,b_q ; c_1,\ldots,c_k \\ \alpha_1,\ldots,\alpha_l : \beta_1,\ldots,\beta_m ; \gamma_1,\ldots,\gamma_n \end{matrix}; x, y\right] = \sum_{r,s=0}^{\infty} \frac{\prod_{j=1}^{p}(a_j)_{r+s} \prod_{j=1}^{q}(b_j)_r \prod_{j=1}^{k}(c_j)_s}{\prod_{j=1}^{l}(\alpha_j)_{r+s} \prod_{j=1}^{m}(\beta_j)_r \prod_{j=1}^{n}(\gamma_j)_s}\, \frac{x^r y^s}{r!\, s!}.
The general sextic equation can be solved in terms of Kampé de Fériet functions.[1]
| https://en.wikipedia.org/wiki/Kamp%C3%A9_de_F%C3%A9riet_function
In 1893 Giuseppe Lauricella defined and studied four hypergeometric series FA, FB, FC, FD of three variables. They are (Lauricella 1893):
F_A^{(3)}(a; b_1, b_2, b_3; c_1, c_2, c_3; x_1, x_2, x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c_1)_{i_1} (c_2)_{i_2} (c_3)_{i_3}\, i_1!\, i_2!\, i_3!}\, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| + |x2| + |x3| < 1 and
F_B^{(3)}(a_1, a_2, a_3; b_1, b_2, b_3; c; x_1, x_2, x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a_1)_{i_1} (a_2)_{i_2} (a_3)_{i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c)_{i_1+i_2+i_3}\, i_1!\, i_2!\, i_3!}\, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| < 1, |x2| < 1, |x3| < 1 and
F_C^{(3)}(a; b; c_1, c_2, c_3; x_1, x_2, x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b)_{i_1+i_2+i_3}}{(c_1)_{i_1} (c_2)_{i_2} (c_3)_{i_3}\, i_1!\, i_2!\, i_3!}\, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1|^{1/2} + |x2|^{1/2} + |x3|^{1/2} < 1 and
F_D^{(3)}(a; b_1, b_2, b_3; c; x_1, x_2, x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}}{(c)_{i_1+i_2+i_3}\, i_1!\, i_2!\, i_3!}\, x_1^{i_1} x_2^{i_2} x_3^{i_3}

for |x1| < 1, |x2| < 1, |x3| < 1. Here the Pochhammer symbol (q)_i indicates the i-th rising factorial of q, i.e.

(q)_i = q(q+1)\cdots(q+i-1) = \frac{\Gamma(q+i)}{\Gamma(q)},
where the second equality is true for all complex q except q = 0, −1, −2, ….
These functions can be extended to other values of the variables x1, x2, x3 by means of analytic continuation.
Lauricella also indicated the existence of ten other hypergeometric functions of three variables. These were named FE, FF, ..., FT and studied by Shanti Saran in 1954 (Saran 1954). There are therefore a total of 14 Lauricella–Saran hypergeometric functions.
These functions can be straightforwardly extended to n variables. One writes for example

F_A^{(n)}(a; b_1, \ldots, b_n; c_1, \ldots, c_n; x_1, \ldots, x_n) = \sum_{i_1,\ldots,i_n=0}^{\infty} \frac{(a)_{i_1+\cdots+i_n} (b_1)_{i_1} \cdots (b_n)_{i_n}}{(c_1)_{i_1} \cdots (c_n)_{i_n}\, i_1! \cdots i_n!}\, x_1^{i_1} \cdots x_n^{i_n},

where |x1| + ... + |xn| < 1. These generalized series too are sometimes referred to as Lauricella functions.
When n = 2, the Lauricella functions correspond to the Appell hypergeometric series of two variables:

F_A^{(2)} \equiv F_2, \qquad F_B^{(2)} \equiv F_3, \qquad F_C^{(2)} \equiv F_4, \qquad F_D^{(2)} \equiv F_1.

When n = 1, all four functions reduce to the Gauss hypergeometric function:

F_A^{(1)}(a;b;c;x) \equiv F_B^{(1)}(a;b;c;x) \equiv F_C^{(1)}(a;b;c;x) \equiv F_D^{(1)}(a;b;c;x) \equiv {}_2F_1(a,b;c;x).
In analogy with Appell's function F1, Lauricella's FD can be written as a one-dimensional Euler-type integral for any number n of variables:

F_D^{(n)}(a; b_1, \ldots, b_n; c; x_1, \ldots, x_n) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(c-a)} \int_0^1 t^{a-1} (1-t)^{c-a-1} \prod_{i=1}^{n} (1 - x_i t)^{-b_i}\, dt, \qquad \operatorname{Re} c > \operatorname{Re} a > 0.

This representation can be easily verified by means of Taylor expansion of the integrand, followed by termwise integration. The representation implies that the incomplete elliptic integral Π is a special case of Lauricella's function FD with three variables:

\Pi(n, \varphi, k) = \sin\varphi \; F_D^{(3)}\!\left(\tfrac{1}{2}; \tfrac{1}{2}, 1, \tfrac{1}{2}; \tfrac{3}{2}; \sin^2\varphi,\, n \sin^2\varphi,\, k^2 \sin^2\varphi\right).
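A small numerical sketch of this integral (my own helper, not from the source); with a single variable FD reduces to Gauss's 2F1, which the sketch uses as a check:

```python
from math import gamma, prod
from scipy.integrate import quad
from scipy.special import hyp2f1

def lauricella_FD(a, b, c, x):
    """F_D(a; b_1..b_n; c; x_1..x_n) via its Euler integral (Re c > Re a > 0)."""
    f = lambda t: (t**(a - 1) * (1 - t)**(c - a - 1)
                   * prod((1 - xi*t)**(-bi) for bi, xi in zip(b, x)))
    val, _ = quad(f, 0.0, 1.0)
    return gamma(c) / (gamma(a) * gamma(c - a)) * val

# n = 1 reduces to Gauss's 2F1:
print(lauricella_FD(1.5, [1.0], 3.0, [0.3]))
print(hyp2f1(1.5, 1.0, 3.0, 0.3))
```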
Case 1: a > c, with a − c a positive integer.
One can relate FD to the Carlson R function R_n via
F_D(a, \overline{b}, c, \overline{z}) = R_{a-c}\!\left(\overline{b^*}, \overline{z^*}\right) \cdot \prod_i (z_i^*)^{b_i^*} = \frac{\Gamma(a-c+1)\,\Gamma(b^*)}{\Gamma(a-c+b^*)} \cdot D_{a-c}\!\left(\overline{b^*}, \overline{z^*}\right) \cdot \prod_i (z_i^*)^{b_i^*}
with the iterative sum
D_n\!\left(\overline{b^*}, \overline{z^*}\right) = \frac{1}{n} \sum_{k=1}^{n} \left( \sum_{i=1}^{N} b_i^* \cdot (z_i^*)^k \right) \cdot D_{n-k} \quad \text{and} \quad D_0 = 1,
where it can be exploited that the Carlson R function with n > 0 has an exact representation (see [1] for more information).
The vectors are defined as
\overline{b^*} = \left[\,\overline{b},\; c - \sum_i b_i\,\right]

\overline{z^*} = \left[\frac{1}{1-z_1}, \ldots, \frac{1}{1-z_{N-1}}, 1\right]

where the length of \overline{z} and \overline{b} is N − 1, while the vectors \overline{z^*} and \overline{b^*} have length N.
Case 2: c > a, with c − a a positive integer.
In this case there is also a known analytic form, but it is rather complicated to write down and involves several steps.
See [2] for more information. | https://en.wikipedia.org/wiki/Lauricella_hypergeometric_series
The McDonald's Monopoly game is a sales promotion run by fast food restaurant chain McDonald's, with a theme based on the Hasbro board game Monopoly. The game first ran in the U.S. in 1987 and has since been used worldwide.
The promotion has used other names, such as Monopoly: Pick Your Prize! (2001), Monopoly Best Chance Game (2003–2005), Monopoly/Millionaire Game (2013), Prize Vault (2014), Money Monopoly (2016–present), Coast to Coast (2015–2024) and Double Play (2024–present) in Canada, and Golden Chances (2015), Prize Choice (2016), Win Win (2017), Wiiiin!! (2018), V.I.P. (2021), Double Peel (2022, 2023)[1] and Power Peel (2024) in the UK.
The promotion was first offered in the United States in 1987. It was later extended to Canada,[2] Australia, Austria, France, Germany, Hong Kong, the Netherlands, New Zealand, Poland, Portugal, Romania, Russia, Singapore, South Africa, Spain, Switzerland, Taiwan, and the United Kingdom. Argentina and Brazil were included in 2013, as were South Korea in 2014[3] and Ireland in 2016. From 2003 to 2009, Best Buy was involved in the U.S. version, and later[when?] in the Canadian version.
Like many merchants, McDonald's offered sweepstakes to draw customers into its restaurants. Laws generally forbid a company from administering its own contests, in order to prevent fraud and to ensure that all prizes are given away; as a result, such promotions are handled by an impartial third-party company.[4] McDonald's had a relationship with Simon Worldwide Inc., which was responsible for the distribution of the contest pieces and the awarding of major prizes.[citation needed]
In 2015, the Monopoly game was replaced in the US by "Game Time Gold", using an NFL theme.[5]
In 2020, in the UK, the promotion was intended to run in March of that year but was postponed due to the COVID-19 pandemic.[6] The same promotion, "Monopoly VIP", was instead run in 2021.[7] As a result of the postponement, the stickers for that year's promotion had incorrect dates, as they were originally printed the year prior.[8]
The promotion mimics the game Monopoly. The game is also advertised with tokens appearing in Sunday newspapers.[9][10] Originally, customers received a set of two tokens with every purchase, but now tokens come only with certain menu items. Tokens correspond to a property space on the Monopoly board (with the exception of the Golden Avenue/Arches Avenue "properties", which were added in the 2008 edition, and the Electric Company/Water Works utilities added in 2014). When combined into color-matched properties, the tokens may be redeemed for money. Historically, the grand prize ($1 million, annuity only) has been the combination of the two most costly properties, Park Place and Boardwalk, but in the 2006–2007 games the top prize ($5 million, with the traditional $1 million prize for Boardwalk/Park Place) was awarded for collecting the four railroads.
There are also "instant win" tokens the recipient can redeem for McDonald's food (typically small menu items, such as a free smallMcFlurryormedium fries) but never for any food item that has game pieces, money, or other prizes. The 2001 edition was titled "Pick Your Prize!", in which winners could choose which of three ways they wanted their prize awarded (i.e., they could choose gold, diamonds, or $50,000 per year for 20 years).
In 2016, the game changed so that all available prizes were cash, including the $1 million for Park Place and Boardwalk, and was titled "Money Monopoly".[11]
Additionally, in the 2005 edition, certain foods always came with one coupon which could be used at either Best Buy, Toys "R" Us, or Foot Locker (including online stores). The value of each coupon was random, with Toys R Us coupons ranging from $1 to $5; up to $5 in coupons could be used in a single transaction. In 2008, these coupons were redeemed for up to 25% off any Foot Locker item(s). Since 2009, the promotion has not featured any coupons.
Canadian and US laws require that game pieces be available upon request with no purchase necessary (Alternative Method of Entry, "AMOE"),[citation needed] and they can be requested by the mailing of a handwritten, self-addressed stamped envelope.[12]
Many of the prizes go unclaimed. For example, in 2018, 25 million instant food prizes were offered in the promotion in the UK,[13] yet only 8 million prizes were claimed overall.[14] Out of 20 Mini Coopers offered, only 6 were claimed.[14][15]
Within each color group, one piece is rare, and it is these rare collectible pieces that dictate the odds of winning.
In 2013, McDonald's allowed two Boardwalk pieces to be produced; prior to this only one was produced. McDonald's added Golden Avenue and Arches Avenue for 2008 only; obtaining both won $100,000. For 2014 only, McDonald's added Electric Company and Water Works; obtaining both won $10,000. For 2014 only, Free Parking was added, with a $100,000 prize.
In 2005, McDonald's introduced an online counterpart to its traditional game. In addition to the traditional "sticker" game, participants can play online. Each game piece lists a code which can be entered online, to a maximum of 10 entries per 24 hours; a day starts at midnight EDT. Up to 2014, each code entered granted the user one roll on a virtual Monopoly game board, identical to the board game's board. Rolling "doubles" (two dice sharing the same number), as with the real board game, allows the user to move again.
Landing on Electric Company, Income Tax, Jail/Just Visiting, Go to Jail, Water Works, or Luxury Tax does not count towards any prize. If a player lands on an unowned property (not landed upon by the player in a previous turn), the user will "collect" that property. When all properties of a colored set are collected, the user wins a prize, with prize values similar to those of the sticker game. In addition to collecting property sets, users can also win by landing on certain "instant win" spaces, including Go, Chance, Community Chest, and Free Parking. Landing on Go (but not simply passing it) gives the player a code worth one free hour of WiFi access at participating McDonald's restaurants. Landing on Chance is worth money to spend at Foot Locker. Landing on Community Chest allows the user to be given a code worth 25 My Coke Rewards points. Landing on Free Parking is a prize of a $50 refillable gas card from Shell, or alternatively 25 complimentary apple pies.
In 2007, landing on Community Chest won game downloads.[17]
In 2009, the prizes became two hours of Wi-Fi and a $25 Arch Card for landing on Go, an entry into an online roll for $1,000,000 (annuity) for landing on Chance, 25 My Coke Rewards points for landing on Community Chest, and a $50 refillable Shell gift card for landing on Free Parking.
The values of the dice are not random. As stated in the contest rules, one property in each set is "rare," similar to the sticker game. These rare properties are landed on only when the game server "seeds" a winning roll. Winning rolls are seeded at specific times on specific dates, and the first user to roll the dice once a win has been seeded will land on a winning piece. This allows McDonald's to declare the odds of winning certain prizes, a legal requirement for contests in most jurisdictions.
In 2010, the online game was changed, removing the dice and the game board and presenting only 3 Chance cards from which to pick. One has a prize, starting at 30 My Coke Rewards points, but may be (non-randomly) seeded with a higher-valued prize. The player chooses one card, is asked to confirm the choice, and the card is flipped. If it is the pre-selected winning card, the player wins the pre-selected prize.
In 2011, the game was changed again – the mascot, Rich Uncle Pennybags (aka "Mr. Monopoly"), is shown attempting to throw a Chance card into a top hat. If the card lands in the hat, the player can claim a prize. Players must choose a "throwing style", which only changes the animations used – it does nothing to affect one's odds of winning.
In 2012, the game was changed once more. Players must click on "Spin" first, and if it lands on "GO!", the player wins the online prize shown. The next year, players had to click on "Play"; a win resulted in the prize shown onscreen, and regardless of the outcome, players received an entry to win a 2013 Fiat 500 Cabrio. For the 2014 game, players must click on "GO!", and if it results in a win, the online prize is shown onscreen; regardless of the outcome, the participant receives an entry to win $50,000.
In 2016, players can enter up to ten codes daily in order to win weekly sweepstakes with a $50,000 grand prize.
For all versions of the online game, players are restricted to a set number of online entries per day.[18]In the UK, this is restricted to 24 entries. In the US, Guam, Puerto Rico, and Saipan, the limit is 10.
Players in the UK must be aged 18 years or older.
Note that the official rules state: "The purchase, sale, trading, or barter of Game Pieces, Game Stamps, FREE Codes or Game Codes via Online or live auctions, or any other methods, does not constitute Legitimate Channels and is expressly prohibited."[18]This includes eBay.com, where it is also a violation of that site's lottery policy.
The promotion has been criticized for incentivizing ordering more and upsizing the portions.[13][19][20] In 2019, Deputy Leader of the UK Labour Party, Tom Watson, said that the Monopoly promotion was a "danger to public health" and urged McDonald's to drop the "grotesque marketing strategy".[20][21]
In 2001, the U.S. promotion was halted after fraud was uncovered. A subcontracting company, Simon Marketing (then a subsidiary of Cyrk), which had been hired by McDonald's to organize and promote the game, failed to recognize a flaw in its procedures. Simon's chief of security Jerome P. Jacobson ("Uncle Jerry"), a former police officer, stole the most valuable game pieces.[22][23] Jacobson justified his long-running multimillion-dollar crime as his reaction to Simon executives having rerun randomized draws to ensure that high-level prizes went to areas in the United States rather than Canada, although he did not take the stolen pieces to Canada.[23] He began stealing winning game pieces after a supplier mistakenly provided him a sheet of the anti-tamper seals needed to securely conduct the legitimate transfer of winning pieces. Jacobson first offered the game pieces to friends and family but eventually began selling them to Gennaro "Jerry" Colombo of the Colombo crime family, whom he had met by chance at the Atlanta airport.[24] Colombo would then recruit people to act as contest winners in exchange for half of the winnings.[22][23]
In 1995, Colombo appeared in a nationally televised McDonald's commercial promoting his (fraudulent) win of a Dodge Viper.[25] In 1995, St. Jude Children's Hospital in Memphis, Tennessee, received an anonymous letter with a Dallas, Texas, postmark that contained a $1 million winning game piece. Although game rules prohibited the transfer of prizes, McDonald's awarded the $1 million as a donation to the hospital, making the final $50,000 annuity payment in 2014.[26][27] Investigations later revealed that Jacobson had admitted to sending the winning piece to the hospital.[28] In June 1996, Colombo's father-in-law William "Buddy" Fisher came forward as a winner with a stolen $1 million Monopoly piece.[29] After Colombo died in a 1998 traffic accident, Jacobson found new accomplices to help him sell the stolen game pieces.[24]
Jacobson's associates and those of his collaborators won almost all of the top prizes, including cash and cars, between 1995 and 2000, including McDonald's giveaways outside of the Monopoly promotion.[30] The associates netted over $24 million. While the fraud appeared to have been perpetrated by only one key employee of the promotion company, and not by the company's management, eight people were originally arrested,[31][32] a number that soon grew to 21 indicted, including members of the Colombo crime family.[33] By the end of the criminal prosecutions, 53 people were indicted, of whom 48 pled guilty: 46 in pretrial plea agreements and two who changed their pleas from not guilty to guilty during their trials.[26]
McDonald's severed its relationship with Simon Marketing and each company filed lawsuits against the other for breach of contract that were eventually settled out of court. The case brought forth by McDonald's was dismissed but Simon received $16.6 million.[34][35] Four of the putative winners convicted of fraud had their convictions reversed on appeal on grounds of a constitutional violation, as they did not know Jacobson and thus did not know that the winning game pieces were necessarily stolen.[36][37]
Jacobson pleaded guilty to three counts of mail fraud in federal court in Jacksonville, Florida, and served three years in federal prison. The trial began on September 10, 2001, but was overshadowed in the media by the September 11 attacks that occurred the next day.
In August 2018, 20th Century Fox announced plans for a film based on the Jacobson fraud, with Ben Affleck attached as director, Paul Wernick and Rhett Reese as writers, and Matt Damon in an acting role.[38][39] While there have been no further updates on the plans for the film, the controversy is depicted in the 2020 HBO docuseries McMillions.[40] | https://en.wikipedia.org/wiki/McDonald%27s_Monopoly
In population genetics, the Watterson estimator is a method for describing the genetic diversity in a population. It was developed by Margaret Wu and G. A. Watterson in the 1970s.[1][2] It is estimated by counting the number of polymorphic sites. It is a measure of the "population mutation rate" (the product of the effective population size and the neutral mutation rate) from the observed nucleotide diversity of a population, θ = 4Neμ,[3] where Ne is the effective population size and μ is the per-generation mutation rate of the population of interest (Watterson 1975). The assumptions made are that there is a sample of n haploid individuals from the population of interest with effective size Ne, that n ≪ Ne, and that there are infinitely many sites capable of varying (so that mutations never overlay or reverse one another).
Because the number of segregating sites counted will increase with the number of sequences looked at, the correction factor a_n is used.
The estimate of θ, often denoted as θ̂_w, is

\hat{\theta}_w = \frac{K}{a_n},
where K is the number of segregating sites (an example of a segregating site would be a single-nucleotide polymorphism) in the sample and

a_n = \sum_{i=1}^{n-1} \frac{1}{i}

is the (n − 1)th harmonic number.
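A direct transcription of this formula into code (a minimal sketch with hypothetical input values):

```python
def watterson_theta(K, n):
    """Watterson's estimator: K segregating sites over the (n-1)th harmonic number."""
    a_n = sum(1.0 / i for i in range(1, n))   # 1 + 1/2 + ... + 1/(n-1)
    return K / a_n

# e.g. 25 segregating sites observed in a sample of 10 sequences:
print(watterson_theta(25, 10))   # 25 / 2.829 ≈ 8.84
```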
This estimate is based on coalescent theory. Watterson's estimator is commonly used for its simplicity. When its assumptions are met, the estimator is unbiased and the variance of the estimator decreases with increasing sample size or recombination rate. However, the estimator can be biased by population structure. For example, θ̂_w is downwardly biased in an exponentially growing population. It can also be biased by violation of the infinite-sites mutational model; if multiple mutations can overwrite one another, Watterson's estimator will be biased downward.
Comparing the value of Watterson's estimator to the nucleotide diversity is the basis of Tajima's D, which allows inference of the evolutionary regime of a given locus. | https://en.wikipedia.org/wiki/Watterson_estimator
A permutation test (also called re-randomization test or shuffle test) is an exact statistical hypothesis test.
A permutation test involves two or more samples. The (possibly counterfactual) null hypothesis is that all samples come from the same distribution: H0 : F = G. Under the null hypothesis, the distribution of the test statistic is obtained by calculating all possible values of the test statistic under possible rearrangements of the observed data. Permutation tests are, therefore, a form of resampling.
Permutation tests can be understood as surrogate data testing where the surrogate data under the null hypothesis are obtained through permutations of the original data.[1]
In other words, the method by which treatments are allocated to subjects in an experimental design is mirrored in the analysis of that design. If the labels are exchangeable under the null hypothesis, then the resulting tests yield exact significance levels; see also exchangeability. Confidence intervals can then be derived from the tests. The theory has evolved from the works of Ronald Fisher and E. J. G. Pitman in the 1930s.
Permutation tests should not be confused with randomized tests.[2]
To illustrate the basic idea of a permutation test, suppose we collect random variables X_A and X_B for each individual from two groups A and B whose sample means are x̄_A and x̄_B, and that we want to know whether X_A and X_B come from the same distribution. Let n_A and n_B be the sample size collected from each group. The permutation test is designed to determine whether the observed difference between the sample means is large enough to reject, at some significance level, the null hypothesis H0 that the data drawn from A is from the same distribution as the data drawn from B.
The test proceeds as follows. First, the difference in means between the two samples is calculated: this is the observed value of the test statistic, T_obs.
Next, the observations of groups A and B are pooled, and the difference in sample means is calculated and recorded for every possible way of dividing the pooled values into two groups of size n_A and n_B (i.e., for every permutation of the group labels A and B). The set of these calculated differences is the exact distribution of possible differences (for this sample) under the null hypothesis that group labels are exchangeable (i.e., are randomly assigned).
The one-sided p-value of the test is calculated as the proportion of sampled permutations where the difference in means was greater than T_obs. The two-sided p-value of the test is calculated as the proportion of sampled permutations where the absolute difference was greater than |T_obs|.
Many implementations of permutation tests require that the observed data itself be counted as one of the permutations so that the permutation p-value will never be zero.[3]
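The procedure above translates directly into code. The following sketch (my own minimal implementation, with made-up data) enumerates every division of the pooled values into groups of size n_A and n_B and computes a two-sided p-value; because the observed labeling is itself one of the enumerated divisions, the p-value can never be zero:

```python
from itertools import combinations
import numpy as np

def permutation_test(a, b):
    """Exact two-sided permutation test for a difference in means."""
    pooled = np.concatenate([a, b])
    t_obs = abs(np.mean(a) - np.mean(b))
    n_a = len(a)
    hits = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        mask = np.zeros(len(pooled), dtype=bool)
        mask[list(idx)] = True
        t = abs(pooled[mask].mean() - pooled[~mask].mean())
        hits += t >= t_obs        # the observed split itself always counts
        total += 1
    return hits / total

a = [4.2, 5.1, 6.3, 5.8]
b = [3.1, 2.8, 3.9, 3.3, 2.5]
print(permutation_test(a, b))     # C(9,4) = 126 relabelings in total
```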
Alternatively, if the only purpose of the test is to reject or not reject the null hypothesis, one could sort the recorded differences, and then observe if T_obs is contained within the middle (1 − α) × 100% of them, for some significance level α. If it is not, we reject the hypothesis of identical probability curves at the α × 100% significance level.
To exploit variance reduction with paired samples, a paired permutation test must be applied; see paired difference test. This is equivalent to performing a normal, unpaired permutation test, but restricting the set of valid permutations to only those which respect the paired nature of the data by forbidding both halves of any pair from being included in the same partition. In the specific but common case where the test statistic is the mean, this is also equivalent to computing a single set of differences of each pair and iterating over all of the 2^n sign-reversals instead of the usual partitioning approach.
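A sketch of that sign-reversal formulation (again my own illustration, with made-up paired data):

```python
from itertools import product
import numpy as np

def paired_permutation_test(x, y):
    """Exact paired permutation test: iterate over all 2^n sign-reversals
    of the within-pair differences (test statistic: mean difference)."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    t_obs = abs(d.mean())
    hits = total = 0
    for signs in product([1.0, -1.0], repeat=len(d)):
        hits += abs((d * signs).mean()) >= t_obs   # identity signs always count
        total += 1
    return hits / total

print(paired_permutation_test([5.1, 4.8, 6.0, 5.5, 5.9],
                              [4.6, 4.9, 5.2, 5.0, 5.1]))   # 2^5 = 32 patterns
```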
Permutation tests are a subset of non-parametric statistics. Assuming that our experimental data come from data measured from two treatment groups, the method simply generates the distribution of mean differences under the assumption that the two groups are not distinct in terms of the measured variable. From this, one then uses the observed statistic (T_obs above) to see to what extent this statistic is special, i.e., the likelihood of observing the magnitude of such a value (or larger) if the treatment labels had simply been randomized after treatment.
In contrast to permutation tests, the distributions underlying many popular "classical" statistical tests, such as the t-test, F-test, z-test, and χ² test, are obtained from theoretical probability distributions. Fisher's exact test is an example of a commonly used permutation test for evaluating the association between two dichotomous variables. When sample sizes are very large, the Pearson's chi-square test will give accurate results. For small samples, the chi-square reference distribution cannot be assumed to give a correct description of the probability distribution of the test statistic, and in this situation the use of Fisher's exact test becomes more appropriate.
Permutation tests exist in many situations where parametric tests do not (e.g., when deriving an optimal test when losses are proportional to the size of an error rather than its square). All simple and many relatively complex parametric tests have a corresponding permutation test version that is defined by using the same test statistic as the parametric test, but obtains the p-value from the sample-specific permutation distribution of that statistic, rather than from the theoretical distribution derived from the parametric assumption. For example, it is possible in this manner to construct a permutation t-test, a permutation χ² test of association, a permutation version of Aly's test for comparing variances, and so on.
The major drawbacks to permutation tests are that they can be computationally intensive and may require custom code for statistics that are difficult to calculate.
Permutation tests exist for any test statistic, regardless of whether or not its distribution is known. Thus one is always free to choose the statistic which best discriminates between hypothesis and alternative and which minimizes losses.
Permutation tests can be used for analyzing unbalanced designs[4] and for combining dependent tests on mixtures of categorical, ordinal, and metric data (Pesarin, 2001)[citation needed]. They can also be used to analyze qualitative data that has been quantitized (i.e., turned into numbers). Permutation tests may be ideal for analyzing quantitized data that do not satisfy statistical assumptions underlying traditional parametric tests (e.g., t-tests, ANOVA);[5] see PERMANOVA.
Before the 1980s, the burden of creating the reference distribution was overwhelming except for data sets with small sample sizes.
Since the 1980s, the confluence of relatively inexpensive fast computers and the development of new sophisticated path algorithms applicable in special situations made the application of permutation test methods practical for a wide range of problems. It also initiated the addition of exact-test options in the main statistical software packages and the appearance of specialized software for performing a wide range of uni- and multi-variable exact tests and computing test-based "exact" confidence intervals.
An important assumption behind a permutation test is that the observations are exchangeable under the null hypothesis. An important consequence of this assumption is that tests of difference in location (like a permutation t-test) require equal variance under the normality assumption. In this respect, the classic permutation t-test shares the same weakness as the classical Student's t-test (the Behrens–Fisher problem). This can be addressed in the same way the classic t-test has been extended to handle unequal variances: by employing the Welch statistic with Satterthwaite adjustment to the degrees of freedom.[6] A third alternative in this situation is to use a bootstrap-based test. Statistician Phillip Good explains the difference between permutation tests and bootstrap tests the following way: "Permutations test hypotheses concerning distributions; bootstraps test hypotheses concerning parameters. As a result, the bootstrap entails less-stringent assumptions."[7] Bootstrap tests are not exact. In some cases, a permutation test based on a properly studentized statistic can be asymptotically exact even when the exchangeability assumption is violated.[8] Bootstrap-based tests can test with the null hypothesis H0 : F ≠ G and, therefore, are suited for performing equivalence testing.
An asymptotically equivalent permutation test can be created when there are too many possible orderings of the data to allow complete enumeration in a convenient manner. This is done by generating the reference distribution by Monte Carlo sampling, which takes a small (relative to the total number of permutations) random sample of the possible replicates.
The realization that this could be applied to any permutation test on any dataset was an important breakthrough in the area of applied statistics. The earliest known references to this approach are Eden and Yates (1933) and Dwass (1957).[9][10] This type of permutation test is known under various names: approximate permutation test, Monte Carlo permutation tests or random permutation tests.[11]
After N random permutations, it is possible to obtain a confidence interval for the p-value based on the binomial distribution; see binomial proportion confidence interval. For example, if after N = 10000 random permutations the p-value is estimated to be p̂ = 0.05, then a 99% confidence interval for the true p (the one that would result from trying all possible permutations) is

\left[\hat{p} - z\sqrt{\frac{0.05(1-0.05)}{10000}},\; \hat{p} + z\sqrt{\frac{0.05(1-0.05)}{10000}}\right] = [0.045, 0.055].
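In code (a small sketch; z = 2.576 is the 0.995 normal quantile, giving a 99% interval):

```python
from math import sqrt

def mc_pvalue_ci(p_hat, n_perm, z=2.576):
    """Binomial (normal-approximation) CI for a Monte Carlo p-value estimate."""
    half = z * sqrt(p_hat * (1 - p_hat) / n_perm)
    return p_hat - half, p_hat + half

print(mc_pvalue_ci(0.05, 10_000))   # ≈ (0.044, 0.056), matching the text above
```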
On the other hand, the purpose of estimating the p-value is most often to decide whether p ≤ α, where α is the threshold at which the null hypothesis will be rejected (typically α = 0.05). In the example above, the confidence interval only tells us that there is roughly a 50% chance that the p-value is smaller than 0.05, i.e. it is completely unclear whether the null hypothesis should be rejected at a level α = 0.05.
If it is only important to know whether p ≤ α for a given α, it is logical to continue simulating until the statement p ≤ α can be established to be true or false with a very low probability of error. Given a bound ε on the admissible probability of error (the probability of finding that p̂ > α when in fact p ≤ α, or vice versa), the question of how many permutations to generate can be seen as the question of when to stop generating permutations, based on the outcomes of the simulations so far, in order to guarantee that the conclusion (which is either p ≤ α or p > α) is correct with probability at least as large as 1 − ε. (ε will typically be chosen to be extremely small, e.g. 1/1000.) Stopping rules to achieve this have been developed[12] which can be incorporated with minimal additional computational cost. In fact, depending on the true underlying p-value, it will often be found that the number of simulations required is remarkably small (e.g. as low as 5 and often not larger than 100) before a decision can be reached with virtual certainty.
| https://en.wikipedia.org/wiki/Permutation_test
Random assignment or random placement is an experimental technique for assigning human participants or animal subjects to different groups in an experiment (e.g., a treatment group versus a control group) using randomization, such as by a chance procedure (e.g., flipping a coin) or a random number generator.[1] This ensures that each participant or subject has an equal chance of being placed in any group.[1] Random assignment of participants helps to ensure that any differences between and within the groups are not systematic at the outset of the experiment.[1] Thus, any differences between groups recorded at the end of the experiment can be more confidently attributed to the experimental procedures or treatment.[1]
Random assignment, blinding, and controlling are key aspects of the design of experiments because they help ensure that the results are not spurious or deceptive via confounding. This is why randomized controlled trials are vital in clinical research, especially ones that can be double-blinded and placebo-controlled.
Mathematically, there are distinctions between randomization, pseudorandomization, and quasirandomization, as well as between random number generators and pseudorandom number generators. How much these differences matter in experiments (such as clinical trials) is a matter of trial design and statistical rigor, which affect evidence grading. Studies done with pseudo- or quasirandomization are usually given nearly the same weight as those with true randomization but are viewed with a bit more caution.
Imagine an experiment in which the participants are not randomly assigned; perhaps the first 10 people to arrive are assigned to the Experimental group, and the last 10 people to arrive are assigned to the Control group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group, and claims these differences are a result of the experimental procedure. However, they also may be due to some other preexisting attribute of the participants, e.g. people who arrive early versus people who arrive late.
Imagine the experimenter instead uses a coin flip to randomly assign participants. If the coin lands heads-up, the participant is assigned to the Experimental group. If the coin lands tails-up, the participant is assigned to the Control group. At the end of the experiment, the experimenter finds differences between the Experimental group and the Control group. Because each participant had an equal chance of being placed in any group, it is unlikely the differences could be attributable to some other preexisting attribute of the participant, e.g. those who arrived on time versus late.
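A tiny sketch of this coin-flip procedure (the participant labels are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
participants = [f"P{i:02d}" for i in range(20)]

# One fair coin flip per participant: heads -> Experimental, tails -> Control.
flips = rng.integers(0, 2, size=len(participants))
assignment = {p: ("Experimental" if f else "Control")
              for p, f in zip(participants, flips)}

print(assignment)
print("group sizes:", sum(flips), len(flips) - sum(flips))  # need not be equal
```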
Random assignment does not guarantee that the groups are matched or equivalent. The groups may still differ on some preexisting attribute due to chance. The use of random assignment cannot eliminate this possibility, but it greatly reduces it.
To express this same idea statistically: if a randomly assigned group is compared to the mean, it may be discovered that they differ, even though they were assigned from the same group. If a test of statistical significance is applied to randomly assigned groups to test the difference between sample means against the null hypothesis that they are equal to the same population mean (i.e., population mean of differences = 0), given the probability distribution, the null hypothesis will sometimes be "rejected", that is, deemed not plausible. That is, the groups will be sufficiently different on the variable tested to conclude statistically that they did not come from the same population, even though, procedurally, they were assigned from the same total group. For example, using random assignment may create an assignment to groups that has 20 blue-eyed people and 5 brown-eyed people in one group. This is a rare event under random assignment, but it could happen, and when it does it might add some doubt to the causal agent in the experimental hypothesis.
Random sampling is a related, but distinct, process.[2] Random sampling is recruiting participants in a way that they represent a larger population.[2] Because most basic statistical tests require the hypothesis of an independent, randomly sampled population, random assignment is the desired assignment method: it provides control for all attributes of the members of the samples—in contrast to matching on only one or more variables—and provides the mathematical basis for estimating the likelihood of group equivalence for characteristics one is interested in, both for pretreatment checks on equivalence and for the evaluation of post-treatment results using inferential statistics. More advanced statistical modeling can be used to adapt the inference to the sampling method.
Randomization was emphasized in the theory of statistical inference of Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). Peirce applied randomization in the Peirce–Jastrow experiment on weight perception.
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.[3][4][5][6] Peirce's experiment inspired other researchers in psychology and education, which developed a research tradition of randomized experiments in laboratories and specialized textbooks in the eighteen-hundreds.[3][4][5][6]
Jerzy Neyman advocated randomization in survey sampling (1934) and in experiments (1923).[7] Ronald A. Fisher advocated randomization in his book on experimental design (1935). | https://en.wikipedia.org/wiki/Random_assignment
In statistics, resampling is the creation of new samples based on one observed sample.
Resampling methods include permutation tests, bootstrapping, cross-validation, the jackknife, and subsampling.
Permutation tests rely on resampling the original data assuming the null hypothesis. From the resampled data one can conclude how likely the original data are to occur under the null hypothesis.
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle,[1] as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample.
For example,[1] when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line.
It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics.[2][3] In this context, the bootstrap is used to replace sequentially empirical weighted probability measures by empirical measures. The bootstrap allows one to replace the samples with low weights by copies of the samples with high weights.
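As a minimal sketch (synthetic data, my own helper names), the bootstrap standard error and a percentile confidence interval for a sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=50)   # the one observed sample

boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(10_000)                   # resample with replacement
])

se = boot_means.std(ddof=1)                          # bootstrap standard error
lo, hi = np.percentile(boot_means, [2.5, 97.5])      # percentile 95% CI
print(f"mean={data.mean():.3f}, SE={se:.3f}, 95% CI=({lo:.3f}, {hi:.3f})")
```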
Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. Cross-validation is employed repeatedly in building decision trees.
One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set.
This avoids "self-influence". For comparison, inregression analysismethods such aslinear regression, eachyvalue draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts theyvalue for each observation without using that observation.
This is often used for deciding how many predictor variables to use in regression. Without cross-validation, adding predictors always reduces the residual sum of squares (or possibly leaves it unchanged). In contrast, the cross-validated mean-square error will tend to decrease if valuable predictors are added, but increase if worthless predictors are added.[4]
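A minimal sketch (my own, with synthetic data) of K-fold cross-validation for a simple linear regression, of the kind used to compare candidate predictor sets:

```python
import numpy as np

def kfold_mse(x, y, k=5, seed=0):
    """K-fold cross-validated mean squared error for simple linear regression."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        slope, intercept = np.polyfit(x[train], y[train], deg=1)  # fit on K-1 folds
        pred = slope * x[test] + intercept                        # predict held-out fold
        errors.append(((y[test] - pred) ** 2).mean())
    return float(np.mean(errors))

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + rng.normal(0, 1, 100)
print(kfold_mse(x, y))
```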
Subsampling is an alternative method for approximating the sampling distribution of an estimator. The two key differences to the bootstrap are: (i) the resample size is smaller than the sample size, and (ii) resampling is done without replacement.
The advantage of subsampling is that it is valid under much weaker conditions compared to the bootstrap. In particular, a set of sufficient conditions is that the rate of convergence of the estimator is known and that the limiting distribution is continuous.
In addition, the resample (or subsample) size must tend to infinity together with the sample size but at a smaller rate, so that their ratio converges to zero. While subsampling was originally proposed for the case of independent and identically distributed (iid) data only, the methodology has been extended to cover time series data as well; in this case, one resamples blocks of subsequent data rather than individual data points. There are many cases of applied interest where subsampling leads to valid inference whereas bootstrapping does not; for example, such cases include examples where the rate of convergence of the estimator is not the square root of the sample size or when the limiting distribution is non-normal. When both subsampling and the bootstrap are consistent, the bootstrap is typically more accurate. RANSAC is a popular algorithm using subsampling.
Jackknifing (jackknife cross-validation) is used in statistical inference to estimate the bias and standard error (variance) of a statistic, when a random sample of observations is used to calculate it. Historically, this method preceded the invention of the bootstrap, with Quenouille inventing it in 1949 and Tukey extending it in 1958.[5][6] This method was foreshadowed by Mahalanobis, who in 1946 suggested repeated estimates of the statistic of interest with half the sample chosen at random.[7] He coined the name 'interpenetrating samples' for this method.
Quenouille invented this method with the intention of reducing the bias of the sample estimate. Tukey extended this method by assuming that if the replicates could be considered identically and independently distributed, then an estimate of the variance of the sample parameter could be made and that it would be approximately distributed as a t variate with n − 1 degrees of freedom (n being the sample size).
The basic idea behind the jackknife variance estimator lies in systematically recomputing the statistic estimate, leaving out one or more observations at a time from the sample set. From this new set of replicates of the statistic, an estimate for the bias and an estimate for the variance of the statistic can be calculated. The jackknife is equivalent to random (subsampling) leave-one-out cross-validation; it differs only in its goal.[8]
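A sketch of the delete-1 jackknife (my own minimal implementation): the variance estimate applies the usual (n − 1)/n scaling to the squared deviations of the leave-one-out replicates. For the sample mean, the resulting standard error coincides with the familiar s/√n.

```python
import numpy as np

def jackknife_se(data, statistic):
    """Delete-1 jackknife standard error of a statistic."""
    n = len(data)
    replicates = np.array([
        statistic(np.delete(data, i)) for i in range(n)   # leave one out
    ])
    var = (n - 1) / n * ((replicates - replicates.mean()) ** 2).sum()
    return np.sqrt(var)

rng = np.random.default_rng(3)
sample = rng.normal(10.0, 2.0, size=30)
print(jackknife_se(sample, np.mean))          # equals sample std / sqrt(n) here
```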
For many statistical parameters the jackknife estimate of variance tends asymptotically to the true value almost surely. In technical terms one says that the jackknife estimate is consistent. The jackknife is consistent for the sample means, sample variances, central and non-central t-statistics (with possibly non-normal populations), sample coefficient of variation, maximum likelihood estimators, least squares estimators, correlation coefficients and regression coefficients.
It is not consistent for the sample median. In the case of a unimodal variate, the ratio of the jackknife variance to the sample variance tends to be distributed as one half the square of a chi-square distribution with two degrees of freedom.
Instead of using the jackknife to estimate the variance, it may instead be applied to the log of the variance. This transformation may result in better estimates, particularly when the distribution of the variance itself may be non-normal.
The jackknife, like the original bootstrap, depends on the independence of the data. Extensions of the jackknife to allow for dependence in the data have been proposed. One such extension is the delete-a-group method used in association with Poisson sampling.
Both methods, the bootstrap and the jackknife, estimate the variability of a statistic from the variability of that statistic between subsamples, rather than from parametric assumptions. For the more general jackknife, the delete-m-observations jackknife, the bootstrap can be seen as a random approximation of it. Both yield similar numerical results, which is why each can be seen as an approximation to the other. Although there are huge theoretical differences in their mathematical insights, the main practical difference for statistics users is that the bootstrap gives different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. Because of this, the jackknife is popular when the estimates need to be verified several times before publishing (e.g., by official statistics agencies). On the other hand, when this verification feature is not crucial and it is of interest to have not just a number but an idea of its distribution, the bootstrap is preferred (e.g., in studies in physics, economics, and the biological sciences).
Whether to use the bootstrap or the jackknife may depend more on operational aspects than on statistical concerns of a survey. The jackknife, originally used for bias reduction, is more of a specialized method and only estimates the variance of the point estimator. This can be enough for basic statistical inference (e.g., hypothesis testing, confidence intervals). The bootstrap, on the other hand, first estimates the whole distribution (of the point estimator) and then computes the variance from it. While powerful and easy, this can become highly computationally intensive.
"The bootstrap can be applied to both variance and distribution estimation problems. However, the bootstrap variance estimator is not as good as the jackknife or thebalanced repeated replication(BRR) variance estimator in terms of the empirical results. Furthermore, the bootstrap variance estimator usually requires more computations than the jackknife or the BRR. Thus, the bootstrap is mainly recommended for distribution estimation."[attribution needed][9]
There is a special consideration with the jackknife, particularly with the delete-1 observation jackknife. It should only be used with smooth, differentiable statistics (e.g., totals, means, proportions, ratios, odds ratios, regression coefficients, etc.; not with medians or quantiles). This could become a practical disadvantage, and this disadvantage is usually the argument favoring bootstrapping over jackknifing. More general jackknives than the delete-1, such as the delete-m jackknife or the delete-all-but-2 Hodges–Lehmann estimator, overcome this problem for medians and quantiles by relaxing the smoothness requirements for consistent variance estimation.
Usually the jackknife is easier to apply to complex sampling schemes than the bootstrap. Complex sampling schemes may involve stratification, multiple stages (clustering), varying sampling weights (non-response adjustments, calibration, post-stratification) and unequal-probability sampling designs. Theoretical aspects of both the bootstrap and the jackknife can be found in Shao and Tu (1995),[10] whereas a basic introduction is given in Wolter (2007).[11] The bootstrap estimate of model prediction bias is more precise than jackknife estimates with linear models such as the linear discriminant function or multiple regression.[12] | https://en.wikipedia.org/wiki/Randomization_test |
Boolean grammars, introduced by Okhotin, are a class of formal grammars studied in formal language theory. They extend the basic type of grammars, the context-free grammars, with conjunction and negation operations. Besides these explicit operations, Boolean grammars allow implicit disjunction, represented by multiple rules for a single nonterminal symbol, which is the only logical connective expressible in context-free grammars. Conjunction and negation can be used, in particular, to specify the intersection and complement of languages. An intermediate class of grammars known as conjunctive grammars allows conjunction and disjunction, but not negation.
The rules of a Boolean grammar are of the form
$A \to \alpha_{1} \,\&\, \ldots \,\&\, \alpha_{m} \,\&\, \lnot\beta_{1} \,\&\, \ldots \,\&\, \lnot\beta_{n}$
where $A$ is a nonterminal, $m + n \geq 1$, and $\alpha_{1}, \ldots, \alpha_{m}, \beta_{1}, \ldots, \beta_{n}$ are strings formed of symbols in $\Sigma$ and $N$. Informally, such a rule asserts that every string $w$ over $\Sigma$ that satisfies each of the syntactical conditions represented by $\alpha_{1}, \ldots, \alpha_{m}$ and none of the syntactical conditions represented by $\beta_{1}, \ldots, \beta_{n}$ therefore satisfies the condition defined by $A$.
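For example, the non-context-free language $\{a^{n}b^{n}c^{n} \mid n \geq 0\}$ can be specified by conjunction alone, as an intersection of two context-free languages (a standard example in the literature on conjunctive grammars):

S → AB & DC
A → aA | ε
B → bBc | ε
D → aDb | ε
C → cC | ε

Here AB generates the strings in which the number of $b$'s equals the number of $c$'s, DC generates those in which the number of $a$'s equals the number of $b$'s, and the conjunction in the rule for S requires both conditions to hold simultaneously.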
There exist several formal definitions of the language generated by a Boolean grammar. They have one thing in common: if the grammar is represented as a system of language equations with union, intersection, complementation and concatenation, the languages generated by the grammar must be the solution of this system. The semantics differ in details: some define the languages using language equations, some draw upon ideas from the field of logic programming. However, these nontrivial issues of formal definition are mostly irrelevant for practical considerations, and one can construct grammars according to the given informal semantics. The practical properties of the model are similar to those of conjunctive grammars, while the descriptional capabilities are further improved. In particular, some practically useful properties inherited from context-free grammars, such as efficient parsing algorithms, are retained; see Okhotin (2010).
| https://en.wikipedia.org/wiki/Boolean_grammar |
Compiler Description Language (CDL) is a programming language based on affix grammars. It is very similar to Backus–Naur form (BNF) notation. It was designed for the development of compilers. It is very limited in its capabilities and control flow, and intentionally so. The benefits of these limitations are twofold.
On the one hand, they make possible the sophisticated data and control flow analysis used by the CDL2 optimizers, resulting in extremely efficient code. The other benefit is that they foster a highly verbose naming convention. This, in turn, leads to programs that are, to a great extent, self-documenting.
The language looks a bit like Prolog (this is not surprising, since both languages arose at about the same time out of work on affix grammars). However, as opposed to Prolog, control flow in CDL is deterministically based on success/failure: no other alternatives are tried when the current one succeeds. This idea is also used in parsing expression grammars.
CDL3 is the third version of the CDL language, significantly different from the previous two versions.
The original version, designed by Cornelis H. A. Koster at the University of Nijmegen, which emerged in 1971, had a rather unusual concept: it had no core. A typical programming language source is translated to machine instructions or canned sequences of those instructions. Those represent the core, the most basic abstractions that the given language supports. Such primitives can be the addition of numbers, copying variables to each other, and so on. CDL1 lacks such a core. It is the responsibility of the programmer to provide the primitive operations in a form that can then be turned into machine instructions by means of an assembler or a compiler for a traditional language. The CDL1 language itself has no concept of primitives, and no concept of data types apart from the machine word (an abstract unit of storage, not necessarily a real machine word as such). The evaluation rules are rather similar to the Backus–Naur form syntax descriptions; in fact, writing a parser for a language described in BNF is rather simple in CDL1.
Basically, the language consists of rules. A rule can either succeed or fail. A rule consists of alternatives, which are sequences of other rule invocations. A rule succeeds if any of its alternatives succeeds; these are tried in sequence. An alternative succeeds if all of its rule invocations succeed. The language provides operators to create evaluation loops without recursion (although this is not strictly necessary in CDL2, as the optimizer achieves the same effect) and some shortcuts to increase the efficiency of the otherwise recursive evaluation, but the basic concept is as above. Apart from the obvious application in context-free grammar parsing, CDL is also well suited to control applications, since a lot of control applications are essentially deeply nested if-then rules.
Each CDL1 rule, while being evaluated, can act on data, which is of unspecified type. Ideally, the data should not be changed unless the rule is successful (no side effects on failure). This causes problems because, although a rule may succeed, the rule invoking it might still fail, in which case the data change should not take effect. It is fairly easy (albeit memory-intensive) to ensure this behavior if all the data is dynamically allocated on a stack. However, it is rather hard when there is static data, which is often the case. The CDL2 compiler is able to flag possible violations, thanks to the requirement that the direction of parameters (input, output, input-output) and the type of rules (can fail: test, predicate; cannot fail: function, action; can have a side effect: predicate, action; cannot have a side effect: test, function) must be specified by the programmer.
As the rule evaluation is based on calling simpler and simpler rules, at the bottom there should be some primitive rules that do the actual work. That is where CDL1 is very surprising: it does not have those primitives. You have to provide those rules yourself. If you need addition in your program, you have to create a rule with two input parameters and one output parameter, and the output is set to be the sum of the two inputs by your code. The CDL compiler uses your code as strings (there are conventions on how to refer to the input and output variables) and simply emits it as needed. If you describe your adding rule using assembly, you will need an assembler to translate the CDL compiler's output into machine code. If you describe all the primitive rules (macros in CDL terminology) in Pascal or C, then you need a Pascal or C compiler to run after the CDL compiler. This lack of core primitives can be very painful when you have to write a snippet of code even for the simplest machine instruction, but on the other hand it gives you great flexibility in implementing esoteric, abstract primitives acting on exotic abstract objects (the 'machine word' in CDL is more like a 'unit of data storage', with no reference to the kind of data stored there). Additionally, large projects made use of carefully crafted libraries of primitives. These were then replicated for each target architecture and operating system, allowing the production of highly efficient code for all.
To get a feel for the language, here is a small code fragment adapted from the CDL2 manual:
The primitive operations are here defined in terms of Java (or C). This is not a complete program; we must define the Java array items elsewhere.
CDL2, which appeared in 1976, kept the principles of CDL1 but made the language suitable for large projects. It introduced modules, enforced data-change-only-on-success, and extended the capabilities of the language somewhat. The optimizers in the CDL2 compiler, and especially in the CDL2 Laboratory (an IDE for CDL2), were world-class, and not just for their time. One feature of the CDL2 Laboratory optimizer is almost unique: it can perform optimizations across compilation units, i.e., treating the entire program as a single compilation.
CDL3 is a more recent language. It gave up the open-ended feature of the previous CDL versions, and it provides primitives for basic arithmetic and storage access. The extremely puritan syntax of the earlier CDL versions (the numbers of keywords and symbols both run in the single digits) has also been relaxed. Some basic concepts are now expressed in syntax rather than explicit semantics. In addition, data types have been introduced to the language.
Notable programs written in CDL2 include the commercial mbp Cobol (a Cobol compiler for the PC) and the MProlog system, an industrial-strength Prolog implementation that ran on numerous architectures (IBM mainframe, VAX, PDP-11, Intel 8086, etc.) and operating systems (DOS/OS/CMS/BS2000, VMS/Unix, DOS/Windows/OS2). The latter, in particular, is testimony to CDL2's portability.
While most programs written with CDL have been compilers, there is at least one commercial GUI application that was developed and maintained in CDL. This application was a dental image acquisition application now owned by DEXIS. A dental office management system was also once developed in CDL.
The software for the Mephisto III chess computer was written with CDL2.[1] | https://en.wikipedia.org/wiki/Compiler_Description_Language |
A formal grammar is a set of symbols and the production rules for rewriting some of them into every possible string of a formal language over an alphabet. A grammar does not describe the meaning of the strings, only their form.
In applied mathematics, formal language theory is the discipline that studies formal grammars and languages. Its applications are found in theoretical computer science, theoretical linguistics, formal semantics, mathematical logic, and other areas.
A formal grammar is a set of rules for rewriting strings, along with a "start symbol" from which rewriting starts. Therefore, a grammar is usually thought of as a language generator. However, it can also sometimes be used as the basis for a "recognizer", a function in computing that determines whether a given string belongs to the language or is grammatically incorrect. To describe such recognizers, formal language theory uses separate formalisms, known as automata theory. One of the interesting results of automata theory is that it is not possible to design a recognizer for certain formal languages.[1] Parsing is the process of recognizing an utterance (a string in natural languages) by breaking it down to a set of symbols and analyzing each one against the grammar of the language. Most languages have the meanings of their utterances structured according to their syntax, a practice known as compositional semantics. As a result, the first step to describing the meaning of an utterance in language is to break it down part by part and look at its analyzed form (known as its parse tree in computer science, and as its deep structure in generative grammar).
A grammar mainly consists of a set of production rules, rewrite rules for transforming strings. Each rule specifies a replacement of a particular string (its left-hand side) with another (its right-hand side). A rule can be applied to each string that contains its left-hand side, and produces a string in which an occurrence of that left-hand side has been replaced with its right-hand side.
Unlike a semi-Thue system, which is wholly defined by these rules, a grammar further distinguishes between two kinds of symbols: nonterminal and terminal symbols; each left-hand side must contain at least one nonterminal symbol. It also distinguishes a special nonterminal symbol, called the start symbol.
The language generated by the grammar is defined to be the set of all strings without any nonterminal symbols that can be generated from the string consisting of a single start symbol by (possibly repeated) application of its rules in whatever way possible.
If there are essentially different ways of generating the same single string, the grammar is said to be ambiguous.
In the following examples, the terminal symbols are a and b, and the start symbol is S.
Suppose we have the following production rules:
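1. S → aSb
2. S → ba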
Then we start with S, and can choose a rule to apply to it. If we choose rule 1, we obtain the string aSb. If we then choose rule 1 again, we replace S with aSb and obtain the string aaSbb. If we now choose rule 2, we replace S with ba and obtain the string aababb, and are done. We can write this series of choices more briefly, using symbols: $S \Rightarrow aSb \Rightarrow aaSbb \Rightarrow aababb$.
The language of the grammar is the infinite set $\{a^{n}bab^{n} \mid n \geq 0\} = \{ba, abab, aababb, aaababbb, \dotsc\}$, where $a^{k}$ is $a$ repeated $k$ times (and $n$ in particular represents the number of times production rule 1 has been applied). This grammar is context-free (only single nonterminals appear as left-hand sides) and unambiguous.
Suppose the rules are these instead:
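1. S → a
2. S → SS
3. aSa → b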
This grammar is not context-free due to rule 3, and it is ambiguous due to the multiple ways in which rule 2 can be used to generate sequences of $S$s.
However, the language it generates is simply the set of all nonempty strings consisting of $a$s and/or $b$s.
This is easy to see: to generate a $b$ from an $S$, use rule 2 twice to generate $SSS$, then rule 1 twice and rule 3 once to produce $b$. This means we can generate arbitrary nonempty sequences of $S$s and then replace each of them with $a$ or $b$ as we please.
That same language can alternatively be generated by a context-free, nonambiguous grammar; for instance, the regular grammar with rules
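S → aS
S → bS
S → a
S → b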
In the classic formalization of generative grammars first proposed by Noam Chomsky in the 1950s,[2][3] a grammar G consists of the following components:
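- a finite set N of nonterminal symbols;
- a finite set Σ of terminal symbols that is disjoint from N;
- a finite set P of production rules, each mapping a string over Σ ∪ N that contains at least one nonterminal to another string over Σ ∪ N;[4]
- a distinguished symbol S ∈ N that is the start symbol.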
A grammar is formally defined as the tuple $(N, \Sigma, P, S)$. Such a formal grammar is often called a rewriting system or a phrase structure grammar in the literature.[5][6]
The operation of a grammar can be defined in terms of relations on strings:
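- The grammar $G$ induces a one-step derivation relation $\Rightarrow_{G}$ on strings over $\Sigma \cup N$: $x \Rightarrow_{G} y$ holds whenever there exist strings $u, v$ and a production $p \to q$ in $P$ such that $x = upv$ and $y = uqv$.
- $\Rightarrow_{G}^{*}$, read "derives in zero or more steps", is the reflexive transitive closure of $\Rightarrow_{G}$.
- The language of the grammar is $L(G) = \{w \in \Sigma^{*} \mid S \Rightarrow_{G}^{*} w\}$, the set of all strings of terminal symbols derivable from the start symbol.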
The grammar $G = (N, \Sigma, P, S)$ is effectively the semi-Thue system $(N \cup \Sigma, P)$, rewriting strings in exactly the same way; the only difference is that we distinguish specific nonterminal symbols, which must be replaced in rewrite rules, and are only interested in rewritings from the designated start symbol $S$ to strings without nonterminal symbols.
For these examples, formal languages are specified using set-builder notation.
Consider the grammar $G$ where $N = \{S, B\}$, $\Sigma = \{a, b, c\}$, $S$ is the start symbol, and $P$ consists of the following production rules:
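1. S → aBSc
2. S → abc
3. Ba → aB
4. Bb → bb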
This grammar defines the language $L(G) = \{a^{n}b^{n}c^{n} \mid n \geq 1\}$, where $a^{n}$ denotes a string of $n$ consecutive $a$'s. Thus, the language is the set of strings that consist of 1 or more $a$'s, followed by the same number of $b$'s, followed by the same number of $c$'s.
Some examples of the derivation of strings in $L(G)$ are:
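- S ⇒ abc (rule 2)
- S ⇒ aBSc ⇒ aBabcc ⇒ aaBbcc ⇒ aabbcc (rules 1, 2, 3, 4)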
When Noam Chomsky first formalized generative grammars in 1956,[2] he classified them into types now known as the Chomsky hierarchy. The difference between these types is that they have increasingly strict production rules and can therefore express fewer formal languages. Two important types are context-free grammars (Type 2) and regular grammars (Type 3). The languages that can be described with such a grammar are called context-free languages and regular languages, respectively. Although much less powerful than unrestricted grammars (Type 0), which can in fact express any language that can be accepted by a Turing machine, these two restricted types of grammars are most often used because parsers for them can be efficiently implemented.[8] For example, all regular languages can be recognized by a finite-state machine, and for useful subsets of context-free grammars there are well-known algorithms to generate efficient LL parsers and LR parsers to recognize the corresponding languages those grammars generate.
A context-free grammar is a grammar in which the left-hand side of each production rule consists of only a single nonterminal symbol. This restriction is non-trivial; not all languages can be generated by context-free grammars. Those that can are called context-free languages.
The language $L(G) = \{a^{n}b^{n}c^{n} \mid n \geq 1\}$ defined above is not a context-free language, and this can be strictly proven using the pumping lemma for context-free languages, but for example the language $\{a^{n}b^{n} \mid n \geq 1\}$ (at least 1 $a$ followed by the same number of $b$'s) is context-free, as it can be defined by the grammar $G_{2}$ with $N = \{S\}$, $\Sigma = \{a, b\}$, $S$ the start symbol, and the following production rules:
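1. S → aSb
2. S → ab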
A context-free language can be recognized in $O(n^{3})$ time (see Big O notation) by an algorithm such as Earley's recogniser. That is, for every context-free language, a machine can be built that takes a string as input and determines in $O(n^{3})$ time whether the string is a member of the language, where $n$ is the length of the string.[9] Deterministic context-free languages are a subset of the context-free languages that can be recognized in linear time.[10] There exist various algorithms that target either this set of languages or some subset of it.
In regular grammars, the left-hand side is again only a single nonterminal symbol, but now the right-hand side is also restricted. The right side may be the empty string, or a single terminal symbol, or a single terminal symbol followed by a nonterminal symbol, but nothing else. (Sometimes a broader definition is used: one can allow longer strings of terminals or single nonterminals without anything else, making languages easier to denote while still defining the same class of languages.)
The language $\{a^{n}b^{n} \mid n \geq 1\}$ defined above is not regular, but the language $\{a^{n}b^{m} \mid m, n \geq 1\}$ (at least 1 $a$ followed by at least 1 $b$, where the numbers may be different) is, as it can be defined by the grammar $G_{3}$ with $N = \{S, A, B\}$, $\Sigma = \{a, b\}$, $S$ the start symbol, and the following production rules:
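1. S → aA
2. A → aA
3. A → bB
4. B → bB
5. B → ε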
All languages generated by a regular grammar can be recognized in $O(n)$ time by a finite-state machine. Although in practice regular grammars are commonly expressed using regular expressions, some forms of regular expression used in practice do not strictly generate the regular languages and do not show linear recognition performance due to those deviations.
Many extensions and variations on Chomsky's original hierarchy of formal grammars have been developed, both by linguists and by computer scientists, usually either in order to increase their expressive power or in order to make them easier to analyze or parse. Some forms of grammars developed include:
A recursive grammar is a grammar that contains production rules that are recursive. For example, a grammar for a context-free language is left-recursive if there exists a non-terminal symbol A that can be put through the production rules to produce a string with A as the leftmost symbol.[15] An example of recursive grammar is a clause within a sentence separated by two commas.[16] All types of grammars in the Chomsky hierarchy can be recursive.
Though there is a tremendous body of literature on parsing algorithms, most of these algorithms assume that the language to be parsed is initially described by means of a generative formal grammar, and that the goal is to transform this generative grammar into a working parser. Strictly speaking, a generative grammar does not in any way correspond to the algorithm used to parse a language, and various algorithms have different restrictions on the form of production rules that are considered well-formed.
An alternative approach is to formalize the language in terms of an analytic grammar in the first place, which more directly corresponds to the structure and semantics of a parser for the language. Examples of analytic grammar formalisms include the following: | https://en.wikipedia.org/wiki/Formal_grammar |
Top-Down Parsing Language (TDPL) is a type of analytic formal grammar developed by Alexander Birman in the early 1970s[1][2][3] in order to study formally the behavior of a common class of practical top-down parsers that support a limited form of backtracking. Birman originally named his formalism the TMG Schema (TS), after TMG, an early parser generator, but it was later given the name TDPL by Aho and Ullman in their classic anthology The Theory of Parsing, Translation and Compiling.[4]
Formally, a TDPL grammar G is a quadruple consisting of the following components:
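- a finite set N of nonterminal symbols;
- a finite set Σ of terminal symbols that is disjoint from N;
- a distinguished start symbol S in N; and
- a finite set of rules, exactly one for each nonterminal, each taking one of the forms described below.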
A TDPL grammar can be viewed as an extremely minimalistic formal representation of a recursive descent parser, in which each of the nonterminals schematically represents a parsing function. Each of these nonterminal-functions takes as its input argument a string to be recognized, and yields one of two possible outcomes:
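- success, in which case the function consumes (matches) some prefix of the input string, possibly empty; or
- failure, in which case it consumes nothing.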
Note that a nonterminal-function may succeed without actually consuming any input, and this is considered an outcome distinct from failure.
A nonterminal A defined by a rule of the form A → ε always succeeds without consuming any input, regardless of the input string provided. Conversely, a rule of the form A → f always fails regardless of input. A rule of the form A → a succeeds if the next character in the input string is the terminal a, in which case the nonterminal succeeds and consumes that one terminal; if the next input character does not match (or there is no next character), then the nonterminal fails.
A nonterminal A defined by a rule of the form A → BC/D first recursively invokes nonterminal B, and if B succeeds, invokes C on the remainder of the input string left unconsumed by B. If both B and C succeed, then A in turn succeeds and consumes the same total number of input characters that B and C together did. If either B or C fails, however, then A backtracks to the original point in the input string where it was first invoked, and then invokes D on that original input string, returning whatever result D produces.
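These semantics are easily captured in code. The following Python sketch (an illustration of the semantics just described, not Birman's original formulation) encodes a grammar as a dictionary from nonterminals to rules; parse returns the position reached on success, or None on failure:

```python
def parse(grammar, a, s, i=0):
    """Apply nonterminal a of a TDPL grammar to string s at position i."""
    rule = grammar[a]
    if rule == ("empty",):                 # A -> epsilon: succeed, consume nothing
        return i
    if rule == ("fail",):                  # A -> f: always fail
        return None
    if rule[0] == "term":                  # A -> a: match one terminal symbol
        return i + 1 if i < len(s) and s[i] == rule[1] else None
    _, b, c, d = rule                      # A -> B C / D
    j = parse(grammar, b, s, i)
    if j is not None:
        k = parse(grammar, c, s, j)
        if k is not None:
            return k
    return parse(grammar, d, s, i)         # backtrack to i and try D instead

# Sequences of a's and b's: S -> AS/E, A -> XE/B, X -> a, B -> b, E -> epsilon
g = {"S": ("seq", "A", "S", "E"), "A": ("seq", "X", "E", "B"),
     "X": ("term", "a"), "B": ("term", "b"), "E": ("empty",)}
print(parse(g, "S", "abba"))   # -> 4: the entire input was recognized
```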
The following TDPL grammar describes the regular language consisting of an arbitrary-length sequence of a's and b's:
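One way to write such a grammar within the four rule forms above, using E for the empty string and auxiliary nonterminals X and B for the two terminals, is:

S → A S / E
A → X E / B
X → a
B → b
E → ε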
The following grammar describes the context-free Dyck language consisting of arbitrary-length strings of matched braces, such as '{}', '{{}{{}}}', etc.:
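One possible formulation is sketched below; F is a nonterminal that always fails, so that T, which recognizes a single brace pair '{ S }', fails cleanly when no pair is present:

S → T S / E
T → L U / F
U → S R / F
L → {
R → }
E → ε
F → f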
The above examples can be represented equivalently but much more succinctly in parsing expression grammar notation as S ← (a/b)* and S ← ({S})*, respectively.
A slight variation of TDPL, known as Generalized TDPL or GTDPL, greatly increases the apparent expressiveness of TDPL while retaining the same minimalist approach (though the two are actually equivalent). In GTDPL, instead of TDPL's recursive rule form A → BC/D, the rule form A → B[C,D] is used. This rule is interpreted as follows: when nonterminal A is invoked on some input string, it first recursively invokes B. If B succeeds, then A subsequently invokes C on the remainder of the input left unconsumed by B, and returns the result of C to the original caller. If B fails, on the other hand, then A invokes D on the original input string, and passes the result back to the caller.
The important difference between this rule form and the A → BC/D rule form used in TDPL is that C and D are never both invoked in the same call to A: that is, the GTDPL rule acts more like a "pure" if/then/else construct, using B as the condition.
In GTDPL it is straightforward to express interesting non-context-free languages, such as the classic example $\{a^{n}b^{n}c^{n}\}$.
A GTDPL grammar can be reduced to an equivalent TDPL grammar that recognizes the same language, although the process is not straightforward and may greatly increase the number of rules required.[5] Also, both TDPL and GTDPL can be viewed as very restricted forms of parsing expression grammars, all of which represent the same class of grammars.[5] | https://en.wikipedia.org/wiki/Top-down_parsing_language |
This is a list of notable lexer generators and parser generators for various language classes.
Regular languages are a category of languages (sometimes termed Chomsky Type 3) which can be matched by a state machine (more specifically, by a deterministic finite automaton or a nondeterministic finite automaton) constructed from a regular expression. In particular, a regular language can match constructs like "A follows B", "either A or B", "A, followed by zero or more instances of B", but cannot match constructs which require consistency between non-adjacent elements, such as "some instances of A followed by the same number of instances of B", and also cannot express the concept of recursive "nesting" ("every A is eventually followed by a matching B"). A classic example of a problem which a regular grammar cannot handle is the question of whether a given string contains correctly nested parentheses. (This is typically handled by a Chomsky Type 2 grammar, also termed a context-free grammar.)
Context-free languages are a category of languages (sometimes termed Chomsky Type 2) which can be matched by a sequence of replacement rules, each of which essentially maps each non-terminal element to a sequence of terminal elements and/or other nonterminal elements. Grammars of this type can match anything that can be matched by a regular grammar, and furthermore, can handle the concept of recursive "nesting" ("every A is eventually followed by a matching B"), such as the question of whether a given string contains correctly nested parentheses. The rules of context-free grammars are purely local, however, and therefore cannot handle questions that require non-local analysis, such as "Does a declaration exist for every variable that is used in a function?". To do so technically would require a more sophisticated grammar, like a Chomsky Type 1 grammar, also termed a context-sensitive grammar. However, parser generators for context-free grammars often support the ability for user-written code to introduce limited amounts of context-sensitivity. (For example, upon encountering a variable declaration, user-written code could save the name and type of the variable into an external data structure, so that these could be checked against later variable references detected by the parser.)
The deterministic context-free languages are a proper subset of the context-free languages which can be efficiently parsed by deterministic pushdown automata.
This table compares parser generators with parsing expression grammars and deterministic Boolean grammars.
This table compares parser generator languages with a general context-free grammar, a conjunctive grammar, or a Boolean grammar.
This table compares parser generators with context-sensitive grammars. | https://en.wikipedia.org/wiki/Comparison_of_parser_generators |
In computer programming, a parser combinator is a higher-order function that accepts several parsers as input and returns a new parser as its output. In this context, a parser is a function accepting strings as input and returning some structure as output, typically a parse tree or a set of indices representing locations in the string where parsing stopped successfully. Parser combinators enable a recursive descent parsing strategy that facilitates modular piecewise construction and testing. This parsing technique is called combinatory parsing.
Parsers using combinators have been used extensively in the prototyping of compilers and processors for domain-specific languages such as natural-language user interfaces to databases, where complex and varied semantic actions are closely integrated with syntactic processing. In 1989, Richard Frost and John Launchbury demonstrated[1] the use of parser combinators to construct natural-language interpreters. Graham Hutton also used higher-order functions for basic parsing in 1992[2] and monadic parsing in 1996.[3] S. D. Swierstra also exhibited the practical aspects of parser combinators in 2001.[4] In 2008, Frost, Hafiz and Callaghan[5] described a set of parser combinators in the functional programming language Haskell that solve the long-standing problem of accommodating left recursion, and work as a complete top-down parsing tool in polynomial time and space.
In any programming language that has first-class functions, parser combinators can be used to combine basic parsers to construct parsers for more complex rules. For example, a production rule of a context-free grammar (CFG) may have one or more alternatives, and each alternative may consist of a sequence of non-terminal(s) and/or terminal(s), or the alternative may consist of a single non-terminal or terminal or the empty string. If a simple parser is available for each of these alternatives, a parser combinator can be used to combine each of these parsers, returning a new parser which can recognise any or all of the alternatives.
In languages that support operator overloading, a parser combinator can take the form of an infix operator, used to glue different parsers to form a complete rule. Parser combinators thereby enable parsers to be defined in an embedded style, in code which is similar in structure to the rules of the formal grammar. As such, implementations can be thought of as executable specifications with all the associated advantages, such as readability.
To keep the discussion relatively straightforward, we discuss parser combinators in terms of recognizers only. If the input string is of length #input and its members are accessed through an index j, a recognizer is a parser which returns, as output, a set of indices representing positions at which the parser successfully finished recognizing a sequence of tokens that began at index j. An empty result set indicates that the recognizer failed to recognize any sequence beginning at index j.
Given two recognizers p and q, we can define two major parser combinators, one for matching alternative rules and one for sequencing rules:
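- The alternative combinator, written p <+> q, applies both p and q to the same input position j and returns the union of the two result sets, thereby recognizing whatever either p or q recognizes.
- The sequence combinator, written p <*> q, first applies p at position j, then applies q at every index in p's result set, and returns the union of the result sets produced by q.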
There may be multiple distinct ways to parse a string while finishing at the same index, indicating an ambiguous grammar. Simple recognizers do not acknowledge these ambiguities; each possible finishing index is listed only once in the result set. For a more complete set of results, a more complicated object such as a parse tree must be returned.
Consider a highly ambiguous context-free grammar, s ::= 'x' s s | ε. Using the combinators defined earlier, we can modularly define executable notations of this grammar in a modern functional programming language (e.g., Haskell) as s = term 'x' <*> s <*> s <+> empty. When the recognizer s is applied at index 2 of the input sequence x x x x x, it returns the result set {2, 3, 4, 5}, indicating that there were matches starting at index 2 and finishing at any index between 2 and 5 inclusive.
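Although the executable notation above is Haskell, the same recognizers can be sketched in Python. The following is an illustration of the scheme just described, not the API of any particular library:

```python
def term(c):
    """Recognizer for the single terminal c."""
    return lambda inp, j: {j + 1} if j < len(inp) and inp[j] == c else set()

def empty(inp, j):
    """Recognizer for the empty string: succeeds without consuming input."""
    return {j}

def alt(p, q):
    """The <+> combinator: union of the two recognizers' result sets."""
    return lambda inp, j: p(inp, j) | q(inp, j)

def seq(p, q):
    """The <*> combinator: run q from every index where p succeeded."""
    return lambda inp, j: set().union(*[q(inp, k) for k in p(inp, j)])

# s ::= 'x' s s | empty; a named function lets s refer to itself recursively.
def s(inp, j):
    return alt(seq(term('x'), seq(s, s)), empty)(inp, j)

print(s("xxxxx", 2))   # -> {2, 3, 4, 5}, as described above
```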
Parser combinators, like all recursive descent parsers, are not limited to the context-free grammars, and thus perform no global search for ambiguities in the LL(k) parsing $\mathrm{First}_{k}$ and $\mathrm{Follow}_{k}$ sets. Thus, ambiguities are not known until run-time, if and until the input triggers them. In such cases, the recursive descent parser may default (perhaps unknown to the grammar designer) to one of the possible ambiguous paths, resulting in semantic confusion (aliasing) in the use of the language. This leads to bugs by users of ambiguous programming languages, which are not reported at compile-time, and which are introduced not by human error, but by ambiguous grammar. The only solution that eliminates these bugs is to remove the ambiguities and use a context-free grammar.
The simple implementations of parser combinators have some shortcomings, which are common in top-down parsing. Naïve combinatory parsing requires exponential time and space when parsing an ambiguous context-free grammar. In 1996, Frost and Szydlowski demonstrated how memoization can be used with parser combinators to reduce the time complexity to polynomial.[6] Later Frost used monads to construct the combinators for systematic and correct threading of the memo-table throughout the computation.[7]
Like any top-down recursive descent parsing, the conventional parser combinators (like the combinators described above) will not terminate while processing a left-recursive grammar (e.g. s ::= s <*> term 'x' | empty). A recognition algorithm that accommodates ambiguous grammars with direct left-recursive rules is described by Frost and Hafiz in 2006.[8] The algorithm curtails the otherwise ever-growing left-recursive parse by imposing depth restrictions. That algorithm was extended to a complete parsing algorithm to accommodate indirect as well as direct left-recursion in polynomial time, and to generate compact polynomial-size representations of the potentially exponential number of parse trees for highly ambiguous grammars by Frost, Hafiz and Callaghan in 2007.[9] This extended algorithm accommodates indirect left recursion by comparing its 'computed context' with 'current context'. The same authors also described their implementation of a set of parser combinators written in the Haskell language based on the same algorithm.[5][10] | https://en.wikipedia.org/wiki/Parser_combinator |
In computer science, a grammar is informally called a recursive grammar if it contains production rules that are recursive, meaning that expanding a non-terminal according to these rules can eventually lead to a string that includes the same non-terminal again. Otherwise it is called a non-recursive grammar.[1]
For example, a grammar for a context-free language is left recursive if there exists a non-terminal symbol A that can be put through the production rules to produce a string with A as the leftmost symbol.[2][3] All types of grammars in the Chomsky hierarchy can be recursive, and it is recursion that allows the production of infinite sets of words.[1]
A non-recursive grammar can produce only a finite language; and each finite language can be produced by a non-recursive grammar.[1] For example, a straight-line grammar produces just a single word.
A recursive context-free grammar that contains no useless rules necessarily produces an infinite language. This property forms the basis for an algorithm that can test efficiently whether a context-free grammar produces a finite or infinite language.[4]
| https://en.wikipedia.org/wiki/Non-recursive_grammar |
In computer science, augmented Backus–Naur form (ABNF) is a metalanguage based on Backus–Naur form (BNF), but consisting of its own syntax and derivation rules. The motive principle for ABNF is to describe a formal system of a language to be used as a bidirectional communications protocol. It is defined by Internet Standard 68 ("STD 68", type case sic), which as of December 2010 was RFC 5234, and it often serves as the definition language for IETF communication protocols.[1][2]
RFC 5234 supersedes RFC 4234, RFC 2234 and RFC 733.[3] RFC 7405 updates it, adding a syntax for specifying case-sensitive string literals.
An ABNF specification is a set of derivation rules, written as
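rule = definition ; comment CR LF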
where rule is a case-insensitive nonterminal, the definition consists of sequences of symbols that define the rule, the comment is for documentation, and the rule ends with a carriage return and line feed.
Rule names are case-insensitive: <rulename>, <Rulename>, <RULENAME>, and <rUlENamE> all refer to the same rule. Rule names consist of a letter followed by letters, numbers, and hyphens.
Angle brackets (<, >) are not required around rule names (as they are in BNF). However, they may be used to delimit a rule name when it is mentioned in prose, to mark it as a rule name.
Terminals are specified by one or more numeric characters.
Numeric characters may be specified as the percent sign %, followed by the base (b = binary, d = decimal, and x = hexadecimal), followed by the value, or a concatenation of values (indicated by .). For example, a carriage return is specified by %d13 in decimal or %x0D in hexadecimal. A carriage return followed by a line feed may be specified with concatenation as %d13.10.
Literal text is specified through the use of a string enclosed in quotation marks ("). These strings are case-insensitive, and the character set used is (US-)ASCII. Therefore, the string "abc" will match "abc", "Abc", "aBc", "abC", "ABc", "AbC", "aBC", and "ABC". RFC 7405 added a syntax for case-sensitive strings: %s"aBc" will only match "aBc". Prior to that, a case-sensitive string could only be specified by listing the individual characters: to match "aBc", the definition would be %d97.66.99. A string can also be explicitly specified as case-insensitive with a %i prefix.
White space is used to separate elements of a definition; for space to be recognized as a delimiter, it must be explicitly included. The explicit reference for a single whitespace character is WSP, and LWSP (linear white space) stands for zero or more whitespace characters, with newlines permitted. The LWSP definition in RFC 5234 is controversial[4] because at least one whitespace character is needed to form a delimiter between two fields.
Definitions are left-aligned. When multiple lines are required (for readability), continuation lines are indented by whitespace.
; comment
A semicolon (;) starts a comment that continues to the end of the line.
Rule1 Rule2
A rule may be defined by listing a sequence of rule names.
To match the string “aba”, the following rules could be used:
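fu = %x61 ; a
bar = %x62 ; b
mumble = fu bar fu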
Rule1 / Rule2
A rule may be defined by a list of alternative rules separated by a solidus (/).
To accept the rule fu or the rule bar, the following rule could be constructed:
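fubar = fu / bar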
Rule1 =/ Rule2
Additional alternatives may be added to a rule through the use of =/ between the rule name and the definition.
The rule
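fubar = fu
fubar =/ bar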
is therefore equivalent to
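fubar = fu / bar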
%c##-##
A range of numeric values may be specified through the use of a hyphen (-).
The rule
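OCTAL = %x30-37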
is equivalent to
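OCTAL = "0" / "1" / "2" / "3" / "4" / "5" / "6" / "7"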
(Rule1 Rule2)
Elements may be placed in parentheses to group rules in a definition.
To match "a b d" or "a c d", the following rule could be constructed:
To match “a b” or “c d”, the following rules could be constructed:
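rule = a b / c d
rule = (a b) / (c d)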
n*nRule
To indicate repetition of an element, the form <a>*<b>element is used. The optional <a> gives the minimal number of elements to be included (with the default of 0). The optional <b> gives the maximal number of elements to be included (with the default of infinity).
Use *element for zero or more elements, *1element for zero or one element, 1*element for one or more elements, and 2*3element for two or three elements, cf. regular expressions e*, e?, e+ and e{2,3}.
nRule
To indicate an explicit number of elements, the form <a>element is used; it is equivalent to <a>*<a>element.
Use 2DIGIT to get two numeric digits, and 3DIGIT to get three numeric digits. (DIGIT is defined below under "Core rules". Also see zip-code in the example below.)
[Rule]
To indicate an optional element, the following constructions are equivalent:
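[rule]
*1rule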
The following operators have the given precedence from tightest binding to loosest binding:
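1. Rule name, prose-val, terminal value
2. Comment
3. Value range
4. Repetition
5. Grouping, optional
6. Concatenation
7. Alternative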
Use of the alternative operator with concatenation may be confusing, and it is recommended that grouping be used to make explicit concatenation groups.
The core rules are defined in the ABNF standard.
Note that in the core rules diagram, the CHAR2 charset is inlined in char-val and CHAR3 is inlined in prose-val in the RFC spec. They are named here for clarity in the main syntax diagram.
A (U.S.) postal address may be specified in ABNF as follows:
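postal-address   = name-part street zip-part

name-part        = *(personal-part SP) last-name [SP suffix] CRLF
name-part        =/ personal-part CRLF

personal-part    = first-name / (initial ".")
first-name       = *ALPHA
initial          = ALPHA
last-name        = *ALPHA
suffix           = ("Jr." / "Sr." / 1*("I" / "V" / "X"))

street           = [apt SP] house-num SP street-name CRLF
apt              = 1*4DIGIT
house-num        = 1*8(DIGIT / ALPHA)
street-name      = 1*VCHAR

zip-part         = town-name "," SP state 1*2SP zip-code CRLF
town-name        = 1*(ALPHA / SP)
state            = 2ALPHA
zip-code         = 5DIGIT ["-" 4DIGIT]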
RFC 5234 adds a warning in conjunction with the definition of LWSP as follows:
Use of this linear-white-space rule permits lines containing only white space that are no longer legal in mail headers and have caused interoperability problems in other contexts. Do not use when defining mail headers and use with caution in other contexts. | https://en.wikipedia.org/wiki/Augmented_Backus%E2%80%93Naur_form |
A definite clause grammar (DCG) is a way of expressing grammar, either for natural or formal languages, in a logic programming language such as Prolog. It is closely related to the concept of attribute grammars/affix grammars.
DCGs are usually associated with Prolog, but similar languages such as Mercury also include DCGs. They are called definite clause grammars because they represent a grammar as a set of definite clauses in first-order logic.
The term DCG refers to the specific type of expression in Prolog and other similar languages; not all ways of expressing grammars using definite clauses are considered DCGs. However, all of the capabilities or properties of DCGs will be the same for any grammar that is represented with definite clauses in essentially the same way as in Prolog.
The definite clauses of a DCG can be considered a set of axioms where the validity of a sentence, and the fact that it has a certain parse tree, can be considered theorems that follow from these axioms.[1] This has the advantage of making recognition and parsing of expressions in a language a general matter of proving statements, such as statements in a logic programming language.
The history of DCGs is closely tied to the history of Prolog, and the history of Prolog revolves around several researchers in both Marseille, France, and Edinburgh, Scotland. According to Robert Kowalski, an early developer of Prolog, the first Prolog system was developed in 1972 by Alain Colmerauer and Philippe Roussel.[2] The first program written in the language was a large natural-language processing system. Fernando Pereira and David Warren at the University of Edinburgh were also involved in the early development of Prolog.
Colmerauer had previously worked on a language processing system called Q-systems that was used to translate between English and French.[3] In 1978, Colmerauer wrote a paper about a way of representing grammars called metamorphosis grammars, which were part of the early version of Prolog called Marseille Prolog. In this paper, he gave a formal description of metamorphosis grammars and some examples of programs that use them.
Fernando Pereira and David Warren, two other early architects of Prolog, coined the term "definite clause grammar" and created the notation for DCGs that is used in Prolog today. They gave credit for the idea to Colmerauer and Kowalski, and they note that DCGs are a special case of Colmerauer's metamorphosis grammars. They introduced the idea in an article called "Definite Clause Grammars for Language Analysis", where they describe DCGs as a "formalism ... in which grammars are expressed clauses of first-order predicate logic" that "constitute effective programs of the programming language Prolog".[4]
Pereira, Warren, and other pioneers of Prolog later wrote about several other aspects of DCGs. Pereira and Warren wrote an article called "Parsing as Deduction", describing things such as how the Earley deduction proof procedure is used for parsing.[5] Pereira also collaborated with Stuart M. Shieber on a book called "Prolog and Natural Language Analysis", which was intended as a general introduction to computational linguistics using logic programming.[6]
A basic example of DCGs helps to illustrate what they are and what they look like.
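Consider the following small grammar:

```prolog
sentence --> noun_phrase, verb_phrase.
noun_phrase --> det, noun.
verb_phrase --> verb, noun_phrase.
det --> [the].
det --> [a].
noun --> [cat].
noun --> [bat].
verb --> [eats].
```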
This generates sentences such as "the cat eats the bat" and "a bat eats the cat". One can generate all of the valid expressions in the language generated by this grammar at a Prolog interpreter by typing sentence(X,[]). Similarly, one can test whether a sentence is valid in the language by typing something like sentence([the,bat,eats,the,bat],[]).
DCG notation is just syntactic sugar for normal definite clauses in Prolog. For example, the previous example could be translated into the following:
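```prolog
sentence(A, Z) :- noun_phrase(A, B), verb_phrase(B, Z).
noun_phrase(A, Z) :- det(A, B), noun(B, Z).
verb_phrase(A, Z) :- verb(A, B), noun_phrase(B, Z).
det([the|X], X).
det([a|X], X).
noun([cat|X], X).
noun([bat|X], X).
verb([eats|X], X).
```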
The arguments to each functor, such as (A,B) and (B,Z), are difference lists; difference lists are a way of representing a prefix of a list as the difference between its two suffixes (the bigger including the smaller one). Using Prolog's notation for lists, a singleton list prefix P = [H] can be seen as the difference between [H|X] and X, and thus represented with the pair ([H|X],X), for instance.
Saying that P is the difference between A and B is the same as saying that append(P,B,A) holds. Or in the case of the previous example, append([H],X,[H|X]).
Difference lists are used to represent lists with DCGs for reasons of efficiency. It is much more efficient to concatenate list differences (prefixes), in the circumstances that they can be used, because the concatenation of (A,B) and (B,Z) is just (A,Z).[7]
Indeed, append(P,B,A), append(Q,Z,B) entails append(P,Q,S), append(S,Z,A). This is the same as saying that list concatenation is associative: (P ++ Q) ++ Z = P ++ (Q ++ Z).
In pure Prolog, normal DCG rules with no extra arguments on the functors, such as the previous example, can only express context-free grammars; there is only one argument on the left side of the production. However, context-sensitive grammars can also be expressed with DCGs, by providing extra arguments, such as in the following example:
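```prolog
s --> symbols(Sem, a), symbols(Sem, b), symbols(Sem, c).
symbols(end, _) --> [].
symbols(s(Sem), S) --> [S], symbols(Sem, S).
```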
This set of DCG rules describes the grammar which generates the language that consists of strings of the form $a^{n}b^{n}c^{n}$, by structurally representing $n$.[8]
Various linguistic features can also be represented fairly concisely with DCGs by providing extra arguments to the functors.[9] For example, consider the following set of DCG rules:
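```prolog
sentence --> pronoun(subject), verb_phrase.
verb_phrase --> verb, pronoun(object).
pronoun(subject) --> [he].
pronoun(subject) --> [she].
pronoun(object) --> [him].
pronoun(object) --> [her].
verb --> [likes].
```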
This grammar allows sentences like "he likes her" and "he likes him", but not "her likes he" and "him likes him".
The main practical use of a DCG is to parse sentences of the given grammar, i.e. to construct a parse tree. This can be done by providing "extra arguments" to the functors in the DCG, like in the following rules:
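```prolog
sentence(s(NP, VP)) --> noun_phrase(NP), verb_phrase(VP).
noun_phrase(np(D, N)) --> det(D), noun(N).
verb_phrase(vp(V, NP)) --> verb(V), noun_phrase(NP).
det(d(the)) --> [the].
det(d(a)) --> [a].
noun(n(bat)) --> [bat].
noun(n(cat)) --> [cat].
verb(v(eats)) --> [eats].
```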
One can now query the interpreter to yield a parse tree of any given sentence:
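```prolog
?- sentence(Parse_tree, [the, bat, eats, a, cat], []).
Parse_tree = s(np(d(the), n(bat)), vp(v(eats), np(d(a), n(cat))))
```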
DCGs can serve as convenient syntactic sugar to hide certain parameters in code in places other than parsing applications.
In the declaratively pure programming language Mercury, I/O must be represented by a pair of io.state arguments.
DCG notation can be used to make using I/O more convenient,[10] although state variable notation is usually preferred.[11] DCG notation is also used for parsing and similar tasks in Mercury, as it is in Prolog.
Since DCGs were introduced by Pereira and Warren, several extensions have been proposed. Pereira himself proposed an extension called extraposition grammars (XGs).[12] This formalism was intended in part to make it easier to express certain grammatical phenomena, such as left extraposition. Pereira states, "The difference between XG rules and DCG rules is then that the left-hand side of an XG rule may contain several symbols." This makes it easier to express rules for context-sensitive grammars.
Peter Van Roy extended DCGs to allow multiple accumulators.[13][14]
Another, more recent, extension was made by researchers at NEC Corporation, called Multi-Modal Definite Clause Grammars (MM-DCGs), in 1995. Their extensions were intended to allow the recognition and parsing of expressions that include non-textual parts, such as pictures.[15]
Another extension, called definite clause translation grammars (DCTGs), was described by Harvey Abramson in 1984.[16] DCTG notation looks very similar to DCG notation; the major difference is that one uses ::= instead of --> in the rules. It was devised to handle grammatical attributes conveniently.[17] The translation of DCTGs into normal Prolog clauses is like that of DCGs, but 3 arguments are added instead of 2. | https://en.wikipedia.org/wiki/Definite_clause_grammar |
META II is a domain-specific programming language for writing compilers. It was created in 1963–1964 by Dewey Val Schorre at the University of California, Los Angeles (UCLA). META II uses what Schorre called syntax equations. Its operation is simply explained as:
Each syntax equation is translated into a recursive subroutine which tests the input string for a particular phrase structure, and deletes it if found.[1]
META II programs are compiled into an interpreted byte code language. VALGOL and SMALGOL compilers illustrating its capabilities were written in the META II language.[1][2] VALGOL is a simple algebraic language designed for the purpose of illustrating META II. SMALGOL was a fairly large subset of ALGOL 60.
META II was first written in META I,[3] a hand-compiled version of META II. The history is unclear as to whether META I was a full implementation of META II or the subset of the META II language required to compile the full META II compiler.
In its documentation, META II is described as resembling Backus–Naur form (BNF), which today is explained as a production grammar. META II is an analytical grammar. In the TREE-META document, these languages were described as reductive grammars.
For example, in BNF, an arithmetic expression may be defined as:
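<expr> ::= <term> | <expr> + <term> | <expr> - <term>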
BNF rules are today production rules describing how constituent parts may be assembled to form valid language constructs. A parser does the opposite, taking language constructs apart. META II is a stack-based functional parser programming language that includes output directives. In META II, the order of testing is specified by the equation. Like other recursive descent parsers, META II would overflow its call stack if a rule were left-recursive; it instead provides a $ (zero or more) sequence operator. The expr parsing equation written in META II is a conditional expression evaluated left to right:
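expr = term $('+' term / '-' term);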
Above, the expr equation is defined by the expression to the right of the '='. Evaluating left to right from the '=', term is the first thing that must be tested; if term returns failure, expr fails. If a term was successfully recognized, we then enter the indefinite $ (zero or more) loop, where we first test for a '+'; if that fails, the alternative '-' is attempted; and finally, if a '-' was not recognized either, the loop terminates with expr returning success, having recognized a single term. If a '+' or a '-' was successful, term is called again, and if it succeeds, the loop repeats. The expr equation can also be expressed using nested grouping as:
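expr = term $(('+' / '-') term);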
The code production elements were left out to simplify the examples. Due to the limited character set of early computers, the character / was used as the alternative (or) operator. The $, or loop, operator is used to match zero or more of something.
The above can be expressed in English: an expr is a term followed by zero or more of (plus term or minus term). Schorre describes this as being an aid to efficiency, but, unlike a naive recursive descent compiler, it will also ensure that the associativity of arithmetic operations is correct.
With the ability to express a sequence with a loop or right ("tail") recursion, the order of evaluation can be controlled.
Syntax rules appear declarative, but are actually made imperative by their semantic specifications.
META II outputs assembly code for a stack machine. Evaluating it is like using a Reverse Polish notation (RPN) calculator.
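A fragment in the style of the paper's VALGOL example (simplified here; the equations in the original paper differ in detail) might read:

expr = term $('+' term .OUT('ADD') / '-' term .OUT('SUB'));
term = .ID .OUT('LD ' *) / .NUM .OUT('LDL' *);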
In the above, .ID and .NUM are built-in token recognizers. The * in the .OUT code production references the last token recognized. On recognizing a number with .NUM, .OUT('LDL' *) outputs the load-literal instruction followed by the number. An expression:
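a - 2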
will generate:
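LD a
LDL 2
SUB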
META II is the first documented version of a metacompiler,[notes 1] as it compiles to machine code for one of the earliest instances of a virtual machine.
"The paper itself is a wonderful gem which includes a number of excellent examples, including the bootstrapping of Meta II in itself (all this was done on an 8K (six-bit byte) 1401!)." – Alan Kay
The original paper is not freely available, but was reprinted in Dr. Dobb's Journal, April 1980. Transcribed source code has at various times been made available (possibly by the CP/M User Group). The paper included a listing of the description of META II; this could in principle be processed manually to yield an interpretable program in virtual machine opcodes, and if this ran and produced identical output, then the implementation was correct.
META II was basically a proof of concept: a base from which to work.
META II is not presented as a standard language, but as a point of departure from which a user may develop his own META "language".[1]
Many META "languages" followed. Schorre went to work for System Development Corporation, where he was a member of the Compiler for Writing and Implementing Compilers (CWIC) project. CWIC's SYNTAX language built on META II, adding a backtracking alternative operator, positive and negative look-ahead operators, and programmed token equations. The .OUT and .LABEL operations were removed, and stack transforming operations :<node> and !<number> were added. The GENERATOR language, based on LISP 2, processed the trees produced by the SYNTAX parsing language. To generate code, a call to a generator function was placed in a SYNTAX equation. These languages were developed by members of the L.A. ACM SIGPLAN sub-group on Syntax Directed Compilers. Schorre viewed the META II language in a general way:
The term META "language", with META in capital letters, is used to denote any compiler-writing language so developed.[1]
Schorre explains META II as a base from which other META "languages" may be developed. | https://en.wikipedia.org/wiki/Meta-II |
Syntax diagrams (or railroad diagrams) are a way to represent a context-free grammar. They represent a graphical alternative to Backus–Naur form, EBNF, Augmented Backus–Naur form, and other text-based grammars as metalanguages. Early books using syntax diagrams include the "Pascal User Manual" written by Niklaus Wirth[1] (diagrams start at page 47) and the Burroughs CANDE Manual.[2] In the compilation field, textual representations like BNF or its variants are usually preferred. BNF is text-based, and used by compiler writers and parser generators. Railroad diagrams are visual, and may be more readily understood by laypeople, sometimes incorporated into graphic design. The canonical source defining the JSON data interchange format provides yet another example of a popular modern usage of these diagrams.
The representation of a grammar is a set of syntax diagrams. Each diagram defines a "nonterminal" stage in a process. There is a main diagram which defines the language in the following way: to belong to the language, a word must describe a path in the main diagram.
Each diagram has an entry point and an end point. The diagram describes possible paths between these two points by going through other nonterminals and terminals. Historically, terminals have been represented by round boxes and nonterminals by rectangular boxes but there is no official standard.
We use arithmetic expressions as an example, in various grammar formats.
BNF:
EBNF:
ABNF:
ABNF also supports ranges, e.g. DIGIT = %x30-39, but this is not used here for consistency with the other examples.
Red (programming language) Parse dialect:
This format also supports ranges, e.g. digit: charset [#"0" - #"9"], but this is not used here for consistency with the other examples.
One possible syntax diagram for the example grammars is below. While the syntax of the text-based grammars differs, the syntax diagram for all of them can be the same, because a syntax diagram is itself a metalanguage.
| https://en.wikipedia.org/wiki/Syntax_diagram
Translational Backus–Naur Form (TBNF or Translational BNF) refers to Backus–Naur form, which is a formal grammar notation used to define the syntax of computer languages, such as Algol, Ada, C++, COBOL, Fortran, Java, Perl, Python, and many others. TBNF goes beyond BNF and extended BNF (EBNF) grammar notation because it not only defines the syntax of a language, but also defines the structure of the abstract syntax tree (AST) to be created in memory and the output intermediate code to be generated. Thus TBNF defines the complete translation process from input source code to intermediate code. Specification of the output intermediate code is optional, in which case the AST is still created automatically and its structure can still be defined in the grammar.
The TBNF concept was first published in April 2006 in a paper in SIGPLAN Notices, a publication of the ACM's Special Interest Group on Programming Languages.[1]
Here is a sample grammar specified in TBNF:
Given this input:
Running the translator generated from the above grammar would produce this output: | https://en.wikipedia.org/wiki/Translational_Backus%E2%80%93Naur_form |
In computer science, a Van Wijngaarden grammar (also vW-grammar or W-grammar[1]) is a formalism for defining formal languages. The name derives from the formalism invented by Adriaan van Wijngaarden[2] for the purpose of defining the ALGOL 68 programming language.
The resulting specification[3] remains its most notable application.
Van Wijngaarden grammars address the problem that context-free grammars cannot express agreement or reference, where two different parts of the sentence must agree with each other in some way. For example, the sentence "The birds was eating" is not Standard English because it fails to agree on number. A context-free grammar would parse "The birds was eating" and "The birds were eating" and "The bird was eating" in the same way. However, context-free grammars have the benefit of simplicity, whereas van Wijngaarden grammars are considered highly complex.[4]
W-grammars are two-level grammars: they are defined by a pair of grammars that operate on different levels:
The set of strings generated by a W-grammar is defined by a two-stage process:
The consistent substitution used in the first step is the same as substitution in predicate logic, and actually supports logic programming; it corresponds to unification in Prolog, as noted by Alain Colmerauer.
W-grammars are Turing complete;[5] hence, all decision problems regarding the languages they generate, such as membership and emptiness, are undecidable.
Curtailed variants, known as affix grammars, were developed and applied in compiler construction and to the description of natural languages.
Definite logic programs, that is, logic programs that make no use of negation, can be viewed as a subclass of W-grammars.[6]
In the 1950s, attempts started to apply computers to the recognition, interpretation and translation of natural languages, such as English and Russian. This requires a machine-readable description of the phrase structure of sentences, that can be used to parse and interpret them, and to generate them. Context-free grammars, a concept from structural linguistics, were adopted for this purpose; their rules can express how sentences are recursively built out of parts of speech, such as noun phrases and verb phrases, and ultimately, words, such as nouns, verbs, and pronouns.
This work influenced the design and implementation of programming languages, most notably ALGOL 60, which introduced a syntax description in Backus–Naur form.
However, context-free rules cannot express agreement or reference (anaphora), where two different parts of the sentence must agree with each other in some way.
These can be readily expressed in W-grammars. (See example below.)
Programming languages have the analogous notions of typing and scoping.
A compiler or interpreter for the language must recognize which uses of a variable belong together (refer to the same variable). This is typically subject to constraints such as:
W-grammars are based on the idea of providing the nonterminal symbols of context-free grammars with attributes (or affixes) that pass information between the nodes of the parse tree, used to constrain the syntax and to specify the semantics.
This idea was well known at the time; e.g. Donald Knuth visited the ALGOL 68 design committee while developing his own version of it, attribute grammars.[7]
By augmenting the syntax description with attributes, constraints like the above can be checked, ruling many invalid programs out at compile time.
As Van Wijngaarden wrote in his preface:[2]
My main objections were certain to me unnecessary restrictions and the definition of the syntax and semantics. Actually the syntax viewed in MR 75 produces a large number of programs, whereas I should prefer to have the subset of meaningful programs as large as possible, which requires a stricter syntax. [...] it soon became clear that some better tools than the Backus notation might be advantageous [...]. I developed a scheme [...] which enables the design of a language to carry much more information in the syntax than is normally carried.
Quite peculiar to W-grammars was their strict treatment of attributes as strings, defined by a context-free grammar, on which concatenation is the only possible operation; complex data structures and operations can be defined by pattern matching. (See example below.)
After their introduction in the 1968 ALGOL 68 "Final Report", W-grammars were widely considered too powerful and unconstrained to be practical.[citation needed]
This was partly a consequence of the way in which they had been applied; the 1973 ALGOL 68 "Revised Report" contains a much more readable grammar, without modifying the W-grammar formalism itself.
Meanwhile, it became clear that W-grammars, when used in their full generality, are indeed too powerful for such practical purposes as serving as the input for a parser generator.
They describe precisely all recursively enumerable languages,[8] which makes parsing impossible in general: it is an undecidable problem to decide whether a given string can be generated by a given W-grammar.
Hence, their use must be seriously constrained when used for automatic parsing or translation. Restricted and modified variants of W-grammars were developed to address this, e.g.
After the 1970s, interest in the approach waned; occasionally, new studies are published.[9]
In English, nouns, pronouns and verbs have attributes such as grammatical number, gender, and person, which must agree between subject, main verb, and pronouns referring to the subject:
are valid sentences; invalid are, for instance:
Here, agreement serves to stress that both pronouns (e.g. I and myself) refer to the same person.
A context-free grammar to generate all such sentences:
From <sentence>, we can generate all combinations:
A W-grammar to generate only the valid sentences:
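The grammars themselves are elided above, but the two-level mechanism can be sketched in a few lines of Python. In this illustration (an assumption-laden miniature, not the article's grammar), a metanotion NUMBER ranges over two metaproductions, and every hyperrule must receive the same value for each occurrence of NUMBER — van Wijngaarden's consistent substitution:

```python
from itertools import product

# Metanotions and their metaproductions (the "first level").
metanotions = {"NUMBER": ["singular", "plural"]}

# Hyperrules (the "second level"): templates over the metanotions.
hyperrules = [
    ("sentence", ["subject NUMBER", "predicate NUMBER"]),
    ("subject singular", ["the bird"]),
    ("subject plural", ["the birds"]),
    ("predicate singular", ["was eating"]),
    ("predicate plural", ["were eating"]),
]

def instantiate(lhs, rhs):
    """Yield the ordinary context-free rules a hyperrule stands for,
    substituting the SAME value for every occurrence of a metanotion."""
    used = [n for n in metanotions if n in " ".join([lhs] + rhs)]
    for values in product(*(metanotions[n] for n in used)):
        table = dict(zip(used, values))
        subst = lambda s: " ".join(table.get(w, w) for w in s.split())
        yield subst(lhs), [subst(p) for p in rhs]

for lhs, rhs in hyperrules:
    for rule in instantiate(lhs, rhs):
        print(rule)
```

The sentence hyperrule expands only to number-consistent rule pairs, so the resulting context-free grammar derives "the birds were eating" but has no derivation at all for "the birds was eating".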
A well-known non-context-free language is
A two-level grammar for this language is the metagrammar
together with grammar schema
The Revised Report on the Algorithmic Language Algol 60[10] defines a full context-free syntax for the language.
Assignments are defined as follows (section 4.2.1):
A <variable> can be (amongst other things) an <identifier>, which in turn is defined as:
Examples (section 4.2.2):
Expressions and assignments must be type checked: for instance,
The rules above distinguish between <arithmetic expression> and <Boolean expression>, but they cannot verify that the same variable always has the same type.
This (non-context-free) requirement can be expressed in a W-grammar by annotating the rules with attributes that record, for each variable used or assigned to, its name and type.
This record can then be carried along to all places in the grammar where types need to be matched, and implement type checking.
Similarly, it can be used to check initialization of variables before use, et cetera.
One may wonder how to create and manipulate such a data structure without explicit support in the formalism for data structures and operations on them. It can be done by using the metagrammar to define a string representation for the data structure and using pattern matching to define operations:
When compared to the original grammar, three new elements have been added:
The new hyperrules are ε-rules: they only generate the empty string.
The ALGOL 68 reports use a slightly different notation without <angled brackets>.
A simple example of the power of W-grammars is the clause
This allows BEGIN ... END and { } as block delimiters, while ruling out BEGIN ... } and { ... END.
One may wish to compare the grammar in the report with the Yacc parser for a subset of ALGOL 68 by Marc van Leeuwen.[11]
Anthony Fisher wrote yo-yo,[12] a parser for a large class of W-grammars, with example grammars for expressions, eva, sal and Pascal (the actual ISO 7185 standard for Pascal uses extended Backus–Naur form).
Dick Grune created a C program that would generate all possible productions of a W-grammar.[13]
The applications of Extended Affix Grammars (EAGs) mentioned above can effectively be regarded as applications of W-grammars, since EAGs are so close to W-grammars.[14]
W-grammars have also been proposed for the description of complex human actions in ergonomics.[citation needed]
A W-grammar description has also been supplied for Ada.[15] | https://en.wikipedia.org/wiki/Van_Wijngaarden_grammar
ALGOL 68 (short for Algorithmic Language 1968) is an imperative programming language member of the ALGOL family that was conceived as a successor to the ALGOL 60 language, designed with the goal of a much wider scope of application and more rigorously defined syntax and semantics.
The complexity of the language's definition, which runs to several hundred pages filled with non-standard terminology, made compiler implementation difficult and it was said it had "no implementations and no users". This was only partly true; ALGOL 68 did find use in several niche markets, notably in the United Kingdom where it was popular on International Computers Limited (ICL) machines, and in teaching roles. Outside these fields, use was relatively limited.
Nevertheless, the contributions of ALGOL 68 to the field of computer science have been deep, wide-ranging and enduring, although many of these contributions were only publicly identified when they had reappeared in subsequently developed programming languages. Many languages were developed specifically as a response to the perceived complexity of the language, the most notable being Pascal, or were reimplementations for specific roles, like Ada.
Many languages of the 1970s trace their design specifically to ALGOL 68, selecting some features while abandoning others that were considered too complex or out-of-scope for given roles. Among these is the language C, which was directly influenced by ALGOL 68, especially by its strong typing and structures. Most modern languages trace at least some of their syntax to either C or Pascal, and thus directly or indirectly to ALGOL 68.
ALGOL 68 features include expression-based syntax, user-declared types and structures/tagged-unions, a reference model of variables and reference parameters, string, array and matrix slicing, and concurrency.
ALGOL 68 was designed by the International Federation for Information Processing (IFIP) Working Group 2.1 on Algorithmic Languages and Calculi. On December 20, 1968, the language was formally adopted by the group, and then approved for publication by the General Assembly of IFIP.
ALGOL 68 was defined using a formalism, a two-level formal grammar, invented by Adriaan van Wijngaarden. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language technical standards are labelled semantics, and must be expressed in ambiguity-prone natural language prose, and then implemented in compilers as ad hoc code attached to the formal language parser.
ALGOL 68 was the first (and possibly one of the last) major language for which a full formal definition was made before it was implemented.
The main aims and principles of design of ALGOL 68:
ALGOL 68 has been criticized, most prominently by some members of its design committee such as C. A. R. Hoare and Edsger Dijkstra, for abandoning the simplicity of ALGOL 60, becoming a vehicle for complex or overly general ideas, and doing little to make the compiler writer's task easier, in contrast to deliberately simple contemporaries (and competitors) such as C, S-algol and Pascal.
In 1970, ALGOL 68-R became the first working compiler for ALGOL 68.
In the 1973 revision, certain features — such as proceduring, gommas[13] and formal bounds — were omitted.[14] Cf. The language of the unrevised report.r0
Though European defence agencies (in Britain, the Royal Signals and Radar Establishment (RSRE)) promoted the use of ALGOL 68 for its expected security advantages, the American side of the NATO alliance decided to develop a different project, the language Ada, making its use obligatory for US defense contracts.
ALGOL 68 also had a notable influence in the Soviet Union, details of which can be found in Andrey Terekhov's 2014 paper "ALGOL 68 and Its Impact on the USSR and Russian Programming"[15] and "Алгол 68 и его влияние на программирование в СССР и России".[16]
Steve Bourne, who was on the ALGOL 68 revision committee, took some of its ideas to his Bourne shell (and thereby, to descendant Unix shells such as Bash) and to C (and thereby to descendants such as C++).
The complete history of the project can be found in C. H. Lindsey's "A History of ALGOL 68".[17]
For a full-length treatment of the language, see "Programming ALGOL 68 Made Easy"[18] by Dr. Sian Mountbatten, or "Learning ALGOL 68 Genie"[19] by Marcel van der Veer, which includes the Revised Report.
ALGOL 68, as the name implies, is a follow-on to the ALGOL language that was first formalized in 1960. That same year the International Federation for Information Processing (IFIP) formed and started the Working Group on ALGOL, or WG2.1. This group released an updated ALGOL 60 specification in Rome in April 1962. At a follow-up meeting in March 1964, it was agreed that the group should begin work on two follow-on standards, ALGOL X, which would be a redefinition of the language with some additions, and ALGOL Y, which would have the ability to modify its own programs in the style of the language LISP.[20]
The first meeting of the ALGOL X group was held at Princeton University in May 1965. A report of the meeting noted two broadly supported themes, the introduction of strong typing and interest in Euler's concepts of 'trees' or 'lists' for handling collections.[21] Although intended as a "short-term solution to existing difficulties",[22] ALGOL X got as far as having a compiler made for it. This compiler was written by Douglas T. Ross of the Massachusetts Institute of Technology (MIT) with the Automated Engineering Design (AED-0) system, also termed ALGOL Extended for Design.[23][24]
At the second meeting in October in France, three formal proposals were presented: Niklaus Wirth's ALGOL W along with comments about record structures by C.A.R. (Tony) Hoare, a similar language by Gerhard Seegmüller, and a paper by Adriaan van Wijngaarden on "Orthogonal design and description of a formal language". The latter, written in almost indecipherable "W-Grammar", proved to be a decisive shift in the evolution of the language. The meeting closed with an agreement that van Wijngaarden would re-write the Wirth/Hoare submission using his W-Grammar.[21]
This seemingly simple task ultimately proved more difficult than expected, and the follow-up meeting had to be delayed six months. When it met in April 1966 in Kootwijk, van Wijngaarden's draft remained incomplete and Wirth and Hoare presented a version using more traditional descriptions. It was generally agreed that their paper was "the right language in the wrong formalism".[25] As these approaches were explored, it became clear there was a difference in the way parameters were described that would have real-world effects, and while Wirth and Hoare protested that further delays might become endless, the committee decided to wait for van Wijngaarden's version. Wirth then implemented their current definition as ALGOL W.[26]
At the next meeting in Warsaw in October 1966,[27] there was an initial report from the I/O Subcommittee, who had met at the Oak Ridge National Laboratory and the University of Illinois but had not yet made much progress. The two proposals from the previous meeting were again explored, and this time a new debate emerged about the use of pointers; ALGOL W used them only to refer to records, while van Wijngaarden's version could point to any object. To add confusion, John McCarthy presented a new proposal for operator overloading and the ability to string together 'and' and 'or' constructs, and Klaus Samelson wanted to allow anonymous functions. In the resulting confusion, there was some discussion of abandoning the entire effort.[26] The confusion continued through what was supposed to be the ALGOL Y meeting in Zandvoort in May 1967.[21]
A draft report was finally published in February 1968. This was met by "shock, horror and dissent",[21] mostly due to the hundreds of pages of unreadable grammar and odd terminology. Charles H. Lindsey attempted to figure out what "language was hidden inside of it",[28] a process that took six man-weeks of effort. The resulting paper, "ALGOL 68 with fewer tears",[29] was widely circulated. At a wider information processing meeting in Zürich in May 1968, attendees complained that the language was being forced upon them and that IFIP was "the true villain of this unreasonable situation", as the meetings were mostly closed and there was no formal feedback mechanism. Wirth and Peter Naur formally resigned their authorship positions in WG2.1 at that time.[28]
The next WG2.1 meeting took place in Tirrenia in June 1968. It was supposed to discuss the release of compilers and other issues, but instead devolved into a discussion on the language. Van Wijngaarden responded by saying (or threatening) that he would release only one more version of the report. By this point Naur, Hoare, and Wirth had left the effort, and several more were threatening to do so.[30] Several more meetings followed: North Berwick in August 1968, and Munich in December, which produced the release of the official Report in January 1969 but also resulted in a contentious Minority Report being written. Finally, at Banff, Alberta in September 1969, the project was generally considered complete and the discussion was primarily on errata and a greatly expanded Introduction to the Report.[31]
The effort took five years, burned out many of the greatest names in computer science, and on several occasions became deadlocked over issues both in the definition and the group as a whole. Hoare released a "Critique of ALGOL 68" almost immediately,[32] which has been widely referenced in many works. Wirth went on to further develop the ALGOL W concept and released this as Pascal in 1970.
The first implementation of the standard, based on the late-1968 draft Report, was introduced by the Royal Radar Establishment in the UK as ALGOL 68-R in July 1970. This was, however, a subset of the full language, and Barry Mailloux, the final editor of the Report, joked that "It is a question of morality. We have a Bible and you are sinning!"[33] This version nevertheless became very popular on the ICL machines, and became a widely-used language in military coding, especially in the UK.[34]
Among the changes in 68-R was the requirement for all variables to be declared before their first use. This had a significant advantage that it allowed the compiler to be one-pass, as space for the variables in the activation record was set aside before it was used. However, this change also had the side-effect of demanding the PROCs be declared twice, once as a declaration of the types, and then again as the body of code. Another change was to eliminate the assumed VOID mode, an expression that returns no value (named a statement in other languages), demanding the word VOID be added where it would have been assumed. Further, 68-R eliminated the explicit parallel processing commands based on PAR.[33]
The first full implementation of the language was introduced in 1974 by CDC Netherlands for the Control Data mainframe series. This saw limited use, mostly for teaching in Germany and the Netherlands.[34]
A version similar to 68-R was introduced from Carnegie Mellon University in 1976 as 68S, and was again a one-pass compiler based on various simplifications of the original and intended for use on smaller machines like the DEC PDP-11. It too was used mostly for teaching purposes.[34]
A version for IBM mainframes did not become available until 1978, when one was released from Cambridge University. This was "nearly complete". Lindsey released a version for small machines including the IBM PC in 1984.[34]
Three open source Algol 68 implementations are known:[35]
"Van Wijngaarden once characterized the four authors, somewhat tongue-in-cheek, as: Koster:transputter, Peck: syntaxer, Mailloux: implementer, Van Wijngaarden: party ideologist." – Koster.
1968: On 20 December 1968, the "Final Report" (MR 101) was adopted by the Working Group, then subsequently approved by the General Assembly of UNESCO's IFIP for publication. Translations of the standard were made for Russian, German, French and Bulgarian, and then later Japanese and Chinese.[50] The standard was also made available in Braille.
1984: TC 97 considered ALGOL 68 for standardisation as "New Work Item" TC97/N1642.[2][3] West Germany, Belgium, the Netherlands, the USSR and Czechoslovakia were willing to participate in preparing the standard, but the USSR and Czechoslovakia "were not the right kinds of member of the right ISO committees"[4] and Algol 68's ISO standardisation stalled.[5]
1988: Subsequently, ALGOL 68 became one of the GOST standards in Russia.
The standard language contains about sixty reserved words, typically bolded in print, and some with "brief symbol" equivalents:
The basic language construct is the unit. A unit may be a formula, an enclosed clause, a routine text or one of several technically needed constructs (assignation, jump, skip, nihil). The technical term enclosed clause unifies some of the inherently bracketing constructs known as block, do statement, switch statement in other contemporary languages. When keywords are used, generally the reversed character sequence of the introducing keyword is used for terminating the enclosure, e.g. (IF~THEN~ELSE~FI, CASE~IN~OUT~ESAC, FOR~WHILE~DO~OD). This Guarded Command syntax was reused by Stephen Bourne in the common Unix Bourne shell. An expression may also yield a multiple value, which is constructed from other values by a collateral clause. This construct just looks like the parameter pack of a procedure call.
The basic data types (called modes in ALGOL 68 parlance) are real, int, compl (complex number), bool, char, bits and bytes. For example:
However, the declaration REAL x; is just syntactic sugar for REF REAL x = LOC REAL;. That is, x is really the constant identifier for a reference to a newly generated local REAL variable.
Furthermore, instead of defining both float and double, or int and long and short, etc., ALGOL 68 provides modifiers, so that the presently common double would be written as LONG REAL or LONG LONG REAL instead, for example. The prelude constants max real and min long int are provided to adapt programs to different implementations.
All variables need to be declared, but declaration does not have to precede the first use.
primitive-declarer: INT, REAL, COMPL, COMPLEXG, BOOL, CHAR, STRING, BITS, BYTES, FORMAT, FILE, PIPEG, CHANNEL, SEMA
Complex types can be created from simpler ones using various type constructors:
Other declaration symbols include: FLEX, HEAP, LOC, REF, LONG, SHORT, EVENTS
A name for a mode (type) can be declared using a MODE declaration,
which is similar to TYPEDEF in C/C++ and TYPE in Pascal:
This is similar to the following C code:
For ALGOL 68, only the NEWMODE mode-indication appears to the left of the equals symbol, and most notably the construction is made, and can be read, from left to right without regard to priorities. Also, the lower bound of ALGOL 68 arrays is one by default, but can be any integer from -max int to max int.
Mode declarations allow types to be recursive: defined directly or indirectly in terms of themselves.
This is subject to some restrictions – for instance, these declarations are illegal:
while these are valid:
The coercions produce a coercee from a coercend according to three criteria: the a priori mode of the coercend before the application of any coercion, the a posteriori mode of the coercee required after those coercions, and the syntactic position or "sort" of the coercee. Coercions may be cascaded.
The six possible coercions are termed deproceduring, dereferencing, uniting, widening, rowing, and voiding. Each coercion, except for uniting, prescribes a corresponding dynamic effect on the associated values. Hence, many primitive actions can be programmed implicitly by coercions.
Context strength – allowed coercions:
ALGOL 68 has a hierarchy of contexts which determine the kind of coercions available at a particular point in the program. These contexts are:
Also:
Widening is always applied in the INT to REAL to COMPL direction, provided the modes have the same size. For example: an INT will be coerced to a REAL, but not vice versa. Examples:
A variable can also be coerced (rowed) to an array of length 1.
For example:
UNION(INT,REAL) var := 1
IF~THEN...FI and FROM~BY~TO~WHILE~DO...OD etc.
For more details about primaries, secondaries, tertiaries and quaternaries, refer to Operator precedence.
Pragmats are directives in the program, typically hints to the compiler; in newer languages these are called "pragmas" (no 't'), e.g.
Comments can be inserted in a variety of ways:
Normally, comments cannot be nested in ALGOL 68. This restriction can be circumvented by using different comment delimiters (e.g. use hash only for temporary code deletions).
ALGOL 68 being an expression-oriented programming language, the value returned by an assignment statement is a reference to the destination. Thus, the following is valid ALGOL 68 code:
This notion is present in C and Perl, among others. Note that as in earlier languages such as ALGOL 60 and FORTRAN, spaces are allowed in identifiers, so that half pi is a single identifier (thus avoiding the underscores versus camel case versus all lower-case issues).
As another example, to express the mathematical idea of a sum of f(i) from i=1 to n, the following ALGOL 68 integer expression suffices:
Note that, being an integer expression, the former block of code can be used in any context where an integer value can be used. A block of code returns the value of the last expression it evaluated; this idea is present in Lisp, among other languages.
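The ALGOL 68 expression itself is elided above, but the idea carries over loosely to any language where a computation is itself a value. In this Python analogue (a generator expression standing in for the block expression; f is a placeholder function), the whole sum can appear wherever an integer is expected:

```python
def f(i):          # placeholder for whatever is being summed
    return i * i

n = 10
total = sum(f(i) for i in range(1, n + 1))  # usable in any integer context
print(total)       # 385 for f(i) = i*i and n = 10
```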
Compound statements are all terminated by distinctive closing brackets:
This scheme not only avoids the dangling else problem but also avoids having to use BEGIN and END in embedded statement sequences.
Choice clause example with Brief symbols:
Choice clause example with Bold symbols:
Choice clause example mixing Bold and Brief symbols:
ALGOL 68 allowed the switch to be of either type INT or (uniquely) UNION. The latter allows enforcing strong typing onto UNION variables. Cf. union below for an example.
This was considered the "universal" loop; the full syntax is:
The construct has several unusual aspects:
Subsequent "extensions" to the standard Algol68 allowed theTOsyntactic element to be replaced withUPTOandDOWNTOto achieve a small optimisation. The same compilers also incorporated:
Further examples can be found in the code examples below.
ALGOL 68 supports arrays with any number of dimensions, and it allows for the slicing of whole or partial rows or columns.
Matrices can be sliced either way, e.g.:
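The ALGOL 68 slicing examples are elided above; as a rough analogue in NumPy notation (not ALGOL 68's bracket syntax), whole and partial rows and columns can be taken like this:

```python
import numpy as np

m = np.arange(1, 10).reshape(3, 3)   # a 3x3 matrix of the values 1..9
row = m[1, :]         # whole second row      -> [4 5 6]
col = m[:, 2]         # whole third column    -> [3 6 9]
sub = m[0:2, 1:3]     # partial rows/columns  -> [[2 3], [5 6]]
print(row, col, sub, sep="\n")
```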
ALGOL 68 supports multiple field structures (STRUCT) and united modes. Reference variables may point to any MODE including array slices and structure fields.
For an example of all this, here is the traditional linked list declaration:
Usage example for UNION CASE of NODE:
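The ALGOL 68 listing is elided above; the following Python sketch (an analogue, not the original code) has the same shape: a linked list whose payload is a tagged union, dispatched on by a case analysis much as a UNION CASE clause would do:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Node:
    value: Union[int, float, str]   # stands in for UNION (INT, REAL, STRING)
    next: "Node | None" = None

n = Node(1, Node(2.5, Node("three")))
while n is not None:
    match n.value:                  # CASE ... IN ... ESAC analogue
        case int(v):   print("int:", v)
        case float(v): print("real:", v)
        case str(v):   print("string:", v)
    n = n.next
```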
Procedure (PROC) declarations require type specifications for both the parameters and the result (VOID if none):
or, using the "brief" form of the conditional statement:
The return value of a proc is the value of the last expression evaluated in the procedure. References to procedures (ref proc) are also permitted. Call-by-reference parameters are provided by specifying references (such as ref real) in the formal argument list. The following example defines a procedure that applies a function (specified as a parameter) to each element of an array:
This simplicity of code was unachievable in ALGOL 68's predecessor ALGOL 60.
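The elided example can be paraphrased in Python (an analogue only; Python's by-reference list passing stands in for ALGOL 68's REF parameters):

```python
def apply_to_all(f, a):
    """Apply f to each element of the array a, updating it in place."""
    for i in range(len(a)):
        a[i] = f(a[i])

data = [1.0, 2.0, 3.0]
apply_to_all(lambda x: x * x, data)
print(data)  # [1.0, 4.0, 9.0]
```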
The programmer may define new operators, and both those and the pre-defined ones may be overloaded, and their priorities may be changed by the coder. The following example defines operator MAX with both dyadic and monadic versions (scanning across the elements of an array).
These are technically not operators, rather they are considered "units associated with names"
-, ABS, ARG, BIN, ENTIER, LENG, LEVEL, ODD, REPR, ROUND, SHORTEN
-:=, +:=, *:=, /:=, %:=, %*:=, +=:
Specific details:
These are technically not operators, rather they are considered "units associated with names"
Note: Quaternaries include names SKIP and ~.
:=: (alternatively IS) tests if two pointers are equal; :/=: (alternatively ISNT) tests if they are unequal.
Consider trying to compare two pointer values, such as the following variables, declared as pointers-to-integer:
Now consider how to decide whether these two are pointing to the same location, or whether one of them is pointing to NIL. The following expression
will dereference both pointers down to values of type INT, and compare those, since the = operator is defined for INT, but not REF INT. It is not legal to define = for operands of type REF INT and INT at the same time, because then calls become ambiguous, due to the implicit coercions that can be applied: should the operands be left as REF INT and that version of the operator called? Or should they be dereferenced further to INT and that version used instead? Therefore the following expression can never be made legal:
Hence the need for separate constructs not subject to the normal coercion rules for operands to operators. But there is a gotcha. The following expressions:
while legal, will probably not do what might be expected. They will always return FALSE, because they are comparing the actual addresses of the variables ip and jp, rather than what they point to. To achieve the right effect, one would have to write
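Python happens to draw the very same distinction, which makes for a compact analogue: is compares identity, like :=: on references, while == compares the values referred to:

```python
a = [1000]
b = [1000]
print(a == b)   # True:  equal values (ALGOL 68's "=" after dereferencing)
print(a is b)   # False: distinct objects (the ":=:" / IS comparison)
c = a
print(a is c)   # True:  both names refer to the same object
```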
Most of Algol's "special" characters (⊂, ≡, ␣, ×, ÷, ≤, ≥, ≠, ¬, ⊃, ≡, ∨, ∧, →, ↓, ↑, ⌊, ⌈, ⎩, ⎧, ⊥, ⏨, ¢, ○ and □) can be found on the IBM 2741 keyboard with the APL "golf-ball" print head inserted; these became available in the mid-1960s while ALGOL 68 was being drafted. These characters are also part of the Unicode standard and most of them are available in several popular fonts.
Transput is the term used to refer to ALGOL 68's input and output facilities. It includes pre-defined procedures for unformatted, formatted and binary transput. Files and other transput devices are handled in a consistent and machine-independent manner. The following example prints out some unformatted output to the standard output device:
Note the predefined procedures newpage and newline passed as arguments.
The TRANSPUT is considered to be of BOOKS, CHANNELS and FILES:
"Formatted transput" in ALGOL 68's transput has its own syntax and patterns (functions), withFORMATs embedded between two $ characters.[53]
Examples:
ALGOL 68 supports programming of parallel processing. Using the keyword PAR, a collateral clause is converted to a parallel clause, where the synchronisation of actions is controlled using semaphores. In A68G the parallel actions are mapped to threads when available on the hosting operating system. In A68S a different paradigm of parallel processing was implemented (see below).
For its technical intricacies, ALGOL 68 needs a cornucopia of methods to deny the existence of something:
The term NIL ISNT var always evaluates to TRUE for any variable (but see above for correct use of IS :/=:), whereas it is not known to which value a comparison x < SKIP evaluates for any integer x.
ALGOL 68 intentionally leaves undefined what happens in case of integer overflow, the integer bit representation, and the degree of numerical accuracy for floating point.
Both official reports included some advanced features that were not part of the standard language. These were indicated with an ℵ and considered effectively private. Examples include "≮" and "≯" for templates, the OUTTYPE/INTYPE for crude duck typing, and the STRAIGHTOUT and STRAIGHTIN operators for "straightening" nested arrays and structures.
This sample program implements the Sieve of Eratosthenes to find all the prime numbers that are less than 100. NIL is the ALGOL 68 analogue of the null pointer in other languages. The notation x OF y accesses a member x of a STRUCT y.
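The ALGOL 68 listing itself is elided here; for reference, a compact Python rendering of the same computation (all primes below 100 by the Sieve of Eratosthenes) is:

```python
def sieve(limit):
    """Return all primes less than limit via the Sieve of Eratosthenes."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit, i):   # strike out multiples of i
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

print(sieve(100))  # [2, 3, 5, 7, 11, ..., 97]
```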
Note: The Soviet-era computers Эльбрус-1 (Elbrus-1) and Эльбрус-2 (Elbrus-2) were created using the high-level language Эль-76 (AL-76), rather than traditional assembly. Эль-76 resembles ALGOL 68; the main difference is that Эль-76's dynamic binding types are supported at the hardware level. Эль-76 was used for application, job control, and system programming.[57]
Both ALGOL 68C and ALGOL 68-R are written in ALGOL 68, effectively making ALGOL 68 an application of itself. Other applications include:
A feature of ALGOL 68, inherited from the ALGOL tradition, is its different representations. Programs in the strict language (which is rigorously defined in the Report) denote production trees in the form of a sequence of grammar symbols, and should be represented using some representation language, of which there are many, tailored to different purposes.
The Revised Report defines a reference language, and it is recommended for representation languages that are intended to be read by humans to be close enough to the reference language so symbols can be distinguished "without further elucidation". These representation languages are called implementations of the reference language.
For example, the construct in the strict language bold-begin-symbol could be represented as begin in a publication language, as BEGIN in a programming language, or as the bytes 0xC000 in some hardware language. Similarly, the strict language differs-from symbol could be represented as ≠ or as /=.
ALGOL 68's reserved words are effectively in a different namespace from identifiers, and spaces are allowed in identifiers in most stropping regimes, so this next fragment is legal:
The programmer who writes executable code does not always have an option of BOLD typeface or underlining in the code, as this may depend on hardware and cultural issues. Different methods to denote these identifiers have been devised. This is called a stropping regime. For example, all or some of the following may be available programming representations:
All implementations must recognize at least POINT, UPPER and RES inside PRAGMAT sections. Of these, POINT and UPPER stropping are quite common. QUOTE (single apostrophe quoting) was the original recommendation[citation needed].
It may seem that RES stropping is a contradiction to the specification, as there are no reserved words in ALGOL 68. This is not so. In RES stropping the representation of the bold word (or keyword) begin is begin, and the representation of the identifier begin is begin_. Note that the underscore character is just a representation artifact and not part of the represented identifier. In contrast, in non-stropped languages with reserved words, like for example C, it is not possible to represent an identifier if, since the representation if_ represents the identifier if_, not if.
The following characters were recommended for portability, and termed "worthy characters" in the Report on the Standard Hardware Representation of Algol 68:
This reflected a problem in the 1960s where some hardware didn't support lower-case, nor some other non-ASCII characters; indeed, in the 1973 report it was written: "Four worthy characters — "|", "_", "[", and "]" — are often coded differently, even at installations which nominally use the same character set."
ALGOL 68 allows every natural language to define its own set of ALGOL 68 keywords. As a result, programmers are able to write programs using keywords from their native language. Below is an example of a simple procedure that calculates "the day following"; the code is in two languages: English and German.[citation needed]
Russian/Soviet example: in English ALGOL 68's case statement reads CASE~IN~OUT~ESAC; in Cyrillic this reads выб~в~либо~быв.
Except where noted (with a superscript), the language described above is that of the "Revised Report(r1)".
The original language (as per the "Final Report"r0) differs in the syntax of the mode cast, and it had the feature of proceduring, i.e. coercing the value of a term into a procedure which evaluates the term. Proceduring was intended to make evaluations lazy. The most useful application could have been the short-circuited evaluation of Boolean operators. In:
b is only evaluated if a is true.
As defined in ALGOL 68, it did not work as expected, for example in the code:
against the programmer's naïve expectations the print would be executed, as it is only the value of the elaborated enclosed-clause after ANDF that was procedured. Textual insertion of the commented-out PROC BOOL: makes it work.
Some implementations emulate the expected behaviour for this special case by extension of the language.
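The effect proceduring was after can be sketched in Python by passing the second operand as a parameterless procedure (a lambda), which is evaluated only on demand:

```python
def andf(a, b_proc):
    """Short-circuit AND: b_proc is called only when a is true."""
    return b_proc() if a else False

x = 0
print(andf(x != 0, lambda: 1 // x > 0))  # False; 1 // x is never evaluated
```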
Before revision, the programmer could decide to have the arguments of a procedure evaluated serially instead of collaterally by using semicolons instead of commas (gommas).
For example in:
The first argument to test is guaranteed to be evaluated before the second, but in the usual:
then the compiler could evaluate the arguments in whatever order it felt like.
After the revision of the report, some extensions to the language have been proposed to widen the applicability:
So far, only partial parametrisation has been implemented, in Algol 68 Genie.
The S3 language that was used to write the ICL VME operating system and much other system software on the ICL 2900 Series was a direct derivative of ALGOL 68. However, it omitted many of the more complex features, and replaced the basic modes with a set of data types that mapped directly to the 2900 Series hardware architecture.
ALGOL 68R from RRE was the first ALGOL 68 subset implementation, running on the ICL 1900. Based on the original language, the main subset restrictions were definition before use and no parallel processing. This compiler was popular in UK universities in the 1970s, where many computer science students learnt ALGOL 68 as their first programming language; the compiler was renowned for good error messages.
ALGOL 68RS(RS) from RSRE was a portable compiler system written in ALGOL 68RS (bootstrapped from ALGOL 68R), and implemented on a variety of systems including the ICL 2900/Series 39, Multics and DEC VAX/VMS. The language was based on the Revised Report, but with similar subset restrictions to ALGOL 68R. This compiler survives in the form of an Algol68-to-C compiler.
In ALGOL 68S(S) from Carnegie Mellon University, the power of parallel processing was improved by adding an orthogonal extension, eventing. Any variable declaration containing keyword EVENT made assignments to this variable eligible for parallel evaluation, i.e. the right hand side was made into a procedure which was moved to one of the processors of the C.mmp multiprocessor system. Accesses to such variables were delayed after termination of the assignment.
Cambridge ALGOL 68C(C) was a portable compiler that implemented a subset of ALGOL 68, restricting operator definitions and omitting garbage collection, flexible rows and formatted transput.
Algol 68 Genie(G) by M. van der Veer is an ALGOL 68 implementation for today's computers and operating systems.
"Despite good intentions, a programmer may violate portability by inadvertently employing a local extension. To guard against this, each implementation should provide a PORTCHECK pragmat option. While this option is in force, the compiler prints a message for each construct that it recognizes as violating some portability constraint."[69] | https://en.wikipedia.org/wiki/Algol68 |
Wirth syntax notation (WSN) is a metasyntax, that is, a formal way to describe formal languages. It was originally proposed by Niklaus Wirth in 1977 as an alternative to Backus–Naur form (BNF). It has several advantages over BNF in that it contains an explicit iteration construct, and it avoids the use of an explicit symbol for the empty string (such as <empty> or ε).[1]
WSN has been used in several international standards, starting with ISO 10303-21.[2] It was also used to define the syntax of EXPRESS, the data modelling language of STEP.
The equals sign indicates a production. The element on the left is defined to be the combination of elements on the right. A production is terminated by a full stop (period).
We take these concepts for granted today, but they were novel and even controversial in 1977. Wirth later incorporated some of the concepts (with a different syntax and notation) into extended Backus–Naur form.
Notice that letter and character are left undefined. This is because numeric characters (digits 0 to 9) may be included in both definitions or excluded from one, depending on the language being defined, e.g.:
If character goes on to include digit and other printable ASCII characters, then it diverges even more from letter, which one can assume does not include the digit characters or any of the special (non-alphanumeric) characters.
The syntax of BNF can be represented with WSN as follows, based on translating the BNF example of itself:
This definition appears overcomplicated because the concept of "optional whitespace" must be explicitly defined in BNF, but it is implicit in WSN. Even in this example, text is left undefined, but it is assumed to mean "ASCII-character { ASCII-character }". (EOL is also left undefined.) Notice how the kludge "<" rule-name ">" has been used twice because text was not explicitly defined.
One of the problems with BNF which this example illustrates is that by allowing both single-quote and double-quote characters to be used for a literal, there is an added potential for human error in attempting to create a machine-readable syntax. One of the concepts that migrated to later metasyntaxes was the idea that giving the user multiple choices made it harder to write parsers for grammars defined by the syntax, so computer languages in general have become more restrictive in how a quoted-literal is defined.
Syntax diagram: | https://en.wikipedia.org/wiki/Wirth_syntax_notation |
A Fibonacci word is a specific sequence of binary digits (or symbols from any two-letter alphabet). The Fibonacci word is formed by repeated concatenation in the same way that the Fibonacci numbers are formed by repeated addition.
It is a paradigmatic example of a Sturmian word and specifically, a morphic word.
The name "Fibonacci word" has also been used to refer to the members of aformal languageLconsisting of strings of zeros and ones with no two repeated ones. Any prefix of the specific Fibonacci word belongs toL, but so do many other strings.Lhas a Fibonacci number of members of each possible length.
Let $S_0$ be "0" and $S_1$ be "01". Now $S_n = S_{n-1}S_{n-2}$ (the concatenation of the previous sequence and the one before that).
The infinite Fibonacci word is the limit $S_\infty$, that is, the (unique) infinite sequence that contains each $S_n$, for finite $n$, as a prefix.
Enumerating items from the above definition produces:
The first few elements of the infinite Fibonacci word are:
0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, ... (sequenceA003849in theOEIS)
The $n$th digit of the word is $2+\lfloor n\varphi\rfloor-\lfloor (n+1)\varphi\rfloor$, where $\varphi$ is the golden ratio and $\lfloor\,\rfloor$ is the floor function (sequence A003849 in the OEIS). As a consequence, the infinite Fibonacci word can be characterized by a cutting sequence of a line of slope $1/\varphi$ or $\varphi-1$.
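The enumeration above can be reproduced, and the closed form checked, with a short Python sketch (1-based indexing, matching the formula):

```python
from math import floor, sqrt

def fib_word(n):
    """Return S_n, built by the concatenation rule S_n = S_{n-1} S_{n-2}."""
    a, b = "0", "01"                 # S_0 and S_1
    for _ in range(n - 1):
        a, b = b, b + a
    return b

phi = (1 + sqrt(5)) / 2
w = fib_word(10)                     # a prefix of the infinite word
closed = "".join(str(2 + floor(k * phi) - floor((k + 1) * phi))
                 for k in range(1, len(w) + 1))
print(w == closed)                   # True: the two definitions agree
print(w[:20])                        # 01001010010010100101
```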
Another way of going from $S_n$ to $S_{n+1}$ is to replace each symbol 0 in $S_n$ with the pair of consecutive symbols 0, 1 in $S_{n+1}$, and to replace each symbol 1 in $S_n$ with the single symbol 0 in $S_{n+1}$.
Alternatively, one can imagine directly generating the entire infinite Fibonacci word by the following process: start with a cursor pointing to the single digit 0. Then, at each step, if the cursor is pointing to a 0, append 1, 0 to the end of the word, and if the cursor is pointing to a 1, append 0 to the end of the word. In either case, complete the step by moving the cursor one position to the right.
A similar infinite word, sometimes called the rabbit sequence, is generated by a similar infinite process with a different replacement rule: whenever the cursor is pointing to a 0, append 1, and whenever the cursor is pointing to a 1, append 0, 1. The resulting sequence begins
However this sequence differs from the Fibonacci word only trivially, by swapping 0s for 1s and shifting the positions by one.
A closed form expression for the so-called rabbit sequence:
The $n$th digit of the word is $\lfloor n\varphi\rfloor-\lfloor (n-1)\varphi\rfloor-1$.
The word is related to the famous sequence of the same name (the Fibonacci sequence) in the sense that addition of integers in the inductive definition is replaced with string concatenation. This causes the length of $S_n$ to be $F_{n+2}$, the $(n+2)$nd Fibonacci number. Also the number of 1s in $S_n$ is $F_n$ and the number of 0s in $S_n$ is $F_{n+1}$.
Fibonacci-based constructions are currently used to model physical systems with aperiodic order such as quasicrystals, and in this context the Fibonacci word is also called the Fibonacci quasicrystal.[11] Crystal growth techniques have been used to grow Fibonacci layered crystals and study their light scattering properties.[12] | https://en.wikipedia.org/wiki/Fibonacci_word
In mathematics, the Kolakoski sequence, sometimes also known as the Oldenburger–Kolakoski sequence,[1] is an infinite sequence of symbols {1,2} that is the sequence of run lengths in its own run-length encoding.[2] It is named after the recreational mathematician William Kolakoski (1944–97), who described it in 1965,[3] but it was previously discussed by Rufus Oldenburger in 1939.[1][4]
The initial terms of the Kolakoski sequence are:
Each symbol occurs in a "run" (a sequence of equal elements) of either one or two consecutive terms, and writing down the lengths of these runs gives exactly the same sequence:
The description of the Kolakoski sequence is therefore reversible. IfKstands for "the Kolakoski sequence", description #1 logically implies description #2 (and vice versa):
Accordingly, one can say that each term of the Kolakoski sequence generates a run of one or two future terms. The first 1 of the sequence generates a run of "1", i.e. itself; the first 2 generates a run of "22", which includes itself; the second 2 generates a run of "11"; and so on. Each number in the sequence is the length of the next run to be generated, and the element to be generated alternates between 1 and 2:
As can be seen, the length of the sequence at each stage is equal to the sum of terms in the previous stage.
These self-generating properties, which remain if the sequence is written without the initial 1, mean that the Kolakoski sequence can be described as a fractal, or mathematical object that encodes its own representation on other scales.[1] Bertran Steinsky has created a recursive formula for the $i$-th term of the sequence.[5]
The sequence is not eventually periodic, that is, its terms do not have a general repeating pattern (cf. irrational numbers like $\pi$ and $\sqrt{2}$). More generally, the sequence is cube-free, i.e., it has no substring of the form $www$ with $w$ some nonempty finite string.[6]
It seems plausible that the density of 1s in the Kolakoski {1,2}-sequence is 1/2, but this conjecture remains unproved.[7] Václav Chvátal has proved that the upper density of 1s is less than 0.50084.[8] Nilsson has used the same method with far greater computational power to obtain the bound 0.500080.[9]
Although calculations of the first $3\times 10^8$ values of the sequence appeared to show its density converging to a value slightly different from 1/2,[5] later calculations that extended the sequence to its first $10^{13}$ values show the deviation from a density of 1/2 growing smaller, as one would expect if the limiting density actually is 1/2.[10]
The Kolakoski sequence can also be described as the result of a simple cyclic tag system. However, as this system is a 2-tag system rather than a 1-tag system (that is, it replaces pairs of symbols by other sequences of symbols, rather than operating on a single symbol at a time), it lies in the region of parameters for which tag systems are Turing complete, making it difficult to use this representation to reason about the sequence.[11]
The Kolakoski sequence may be generated by an algorithm that, in the $i$-th iteration, reads the value $x_i$ that has already been output as the $i$-th value of the sequence (or, if no such value has been output yet, sets $x_i = i$). Then, if $i$ is odd, it outputs $x_i$ copies of the number 1, while if $i$ is even, it outputs $x_i$ copies of the number 2.
Thus, the first few steps of the algorithm are:
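The elided step list aside, the algorithm just described fits in a few lines of Python:

```python
def kolakoski(n):
    """Generate the first n terms; x_i is the i-th value already output,
    or i itself while the sequence is still too short to contain it."""
    seq, i = [], 1
    while len(seq) < n:
        x = seq[i - 1] if i <= len(seq) else i
        seq.extend([1 if i % 2 == 1 else 2] * x)   # odd i emits 1s, even 2s
        i += 1
    return seq[:n]

print(kolakoski(15))  # [1, 2, 2, 1, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2]
```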
This algorithm takes linear time, but because it needs to refer back to earlier positions in the sequence it needs to store the whole sequence, taking linear space. An alternative algorithm that generates multiple copies of the sequence at different speeds, with each copy of the sequence using the output of the previous copy to determine what to do at each step, can be used to generate the sequence in linear time and only logarithmic space.[10] | https://en.wikipedia.org/wiki/Kolakoski_sequence
In theoretical computer science and mathematics, especially in the area of combinatorics on words, the Levi lemma states that, for all strings u, v, x and y, if uv = xy, then there exists a string w such that either uw = x and v = wy (when |u| ≤ |x|), or u = xw and wv = y (when |u| ≥ |x|). That is, there is a string w that is "in the middle", and can be grouped to one side or the other. Levi's lemma is named after Friedrich Wilhelm Levi, who published it in 1944.[1]
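The lemma is constructive, and the middle string w is easy to compute; a minimal Python sketch:

```python
def levi(u, v, x, y):
    """Given u+v == x+y, return the middle string w and how it attaches."""
    assert u + v == x + y
    if len(u) <= len(x):
        return "x = u+w and v = w+y", x[len(u):]
    return "u = x+w and y = w+v", u[len(x):]

print(levi("ab", "cde", "abcd", "e"))  # ('x = u+w and v = w+y', 'cd')
```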
Levi's lemma can be applied repeatedly in order to solve word equations; in this context it is sometimes called the Nielsen transformation, by analogy with the Nielsen transformation for groups. For example, starting with an equation xα = yβ where x and y are the unknowns, we can transform it (assuming |x| ≥ |y|, so that there exists t such that x = yt) to ytα = yβ, thus to tα = β. This approach results in a graph of substitutions generated by repeatedly applying Levi's lemma. If each unknown appears at most twice, then a word equation is called quadratic; in a quadratic word equation the graph obtained by repeatedly applying Levi's lemma is finite, so it is decidable if a quadratic word equation has a solution.[2] A more general method for solving word equations is Makanin's algorithm.[3][4]
The above is known as the Levi lemma for strings; the lemma can occur in a more general form in graph theory and in monoid theory; for example, there is a more general Levi lemma for traces, originally due to Christine Duboc.[5] Several proofs of Levi's lemma for traces can be found in The Book of Traces.[6]
A monoid in which Levi's lemma holds is said to have the equidivisibility property.[7] The free monoid of strings and string concatenation has this property (by Levi's lemma for strings), but by itself equidivisibility is not enough to guarantee that a monoid is free. However, an equidivisible monoid M is free if additionally there exists a homomorphism f from M to the monoid of natural numbers (the free monoid on one generator) with the property that the preimage of 0 contains only the identity element of M, i.e. $f^{-1}(0)=\{1_M\}$. (Note that f simply being a homomorphism does not guarantee this latter property, as there could be multiple elements of M mapped to 0.)[8] A monoid for which such a homomorphism exists is also called graded (and the f is called a gradation).[9] | https://en.wikipedia.org/wiki/Levi%27s_lemma
In computer science and the study of combinatorics on words, a partial word is a string that may contain a number of "do not know" or "do not care" symbols, i.e. placeholders in the string where the symbol value is not known or not specified. More formally, a partial word is a partial function $u: \{0,\ldots,n-1\} \rightarrow A$ where $A$ is some finite alphabet. If $u(k)$ is not defined for some $k \in \{0,\ldots,n-1\}$ then the unknown element at place $k$ in the string is called a "hole". In regular expressions (following the POSIX standard) a hole is represented by the metacharacter ".". For example, aab.ab.b is a partial word of length 8 over the alphabet A = {a,b} in which the fourth and seventh characters are holes.[1]
Several algorithms have been developed for the problem of "string matching with don't cares", in which the input is a long text and a shorter partial word and the goal is to find all strings in the text that match the given partial word.[2][3][4]
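A naive quadratic-time version of this matching problem (the published algorithms cited above are far more sophisticated) fits in a few lines of Python, with the hole written as "." following the POSIX convention mentioned earlier:

```python
def matches(partial, window):
    """A hole '.' matches any single character of the text."""
    return len(partial) == len(window) and all(
        p == "." or p == c for p, c in zip(partial, window))

def find_all(partial, text):
    k = len(partial)
    return [i for i in range(len(text) - k + 1)
            if matches(partial, text[i:i + k])]

print(find_all("a.b", "aabcabbcb"))  # [0, 4]: windows "aab" and "abb"
```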
Two partial words are said to becompatiblewhen they have the same length and when every position that is a non-wildcard in both of them has the same character in both. If one forms anundirected graphwith a vertex for each partial word in a collection of partial words, and an edge for each compatible pair, then thecliquesof this graph come from sets of partial words that all match at least one common string. This graph-theoretical interpretation of compatibility of partial words plays a key role in the proof ofhardness of approximationof theclique problem, in which a collection of partial words representing successful runs of aprobabilistically checkable proofverifier has a large clique if and only if there exists a valid proof of an underlyingNP-completeproblem.[5]
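A sketch of the compatibility relation and the resulting graph (the collection of words is an arbitrary illustrative example):

```python
from itertools import combinations

def compatible(u, v):
    """Partial words are compatible when they have the same length and agree
    at every position where both are non-wildcards."""
    return len(u) == len(v) and all(
        a == b or a == "." or b == "." for a, b in zip(u, v))

words = ["ab.", "a.b", ".bb", "aab"]
edges = [(u, v) for u, v in combinations(words, 2) if compatible(u, v)]
print(edges)   # [('ab.', 'a.b'), ('ab.', '.bb'), ('a.b', '.bb'), ('a.b', 'aab')]
```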
The faces (subcubes) of ann{\displaystyle n}-dimensionalhypercubecan be described by partial words of lengthn{\displaystyle n}over a binary alphabet, whose
symbols are theCartesian coordinatesof the hypercube vertices (e.g., 0 or 1 for aunit cube). The dimension of a subcube, in this representation, equals the number of don't-care symbols it contains. The same representation may also be used to describe theimplicantsofBoolean functions.[6]
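For example, expanding a binary partial word recovers the vertices of the face it denotes (a sketch; the helper name is illustrative):

```python
from itertools import product

def subcube_vertices(word):
    """List the hypercube vertices described by a binary partial word; the
    face's dimension equals the number of '.' symbols."""
    choices = ["01" if c == "." else c for c in word]
    return ["".join(bits) for bits in product(*choices)]

print(subcube_vertices("0.1."))   # ['0010', '0011', '0110', '0111'], a 2-dimensional face
```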
Partial words may be generalized toparameter words, in which some of the "do not know" symbols are marked as being equal to each other. A partial word is a special case of a parameter word in which each do not know symbol may be substituted by a character independently of all of the other ones.[7] | https://en.wikipedia.org/wiki/Partial_word |
Insymbolic dynamicsand related branches ofmathematics, ashift spaceorsubshiftis a set ofinfinitewordsthat represent the evolution of adiscrete system. In fact, shift spaces andsymbolic dynamical systemsare often consideredsynonyms. The most widely studied shift spaces are thesubshifts of finite typeand thesofic shifts.
In theclassical framework[1]a shift space is any subsetΛ{\displaystyle \Lambda }ofAZ:={(xi)i∈Z:xi∈A∀i∈Z}{\displaystyle A^{\mathbb {Z} }:=\{(x_{i})_{i\in \mathbb {Z} }:\ x_{i}\in A\ \forall i\in \mathbb {Z} \}}, whereA{\displaystyle A}is afinite set, which is closed in the Tychonoff topology and invariant under translations. More generally one can define a shift space as the closed and translation-invariant subsets ofAG{\displaystyle A^{\mathbb {G} }}, whereA{\displaystyle A}is any non-empty set andG{\displaystyle \mathbb {G} }is anymonoid.[2][3]
LetG{\displaystyle \mathbb {G} }be amonoid, and giveng,h∈G{\displaystyle g,h\in \mathbb {G} }, denote the operation ofg{\displaystyle g}withh{\displaystyle h}by the productgh{\displaystyle gh}. Let1G{\displaystyle \mathbf {1} _{\mathbb {G} }}denote the identity ofG{\displaystyle \mathbb {G} }. Consider a non-empty setA{\displaystyle A}(an alphabet) with thediscrete topology, and defineAG{\displaystyle A^{\mathbb {G} }}as the set of all patterns overA{\displaystyle A}indexed byG{\displaystyle \mathbb {G} }. Forx=(xi)i∈G∈AG{\displaystyle \mathbf {x} =(x_{i})_{i\in \mathbb {G} }\in A^{\mathbb {G} }}and a subsetN⊂G{\displaystyle N\subset \mathbb {G} }, we denote the restriction ofx{\displaystyle \mathbf {x} }to the indices ofN{\displaystyle N}asxN:=(xi)i∈N{\displaystyle \mathbf {x} _{N}:=(x_{i})_{i\in N}}.
OnAG{\displaystyle A^{\mathbb {G} }}, we consider the prodiscrete topology, which makesAG{\displaystyle A^{\mathbb {G} }}a Hausdorff and totally disconnected topological space. In the case ofA{\displaystyle A}being finite, it follows thatAG{\displaystyle A^{\mathbb {G} }}is compact. However, ifA{\displaystyle A}is not finite, thenAG{\displaystyle A^{\mathbb {G} }}is not even locally compact.
This topology will be metrizable if and only ifG{\displaystyle \mathbb {G} }is countable, and, in any case, the base of this topology consists of a collection of open/closed sets (called cylinders), defined as follows: given a finite set of indicesD⊂G{\displaystyle D\subset \mathbb {G} }, and for eachi∈D{\displaystyle i\in D}, letai∈A{\displaystyle a_{i}\in A}. Thecylindergiven byD{\displaystyle D}and(ai)i∈D∈A|D|{\displaystyle (a_{i})_{i\in D}\in A^{|D|}}is the set[(ai)i∈D]D:={x∈AG:xi=ai for all i∈D}{\displaystyle {\big [}(a_{i})_{i\in D}{\big ]}_{D}:=\{\mathbf {x} \in A^{\mathbb {G} }\,:\,x_{i}=a_{i}{\text{ for all }}i\in D\}}.
WhenD={g}{\displaystyle D=\{g\}}, we denote the cylinder fixing the symbolb{\displaystyle b}at the entry indexed byg{\displaystyle g}simply as[b]g{\displaystyle [b]_{g}}.
In other words, a cylinder[(ai)i∈D]D{\displaystyle {\big [}(a_{i})_{i\in D}{\big ]}_{D}}is the set of all infinite patterns ofAG{\displaystyle A^{\mathbb {G} }}which contain the finite pattern(ai)i∈D∈A|D|{\displaystyle (a_{i})_{i\in D}\in A^{|D|}}.
Giveng∈G{\displaystyle g\in \mathbb {G} }, theg-shift maponAG{\displaystyle A^{\mathbb {G} }}is denoted byσg:AG→AG{\displaystyle \sigma ^{g}:A^{\mathbb {G} }\to A^{\mathbb {G} }}and defined asσg((xi)i∈G):=(xgi)i∈G{\displaystyle \sigma ^{g}{\big (}(x_{i})_{i\in \mathbb {G} }{\big )}:=(x_{gi})_{i\in \mathbb {G} }}.
Ashift spaceover the alphabetA{\displaystyle A}is a setΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}that is closed under the topology ofAG{\displaystyle A^{\mathbb {G} }}and invariant under translations, i.e.,σg(Λ)⊂Λ{\displaystyle \sigma ^{g}(\Lambda )\subset \Lambda }for allg∈G{\displaystyle g\in \mathbb {G} }.[note 1]We consider in the shift spaceΛ{\displaystyle \Lambda }the induced topology fromAG{\displaystyle A^{\mathbb {G} }}, which has as basic open sets the cylinders[(ai)i∈D]Λ:=[(ai)i∈D]∩Λ{\displaystyle {\big [}(a_{i})_{i\in D}{\big ]}_{\Lambda }:={\big [}(a_{i})_{i\in D}{\big ]}\cap \Lambda }.
For eachk∈N∗{\displaystyle k\in \mathbb {N} ^{*}}, defineNk:=⋃N⊂G#N=kAN{\displaystyle {\mathcal {N}}_{k}:=\bigcup _{N\subset \mathbb {G} \atop \#N=k}A^{N}}, andNAGf:=⋃k∈NNk=⋃N⊂G#N<∞AN{\displaystyle {\mathcal {N}}_{A^{\mathbb {G} }}^{f}:=\bigcup _{k\in \mathbb {N} }{\mathcal {N}}_{k}=\bigcup _{N\subset \mathbb {G} \atop \#N<\infty }A^{N}}. An equivalent way to define a shift space is to take a set offorbidden patternsF⊂NAGf{\displaystyle F\subset {\mathcal {N}}_{A^{\mathbb {G} }}^{f}}and define a shift space as the setXF:={x∈AG:(σg(x))N∉F for all g∈G and all finite N⊂G}{\displaystyle X_{F}:=\{\mathbf {x} \in A^{\mathbb {G} }\,:\,{\big (}\sigma ^{g}(\mathbf {x} ){\big )}_{N}\notin F{\text{ for all }}g\in \mathbb {G} {\text{ and all finite }}N\subset \mathbb {G} \}}.
Intuitively, a shift spaceXF{\displaystyle X_{F}}is the set of all infinite patterns that do not contain any forbidden finite pattern ofF{\displaystyle F}.
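In the classical one-dimensional setting this can be made concrete; the sketch below checks a finite window of a sequence against a finite set of forbidden words (membership in the shift space proper concerns the whole infinite pattern, so a check on a window is only a necessary condition):

```python
def avoids(window, forbidden):
    """True if no forbidden finite word occurs in the given window.
    With forbidden = {"11"} this describes the golden mean shift."""
    return not any(f in window for f in forbidden)

print(avoids("0101001010", {"11"}))   # True
print(avoids("0110010010", {"11"}))   # False: the forbidden word 11 occurs
```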
Given a shift spaceΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}and a finite set of indicesN⊂G{\displaystyle N\subset \mathbb {G} }, letW∅(Λ):={ϵ}{\displaystyle W_{\emptyset }(\Lambda ):=\{\epsilon \}}, whereϵ{\displaystyle \epsilon }stands for the empty word, and forN≠∅{\displaystyle N\neq \emptyset }letWN(Λ)⊂AN{\displaystyle W_{N}(\Lambda )\subset A^{N}}be the set of all finite configurations ofAN{\displaystyle A^{N}}that appear in some sequence ofΛ{\displaystyle \Lambda }, i.e.,WN(Λ):={xN:x∈Λ}{\displaystyle W_{N}(\Lambda ):=\{\mathbf {x} _{N}\,:\,\mathbf {x} \in \Lambda \}}.
Note that, sinceΛ{\displaystyle \Lambda }is a shift space, ifM⊂G{\displaystyle M\subset \mathbb {G} }is a translation ofN⊂G{\displaystyle N\subset \mathbb {G} }, i.e.,M=gN{\displaystyle M=gN}for someg∈G{\displaystyle g\in \mathbb {G} }, then(wj)j∈M∈WM(Λ){\displaystyle (w_{j})_{j\in M}\in W_{M}(\Lambda )}if and only if there exists(vi)i∈N∈WN(Λ){\displaystyle (v_{i})_{i\in N}\in W_{N}(\Lambda )}such thatwj=vi{\displaystyle w_{j}=v_{i}}ifj=gi{\displaystyle j=gi}. In other words,WM(Λ){\displaystyle W_{M}(\Lambda )}andWN(Λ){\displaystyle W_{N}(\Lambda )}contain the same configurations modulo translation. We will call the setW(Λ):=⋃N⊂G,#N<∞WN(Λ){\displaystyle W(\Lambda ):=\bigcup _{N\subset \mathbb {G} ,\ \#N<\infty }W_{N}(\Lambda )}
thelanguageofΛ{\displaystyle \Lambda }. In the general context stated here, the language of a shift space does not have the same meaning as inFormal Language Theory, but in theclassical frameworkwhich considers the alphabetA{\displaystyle A}being finite, andG{\displaystyle \mathbb {G} }beingN{\displaystyle \mathbb {N} }orZ{\displaystyle \mathbb {Z} }with the usual addition, the language of a shift space is a formal language.
The classical framework for shift spaces consists of considering the alphabetA{\displaystyle A}as finite, andG{\displaystyle \mathbb {G} }as the set of non-negative integers (N{\displaystyle \mathbb {N} }) with the usual addition, or the set of all integers (Z{\displaystyle \mathbb {Z} }) with the usual addition. In both cases, the identity element1G{\displaystyle \mathbf {1} _{\mathbb {G} }}corresponds to the number 0. Furthermore, whenG=N{\displaystyle \mathbb {G} =\mathbb {N} }, since allN∖{0}{\displaystyle \mathbb {N} \setminus \{0\}}can be generated from the number 1, it is sufficient to consider a unique shift map given byσ(x)n=xn+1{\displaystyle \sigma (\mathbf {x} )_{n}=x_{n+1}}for alln{\displaystyle n}. On the other hand, for the case ofG=Z{\displaystyle \mathbb {G} =\mathbb {Z} }, since allZ{\displaystyle \mathbb {Z} }can be generated from the numbers {-1, 1}, it is sufficient to consider two shift maps given for alln{\displaystyle n}byσ(x)n=xn+1{\displaystyle \sigma (\mathbf {x} )_{n}=x_{n+1}}and byσ−1(x)n=xn−1{\displaystyle \sigma ^{-1}(\mathbf {x} )_{n}=x_{n-1}}.
Furthermore, wheneverG{\displaystyle \mathbb {G} }isN{\displaystyle \mathbb {N} }orZ{\displaystyle \mathbb {Z} }with the usual addition (independently of the cardinality ofA{\displaystyle A}), due to its algebraic structure, it is sufficient to consider only cylinders of the form
Moreover, the language of a shift spaceΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}will be given by
whereW0:={ϵ}{\displaystyle W_{0}:=\{\epsilon \}}andϵ{\displaystyle \epsilon }stands for the empty word, and
In the same way, for the particular case ofG=Z{\displaystyle \mathbb {G} =\mathbb {Z} }, it follows that to define a shift spaceΛ=XF{\displaystyle \Lambda =X_{F}}we do not need to specify the index ofG{\displaystyle \mathbb {G} }on which the forbidden words ofF{\displaystyle F}are defined, that is, we can just considerF⊂⋃n≥1An{\displaystyle F\subset \bigcup _{n\geq 1}A^{n}}and then
However, whenG=N{\displaystyle \mathbb {G} =\mathbb {N} }, if we define a shift spaceΛ=XF{\displaystyle \Lambda =X_{F}}as above, without specifying the indices at which the words are forbidden, then we only capture shift spaces which are invariant under the shift map, that is, such thatσ(XF)=XF{\displaystyle \sigma (X_{F})=X_{F}}. In fact, to define a shift spaceXF⊂AN{\displaystyle X_{F}\subset A^{\mathbb {N} }}such thatσ(XF)⊊XF{\displaystyle \sigma (X_{F})\subsetneq X_{F}}it is necessary to specify from which index onward the words ofF{\displaystyle F}are forbidden.
In particular, in the classical framework ofA{\displaystyle A}being finite, andG{\displaystyle \mathbb {G} }beingN{\displaystyle \mathbb {N} }orZ{\displaystyle \mathbb {Z} }with the usual addition, it follows thatMF{\displaystyle M_{F}}is finite if and only ifF{\displaystyle F}is finite, which leads to the classical definition of a shift of finite type as those shift spacesΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}such thatΛ=XF{\displaystyle \Lambda =X_{F}}for some finiteF{\displaystyle F}.
Among several types of shift spaces, the most widely studied are theshifts of finite typeand thesofic shifts.
In the case when the alphabetA{\displaystyle A}is finite, a shift spaceΛ{\displaystyle \Lambda }is ashift of finite typeif we can take a finite set of forbidden patternsF{\displaystyle F}such thatΛ=XF{\displaystyle \Lambda =X_{F}}, andΛ{\displaystyle \Lambda }is asofic shiftif it is the image of a shift of finite type undersliding block code[1](that is, a mapΦ{\displaystyle \Phi }that is continuous and invariant for allg{\displaystyle g}-shift maps ). IfA{\displaystyle A}is finite andG{\displaystyle \mathbb {G} }isN{\displaystyle \mathbb {N} }orZ{\displaystyle \mathbb {Z} }with the usual addition, then the shiftΛ{\displaystyle \Lambda }is a sofic shift if and only ifW(Λ){\displaystyle W(\Lambda )}is aregular language.
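For instance, the golden mean shift (forbidden word 11) is of finite type, and its language is the regular language of binary words with no two consecutive 1s, recognizable with an ordinary regular expression (an illustrative sketch):

```python
import re

golden_mean = re.compile(r"^(?!.*11)[01]*$")   # no occurrence of "11"

print(bool(golden_mean.match("010100101")))    # True:  a word of the language
print(bool(golden_mean.match("010110001")))    # False: contains 11
```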
The name "sofic" was coined byWeiss (1973), based on theHebrewword סופי meaning "finite", to refer to the fact that this is a generalization of a finiteness property.[4]
WhenA{\displaystyle A}is infinite, it is possible to define shifts of finite type as shift spacesΛ{\displaystyle \Lambda }for which one can take a setF{\displaystyle F}of forbidden words such that
is finite andΛ=XF{\displaystyle \Lambda =X_{F}}.[3]In this context of an infinite alphabet, a sofic shift is defined as the image of a shift of finite type under a particular class ofsliding block codes.[3]Both the finiteness ofMF{\displaystyle M_{F}}and the additional conditions on thesliding block codesare trivially satisfied wheneverA{\displaystyle A}is finite.
Shift spaces are thetopological spaceson whichsymbolic dynamical systemsare usually defined.
Given a shift spaceΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}and ag{\displaystyle g}-shift mapσg:Λ→Λ{\displaystyle \sigma ^{g}:\Lambda \to \Lambda }it follows that the pair(Λ,σg){\displaystyle (\Lambda ,\sigma ^{g})}is atopological dynamical system.
Two shift spacesΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}andΓ⊂BG{\displaystyle \Gamma \subset B^{\mathbb {G} }}are said to be topologically conjugate (or simply conjugate) if for eachg{\displaystyle g}-shift map it follows that the topological dynamical systems(Λ,σg){\displaystyle (\Lambda ,\sigma ^{g})}and(Γ,σg){\displaystyle (\Gamma ,\sigma ^{g})}aretopologically conjugate, that is, if there exists a continuous mapΦ:Λ→Γ{\displaystyle \Phi :\Lambda \to \Gamma }such thatΦ∘σg=σg∘Φ{\displaystyle \Phi \circ \sigma ^{g}=\sigma ^{g}\circ \Phi }. Such maps are known asgeneralized sliding block codesor just assliding block codeswheneverΦ{\displaystyle \Phi }is uniformly continuous.[3]
Although any continuous mapΦ{\displaystyle \Phi }fromΛ⊂AG{\displaystyle \Lambda \subset A^{\mathbb {G} }}to itself will define a topological dynamical system(Λ,Φ){\displaystyle (\Lambda ,\Phi )}, in symbolic dynamics it is usual to consider only continuous mapsΦ:Λ→Λ{\displaystyle \Phi :\Lambda \to \Lambda }which commute with allg{\displaystyle g}-shift maps, i.e., maps which are generalized sliding block codes. The dynamical system(Λ,Φ){\displaystyle (\Lambda ,\Phi )}is known as ageneralized cellular automaton(or just as acellular automatonwheneverΦ{\displaystyle \Phi }is uniformly continuous).
The first trivial example of a shift space (of finite type) is thefull shiftAN{\displaystyle A^{\mathbb {N} }}.
LetA={a,b}{\displaystyle A=\{a,b\}}. The set of all infinite words overAcontaining at most onebis a sofic subshift, not of finite type. The set of all infinite words overAwhosebform blocks of prime length is not sofic (this can be shown by using thepumping lemma).
The space of infinite strings in two letters,{0,1}N{\displaystyle \{0,1\}^{\mathbb {N} }}is called theBernoulli process. It is isomorphic to theCantor set.
The bi-infinite space of strings in two letters,{0,1}Z{\displaystyle \{0,1\}^{\mathbb {Z} }}is commonly known as theBaker's map, or rather the shift on it is isomorphic, as a measure-preserving system, to the Baker's map. | https://en.wikipedia.org/wiki/Shift_space |
Ingroup theory, aword metricon adiscrete groupG{\displaystyle G}is a way to measure distance between any two elements ofG{\displaystyle G}. As the name suggests, the word metric is ametriconG{\displaystyle G}, assigning to any two elementsg{\displaystyle g},h{\displaystyle h}ofG{\displaystyle G}a distanced(g,h){\displaystyle d(g,h)}that measures how efficiently their differenceg−1h{\displaystyle g^{-1}h}can be expressed as awordwhose letters come from agenerating setfor the group. The word metric onGis very closely related to theCayley graphofG: the word metric measures the length of the shortest path in the Cayley graph between two elements ofG.
Agenerating setforG{\displaystyle G}must first be chosen before a word metric onG{\displaystyle G}is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done ingeometric group theory.
The group ofintegersZ{\displaystyle \mathbb {Z} }is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word that expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integersmandnin the word metric is equal to |m-n|, because the shortest word representing the differencem-nhas length equal to |m-n|.
For a more illustrative example, the elements of the groupZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }can be thought of asvectorsin theCartesian planewith integer coefficients. The groupZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }is generated by the standard unit vectorse1=⟨1,0⟩{\displaystyle e_{1}=\langle 1,0\rangle },e2=⟨0,1⟩{\displaystyle e_{2}=\langle 0,1\rangle }and their inverses−e1=⟨−1,0⟩{\displaystyle -e_{1}=\langle -1,0\rangle },−e2=⟨0,−1⟩{\displaystyle -e_{2}=\langle 0,-1\rangle }. TheCayley graphofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }is the so-calledtaxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point ofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vectore1{\displaystyle e_{1}}or−e1{\displaystyle -e_{1}}, depending on whether the segment is travelled in the forward or backward direction, and each vertical segment representse2{\displaystyle e_{2}}or−e2{\displaystyle -e_{2}}. A car starting from⟨1,2⟩{\displaystyle \langle 1,2\rangle }and travelling along the streets to⟨−2,4⟩{\displaystyle \langle -2,4\rangle }can make the trip by many different routes. But no matter what route is taken, the car must travel at least |1 - (-2)| = 3 horizontal blocks and at least |2 - 4| = 2 vertical blocks, for a total trip distance of at least 3 + 2 = 5. If the car goes out of its way the trip may be longer, but the minimal distance travelled by the car, equal in value to the word metric between⟨1,2⟩{\displaystyle \langle 1,2\rangle }and⟨−2,4⟩{\displaystyle \langle -2,4\rangle }is therefore equal to 5.
In general, given two elementsv=⟨i,j⟩{\displaystyle v=\langle i,j\rangle }andw=⟨k,l⟩{\displaystyle w=\langle k,l\rangle }ofZ⊕Z{\displaystyle \mathbb {Z} \oplus \mathbb {Z} }, the distance betweenv{\displaystyle v}andw{\displaystyle w}in the word metric is equal to|i−k|+|j−l|{\displaystyle |i-k|+|j-l|}.
LetGbe a group, letSbe agenerating setforG, and suppose thatSis closed under the inverse operation onG. Awordover the setSis just a finite sequencew=s1…sL{\displaystyle w=s_{1}\ldots s_{L}}whose entriess1,…,sL{\displaystyle s_{1},\ldots ,s_{L}}are elements ofS. The integerLis called the length of the wordw{\displaystyle w}. Using the group operation inG, the entries of a wordw=s1…sL{\displaystyle w=s_{1}\ldots s_{L}}can be multiplied in order, remembering that the entries are elements ofG. The result of this multiplication is an elementw¯{\displaystyle {\bar {w}}}in the groupG, which is called theevaluationof the wordw. As a special case, the empty wordw=∅{\displaystyle w=\emptyset }has length zero, and its evaluation is the identity element ofG.
Given an elementgofG, itsword norm|g| with respect to the generating setSis defined to be the shortest length of a wordw{\displaystyle w}overSwhose evaluationw¯{\displaystyle {\bar {w}}}is equal tog. Given two elementsg,hinG, the distance d(g,h) in the word metric with respect toSis defined to be|g−1h|{\displaystyle |g^{-1}h|}. Equivalently, d(g,h) is the shortest length of a wordwoverSsuch thatgw¯=h{\displaystyle g{\bar {w}}=h}.
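Since the word metric is the path metric of the Cayley graph (see below), word norms can be computed by breadth-first search given nothing more than a multiplication rule; the following sketch (function names and the depth bound are illustrative) reproduces the taxicab distance on Z⊕Z:

```python
from collections import deque

def word_norm(target, generators, identity, multiply, limit=25):
    """Length of the shortest word over the symmetrized generating set whose
    evaluation equals `target`, found by breadth-first search; `limit`
    bounds the depth so the search always terminates."""
    if target == identity:
        return 0
    seen, frontier = {identity}, deque([(identity, 0)])
    while frontier:
        g, d = frontier.popleft()
        if d >= limit:
            break
        for s in generators:
            h = multiply(g, s)
            if h == target:
                return d + 1
            if h not in seen:
                seen.add(h)
                frontier.append((h, d + 1))
    return None   # not found within the depth limit

# Z ⊕ Z with generators ±e1, ±e2: the word metric is the taxicab distance.
gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
add = lambda g, s: (g[0] + s[0], g[1] + s[1])
print(word_norm((-3, 2), gens, (0, 0), add))   # 5 = |-3| + |2|
```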
The word metric onGsatisfies the axioms for ametric, and it is not hard to prove this. The proof of the symmetry axiom d(g,h) = d(h,g) for a metric uses the assumption that the generating setSis closed under inverse.
The word metric has an equivalent definition formulated in more geometric terms using theCayley graphofGwith respect to the generating setS. When each edge of the Cayley graph is assigned a metric of length 1, the distance between two group elementsg,hinGis equal to the shortest length of a path in the Cayley graph from the vertexgto the vertexh.
The word metric onGcan also be defined without assuming that the generating setSis closed under inverse. To do this, first symmetrizeS, replacing it by a larger generating set consisting of eachs{\displaystyle s}inSas well as its inverses−1{\displaystyle s^{-1}}. Then define the word metric with respect toSto be the word metric with respect to the symmetrization ofS.
Suppose thatFis the free group on the two element set{a,b}{\displaystyle \{a,b\}}. A wordwin the symmetric generating set{a,b,a−1,b−1}{\displaystyle \{a,b,a^{-1},b^{-1}\}}is said to be reduced if the lettersa,a−1{\displaystyle a,a^{-1}}do not occur next to each other inw, nor do the lettersb,b−1{\displaystyle b,b^{-1}}. Every elementg∈F{\displaystyle g\in F}is represented by a unique reduced word, and this reduced word is the shortest word representingg. For example, since the wordw=b−1a{\displaystyle w=b^{-1}a}is reduced and has length 2, the word norm ofw{\displaystyle w}equals 2, so the distance in the word norm betweenb{\displaystyle b}anda{\displaystyle a}equals 2. This can be visualized in terms of the Cayley graph, where the shortest path betweenbandahas length 2.
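Free reduction is easy to implement, and because reduced words are the unique shortest representatives it immediately yields word norms and distances (a sketch; capital letters encode inverses, as above):

```python
def reduce_word(w):
    """Freely reduce a word over {a, b, A, B}, where A = a^{-1}, B = b^{-1}:
    repeatedly cancel adjacent inverse pairs."""
    out = []
    for c in w:
        if out and out[-1] != c and out[-1].lower() == c.lower():
            out.pop()               # cancel an inverse pair
        else:
            out.append(c)
    return "".join(out)

def distance(g, h):
    """d(g, h) = |g^{-1} h|; inverting a word reverses it and swaps case."""
    return len(reduce_word(g[::-1].swapcase() + h))

print(reduce_word("aBbA"))          # "" : the word evaluates to the identity
print(distance("b", "a"))           # 2, since b^{-1}a = "Ba" is already reduced
```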
The groupGactson itself by left multiplication: the action of eachk∈G{\displaystyle k\in G}takes eachg∈G{\displaystyle g\in G}tokg{\displaystyle kg}. This action is anisometryof the word metric. The proof is simple: the distance betweenkg{\displaystyle kg}andkh{\displaystyle kh}equals|(kg)−1(kh)|=|g−1h|{\displaystyle |(kg)^{-1}(kh)|=|g^{-1}h|}, which equals the distance betweeng{\displaystyle g}andh{\displaystyle h}.
In general, the word metric on a groupGis not unique, because different symmetric generating sets give different word metrics. However, finitely generated word metrics are unique up tobilipschitzequivalence: ifS{\displaystyle S},T{\displaystyle T}are two symmetric, finite generating sets forGwith corresponding word metricsdS{\displaystyle d_{S}},dT{\displaystyle d_{T}}, then there is a constantK≥1{\displaystyle K\geq 1}such that for anyg,h∈G{\displaystyle g,h\in G},
This constantKis just the maximum of thedS{\displaystyle d_{S}}word norms of elements ofT{\displaystyle T}and thedT{\displaystyle d_{T}}word norms of elements ofS{\displaystyle S}. This proof is also easy: any word overScan be converted by substitution into a word overT, expanding the length of the word by a factor of at mostK, and similarly for converting words overTinto words overS.
The bilipschitz equivalence of word metrics implies in turn that thegrowth rateof a finitely generated group is a well-defined isomorphism invariant of the group, independent of the choice of a finite generating set. This implies in turn that various properties of growth, such as polynomial growth, the degree of polynomial growth, and exponential growth, are isomorphism invariants of groups. This topic is discussed further in the article on thegrowth rateof a group.
Ingeometric group theory, groups are studied by theiractionson metric spaces. A principle that generalizes the bilipschitz invariance of word metrics says that any finitely generated word metric onGisquasi-isometricto anyproper,geodesic metric spaceon whichGacts,properly discontinuouslyandcocompactly. Metric spaces on whichGacts in this manner are calledmodel spacesforG.
It follows in turn that any quasi-isometrically invariant property satisfied by the word metric ofGor by any model space ofGis an isomorphism invariant ofG. Moderngeometric group theoryis in large part the study of quasi-isometry invariants. | https://en.wikipedia.org/wiki/Word_metric |
Incomputability theoryandcomputational complexity theory, adecision problemis acomputational problemthat can be posed as ayes–no questionon asetof input values. An example of a decision problem is deciding whether a given natural number isprime. Another example is the problem, "given two numbersxandy, doesxevenly dividey?"
Adecision procedurefor a decision problem is analgorithmicmethod that answers the yes-no question on all inputs, and a decision problem is calleddecidableif there is a decision procedure for it. For example, the decision problem "given two numbersxandy, doesxevenly dividey?" is decidable since there is a decision procedure calledlong divisionthat gives the steps for determining whetherxevenly dividesyand the correct answer,YESorNO, accordingly. Some of the most important problems in mathematics areundecidable, e.g. thehalting problem.
The field of computational complexity theory categorizesdecidabledecision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of thecomputational resourcesneeded by the most efficient algorithm for a certain problem. On the other hand, the field ofrecursion theorycategorizesundecidabledecision problems byTuring degree, which is a measure of the noncomputability inherent in any solution.
Adecision problemis theformal languageof all inputs for which the output (the answer to the yes-no question on a given input) isYES.[notes 1]
A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient procedures ofprimality testingare known, the existence of any effective procedure is enough to establish decidability.
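A decision procedure of exactly this kind, written out (trial division; far slower than serious primality tests, but its existence is all that decidability requires):

```python
def is_prime(n):
    """Decision procedure for the set of primes: test every possible
    nontrivial factor up to the square root of n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

print([n for n in range(20) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19]
```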
Problems that are not decidable areundecidable, which means it is not possible to create an algorithm (efficient or not) that solves them. Thehalting problemis an important undecidable decision problem; for more examples, seelist of undecidable problems.
Decision problems can be ordered according tomany-one reducibilityand related to feasible reductions such aspolynomial-time reductions. A decision problemPis said to becompletefor a set of decision problemsSifPis a member ofSand every problem inScan be reduced toP. Complete decision problems are used incomputational complexity theoryto characterizecomplexity classesof decision problems. For example, theBoolean satisfiability problemis complete for the classNPof decision problems under polynomial-time reducibility.
Decision problems are closely related tofunction problems, which can have answers that are more complex than a simpleYESorNO. A corresponding function problem is "given two numbersxandy, what isxdivided byy?".
Afunction problemconsists of apartial functionf; the informal "problem" is to compute the values offon the inputs for which it is defined.
Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a functionfis the set of pairs (x,y) such thatf(x) =y.) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair (x,y)) when the function is not computable inpolynomial time(in which case running time is computed as a function ofxalone). The functionf(x) = 2^xhas this property.
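A sketch of that last observation: deciding membership in the graph of f(x) = 2^x is cheap relative to the size of the pair, even though writing down f(x) itself takes exponentially many bits in the length of x:

```python
def in_graph_of_f(x, y):
    """Decide whether (x, y) lies on the graph of f(x) = 2**x."""
    return y == 1 << x              # 1 << x equals 2**x for nonnegative integers

print(in_graph_of_f(5, 32))         # True
print(in_graph_of_f(5, 33))         # False
```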
Every decision problem can be converted into the function problem of computing thecharacteristic functionof the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of anNP-completeproblem and itsco-NP-completecomplementis exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.
Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding thebestanswer to a particular input. Optimization problems arise naturally in many applications, such as thetraveling salesman problemand many questions inlinear programming.
Function and optimization problems are often transformed into decision problems by considering the question of whether the output isequal toorless than or equal toa given value. This allows the complexity of the corresponding decision problem to be studied; and in many cases the original function or optimization problem can be solved by solving its corresponding decision problem. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for eachN, to decide whether the graph has any tour with weight less thanN. By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
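A sketch of this transformation on a tiny instance (a brute-force decision oracle plus a binary-search driver; both are illustrative, not efficient, and the caller is assumed to supply an upper bound that some tour satisfies):

```python
from itertools import permutations

def tour_below(weights, n):
    """Decision problem: is there a tour visiting every vertex with weight < n?"""
    cities = range(len(weights))
    return any(
        sum(weights[p[i]][p[(i + 1) % len(p)]] for i in range(len(p))) < n
        for p in permutations(cities))

def minimal_tour_weight(weights, upper_bound):
    """Solve the optimization problem by repeatedly answering the decision
    problem: binary search for the least N admitting a tour of weight < N."""
    lo, hi = 0, upper_bound         # invariant: no tour < lo; some tour < hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if tour_below(weights, mid):
            hi = mid
        else:
            lo = mid
    return hi - 1                   # the minimal tour weight

w = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
print(minimal_tour_weight(w, 100))  # 15 = 2 + 4 + 9
```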
Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such asoperations research. | https://en.wikipedia.org/wiki/Word_problem_(computability) |
Incomputational mathematics, aword problemis theproblem of decidingwhether two given expressions are equivalent with respect to a set ofrewritingidentities. A prototypical example is theword problem for groups, but there are many other instances as well. Somedeep resultsof computational theory concern theundecidabilityof this question in many important cases.[1]
Incomputer algebraone often wishes to encode mathematical expressions using an expression tree. But there are often multiple equivalent expression trees. The question naturally arises of whether there is an algorithm which, given as input two expressions, decides whether they represent the same element. Such an algorithm is called asolution to the word problem. For example, imagine thatx,y,z{\displaystyle x,y,z}are symbols representingreal numbers; then a relevant solution to the word problem would, given the input(x⋅y)/z=?(x/z)⋅y{\displaystyle (x\cdot y)/z\mathrel {\overset {?}{=}} (x/z)\cdot y}, produce the outputEQUAL, and similarly produceNOT_EQUALfrom(x⋅y)/z=?(x/x)⋅y{\displaystyle (x\cdot y)/z\mathrel {\overset {?}{=}} (x/x)\cdot y}.
The most direct solution to a word problem takes the form of a normal form theorem and algorithm which maps every element in anequivalence classof expressions to a single encoding known as thenormal form; the word problem is then solved by comparing these normal forms viasyntactic equality.[1]For example, one might decide thatx⋅y⋅z−1{\displaystyle x\cdot y\cdot z^{-1}}is the normal form of(x⋅y)/z{\displaystyle (x\cdot y)/z},(x/z)⋅y{\displaystyle (x/z)\cdot y}, and(y/z)⋅x{\displaystyle (y/z)\cdot x}, and devise a transformation system to rewrite those expressions to that form, in the process proving that all equivalent expressions will be rewritten to the same normal form.[2]But not all solutions to the word problem use a normal form theorem; there are algebraic properties which indirectly imply the existence of an algorithm.[1]
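A sketch of the normal-form approach for the expressions above (parsing is omitted; since the symbols stand for nonzero real numbers, the product is commutative and associative, so an expression is determined by the net exponent of each symbol):

```python
from collections import Counter

def normal_form(factors):
    """Normal form of a product given as (symbol, exponent) pairs;
    e.g. (x*y)/z is [('x', 1), ('y', 1), ('z', -1)]."""
    net = Counter()
    for sym, exp in factors:
        net[sym] += exp
    return tuple(sorted((s, e) for s, e in net.items() if e != 0))

lhs = normal_form([("x", 1), ("y", 1), ("z", -1)])   # (x*y)/z
rhs = normal_form([("x", 1), ("z", -1), ("y", 1)])   # (x/z)*y
bad = normal_form([("x", 1), ("x", -1), ("y", 1)])   # (x/x)*y
print("EQUAL" if lhs == rhs else "NOT_EQUAL")        # EQUAL
print("EQUAL" if lhs == bad else "NOT_EQUAL")        # NOT_EQUAL
```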
While the word problem asks whether two terms containingconstantsare equal, a proper extension of the word problem known as theunification problemasks whether two termst1,t2{\displaystyle t_{1},t_{2}}containingvariableshaveinstancesthat are equal, or in other words whether the equationt1=t2{\displaystyle t_{1}=t_{2}}has any solutions. As a common example,2+3=?8+(−3){\displaystyle 2+3\mathrel {\overset {?}{=}} 8+(-3)}is a word problem in theinteger groupZ{\displaystyle \mathbb {Z} },
while2+x=?8+(−x){\displaystyle 2+x\mathrel {\overset {?}{=}} 8+(-x)}is a unification problem in the same group; since the former terms happen to be equal inZ{\displaystyle \mathbb {Z} }, the latter problem has thesubstitution{x↦3}{\displaystyle \{x\mapsto 3\}}as a solution.
One of the most deeply studied cases of the word problem is in the theory ofsemigroupsandgroups. A timeline of papers relevant to theNovikov-Boone theoremis as follows:[3][4]
The accessibility problem forstring rewriting systems(semi-Thue systems or semigroups) can be stated as follows: Given a semi-Thue systemT:=(Σ,R){\displaystyle T:=(\Sigma ,R)}and two words (strings)u,v∈Σ∗{\displaystyle u,v\in \Sigma ^{*}}, canu{\displaystyle u}be transformed intov{\displaystyle v}by applying rules fromR{\displaystyle R}? Note that the rewriting here is one-way. The word problem is the accessibility problem for symmetric rewrite relations, i.e. Thue systems.[27]
The accessibility and word problems areundecidable, i.e. there is no general algorithm for solving this problem.[28]This even holds if we limit the systems to have finite presentations, i.e. a finite set of symbols and a finite set of relations on those symbols.[27]Even the word problem restricted toground termsis not decidable for certain finitely presented semigroups.[29][30]
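Undecidability does not prevent bounded searching; the sketch below explores the one-way rewriting relation breadth-first, so a positive answer is definitive while a negative answer only means "not found within the budget" (the step budget and the example system are illustrative):

```python
from collections import deque

def accessible(u, v, rules, max_steps=10000):
    """Bounded BFS for the accessibility problem of a semi-Thue system:
    try to transform u into v by one-way application of the rules."""
    seen, queue, steps = {u}, deque([u]), 0
    while queue and steps < max_steps:
        w = queue.popleft(); steps += 1
        if w == v:
            return True
        for lhs, rhs in rules:
            i = w.find(lhs)
            while i != -1:                        # rewrite at every occurrence
                nxt = w[:i] + rhs + w[i + len(lhs):]
                if nxt not in seen:
                    seen.add(nxt); queue.append(nxt)
                i = w.find(lhs, i + 1)
    return False

# With the single rule ab -> ba, each a can migrate to the right past a b:
print(accessible("aabb", "baba", [("ab", "ba")]))   # True
```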
Given apresentation⟨S∣R⟩{\displaystyle \langle S\mid {\mathcal {R}}\rangle }for a groupG, the word problem is the algorithmic problem of deciding, given as input two words inS, whether they represent the same element ofG. The word problem is one of three algorithmic problems for groups proposed byMax Dehnin 1911. It was shown byPyotr Novikovin 1955 that there exists a finitely presented groupGsuch that the word problem forGisundecidable.[31]
One of the earliest proofs that a word problem is undecidable was forcombinatory logic: when are two strings of combinators equivalent? Because combinators encode all possibleTuring machines, and the equivalence of two Turing machines is undecidable, it follows that the equivalence of two strings of combinators is undecidable.Alonzo Churchobserved this in 1936.[32]
Likewise, one has essentially the same problem in (untyped)lambda calculus: given two distinct lambda expressions, there is no algorithm which can discern whether they are equivalent or not;equivalence is undecidable. For several typed variants of the lambda calculus, equivalence is decidable by comparison of normal forms.
The word problem for anabstract rewriting system(ARS) is quite succinct: given objectsxandyare they equivalent under↔∗{\displaystyle {\stackrel {*}{\leftrightarrow }}}?[29]The word problem for an ARS isundecidablein general. However, there is acomputablesolution for the word problem in the specific case where every object reduces to a unique normal form in a finite number of steps (i.e. the system isconvergent): two objects are equivalent under↔∗{\displaystyle {\stackrel {*}{\leftrightarrow }}}if and only if they reduce to the same normal form.[33]TheKnuth-Bendix completion algorithmcan be used to transform a set of equations into a convergentterm rewriting system.
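A one-rule convergent example makes this concrete: the rule 10 → 01 terminates (each step removes an inversion) and is confluent, so the word problem for its symmetric closure reduces to comparing normal forms (an illustrative sketch):

```python
def normal_form(word):
    """Exhaustively apply 10 -> 01; the unique normal form has all 0s first."""
    while "10" in word:
        word = word.replace("10", "01", 1)
    return word

def equivalent(u, v):
    """u and v are equivalent under the symmetric closure iff both reach
    the same normal form."""
    return normal_form(u) == normal_form(v)

print(normal_form("10110"))             # 00111
print(equivalent("10110", "01101"))     # True: both normalize to 00111
```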
Inuniversal algebraone studiesalgebraic structuresconsisting of agenerating setA, a collection ofoperationsonAof finite arity, and a finite set of identities that these operations must satisfy. The word problem for an algebra is then to determine, given two expressions (words) involving the generators and operations, whether they represent the same element of the algebra modulo the identities. The word problems for groups and semigroups can be phrased as word problems for algebras.[1]
The word problem on freeHeyting algebrasis difficult.[34]The only known results are that the free Heyting algebra on one generator is infinite, and that the freecomplete Heyting algebraon one generator exists (and has one more element than the free Heyting algebra).
The word problem onfree latticesand more generally freebounded latticeshas a decidable solution. Bounded lattices are algebraic structures with the twobinary operations∨ and ∧ and the two constants (nullary operations) 0 and 1. The set of all well-formedexpressionsthat can be formulated using these operations on elements from a given set of generatorsXwill be calledW(X). This set of words contains many expressions that turn out to denote equal values in every lattice. For example, ifais some element ofX, thena∨ 1 = 1 anda∧ 1 =a. The word problem for free bounded lattices is the problem of determining which of these elements ofW(X) denote the same element in the free bounded latticeFX, and hence in every bounded lattice.
The word problem may be resolved as follows. A relation ≤~onW(X) may be definedinductivelyby settingw≤~vif and only ifone of the following holds:
1. w=v, forwandvelements ofX;
2. w= 0;
3. v= 1;
4. w=w1∨w2and bothw1≤~vandw2≤~vhold;
5. v=v1∧v2and bothw≤~v1andw≤~v2hold;
6. w=w1∧w2and at least one ofw1≤~v,w2≤~vholds;
7. v=v1∨v2and at least one ofw≤~v1,w≤~v2holds.
This defines apreorder≤~onW(X), so anequivalence relationcan be defined byw~vwhenw≤~vandv≤~w. One may then show that thepartially orderedquotient setW(X)/~ is the free bounded latticeFX.[35][36]Theequivalence classesofW(X)/~ are the sets of all wordswandvwithw≤~vandv≤~w. Two well-formed wordsvandwinW(X) denote the same value in every bounded lattice if and only ifw≤~vandv≤~w; the latter conditions can be effectively decided using the above inductive definition. For example, this procedure shows that the wordsx∧zandx∧z∧(x∨y) denote the same value in every bounded lattice. The case of lattices that are not bounded is treated similarly, omitting rules 2 and 3 in the above construction of ≤~.
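The inductive test is directly executable; in the sketch below terms are nested tuples over generator names and the constants '0' and '1', and the rule numbers refer to the list above (the encoding is an illustrative assumption):

```python
def below(w, v):
    """Decide w <=~ v for terms built from generators, '0', '1',
    ('or', a, b) and ('and', a, b)."""
    if w == v and isinstance(w, str) and w not in ("0", "1"):
        return True                                # rule 1: equal generators
    if w == "0" or v == "1":
        return True                                # rules 2 and 3
    if isinstance(w, tuple) and w[0] == "or":
        return below(w[1], v) and below(w[2], v)   # rule 4
    if isinstance(v, tuple) and v[0] == "and":
        return below(w, v[1]) and below(w, v[2])   # rule 5
    if isinstance(w, tuple) and w[0] == "and":
        if below(w[1], v) or below(w[2], v):       # rule 6
            return True
    if isinstance(v, tuple) and v[0] == "or":
        if below(w, v[1]) or below(w, v[2]):       # rule 7
            return True
    return False

lhs = ("and", "x", "z")                             # x ∧ z
rhs = ("and", ("and", "x", "z"), ("or", "x", "y"))  # x ∧ z ∧ (x ∨ y)
print(below(lhs, rhs) and below(rhs, lhs))          # True: the same element of FX
```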
Bläsius and Bürckert[37]demonstrate theKnuth–Bendix algorithmon an axiom set for groups.
The algorithm yields aconfluentandnoetherianterm rewrite systemthat transforms every term into a uniquenormal form.[38]The rewrite rules are numbered non-contiguously because some rules became redundant and were deleted during the algorithm run.
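A small term-rewriting sketch in this spirit (the rules are the standard ten-rule convergent system for groups produced by completion; the term encoding and the innermost strategy are illustrative choices):

```python
# "1" is the identity, "a", "b" are constants; ("*", s, t) is s·t, ("~", t) is t⁻¹.

def step(t):
    """Apply one rewrite rule at the root, or return None."""
    if isinstance(t, tuple) and t[0] == "*":
        x, y = t[1], t[2]
        if x == "1": return y                                   # 1·y -> y
        if y == "1": return x                                   # x·1 -> x
        if x == ("~", y) or y == ("~", x): return "1"           # x⁻¹·x -> 1, x·x⁻¹ -> 1
        if isinstance(x, tuple) and x[0] == "*":
            return ("*", x[1], ("*", x[2], y))                  # (x·y)·z -> x·(y·z)
        if (isinstance(x, tuple) and x[0] == "~" and
                isinstance(y, tuple) and y[0] == "*" and y[1] == x[1]):
            return y[2]                                         # x⁻¹·(x·z) -> z
        if isinstance(y, tuple) and y[0] == "*" and y[1] == ("~", x):
            return y[2]                                         # x·(x⁻¹·z) -> z
    if isinstance(t, tuple) and t[0] == "~":
        x = t[1]
        if x == "1": return "1"                                 # 1⁻¹ -> 1
        if isinstance(x, tuple) and x[0] == "~": return x[1]    # (x⁻¹)⁻¹ -> x
        if isinstance(x, tuple) and x[0] == "*":
            return ("*", ("~", x[2]), ("~", x[1]))              # (x·y)⁻¹ -> y⁻¹·x⁻¹
    return None

def normalize(t):
    """Rewrite subterms first, then the root; convergence makes the result unique."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(normalize(s) for s in t[1:])
    r = step(t)
    return normalize(r) if r is not None else t

print(normalize(("*", "a", ("~", "a"))))        # '1'
print(normalize(("*", "1", ("*", "a", "b"))))   # ('*', 'a', 'b'):  a·b
print(normalize(("*", "b", ("*", "1", "a"))))   # ('*', 'b', 'a'):  b·a
```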
The equality of two terms follows from the axioms if and only if both terms are transformed into literally the same normal form term. For example, the termsa⋅a−1{\displaystyle a\cdot a^{-1}}and(a⋅b)⋅(b−1⋅a−1){\displaystyle (a\cdot b)\cdot (b^{-1}\cdot a^{-1})}share the same normal form, viz.1{\displaystyle 1}; therefore both terms are equal in every group.
As another example, the terms1⋅(a⋅b){\displaystyle 1\cdot (a\cdot b)}andb⋅(1⋅a){\displaystyle b\cdot (1\cdot a)}have the normal formsa⋅b{\displaystyle a\cdot b}andb⋅a{\displaystyle b\cdot a}, respectively. Since the normal forms are literally different, the original terms cannot be equal in every group. In fact, they are usually different innon-abelian groups. | https://en.wikipedia.org/wiki/Word_problem_(mathematics) |
Inmathematics, especially in the area ofabstract algebraknown ascombinatorial group theory, theword problemfor afinitely generated groupG{\displaystyle G}is the algorithmic problem of deciding whether two words in the generators represent the same element ofG{\displaystyle G}. The word problem is a well-known example of anundecidable problem.
IfA{\displaystyle A}is a finite set ofgeneratorsforG{\displaystyle G}, then the word problem is the membership problem for theformal languageof all words inA{\displaystyle A}and a formal set of inverses that map to the identity under the natural map from thefree monoid with involutiononA{\displaystyle A}to the groupG{\displaystyle G}. IfB{\displaystyle B}is another finite generating set forG{\displaystyle G}, then the word problem over the generating setB{\displaystyle B}is equivalent to the word problem over the generating setA{\displaystyle A}. Thus one can speak unambiguously of the decidability of the word problem for the finitely generated groupG{\displaystyle G}.
The related but differentuniform word problemfor a classK{\displaystyle K}of recursively presented groups is the algorithmic problem of deciding, given as input apresentationP{\displaystyle P}for a groupG{\displaystyle G}in the classK{\displaystyle K}and two words in the generators ofG{\displaystyle G}, whether the words represent the same element ofG{\displaystyle G}. Some authors require the classK{\displaystyle K}to be definable by arecursively enumerableset of presentations.
Throughout the history of the subject, computations in groups have been carried out using variousnormal forms. These usually implicitly solve the word problem for the groups in question. In 1911Max Dehnproposed that the word problem was an important area of study in its own right,[1]together with theconjugacy problemand thegroup isomorphism problem. In 1912 he gave an algorithm that solves both the word and conjugacy problem for thefundamental groupsof closed orientable two-dimensional manifolds of genus greater than or equal to 2.[2]Subsequent authors have greatly extendedDehn's algorithmand applied it to a wide range of group theoreticdecision problems.[3][4][5]
It was shown byPyotr Novikovin 1955 that there exists a finitely presented groupG{\displaystyle G}such that the word problem forG{\displaystyle G}isundecidable.[6]It follows immediately that the uniform word problem is also undecidable. A different proof was obtained byWilliam Boonein 1958.[7]
The word problem was one of the first examples of an unsolvable problem to be found not inmathematical logicor thetheory of algorithms, but in one of the central branches of classical mathematics,algebra. As a result of its unsolvability, several other problems in combinatorial group theory have been shown to be unsolvable as well.
The word problem is in fact solvable for many groupsG{\displaystyle G}. For example,polycyclic groupshave solvable word problems since the normal form of an arbitrary word in a polycyclic presentation is readily computable; other algorithms for groups may, in suitable circumstances, also solve the word problem, see theTodd–Coxeter algorithm[8]and theKnuth–Bendix completion algorithm.[9]On the other hand, the fact that a particular algorithm does not solve the word problem for a particular group does not show that the group has an unsolvable word problem. For instance Dehn's algorithm does not solve the word problem for the fundamental group of thetorus. However this group is the direct product of two infinite cyclic groups and so has a solvable word problem.
In more concrete terms, the uniform word problem can be expressed as arewritingquestion, forliteral strings.[10]For a presentationP{\displaystyle P}of a groupG{\displaystyle G},P{\displaystyle P}will specify a certain number of generators
forG{\displaystyle G}. We need to introduce one letter forx{\displaystyle x}and another (for convenience) for the group element represented byx−1{\displaystyle x^{-1}}. Call these letters (twice as many as the generators) the alphabetΣ{\displaystyle \Sigma }for our problem. Then each element inG{\displaystyle G}is represented insome wayby a product
of symbols fromΣ{\displaystyle \Sigma }, of some length, multiplied inG{\displaystyle G}. The string of length 0 (null string) stands for theidentity elemente{\displaystyle e}ofG{\displaystyle G}. The crux of the whole problem is to be able to recogniseallthe wayse{\displaystyle e}can be represented, given some relations.
The effect of therelationsinG{\displaystyle G}is to make various such strings represent the same element ofG{\displaystyle G}. In fact the relations provide a list of strings that can be either introduced where we want, or cancelled out whenever we see them, without changing the 'value', i.e. the group element that is the result of the multiplication.
For a simple example, consider the group given by the presentation⟨a|a3=e⟩{\displaystyle \langle a\,|\,a^{3}=e\rangle }. WritingA{\displaystyle A}for the inverse ofa{\displaystyle a}, we have possible strings combining any number of the symbolsa{\displaystyle a}andA{\displaystyle A}. Whenever we seeaaa{\displaystyle aaa}, oraA{\displaystyle aA}orAa{\displaystyle Aa}we may strike these out. We should also remember to strike outAAA{\displaystyle AAA}; this says that since the cube ofa{\displaystyle a}is the identity element ofG{\displaystyle G}, so is the cube of the inverse ofa{\displaystyle a}. Under these conditions the word problem becomes easy. First reduce strings to the empty string,a{\displaystyle a},aa{\displaystyle aa},A{\displaystyle A}orAA{\displaystyle AA}. Then note that we may also multiply byaaa{\displaystyle aaa}, so we can convertA{\displaystyle A}toaa{\displaystyle aa}and convertAA{\displaystyle AA}toa{\displaystyle a}. The result is that the word problem, here for thecyclic groupof order three, is solvable.
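In code, the whole procedure collapses to counting the net exponent of a modulo 3 (a sketch; A stands for the inverse of a, as above):

```python
def normal_form(word):
    """Normal form in <a | a^3 = e>: one of "", "a", "aa"."""
    return "a" * ((word.count("a") - word.count("A")) % 3)

print(normal_form("aAaaA"))                      # "a"
print(normal_form("AA") == normal_form("a"))     # True: AA and a are equal in the group
```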
This is not, however, the typical case. For the example, we have acanonical formavailable that reduces any string to one of length at most three, by decreasing the length monotonically. In general, it is not true that one can get a canonical form for the elements, by stepwise cancellation. One may have to use relations to expand a string many-fold, in order eventually to find a cancellation that brings the length right down.
The upshot is, in the worst case, that the relation between strings that says they are equal inG{\displaystyle G}is anundecidable problem.
The following groups have a solvable word problem:
Examples with unsolvable word problems are also known:
The word problem for a recursively presented group can be partially solved in the following sense:
More informally, there exists an algorithm that halts ifu=v{\displaystyle u=v}, but does not do so otherwise.
It follows that to solve the word problem forP{\displaystyle P}it is sufficient to construct a recursive functiong{\displaystyle g}such that:
Howeveru=v{\displaystyle u=v}inG{\displaystyle G}if and only ifuv−1=1{\displaystyle uv^{-1}=1}inG{\displaystyle G}. It follows that to solve the word problem forP{\displaystyle P}it is sufficient to construct a recursive functionh{\displaystyle h}such that:
The following will be proved as an example of the use of this technique: a finitely presented,residually finitegroup has a solvable word problem.
Proof:SupposeG=⟨X|R⟩{\displaystyle G=\langle X\,|\,R\rangle }is a finitely presented, residually finite group.
LetS{\displaystyle S}be the group of all permutations of the natural numbersN{\displaystyle \mathbb {N} }that fix all but finitely many numbers. Then:
1. S{\displaystyle S}is locally finite and contains a copy of every finite group;
2. the word problem inS{\displaystyle S}is solvable by computing products of permutations;
3. there is a recursive enumeration of all mappings of the finite setX{\displaystyle X}intoS{\displaystyle S};
4. sinceG{\displaystyle G}is residually finite, ifw{\displaystyle w}is a word in the generatorsX{\displaystyle X}ofG{\displaystyle G}, thenw≠1{\displaystyle w\neq 1}inG{\displaystyle G}if and only if some mapping ofX{\displaystyle X}intoS{\displaystyle S}induces a homomorphism under which the image ofw{\displaystyle w}is not the identity.
Given these facts, the algorithm defined by the following pseudocode:
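(The pseudocode can be rendered as the runnable Python sketch below, which interleaves the two searches; the encoding of words as strings over lowercase generators with uppercase inverses, the helper names, the hard-coded search bounds, and the demonstration group, the residually finite infinite dihedral group, are all illustrative assumptions.)

```python
from itertools import permutations, product
from collections import deque

def free_reduce(w):
    """Cancel adjacent inverse pairs (lowercase generators, uppercase inverses)."""
    out = []
    for c in w:
        if out and out[-1] != c and out[-1].lower() == c.lower():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def eval_perm(word, images):
    """Evaluate a word as a permutation of {0..n-1}, given generator images."""
    n = len(next(iter(images.values())))
    res = tuple(range(n))
    for c in word:
        p = images[c.lower()]
        if c.isupper():                        # invert the permutation
            q = [0] * n
            for i, pi in enumerate(p):
                q[pi] = i
            p = tuple(q)
        res = tuple(p[res[i]] for i in range(n))
    return res

def nontrivial_in_finite_quotient(gens, relators, w, n):
    """Search all maps into the symmetric group S_n: if one kills every relator
    but not w, then w != 1 in G; residual finiteness guarantees success for
    some n whenever w != 1."""
    ident = tuple(range(n))
    for choice in product(list(permutations(range(n))), repeat=len(gens)):
        images = dict(zip(gens, choice))
        if all(eval_perm(r, images) == ident for r in relators):
            if eval_perm(w, images) != ident:
                return True
    return False

def trivial_by_insertion(relators, w, bound):
    """Semi-decide w = 1: BFS inserting relators (and their inverses) anywhere
    and freely reducing, seeking the empty word; `bound` caps word length."""
    rels = set(relators) | {r[::-1].swapcase() for r in relators}
    start = free_reduce(w)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == "":
            return True
        for i in range(len(cur) + 1):
            for r in rels:
                nxt = free_reduce(cur[:i] + r + cur[i:])
                if len(nxt) <= bound and nxt not in seen:
                    seen.add(nxt); queue.append(nxt)
    return False

def word_problem(gens, relators, w):
    """Interleave the two searches; for a residually finite finitely presented
    group one of them succeeds (the range is capped only for this demo)."""
    for n in range(1, 6):
        if trivial_by_insertion(relators, w, bound=len(w) + 2 * n):
            return True        # w = 1 in G
        if nontrivial_in_finite_quotient(gens, relators, w, n):
            return False       # w != 1 in G

# The infinite dihedral group Z/2 * Z/2 is residually finite:
print(word_problem("ab", ["aa", "bb"], "abba"))   # True:  abba = a(bb)a = aa = 1
print(word_problem("ab", ["aa", "bb"], "abab"))   # False: ab has infinite order
```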
defines a recursive functionh{\displaystyle h}such that:
This shows thatG{\displaystyle G}has solvable word problem.
The criterion given above, for the solvability of the word problem in a single group, can be extended by a straightforward argument. This gives the following criterion for the uniform solvability of the word problem for a class of finitely presented groups:
In other words, the uniform word problem for the class of all finitely presented groups with solvable word problem is unsolvable. This has some interesting consequences. For instance, theHigman embedding theoremcan be used to construct a group containing an isomorphic copy of every finitely presented group with solvable word problem. It seems natural to ask whether this group can have solvable word problem. But it is a consequence of the Boone-Rogers result that: there is no universal solvable word problem group, that is, no finitely presented group with a solvable word problem can contain an isomorphic copy of every finitely presented group with a solvable word problem.
Remark:SupposeG=⟨X|R⟩{\displaystyle G=\langle X\,|\,R\rangle }is a finitely presented group with solvable word problem andH{\displaystyle H}is a finite subset ofG{\displaystyle G}. LetH∗=⟨H⟩{\displaystyle H^{*}=\langle H\rangle }, be the group generated byH{\displaystyle H}. Then the word problem inH∗{\displaystyle H^{*}}is solvable: given two wordsh,k{\displaystyle h,k}in the generatorsH{\displaystyle H}ofH∗{\displaystyle H^{*}}, write them as words inX{\displaystyle X}and compare them using the solution to the word problem inG{\displaystyle G}. It is easy to think that this demonstrates a uniform solution of the word problem for the classK{\displaystyle K}(say) of finitely generated groups that can be embedded inG{\displaystyle G}. If this were the case, the non-existence of a universal solvable word problem group would follow easily from Boone-Rogers. However, the solution just exhibited for the word problem for groups inK{\displaystyle K}is not uniform. To see this, consider a groupJ=⟨Y|T⟩∈K{\displaystyle J=\langle Y\,|\,T\rangle \in K}; in order to use the above argument to solve the word problem inJ{\displaystyle J}, it is first necessary to exhibit a mappinge:Y→G{\displaystyle e:Y\to G}that extends to an embeddinge∗:J→G{\displaystyle e^{*}:J\to G}. If there were a recursive function that mapped (finitely generated) presentations of groups inK{\displaystyle K}to embeddings intoG{\displaystyle G}, then a uniform solution of the word problem inK{\displaystyle K}could indeed be constructed. But there is no reason, in general, to suppose that such a recursive function exists. However, it turns out that, using a more sophisticated argument, the word problem inJ{\displaystyle J}can be solvedwithoutusing an embeddinge:J→G{\displaystyle e:J\to G}. Instead anenumeration of homomorphismsis used, and since such an enumeration can be constructed uniformly, it results in a uniform solution to the word problem inK{\displaystyle K}.
SupposeG{\displaystyle G}were a universal solvable word problem group. Given a finite presentationP=⟨X|R⟩{\displaystyle P=\langle X\,|\,R\rangle }of a groupH{\displaystyle H}, one can recursively enumerate all homomorphismsh:H→G{\displaystyle h:H\to G}by first enumerating all mappingsh†:X→G{\displaystyle h^{\dagger }:X\to G}. Not all of these mappings extend to homomorphisms, but, sinceh†(R){\displaystyle h^{\dagger }(R)}is finite, it is possible to distinguish between homomorphisms and non-homomorphisms, by using the solution to the word problem inG{\displaystyle G}. "Weeding out" non-homomorphisms gives the required recursive enumeration:h1,h2,…,hn,…{\displaystyle h_{1},h_{2},\ldots ,h_{n},\ldots }.
IfH{\displaystyle H}has solvable word problem, then at least one of these homomorphisms must be an embedding. So given a wordw{\displaystyle w}in the generators ofH{\displaystyle H}:
Consider the algorithm that, given a wordw{\displaystyle w}in the generators ofH{\displaystyle H}, runs throughn=1,2,3,…{\displaystyle n=1,2,3,\ldots }and, at stagen{\displaystyle n}, uses the solution to the word problem inG{\displaystyle G}to test whetherhn(w)≠1{\displaystyle h_{n}(w)\neq 1}(which certifiesw≠1{\displaystyle w\neq 1}inH{\displaystyle H}), while in parallel enumerating products of conjugates of the relators ofP{\displaystyle P}to semi-decide whetherw=1{\displaystyle w=1}inH{\displaystyle H}.
This describes a recursive function that takes a wordw{\displaystyle w}and decides whetherw=1{\displaystyle w=1}inH{\displaystyle H}.
The functionf{\displaystyle f}clearly depends on the presentationP{\displaystyle P}. Considering it to be a function of the two variables, a recursive functionf(P,w){\displaystyle f(P,w)}has been constructed that takes a finite presentationP{\displaystyle P}for a groupH{\displaystyle H}and a wordw{\displaystyle w}in the generators ofH{\displaystyle H}, such that wheneverG{\displaystyle G}has soluble word problem:
But this uniformly solves the word problem for the class of all finitely presented groups with solvable word problem, contradicting Boone-Rogers. This contradiction provesG{\displaystyle G}cannot exist.
There are a number of results that relate solvability of the word problem and algebraic structure. The most significant of these is theBoone-Higman theorem: a finitely generated group has a solvable word problem if and only if it can be embedded in asimple groupthat can itself be embedded in a finitely presented group.
It is widely believed that it should be possible to do the construction so that the simple group itself is finitely presented. If so one would expect it to be difficult to prove as the mapping from presentations to simple groups would have to be non-recursive.
The following has been proved byBernhard NeumannandAngus Macintyre: a finitely presented group has a solvable word problem if and only if it can be embedded in everyalgebraically closed group.
What is remarkable about this is that the algebraically closed groups are so wild that none of them has a recursive presentation.
The oldest result relating algebraic structure to solvability of the word problem is Kuznetsov's theorem: a recursively presented simple groupS{\displaystyle S}has a solvable word problem.
To prove this let⟨X|R⟩{\displaystyle \langle X|R\rangle }be a recursive presentation forS{\displaystyle S}. Choose a nonidentity elementa∈S{\displaystyle a\in S}, that is,a≠1{\displaystyle a\neq 1}inS{\displaystyle S}.
Ifw{\displaystyle w}is a word on the generatorsX{\displaystyle X}ofS{\displaystyle S}, then letSw{\displaystyle S_{w}}be the group with presentation⟨X|R∪{w}⟩{\displaystyle \langle X\mid R\cup \{w\}\rangle }.
There is a recursive functionf⟨X|R∪{w}⟩{\displaystyle f_{\langle X|R\cup \{w\}\rangle }}such that:
Writeg(w,u):=f⟨X|R∪{w}⟩(u){\displaystyle g(w,u):=f_{\langle X|R\cup \{w\}\rangle }(u)}.
Then because the construction off{\displaystyle f}was uniform, this is a recursive function of two variables.
It follows that:h(w)=g(w,a){\displaystyle h(w)=g(w,a)}is recursive. By construction:
SinceS{\displaystyle S}is a simple group, its only quotient groups are itself and the trivial group. Sincea≠1{\displaystyle a\neq 1}inS{\displaystyle S}, we seea=1{\displaystyle a=1}inSw{\displaystyle S_{w}}if and only ifSw{\displaystyle S_{w}}is trivial if and only ifw≠1{\displaystyle w\neq 1}inS{\displaystyle S}. Therefore:
The existence of such a function is sufficient to prove the word problem is solvable forS{\displaystyle S}.
This proof does not prove the existence of a uniform algorithm for solving the word problem for this class of groups. The non-uniformity resides in choosing a non-trivial element of the simple group. There is no reason to suppose that there is a recursive function that maps a presentation of a simple groups to a non-trivial element of the group. However, in the case of a finitely presented group we know that not all the generators can be trivial (Any individual generator could be, of course). Using this fact it is possible to modify the proof to show: the word problem is uniformly solvable for the class of finitely presented simple groups. | https://en.wikipedia.org/wiki/Word_problem_for_groups |
Inmathematics, theYoung–Fibonacci graphandYoung–Fibonacci lattice, named afterAlfred YoungandLeonardo Fibonacci, are two closely related structures involving sequences of the digits 1 and 2. Any digit sequence of this type can be assigned arank, the sum of its digits: for instance, the rank of 11212 is 1 + 1 + 2 + 1 + 2 = 7. As was already known in ancient India, the number of sequences with a given rank is aFibonacci number. The Young–Fibonacci lattice is an infinitemodular latticehaving these digit sequences as its elements, compatible with this rank structure. The Young–Fibonacci graph is thegraphof this lattice, and has a vertex for each digit sequence. As the graph of a modular lattice, it is amodular graph.
The Young–Fibonacci graph and the Young–Fibonacci lattice were both initially studied in two papers byFomin (1988)andStanley (1988). They are named after the closely relatedYoung's latticeand after the Fibonacci number of their elements at any given rank.
A digit sequence with rankrmay be formed either by adding the digit 2 to a sequence with rankr− 2, or by adding the digit 1 to a sequence with rankr− 1. Iffis thefunctionthat mapsrto the number of different digit sequences of that rank, therefore,fsatisfies therecurrence relationf(r) =f(r− 2) +f(r− 1)defining the Fibonacci numbers, but with slightly different initial conditions:f(0) =f(1) = 1(there is one rank-0 string, theempty string, and one rank-1 string, consisting of the single digit 1). These initial conditions cause the sequence of values offto be shifted by one position from theFibonacci numbers:f(r) =Fr+1.
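The recurrence is easy to check by brute-force enumeration (an illustrative sketch):

```python
def sequences_of_rank(r):
    """All strings of 1s and 2s whose digit sum is r."""
    if r == 0:
        return [""]
    out = ["1" + s for s in sequences_of_rank(r - 1)]
    if r >= 2:
        out += ["2" + s for s in sequences_of_rank(r - 2)]
    return out

for r in range(7):
    print(r, len(sequences_of_rank(r)))   # counts 1, 1, 2, 3, 5, 8, 13 = F(r+1)
```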
In the ancient Indian study ofprosody, the Fibonacci numbers were used to count the number of different sequences of short and long syllables with a given total length; if the digit 1 corresponds to a short syllable, and the digit 2 corresponds to a long syllable, the rank of a digit sequence measures the total length of the corresponding sequence of syllables. See theFibonacci numberarticle for details.
The Young–Fibonacci graph is an infinitegraph, with a vertex for each string of the digits "1" and "2" (including theempty string). Theneighborsof a stringsare the strings formed fromsby one of the following operations:
1. insert a 1 intos, prior to the leftmost 1 (or anywhere ifscontains no 1);
2. change the leftmost 1 ofsinto a 2;
3. remove the leftmost 1 froms;
4. change a 2 ofsthat has no 1 to its left into a 1.
It is straightforward to verify that each operation can be inverted: operations 1 and 3 are inverse to each other, as are operations 2 and 4. Therefore, the resulting graph may be considered to beundirected. However, it is usually considered to be adirected acyclic graphin which each edge connects from a vertex of lower rank to a vertex of higher rank.
As bothFomin (1988)andStanley (1988)observe, this graph has the following properties:
Fomin (1988)calls a graph with these properties aY-graph;Stanley (1988)calls a graph with a weaker version of these properties (in which the numbers of common predecessors and common successors of any pair of nodes must be equal but may be greater than one) the graph of adifferential poset.
Thetransitive closureof the Young–Fibonacci graph is apartial order. AsStanley (1988)shows, any two verticesxandyhave a unique greatest common predecessor in this order (theirmeet) and a unique least common successor (theirjoin); thus, this order is alattice, called the Young–Fibonacci lattice.
To find the meet ofxandy, one may first test whether one ofxandyis a predecessor of the other. A stringxis a predecessor of another stringyin this order exactly when the number of "2" digits remaining iny, after removing the longest common suffix ofxandy, is at least as large as the number of all digits remaining inxafter removing the common suffix. Ifxis a predecessor ofyaccording to this test, then their meet isx, and similarly ifyis a predecessor ofxthen their meet isy. In a second case, if neitherxnoryis the predecessor of the other, but one or both of them begins with a "1" digit, the meet is unchanged if these initial digits are removed. And finally, if bothxandybegin with the digit "2", the meet ofxandymay be found by removing this digit from both of them, finding the meet of the resulting suffixes, and adding the "2" back to the start.
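The predecessor test and the case analysis above translate directly into code (a sketch following the description just given; the function names are illustrative):

```python
def is_predecessor(x, y):
    """Does x precede y in the Young–Fibonacci order?  Strip the longest
    common suffix, then compare the 2s left in y with all digits left in x."""
    i = 0
    while i < min(len(x), len(y)) and x[len(x) - 1 - i] == y[len(y) - 1 - i]:
        i += 1
    return y[:len(y) - i].count("2") >= len(x) - i

def meet(x, y):
    """Greatest common predecessor of two digit strings."""
    if is_predecessor(x, y):
        return x
    if is_predecessor(y, x):
        return y
    if x.startswith("1"):
        return meet(x[1:], y)
    if y.startswith("1"):
        return meet(x, y[1:])
    return "2" + meet(x[1:], y[1:])    # both begin with "2"

print(meet("22", "121"))   # "21"
```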
A common successor ofxandy(though not necessarily the least common successor) may be found by taking a string of "2" digits with length equal to the longer ofxandy. The least common successor is then the meet of the finitely many strings that are common successors ofxandyand predecessors of this string of "2"s.
As Stanley (1988) further observes, the Young–Fibonacci lattice is modular. Fomin (1988) incorrectly claims that it is distributive; however, the sublattice formed by the strings {21, 22, 121, 211, 221} forms a diamond sublattice, forbidden in distributive lattices. | https://en.wikipedia.org/wiki/Young%E2%80%93Fibonacci_lattice
In computer science, formal specifications are mathematically based techniques whose purpose is to help with the implementation of systems and software. They are used to describe a system, to analyze its behavior, and to aid in its design by verifying key properties of interest through rigorous and effective reasoning tools.[1][2] These specifications are formal in the sense that they have a syntax, their semantics fall within one domain, and they are able to be used to infer useful information.[3]
With each passing decade, computer systems have become more powerful and, as a result, more impactful on society. Because of this, better techniques are needed to assist in the design and implementation of reliable software. Established engineering disciplines use mathematical analysis as the foundation of creating and validating product design. Formal specifications are one way to achieve this kind of reliability in software engineering, though other methods, such as testing, are more commonly used to enhance code quality.[1]
Given such a specification, it is possible to use formal verification techniques to demonstrate that a system design is correct with respect to its specification. This allows incorrect system designs to be revised before any major investments have been made into an actual implementation. Another approach is to use provably correct refinement steps to transform a specification into a design, which is ultimately transformed into an implementation that is correct by construction.
A formal specification is not an implementation, but rather it may be used to develop an implementation. Formal specifications describe what a system should do, not how the system should do it.
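A minimal illustration of this what/how distinction, as a hypothetical example not drawn from the cited sources: a specification of sorting stated as a predicate on input and output, alongside one of many possible implementations that satisfy it:

    from collections import Counter

    def satisfies_sort_spec(inp, out):
        # specification ("what"): out is ordered and is a permutation of inp
        ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
        return ordered and Counter(inp) == Counter(out)

    def insertion_sort(xs):
        # one possible implementation ("how"); the spec does not constrain it
        result = []
        for x in xs:
            i = 0
            while i < len(result) and result[i] <= x:
                i += 1
            result.insert(i, x)
        return result

    assert satisfies_sort_spec([3, 1, 2, 1], insertion_sort([3, 1, 2, 1]))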
A good specification must have some of the following attributes: adequate, internally consistent, unambiguous, complete, satisfied, minimal.[3]
A good specification will have:[3]
One of the main reasons there is interest in formal specifications is that they will provide an ability to perform proofs on software implementations.[2] These proofs may be used to validate a specification, verify correctness of design, or to prove that a program satisfies a specification.[2]
A design (or implementation) cannot ever be declared "correct" on its own. It can only ever be "correct with respect to a given specification". Whether the formal specification correctly describes the problem to be solved is a separate issue. It is also a difficult issue to address, since it ultimately concerns the problem of constructing abstracted formal representations of an informal concrete problem domain, and such an abstraction step is not amenable to formal proof. However, it is possible to validate a specification by proving "challenge" theorems concerning properties that the specification is expected to exhibit. If correct, these theorems reinforce the specifier's understanding of the specification and its relationship with the underlying problem domain. If not, the specification probably needs to be changed to better reflect the domain understanding of those involved with producing (and implementing) the specification.
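Continuing the hypothetical sorting example above, a "challenge" theorem might state that the specification admits exactly one output value for every input; exhaustively checking small cases, as sketched below, is a weak but useful substitute for a formal proof when validating the specification itself:

    from collections import Counter
    from itertools import permutations, product

    def satisfies_sort_spec(inp, out):
        # restated here so the snippet is self-contained
        ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
        return ordered and Counter(inp) == Counter(out)

    # challenge theorem: every input admits exactly one satisfying output
    # (outputs with a different multiset of elements fail the Counter check,
    # so it suffices to search among permutations of the input)
    for n in range(5):
        for inp in product(range(3), repeat=n):
            outputs = {p for p in permutations(inp) if satisfies_sort_spec(inp, p)}
            assert len(outputs) == 1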
Formal methods of software development are not widely used in industry. Most companies do not consider it cost-effective to apply them in their software development processes.[4] This may be for a variety of reasons, some of which are:
Other limitations:[3]
Formal specification techniques have existed in various domains and on various scales for quite some time.[6] Implementations of formal specifications will differ depending on what kind of system they are attempting to model, how they are applied, and at what point in the software life cycle they have been introduced.[2] These types of models can be categorized into the following specification paradigms:
In addition to the above paradigms, there are ways to apply certain heuristics to help improve the creation of these specifications. The paper cited here discusses heuristics to use when designing a specification;[6] it does so by applying a divide-and-conquer approach.
The Z notation is an example of a leading formal specification language. Others include the Specification Language (VDM-SL) of the Vienna Development Method and the Abstract Machine Notation (AMN) of the B-Method. In the Web services area, formal specification is often used to describe non-functional properties[7] (Web services quality of service).
Some tools are:[4] | https://en.wikipedia.org/wiki/Formal_specification |
A formal system is an abstract structure and formalization of an axiomatic system used for deducing, using rules of inference, theorems from axioms.[1]
In 1921, David Hilbert proposed to use formal systems as the foundation of knowledge in mathematics.[2]
The term formalism is sometimes a rough synonym for formal system, but it also refers to a given style of notation, for example, Paul Dirac's bra–ket notation.
A formal system has the following:[3][4][5]
- A formal language: a finite alphabet of symbols, together with a grammar that determines which finite strings of symbols count as well-formed formulas.
- A set of axioms: well-formed formulas that are assumed to hold without derivation.
- A set of rules of inference, by which new well-formed formulas can be derived from previously obtained ones.
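A toy example can make these components concrete. The following Python sketch, a hypothetical system in the spirit of Hofstadter's well-known pq-system rather than anything from the cited sources, fixes an alphabet, a single axiom, and two inference rules, and then enumerates theorems:

    from itertools import islice

    # alphabet {'1', '+', '='}; well-formed formulas have the shape x+y=z
    AXIOM = "1+1=11"

    def apply_rules(t):
        # two inference rules: from x+y=z derive x1+y=z1 and x+y1=z1
        left, z = t.split("=")
        x, y = left.split("+")
        return {f"{x}1+{y}={z}1", f"{x}+{y}1={z}1"}

    def theorems():
        # breadth-first enumeration of all derivable formulas
        seen, queue = {AXIOM}, [AXIOM]
        while queue:
            t = queue.pop(0)
            yield t
            for u in apply_rules(t):
                if u not in seen:
                    seen.add(u)
                    queue.append(u)

    # under the intended interpretation (runs of 1s as unary numerals),
    # every theorem is a true addition, i.e. the system is sound
    for t in islice(theorems(), 100):
        x, y, z = t.replace("=", "+").split("+")
        assert len(x) + len(y) == len(z)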
A formal system is said to be recursive (i.e. effective) or recursively enumerable if the set of axioms and the set of inference rules are decidable sets or semidecidable sets, respectively.
A formal language is a language that is defined by a formal system. Like languages in linguistics, formal languages generally have two aspects:
- the syntax, which is what the language looks like (more formally, the set of possible expressions that are valid utterances in the language);
- the semantics, which is what the utterances of the language mean.
Usually only the syntax of a formal language is considered, via the notion of a formal grammar. The two main categories of formal grammar are that of generative grammars, which are sets of rules for how strings in a language can be written, and that of analytic grammars (or reductive grammars[6][7]), which are sets of rules for how a string can be analyzed to determine whether it is a member of the language.
A deductive system, also called a deductive apparatus,[8] consists of the axioms (or axiom schemata) and rules of inference that can be used to derive theorems of the system.[1]
Such deductive systems preserve deductive qualities in the formulas that are expressed in the system. Usually the quality we are concerned with is truth, as opposed to falsehood. However, other modalities, such as justification or belief, may be preserved instead.
In order to sustain its deductive integrity, a deductive apparatus must be definable without reference to any intended interpretation of the language. The aim is to ensure that each line of a derivation is merely a logical consequence of the lines that precede it. There should be no element of any interpretation of the language that gets involved with the deductive nature of the system.
The logical consequence (or entailment) of the system by its logical foundation is what distinguishes a formal system from others which may have some basis in an abstract model. Often the formal system will be the basis for, or even identified with, a larger theory or field (e.g. Euclidean geometry), consistent with the usage in modern mathematics such as model theory.
An example of a deductive system would be the rules of inference and axioms regarding equality used in first-order logic.
The two main types of deductive systems are proof systems and formal semantics.[8][9]
Formal proofs are sequences of well-formed formulas (or WFFs for short), each of which is either an axiom or the product of applying an inference rule to previous WFFs in the proof sequence. The last WFF in the sequence is recognized as a theorem.
Once a formal system is given, one can define the set of theorems which can be proved inside the formal system. This set consists of all WFFs for which there is a proof. Thus all axioms are considered theorems. Unlike the grammar for WFFs, there is no guarantee that there will be a decision procedure for deciding whether a given WFF is a theorem or not.
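For the toy system sketched earlier (reusing its AXIOM and apply_rules definitions), verifying a purported proof is mechanical, even though no decision procedure for theoremhood is guaranteed in general:

    def check_proof(lines):
        # a proof is a sequence of WFFs, each either the axiom or derived
        # from an earlier line by an inference rule; its last line is the
        # theorem being proved
        return bool(lines) and all(
            line == AXIOM or any(line in apply_rules(prev) for prev in lines[:i])
            for i, line in enumerate(lines))

    assert check_proof(["1+1=11", "11+1=111", "11+11=1111"])
    assert not check_proof(["11+11=111"])   # not derivable (and not even true)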
The point of view that generating formal proofs is all there is to mathematics is often called formalism. David Hilbert founded metamathematics as a discipline for discussing formal systems. Any language that one uses to talk about a formal system is called a metalanguage. The metalanguage may be a natural language, or it may be partially formalized itself, but it is generally less completely formalized than the formal language component of the formal system under examination, which is then called the object language, that is, the object of the discussion in question. The notion of theorem just defined should not be confused with theorems about the formal system, which, in order to avoid confusion, are usually called metatheorems.
A logical system is a deductive system (most commonly first-order logic) together with additional non-logical axioms. According to model theory, a logical system may be given interpretations which describe whether a given structure – the mapping of formulas to a particular meaning – satisfies a well-formed formula. A structure that satisfies all the axioms of the formal system is known as a model of the logical system.
A logical system is:
- sound, if every well-formed formula that can be derived from the axioms is satisfied by every model of the logical system;
- complete, if every well-formed formula that is satisfied by every model of the logical system can be derived from the axioms.
An example of a logical system is Peano arithmetic. The standard model of arithmetic sets the domain of discourse to be the nonnegative integers and gives the symbols their usual meaning.[10] There are also non-standard models of arithmetic.
Early logic systems include the Indian logic of Pāṇini, the syllogistic logic of Aristotle, the propositional logic of Stoicism, and the Chinese logic of Gongsun Long (c. 325–250 BCE). In more recent times, contributors include George Boole, Augustus De Morgan, and Gottlob Frege. Mathematical logic was developed in 19th-century Europe.
David Hilbert instigated a formalist movement called Hilbert's program as a proposed solution to the foundational crisis of mathematics, which was eventually tempered by Gödel's incompleteness theorems.[2] The QED manifesto represented a subsequent, as yet unsuccessful, effort at formalization of known mathematics. | https://en.wikipedia.org/wiki/Formal_system
Methodism, also called the Methodist movement, is a Protestant Christian tradition whose origins, doctrine and practice derive from the life and teachings of John Wesley.[1] George Whitefield and John's brother Charles Wesley were also significant early leaders in the movement. They were named Methodists for "the methodical way in which they carried out their Christian faith".[2][3] Methodism originated as a revival movement within Anglicanism with roots in the Church of England in the 18th century and became a separate denomination after Wesley's death. The movement spread throughout the British Empire, the United States and beyond because of vigorous missionary work,[4] and today has about 80 million adherents worldwide.[nb 1][5]
Wesleyan theology, which is upheld by the Methodist denominations, focuses on sanctification and the transforming effect of faith on the character of a Christian. Distinguishing doctrines include the new birth,[6] assurance,[7][8] imparted righteousness, and obedience to God manifested in performing works of piety. John Wesley held that entire sanctification was "the grand depositum", or foundational doctrine, of the Methodist faith, and its propagation was the reason God brought Methodists into existence.[9][10] Scripture is considered the primary authority, but Methodists also look to Christian tradition, including the historic creeds. Most Methodists teach that Jesus Christ, the Son of God, died for all of humanity and that salvation is achievable for all.[11] This is the Arminian doctrine,[nb 2] as opposed to the Calvinist position that God has predestined the salvation of a select group of people. However, Whitefield and several other early leaders of the movement were considered Calvinistic Methodists and held to the Calvinist position.
The movement has a wide variety of forms of worship, ranging from high church to low church in liturgical usage, in addition to tent revivals and camp meetings held at certain times of the year.[12] Denominations that descend from the British Methodist tradition are generally less ritualistic, while worship in American Methodism varies depending on the Methodist denomination and congregation.[13] Methodist worship distinctiveness includes the observance of the quarterly lovefeast, the watchnight service on New Year's Eve, as well as altar calls in which people are invited to experience the new birth and entire sanctification.[14][15] Its emphasis on growing in grace after the new birth (and after being entirely sanctified) led to the creation of class meetings for encouragement in the Christian life.[16] Methodism is known for its rich musical tradition, and Charles Wesley was instrumental in writing much of the hymnody of Methodism.[17]
In addition to evangelism, Methodism is known for its charity, as well as support for the sick, the poor, and the afflicted through works of mercy that "flow from the love of God and neighbor" evidenced in the entirely sanctified believer.[18][19][20] These ideals, the Social Gospel, are put into practice by the establishment of hospitals, orphanages, soup kitchens, and schools to follow Christ's command to spread the gospel and serve all people.[21][22][19] Methodists are historically known for their adherence to the doctrine of nonconformity to the world, reflected by their traditional standards of a commitment to sobriety, prohibition of gambling, regular attendance at class meetings, and weekly observance of the Friday fast.[23][24]
Early Methodists were drawn from all levels of society, including the aristocracy,[nb 3] but the Methodist preachers took the message to social outcasts such as criminals. In Britain, the Methodist Church had a major effect in the early decades of the developing working class (1760–1820).[26] In the United States, it became the religion of many slaves, who later formed black churches in the Methodist tradition.[27]
The Methodist revival began in England with a group of men, including John Wesley (1703–1791) and his younger brother Charles (1707–1788), as a movement within the Church of England in the 18th century.[28][29] The Wesley brothers founded the "Holy Club" at the University of Oxford, where John was a fellow and later a lecturer at Lincoln College.[30] The club met weekly and they systematically set about living a holy life. They were accustomed to receiving Communion every week, fasting regularly, abstaining from most forms of amusement and luxury, and frequently visiting the sick and the poor and prisoners. The fellowship were branded as "Methodist" by their fellow students because of the way they used "rule" and "method" in their religious affairs.[31][32]
In 1735, at the invitation of the founder of the Georgia Colony, General James Oglethorpe, both John and Charles Wesley set out for America to be ministers to the colonists and missionaries to the Native Americans.[33] Unsuccessful in their work, the brothers returned to England conscious of their lack of genuine Christian faith. They looked for help from Peter Boehler and other members of the Moravian Church. At a Moravian service in Aldersgate on 24 May 1738, John experienced what has come to be called his evangelical conversion, when he felt his "heart strangely warmed".[34] He records in his journal: "I felt I did trust in Christ, Christ alone, for salvation; and an assurance was given me that He had taken away my sins, even mine, and saved me from the law of sin and death."[35] Charles had reported a similar experience a few days previously. Considering this a pivotal moment, Daniel L. Burnett writes: "The significance of [John] Wesley's Aldersgate Experience is monumental ... Without it the names of Wesley and Methodism would likely be nothing more than obscure footnotes in the pages of church history."[36]
The Wesley brothers immediately began to preach salvation by faith to individuals and groups, in houses, in religious societies, and in the few churches which had not closed their doors to evangelical preachers.[37] John Wesley came under the influence of the Dutch theologian Jacobus Arminius (1560–1609). Arminius had rejected the Calvinist teaching that God had predestined an elect number of people to eternal bliss while others perished eternally. Conversely, George Whitefield (1714–1770), Howell Harris (1714–1773),[38] and Selina Hastings, Countess of Huntingdon (1707–1791)[39] were notable for being Calvinistic Methodists.
Returning from his mission in Georgia, George Whitefield joined the Wesley brothers in what was rapidly becoming a national crusade.[37] Whitefield, who had been a fellow student of the Wesleys and prominent member of the Holy Club at Oxford, became well known for his unorthodox, itinerant ministry, in which he was dedicated to open-air preaching – reaching crowds of thousands.[37] A key step in the development of John Wesley's ministry was, like Whitefield, to preach in fields, collieries, and churchyards to those who did not regularly attend parish church services.[37] Accordingly, many Methodist converts were those disconnected from the Church of England; Wesley remained a cleric of the Established Church and insisted that Methodists attend their local parish church as well as Methodist meetings, because only an ordained minister could perform the sacraments of Baptism and Holy Communion.[2]
Faced with growing evangelistic and pastoral responsibilities, Wesley and Whitefield appointed lay preachers and leaders.[37] Methodist preachers focused particularly on evangelising people who had been "neglected" by the established Church of England. Wesley and his assistant preachers organized the new converts into Methodist societies.[37] These societies were divided into groups called classes – intimate meetings where individuals were encouraged to confess their sins to one another and to build up each other. They also took part in love feasts which allowed for the sharing of testimony, a key feature of early Methodism.[40] Growth in numbers and increasing hostility impressed upon the revival converts a deep sense of their corporate identity.[37] Three teachings that Methodists saw as the foundation of Christian faith were:
1. People are all, by nature, "dead in sin", and, consequently, "children of wrath";
2. They are "justified by faith alone";
3. Faith produces inward and outward holiness.[41]
Wesley's organisational skills soon established him as the primary leader of the movement. Whitefield was a Calvinist, whereas Wesley was an outspoken opponent of the doctrine of predestination.[42] Wesley argued (against Calvinist doctrine) that Christians could enjoy a second blessing – entire sanctification (Christian perfection) – in this life: loving God and their neighbours, meekness and lowliness of heart and abstaining from all appearance of evil.[6][43] These differences put strains on the alliance between Whitefield and Wesley,[42] with Wesley becoming hostile toward Whitefield in what had been previously close relations. Whitefield consistently begged Wesley not to let theological differences sever their friendship, and, in time, their friendship was restored, though this was seen by many of Whitefield's followers to be a doctrinal compromise.[44]
Many clergy in the established church feared that new doctrines promulgated by the Methodists, such as the necessity of a new birth for salvation – the first work of grace – of justification by faith, and of the constant and sustained action of the Holy Spirit upon the believer's soul, would produce ill effects upon weak minds.[45] Theophilus Evans, an early critic of the movement, even wrote that it was "the natural Tendency of their Behaviour, in Voice and Gesture and horrid Expressions, to make People mad". In one of his prints, William Hogarth likewise attacked Methodists as "enthusiasts" full of "Credulity, Superstition, and Fanaticism".[45] Other attacks against the Methodists were physically violent – Wesley was nearly murdered by a mob at Wednesbury in 1743.[46] The Methodists responded vigorously to their critics and thrived despite the attacks against them.[47]
Initially, the Methodists merely sought reform within the Church of England (Anglicanism), but the movement gradually departed from that Church. George Whitefield's preference for extemporaneous prayer rather than the fixed forms of prayer in the Book of Common Prayer, in addition to his insistence on the necessity of the new birth, set him at odds with Anglican clergy.[48]
As Methodist societies multiplied, and elements of an ecclesiastical system were, one after another, adopted, the breach between John Wesley and the Church of England gradually widened. In 1784, Wesley responded to the shortage of priests in the American colonies due to the American Revolutionary War by ordaining preachers for America with the power to administer the sacraments.[49] Wesley's actions precipitated the split between American Methodists and the Church of England (which held that only bishops could ordain people to ministry).[50]
With regard to the position of Methodism within Christendom, "John Wesley once noted that what God had achieved in the development of Methodism was no mere human endeavor but the work of God. As such it would be preserved by God so long as history remained."[51] Calling it "the grand depositum" of the Methodist faith, Wesley specifically taught that the propagation of the doctrine of entire sanctification was the reason that God raised up the Methodists in the world.[9][10] In light of this, Methodists traditionally promote the motto "Holiness unto the Lord".[3]
The influence of Whitefield and Lady Huntingdon on the Church of England was a factor in the founding of the Free Church of England in 1844. At the time of Wesley's death, there were over 500 Methodist preachers in British colonies and the United States.[37] Total membership of the Methodist societies in Britain was recorded as 56,000 in 1791, rising to 360,000 in 1836 and 1,463,000 by the national census of 1851.[52]
Early Methodism experienced a radical and spiritual phase that allowed women authority in church leadership. The role of the woman preacher emerged from the sense that the home should be a place of community care and should foster personal growth. Methodist women formed a community that cared for the vulnerable, extending the role of mothering beyond physical care. Women were encouraged to testify their faith. However, the centrality of women's role sharply diminished after 1790 as Methodist churches became more structured and more male-dominated.[53]
The Wesleyan Education Committee, which existed from 1838 to 1902, documented the Methodist Church's involvement in the education of children. At first, most effort was placed in creating Sunday Schools. Still, in 1836 the British Methodist Conference gave its blessing to the creation of "Weekday schools".[54][55]
Methodism spread throughout the British Empire and, mostly through Whitefield's preaching during what historians call the First Great Awakening, in colonial America. However, after Whitefield's death in 1770, American Methodism entered a more lasting Wesleyan and Arminian development phase.[56] Revival services and camp meetings were used "for spreading the Methodist message", with Francis Asbury stating that they were "our harvest seasons".[57] Henry Boehm reported that at a camp meeting in Dover in 1805, 1100 persons received the New Birth and 600 believers were entirely sanctified.[57] Around the time of John Swanel Inskip's leadership of the National Camp Meeting Association for the Promotion of Christian Holiness in the mid to latter 1800s, 80 percent of the membership of the North Georgia Conference of the Methodist Episcopal Church, South professed being entirely sanctified.[57]
All need to be saved. All may be saved. All may know themselves saved. All may be saved to the uttermost.
Many Methodist bodies, such as the African Methodist Episcopal Church and the United Methodist Church, base their doctrinal standards on the Articles of Religion,[59] John Wesley's abridgment of the Thirty-nine Articles of the Church of England that excised its Calvinist features.[60] Some Methodist denominations also publish catechisms, which concisely summarise Christian doctrine.[58] Methodists generally accept the Apostles' Creed and the Nicene Creed as declarations of shared Christian faith.[58]: 30–33 [61] Methodism affirms the traditional Christian belief in the triune Godhead (Father, Son and Holy Spirit) as well as the orthodox understanding of the person of Jesus Christ as God incarnate who is both fully divine and fully human.[62] Methodism also emphasizes doctrines that indicate the power of the Holy Spirit to strengthen the faith of believers and to transform their personal lives.[63]
Methodism is broadly evangelical in doctrine and is characterized by Wesleyan theology;[64] John Wesley is studied by Methodists for his interpretation of church practice and doctrine.[58]: 38 At its heart, the theology of John Wesley stressed the life of Christian holiness: to love God with all one's heart, mind, soul and strength and to love one's neighbour as oneself.[65][66] One popular expression of Methodist doctrine is in the hymns of Charles Wesley.[67] Since enthusiastic congregational singing was a part of the early evangelical movement, Wesleyan theology took root and spread through this channel.[68][69] Martin V. Clarke, who documented the history of Methodist hymnody, states:
Theologically and doctrinally, the content of the hymns has traditionally been a primary vehicle for expressing Methodism's emphasis on salvation for all, social holiness, and personal commitment, while particular hymns and the communal act of participating in hymn singing have been key elements in the spiritual lives of Methodists.[70]
Wesleyan Methodists identify with the Arminian conception of free will, as opposed to the theological determinism of absolute predestination.[71][nb 2] Methodism teaches that salvation is initiated when one chooses to respond to God, who draws the individual near to him (the Wesleyan doctrine of prevenient grace), thus teaching synergism.[75][76] Methodists interpret Scripture as teaching that the saving work of Jesus Christ is for all people (unlimited atonement) but effective only to those who respond and believe, in accordance with the Reformation principles of sola gratia (grace alone) and sola fide (faith alone).[77] John Wesley taught four key points fundamental to Methodism:
1. A person is free not only to reject salvation but also to accept it, by an act of free will.
2. All people who are obedient to the gospel according to the measure of knowledge given to them will be saved.
3. The Holy Spirit assures Christians of their salvation directly, through an inner "experience" (assurance of salvation).
4. Christians in this life are capable of Christian perfection and are commanded by God to pursue it.[78][79]
After the first work of grace (the new birth),[6] Methodist soteriology emphasizes the importance of the pursuit of holiness in salvation,[80] a concept best summarized in a quote by Methodist evangelist Phoebe Palmer, who stated that "justification would have ended with me had I refused to be holy."[81] Thus, for Methodists, "true faith ... cannot subsist without works."[82] Methodism, inclusive of the holiness movement, thus teaches that "justification [is made] conditional on obedience and progress in sanctification",[81] emphasizing "a deep reliance upon Christ not only in coming to faith, but in remaining in the faith."[83] John Wesley taught that the keeping of the moral law contained in the Ten Commandments,[84] as well as engaging in the works of piety and the works of mercy, were "indispensable for our sanctification".[82] In its categorization of sin, Methodist doctrine distinguishes between (1) "sin, properly so called" and (2) "involuntary transgression of a divine law, known or unknown"; the former category includes voluntary transgression against God, while the second category includes infirmities (such as "immaturity, ignorance, physical handicaps, forgetfulness, lack of discernment, and poor communication skills").[85][86]
Wesley explains that those born of God do not sin habitually, since to do so means that sin still reigns, which is a mark of an unbeliever. Neither does the Christian sin willfully, since the believer's will is now set on living for Christ. He further claims that believers do not sin by desire, because the heart has been thoroughly transformed to desire only God's perfect will. Wesley then addresses "sin by infirmities". Since infirmities involve no "concurrence of (the) will", such deviations, whether in thought, word, or deed, are not "properly" sin. He therefore concludes that those born of God do not commit sin, having been saved from "all their sins" (II.2, 7).[86]
This is reflected in the Articles of Religion of the Free Methodist Church (emphasis added in italics), which uses the wording of John Wesley:[87]
Justified persons, while they do not outwardly commit sin, are nevertheless conscious of sin still remaining in the heart. They feel a natural tendency to evil, a proneness to depart from God, and cleave to the things of earth. Those that are sanctified wholly are saved from all inward sin – from evil thoughts and evil tempers. No wrong temper, none contrary to love remains in the soul. All their thoughts, words, and actions are governed by pure love. Entire sanctification takes place subsequently to justification, and is the work of God wrought instantaneously upon the consecrated, believing soul. After a soul is cleansed from all sin, it is then fully prepared to grow in grace" (Discipline, "Articles of Religion", ch. i, § 1, p. 23).[87]
Methodists also believe in the second work of grace – Christian perfection, also known as entire sanctification – which removes original sin, makes the believer holy, and empowers him/her to serve God wholly.[6][88] John Wesley explained, "entire sanctification, or Christian perfection, is neither more nor less than pure love; love expelling sin, and governing both the heart and life of a child of God. The Refiner's fire purges out all that is contrary to love."[89][90]
Methodist churches teach that apostasy can occur through a loss of faith or through sinning.[91][92] If a person backslides but later decides to return to God, he or she must repent for sins and be entirely sanctified again (the Arminian doctrine of conditional security).[93][94]
Methodists hold that sacraments are sacred acts of divine institution. Methodism has inherited its liturgy from Anglicanism, although Wesleyan theology tends to have a stronger "sacramental emphasis" than that held by evangelical Anglicans.[95]
In common with most Protestants, Methodists recognize two sacraments as being instituted by Christ: Baptism and Holy Communion (also called the Lord's Supper).[96] Most Methodist churches practice infant baptism, in anticipation of a response to be made later (confirmation), as well as baptism of believing adults.[97] The Catechism for the Use of the People Called Methodists states that "[in Holy Communion] Jesus Christ is present with his worshipping people and gives himself to them as their Lord and Saviour."[58]: 26 In the United Methodist Church, the explanation of how Christ's presence is made manifest in the elements (bread and wine) is described as a "Holy Mystery".[98]
Methodist churches generally recognize sacraments to be a means of grace.[99] John Wesley held that God also imparted grace by other established means such as public and private prayer, Scripture reading, study and preaching, public worship, and fasting; these constitute the works of piety.[100] Wesley considered means of grace to be "outward signs, words, or actions ... to be the ordinary channels whereby [God] might convey to men, preventing [i.e., preparing], justifying or sanctifying grace."[101] Specifically Methodist means, such as the class meetings, provided his chief examples for these prudential means of grace.[102]
American Methodist theologian Albert Outler, in assessing John Wesley's own practices of theological reflection, proposes a methodology termed the "Wesleyan Quadrilateral".[103] Wesley's Quadrilateral is referred to in Methodism as "our theological guidelines" and is taught to its ministers (clergy) in seminary as the primary approach to interpreting Scripture and gaining guidance for moral questions and dilemmas faced in daily living.[104]: 76–88
Traditionally, Methodists declare the Bible (Old and New Testaments) to be the only divinely inspired Scripture and the primary source of authority for Christians.[105] The historic Methodist understanding of Scripture is based on the superstructure of Wesleyan covenant theology.[106] Methodists also make use of tradition, drawing primarily from the teachings of the Church Fathers, as a secondary source of authority. Tradition may serve as a lens through which Scripture is interpreted. Theological discourse for Methodists almost always makes use of Scripture read inside the wider theological tradition of Christianity.[107][108]
John Wesley contended that a part of the theological method would involve experiential faith.[103] In other words, truth would be vivified in personal experience of Christians (overall, not individually), if it were really truth. And every doctrine must be able to be defended rationally. He did not divorce faith from reason. By reason, one asks questions of faith and seeks to understand God's action and will. Tradition, experience and reason, however, were subject always to Scripture, Wesley argued, because only there is the Word of God revealed "so far as it is necessary for our salvation."[104]: 77
With respect to public worship, Methodism was endowed by the Wesley brothers with worship characterised by a twofold practice: the ritual liturgy of the Book of Common Prayer on the one hand and the non-ritualistic preaching service on the other.[109] This twofold practice became distinctive of Methodism because worship in the Church of England was based, by law, solely on the Book of Common Prayer, and worship in the Nonconformist churches was almost exclusively that of "services of the word", i.e. preaching services, with Holy Communion being observed infrequently. John Wesley's influence meant that, in Methodism, the two practices were combined, a situation which remains characteristic of the tradition.[109][110] Methodism has heavily emphasized "offerings of extempore and spontaneous prayer".[111] To this end, Methodist revival services and camp meetings have been characterized by groaning and shouting, as people sought the fullness of salvation that Methodists taught to be embodied by the experience of entire sanctification.[112][113] To outsiders, Wesleyans were labeled as "Shouting Methodists" due to their free expression during worship.[114]
Historically, Methodist churches have devoutly observed the Lord's Day (Sunday) with a morning service of worship, along with an evening service of worship (with the evening service being aimed at seekers and focusing on "singing, prayer, and preaching"); the holding of a midweek prayer meeting on Wednesday evenings has been customary.[115][116] 18th-century Methodist church services were characterized by the following pattern: "preliminaries (e.g., singing, prayers, testimonies), to a 'message,' followed by an invitation to commitment", the latter of which took the form of the altar call – a practice that remains "a vital part" of worship.[117][118] A number of Methodist congregations devote a portion of their Sunday evening service and mid-week Wednesday evening prayer meeting to having congregants share their prayer requests, in addition to hearing personal testimonies about their faith and experiences in living the Christian life.[119] After listening to various members of the congregation voice their prayer requests, congregants may kneel for intercessory prayer.[116] The Lovefeast, traditionally practiced quarterly, was another practice that characterized early Methodism, as John Wesley taught that it was an apostolic ordinance.[14] Worship, hymnology, devotional and liturgical practices in Methodism were also influenced by Pietistic Lutheranism and, in turn, Methodist worship became influential in the Holiness movement.[120]
Early Methodism was known for its "almost monastic rigors, its living by rule, [and] its canonical hours of prayer".[121] It inherited from its Anglican patrimony the practice of reciting the Daily Office, which Methodist Christians were expected to pray.[122] The first prayer book of Methodism, The Sunday Service of the Methodists with other occasional Services, thus included the canonical hours of both Morning Prayer and Evening Prayer; these services were observed every day in early Christianity, though on the Lord's Day, worship included the Eucharist.[123][122][124] Later Methodist liturgical books, such as the Methodist Worship Book (1999), provide for Morning Prayer and Evening Prayer to be prayed daily; the United Methodist Church encourages its communicants to pray the canonical hours as "one of the essential practices" of being a disciple of Jesus.[125][126] Some Methodist religious orders publish the Daily Office to be used for that community; for example, The Book of Offices and Services of The Order of Saint Luke contains the canonical hours to be prayed traditionally at seven fixed prayer times: Lauds (6 am), Terce (9 am), Sext (12 pm), None (3 pm), Vespers (6 pm), Compline (9 pm) and Vigil (12 am).[127] Some Methodist congregations offer daily Morning Prayer.[128]
In America, the United Methodist Church and Free Methodist Church, as well as the Primitive Methodist Church and Wesleyan Methodist Church, have a wide variety of forms of worship, ranging from high church to low church in liturgical usage. When the Methodists in America were separated from the Church of England because of the American Revolution, John Wesley provided a revised version of the Book of Common Prayer called The Sunday Service of the Methodists; With Other Occasional Services (1784).[129][130] Today, the primary liturgical books of the United Methodist Church are The United Methodist Hymnal and The United Methodist Book of Worship (1992). Congregations employ its liturgy and rituals as optional resources, but their use is not mandatory. These books contain the liturgies of the church that are generally derived from Wesley's Sunday Service and from the 20th-century liturgical renewal movement.
The British Methodist Church is less ordered, or less liturgical, in worship. It makes use of the Methodist Worship Book (similar to the Church of England's Common Worship), containing set services and rubrics for the celebration of other rites, such as marriage. The Worship Book is also ultimately derived from Wesley's Sunday Service.[131]
A unique feature of American Methodism has been the observance of the season of Kingdomtide, encompassing the last 13 weeks before Advent, thus dividing the long season after Pentecost into two segments. During Kingdomtide, Methodist liturgy has traditionally emphasized charitable work and alleviating the suffering of the poor.[132]
A second distinctive liturgical feature of Methodism is the use of Covenant Services. Although practice varies between national churches, most Methodist churches annually follow the call of John Wesley for a renewal of their covenant with God. It is common for each congregation to use the Covenant Renewal liturgy during the watchnight service on the night of New Year's Eve,[133] though in Britain, these are often on the first Sunday of the year. Wesley's covenant prayer is still used, with minor modification, in the order of service:
Christ has many services to be done. Some are easy, others are difficult. Some bring honour, others bring reproach. Some are suitable to our natural inclinations and temporal interests, others are contrary to both ... Yet the power to do all these things is given to us in Christ, who strengthens us.
...I am no longer my own but yours. Put me to what you will, rank me with whom you will; put me to doing, put me to suffering; let me be employed for you or laid aside for you, exalted for you or brought low for you; let me be full, let me be empty, let me have all things, let me have nothing; I freely and wholeheartedly yield all things to your pleasure and disposal.[125]: 290
As John Wesley advocated outdoor evangelism, revival services are a traditional worship practice of Methodism that are often held in churches, as well as at camp meetings, brush arbor revivals, and tent revivals.[134][135][136]
Traditionally, Methodist connexions descending from the tradition of the Methodist Episcopal Church have a probationary period of six months before an individual is admitted into church membership as a full member of a congregation.[23] Given the wide attendance at Methodist revival meetings, many people started to attend Methodist services of worship regularly, though they had not yet committed to membership.[23] When they made that commitment, becoming a probationer was the first step, and during this period probationers "receive additional instruction and provide evidence of the seriousness of their faith and willingness to abide by church discipline before being accepted into full membership."[23] In addition to this, to be a probationary member of a Methodist congregation, a person traditionally requires an "earnest desire to be saved from [one's] sins".[23] In the historic Methodist system, probationers were eligible to become members of class meetings, where they could be further discipled in their faith.[23]
Catechisms such as The Probationer's Handbook, authored by minister Stephen O. Garrison, have been used by probationers to learn the Methodist faith.[137] After six months, probationers were examined before the Leaders and Stewards' Meeting (which consisted of Class Leaders and Stewards), where they were to provide "satisfactory assurance both of the correctness of his faith and of his willingness to observe and keep the rules of the church."[23] If probationers were able to do this, they were admitted as full members of the congregation by the pastor.[23]
Full members of a Methodist congregation "were obligated to attend worship services on a regular basis" and "were to abide by certain moral precepts, especially as they related to substance use, gambling, divorce, and immoral pastimes."[23] This practice continues in certain Methodist connexions, such as the Lumber River Conference of the Holiness Methodist Church, in which probationers must be examined by the pastor, class leader, and board for full membership, in addition to being baptized.[138] The same structure is found in the African Methodist Episcopal Zion Church, which teaches:[139]
In order that we may not admit improper persons into our church, great care be taken in receiving persons on probation, and let not one be so received or enrolled who does not give satisfactory evidence of his/her desire to flee the wrath to come and to be saved from his/her sins. Such a person satisfying us in these particulars may be received into our church on six months probation; but shall not be admitted to full membership until he/she shall have given satisfactory evidence of saving faith in the Lord Jesus Christ.
The pastor and class leader are to ensure "that all persons on probation be instructed in the Rules and Doctrines of The African Methodist Episcopal Zion Church before they are admitted to Full Membership" and that "probationers are expected to conform to the rules and usages of the Church, and to show evidence of their desire for fellowship in the Church".[139] After the six-month probation period, "A probationer may be admitted to full membership, provided he/she has served out his/her probation, has been baptized, recommended at the Leaders' Meeting, and, if none has been held according to law, recommended by the Leader, and, on examination by the Pastor before the Church as required in ¶600 has given satisfactory assurance both of the correctness of his/her faith, and of his/her willingness to observe and keep the rules of our Church."[139] The Allegheny Wesleyan Methodist Connection admits to associate membership, by vote of the congregation, those who give affirmation to two questions: "1) Does the Lord now forgive your sins? 2) Will you acquaint yourself with the discipline of our connection and earnestly endeavor to govern your life by its rules as God shall give you understanding?"[140] Probationers who wish to become full members are examined by the advisory board before being received as such through four vows (on the new birth, entire sanctification, outward holiness, and assent to the Articles of Religion) and a covenant.[140] In the United Methodist Church, the process of becoming a professing member of a congregation is done through the taking of membership vows (normatively in the rite of confirmation) after a period of instruction and receiving the sacrament of baptism.[141] It is the practice of certain Methodist connexions that when people become members of a congregation, they are offered the Right Hand of Fellowship.[140][142] Methodists traditionally celebrate the Covenant Renewal Service as the watchnight service annually on New Year's Eve, in which members renew their covenant with God and the Church.[143]
Early Methodists wore plain dress, with Methodist clergy condemning "high headdresses, ruffles, laces, gold, and 'costly apparel' in general".[144] John Wesley recommended that Methodists annually read his thoughts On Dress;[145] in that sermon, Wesley expressed his desire for Methodists: "Let me see, before I die, a Methodist congregation, full as plain dressed as a Quaker congregation."[146] The 1858 Discipline of the Wesleyan Methodist Connection thus stated that "we would ... enjoin on all who fear God plain dress."[147] Peter Cartwright, a Methodist revivalist, stated that in addition to wearing plain dress, the early Methodists distinguished themselves from other members of society by fasting once a week, abstaining from alcohol (teetotalism), and devoutly observing the Sabbath.[148] Methodist circuit riders were known for practicing the spiritual discipline of mortifying the flesh as they "arose well before dawn for solitary prayer; they remained on their knees without food or drink or physical comforts sometimes for hours on end."[149] The early Methodists did not participate in, and condemned, "worldly habits" including "playing cards, racing horses, gambling, attending the theater, dancing (both in frolics and balls), and cockfighting."[144]
In Methodism, fasting is considered one of the works of piety.[150] The Directions Given to Band Societies (25 December 1744) by John Wesley mandate fasting and abstinence from meat on all Fridays of the year (in remembrance of the crucifixion of Jesus).[24][151] Wesley himself also fasted before receiving Holy Communion "for the purpose of focusing his attention on God," and asked other Methodists to do the same.[152]
Over time, many of these practices were relaxed in mainline Methodism, although practices such as teetotalism and fasting are still encouraged, in addition to the current prohibition of gambling.[153][154] Denominations of the conservative holiness movement, such as the Allegheny Wesleyan Methodist Connection and Evangelical Methodist Church Conference, continue to reflect the spirit of the historic Methodist practice of wearing plain dress, with members abstaining from the "wearing of apparel which does not modestly and properly clothe the person" and "refraining from the wearing of jewelry" and "superfluous ornaments (including the wedding ring)".[155][156] The Fellowship of Independent Methodist Churches, which continues to observe the ordinance of women's headcovering, stipulates "renouncing all vain pomp and glory" and "adorning oneself with modest attire."[157] The General Rules of the Methodist Church in America, which are among the doctrinal standards of many Methodist Churches, promote first-day Sabbatarianism as they require "attending upon all the ordinances of God" including "the public worship of God" and prohibit "profaning the day of the Lord, either by doing ordinary work therein or by buying or selling."[117][158]
Methodism is a worldwide movement and Methodist churches are present on all populated continents.[159] Although Methodism is declining in Great Britain and North America, it is growing in other places – at a rapid pace in, for example, South Korea.[160] There is no single Methodist Church with universal juridical authority; Methodists belong to multiple independent denominations or "connexions". The great majority of Methodists are members of denominations which are part of the World Methodist Council, an international association of 80 Methodist, Wesleyan, and related uniting denominations,[161] representing about 80 million people.[5]
I look on all the world as my parish; thus far I mean, that, in whatever part of it I am, I judge it meet, right, and my bounden duty, to declare unto all that are willing to hear, the glad tidings of salvation.
Methodism is prevalent in the English-speaking world but it is also organized in mainland Europe, largely due to missionary activity of British and American Methodists. British missionaries were primarily responsible for establishing Methodism across Ireland and Italy.[162] Today the United Methodist Church (UMC) – a large denomination based in the United States – has a presence in Albania, Austria, Belarus, Belgium, Bulgaria, the Czech Republic, Croatia, Denmark, Estonia, Finland, France, Germany, Hungary, Latvia, Lithuania, Moldova, North Macedonia, Norway, Poland, Romania, Serbia, Slovakia, Sweden, Switzerland, and Ukraine. Collectively the European and Eurasian regions of the UMC constitute a little over 100,000 Methodists (as of 2017).[163][164][165] Other smaller Methodist denominations exist in Europe.
The original body founded as a result of Wesley's work came to be known as the Wesleyan Methodist Church. Schisms within the original church, and independent revivals, led to the formation of a number of separate denominations calling themselves "Methodist". The largest of these were the Primitive Methodists, deriving from a revival at Mow Cop in Staffordshire, the Bible Christians, and the Methodist New Connexion. The original church adopted the name "Wesleyan Methodist" to distinguish it from these bodies. In 1907, a union of smaller groups with the Methodist New Connexion and Bible Christian Church brought about the United Methodist Church; then the three major streams of British Methodism united in 1932 to form the present Methodist Church of Great Britain.[166] The fourth-largest denomination in the country, the Methodist Church of Great Britain has about 202,000 members in 4,650 congregations.[167]
Early Methodism was particularly prominent in Devon and Cornwall, which were key centers of activity by the Bible Christian faction of Methodists.[168] The Bible Christians produced many preachers, and sent many missionaries to Australia.[169] Methodism also grew rapidly in the old mill towns of Yorkshire and Lancashire, where the preachers stressed that the working classes were equal to the upper classes in the eyes of God.[170] In Wales, three elements separately welcomed Methodism: Welsh-speaking, English-speaking, and Calvinistic.[171]
British Methodists, in particular the Primitive Methodists, took a leading role in the temperance movement of the 19th and early 20th centuries. Methodists saw alcoholic beverages, and alcoholism, as the root of many social ills and tried to persuade people to abstain from these.[172][173] Temperance appealed strongly to the Methodist doctrines of sanctification and perfection. To this day, alcohol remains banned in Methodist premises; however, this restriction no longer applies to domestic occasions in private homes (i.e. the minister may have a drink at home in the manse).[174] The choice to consume alcohol is now a personal decision for any member.[174]
British Methodism does not have bishops; however, it has always been characterised by a strong central organisation, the Connexion, which holds an annual Conference (the church retains the 18th-century spelling connexion for many purposes). The Connexion is divided into Districts in the charge of the chairperson (who may be male or female). Methodist districts often correspond approximately, in geographical terms, to counties – as do Church of England dioceses. The districts are divided into circuits governed by the Circuit Meeting and led and administrated principally by a superintendent minister. Ministers are appointed to Circuits rather than to individual churches, although some large inner-city churches, known as "central halls", are designated as circuits in themselves – of these, Westminster Central Hall, opposite Westminster Abbey in central London, is the best known. Most circuits have fewer ministers than churches, and the majority of services are led by lay local preachers, or by supernumerary ministers (ministers who have retired, called supernumerary because they are not counted for official purposes in the numbers of ministers for the circuit in which they are listed). The superintendent and other ministers are assisted in the leadership and administration of the Circuit by circuit stewards – laypeople with particular skills who, with the ministers, collectively form what is normally known as the Circuit Leadership Team.[175]
The Methodist Council also helps to run a number of schools, including two public schools in East Anglia: Culford School and The Leys School. The council promotes an all-round education with a strong Christian ethos.[176]
Other Methodist denominations in Britain include: the Free Methodist Church, the Fellowship of Independent Methodist Churches, the Church of the Nazarene, and The Salvation Army, all of which are Methodist churches aligned with the holiness movement, as well as the Wesleyan Reform Union,[177] an early secession from the Wesleyan Methodist Church, and the Independent Methodist Connexion.[178]
John Wesley visited Ireland on at least twenty-four occasions and established classes and societies.[179] The Methodist Church in Ireland (Irish: Eaglais Mheitidisteach in Éirinn) today operates across both Northern Ireland and the Republic of Ireland on an all-Ireland basis. As of 2018, there were around 50,000 Methodists across Ireland.[180] In 2013, the biggest concentration – 13,171 – was in Belfast, with 2,614 in Dublin.[181] As of 2021, it is the fourth-largest denomination in Northern Ireland, with Methodists accounting for 2.3% of the population, compared to 3% in 2011.[182][183]
Eric Gallagher was the President of the Church in the 1970s, becoming a well-known figure in Irish politics.[184] He was one of the group of Protestant churchmen who met with Provisional IRA officers in Feakle, County Clare, to try to broker peace. The meeting was unsuccessful due to a Garda raid on the hotel.
In 1973, the Fellowship of Independent Methodist Churches (FIMC) was established as a number of theologically conservative congregations departed both the Methodist Church in Ireland and the Free Methodist Church due to what they perceived as the rise of Modernism in those denominations.[185][186]
The Italian Methodist Church (Italian: Chiesa Metodista Italiana) is a small Protestant community in Italy,[187] with around 7,000 members.[188] Since 1975, it has been in a formal covenant of partnership with the Waldensian Church, with a total of 45,000 members.[188] Waldensians are a Protestant movement which started in Lyon, France, in the late 1170s.
Italian Methodism has its origins in the Italian Free Church, the British Wesleyan Methodist Missionary Society, and the American Methodist Episcopal Mission. These movements flowered in the second half of the 19th century in the new climate of political and religious freedom that was established with the end of the Papal States and unification of Italy in 1870.[162]
Bertrand M. Tipple, minister of the American Methodist Church in Rome, founded a college there in 1914.[189]
In April 2016, the World Methodist Council opened an Ecumenical Office in Rome. Methodist leaders and the leader of the Roman Catholic Church, Pope Francis, jointly dedicated the new office.[190] It helps facilitate Methodist relationships with the wider Church, especially the Roman Catholic Church.[191]
The "Nordic and Baltic Area" of the United Methodist Church covers the Nordic countries (Denmark, Sweden, Norway, and Finland) and the Baltic countries (Estonia, Latvia, and Lithuania). Methodism was introduced to the Nordic countries in the late 19th century.[192] Today the United Methodist Church in Norway (Norwegian: Metodistkirken) is the largest annual meeting in the region, with 10,684 members in total (as of 2013).[164] The United Methodist Church in Sweden (Swedish: Metodistkyrkan) joined the Uniting Church in Sweden in 2011.[193]
In Finland, Methodism arrived through Ostrobothnian sailors in the 1860s, and Methodism spread especially in Swedish-speaking Ostrobothnia. The first Methodist congregation was founded in Vaasa in 1881 and the first Finnish-speaking congregation in Pori in 1887.[194] At the turn of the century, the congregation in Vaasa became the largest and most active congregation in Methodism.[195]
The French Methodist movement was founded in the 1820s by Charles Cook in the village of Congénies in Languedoc, near Nîmes and Montpellier. The department's most important chapel was built there in 1869, where there had been a Quaker community since the 18th century.[196] Sixteen Methodist congregations voted to join the Reformed Church of France in 1938.[197] In the 1980s, missionary work of a Methodist church in Agen led to new initiatives in Fleurance and Mont de Marsan.[198]
Methodism exists today in France under various names. The best-known is the Union of Evangelical Methodist Churches (French:l'Union de l'Eglise Evangélique Méthodiste) or UEEM. It is an autonomous regional conference of the United Methodist Church and is the fruit of a fusion in 2005 between the "Methodist Church of France" and the "Union of Methodist Churches". As of 2014[update], the UEEM has around 1,200 members and 30 ministers.[197]
In Germany, Switzerland and Austria, Evangelisch-methodistische Kirche is the name of the United Methodist Church. The German part of the church had 52,031 members as of 2015.[165] Members are organized into three annual conferences: north, east and south.[165] All three annual conferences belong to the Germany Central Conference.[199] Methodism is most prevalent in southern Saxony and around Stuttgart.
A Methodist missionary returning from Britain introduced (British) Methodism to Germany in 1830, initially in the region of Württemberg. Methodism was also spread in Germany through the missionary work of the Methodist Episcopal Church, which began in 1849 in Bremen, soon spreading to Saxony and other parts of Germany. Other Methodist missionaries of the Evangelical Association went near Stuttgart (Württemberg) in 1850.[199] Further Methodist missionaries of the Church of the United Brethren in Christ worked in Franconia and other parts of Germany from 1869 until 1905.[200] Therefore, Methodism has four roots in Germany.
Early opposition towards Methodism was partly rooted in theological differences – northern and eastern regions of Germany were predominantly Lutheran and Reformed, and Methodists were dismissed as fanatics. Methodism was also hindered by its unfamiliar church structure (connectionalism), which was more centralised than the hierarchical polity in the Lutheran and Reformed churches. After World War I, the 1919 Weimar Constitution allowed Methodists to worship freely and many new chapels were established. In 1936, German Methodists elected their first bishop.[201]
The first Methodist mission in Hungary was established in 1898 in Bácska, in the then mostly German-speaking town of Verbász (since 1918 part of the Serbian province of Vojvodina). In 1905, a Methodist mission was also established in Budapest. In 1974, a group later known as the Hungarian Evangelical Fellowship seceded from the Hungarian Methodist Church over the question of interference by the communist state.
As of 2017, the United Methodist Church in Hungary, known locally as the Hungarian Methodist Church (Hungarian: Magyarországi Metodista Egyház), had 453 professing members in 30 congregations.[202] It runs two student homes, two homes for the elderly, the Forray Methodist High School, the Wesley Scouts and the Methodist Library and Archives.[203] The church has a special ministry among the Roma.[204][205]
The seceding Hungarian Evangelical Fellowship (Magyarországi Evangéliumi Testvérközösség) also remains Methodist in its organisation and theology. It has eight full congregations and several mission groups, and runs a range of charitable organisations: hostels and soup kitchens for the homeless, a non-denominational theological college,[206] a dozen schools of various kinds, and four old people's homes.
Today there are a dozen Methodist/Wesleyan churches and mission organisations in Hungary, but all Methodist churches lost official church status under new legislation passed in 2011, when the number of officially recognized churches in the country fell to 14.[207] However, the list of recognized churches was lengthened to 32 at the end of February 2012.[208] This gave recognition to the Hungarian Methodist Church and the Salvation Army, which had been banned in Hungary in 1949 but had returned in 1990, but not to the Hungarian Evangelical Fellowship. The legislation has been strongly criticised by the Venice Commission of the Council of Europe as discriminatory.[209]
The Hungarian Methodist Church, the Salvation Army, the Church of the Nazarene and other Wesleyan groups formed the Wesley Theological Alliance for theological and publishing purposes in 1998.[210] Today the Alliance has 10 Wesleyan member churches and organisations. The Hungarian Evangelical Fellowship does not belong to it and has its own publishing arm.[211]
The Methodist Church established several strongholds in Russia – Saint Petersburg in the west and the Vladivostok region in the east, with large Methodist centres in Moscow and Ekaterinburg (formerly Sverdlovsk). Methodists began their work in the west among Swedish immigrants in 1881 and started their work in the east in 1910.[212] On 26 June 2009, Methodists celebrated the 120th anniversary of Methodism's arrival in Czarist Russia by erecting a new Methodist centre in Saint Petersburg.[212] A Methodist presence continued in Russia for 14 years after the Russian Revolution of 1917 through the efforts of Deaconess Anna Eklund.[213] In 1939, political antagonism stymied the work of the Church, and Deaconess Anna Eklund was forced to return to her native Finland.[212]
After 1989, the Soviet Union allowed greatly increased religious freedoms,[214] and this continued after the USSR's collapse in 1991. During the 1990s, Methodism experienced a powerful wave of revival in the nation.[212] Three sites in particular carried the torch – Samara, Moscow and Ekaterinburg. As of 2011, the United Methodist Church in Eurasia comprised 116 congregations, each with a native pastor. At that time, 48 students were enrolled in residential and extension degree programs at the United Methodist Seminary in Moscow.[212]
Methodism came to the Caribbean in 1760 when the planter, lawyer and Speaker of the Antiguan House of Assembly, Nathaniel Gilbert (c. 1719–1774), returned to his sugar estate home in Antigua.[215] A Methodist revival spread in the British West Indies due to the work of British missionaries.[216] Missionaries established societies which would later become the Methodist Church in the Caribbean and the Americas (MCCA). The MCCA has about 62,000 members in over 700 congregations, ministered by 168 pastors.[216] There are smaller Methodist denominations that have seceded from the parent church.
The story is often told that in 1755, Nathaniel Gilbert, while convalescing, read a treatise by John Wesley, An Appeal to Men of Reason and Religion, sent to him by his brother Francis. As a result of having read this book, Gilbert journeyed to England two years later with three of his slaves, and in a drawing-room meeting arranged in Wandsworth on 15 January 1759, met the preacher John Wesley. He returned to the Caribbean that same year and subsequently began to preach to his slaves in Antigua.[215]
When Gilbert died in 1774, his work in Antigua was continued by his brother Francis Gilbert, who ministered to approximately 200 Methodists. However, within a year Francis took ill and returned to Britain, and the work was carried on by Sophia Campbell ("a Negress") and Mary Alley ("a Mulatto"), two devoted women who kept the flock together with class and prayer meetings as well as they could.[216]
On 2 April 1778, John Baxter, a local preacher and skilled shipwright from Chatham in Kent, England, landed at English Harbour in Antigua (now called Nelson's Dockyard), where he was offered a post at the naval dockyard. Baxter was a Methodist and had heard of the work of the Gilberts and their need for a new preacher. He began preaching and meeting with the Methodist leaders, and within a year the Methodist community had grown to 600 persons. By 1783, the first Methodist chapel was built in Antigua, with John Baxter as the local preacher; its wooden structure seated some 2,000 people.[217]
In 1785, William Turton (1761–1817), a Barbadian son of a planter, met John Baxter in Antigua; later, as a layman, he assisted in the Methodist work in the Swedish colony of St. Bartholomew from 1796.[215]
In 1786, the missionary endeavour in the Caribbean was officially recognized by the Methodist Conference in England, and that same year Thomas Coke, having been made Superintendent of the church two years previously in America by Wesley, was travelling to Nova Scotia, but weather forced his ship to Antigua.[218][219][220]
In 1818, Edward Fraser (1798 – aft. 1850), a privileged Barbadian slave, moved to Bermuda and subsequently met the new minister, James Dunbar. The Nova Scotian Methodist minister noted young Fraser's sincerity and commitment to his congregation and encouraged him by appointing him as an assistant. By 1827, Fraser had assisted in building a new chapel. He was later freed and admitted to the Methodist ministry to serve in Antigua and Jamaica.[215]
Following William J. Shrewsbury's preaching in the 1820s, Sarah Ann Gill (1779–1866), a free-born black woman, used civil disobedience in an attempt to thwart magistrate rulings that prevented parishioners from holding prayer meetings. In hopes of building a new chapel, she paid an extraordinary £1,700 0s 0d, and ended up having a militia appointed by the Governor to protect her home from demolition.[221]
In 1884, an attempt was made at autonomy with the formation of two West Indian Conferences; however, by 1903 the venture had failed. It was not until the 1960s that another attempt was made at autonomy. This second attempt resulted in the emergence of the Methodist Church in the Caribbean and the Americas in May 1967.[216]
Francis Godson (1864–1953), a Methodist minister who had served briefly in several of the Caribbean islands, eventually immersed himself in helping those in hardship during the First World War in Barbados. He was later appointed to the Legislative Council of Barbados and fought for the rights of pensioners. He was followed by the renowned Barbadian Augustus Rawle Parkinson (1864–1932),[222] who was also the first principal of the Wesley Hall School in Bridgetown, Barbados (which celebrated its 125th anniversary in September 2009).[215]
In more recent times in Barbados, Victor Alphonso Cooke (born 1930) and Lawrence Vernon Harcourt Lewis (born 1932) have been strong influences on the Methodist Church on the island.[215] Their contemporary, Francis Woodbine Blackman (1922–2010), a late member of the Dalkeith Methodist Church, was a former secretary of the University of the West Indies, a consultant to the Canadian Training Aid Programme and a man of letters. It was his research and published works that brought to light much of this information on Caribbean Methodism.[223][224]
Most Methodist denominations in Africa follow the British Methodist tradition and see the Methodist Church of Great Britain as their mother church. Although originally modelled on the British structure, most of these churches have adopted an episcopal model of church governance since independence.
The Nigerian Methodist Church is one of the largest Methodist denominations in the world and one of the largest Christian churches in Nigeria, with around two million members in 2,000 congregations.[225] It has seen exponential growth since the turn of the millennium.[226]
Christianity was established in Nigeria with the arrival in 1842 of a Wesleyan Methodist missionary.[225] He had come in response to a request for missionaries by the ex-slaves who had returned to Nigeria from Sierra Leone. From the mission stations established in Badagry and Abeokuta, the Methodist church spread to various parts of the country west of the River Niger and to part of the north. In 1893, missionaries of the Primitive Methodist Church arrived from Fernando Po, an island off the southern coast of Nigeria. From there the Methodist Church spread to other parts of the country east of the River Niger and also to parts of the north.
The church west of the River Niger and part of the north was known as the Western Nigeria District; the church east of the Niger and another part of the north was the Eastern Nigeria District. The two existed independently of each other until 1962, when they constituted the Conference of Methodist Church Nigeria. The conference is composed of seven districts. The church has continued to spread into new areas and has established a department for evangelism and appointed a director of evangelism.
An episcopal system of church governance adopted in 1976 was not fully accepted by all sections of the church until the two sides came together and resolved to end the disagreement. A new constitution was ratified in 1990; the system remains episcopal, but the points which had caused discontent were amended to be acceptable to both sides. Today, the Nigerian Methodist Church has a prelate, eight archbishops and 44 bishops.[225]
Methodist Church Ghana is one of the largest Methodist denominations, with around 800,000 members in 2,905 congregations, ministered by 700 pastors.[227] It has fraternal links with the British Methodist and United Methodist churches worldwide.
Methodism in Ghana came into existence as a result of the missionary activities of the Wesleyan Methodist Church, inaugurated with the arrival of Joseph Rhodes Dunwell to the Gold Coast in 1835.[228] Like the mother church, the Methodist Church in Ghana was established by people of Protestant background. Roman Catholic and Anglican missionaries had come to the Gold Coast from the 15th century. A school was established in Cape Coast by the Anglicans during the time of Philip Quaque, a Ghanaian priest. Those who came out of this school were supplied with copies of the Bible and study material by the Society for the Propagation of Christian Knowledge. A member of the resulting Bible study groups, William De-Graft, requested Bibles through Captain Potter of the ship Congo. Not only were Bibles sent, but also a Methodist missionary. In the first eight years of the church's life, 11 of the 21 missionaries who worked in the Gold Coast died. Thomas Birch Freeman, who arrived at the Gold Coast in 1838, was a pioneer of missionary expansion. Between 1838 and 1857 he carried Methodism from the coastal areas to Kumasi in the Asante hinterland of the Gold Coast. He also established Methodist societies in Badagry and Abeokuta in Nigeria with the assistance of William De-Graft.[229]
By 1854, the church was organized into circuits constituting a district with T. B. Freeman as chairman. Freeman was replaced in 1856 by William West. The district was divided and extended to include areas in the then Gold Coast and Nigeria by the synod in 1878, a move confirmed at the British Conference. The districts were the Gold Coast District, with T. R. Picot as chairman, and the Yoruba and Popo District, with John Milum as chairman. Methodist evangelisation of the northern Gold Coast began in 1910. After a long period of conflict with the colonial government, missionary work was established there in 1955. Paul Adu was the first indigenous missionary to the northern Gold Coast.[230]
In July 1961, the Methodist Church in Ghana became autonomous, and was called the Methodist Church Ghana, based on a deed of foundation, part of the church's Constitution and Standing Orders.[227]
The Methodist Church operates across South Africa, Namibia, Botswana, Lesotho and Swaziland, with a limited presence in Zimbabwe and Mozambique. It is a member church of the World Methodist Council.
Methodism in Southern Africa began as a result of lay Christian work by an Irish soldier of the English Regiment, John Irwin, who was stationed at the Cape and began to hold prayer meetings as early as 1795.[231] The first Methodist lay preacher at the Cape, George Middlemiss, was a soldier of the 72nd Regiment of the British Army stationed at the Cape in 1805.[232] This foundation paved the way for missionary work by Methodist missionary societies from Great Britain, many of whom sent missionaries with the 1820 English settlers to the Western and Eastern Cape. Among the most notable of the early missionaries were Barnabas Shaw and William Shaw.[233][234][235] The largest group was the Wesleyan Methodist Church, but there were a number of others that joined to form the Methodist Church of South Africa, later known as the Methodist Church of Southern Africa.[236]
The Methodist Church of Southern Africa is the largest mainline Protestant denomination in South Africa – 7.3% of the South African population recorded their religious affiliation as 'Methodist' in the last national census.[237]
Methodism was brought to China in the autumn of 1847 by the Methodist Episcopal Church. The first missionaries sent out were Judson Dwight Collins and Moses Clark White, who sailed from Boston on 15 April 1847 and reached Fuzhou on 6 September. They were followed by Henry Hickok and Robert Samuel Maclay, who arrived on 15 April 1848. In 1857, the first convert was baptised in connection with the mission's labours. In August 1856, a brick-built church was dedicated, named the "Church of the True God" (Chinese: 真神堂; pinyin: Zhēnshén táng), the first substantial church building erected in Fuzhou by Protestant missions. In the winter of the same year another brick-built church, located on the hill in the suburbs on the south bank of the Min, was finished and dedicated, called the "Church of Heavenly Peace". In 1862, the number of members was 87. The Fuzhou Conference was organized by Isaac W. Wiley on 6 December 1867, by which time the number of members and probationers had reached 2,011.
Hok Chau (周學; Zhōu Xué; also known as Lai-Tong Chau, 周勵堂; Zhōu Lìtáng) was the first ordained Chinese minister of the South China District of the Methodist Church (incumbent 1877–1916). Benjamin Hobson, a medical missionary sent by the London Missionary Society in 1839, set up the Wai Ai Clinic (惠愛醫館; Huì ài yī guǎn).[238][239] Liang Fa, Hok Chau and others worked there. Liang baptized Chau in 1852. The Methodist Church based in Britain sent the missionary George Piercy to China. In 1851, Piercy went to Guangzhou (Canton), where he worked in a trading company. In 1853, he started a church in Guangzhou. In 1877, Chau was ordained by the Methodist Church, where he pastored for 39 years.[240][241]
In 1867, the mission sent out the first missionaries to Central China, who began work at Jiujiang. In 1869, missionaries were also sent to the capital city, Beijing, where they laid the foundations of the work of the North China Mission. In November 1880, the West China Mission was established in Sichuan Province. In 1896, the work in the Hinghua prefecture (modern-day Putian) and surrounding regions was also organized as a Mission Conference.[242]
Methodism came to India twice, in 1817 and in 1856, according to P. Dayanandan, who has researched the subject extensively.[243] Thomas Coke and six other missionaries set sail for India on New Year's Day in 1814. Coke, then 66, died en route. Rev. James Lynch was the one who finally arrived in Madras in 1817, at a place called Black Town (Broadway), later known as George Town. Lynch conducted the first Methodist missionary service on 2 March 1817, in a stable.[244]
The first Methodist church was dedicated in 1819 at Royapettah. A chapel at Broadway (Black Town) was later built and dedicated on 25 April 1822.[245] This church was rebuilt in 1844, since the earlier structure was collapsing.[245] At this time there were about 100 Methodist members in all of Madras, and they were either Europeans or Eurasians (of European and Indian descent). Among the names associated with the founding period of Methodism in India are Elijah Hoole and Thomas Cryer, who came as missionaries to Madras.[246]
In 1857, the Methodist Episcopal Church started its work in India; through the efforts of prominent evangelists like William Taylor, the Emmanuel Methodist Church, Vepery, came into being in 1874. Taylor and the evangelist James Mills Thoburn established the Thoburn Memorial Church in Calcutta in 1873 and the Calcutta Boys' School in 1877.[247]
In 1947, the Wesleyan Methodist Church in India merged with Presbyterians, Anglicans and other Protestant churches to form the Church of South India, while the American Methodist Church remained affiliated, as the Methodist Church in Southern Asia (MCSA), to its mother church in the USA, the United Methodist Church. This arrangement lasted until 1981, when, by an enabling act, the Methodist Church in India (MCI) became an autonomous church in India. Today, the Methodist Church in India is governed by the General Conference of the Methodist Church of India, headed by six bishops, with headquarters in Mumbai, India.[248]
Missionaries from Britain, North America, and Australia founded Methodist churches in many Commonwealth countries. These are now independent from their former "mother" churches. In addition to the churches, these missionaries often also founded schools to serve the local community. Good examples of such schools are the Methodist Boys' School in Kuala Lumpur, the Methodist Girls' School and Methodist Boys' School in George Town, and the Anglo-Chinese School, Methodist Girls' School, Paya Lebar Methodist Girls' School and Fairfield Methodist Schools in Singapore.[249]
Methodism in the Philippines began shortly after the United States acquired the Philippines in 1898 as a result of the Spanish–American War. On 21 June 1898, after the Battle of Manila Bay but before the Treaty of Paris, executives of the American Mission Society of the Methodist Episcopal Church expressed their desire to join other Protestant denominations in starting mission work in the islands and to enter into a Comity Agreement that would facilitate the establishment of such missions. The first Protestant worship service was conducted on 28 August 1898 by an American military chaplain named George C. Stull. Stull was an ordained Methodist minister from the Montana Annual Conference of the Methodist Episcopal Church (later part of the United Methodist Church after 1968).[250]
Methodist and Wesleyan traditions in the Philippines are shared by three of the largest mainline Protestant churches in the country: The United Methodist Church in the Philippines, the Iglesia Evangelica Metodista En Las Islas Filipinas ("Evangelical Methodist Church in the Philippine Islands", abbreviated IEMELIF), and The United Church of Christ in the Philippines.[251] There are also evangelical Protestant churches in the country of the Methodist tradition, like the Wesleyan Church of the Philippines, the Free Methodist Church of the Philippines,[252] and the Church of the Nazarene.[253] There are also the IEMELIF Reform Movement (IRM), The Wesleyan (Pilgrim Holiness) Church of the Philippines, the Philippine Bible Methodist Church, Incorporated, the Pentecostal Free Methodist Church, Incorporated, the Fundamental Christian Methodist Church, The Reformed Methodist Church, Incorporated, The Methodist Church of the Living Bread, Incorporated, and the Wesley Evangelical Methodist Church & Mission, Incorporated.
There are three episcopal areas of the United Methodist Church in the Philippines: the Baguio Episcopal Area, Davao Episcopal Area and Manila Episcopal Area.[254]
A call for autonomy from groups within the United Methodist Church in the Philippines was discussed at several conferences led mostly by episcopal candidates. This led to the establishment of the Ang Iglesia Metodista sa Pilipinas ("The Methodist Church in the Philippines") in 2010,[255] led by Bishop Lito C. Tangonan, George Buenaventura, Chita Milan and Joe Frank E. Zuñiga. The group finally declared full autonomy, and its legal incorporation with the Securities and Exchange Commission was approved on 7 December 2011, with papers held by the present procurators. It now has 126 local churches in Metro Manila, Palawan, Bataan, Zambales, Pangasinan, Bulacan,[256] Aurora and Nueva Ecija, as well as parts of Pampanga and Cavite. Tangonan was consecrated as the denomination's first Presiding Bishop on 17 March 2012.[257]
The Korean Methodist Church (KMC) is one of the largest churches in South Korea, with around 1.5 million members and 8,306 ministers.[258] Methodism in Korea grew out of British and American mission work which began in the late 19th century. The first missionary was Robert Samuel Maclay of the Methodist Episcopal Church, who sailed from Japan in 1884 and was granted authority for medical and schooling work by Emperor Gojong.[259] The Korean church became fully autonomous in 1930, retaining affiliation with Methodist churches in America and later the United Methodist Church.[258] The church experienced rapid growth in membership throughout most of the 20th century – in spite of the Korean War – before stabilizing in the 1990s.[258] The KMC is a member of the World Methodist Council and hosted the first Asia Methodist Convention in 2001.[258]
There are many Korean-language Methodist churches in North America catering to Korean-speaking immigrants.[260]
In 1947, the Methodist Church in the Republic of China celebrated its centenary. In 1949, however, the Methodist Church moved to Taiwan with the Kuomintang government. On 21 June 1953, Taipei Methodist Church was erected; local churches and chapels followed, with a baptized membership numbering over 2,500. Various types of educational, medical and social services are provided (including Tunghai University). In 1972, the Methodist Church in the Republic of China became autonomous, and the first bishop was installed in 1986.[261]
The Methodist Church in Brazil was founded by American missionaries in 1867, after an initial unsuccessful attempt in 1835. It has grown steadily since, becoming autonomous in 1930. In the 1970s it ordained its first woman minister, and in 1975 it founded the first Methodist university in Latin America, the Methodist University of Piracicaba.[262] As of 2011, the Brazilian Methodist Church is divided into eight annual conferences with 162,000 members.[263]
The father of Methodism in Canada was Rev. Coughlan, who arrived in Newfoundland in 1763, where he opened a school and travelled widely.
The second was William Black (1760–1834), who began preaching in settlements along the Petitcodiac River of New Brunswick in 1781.[264] A few years afterwards, Methodist Episcopal circuit riders from the U.S. state of New York began to arrive in Canada West: at Niagara and the north shore of Lake Erie in 1786, and at the Kingston region on the northeast shore of Lake Ontario in the early 1790s. At the time the region was part of British North America and became part of Upper Canada after the Constitutional Act of 1791. Upper and Lower Canada were both parts of the New York Episcopal Methodist Conference until 1810, when they were transferred to the newly formed Genesee Conference. Reverend Major George Neal began to preach in Niagara in October 1786 and was ordained in 1810 by Bishop Francis Asbury at the Lyons, New York, Methodist Conference. He was Canada's first saddlebag preacher and travelled from Lake Ontario to Detroit for 50 years preaching the gospel.[265]
The spread of Methodism in the Canadas was seriously disrupted by the War of 1812 but quickly regained lost ground after the Treaty of Ghent was signed in 1815. In 1817, the British Wesleyans arrived in the Canadas from the Maritimes, but by 1820 they had agreed with the Episcopal Methodists to confine their work to Lower Canada (present-day Quebec) while the latter would confine themselves to Upper Canada (present-day Ontario). In the summer of 1818, the first place of public worship was erected for the Wesleyan Methodists in York, later Toronto. The chapel for the First Methodist Church was built on the corner of King Street and Jordan Street; the entire cost of the building was $250, an amount that took the congregation three years to raise. In 1828, Upper Canadian Methodists were permitted by the General Conference in the United States to form an independent Canadian Conference and, in 1833, the Canadian Conference merged with the British Wesleyans to form the Wesleyan Methodist Church in Canada. In 1884, most Canadian Methodists were brought under the umbrella of the Methodist Church, Canada.
In the fall of 1873 and winter of 1874, General Superintendent B. T. Roberts of the Free Methodist Church visited Scarborough on the invitation of Robert Loveless, a Primitive Methodist layman. Later, in 1876, while presiding over the very young North Michigan Conference, he read conference appointments that assigned C. H. Sage his field of labour: Canada. This led to the expansion of the Free Methodist Church in Canada.
In 1925, the Methodist Church, Canada and most Presbyterian congregations (then by far the largest Protestant communion in Canada), most Congregational Union of Ontario and Quebec congregations, Union Churches in Western Canada, and the American Presbyterian Church in Montreal merged to form the United Church of Canada. In 1968, the Evangelical United Brethren Church's Canadian congregations joined the United Church of Canada.
The Free Methodist Church in Canada is the largest Methodist denomination in the country at present. A smaller denomination, the British Methodist Episcopal Church, remains active today as well.
The Methodist Church came to Mexico in 1872, with the arrival of two Methodist commissioners from the United States to observe the possibilities of evangelistic work in México. In December 1872, Bishop Gilbert Haven arrived in Mexico City. He was ordered by M. D. William Butler to go to México. Bishop John C. Keener arrived from the Methodist Episcopal Church, South in January 1873.[267][268]
In 1874, M. D. William Butler established the first Protestant Methodist school of México, in Puebla. The school was founded under the name "Instituto Metodista Mexicano". Today the school is called "Instituto Mexicano Madero". It is still a Methodist school, and it is one of the most elite, selective, expensive and prestigious private schools in the country,[269] with two campuses in Puebla State and one in Oaxaca. A few years later the principal of the school created a Methodist university.[270]
On 18 January 1885, the first Annual Conference of the United Episcopal Church of México was established.[271]
Wesley came to believe that the New Testament evidence did not leave the power of ordination to the priesthood in the hands of bishops, but that other priests could ordain. In 1784, he ordained preachers for Scotland, England, and America, with power to administer the sacraments (this was a major reason for Methodism's final split from the Church of England after Wesley's death). At that time, Wesley sent Thomas Coke to America. Francis Asbury founded the Methodist Episcopal Church at the Baltimore Christmas Conference in 1784; Coke (already ordained in the Church of England) ordained Asbury deacon, elder, and bishop on three successive days.[272] Circuit riders, many of whom were laymen, travelled by horseback to preach the gospel and establish churches in many places. One of the most famous circuit riders was Robert Strawbridge, who lived in the vicinity of Carroll County, Maryland, soon after arriving in the Colonies around 1760.[273]
The First Great Awakening was a religious movement in the 1730s and 1740s, beginning in New Jersey, then spreading to New England, and eventually south into Virginia and North Carolina. George Whitefield played a major role, traveling across the colonies and preaching in a dramatic and emotional style, accepting everyone as his audience.[274]
The new style of sermons and the way people practiced their faith breathed new life into religion in America. People became passionately and emotionally involved in their religion, rather than passively listening to intellectual discourse in a detached manner. People began to study the Bible at home. The effect was akin to the individualistic trends present in Europe during the Protestant Reformation.
The Second Great Awakening was a nationwide wave of revivals, from 1790 to 1840. In New England, the renewed interest in religion inspired a wave of social activism among Yankees; Methodism grew and established several colleges, notably Boston University. In the "burned over district" of western New York, the spirit of revival burned brightly. Methodism saw the emergence of a Holiness movement. In the west, especially at Cane Ridge, Kentucky, and in Tennessee, the revival strengthened the Methodists and the Baptists. Methodism grew rapidly in the Second Great Awakening, becoming the nation's largest denomination by 1820. From 58,000 members in 1790, it reached 258,000 in 1820 and 1,661,000 in 1860, growing by a factor of 28.6 in 70 years, while the total American population grew by a factor of eight.[276] Other denominations also used revivals, but the Methodists grew fastest of all because "they combined popular appeal with efficient organization under the command of missionary bishops."[277] Methodism attracted German immigrants, and the first German Methodist Church was erected in Cincinnati, Ohio.[278]
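The growth factor quoted above follows directly from the cited membership figures; as a quick arithmetic check (a verification added here, not a claim from the source):

\[ \frac{1{,}661{,}000}{58{,}000} \approx 28.6 \quad \text{over } 1860 - 1790 = 70 \text{ years.} \]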
Disputes over slavery placed the church in difficulty in the first half of the 19th century, with the northern church leaders fearful of a split with the South and reluctant to take a stand. The Wesleyan Methodist Connexion (later renamed the Wesleyan Methodist Church) and the Free Methodist Church were formed by staunch abolitionists, and the Free Methodists were especially active in the Underground Railroad, which helped to free slaves. In 1962, the Evangelical Wesleyan Church separated from the Free Methodist Church.[279] In 1968, the Wesleyan Methodist Church and the Pilgrim Holiness Church merged to form the Wesleyan Church; a significant number dissented from this decision, resulting in the independence of the Allegheny Wesleyan Methodist Connection and the formation of the Bible Methodist Connection of Churches, both of which fall within the conservative holiness movement.[280]
In a much larger split, in 1845 at Louisville, Kentucky, the churches of the slaveholding states left the Methodist Episcopal Church and formed the Methodist Episcopal Church, South. The northern and southern branches were reunited in 1939, when slavery was no longer an issue. The Methodist Protestant Church also joined this merger. Some southerners, more conservative in theology, opposed the merger and formed the Southern Methodist Church in 1940.
The Third Great Awakening from 1858 to 1908 saw enormous growth in Methodist membership, and a proliferation of institutions such as colleges (e.g., Morningside College). Methodists were often involved in the Missionary Awakening and the Social Gospel Movement. The awakening in so many cities in 1858 started the movement, but in the North it was interrupted by the Civil War. In the South, on the other hand, the Civil War stimulated revivals, especially in Lee's army.[281]
In 1914–1917, many Methodist ministers made strong pleas for world peace. President Woodrow Wilson (a Presbyterian) promised "a war to end all wars", using the language of a future peace that had been a watchword for the postmillennial movement.[282] In the 1930s many Methodists favored isolationist policies. Thus in 1936, Methodist Bishop James Baker, of the San Francisco Conference, released a poll of ministers showing 56% opposed warfare. However, the Methodist Federation called for a boycott of Japan, which had invaded China and was disrupting missionary activity there.[283] In Chicago, 62 local African Methodist Episcopal churches voted their support for the Roosevelt administration's policy, while opposing any plan to send American troops overseas to fight. When war came in 1941, the vast majority of Methodists supported the national war effort, but there were also a few (673)[284] conscientious objectors.
The United Methodist Church (UMC) was formed in 1968 as a result of a merger between the Evangelical United Brethren Church (EUB) and the Methodist Church. The former church had resulted from mergers of several groups of German Methodist heritage; however, there was no longer any need or desire to worship in the German language. The latter church was a result of union between the Methodist Protestant Church and the northern and southern factions of the Methodist Episcopal Church. The merged church had approximately nine million members as of the late 1990s. While United Methodist Church membership in America has been declining, associated groups in developing countries are growing rapidly.[285] Prior to the merger that led to the formation of the United Methodist Church, the Evangelical Methodist Church entered into a schism with the Methodist Church, citing modernism in its parent body as the reason for the departure in 1946.[286]
American Methodist churches are generally organized on a connectional model, related to but not identical with that used in Britain. Pastors are assigned to congregations by bishops, distinguishing it from presbyterian government. Methodist denominations typically give lay members representation at regional and national conferences at which the business of the church is conducted, making it different from most episcopal government. This connectional organizational model differs further from the congregational model of, for example, Baptist and Congregationalist churches, among others.
In addition to the United Methodist Church, there are over 40 other denominations that descend from John Wesley's Methodist movement. Some, such as the African Methodist Episcopal Church, the Free Methodists and the Wesleyan Church (formerly Wesleyan Methodist), are explicitly Methodist. There are also independent Methodist churches, many of which are affiliated with the Association of Independent Methodists.[287] The Salvation Army and the Church of the Nazarene adhere to Methodist theology.[288]
The Holiness Revival was primarily among people of Methodist persuasion, who felt that the church had once again become apathetic, losing the Wesleyan zeal.[289] Some important events of this revival were the writings of Phoebe Palmer during the mid-1800s,[290] the establishment of the first of many holiness camp meetings at Vineland, New Jersey, in 1867, and the founding of Asbury College (1890) and other similar institutions in the U.S. around the turn of the 20th century.
In 2020, United Methodists announced a plan to split the denomination over the issue of same-sex marriage,[291] which resulted in traditionalist clergy, laity and theologians forming the Global Methodist Church, an evangelical Methodist denomination that came into being on 1 May 2022.[292][293][294]
Methodism is particularly widespread in some Pacific Island nations, such as Fiji, Samoa and Tonga.
In the 19th century there were annual conferences in each Australasian colony (including New Zealand). Various branches of Methodism in Australia merged during the 20 years from 1881. The Methodist Church of Australasia was formed on 1 January 1902, when five Methodist denominations in Australia – the Wesleyan Methodist Church, the Primitive Methodists, the Bible Christian Church, the United Methodist Free and the Methodist New Connexion Churches – merged.[295][296] In polity it largely followed the Wesleyan Methodist Church.
In 1945, Kingsley Ridgway offered himself as a Melbourne-based "field representative" for a possible Australian branch of the Wesleyan Methodist Church of America, after meeting an American serviceman who was a member of that denomination.[297] The Wesleyan Methodist Church of Australia was founded on his work.
The Methodist Church of Australasia merged with the majority of the Presbyterian Church of Australia and the Congregational Union of Australia in 1977, becoming the Uniting Church. The Wesleyan Methodist Church of Australia and some independent congregations chose not to join the union.[298]
Wesley Mission in Pitt Street, Sydney, the largest parish in the Uniting Church, remains strongly in the Wesleyan tradition.[299] There are many local churches named after John Wesley.
From the mid-1980s, a number of independent Methodist churches were founded by missionaries and other members from the Methodist Churches of Malaysia and Singapore. Some of these came together to form what is now known as the Chinese Methodist Church in Australia in 1993, and it held its first full Annual Conference in 2002.[300] Since the 2000s, many independent Methodist churches have also been established or grown by Tongan immigrants.[301]
As a result of the early efforts of missionaries, most of the natives of the Fiji Islands were converted to Methodism in the 1840s and 1850s.[302] According to the 2007 census, 34.6% of the population (including almost two-thirds of ethnic Fijians)[303] are adherents of Methodism, making Fiji one of the most Methodist nations. The Methodist Church of Fiji and Rotuma, the largest religious denomination, is an important social force along with the traditional chiefly system. In the past, the church called for a theocracy and fueled anti-Hindu sentiment.[304]
In June 1823, Wesleydale, the first Wesleyan Methodist mission in New Zealand, was established at Kaeo.[305] The Methodist Church of New Zealand, which is directly descended from the 19th-century missionaries, was the fourth-most common Christian denomination recorded in the 2018 New Zealand census.[306]
Since the early 1990s, missionaries and other Methodists from Malaysia and Singapore have established Methodist churches around major urban areas in New Zealand. These congregations came together to form the Chinese Methodist Church in New Zealand (CMCNZ) in 2003.[307]
The Methodist Church is the third largest denomination throughout the Samoan Islands, in both Samoa and American Samoa.[308] In 1868, Piula Theological College was established in Lufilufi on the north coast of Upolu island in Samoa and serves as the main headquarters of the Methodist church in the country.[309] The college includes the historic Piula Monastery as well as Piula Cave Pool, a natural spring situated beneath the church by the sea.
Methodism had a particular resonance with the inhabitants of Tonga. In the 1830s, Wesleyan missionaries converted the paramount chief Taufa'ahau Tupou, who in turn converted fellow islanders. Today, Methodism is represented on the islands by the Free Church of Tonga and the Free Wesleyan Church, which is the largest church in Tonga. As of 2011, 48% of Tongans adhered to Methodist churches.[310] The royal family of the country are prominent members of the Free Wesleyan Church, and the late king was a lay preacher.[311][312] The Tongan Methodist minister Sione 'Amanaki Havea developed coconut theology, which tailors theology to a Pacific Islands context.[313]
Many Methodists have been involved in the ecumenical movement,[314] which has sought to unite the fractured denominations of Christianity. Because Methodism grew out of the Church of England, a denomination from which neither of the Wesley brothers seceded, some Methodist scholars and historians, such as Rupert E. Davies, have regarded their 'movement' more as a preaching order within wider Christian life than as a church, comparing them with the Franciscans, who formed a religious order within the medieval European church and not a separate denomination.[315] Certainly, Methodists have been deeply involved in early examples of church union, especially the United Church of Canada and the Church of South India.
A disproportionate number of Methodists take part in inter-faith dialogue. For example, Wesley Ariarajah, a long-serving director of the World Council of Churches' sub-unit on "Dialogue with People of Living Faiths and Ideologies", is a Methodist.[316]
In October 1999, an executive committee of the World Methodist Council resolved to explore the possibility of its member churches becoming associated with the doctrinal agreement which had been reached by the Catholic Church and the Lutheran World Federation (LWF). In May 2006, the International Methodist–Catholic Dialogue Commission completed its most recent report, entitled "The Grace Given You in Christ: Catholics and Methodists Reflect Further on the Church", and submitted the text to Methodist and Catholic authorities. In July of the same year, in Seoul, South Korea, the Member Churches of the World Methodist Council (WMC) voted to approve and sign a "Methodist Statement of Association" with the Joint Declaration on the Doctrine of Justification, the agreement which was reached and officially accepted in 1999 by the Catholic Church and the Lutheran World Federation and which proclaimed that:
"Togetherwe confess: Bygrace alone, infaithin Christ'ssaving workand not because of anymeriton our part, we are accepted byGodand receive theHoly Spirit, whorenews our heartswhileequippingandcalling ustogood works... assinnersournew lifeissolely dueto theforgivingand renewing mercy that Godimpartsas a gift and that we receive in faith, and nevercan meritin any way," affirming "fundamental doctrinal agreement" concerningjustificationbetween the Catholic Church, the LWF, and the World Methodist Council.[317]
This is not to say there is perfect agreement between the three denominational traditions; while Catholics and Methodists believe that salvation involves cooperation between God and man, Lutherans believe that God brings about the salvation of individuals without any cooperation on their part.
Commenting on the ongoing dialogues with Catholic Church leaders, Ken Howcroft, Methodist minister and the Ecumenical Officer for the Methodist Church of Great Britain, noted that "these conversations have been immensely fruitful."[318] Methodists are increasingly recognizing that the 15 centuries prior to the Reformation constitute a shared history with Catholics, and are gaining new appreciation for neglected aspects of the Catholic tradition.[319] There are, however, important unresolved doctrinal differences separating Roman Catholicism and Methodism, which include "the nature and validity of the ministry of those who preside at the Eucharist [Holy Communion], the precise meaning of the Eucharist as the sacramental 'memorial' of Christ's saving death and resurrection, the particular way in which Christ is present in Holy Communion, and the link between eucharistic communion and ecclesial communion".[320]
In the 1960s, the Methodist Church of Great Britain made ecumenical overtures to the Church of England, aimed at denominational union. Formally, these failed when they were rejected by the Church of England's General Synod in 1972; conversations and co-operation continued, however, leading in 2003 to the signing of a covenant between the two churches.[321] From the 1970s onward, the Methodist Church also started several Local Ecumenical Projects (LEPs, later renamed Local Ecumenical Partnerships) with local neighbouring denominations, which involved sharing churches, schools and in some cases ministers. In many towns and villages Methodists are involved in LEPs, which are sometimes with Anglican or Baptist churches, but most commonly with the United Reformed Church. In terms of belief, practice and churchmanship, many Methodists see themselves as closer to the United Reformed Church (another Nonconformist church) than to the Church of England. In the 1990s and early 21st century, the British Methodist Church was involved in the Scottish Church Initiative for Union, seeking greater unity with the established and Presbyterian Church of Scotland, the Scottish Episcopal Church and the United Reformed Church in Scotland.[322]
The Methodist Church of Great Britain is a member of several ecumenical organisations, including the World Council of Churches, the Conference of European Churches, the Community of Protestant Churches in Europe, Churches Together in Britain and Ireland, Churches Together in England, Action of Churches Together in Scotland and Cytûn (Wales).
Methodist denominations in the United States have also strengthened ties with other Christian traditions. In April 2005, bishops in the United Methodist Church approved A Proposal for Interim Eucharistic Sharing. This document was the first step toward full communion with the Evangelical Lutheran Church in America (ELCA). The ELCA approved this same document in August 2005.[323] At the 2008 General Conference, the United Methodist Church approved full communion with the ELCA.[324] The UMC is also in dialogue with the Episcopal Church for full communion.[325] The UMC and ELCA worked together on a document called "Confessing Our Faith Together".[326] | https://en.wikipedia.org/wiki/Methodism |
In its most common sense, methodology is the study of research methods. However, the term can also refer to the methods themselves or to the philosophical discussion of associated background assumptions. A method is a structured procedure for bringing about a certain goal, like acquiring knowledge or verifying knowledge claims. This normally involves various steps, like choosing a sample, collecting data from this sample, and interpreting the data. The study of methods concerns a detailed description and analysis of these processes. It also includes evaluative aspects: by comparing different methods, one can assess what advantages and disadvantages they have and for what research goals they may be used. These descriptions and evaluations depend on philosophical background assumptions. Examples are how to conceptualize the studied phenomena and what constitutes evidence for or against them. When understood in the widest sense, methodology also includes the discussion of these more abstract issues.
Methodologies are traditionally divided into quantitative and qualitative research. Quantitative research is the main methodology of the natural sciences. It uses precise numerical measurements. Its goal is usually to find universal laws used to make predictions about future events. The dominant methodology in the natural sciences is called the scientific method. It includes steps like observation and the formulation of a hypothesis. Further steps are to test the hypothesis using an experiment, to compare the measurements to the expected results, and to publish the findings.
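As a toy illustration of the test-and-compare steps just described, the following Python sketch checks whether a hypothesized value is consistent with a set of experimental measurements. All numbers are invented for illustration and do not come from the source:

```python
import statistics

# Hypothesis: free-fall acceleration is 9.81 m/s^2 (illustrative value).
predicted_value = 9.81
# Experiment: repeated measurements (invented data).
measurements = [9.79, 9.83, 9.80, 9.82, 9.78, 9.84]

mean = statistics.mean(measurements)
stdev = statistics.stdev(measurements)
n = len(measurements)

# Standard error of the mean and a rough two-standard-error interval.
sem = stdev / n ** 0.5
lower, upper = mean - 2 * sem, mean + 2 * sem

# Compare the prediction to the measured interval.
if lower <= predicted_value <= upper:
    print(f"Hypothesis consistent with data: {predicted_value} in ({lower:.3f}, {upper:.3f})")
else:
    print(f"Hypothesis rejected: {predicted_value} outside ({lower:.3f}, {upper:.3f})")
```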
Qualitative research is more characteristic of the social sciences and gives less prominence to exact numerical measurements. It aims more at an in-depth understanding of the meaning of the studied phenomena and less at universal and predictive laws. Common methods found in the social sciences are surveys, interviews, focus groups, and the nominal group technique. They differ from each other concerning their sample size, the types of questions asked, and the general setting. In recent decades, many social scientists have started using mixed-methods research, which combines quantitative and qualitative methodologies.
Many discussions in methodology concern the question of whether the quantitative approach is superior, especially whether it is adequate when applied to the social domain. A few theorists reject methodology as a discipline in general. For example, some argue that it is useless since methods should be used rather than studied. Others hold that it is harmful because it restricts the freedom and creativity of researchers. Methodologists often respond to these objections by claiming that a good methodology helps researchers arrive at reliable theories in an efficient way. The choice of method often matters since the same factual material can lead to different conclusions depending on one's method. Interest in methodology has risen in the 20th century due to the increased importance of interdisciplinary work and the obstacles hindering efficient cooperation.
The term "methodology" is associated with a variety of meanings. In its most common usage, it refers either to a method, to the field of inquiry studying methods, or to philosophical discussions of background assumptions involved in these processes.[1][2][3] Some researchers distinguish methods from methodologies by holding that methods are modes of data collection while methodologies are more general research strategies that determine how to conduct a research project.[1][4] In this sense, methodologies include various theoretical commitments about the intended outcomes of the investigation.[5]
The term "methodology" is sometimes used as a synonym for the term "method". A method is a way of reaching some predefined goal.[6][7][8]It is a planned and structured procedure forsolving a theoretical or practical problem. In this regard, methods stand in contrast to free and unstructured approaches to problem-solving.[7]For example,descriptive statisticsis a method ofdata analysis,radiocarbon datingis a method of determining the age of organic objects,sautéingis a method of cooking, andproject-based learningis aneducationalmethod. The term "technique" is often used as a synonym both in the academic and the everyday discourse. Methods usually involve a clearly defined series of decisions andactionsto be used under certain circumstances, usually expressable as a sequence of repeatable instructions. The goal of following the steps of a method is to bring about the result promised by it. In the context of inquiry, methods may be defined as systems of rules and procedures to discover regularities ofnature,society, andthought.[6][7]In this sense, methodology can refer to procedures used to arrive at newknowledgeor to techniques of verifying and falsifying pre-existing knowledge claims.[9]This encompasses various issues pertaining both to the collection of data and their analysis. Concerning the collection, it involves the problem ofsamplingand of how to go about the data collection itself, like surveys, interviews, or observation. There are also numerous methods of how the collected data can be analyzed using statistics or other ways of interpreting it to extract interesting conclusions.[10]
However, many theorists emphasize the differences between the terms "method" and "methodology".[1][7][2][11] In this regard, methodology may be defined as "the study or description of methods" or as "the analysis of the principles of methods, rules, and postulates employed by a discipline".[12][13] This study or analysis involves uncovering assumptions and practices associated with the different methods and a detailed description of research designs and hypothesis testing. It also includes evaluative aspects: forms of data collection, measurement strategies, and ways to analyze data are compared, and their advantages and disadvantages relative to different research goals and situations are assessed. In this regard, methodology provides the skills, knowledge, and practical guidance needed to conduct scientific research in an efficient manner. It acts as a guideline for various decisions researchers need to take in the scientific process.[14][10]
Methodology can be understood as the middle ground between concrete particular methods and the abstract and general issues discussed by the philosophy of science.[11][15] In this regard, methodology comes after formulating a research question and helps the researchers decide what methods to use in the process. For example, methodology should assist the researcher in deciding why one method of sampling is preferable to another in a particular case or which form of data analysis is likely to bring the best results. Methodology achieves this by explaining, evaluating and justifying methods. Just as there are different methods, there are also different methodologies. Different methodologies provide different approaches to how methods are evaluated and explained and may thus make different suggestions on what method to use in a particular case.[15][11]
According to Aleksandr Georgievich Spirkin, "[a] methodology is a system of principles and general ways of organising and structuring theoretical and practical activity, and also the theory of this system".[16][17] Helen Kara defines methodology as "a contextual framework for research, a coherent and logical scheme based on views, beliefs, and values, that guides the choices researchers make".[18] Ginny E. Garcia and Dudley L. Poston understand methodology either as a complex body of rules and postulates guiding research or as the analysis of such rules and procedures. As a body of rules and postulates, a methodology defines the subject of analysis as well as the conceptual tools used by the analysis and the limits of the analysis. Research projects are usually governed by a structured procedure known as the research process. The goal of this process is given by a research question, which determines what kind of information one intends to acquire.[19][20]
Some theorists prefer an even wider understanding of methodology that involves not just the description, comparison, and evaluation of methods but includes additionally more general philosophical issues. One reason for this wider approach is that discussions of when to use which method often take various background assumptions for granted, for example, concerning the goal and nature of research. These assumptions can at times play an important role concerning which method to choose and how to follow it.[14][11][21]For example,Thomas Kuhnargues in hisThe Structure of Scientific Revolutionsthat sciences operate within a framework or aparadigmthat determines which questions are asked and what counts as good science. This concerns philosophical disagreements both about how to conceptualize the phenomena studied, what constitutesevidencefor and against them, and what the general goal of researching them is.[14][22][23]So in this wider sense, methodology overlaps with philosophy by making these assumptions explicit and presenting arguments for and against them.[14]According to C. S. Herrman, a good methodology clarifies the structure of the data to be analyzed and helps the researchers see the phenomena in a new light. In this regard, a methodology is similar to a paradigm.[3][15]A similar view is defended by Spirkin, who holds that a central aspect of every methodology is theworld viewthat comes with it.[16]
The discussion of background assumptions can include metaphysical and ontological issues in cases where they have important implications for the proper research methodology. For example, a realist perspective considering the observed phenomena as an external and independent reality is often associated with an emphasis on empirical data collection and a more distanced and objective attitude. Idealists, on the other hand, hold that external reality is not fully independent of the mind and tend, therefore, to include more subjective tendencies in the research process as well.[5][24][25]
For the quantitative approach, philosophical debates in methodology include the distinction between the inductive and the hypothetico-deductive interpretation of the scientific method. For qualitative research, many basic assumptions are tied to philosophical positions such as hermeneutics, pragmatism, Marxism, critical theory, and postmodernism.[14][26] According to Kuhn, an important factor in such debates is that the different paradigms are incommensurable. This means that there is no overarching framework to assess the conflicting theoretical and methodological assumptions. This critique puts into question various presumptions of the quantitative approach associated with scientific progress based on the steady accumulation of data.[14][22]
Other discussions of abstract theoretical issues in the philosophy of science are also sometimes included.[6][9] This can involve questions like how and whether scientific research differs from fictional writing as well as whether research studies objective facts rather than constructing the phenomena it claims to study. In the latter sense, some methodologists have even claimed that the goal of science is less to represent a pre-existing reality and more to bring about some kind of social change in favor of repressed groups in society.[14]
Viknesh Andiappan and Yoke Kin Wan use the field of process systems engineering to distinguish the term "methodology" from the closely related terms "approach", "method", "procedure", and "technique".[27] On their view, "approach" is the most general term. It can be defined as "a way or direction used to address a problem based on a set of assumptions". An example is the difference between hierarchical approaches, which consider one task at a time in a hierarchical manner, and concurrent approaches, which consider them all simultaneously. Methodologies are a little more specific. They are general strategies needed to realize an approach and may be understood as guidelines for how to make choices. Often the term "framework" is used as a synonym. A method is a still more specific way of practically implementing the approach. Methodologies provide the guidelines that help researchers decide which method to follow. The method itself may be understood as a sequence of techniques. A technique is a step taken that can be observed and measured. Each technique has some immediate result. The whole sequence of steps is termed a "procedure".[27][28] A similar but less complex characterization is sometimes found in the field of language teaching, where the teaching process may be described through a three-level conceptualization based on "approach", "method", and "technique".[29]
One question concerning the definition of methodology is whether it should be understood as a descriptive or a normative discipline. The key difference in this regard is whether methodology merely provides a value-neutral description of what scientists actually do. Many methodologists practice their craft in a normative sense, meaning that they express clear opinions about the advantages and disadvantages of different methods. In this regard, methodology is not just about what researchers actually do but about what they ought to do or how to perform good research.[14][8]
Theorists often distinguish various general types or approaches to methodology. The most influential classification contrasts quantitative and qualitative methodology.[4][30][19][16]
Quantitative research is closely associated with the natural sciences. It is based on precise numerical measurements, which are then used to arrive at exact general laws. This precision is also reflected in the goal of making predictions that can later be verified by other researchers.[4][8] Examples of quantitative research include physicists at the Large Hadron Collider measuring the mass of newly created particles and positive psychologists conducting an online survey to determine the correlation between income and self-assessed well-being.[31]
Qualitative research is characterized in various ways in the academic literature but there are very few precise definitions of the term. It is often used in contrast to quantitative research for forms of study that do not quantify their subject matter numerically.[32][30] However, the distinction between these two types is not always obvious and various theorists have argued that it should be understood as a continuum and not as a dichotomy.[33][34][35] A lot of qualitative research is concerned with some form of human experience or behavior, in which case it tends to focus on a few individuals and their in-depth understanding of the meaning of the studied phenomena.[4] Examples of the qualitative method are a market researcher conducting a focus group in order to learn how people react to a new product or a medical researcher performing an unstructured in-depth interview with a participant in a new experimental therapy to assess its potential benefits and drawbacks.[30] It is also used to improve quantitative research, such as informing data collection materials and questionnaire design.[36] Qualitative research is frequently employed in fields where the pre-existing knowledge is inadequate. This way, it is possible to get a first impression of the field and potential theories, thus paving the way for investigating the issue in further studies.[32][30]
Quantitative methods dominate in the natural sciences but both methodologies are used in the social sciences.[4] Some social scientists focus mostly on one method while others try to investigate the same phenomenon using a variety of different methods.[4][16] Central to both approaches is how the group of individuals used for the data collection is selected. This process is known as sampling. It involves the selection of a subset of individuals or phenomena to be measured. Important in this regard is that the selected samples are representative of the whole population, i.e. that no significant biases were involved when choosing. If this is not the case, the data collected does not reflect what the population as a whole is like. This affects generalizations and predictions drawn from the biased data.[4][19] The number of individuals selected is called the sample size. For qualitative research, the sample size is usually rather small, while quantitative research tends to focus on big groups and collecting a lot of data. After the collection, the data needs to be analyzed and interpreted to arrive at interesting conclusions that pertain directly to the research question. This way, the wealth of information obtained is summarized and thus made more accessible to others. Especially in the case of quantitative research, this often involves the application of some form of statistics to make sense of the numerous individual measurements.[19][8]
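To make sampling concrete, the following Python sketch (a simulated population with illustrative numbers, not data from any real survey) shows how a random sample tracks the population mean while a biased selection distorts it:

```python
import random
import statistics

# Simulated population: 10,000 hypothetical income values (illustrative only).
random.seed(42)
population = [random.lognormvariate(10, 0.5) for _ in range(10_000)]

# A simple random sample aims to be representative of the population.
sample = random.sample(population, k=200)  # sample size n = 200

print(f"population mean:    {statistics.mean(population):,.0f}")
print(f"random sample mean: {statistics.mean(sample):,.0f}")

# A biased selection (here: only the 200 highest values) misrepresents
# the population, so generalizations drawn from it would be distorted.
biased_sample = sorted(population)[-200:]
print(f"biased sample mean: {statistics.mean(biased_sample):,.0f}")
```

Increasing the sample size k makes the random sample's mean cluster more tightly around the population mean, which is one reason quantitative studies favor large samples.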
Many discussions in the history of methodology center around the quantitative methods used by the natural sciences. A central question in this regard is to what extent they can be applied to other fields, like the social sciences and history.[14] The success of the natural sciences was often seen as an indication of the superiority of the quantitative methodology and used as an argument to apply this approach to other fields as well.[14][37] However, this outlook has been put into question in the more recent methodological discourse. In this regard, it is often argued that the paradigm of the natural sciences is a one-sided development of reason, which is not equally well suited to all areas of inquiry.[10][14] The divide between quantitative and qualitative methods in the social sciences is one consequence of this criticism.[14]
Which method is more appropriate often depends on the goal of the research. For example, quantitative methods usually excel at evaluating preconceived hypotheses that can be clearly formulated and measured. Qualitative methods, on the other hand, can be used to study complex individual issues, often with the goal of formulating new hypotheses. This is especially relevant when the existing knowledge of the subject is inadequate.[30] Important advantages of quantitative methods include precision and reliability. However, they often have difficulties in studying very complex phenomena that are commonly of interest to the social sciences. Additional problems can arise when the data is misinterpreted to defend conclusions that are not directly supported by the measurements themselves.[4] In recent decades, many researchers in the social sciences have started combining both methodologies. This is known as mixed-methods research. A central motivation for this is that the two approaches can complement each other in various ways: some issues are ignored or too difficult to study with one methodology and are better approached with the other. In other cases, both approaches are applied to the same issue to produce more comprehensive and well-rounded results.[4][38][39]
Qualitative and quantitative research are often associated with different research paradigms and background assumptions. Qualitative researchers often use an interpretive or critical approach while quantitative researchers tend to prefer a positivistic approach. Important disagreements between these approaches concern the role of objectivity and hard empirical data as well as the research goal of predictive success rather than in-depth understanding or social change.[19][40][41]
Various other classifications have been proposed. One distinguishes between substantive and formal methodologies. Substantive methodologies tend to focus on one specific area of inquiry. The findings are initially restricted to this specific field but may be transferable to other areas of inquiry. Formal methodologies, on the other hand, are based on a variety of studies and try to arrive at more general principles applying to different fields. They may also give particular prominence to the analysis of the language of science and the formal structure of scientific explanation.[42][16][43] A closely related classification distinguishes between philosophical, general scientific, and special scientific methods.[16][44][17]
One type of methodological outlook is called "proceduralism". According to it, the goal of methodology is to boil down the research process to a simple set of rules or a recipe that automatically leads to good research if followed precisely. However, it has been argued that, while this ideal may be acceptable for some forms of quantitative research, it fails for qualitative research. One argument for this position is based on the claim that research is not a technique but a craft that cannot be achieved by blindly following a method. In this regard, research depends on forms of creativity and improvisation to amount to good science.[14][45][46]
Other types include inductive, deductive, and transcendental methods.[9] Inductive methods are common in the empirical sciences and proceed through inductive reasoning from many particular observations to arrive at general conclusions, often in the form of universal laws.[47] Deductive methods, also referred to as axiomatic methods, are often found in formal sciences, such as geometry. They start from a set of self-evident axioms or first principles and use deduction to infer interesting conclusions from these axioms.[48] Transcendental methods are common in Kantian and post-Kantian philosophy. They start with certain particular observations. It is then argued that the observed phenomena can only exist if their conditions of possibility are fulfilled. This way, the researcher may draw general psychological or metaphysical conclusions based on the claim that the phenomenon would not be observable otherwise.[49]
It has been argued that a proper understanding of methodology is important for various issues in the field of research. They include both the problem of conducting efficient and reliable research as well as being able to validate knowledge claims by others.[3] Method is often seen as one of the main factors of scientific progress. This is especially true for the natural sciences, where the developments of experimental methods in the 16th and 17th centuries are often seen as the driving force behind the success and prominence of the natural sciences.[14] In some cases, the choice of methodology may have a severe impact on a research project. The reason is that very different and sometimes even opposite conclusions may follow from the same factual material depending on the chosen methodology.[16]
Aleksandr Georgievich Spirkin argues that methodology, when understood in a wide sense, is of great importance since the world presents us with innumerable entities and relations between them.[16] Methods are needed to simplify this complexity and find a way of mastering it. On the theoretical side, this concerns ways of forming true beliefs and solving problems. On the practical side, this concerns skills of influencing nature and dealing with each other. These different methods are usually passed down from one generation to the next. Spirkin holds that the interest in methodology on a more abstract level arose in attempts to formalize these techniques to improve them as well as to make it easier to use them and pass them on. In the field of research, for example, the goal of this process is to find reliable means to acquire knowledge in contrast to mere opinions acquired by unreliable means. In this regard, "methodology is a way of obtaining and building up ... knowledge".[16][44]
Various theorists have observed that the interest in methodology has risen significantly in the 20th century.[16][14] This increased interest is reflected not just in academic publications on the subject but also in the institutionalized establishment of training programs focusing specifically on methodology.[14] This phenomenon can be interpreted in different ways. Some see it as a positive indication of the topic's theoretical and practical importance. Others interpret this interest in methodology as an excessive preoccupation that draws time and energy away from doing research on concrete subjects by applying the methods instead of researching them. This ambiguous attitude towards methodology is sometimes even exemplified in the same person. Max Weber, for example, criticized the focus on methodology during his time while making significant contributions to it himself.[14][50] Spirkin believes that one important reason for this development is that contemporary society faces many global problems. These problems cannot be solved by a single researcher or a single discipline but are in need of collaborative efforts from many fields. Such interdisciplinary undertakings profit a lot from methodological advances, both concerning the ability to understand the methods of the respective fields and in relation to developing more homogeneous methods equally used by all of them.[16][51]
Most criticism of methodology is directed at one specific form or understanding of it. In such cases, one particular methodological theory is rejected but not methodology at large when understood as a field of research comprising many different theories.[14][10] In this regard, many objections to methodology focus on the quantitative approach, specifically when it is treated as the only viable approach.[14][37] Nonetheless, there are also more fundamental criticisms of methodology in general. They are often based on the idea that there is little value to abstract discussions of methods and the reasons cited for and against them. In this regard, it may be argued that what matters is the correct employment of methods and not their meticulous study. Sigmund Freud, for example, compared methodologists to "people who clean their glasses so thoroughly that they never have time to look through them".[14][52] According to C. Wright Mills, the practice of methodology often degenerates into a "fetishism of method and technique".[14][53]
Some even hold that methodological reflection is not just a waste of time but actually has negative side effects. Such an argument may be defended by analogy to other skills that work best when the agent focuses only on employing them. In this regard, reflection may interfere with the process and lead to avoidable mistakes.[54] According to an example by Gilbert Ryle, "[w]e run, as a rule, worse, not better, if we think a lot about our feet".[55][54] A less severe version of this criticism does not reject methodology per se but denies its importance and rejects an intense focus on it. In this regard, methodology still has a limited and subordinate utility but becomes a diversion or even counterproductive by hindering practice when given too much emphasis.[56]
Another line of criticism concerns more the general and abstract nature of methodology. It states that the discussion of methods is only useful in concrete and particular cases but not concerning abstract guidelines governing many or all cases. Some anti-methodologists reject methodology based on the claim that researchers need freedom to do their work effectively. But this freedom may be constrained and stifled by "inflexible and inappropriate guidelines". For example, according to Kerry Chamberlain, a good interpretation needs creativity to be provocative and insightful, which is prohibited by a strictly codified approach. Chamberlain uses the neologism "methodolatry" to refer to this alleged overemphasis on methodology.[56][14] Similar arguments are given in Paul Feyerabend's book "Against Method".[57][14]
However, these criticisms of methodology in general are not always accepted. Many methodologists defend their craft by pointing out how the efficiency and reliability of research can be improved through a proper understanding of methodology.[14][10]
A criticism of more specific forms of methodology is found in the works of the sociologist Howard S. Becker. He is quite critical of methodologists based on the claim that they usually act as advocates of one particular method usually associated with quantitative research.[10] An often-cited quotation in this regard is that "[m]ethodology is too important to be left to methodologists".[58][10][14] Alan Bryman has rejected this negative outlook on methodology. He holds that Becker's criticism can be avoided by understanding methodology as an inclusive inquiry into all kinds of methods and not as a mere doctrine for converting non-believers to one's preferred method.[10]
Part of the importance of methodology is reflected in the number of fields to which it is relevant. They include the natural sciences and the social sciences as well as philosophy and mathematics.[54][8][19]
The dominant methodology in the natural sciences (like astronomy, biology, chemistry, geoscience, and physics) is called the scientific method.[8][59] Its main cognitive aim is usually seen as the creation of knowledge, but various closely related aims have also been proposed, like understanding, explanation, or predictive success. Strictly speaking, there is no one single scientific method. In this regard, the expression "scientific method" refers not to one specific procedure but to different general or abstract methodological aspects characteristic of all the aforementioned fields. Important features are that the problem is formulated in a clear manner and that the evidence presented for or against a theory is public, reliable, and replicable. The last point is important so that other researchers are able to repeat the experiments to confirm or disconfirm the initial study.[8][60][61] For this reason, various factors and variables of the situation often have to be controlled to avoid distorting influences and to ensure that subsequent measurements by other researchers yield the same results.[14] The scientific method is a quantitative approach that aims at obtaining numerical data. This data is often described using mathematical formulas. The goal is usually to arrive at some universal generalizations that apply not just to the artificial situation of the experiment but to the world at large. Some data can only be acquired using advanced measurement instruments. In cases where the data is very complex, it is often necessary to employ sophisticated statistical techniques to draw conclusions from it.[8][60][61]
The scientific method is often broken down into several steps. In a typical case, the procedure starts with regular observation and the collection of information. These findings then lead the scientist to formulate a hypothesis describing and explaining the observed phenomena. The next step consists in conducting an experiment designed for this specific hypothesis. The actual results of the experiment are then compared to the expected results based on one's hypothesis. The findings may then be interpreted and published, either as a confirmation or disconfirmation of the initial hypothesis.[60][8][61]
Two central aspects of the scientific method are observation and experimentation.[8] This distinction is based on the idea that experimentation involves some form of manipulation or intervention.[62][63][64][4] This way, the studied phenomena are actively created or shaped. For example, a biologist inserting viral DNA into a bacterium is engaged in a form of experimentation. Pure observation, on the other hand, involves studying independent entities in a passive manner. This is the case, for example, when astronomers observe the orbits of astronomical objects far away.[65] Observation played the main role in ancient science. The scientific revolution in the 16th and 17th centuries effected a paradigm change that gave a much more central role to experimentation in the scientific methodology.[62][8] This is sometimes expressed by stating that modern science actively "puts questions to nature".[65] While the distinction is usually clear in the paradigmatic cases, there are also many intermediate cases where it is not obvious whether they should be characterized as observation or as experimentation.[65][62]
A central discussion in this field concerns the distinction between the inductive and the hypothetico-deductive methodology. The core disagreement between these two approaches concerns their understanding of the confirmation of scientific theories. The inductive approach holds that a theory is confirmed or supported by all its positive instances, i.e. by all the observations that exemplify it.[66][67][68] For example, the observations of many white swans confirm the universal hypothesis that "all swans are white".[69][70] The hypothetico-deductive approach, on the other hand, focuses not on positive instances but on deductive consequences of the theory.[69][70][71][72] This way, the researcher uses deduction before conducting an experiment to infer what observations they expect.[73][8] These expectations are then compared to the observations they actually make. This approach often takes a negative form based on falsification. In this regard, positive instances do not confirm a hypothesis but negative instances disconfirm it. Positive indications that the hypothesis is true are only given indirectly if many attempts to find counterexamples have failed.[74] A cornerstone of this approach is the null hypothesis, which assumes that there is no connection (see causality) between the phenomena being observed. It is up to the researcher to do all they can to disprove their own hypothesis through relevant methods or techniques, documented in a clear and replicable process. If these attempts fail, the null hypothesis can be rejected, which provides support for the researcher's own hypothesis about the relation between the observed phenomena.[75]
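As a concrete illustration of null-hypothesis testing, here is a minimal Python sketch (hypothetical measurements; the two-sample t-test is one common technique among many):

```python
from scipy import stats

# Hypothetical measurements from two groups (illustrative numbers only).
control = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 4.9]
treatment = [5.6, 5.4, 5.7, 5.3, 5.5, 5.8, 5.4, 5.6]

# Null hypothesis: there is no difference between the group means.
# The test asks how probable the observed difference would be
# if the null hypothesis were true.
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.4f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f}: the data are consistent with the null hypothesis")
```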
Significantly more methodological variety is found in the social sciences, where both quantitative and qualitative approaches are used. They employ various forms of data collection, such as surveys, interviews, focus groups, and the nominal group technique.[4][30][19][76] Surveys belong to quantitative research and usually involve some form of questionnaire given to a large group of individuals. It is paramount that the questions are easily understandable by the participants since the answers might not have much value otherwise. Surveys normally restrict themselves to closed questions in order to avoid various problems that come with the interpretation of answers to open questions. They contrast in this regard to interviews, which put more emphasis on the individual participant and often involve open questions. Structured interviews are planned in advance and have a fixed set of questions given to each individual. They contrast with unstructured interviews, which are closer to a free-flow conversation and require more improvisation on the side of the interviewer for finding interesting and relevant questions. Semi-structured interviews constitute a middle ground: they include both predetermined questions and questions not planned in advance.[4][77][78] Structured interviews make it easier to compare the responses of the different participants and to draw general conclusions. However, they also limit what may be discovered and thus constrain the investigation in many ways.[4][30] Depending on the type and depth of the interview, this method belongs either to quantitative or to qualitative research.[30][4] The terms "research conversation"[79] and "muddy interview"[80] have been used to describe interviews conducted in informal settings which may not occur purely for the purposes of data collection. Some researchers employ the go-along method by conducting interviews while they and the participants navigate through and engage with their environment.[81]
Focus groups are a qualitative research method often used in market research. They constitute a form of group interview involving a small number of demographically similar people. Researchers can use this method to collect data based on the interactions and responses of the participants. The interview often starts by asking the participants about their opinions on the topic under investigation, which may, in turn, lead to a free exchange in which the group members express and discuss their personal views. An important advantage of focus groups is that they can provide insight into how ideas and understanding operate in a cultural context. However, it is usually difficult to use these insights to discern more general patterns true for a wider public.[4][30][82] Another advantage is that focus groups can help the researcher identify a wide range of distinct perspectives on the issue in a short time. The group interaction may also help clarify and expand interesting contributions. One disadvantage is due to the moderator's personality and group effects, which may influence the opinions stated by the participants.[30] When applied to cross-cultural settings, cultural and linguistic adaptations and group composition considerations are important to encourage greater participation in the group discussion.[36]
The nominal group technique is similar to focus groups with a few important differences. The group often consists of experts in the field in question. The group size is similar but the interaction between the participants is more structured. The goal is to determine how much agreement there is among the experts on the different issues. The initial responses are often given in written form by each participant without a prior conversation between them. In this manner, group effects potentially influencing the expressed opinions are minimized. In later steps, the different responses and comments may be discussed and compared to each other by the group as a whole.[30][83][84]
Most of these forms of data collection involve some type of observation. Observation can take place either in a natural setting, i.e. the field, or in a controlled setting such as a laboratory. Controlled settings carry with them the risk of distorting the results due to their artificiality. Their advantage lies in precisely controlling the relevant factors, which can help make the observations more reliable and repeatable. Non-participatory observation involves a distanced or external approach. In this case, the researcher focuses on describing and recording the observed phenomena without causing or changing them, in contrast to participatory observation.[4][85][86]
An important methodological debate in the field of social sciences concerns the question of whether they deal with hard, objective, and value-neutral facts, as the natural sciences do. Positivists agree with this characterization, in contrast to interpretive and critical perspectives on the social sciences.[19][87][41] According to William Neumann, positivism can be defined as "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity". This view is rejected by interpretivists. Max Weber, for example, argues that the method of the natural sciences is inadequate for the social sciences. Instead, more importance is placed on meaning and how people create and maintain their social worlds. The critical methodology in social science is associated with Karl Marx and Sigmund Freud. It is based on the assumption that many of the phenomena studied using the other approaches are mere distortions or surface illusions. It seeks to uncover deeper structures of the material world hidden behind these distortions. This approach is often guided by the goal of helping people effect social changes and improvements.[19][87][41]
Philosophical methodology is the metaphilosophical field of inquiry studying the methods used in philosophy. These methods structure how philosophers conduct their research, acquire knowledge, and select between competing theories.[88][54][89] It concerns both descriptive issues of what methods have been used by philosophers in the past and normative issues of which methods should be used. Many philosophers emphasize that these methods differ significantly from the methods found in the natural sciences in that they usually do not rely on experimental data obtained through measuring equipment.[90][91][92] Which method one follows can have wide implications for how philosophical theories are constructed, what theses are defended, and what arguments are cited for or against them.[54][93][94] In this regard, many philosophical disagreements have their source in methodological disagreements. Historically, the discovery of new methods, like methodological skepticism and the phenomenological method, has had important impacts on the philosophical discourse.[95][89][54]
A great variety of methods has been employed throughout the history of philosophy:
In the field of mathematics, various methods can be distinguished, such as synthetic, analytic, deductive, inductive, and heuristic methods. For example, the difference between synthetic and analytic methods is that the former start from the known and proceed to the unknown while the latter seek to find a path from the unknown to the known. Geometry textbooks often proceed using the synthetic method. They start by listing known definitions and axioms and proceed by taking inferential steps, one at a time, until the solution to the initial problem is found. An important advantage of the synthetic method is its clear and short logical exposition. One disadvantage is that it is usually not obvious in the beginning that the steps taken lead to the intended conclusion. This may then come as a surprise to the reader since it is not explained how the mathematician knew in the beginning which steps to take. The analytic method often reflects better how mathematicians actually make their discoveries. For this reason, it is often seen as the better method for teaching mathematics. It starts with the intended conclusion and tries to find another formula from which it can be deduced. It then goes on to apply the same process to this new formula until it has traced back all the way to already proven theorems. The difference between the two methods concerns primarily how mathematicians think and present their proofs. The two are equivalent in the sense that the same proof may be presented either way.[115][116][117]
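As an illustration (a toy example of our own, not one taken from the cited textbooks), the identity for the sum of the first n odd numbers can be presented in either direction:

```latex
% Synthetic direction (from the known to the unknown): start from the
% already proven theorem  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}  and deduce:
\sum_{k=1}^{n} (2k-1) = 2\sum_{k=1}^{n} k - n = n(n+1) - n = n^{2}.

% Analytic direction (from the unknown back to the known): start from the
% conjectured result and reduce it step by step to something established:
\sum_{k=1}^{n} (2k-1) = n^{2}
  \iff 2\sum_{k=1}^{n} k - n = n^{2}
  \iff \sum_{k=1}^{n} k = \frac{n(n+1)}{2},
% which is an already proven theorem, so the conjecture holds.
```

The same equalities appear in both presentations; only the order of discovery differs, which is exactly the sense in which the two methods are equivalent.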
Statistics investigates the analysis, interpretation, and presentation of data. It plays a central role in many forms of quantitative research that have to deal with the data of many observations and measurements. In such cases, data analysis is used to cleanse, transform, and model the data to arrive at practically useful conclusions. There are numerous methods of data analysis. They are usually divided into descriptive statistics and inferential statistics. Descriptive statistics restricts itself to the data at hand. It tries to summarize the most salient features and present them in insightful ways. This can happen, for example, by visualizing its distribution or by calculating indices such as the mean or the standard deviation. Inferential statistics, on the other hand, uses this data based on a sample to draw inferences about the population at large. That can take the form of making generalizations and predictions or by assessing the probability of a concrete hypothesis.[118][119][120]
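A short Python sketch of the two roles (illustrative numbers only; the 1.96 factor assumes a normal approximation for a 95% confidence interval):

```python
import math
import statistics

# Hypothetical sample of 12 measurements (illustrative values only).
data = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0, 5.1, 4.9, 5.2, 5.0]

# Descriptive statistics: summarize the data at hand.
mean = statistics.mean(data)
sd = statistics.stdev(data)
print(f"mean = {mean:.3f}, standard deviation = {sd:.3f}")

# Inferential statistics: use the sample to say something about the
# population it was drawn from, here an approximate 95% confidence
# interval for the population mean.
se = sd / math.sqrt(len(data))
low, high = mean - 1.96 * se, mean + 1.96 * se
print(f"approximate 95% CI for the population mean: ({low:.3f}, {high:.3f})")
```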
Pedagogy can be defined as the study or science of teaching methods.[121][122] In this regard, it is the methodology of education: it investigates the methods and practices that can be applied to fulfill the aims of education.[123][122][1] These aims include the transmission of knowledge as well as fostering skills and character traits.[123][124] Its main focus is on teaching methods in the context of regular schools. But in its widest sense, it encompasses all forms of education, both inside and outside schools.[125] In this wide sense, pedagogy is concerned with "any conscious activity by one person designed to enhance learning in another".[121] The teaching happening this way is a process taking place between two parties: teachers and learners. Pedagogy investigates how the teacher can help the learner undergo experiences that promote their understanding of the subject matter in question.[123][122]
Various influential pedagogical theories have been proposed. Mental-discipline theories were already common in ancient Greece and state that the main goal of teaching is to train intellectual capacities. They are usually based on a certain ideal of the capacities, attitudes, and values possessed by educated people. According to naturalistic theories, there is an inborn natural tendency in children to develop in a certain way. For them, pedagogy is about how to help this process happen by ensuring that the required external conditions are set up.[123][122] Herbartianism identifies five essential components of teaching: preparation, presentation, association, generalization, and application. They correspond to different phases of the educational process: getting ready for it, showing new ideas, bringing these ideas in relation to known ideas, understanding the general principle behind their instances, and putting what one has learned into practice.[126] Learning theories focus primarily on how learning takes place and formulate the proper methods of teaching based on these insights.[127] One of them is apperception or association theory, which understands the mind primarily in terms of associations between ideas and experiences. On this view, the mind is initially a blank slate. Learning is a form of developing the mind by helping it establish the right associations. Behaviorism is a more externally oriented learning theory. It identifies learning with classical conditioning, in which the learner's behavior is shaped by presenting them with a stimulus with the goal of evoking and solidifying the desired response pattern to this stimulus.[123][122][127]
The choice of which specific method is best to use depends on various factors, such as the subject matter and the learner's age.[123][122] Interest and curiosity on the side of the student are among the key factors of learning success. This means that one important aspect of the chosen teaching method is to ensure that these motivational forces are maintained, through intrinsic or extrinsic motivation.[123] Many forms of education also include regular assessment of the learner's progress, for example, in the form of tests. This helps to ensure that the teaching process is successful and to make adjustments to the chosen method if necessary.[123]
Methodology has several related concepts, such as paradigm and algorithm. In the context of science, a paradigm is a conceptual worldview. It consists of a number of basic concepts and general theories that determine how the studied phenomena are to be conceptualized and which scientific methods are considered reliable for studying them.[128][22] Various theorists emphasize similar aspects of methodologies, for example, that they shape the general outlook on the studied phenomena and help the researcher see them in a new light.[3][15][16]
In computer science, an algorithm is a procedure or methodology to reach the solution of a problem with a finite number of steps. Each step has to be precisely defined so it can be carried out in an unambiguous manner for each application.[129][130] For example, the Euclidean algorithm is an algorithm that solves the problem of finding the greatest common divisor of two integers. It is based on simple steps like comparing the two numbers and subtracting one from the other.[131] | https://en.wikipedia.org/wiki/Methodology
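A minimal Python sketch of the subtraction form of the algorithm described above (the function name is ours; Python's standard library also offers math.gcd):

```python
def gcd(a: int, b: int) -> int:
    """Euclidean algorithm for two positive integers, in subtraction form."""
    # Each step is precisely defined: compare the two numbers and
    # subtract the smaller from the larger until they are equal.
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd(48, 18))  # -> 6, the greatest common divisor of 48 and 18
```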
The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. Historically, it was developed through the centuries from the ancient and medieval world. The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a testable hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.[1][2][3]
Although procedures vary across fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of a hypothesis, then carrying out experiments or empirical observations based on those predictions.[4] A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.[5]
While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[6][7] Numerous discoveries have not followed the textbook model of the scientific method; chance, for instance, has often played a role.[8][9][10]
The history of the scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. The development of rules for scientific reasoning has not been straightforward; the scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge.
Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Aristotle,[11] Epicurus,[12] Alhazen,[A][a][B][i] Avicenna, Al-Biruni,[17][18] Roger Bacon,[α] and William of Ockham.[21]
In the Scientific Revolution of the 16th and 17th centuries, some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke,[22][23] the rationalist approach described by René Descartes, and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon and performed by Giambattista della Porta,[24] Johannes Kepler,[25][d] and Galileo Galilei.[β] There was particular development aided by theoretical works by the skeptic Francisco Sanches,[27] by idealists as well as empiricists John Locke, George Berkeley, and David Hume.[e] C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since.[30]
The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clearboundariesbetween science and non-science, such as "scientist" and "pseudoscience".[31]Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel, and John Stuart Mill engaged in debates over "induction" and "facts," and were focused on how to generate knowledge.[31]In the late 19th and early 20th centuries, a debate overrealismvs.antirealismwas conducted as powerful scientific theories extended beyond the realm of the observable.[32]
The term "scientific method" came into popular use in the twentieth century;Dewey's 1910 book,How We Think, inspiredpopular guidelines.[33]It appeared in dictionaries and science textbooks, although there was little consensus on its meaning.[31]Although there was growth through the middle of the twentieth century,[f]by the 1960s and 1970s numerous influential philosophers of science such asThomas KuhnandPaul Feyerabendhad questioned the universality of the "scientific method," and largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice.[31]In particular,Paul Feyerabend, in the 1975 first edition of his bookAgainst Method, argued against there being any universal rules ofscience;[32]Karl Popper,[γ]and Gauch 2003,[6]disagreed with Feyerabend's claim.
Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method",[35] in which he espouses two ethical principles,[δ] and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization.[36] As myths are beliefs,[37] they are subject to the narrative fallacy, as pointed out by Taleb.[38] Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta-methodology.[39] Staddon (2017) argues it is a mistake to try to follow rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples".[40][41] But algorithmic methods, such as disproof of existing theory by experiment, have been used since Alhazen (1027) and his Book of Optics,[a] and Galileo (1638) and his Two New Sciences,[26] and The Assayer,[42] which still stand as scientific method.
The scientific method is the process by which science is carried out.[43] As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time.[g] Historically, the development of the scientific method was critical to the Scientific Revolution.[45]
The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct.[4]However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles.[46]Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order.
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below. The scientific method is an iterative, cyclical process through which information is continually revised.[47][48] It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:[49][50]
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method".[C]
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity.[51] In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.
An iterative,[48] pragmatic[16] scheme of the four points above is sometimes offered as a guideline for proceeding:[52]
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again.
While this schema outlines a typical hypothesis/testing method,[53] many philosophers, historians, and sociologists of science, including Paul Feyerabend,[h] claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.
The basic elements of the scientific method are illustrated by the following example (which occurred from 1944 to 1953) from the discovery of the structure of DNA (marked and indented).
In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle).[55] But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.[56]
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.)[C] For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting and can take the form of expansive empirical research.
A scientific question can refer to the explanation of a specific observation,[C] as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.[57]
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
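For instance, a short Python sketch of the statistical manipulations mentioned above (hypothetical tabulated measurements; scipy is assumed to be available):

```python
from scipy import stats

# Hypothetical tabulated measurements (illustrative numbers only):
# an input quantity x and a measured response y.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]

# Correlation: how strongly the two quantities vary together.
r, _ = stats.pearsonr(x, y)
print(f"correlation r = {r:.3f}")

# Regression: fit a line y = slope * x + intercept to the measurements.
fit = stats.linregress(x, y)
print(f"y is approximately {fit.slope:.2f} * x + {fit.intercept:.2f}")
```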
I am not accustomed to saying anything with certainty after only one or two observations.
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure, which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood.[59] In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene before them.
Linus Pauling proposed that DNA might be a triple helix.[60][61] This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong,[62] and that Pauling would soon admit his difficulties with that structure.
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic, or as causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles."[63][i] Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25),[65] described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning.[66]: II, p.290 The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that[67]
the success of a hypothesis, or its service to science, lies not simply in its perceived "truth", or power to displace, subsume or reduce a predecessor idea, but perhaps more in its ability to stimulate the research that will illuminate ... bald suppositions and areas of vagueness.
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that accords with the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses,[68] and avoiding artifacts.[69]
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'.[70][71] This prediction followed from the work of Cochran, Crick and Vand[72] (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".[73]
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning.[j] It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.
For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.[74]
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's photo 51, a detailed X-ray diffraction image, which showed an X-shape[75][76] and was able to confirm the structure was helical.[77][78][k]
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is.[82] Factor analysis is one technique for discovering the important factor in an effect.
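A toy Python sketch of this contrast logic (hypothetical runs; a crude rendering of Mill's method of difference, not a full implementation of Mill's canons). A factor is flagged when its values never overlap between the runs that show the effect and the runs that do not:

```python
# Each run records the conditions that were set and whether the
# effect appeared (all data hypothetical, for illustration only).
runs = [
    ({"light": True,  "water": True, "soil": "loam"}, True),
    ({"light": False, "water": True, "soil": "loam"}, False),
    ({"light": True,  "water": True, "soil": "sand"}, True),
    ({"light": False, "water": True, "soil": "sand"}, False),
]

for factor in runs[0][0]:
    with_effect = {conditions[factor] for conditions, effect in runs if effect}
    without_effect = {conditions[factor] for conditions, effect in runs if not effect}
    # A factor is a candidate cause if its values cleanly separate the
    # runs showing the effect from the runs that do not.
    if with_effect.isdisjoint(without_effect):
        print(f"candidate important factor: {factor}")  # prints: light
```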
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamic hypotheses used for constructing the plane.
These institutions thereby reduce the research function to a cost/benefit,[83] which is expressed as money, and the time and attention of the researchers to be expended,[83] in exchange for a report to their constituents.[84] Current large instruments, such as CERN's Large Hadron Collider (LHC),[85] or LIGO,[86] or the National Ignition Facility (NIF),[87] or the International Space Station (ISS),[88] or the James Webb Space Telescope (JWST),[89][90] entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and their adjunct infrastructure.[ε][91]
Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. Such records will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of al-Battani (853–929 CE)[92] and Alhazen (965–1039 CE).[93][l][b]
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing.[81] After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts,[95][96][97] Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it.[81][98][99] They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: By 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air".[14] In 1079, Ibn Mu'adh's Treatise On Twilight was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays.[m]
This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.[101]
If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work.[102] Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated.[103]
The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.[n]
Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others.[104] Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain.[105] To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center.
The unfettered principles of science are to strive for accuracy and to hold to a creed of honesty; openness is already a matter of degree. Openness is restricted by the general rigour of scepticism, and of course by the question of what counts as non-science.
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry.[δ] His ideas stand in the context of the scale of data-driven and big science, which has seen increased importance of honesty and consequently reproducibility. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony.
Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong:[106][107]
"Those among us who are unwilling to expose their ideas to the hazard of refutation do not take part in the game of science."
Science has limits. Those limits are usually deemed to be answers to questions that aren't in science's domain, such as faith. Science has other limits as well, as it seeks to make true statements about reality.[108] The nature of truth, and the discussion of how scientific statements relate to reality, is best left to the article on the philosophy of science. More immediately topical limitations show themselves in the observation of reality.
It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework.[110] As science is an unfinished project, this does lead to difficulties, namely that false conclusions are drawn because of limited information.
An example here is the pair of observations by Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise, the two scientists came to different conclusions—their intersubjectivity leading to differing conclusions. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate the larger the aperture—this fact is now fundamental for optical system design.[d] Another historic example here is the discovery of Neptune, credited as being found via mathematics because previous observers didn't know what they were looking at.[111]
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations.[ζ] It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial.
The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms of rationalism, which holds that knowledge is created by the human intellect; a position later clarified by Popper to be built on prior theory.[113] The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories are the only possible means of demonstrating truth.[16][80]
In 1877,[49] C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless.[o] This "hyperbolic doubt" Peirce argues against here is of course just another name for Cartesian doubt, associated with René Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted.
A strong formulation of the scientific method is not always aligned with a form of empiricism in which the empirical data is put forward in the form of experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism.[116]
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours.[117] The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences.
Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.
The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).[37]
[T]he action of thought is excited by the irritation of doubt, and ceases when belief is attained.
A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.[118]
Another important human bias that plays a role is a preference for new, surprising statements (see appeal to novelty), which can result in a search for evidence that the new is true.[119] Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.[120]
Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn".[121] When a narrative is constructed, its elements become easier to believe.[122][38]
Fleck (1979), p. 27 notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.[η]
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method, as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths; the other derives from those fundamental truths more specific principles.[123]
Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of facts established prior, and, given the truth of those assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.[124]
An example of how inductive and deductive reasoning work can be found in the history of gravitational theory.[p] It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth.[q] Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion in 1727.[r]
Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference in Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did.[s] Though today's Standard Model of physics suggests that we still do not know at least some of the concepts surrounding Einstein's theory, it holds to this day and is being built on deductively.
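The relativistic correction involved can be stated compactly; the following is the standard general-relativistic result, given here for illustration rather than taken from the passage above. The extra perihelion advance per orbit is

```latex
\Delta\phi = \frac{6\pi G M_\odot}{c^{2}\, a \,(1 - e^{2})},
```

where a is the orbit's semi-major axis and e its eccentricity; for Mercury this accumulates to roughly 43 arcseconds per century, matching the anomaly Le Verrier had identified.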
A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning is used in current research, when multiple scientists or even teams of researchers are gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges.
This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that the cycle's foundations lie in reasoning, and not wholly in the following of procedure.
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent.[t] Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation[34]— certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.
Measurements in scientific work are usually accompanied by estimates of their uncertainty.[83] The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
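As a minimal sketch of the repeated-measurement approach, the following Python function reports a mean together with its standard error; the sample readings are made up purely for illustration.

```python
import math

def mean_and_standard_error(measurements):
    """Estimate a quantity and its uncertainty from repeated measurements."""
    n = len(measurements)
    mean = sum(measurements) / n
    # Sample variance with Bessel's correction (n - 1).
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    # The standard error of the mean shrinks as 1/sqrt(n).
    return mean, math.sqrt(variance / n)

# Five hypothetical readings of the same quantity.
value, uncertainty = mean_and_standard_error([9.79, 9.82, 9.81, 9.78, 9.80])
print(f"{value:.3f} +/- {uncertainty:.3f}")  # 9.800 +/- 0.007
```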
In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified – and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances.
In statistical analysis, expected and unexpected bias is a large factor.[129] Research questions, the collection of data, and the interpretation of results are all subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through a process for validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in peer review, after all.[u] More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context.[124] Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality, because they do not justify their methodology.
Lack of familiarity with statistical methodologies can result in erroneous conclusions. Forgoing the easy example,[v] multiple probabilities interacting is where, for example, medical professionals[131] have shown a lack of proper understanding. Bayes' theorem is the mathematical principle that lays out how standing probabilities are adjusted given new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny.[132]
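A minimal sketch of such a Bayesian update, with made-up numbers for a diagnostic-test scenario, shows why intuition often fails here:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior probability of a hypothesis given a positive test result.

    P(H | E) = P(E | H) P(H) / P(E), expanding P(E) over H and not-H.
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical figures: 1% base rate, 95% sensitivity, 5% false positives.
print(bayes_update(prior=0.01, likelihood=0.95, false_positive_rate=0.05))
# ~0.16: even a fairly accurate test leaves the hypothesis improbable.
```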
Beyond the survey methodology commonly associated with field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.
The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested.[133] Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they show as true, however, it does not prove the theory definitively.
The logic of this testing is what affords this method of inquiry its deductive character. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the following tests show the implications to be false, it follows that the hypothesis was false also. If tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, since the deductive inference (A ⇒ B) does not work in reverse; only its contrapositive (¬B ⇒ ¬A) is valid logic. The positive outcomes however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it".[134] This is why Popper insisted on fielded hypotheses being falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".[133]
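The asymmetry can be written out as two inference patterns, of which only the first is deductively valid:

```latex
% Valid (modus tollens): a failed prediction refutes the hypothesis.
\bigl( (H \Rightarrow O) \land \lnot O \bigr) \vdash \lnot H
% Invalid (affirming the consequent): a confirmed prediction does not prove it.
\bigl( (H \Rightarrow O) \land O \bigr) \nvdash H
```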
Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning—the search for the most plausible explanation via logical inference. For example, in biology, where general laws are few,[133] as valid deductions rely on solid presuppositions.[124]
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him.[135] After the establishment of the HD-method, it was often put aside as something of a "fishing expedition", though.[133] It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning.[r]
Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn afterwards. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".[133]
The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are.[133] This measure of certainty can reach quite high degrees, though, for example in the determination of large primes, which are used in encryption software.[136]
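Probabilistic primality testing is a concrete instance of such quantified certainty. A sketch of the Miller–Rabin test follows; each round with a random witness leaves at most a 1/4 chance of a composite slipping through, so the error after k rounds is at most 4^(-k).

```python
import random

def is_probably_prime(n, rounds=40):
    """Miller-Rabin: False means certainly composite; True means prime
    with error probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' is a witness that n is composite.
    return True

print(is_probably_prime(2**61 - 1))  # True: a known Mersenne prime.
```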
Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification/abstraction, and a set of correspondence rules. The correspondence rules lay out how the constructed model relates back to reality, i.e. how truth is derived; and the simplifying steps taken in the abstraction of the given system are to reduce factors that do not bear relevance and thereby reduce unexpected errors.[133] These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken until the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below.
Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned about will often be mathematically deductive—but they don't have to be. An example here are Monte Carlo simulations. These generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.[133]
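A toy Monte Carlo simulation in this spirit, estimating π from "arbitrarily" generated points, is a standard exercise and not tied to any particular study:

```python
import random

def estimate_pi(samples=1_000_000):
    """Estimate pi from the fraction of random points in the unit square
    that land inside the quarter disc of radius 1."""
    inside = sum(
        1
        for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi())  # ~3.14; the statistical error shrinks as 1/sqrt(samples).
```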
Scientific inquiry generally aims to obtain knowledge in the form of testable explanations[137][79] that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.[C]
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science.[138] Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combine explanations to produce new explanations.
Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles.
Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.[139][121]
Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors.[139] For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world;[140][141] its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: What criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias.[142] Though different thinkers emphasize different aspects,[ι] a good theory:
In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to:
The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive.[κ] Also, criteria such as these do not necessarily decide between alternative theories. Quoting Bird:[148]
"[Such criteria] cannot determine scientific choice. First, which features of a theory satisfy these criteria may be disputable (e.g.does simplicity concern the ontological commitments of a theory or its mathematical form?). Secondly, these criteria are imprecise, and so there is room for disagreement about the degree to which they hold. Thirdly, there can be disagreement about how they are to be weighted relative to one another, especially when they conflict."
It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.[149][150]
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor,[w] which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the most simple explanation for phenomena or the most simple formation of a theory is recommended by the principle of parsimony.[151] Scientists go as far as to call simple proofs of complex statements beautiful.
We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.
The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end, with a vast number of potential explanations and general disorder. An example can be seen in Paul Krugman's process, who makes explicit to "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored".[152] He thus touched on the need to bridge the common bias against other circles of thought.[153]
Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.[147]
Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetic" is hard to characterise, but it is essentially about a sort of familiarity. Argument based on "elegance" is contentious, though, and over-reliance on familiarity will breed stagnation.[144]
Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century.[θ] The basic idea here is that good structures to look for are those independent of perspective, an idea that has featured earlier, of course, for example in Mill's Methods of difference and agreement—methods that would be referred back to in the context of contrast and invariance.[154] But as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied.[155][x] As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress".[146]
An example here can be found in one of Einstein's thought experiments, that of a lab suspended in empty space, which is an example of a useful invariant observation. He imagined the absence of gravity and an experimenter free-floating in the lab. If now an entity pulls the lab upwards, accelerating uniformly, the experimenter would perceive the resulting force as gravity. The entity however would feel the work needed to accelerate the lab continuously.[x] Through this experiment Einstein was able to equate gravitational and inertial mass; something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity".[156]
The feature, which suggests reality, is always some kind of invariance of a structure independent of the aspect, the projection.
The discussion on invariance in physics is often had in the more specific context of symmetry.[155] The Einstein example above, in the parlance of Mill, would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. A discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical.
Related principles here are falsifiability and testability. The opposite of something being hard-to-vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories being falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.[157][158]
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist,[D][159] that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world.[159] These assumptions from methodological naturalism form a basis on which science may be grounded. Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations and attempts at definitions of the method of science.[λ] There is the one attempted by the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works') in every context of science; that of the pluralists, who argue that the degrees of science are too fractured for a universal definition of its method to be useful; and that of those who argue that the very attempt at definition is already detrimental to the free flow of ideas.
Additionally, there have been views on the social framework in which science is done, and the impact of the sciences' social environment on research. Also, there is 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education.
Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory.
Unificationism, in science, was a central tenet of logical positivism.[161][162] Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.[y]
Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world.
The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend.
Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his 1975 book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'.[164] As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry.[E]
A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.[166]
In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students' and teachers' conception of science.[167][168] This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work.[169][170][171] Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation.[172]
How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps:[176] observation, hypothesis, prediction, experiment.
This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences.[178]It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.[173]
The taught presentation of science has had to defend against demerits such as:[179]
The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). They, too, influenced international science education,[179] and the standards measured for have shifted since, from the singular hypothesis-testing method to a broader conception of scientific methods.[181] These scientific methods, which are rooted in scientific practices and not epistemology, are described as the 3 dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.[179]
The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation.[36][170] Education's approach was heavily influenced by John Dewey's How We Think (1910).[33] Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).[182]
The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.[μ][i]
A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.[186]
Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.[187][z]
On the idea of Fleck's thought collectives, sociologists built the concept of situated cognition: that the perspective of the researcher fundamentally affects their work; and, beyond it, more radical views.
Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions.[110][d]
Kuhn and Feyerabend acknowledged Hanson's pioneering work,[191][192] although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.[193]
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky.[9] Scientists themselves in the 19th and 20th century acknowledged the role of fortunate luck or serendipity in discoveries.[10] Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected.[9][195] This is what Nassim Nicholas Taleb calls "anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.[196]
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.[9][195]
When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience.[130] Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values.[197]
The points raised are both statistical and economical. Statistically, research findings are less likely to be true when studies are small and when there is significant flexibility in study design, definitions, outcomes, and analytical approaches. Economically, the reliability of findings decreases in fields with greater financial interests, biases, and a high level of competition among research teams. As a result, most research findings are considered false across various designs and scientific fields, particularly in modern biomedical research, which often operates in areas with very low pre- and post-study probabilities of yielding true findings. Nevertheless, despite these challenges, most new discoveries will continue to arise from hypothesis-generating research that begins with low or very low pre-study odds. This suggests that expanding the frontiers of knowledge will depend on investigating areas outside the mainstream, where the chances of success may initially appear slim.[130]
Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling.
In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method,[198] as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. Fleck (1979), pp. 38–50 notes "a scientific discovery remains incomplete without considerations of the social practices that condition it".[199]
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture.[200]
Mathematical work and scientific work can inspire each other.[42] For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow).[201]
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.[202]
George Pólya's work on problem solving,[203] the construction of mathematical proofs, and heuristic[204][205] show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus,[206] involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details[207] of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work.[208][ν] In like manner to science, where truth is sought, but certainty is not found, in Proofs and Refutations, what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting/not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system — Wittgenstein 1921 Tractatus Logico-Philosophicus 5.13; Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology,[209] or more abstractly, from homological algebra.[210][211][ν])
Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.[213]
Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).[214]
| https://en.wikipedia.org/wiki/Scientific_method
A specification language is a formal language in computer science used during systems analysis, requirements analysis, and systems design to describe a system at a much higher level than a programming language, which is used to produce the executable code for a system.[1]
Specification languages are generally not directly executed. They are meant to describe the what, not the how. It is considered an error if a requirement specification is cluttered with unnecessary implementation detail.
A common fundamental assumption of many specification approaches is that programs are modelled as algebraic or model-theoretic structures that include a collection of sets of data values together with functions over those sets. This level of abstraction coincides with the view that the correctness of the input/output behaviour of a program takes precedence over all its other properties.
In the property-oriented approach to specification (taken e.g. by CASL), specifications of programs consist mainly of logical axioms, usually in a logical system in which equality has a prominent role, describing the properties that the functions are required to satisfy—often just by their interrelationship.
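For instance, a property-oriented specification of a stack can consist solely of axioms relating its operations, with no commitment to any representation; this classic textbook example is phrased informally here rather than in actual CASL syntax:

```latex
\begin{aligned}
\mathit{pop}(\mathit{push}(s, x)) &= s &\qquad \mathit{top}(\mathit{push}(s, x)) &= x \\
\mathit{isEmpty}(\mathit{empty}) &= \mathit{true} &\qquad \mathit{isEmpty}(\mathit{push}(s, x)) &= \mathit{false}
\end{aligned}
```

Any data structure satisfying these equations counts as a correct stack, whatever its internal layout.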
This is in contrast to so-called model-oriented specification in frameworks like VDM and Z, which consist of a simple realization of the required behaviour.
Specifications must be subject to a process of refinement (the filling-in of implementation detail) before they can actually be implemented. The result of such a refinement process is an executable algorithm, which is either formulated in a programming language, or in an executable subset of the specification language at hand. For example, Hartmann pipelines, when properly applied, may be considered a dataflow specification which is directly executable. Another example is the actor model, which has no specific application content and must be specialized to be executable.
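A sketch of the specification/refinement split in Python: the first function states only the what (the output is the ordered rearrangement of the input), and the second is one possible refinement into an executable how. The names and the insertion-sort choice are illustrative, not drawn from any particular specification framework.

```python
def satisfies_sort_spec(inputs, outputs):
    """The 'what': outputs must be the ordered permutation of inputs."""
    return list(outputs) == sorted(inputs)

def insertion_sort(inputs):
    """One refinement of the spec into an executable algorithm (the 'how')."""
    result = []
    for x in inputs:
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)  # keep 'result' ordered at every step
    return result

data = [3, 1, 2, 1]
assert satisfies_sort_spec(data, insertion_sort(data))
```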
An important use of specification languages is enabling the creation of proofs of program correctness (see theorem prover). | https://en.wikipedia.org/wiki/Specification_language
In computer science, in the area of formal language theory, frequent use is made of a variety of string functions; however, the notation used is different from that used for computer programming, and some commonly used functions in the theoretical realm are rarely used when programming. This article defines some of these basic terms.
A string is a finite sequence of characters.
The empty string is denoted by ε.
The concatenation of two strings s and t is denoted by s⋅t, or shorter by st.
Concatenating with the empty string makes no difference: s⋅ε = s = ε⋅s.
Concatenation of strings is associative: s⋅(t⋅u) = (s⋅t)⋅u.
For example, (⟨b⟩⋅⟨l⟩)⋅(ε⋅⟨ah⟩) = ⟨bl⟩⋅⟨ah⟩ = ⟨blah⟩.
A language is a finite or infinite set of strings.
Besides the usual set operations like union, intersection etc., concatenation can be applied to languages:
if both S and T are languages, their concatenation S⋅T is defined as the set of concatenations of any string from S and any string from T, formally S⋅T = { s⋅t ∣ s ∈ S ∧ t ∈ T }.
Again, the concatenation dot ⋅ is often omitted for brevity.
The language {ε} consisting of just the empty string is to be distinguished from the empty language {}.
Concatenating any language with the former doesn't make any change: S⋅{ε} = S = {ε}⋅S,
while concatenating with the latter always yields the empty language: S⋅{} = {} = {}⋅S.
Concatenation of languages is associative: S⋅(T⋅U) = (S⋅T)⋅U.
For example, abbreviating D = { ⟨0⟩, ⟨1⟩, ⟨2⟩, ⟨3⟩, ⟨4⟩, ⟨5⟩, ⟨6⟩, ⟨7⟩, ⟨8⟩, ⟨9⟩ }, the set of all three-digit decimal numbers is obtained as D⋅D⋅D. The set of all decimal numbers of arbitrary length is an example of an infinite language.
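A minimal sketch in Python, representing finite languages as sets of strings (the function name concat is invented for this illustration):

```python
# Language concatenation: S.T = { s+t | s in S, t in T }.
def concat(S, T):
    return {s + t for s in S for t in T}

D = {str(d) for d in range(10)}        # the language of single decimal digits
three_digit = concat(concat(D, D), D)  # all three-character digit strings
assert len(three_digit) == 1000
assert concat(D, {""}) == D            # concatenating with {eps} changes nothing
assert concat(D, set()) == set()       # concatenating with {} yields {}
```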
The alphabet of a string is the set of all of the characters that occur in a particular string. If s is a string, its alphabet is denoted by Alph(s).
The alphabet of a language S is the set of all characters that occur in any string of S, formally: Alph(S) = ⋃s∈S Alph(s).
For example, the set {⟨a⟩, ⟨c⟩, ⟨o⟩} is the alphabet of the string ⟨cacao⟩,
and the above D is the alphabet of the above language D⋅D⋅D as well as of the language of all decimal numbers.
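In code, both notions are one-liners over sets; a minimal sketch (function names invented for this illustration):

```python
# Alph(s): the set of characters occurring in a string.
def alph_string(s):
    return set(s)

# Alph(S): the union of the alphabets of all strings in a finite language.
def alph_language(S):
    return set().union(*map(alph_string, S)) if S else set()

assert alph_string("cacao") == {"c", "a", "o"}
assert alph_language({"12", "23"}) == {"1", "2", "3"}
```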
Let L be a language, and let Σ be its alphabet. A string substitution or simply a substitution is a mapping f that maps characters in Σ to languages (possibly in a different alphabet). Thus, for example, given a character a ∈ Σ, one has f(a) = La where La ⊆ Δ* is some language whose alphabet is Δ. This mapping may be extended to strings as
f(ε) = {ε}
for the empty string ε, and
f(sa) = f(s)⋅f(a)
for string s ∈ L and character a ∈ Σ. String substitutions may be extended to entire languages as[1]
f(L) = ⋃s∈L f(s).
Regular languages are closed under string substitution. That is, if each character in the alphabet of a regular language is substituted by another regular language, the result is still a regular language.[2] Similarly, context-free languages are closed under string substitution.[3][note 1]
A simple example is the conversion fuc(·) to uppercase, which may be defined e.g. as follows:
For the extension of fuc to strings, we have e.g.
For the extension of fuc to languages, we have e.g.
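A minimal executable sketch of how a substitution lifts from characters to strings and to languages (the helper names and the simplified uppercase map are invented for this illustration; the article's fuc also treats digits and punctuation specially):

```python
# Each character maps to a (finite) language; a string maps to the
# concatenation of the per-character languages; a language maps to a union.
def sub_string(f, s):
    out = {""}
    for ch in s:
        out = {prefix + t for prefix in out for t in f(ch)}
    return out

def sub_language(f, L):
    return set().union(*(sub_string(f, s) for s in L)) if L else set()

fuc = lambda ch: {ch.upper()}   # every character maps to a one-string language
assert sub_string(fuc, "blah") == {"BLAH"}
assert sub_language(fuc, {"ab", "cd"}) == {"AB", "CD"}
```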
A string homomorphism (often referred to simply as a homomorphism in formal language theory) is a string substitution such that each character is replaced by a single string. That is, f(a) = s, where s is a string, for each character a.[note 2][4]
String homomorphisms are monoid morphisms on the free monoid, preserving the empty string and the binary operation of string concatenation. Given a language L, the set f(L) is called the homomorphic image of L. The inverse homomorphic image of a string s is defined as
f⁻¹(s) = { w ∣ f(w) = s }
while the inverse homomorphic image of a language L is defined as
f⁻¹(L) = { s ∣ f(s) ∈ L }
In general, f(f⁻¹(L)) ≠ L, while one does have
f(f⁻¹(L)) ⊆ L
and
L ⊆ f⁻¹(f(L))
for any language L.
The class of regular languages is closed under homomorphisms and inverse homomorphisms.[5] Similarly, the context-free languages are closed under homomorphisms[note 3] and inverse homomorphisms.[6]
A string homomorphism is said to be ε-free (or e-free) if f(a) ≠ ε for all a in the alphabet Σ. Simple single-letter substitution ciphers are examples of (ε-free) string homomorphisms.
An example string homomorphism guc can also be obtained by defining it similarly to the above substitution: guc(‹a›) = ‹A›, ..., guc(‹0›) = ε, but letting guc be undefined on punctuation characters.
Examples of inverse homomorphic images are
For the latter language, guc(guc⁻¹({ ‹A›, ‹bb› })) = guc({ ‹a› }) = { ‹A› } ≠ { ‹A›, ‹bb› }.
The homomorphism guc is not ε-free, since it maps e.g. ‹0› to ε.
A very simple string homomorphism example that maps each character to just a character is the conversion of an EBCDIC-encoded string to ASCII.
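A minimal sketch of a homomorphism and a brute-force inverse image over a bounded search space (the dictionary-based encoding and the helper names are invented for this illustration; like guc, the map below is not ε-free):

```python
from itertools import product

def hom(f, s):
    # Apply a homomorphism given as a dict from characters to strings.
    return "".join(f[ch] for ch in s)

f = {"a": "A", "b": "B", "0": ""}   # '0' maps to the empty string

def inverse_image(f, L, max_len=3):
    # f^-1(L) restricted to strings of length <= max_len over f's domain.
    return {"".join(w)
            for n in range(max_len + 1)
            for w in product(list(f), repeat=n)
            if hom(f, "".join(w)) in L}

assert hom(f, "ab0") == "AB"
assert "a0" in inverse_image(f, {"A"})  # f("a0") = "A", so "a0" lies in f^-1({"A"})
```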
If s is a string, and Σ is an alphabet, the string projection of s is the string that results by removing all characters that are not in Σ. It is written as πΣ(s). It is formally defined by removal of characters from the right hand side:
πΣ(ε) = ε, and πΣ(sa) = πΣ(s)a if a ∈ Σ, while πΣ(sa) = πΣ(s) if a ∉ Σ.
Here ε denotes the empty string. The projection of a string is essentially the same as a projection in relational algebra.
String projection may be promoted to the projection of a language. Given a formal language L, its projection is given by
πΣ(L) = { πΣ(s) ∣ s ∈ L }.
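A minimal sketch (function names invented for this illustration):

```python
# Keep only the characters belonging to the alphabet sigma.
def project(s, sigma):
    return "".join(ch for ch in s if ch in sigma)

# Lift projection elementwise to a finite language.
def project_language(L, sigma):
    return {project(s, sigma) for s in L}

assert project("cacao", {"a", "o"}) == "aao"
assert project_language({"ab1", "2ba"}, {"a", "b"}) == {"ab", "ba"}
```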
The right quotient of a character a from a string s is the truncation of the character a in the string s, from the right hand side. It is denoted as s/a. If the string does not have a on the right hand side, the result is the empty string. Thus:
(sa)/b = s if a = b, and (sa)/b = ε if a ≠ b.
The quotient of the empty string may be taken:
ε/a = ε.
Similarly, given a subset S ⊂ M of a monoid M, one may define the quotient subset as
S/a = { s ∈ M ∣ sa ∈ S }.
Left quotients may be defined similarly, with operations taking place on the left of a string.[citation needed]
Hopcroft and Ullman (1979) define the quotient L1/L2 of the languages L1 and L2 over the same alphabet as L1/L2 = { s ∣ ∃t ∈ L2. st ∈ L1 }.[7] This is not a generalization of the above definition, since, for a string s and distinct characters a, b, Hopcroft's and Ullman's definition implies { sa }/{ b } = {}, yielding {} rather than { ε }.
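For finite languages the Hopcroft–Ullman quotient is directly computable; a minimal sketch (function name invented for this illustration):

```python
# L1 / L2 = { s | there exists t in L2 with s+t in L1 }.
def right_quotient(L1, L2):
    return {w[:len(w) - len(t)]
            for w in L1 for t in L2
            if w.endswith(t)}

L1 = {"ab", "aab", "b"}
assert right_quotient(L1, {"b"}) == {"a", "aa", ""}
```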
The left quotient (when defined similar to Hopcroft and Ullman 1979) of a singleton language L1 and an arbitrary language L2 is known as the Brzozowski derivative; if L2 is represented by a regular expression, so can be the left quotient.[8]
The right quotient of a subset S ⊂ M of a monoid M defines an equivalence relation, called the right syntactic relation of S. It is given by
s ∼S t ⇔ (S/s = S/t).
The relation is clearly of finite index (has a finite number of equivalence classes) if and only if the family of right quotients
{ S/m ∣ m ∈ M }
is finite. In the case that M is the monoid of words over some alphabet, S is then a regular language, that is, a language that can be recognized by a finite-state automaton. This is discussed in greater detail in the article on syntactic monoids.[citation needed]
The right cancellation of a character a from a string s is the removal of the first occurrence of the character a in the string s, starting from the right hand side. It is denoted as s÷a and is recursively defined as
(sb)÷a = s if a = b, and (sb)÷a = (s÷a)b if a ≠ b.
The empty string is always cancellable:
ε÷a = ε.
Clearly, right cancellation and projection commute:
πΣ(s)÷a = πΣ(s÷a).
The prefixes of a string is the set of all prefixes to a string, with respect to a given language:
PrefL(s) = { t ∣ s = tu for some string u },
where s ∈ L.
The prefix closure of a language is
Pref(L) = ⋃s∈L PrefL(s),
the set of all prefixes of all strings in L.
Example: if L = {abc}, then Pref(L) = {ε, a, ab, abc}.
A language is called prefix closed if Pref(L) = L.
The prefix closure operator is idempotent:
Pref(Pref(L)) = Pref(L).
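A minimal sketch for finite languages (function name invented for this illustration):

```python
# Pref(L): every prefix (including the empty string) of every string in L.
def pref(L):
    return {s[:i] for s in L for i in range(len(s) + 1)}

L = {"abc"}
assert pref(L) == {"", "a", "ab", "abc"}   # "" plays the role of epsilon
assert pref(pref(L)) == pref(L)            # idempotence
```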
The prefix relation is a binary relation ⊑ such that s ⊑ t if and only if s ∈ PrefL(t). This relation is a particular example of a prefix order.[citation needed] | https://en.wikipedia.org/wiki/String_operations
In computer science, an ambiguous grammar is a context-free grammar for which there exists a string that can have more than one leftmost derivation or parse tree.[1][2] Every non-empty context-free language admits an ambiguous grammar by introducing e.g. a duplicate rule. A language that only admits ambiguous grammars is called an inherently ambiguous language. Deterministic context-free grammars are always unambiguous, and are an important subclass of unambiguous grammars; there are non-deterministic unambiguous grammars, however.
For computer programming languages, the reference grammar is often ambiguous, due to issues such as the dangling else problem. If present, these ambiguities are generally resolved by adding precedence rules or other context-sensitive parsing rules, so the overall phrase grammar is unambiguous.[citation needed] Some parsing algorithms (such as Earley[3] or GLR parsers) can generate sets of parse trees (or "parse forests") from strings that are syntactically ambiguous.[4]
The simplest example is the following ambiguous grammar (with start symbol A) for the trivial language that consists of only the empty string:
A → A | ε
meaning that the nonterminal A can be derived either to itself again, or to the empty string. Thus the empty string has leftmost derivations of length 1, 2, 3, and indeed of any length, depending on how many times the rule A → A is used.
This language also has an unambiguous grammar, consisting of a single production rule:
A → ε
meaning that the unique production can produce only the empty string, which is the unique string in the language.
In the same way, any grammar for a non-empty language can be made ambiguous by adding duplicates.
The regular language of unary strings of a given character, say 'a' (the regular expression a*), has the unambiguous grammar:
A → aA | ε
but also has the ambiguous grammar:
A → aA | Aa | ε
These correspond to producing a right-associative tree (for the unambiguous grammar) or allowing both left- and right-association. This is elaborated below.
The context-free grammar
A → A + A | A − A | a
is ambiguous since there are two leftmost derivations for the string a + a + a:
A → A + A → a + A → a + A + A → a + a + A → a + a + a
A → A + A → A + A + A → a + A + A → a + a + A → a + a + a
As another example, the grammar is ambiguous since there are two parse trees for the string a + a − a: one groups the string as (a + a) − a, the other as a + (a − a).
The language that it generates, however, is not inherently ambiguous; the following is a non-ambiguous grammar generating the same language:
A → A + a | A − a | a
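A small executable check of the ambiguity claim, counting the parse trees of a string under the rules A → A + A | a (a minimal sketch; the counting function is invented for this illustration and considers only the '+' rule):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def parses(s):
    # Count parse trees: rule A -> a, plus rule A -> A + A at each '+'.
    count = 1 if s == "a" else 0
    for i, ch in enumerate(s):
        if ch == "+":
            count += parses(s[:i]) * parses(s[i + 1:])
    return count

assert parses("a") == 1
assert parses("a+a") == 1
assert parses("a+a+a") == 2   # two parse trees, so the grammar is ambiguous
```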
A common example of ambiguity in computer programming languages is the dangling else problem. In many languages, the else in an If–then(–else) statement is optional, which results in nested conditionals having multiple ways of being recognized in terms of the context-free grammar.
Concretely, in many languages one may write conditionals in two valid forms: the if-then form, and the if-then-else form – in effect, making the else clause optional.
In a grammar containing the rules[a]
Statement → if Condition then Statement
Statement → if Condition then Statement else Statement
some ambiguous phrase structures can appear. The expression
if a then if b then s else s2
can be parsed as either
if a then (if b then s) else s2
or as
if a then (if b then s else s2)
depending on whether the else is associated with the first if or the second if.
This is resolved in various ways in different languages. Sometimes the grammar is modified so that it is unambiguous, such as by requiring an endif statement or making else mandatory. In other cases the grammar is left ambiguous, but the ambiguity is resolved by making the overall phrase grammar context-sensitive, such as by associating an else with the nearest if. In this latter case the overall grammar is unambiguous, but the underlying context-free grammar is ambiguous.[clarification needed]
The existence of multiple derivations of the same string does not suffice to indicate that the grammar is ambiguous; only multiple leftmost derivations (or, equivalently, multiple parse trees) indicate ambiguity.
For example, the simple grammar
S → A + A
A → 0 | 1
is an unambiguous grammar for the language { 0+0, 0+1, 1+0, 1+1 }. While each of these four strings has only one leftmost derivation, it has two different derivations, for example
S → A + A → 0 + A → 0 + 0
and
S → A + A → A + 0 → 0 + 0
Only the former derivation is a leftmost one.
The decision problem of whether an arbitrary grammar is ambiguous is undecidable because it can be shown that it is equivalent to the Post correspondence problem.[5] There are, however, tools implementing a semi-decision procedure for detecting ambiguity of context-free grammars.[6]
The efficiency of parsing a context-free grammar is determined by the automaton that accepts it. Deterministic context-free grammars are accepted by deterministic pushdown automata and can be parsed in linear time, for example by an LR parser.[7] They are a strict subset of the context-free grammars, which are accepted by pushdown automata and can be parsed in polynomial time, for example by the CYK algorithm.
Unambiguous context-free grammars can be nondeterministic. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the unambiguous context-free grammar S → 0S0 | 1S1 | ε. An arbitrary string of this language cannot be parsed without reading all its symbols first, which means that a pushdown automaton has to try alternative state transitions to accommodate the different possible lengths of a semi-parsed string.[8]
Nevertheless, removing grammar ambiguity may produce a deterministic context-free grammar and thus allow for more efficient parsing. Compiler generators such as YACC include features for resolving some kinds of ambiguity, such as precedence and associativity constraints.
While some context-free languages (the set of strings that can be generated by a grammar) have both ambiguous and unambiguous grammars, there exist context-free languages for which no unambiguous context-free grammar exists. Such languages are called inherently ambiguous.
There are no inherently ambiguous regular languages.[9][10]
The existence of inherently ambiguous context-free languages was proven with Parikh's theorem in 1961 by Rohit Parikh in an MIT research report.[11]
The language { x ∣ x = a^n b^m a^n′ b^m or x = a^n b^m a^n b^m′, where n, n′, m, m′ ≥ 1 } is inherently ambiguous.[12]
Ogden's lemma[13] can be used to prove that certain context-free languages, such as { a^n b^m c^m ∣ m, n ≥ 1 } ∪ { a^m b^m c^n ∣ m, n ≥ 1 }, are inherently ambiguous. See Ogden's lemma § Inherent ambiguity for a proof.
The union of { a^n b^n c^m d^m ∣ n, m > 0 } with { a^n b^m c^m d^n ∣ n, m > 0 } is inherently ambiguous. This set is context-free, since the union of two context-free languages is always context-free. But Hopcroft & Ullman (1979) give a proof that no context-free grammar for this union language can unambiguously parse strings of the form a^n b^n c^n d^n (n > 0).[14]
More examples, and a general review of techniques for proving inherent ambiguity of context-free languages, are given by Bassino and Nicaud (2011).[15] | https://en.wikipedia.org/wiki/Ambiguous_grammar
Harmonic grammar is a linguistic model proposed by Geraldine Legendre, Yoshiro Miyata, and Paul Smolensky in 1990. It is a connectionist approach to modeling linguistic well-formedness. During the late 2000s and early 2010s, the term 'harmonic grammar' has been used to refer more generally to models of language that use weighted constraints, including ones that are not explicitly connectionist – see e.g. Pater (2009) and Potts et al. (2010). | https://en.wikipedia.org/wiki/Harmonic_grammar
Higher order grammar (HOG) is a grammar theory based on higher-order logic.[1][2] It can be viewed simultaneously as generative-enumerative (like categorial grammar and principles and parameters) or model theoretic (like head-driven phrase structure grammar or lexical functional grammar). | https://en.wikipedia.org/wiki/Higher_order_grammar
In applied linguistics, an error is an unintended deviation from the immanent rules of a language variety made by a second language learner. Such errors result from the learner's lack of knowledge of the correct rules of the target language variety.[1] A significant distinction is generally made[by whom?] between errors (systematic deviations) and mistakes (speech performance errors), which are not treated the same from a linguistic viewpoint. The study of learners' errors has been the main area of investigation by linguists in the history of second-language acquisition research.[2]
In prescriptivist contexts, the terms "error" and "mistake" are also used to describe usages that are considered non-standard or otherwise discouraged normatively.[3] Such usages, however, would not be considered true errors by the majority of linguistic scholars.[4][5] Modern linguistics generally does not make such judgments about regularly occurring native speech, rejecting the idea of linguistic correctness as scientifically untenable,[6] or at least approaching the concept of correct usage in relative terms.[7] Social perceptions and value claims about different speech varieties, although common socially, are not normally supported by linguistics.[8]
H. Douglas Brown defines linguistic errors as "a noticeable deviation from the adult grammar of a native speaker, reflecting the interlanguage competence of the learner." He cites the example Does John can sing?, where the preposed do auxiliary verb is used in error.[9]
In linguistics, it is considered important to distinguish errors from mistakes. A distinction is always made between errors and mistakes, where the former is defined as resulting from a learner's lack of proper grammatical knowledge, whilst the latter as a failure to use a known system correctly.[9] Brown terms these mistakes performance errors. Mistakes of this kind are frequently made by both native speakers and second language learners. However, native speakers are generally able to correct themselves quickly. Such mistakes include slips of the tongue and random ungrammatical formations. On the other hand, errors are systematic in that they occur repeatedly and are not recognizable by the learner. They are a part of the learner's interlanguage, and the learner does not generally consider them as errors. They are errors only from the perspective of teachers and others who are aware that the learner has deviated from a grammatical norm.[10] That is, mistakes (performance errors) can be self-corrected with or without being pointed out to the speaker, but systematic errors cannot be self-corrected.[11]
S. Pit Corder was probably the first to point out and discuss the importance of the errors learners make in the course of learning a second language. Soon after, the study and analysis of learners' errors took a prominent place in applied linguistics. Brown suggests that the process of second language learning is not very different from learning a first language, and the feedback an L2 learner gets upon making errors benefits them in developing the L2 knowledge.[9] | https://en.wikipedia.org/wiki/Error_(linguistics)
Model-theoretic grammars, also known as constraint-based grammars, contrast with generative grammars in the way they define sets of sentences: they state constraints on syntactic structure rather than providing operations for generating syntactic objects.[1] A generative grammar provides a set of operations such as rewriting, insertion, deletion, movement, or combination, and is interpreted as a definition of the set of all and only the objects that these operations are capable of producing through iterative application. A model-theoretic grammar simply states a set of conditions that an object must meet, and can be regarded as defining the set of all and only the structures of a certain sort that satisfy all of the constraints.[2] The approach applies the mathematical techniques of model theory to the task of syntactic description: a grammar is a theory in the logician's sense (a consistent set of statements) and the well-formed structures are the models that satisfy the theory.
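The generate-versus-constrain contrast can be made concrete with a toy formal language; a minimal Python sketch (entirely illustrative: the language { aⁿbⁿ }, the rule set, and the function names are invented for this example and are not drawn from any of the grammars discussed here):

```python
# Generative style: enumerate strings by iterating the rewrite rule
# S -> aSb | ab, keeping the fully rewritten (terminal-only) strings.
def generate(depth):
    out, frontier = set(), {"S"}
    for _ in range(depth):
        frontier = {s.replace("S", r, 1) for s in frontier for r in ("aSb", "ab")}
        out |= {s for s in frontier if "S" not in s}
    return out

# Model-theoretic style: state the constraints a well-formed string must meet.
def well_formed(s):
    n = len(s) // 2
    return len(s) % 2 == 0 and n > 0 and s == "a" * n + "b" * n

assert all(well_formed(s) for s in generate(5))
```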
David E. Johnson and Paul M. Postal introduced the idea of model-theoretic syntax in their 1980 book Arc Pair Grammar.[3]
The following is a sample of grammars falling under the model-theoretic umbrella:
One benefit of model-theoretic grammars over generative grammars is that they allow for gradience in grammaticality. A structure may deviate only slightly from a theory or it may be highly deviant. Generative grammars, in contrast, "entail a sharp boundary between the perfect and the nonexistent, and do not even permit gradience in ungrammaticality to be represented."[7] | https://en.wikipedia.org/wiki/Model-theoretic_grammar
Paragrammatism is the confused or incomplete use of grammatical structures, found in certain forms of speech disturbance.[1] Paragrammatism is the inability to form grammatically correct sentences. It is characteristic of fluent aphasia, most commonly receptive aphasia. Paragrammatism is sometimes called "extended paraphasia," although it is different from paraphasia. Paragrammatism is roughly synonymous with "word salad," which concerns the semantic coherence of speech rather than its production.
Huber assumes a disturbance of the sequential organization of sentences as the cause of the syntactic errors (1981:3). Most students and practitioners regard paragrammatism as the morphosyntactic "leitsymptom" of Wernicke's aphasia.[citation needed]
However, ever since the introduction of the term paragrammatism, some students have pointed out that paragrammatic and agrammatic phenomena, which in classical theory form part of Broca's aphasia, may co-occur in the same patient.[2]
Since Kleist introduced the term in 1916,[3] paragrammatism has denoted a disordered mode of expression that is characterized by confused and erroneous word order, syntactic structure or grammatical morphology (Schlenck 1991:199f).[2]
Most researchers suppose that the faulty syntactic structure (sentence blends, contaminations, break-offs) results from a disturbance of the syntactic plan of the utterance (de Bleser/Bayer 1993:160f).
In non-fluent aphasia, oral expression is often agrammatic, i.e. grammatically incomplete or incorrect. By contrast, expression in fluent aphasia usually appears grammatical, albeit with disruptions in content. Despite this persistent impression, errors of sentence structure and morphology do occur in fluent aphasia, although they take the form of substitutions rather than omissions.[4] | https://en.wikipedia.org/wiki/Paragrammatism |
A speech error, commonly referred to as a slip of the tongue[1] (Latin: lapsus linguae, or occasionally self-demonstratingly, lipsus languae) or misspeaking, is a deviation (conscious or unconscious) from the apparently intended form of an utterance.[2] They can be subdivided into spontaneously and inadvertently produced speech errors and intentionally produced word-plays or puns. Another distinction can be drawn between production and comprehension errors. Errors in speech production and perception are also called performance errors.[3] Some examples of speech error include sound exchange or sound anticipation errors. In sound exchange errors, the order of two individual morphemes is reversed, while in sound anticipation errors a sound from a later syllable replaces one from an earlier syllable.[4] Slips of the tongue are a normal and common occurrence. One study shows that most people make up to 22 slips of the tongue per day.[5]
Speech errors are common among children, who have yet to refine their speech, and can frequently continue into adulthood. When errors continue past the age of 9 they are referred to as "residual speech errors" or RSEs.[6] They sometimes lead to embarrassment and betrayal of the speaker's regional or ethnic origins. However, it is also common for them to enter the popular culture as a kind of linguistic "flavoring". Speech errors may be used intentionally for humorous effect, as with spoonerisms.
Within the field of psycholinguistics, speech errors fall under the category of language production. Types of speech errors include: exchange errors, perseveration, anticipation, shift, substitution, blends, additions, and deletions. The study of speech errors has contributed to the establishment/refinement of models of speech production since Victoria Fromkin's pioneering work on this topic.[7]
Speech errors are made on an occasional basis by all speakers.[1] They occur more often when speakers are nervous, tired, anxious or intoxicated.[1] During live broadcasts on TV or on the radio, for example, nonprofessional speakers and even hosts often make speech errors because they are under stress.[1] Some speakers seem to be more prone to speech errors than others. For example, there is a certain connection between stuttering and speech errors.[8] Charles F. Hockett explains that "whenever a speaker feels some anxiety about possible lapse, he will be led to focus attention more than normally on what he has just said and on what he is just about to say. These are ideal breeding grounds for stuttering."[8] Another example of a "chronic sufferer" is Reverend William Archibald Spooner, whose peculiar speech may be caused by a cerebral dysfunction, but there is much evidence that he invented his famous speech errors (spoonerisms).[1]
An explanation for the occurrence of speech errors comes from psychoanalysis, in the so-called Freudian slip. Sigmund Freud assumed that speech errors are the result of an intrapsychic conflict of concurrent intentions.[1] "Virtually all speech errors [are] caused by the intrusion of repressed ideas from the unconscious into one's conscious speech output", Freud explained.[1] In fact, his hypothesis explains only a minority of speech errors.[1]
There are few speech errors that clearly fall into only one category. The majority of speech errors can be interpreted in different ways and thus fall into more than one category.[9] For this reason, percentage figures for the different kinds of speech errors may be of limited accuracy.[10] Moreover, the study of speech errors gave rise to different terminologies and different ways of classifying speech errors. Here is a collection of the main types:
Speech errors can affect different kinds of segments or linguistic units:
Speech production is a highly complex and extremely rapid process, and thus research into the involved mental mechanisms proves to be difficult.[10] Investigating the audible output of the speech production system is a way to understand these mental mechanisms. According to Gary S. Dell, "the inner workings of a highly complex system are often revealed by the way in which the system breaks down".[10] Therefore, speech errors are of an explanatory value with regard to the nature of language and language production.[12]
Performance errors may provide the linguist with empirical evidence for linguistic theories and serve to test hypotheses about language and speech production models.[13] For that reason, the study of speech errors is significant for the construction of performance models and gives insight into language mechanisms.[13]
An example of the information that can be obtained is the use of "um" or "uh" in a conversation.[15] These might be meaningful words that tell different things, one of which is to hold a place in the conversation so as not to be interrupted. There seems to be a hesitant stage and a fluent stage that suggest speech has different levels of production. The pauses seem to occur between sentences, conjunctional points and before the first content word in a sentence. That suggests that a large part of speech production happens there.
Schachter et al. (1991) conducted an experiment to examine if the numbers of word choices affect pausing. They sat in on the lectures of 47 undergraduate professors from 10 different departments and calculated the number and times of filled pauses and unfilled pauses. They found significantly more pauses in the humanities departments as opposed to the natural sciences.[16] These findings suggest that the greater the number of word choices, the more frequent are the pauses, and hence the pauses serve to allow us time to choose our words.
Slips of the tongue are another form of "errors" that can help us understand the process of speech production better. Slips can occur at various levels: syntactic, phrasal, lexical-semantic, morphological, and phonological. They can take multiple forms, such as additions, substitutions, deletions, exchanges, anticipations, perseverations, shifts, and haplologies (M. F. Garrett, 1975).[17] Slips are orderly because language production is orderly.
There are some biases shown through slips of the tongue. One kind is a lexical bias, which shows that the slips people generate are more often actual words than random sound strings. Baars, Motley and Mackay (1975) found that people more often turned two actual words into two other actual words than into non-words.[14] This suggests that lexemes might overlap somewhat or be stored similarly.
A second kind is a semantic bias which shows a tendency for sound bias to create words that are semantically related to other words in the linguistic environment. Motley and Baars (1976) found that a word pair like "get one" will more likely slip to "wet gun" if the pair before it is "damp rifle". These results suggest that we are sensitive to how things are laid out semantically.[18]
Since the 1980s, the word misspeaking has been used increasingly in politics to imply that errors made by a speaker are accidental and should not be construed as a deliberate attempt to misrepresent the facts of a case. As such, its usage has attracted a degree of media coverage, particularly from critics who feel that the term is overly approbative in cases where either ignorance of the facts or intent to misrepresent should not be discarded as possibilities.[19][20]
The word was used by a White House spokesman after George W. Bush seemed to say that his government was always "thinking about new ways to harm our country and our people" (a classic example of a Bushism), and more famously by then American presidential candidate Hillary Clinton, who recalled landing at the US military outpost of Tuzla "under sniper fire" (in fact, video footage demonstrates that there were no such problems on her arrival).[20][21] Other users of the term include American politician Richard Blumenthal, who incorrectly stated on a number of occasions that he had served in Vietnam during the Vietnam War.[20] | https://en.wikipedia.org/wiki/Speech_error
A tagmeme is the smallest functional element in the grammatical structure of a language. The term was introduced in the 1930s by the linguist Leonard Bloomfield, who defined it as the smallest meaningful unit of grammatical form (analogous to the morpheme, defined as the smallest meaningful unit of lexical form). The term was later adopted, and its meaning broadened, by Kenneth Pike and others beginning in the 1950s, as the basis for their tagmemics.
According to the scheme set out by Leonard Bloomfield in his book Language (1933), the tagmeme is the smallest meaningful unit of grammatical form.[1] A tagmeme consists of one or more taxemes, where a taxeme is a primitive grammatical feature, in the same way that a phoneme is a primitive phonological feature. Taxemes and phonemes do not as a rule have meaning on their own, but combine into tagmemes and morphemes respectively, which carry meaning.
For example, an utterance such as "John runs" is a concrete example of a tagmeme (an allotagm) whose meaning is that an actor performs an action. The taxemes making up this tagmeme include the selection of a nominative expression, the selection of a finite verb expression, and the ordering of the two such that the nominative expression precedes the finite verb expression.
Bloomfield makes the taxeme and tagmeme part of a system of emic units:[2]
More generally, he defines any meaningful unit of linguistic signaling (not necessarily smallest) as a linguistic form, and its meaning as a linguistic meaning; it may be either a lexical form (with a lexical meaning) or a grammatical form (with a grammatical meaning).
Bloomfield's term was adopted by Kenneth Pike and others to denote what they had previously been calling the grammeme (earlier grameme).[3] In Pike's approach, consequently called tagmemics, the hierarchical organization of levels (e.g. in syntax: word, phrase, sentence, paragraph, discourse) results from the fact that the elements of a tagmeme on a higher level (e.g. 'sentence') are analyzed as syntagmemes on the next lower level (e.g. 'phrase').
The tagmeme is the correlation of a syntagmatic function (e.g. subject, object) and paradigmatic fillers (e.g. nouns, pronouns or proper nouns as possible fillers of the subject position). Tagmemes combine to form a syntagmeme, a syntactic construction consisting of a sequence of tagmemes.
Tagmemics as a linguistic methodology was developed by Pike in his book Language in Relation to a Unified Theory of the Structure of Human Behavior, 3 vol. (1954–1960). It was primarily designed to assist linguists to efficiently extract coherent descriptions out of corpora of fieldwork data. Tagmemics is particularly associated with the early work of the Summer Institute of Linguistics, an association of missionary linguists devoted largely to Bible translations, of which Pike was an early member.
Tagmemics makes the kind of distinction made between phone and phoneme in phonology and phonetics at higher levels of linguistic analysis (grammatical and semantic); for instance, contextually conditioned synonyms are considered different instances of a single tagmeme, as sounds which are (in a given language) contextually conditioned are allophones of a single phoneme. The emic and etic distinction also applies in other social sciences. | https://en.wikipedia.org/wiki/Tagmeme
The usage of a language is the ways in which its written and spoken variations are routinely employed by its speakers; that is, it refers to "the collective habits of a language's native speakers",[1] as opposed to idealized models of how a language works (or should work) in the abstract. For instance, Fowler characterized usage as "the way in which a word or phrase is normally and correctly used" and as the "points of grammar, syntax, style, and the choice of words."[2] In everyday usage, language is used differently, depending on the situation and individual.[3] Individual language users can shape language structures and language usage based on their community.[4]
In the descriptive tradition of language analysis, by way of contrast, "correct" tends to mean functionally adequate for the purposes of the speaker or writer using it, and adequately idiomatic to be accepted by the listener or reader; usage is also, however, a concern for the prescriptive tradition, for which "correctness" is a matter of arbitrating style.[5][6]
Common usage may be used as one of the criteria of laying out prescriptive norms for codified standard language usage.[7]
Everyday language users, including editors and writers, look at dictionaries, style guides, usage guides, and other published authoritative works to help inform their language decisions. This takes place because of the perception that Standard English is determined by language authorities.[8] For many language users, the dictionary is the source of correct language use, as far as accurate vocabulary and spelling go.[9] Modern dictionaries are not generally prescriptive, but they often include "usage notes" which may describe words as "formal", "informal", "slang", and so on.[10] "Despite occasional usage notes, lexicographers generally disclaim any intent to guide writers and editors on the thorny points of English usage."[1]
According to Jeremy Butterfield, "The first person we know of who made usage refer to language was Daniel Defoe, at the end of the seventeenth century". Defoe proposed the creation of a language society of 36 individuals who would set prescriptive language rules for the approximately six million English speakers.[5]
The Latin equivalent usus was a crucial term in the research of Danish linguists Otto Jespersen and Louis Hjelmslev.[11] They used the term to designate usage that has widespread or significant acceptance among speakers of a language, regardless of its conformity to the sanctioned standard language norms.[12] | https://en.wikipedia.org/wiki/Usage_(language)
In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not entirely formally correct, but which might help simplify the exposition or suggest the correct intuition (while possibly minimizing errors and confusion at the same time). However, since the concept of formal/syntactical correctness depends on both time and context, certain notations in mathematics that are flagged as abuse in one context could be formally correct in one or more other contexts. Time-dependent abuses of notation may occur when novel notations are introduced to a theory some time before the theory is first formalized; these may be formally corrected by solidifying and/or otherwise improving the theory. Abuse of notation should be contrasted with misuse of notation, which does not have the presentational benefits of the former and should be avoided (such as the misuse of constants of integration[1]).
A related concept is abuse of language or abuse of terminology, where a term, rather than a notation, is misused. Abuse of language is an almost synonymous expression for abuses that are non-notational by nature. For example, while the word representation properly designates a group homomorphism from a group G to GL(V), where V is a vector space, it is common to call V "a representation of G". Another common abuse of language consists in identifying two mathematical objects that are different, but canonically isomorphic.[2] Other examples include identifying a constant function with its value, identifying a group with a binary operation with the name of its underlying set, or identifying the Euclidean space of dimension three, equipped with a Cartesian coordinate system, with ℝ³.[3]
Many mathematical objects consist of a set, often called the underlying set, equipped with some additional structure, such as a mathematical operation or a topology. It is a common abuse of notation to use the same notation for the underlying set and the structured object (a phenomenon known as suppression of parameters[3]). For example, ℤ may denote the set of the integers, the group of integers together with addition, or the ring of integers with addition and multiplication. In general, there is no problem with this if the object under reference is well understood, and avoiding such an abuse of notation might even make mathematical texts more pedantic and more difficult to read. When this abuse of notation may be confusing, one may distinguish between these structures by denoting (ℤ, +) the group of integers with addition, and (ℤ, +, ⋅) the ring of integers.
Similarly, a topological space consists of a set X (the underlying set) and a topology T, which is characterized by a set of subsets of X (the open sets). Most frequently, one considers only one topology on X, so there is usually no problem in referring to X as both the underlying set and the pair consisting of X and its topology T, even though they are technically distinct mathematical objects. Nevertheless, it could occur on some occasions that two different topologies are considered simultaneously on the same set. In that case, one must exercise care and use notation such as (X, T) and (X, T′) to distinguish between the different topological spaces.
One may encounter, in many textbooks, sentences such as "Let f(x) be a function ...". This is an abuse of notation, as the name of the function is f, and f(x) denotes the value of f for the element x of its domain. More precisely correct phrasings include "Let f be a function of the variable x ..." or "Let x ↦ f(x) be a function ..." This abuse of notation is widely used, as it simplifies the formulation, and the systematic use of a correct notation quickly becomes pedantic.
A similar abuse of notation occurs in sentences such as "Let us consider the function x² + x + 1 ...", when in fact x² + x + 1 is a polynomial expression, not a function per se. The function that associates x² + x + 1 to x can be denoted x ↦ x² + x + 1. Nevertheless, this abuse of notation is widely used, since it is more concise but generally not confusing.
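Programming languages make the same distinction explicit; a minimal Python sketch of the point:

```python
# f is the function x |-> x^2 + x + 1; f(2) is a value of f, not the function.
f = lambda x: x**2 + x + 1

y = f(2)                     # y is the number 7
assert y == 7
assert callable(f)           # f names the function itself
assert not callable(y)       # f(2) is just an element of the codomain
```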
Many mathematical structures are defined through a characterizing property (often a universal property). Once this desired property is defined, there may be various ways to construct the structure, and the corresponding results are formally different objects, but which have exactly the same properties (i.e., are isomorphic). As there is no way to distinguish these isomorphic objects through their properties, it is standard to consider them as equal, even if this is formally wrong.[2]
One example of this is the Cartesian product, which is often seen as associative:
(E × F) × G = E × (F × G).
But this is strictly speaking not true: if x ∈ E, y ∈ F and z ∈ G, the identity ((x, y), z) = (x, (y, z)) would imply that (x, y) = x and z = (y, z), and so ((x, y), z) = (x, y, z) would mean nothing. However, these equalities can be legitimized and made rigorous in category theory, using the idea of a natural isomorphism.
Another example of similar abuses occurs in statements such as "there are two non-Abelian groups of order 8", which more strictly stated means "there are two isomorphism classes of non-Abelian groups of order 8".
Referring to an equivalence class of an equivalence relation by x instead of [x] is an abuse of notation. Formally, if a set X is partitioned by an equivalence relation ~, then for each x ∈ X, the equivalence class { y ∈ X ∣ y ~ x } is denoted [x]. But in practice, if the remainder of the discussion is focused on the equivalence classes rather than the individual elements of the underlying set, then it is common to drop the square brackets in the discussion.
For example, in modular arithmetic, a finite group of order n can be formed by partitioning the integers via the equivalence relation "x ~ y if and only if x ≡ y (mod n)". The elements of that group would then be [0], [1], ..., [n − 1], but in practice they are usually denoted simply as 0, 1, ..., n − 1.
Another example is the space of (classes of) measurable functions over a measure space, or classes of Lebesgue integrable functions, where the equivalence relation is equality "almost everywhere".
The terms "abuse of language" and "abuse of notation" depend on context. Writing "f : A → B" for a partial function from A to B is almost always an abuse of notation, but not in a category theoretic context, where f can be seen as a morphism in the category of sets and partial functions. | https://en.wikipedia.org/wiki/Abuse_of_notation
In linguistics and philosophy,[1] the denotation of a word or expression is its strictly literal meaning. For instance, the English word "warm" denotes the property of having high temperature. Denotation is contrasted with other aspects of meaning including connotation. For instance, the word "warm" may evoke calmness, coziness, or kindness (as in the warmth of someone's personality) but these associations are not part of the word's denotation. Similarly, an expression's denotation is separate from pragmatic inferences it may trigger. For instance, describing something as "warm" often implicates that it is not hot, but this is once again not part of the word's denotation.
Denotation plays a major role in several fields. Within semantics and philosophy of language, denotation is studied as an important aspect of meaning. In mathematics and computer science, assignments of denotations to expressions are a crucial step in defining interpreted formal languages. The main task of formal semantics is to reverse engineer the computational system which assigns denotations to expressions of natural languages.
In natural language semantics, denotations are conceived of as the outputs of the semantic component of the grammar. For example, the denotation of the word "blue" is the property of being blue and the denotation of the word "Barack Obama" is the person who goes by that name. Phrases also have denotations which are computed according to the principle of compositionality. For instance, the verb phrase "passed the class" denotes the property of having passed the class. Depending on one's particular theory of semantics, denotations may be identified either with terms' extensions, intensions, or other structures such as context change potentials.[2][3][4][5]
When uttered in discourse, expressions may convey other associations which are not computed by the grammar and thus are not part of its denotation. For instance, depending on the context, saying "I ran five miles" may convey that you ran exactly five miles and not more. This content is not part of the sentence's denotation but rather pragmatic inferences arrived at by applying social cognition to its denotation.[2]
Linguistic discussion of the differences between denotation, meaning, and reference is rooted in the work of Ferdinand de Saussure, specifically in his theory of semiotics written in the book Course in General Linguistics.[6] Philosophers Gottlob Frege and Bertrand Russell have also made influential contributions to this subject.[7]
Although they have similar meanings, denotation should not be confused with reference.[8] A reference is a specific person, place, or thing that a speaker identifies when using a word.[6] Vocabulary from John Searle's speech act theory can be used to define this relationship.[9] According to this theory, the speaker's action of identifying a person, place, or thing is called referring. The specific person, place, or thing identified by the speaker is called the referent. Reference itself captures the relationship between the referent and the word or phrase used by the speaker. For referring expressions, the denotation of the phrase is most likely the phrase's referent. For content words, the denotation of the word can refer to any object, real or imagined, to which the word could be applied.[2]
In "On Sense and Reference", philosopher Gottlob Frege began the conversation about distinctions between meaning and denotation when he evaluated words like the German words "Morgenstern" and "Abendstern".[6] Author Thomas Herbst uses the words "kid" and "child" to illustrate the same concept. According to Herbst, these two words have the same denotation, as they have the same member set; however, "kid" may be used in an informal speech situation whereas "child" may be used in a more formal speech situation. | https://en.wikipedia.org/wiki/Denotation
In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976.[1]
In his 1947 paper,[2] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is Hn(a, b).
Knuth's up-arrow notation ↑ is another.
For example:
The general definition of the up-arrow notation is as follows (for a ≥ 0, n ≥ 1, b ≥ 0): a ↑^n b = H_{n+2}(a, b) = a[n+2]b. Here, ↑^n stands for n arrows, so for example 2↑↑↑↑3 = 2↑^4 3. The square brackets are another notation for hyperoperations.
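A minimal recursive sketch of this definition (illustrative only; values explode so fast that it is usable only for tiny arguments, and Python's recursion limit constrains it further):

```python
def up(a, n, b):
    # a "up-n" b, following the recursive definition of the up-arrow notation.
    if n == 1:
        return a ** b                        # one arrow is exponentiation
    if b == 0:
        return 1                             # base case for two or more arrows
    return up(a, n - 1, up(a, n, b - 1))     # right-associative expansion

assert up(2, 1, 3) == 8        # 2^3
assert up(2, 2, 3) == 16       # 2 double-arrow 3 = 2^(2^2)
assert up(2, 2, 4) == 65536    # 2 double-arrow 4 = 2^(2^(2^2))
assert up(2, 3, 2) == 4        # 2 triple-arrow 2 = 2 double-arrow 2
```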
The hyperoperations naturally extend the arithmetic operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation:
a + b = a + 1 + 1 + ⋯ + 1 (with b copies of 1).
Multiplication by a natural number is defined as iterated addition:
a × b = a + a + ⋯ + a (with b copies of a).
For example, 4 × 3 = 4 + 4 + 4 = 12.
Exponentiation for a natural power b is defined as iterated multiplication, which Knuth denoted by a single up-arrow:
a ↑ b = a^b = a × a × ⋯ × a (with b copies of a).
For example, 4 ↑ 3 = 4³ = 4 × 4 × 4 = 64.
Tetration is defined as iterated exponentiation, which Knuth denoted by a "double arrow":
a ↑↑ b = a ↑ (a ↑ (⋯ ↑ a)) (with b copies of a).
For example, 4 ↑↑ 3 = 4 ↑ (4 ↑ 4) = 4^(4^4) = 4^256.
Expressions are evaluated from right to left, as the operators are defined to be right-associative.
According to this definition, for example, 3 ↑↑ 3 = 3 ↑ (3 ↑ 3) = 3^(3^3) = 3^27 = 7,625,597,484,987.
This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.
Pentation, defined as iterated tetration, is represented by the "triple arrow":
a ↑↑↑ b = a ↑↑ (a ↑↑ (⋯ ↑↑ a)) (with b copies of a).
Hexation, defined as iterated pentation, is represented by the "quadruple arrow":
a ↑↑↑↑ b = a ↑↑↑ (a ↑↑↑ (⋯ ↑↑↑ a)) (with b copies of a).
and so on. The general rule is that an n-arrow operator expands into a right-associative series of (n − 1)-arrow operators. Symbolically,
a ↑^n b = a ↑^(n−1) (a ↑^(n−1) (⋯ ↑^(n−1) a)) (with b copies of a).
Examples:
3 ↑↑↑ 2 = 3 ↑↑ 3 = 3^(3^3) = 3^27 = 7,625,597,484,987.
In expressions such as a^b, the notation for exponentiation is usually to write the exponent b as a superscript to the base number a. But many environments, such as programming languages and plain-text e-mail, do not support superscript typesetting. People have adopted the linear notation a ↑ b for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.
The superscript notation a^b doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation a ↑ b instead.
a ↑^n b is a shorter alternative notation for n uparrows. Thus a ↑^4 b = a ↑↑↑↑ b.
Attempting to write a ↑↑ b using the familiar superscript notation gives a power tower.
If b is a variable (or is too large), the power tower might be written using dots and a note indicating the height of the tower.
Continuing with this notation, a ↑↑↑ b could be written with a stack of such power towers, each describing the size of the one above it.
Again, if b is a variable or is too large, the stack might be written using dots and a note indicating its height.
Furthermore, a ↑↑↑↑ b might be written using several columns of such stacks of power towers, each column describing the number of power towers in the stack to its left:
And more generally:
This might be carried out indefinitely to represent a ↑^n b as iterated exponentiation of iterated exponentiation for any a, n, and b (although it clearly becomes rather cumbersome).
The Rudy Rucker notation ^b a for tetration allows us to make these diagrams slightly simpler while still employing a geometric representation (we could call these tetration towers).
Finally, as an example, the fourth Ackermann number 4 ↑^4 4 could be represented as:
Some numbers are so large that multiple arrows of Knuth's up-arrow notation become too cumbersome; then an n-arrow operator ↑^n is useful (and also for descriptions with a variable number of arrows), or equivalently, hyper operators.
Some numbers are so large that even that notation is not sufficient. The Conway chained arrow notation can then be used: a chain of three elements is equivalent with the other notations, but a chain of four or more is even more powerful.
Even faster-growing functions can be categorized using an ordinal analysis called the fast-growing hierarchy. The fast-growing hierarchy uses successive function iteration and diagonalization to systematically create faster-growing functions from some base function f(x). For the standard fast-growing hierarchy using f_0(x) = x + 1, f_2(x) already exhibits exponential growth, and f_3(x) is comparable to tetrational growth and is upper-bounded by a function involving the first four hyperoperators. Then, f_ω(x) is comparable to the Ackermann function, f_{ω+1}(x) is already beyond the reach of indexed arrows but can be used to approximate Graham's number, and f_{ω²}(x) is comparable to arbitrarily-long Conway chained arrow notation.
These functions are all computable. Even faster-growing computable functions, such as the Goodstein sequence and the TREE sequence, require the usage of large ordinals and may occur in certain combinatorial and proof-theoretic contexts. There exist functions which grow uncomputably fast, such as the Busy Beaver, whose very nature will be completely out of reach from any up-arrow, or even any ordinal-based analysis.
Without reference to hyperoperation the up-arrow operators can be formally defined by
a ↑^n b = a^b if n = 1;
a ↑^n b = 1 if n ≥ 2 and b = 0;
a ↑^n b = a ↑^(n−1) (a ↑^n (b − 1)) otherwise;
for all integers a, b, n with a ≥ 0, n ≥ 1, b ≥ 0.[nb 1]
This definition uses exponentiation (a ↑^1 b = a ↑ b = a^b) as the base case, and tetration (a ↑^2 b = a ↑↑ b) as repeated exponentiation. This is equivalent to the hyperoperation sequence except it omits the three more basic operations of succession, addition and multiplication.
One can alternatively choose multiplication (a ↑^0 b = a × b) as the base case and iterate from there. Then exponentiation becomes repeated multiplication. The formal definition would be
a ↑^n b = a × b if n = 0;
a ↑^n b = 1 if n ≥ 1 and b = 0;
a ↑^n b = a ↑^(n−1) (a ↑^n (b − 1)) otherwise;
for all integers a, b, n with a ≥ 0, n ≥ 0, b ≥ 0.
Note, however, that Knuth did not define the "nil-arrow" (↑^0). One could extend the notation to negative indices (n ≥ −2) in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing: a ↑^n b = a[n + 2]b.
The up-arrow operation is a right-associative operation, that is, a ↑ b ↑ c is understood to be a ↑ (b ↑ c), instead of (a ↑ b) ↑ c. If ambiguity is not an issue parentheses are sometimes dropped.
Computing 0 ↑^n b = H_{n+2}(0, b) = 0[n+2]b results in
Computing 2 ↑^n b can be restated in terms of an infinite table. We place the numbers 2^b in the top row, and fill the left column with values 2. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
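The table-filling rule is just the recursion value(n, b) = value(n − 1, value(n, b − 1)); a minimal sketch for the table of 2 (function name invented for this illustration):

```python
def table_value(n, b):
    # Row n, column b of the table for 2: row 1 is 2^b, the left column is 2.
    if n == 1:
        return 2 ** b
    if b == 1:
        return 2
    return table_value(n - 1, table_value(n, b - 1))

assert [table_value(1, b) for b in range(1, 5)] == [2, 4, 8, 16]
assert [table_value(2, b) for b in range(1, 5)] == [2, 4, 16, 65536]
assert table_value(3, 3) == 65536   # 2 triple-arrow 3 = 2 double-arrow 4
```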
The table is the same as that of the Ackermann function, except for a shift in $n$ and $b$, and the addition of 3 to all values.
The analogous tables for the bases 3, 4 and 10 are built in the same way: we place the numbers $3^{b}$ (respectively $4^{b}$ and $10^{b}$) in the top row and fill the left column with the value 3 (respectively 4 and 10); as before, each entry is determined by taking the number immediately to the left and looking up the position given by that number in the previous row.
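The following sketch reconstructs the small printable corner of such a table for any base, reusing the recursive definition from above; the helper names are our own, and every cell beyond the ones shown is astronomically large.

```python
# Row n, column b of the table holds base ↑^n b; the lookup rule in the
# text is the recurrence base↑^n b = base↑^(n-1) (base↑^n (b-1)).
def arrow(a, n, b):
    if n == 1:
        return a ** b
    return 1 if b == 0 else arrow(a, n - 1, arrow(a, n, b - 1))

def table_corner(base, shape):
    for n, max_b in shape:
        print(f"n={n}:", [arrow(base, n, b) for b in range(1, max_b + 1)])

table_corner(2, [(1, 5), (2, 4), (3, 3)])
# n=1: [2, 4, 8, 16, 32]
# n=2: [2, 4, 16, 65536]
# n=3: [2, 4, 65536]
```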
For 2 ≤ b ≤ 9 the numerical order of the numbers $10 \uparrow^{n} b$ is the lexicographical order with n as the most significant number, so for the numbers of these 8 columns the numerical order is simply line-by-line. The same applies to the numbers in the 97 columns with 3 ≤ b ≤ 99, and, if we start from n = 1, even for 3 ≤ b ≤ 9,999,999,999. | https://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation
The language of mathematics or mathematical language is an extension of the natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc.) with concision, precision and unambiguity.
The main features of the mathematical language are the following.
The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example, the sentence "a free module is a module that has a basis" is perfectly correct, although it appears to be nothing more than grammatically correct nonsense to a reader who does not know the definitions of basis, module, and free module.
H. B. Williams, an electrophysiologist, wrote in 1927:
Now mathematics is both a body of truth and a special language, a language more carefully defined and more highly abstracted than our ordinary medium of thought and expression. Also it differs from ordinary languages in this important particular: it is subject to rules of manipulation. Once a statement is cast into mathematical form it may be manipulated in accordance with these rules and every configuration of the symbols will represent facts in harmony with and dependent on those contained in the original statement. Now this comes very close to what we conceive the action of the brain structures to be in performing intellectual acts with the symbols of ordinary language. In a sense, therefore, the mathematician has been able to perfect a device through which a part of the labor of logical thought is carried on outside the central nervous system with only that supervision which is requisite to manipulate the symbols in accordance with the rules.[1]: 291 | https://en.wikipedia.org/wiki/Language_of_mathematics
Modern Arabic mathematical notation is a mathematical notation based on the Arabic script, used especially at pre-university levels of education. Its form is mostly derived from Western notation, but it has some notable features that set it apart from its Western counterpart. The most remarkable of those features is that it is written from right to left, following the normal direction of the Arabic script. Other differences include the replacement of Greek and Latin alphabet letters with Arabic letters as symbols, and the use of Arabic names for functions and relations.
Notation differs slightly from one region to another. In tertiary education, most regions use the Western notation. The notation mainly differs in the numeral system used and in the mathematical symbols used.
There are three numeral systems used in right-to-left mathematical notation.
Written numerals are arranged with their lowest-value digit to the right, with higher-value positions added to the left. That is identical to the arrangement used by Western texts using Hindu-Arabic numerals, even though Arabic script is read from right to left: indeed, Western texts are written with the ones digit on the right because, when the arithmetical manuals were translated from the Arabic, the numerals were treated as figures (as in a Euclidean diagram) and so were not flipped to match the left-to-right order of Latin text.[1] The symbols "٫" and "٬" may be used as the decimal mark and the thousands separator respectively when writing with Eastern Arabic numerals, e.g. ٣٫١٤١٥٩٢٦٥٣٥٨ (3.14159265358), ١٬٠٠٠٬٠٠٠٬٠٠٠ (1,000,000,000). Negative signs are written to the left of magnitudes, e.g. ٣− (−3). In-line fractions are written with the numerator and denominator on the left and right of the fraction slash respectively, e.g. ٢/٧ (2/7).[citation needed]
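Since the digit mapping is a straight character substitution, it can be sketched in a few lines of Python; the helper name `to_eastern_arabic` is our own, and the examples reproduce the two numbers quoted above.

```python
# Map Western digits, decimal point, and thousands comma onto the
# Eastern Arabic digits ٠-٩, decimal mark ٫, and separator ٬.
EASTERN = str.maketrans("0123456789.,", "٠١٢٣٤٥٦٧٨٩٫٬")

def to_eastern_arabic(number_string: str) -> str:
    return number_string.translate(EASTERN)

print(to_eastern_arabic("3.14159265358"))   # ٣٫١٤١٥٩٢٦٥٣٥٨
print(to_eastern_arabic("1,000,000,000"))   # ١٬٠٠٠٬٠٠٠٬٠٠٠
```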
Sometimes, symbols used in Arabic mathematical notation differ according to the region:
Sometimes, mirrored Latin and Greek symbols are used in Arabic mathematical notation (especially in western Arabic regions):
In Iran, however, Latin and Greek symbols are usually used.
The letter ز (zayn, from the first letter of the second word of دالة زائدية, "hyperbolic function") is added to the end of trigonometric functions to express hyperbolic functions. This is similar to the way $\operatorname{h}$ is added to the end of trigonometric functions in Latin-based notation.
For inverse trigonometric functions, the superscript −١ in Arabic notation is similar in usage to the superscript $-1$ in Latin-based notation. | https://en.wikipedia.org/wiki/Modern_Arabic_mathematical_notation
Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols.
The α-level upper critical value of a probability distribution is the value exceeded with probability $\alpha$, that is, the value $x_{\alpha}$ such that $F(x_{\alpha}) = 1 - \alpha$, where $F$ is the cumulative distribution function. There are standard notations for the upper critical values of some commonly used distributions in statistics:
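For concreteness, upper critical values can be computed numerically as inverse-CDF evaluations, for example with SciPy's percent-point function; the α level and degrees of freedom below are illustrative choices, not values fixed by the text.

```python
# Upper critical value x_α satisfies F(x_α) = 1 − α, so it is the
# (1 − α)-quantile of the distribution, i.e. the inverse CDF at 1 − α.
from scipy import stats

alpha = 0.05
z_alpha = stats.norm.ppf(1 - alpha)            # z_0.05 ≈ 1.645
chi2_alpha = stats.chi2.ppf(1 - alpha, df=10)  # χ²_0.05(10) ≈ 18.307
print(z_alpha, chi2_alpha)
```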
Common abbreviations include: | https://en.wikipedia.org/wiki/Notation_in_probability_and_statistics |
In semantics, mathematical logic and related disciplines, the principle of compositionality is the principle that the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. The principle is also called Frege's principle, because Gottlob Frege is widely credited with its first modern formulation. However, the principle was never explicitly stated by Frege,[1] and arguably it was already assumed by George Boole[2] decades before Frege's work.
The principle of compositionality (also known as semantic compositionalism) is highly debated in linguistics. Among its most challenging problems are the issues of contextuality, the non-compositionality of idiomatic expressions, and the non-compositionality of quotations.[3]
Discussion of compositionality started to appear at the beginning of the 19th century, during which it was debated whether what was most fundamental in language was compositionality or contextuality, and compositionality was usually preferred.[4] Gottlob Frege never adhered to the principle of compositionality as it is known today (Frege endorsed the context principle instead), and the first to explicitly formulate it was Rudolf Carnap in 1947.[4]
A common formulation[4] of the principle of compositionality comes from Barbara Partee, stating: "The meaning of a compound expression is a function of the meanings of its parts and of the way they are syntactically combined."[5]
It is possible to distinguish different levels of compositionality. Strong compositionality refers to compound expressions whose meaning is determined by the meanings of their immediate parts together with a top-level syntactic function that describes their combination. Weak compositionality refers to compound expressions whose meaning is determined by the meanings of their parts together with their complete syntactic combination.[6][7] However, there can also be further gradations between these two extremes, obtained by allowing not only the meanings of the immediate parts but also the meanings of the second-highest parts (third-highest parts, fourth-highest parts, etc.), together with functions that describe their respective combinations.[7]
On the sentence level, the principle claims that what remains if one removes the lexical parts of a meaningful sentence are the rules of composition. The sentence "Socrates was a man", for example, becomes "S was a M" once the meaningful lexical items—"Socrates" and "man"—are taken away. The task of finding the rules of composition then becomes a matter of describing what the connection between S and M is.
Among the most prominent linguistic problems that challenge the principle of compositionality are the issues of contextuality, the non-compositionality of idiomatic expressions, and the non-compositionality of quotations.[3]
It is frequently taken to mean that every operation of the syntax should be associated with an operation of the semantics that acts on the meanings of the constituents combined by the syntactic operation. As a guideline for constructing semantic theories, this is generally taken, as in the influential work on the philosophy of language by Donald Davidson, to mean that every construct of the syntax should be associated by a clause of the T-schema with an operator in the semantics that specifies how the meaning of the whole expression is built from constituents combined by the syntactic rule. In some general mathematical theories (especially those in the tradition of Montague grammar), this guideline is taken to mean that the interpretation of a language is essentially given by a homomorphism between an algebra of syntactic representations and an algebra of semantic objects.
The principle of compositionality also exists in a similar form in the compositionality of programming languages.
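The analogy with programming languages can be made concrete with a toy denotational example: the "meaning" (here, an integer value) of a compound expression is computed solely from the meanings of its immediate parts plus the rule combining them. The expression types and evaluator below are our own minimal constructions, not a standard formalism.

```python
# A toy compositional semantics: meaning(whole) is a function of
# meaning(parts) and the combining rule, with no appeal to context.
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Lit, Add]

def meaning(e: Expr) -> int:
    if isinstance(e, Lit):
        return e.value
    # the meaning of the whole depends only on the meanings of the parts
    return meaning(e.left) + meaning(e.right)

print(meaning(Add(Lit(2), Add(Lit(3), Lit(4)))))  # 9
```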
The principle of compositionality has been the subject of intense debate. Indeed, there is no general agreement as to how the principle is to be interpreted, although there have been several attempts to provide formal definitions of it.[8]
Scholars are also divided as to whether the principle should be regarded as a factual claim, open to empirical testing; an analytic truth, obvious from the nature of language and meaning; or a methodological principle to guide the development of theories of syntax and semantics. The principle of compositionality has been attacked in all three spheres, although so far none of the criticisms brought against it have been generally regarded as compelling.[citation needed] Most proponents of the principle, however, make certain exceptions for idiomatic expressions in natural language.[8]
The principle of compositionality usually holds when only syntactic factors contribute to the increased complexity of sentence processing, while it becomes more problematic and questionable when the increase in complexity is due to sentence or discourse context, semantic memory, or sensory cues.[9] Among the phenomena problematic for traditional theories of compositionality is that of logical metonymy, which has been studied since at least the mid-1990s by the linguists James Pustejovsky and Ray Jackendoff.[10][11][12] Logical metonymies are sentences like John began the book, where the verb to begin requires (subcategorizes for) an event as its argument, but in a logical metonymy an object (i.e. the book) is found instead, forcing the hearer to interpret the sentence by inferring an implicit event ("reading", "writing", or another prototypical action performed on a book).[10] The problem for compositionality is that the meaning of reading or writing is not present in the words of the sentence: neither in "begin" nor in "book".
Further, in the context of the philosophy of language, the principle of compositionality does not explain all of meaning. For example, you cannot infer sarcasm purely on the basis of words and their composition, yet a phrase used sarcastically means something completely different from the same phrase uttered straightforwardly. Thus, some theorists argue that the principle has to be revised to take into account linguistic and extralinguistic context, which includes the tone of voice used, common ground between the speakers, the intentions of the speaker, and so on.[8] | https://en.wikipedia.org/wiki/Principle_of_compositionality